Share of AI Voice (SAIV) is the percentage of brand citations in AI generated answers that go to your brand across a defined query set: your citations divided by the total brand citations returned for those same queries. It measures how often ChatGPT, Perplexity, Google AI Overviews, Copilot, and Gemini recommend you when buyers ask category questions. If your buyers ask AI before they ask Google, SAIV is the only metric that tells you whether you exist.
I coined this metric because every client I work with at AEO Hunt asks the same question: "How are we doing in AI?" Until now, the honest answer was "we cannot tell you in a number." SEO has rankings. Paid has impressions. AI search had vibes. Share of AI Voice gives marketers a hard percentage. It is the scoreboard for Answer Engine Optimization.
What Is Share of AI Voice?
Share of AI Voice is a single percentage that captures how often AI engines cite your brand against the total brand citations available for the questions your buyers ask. The "voice" in the name is intentional. AI answers are spoken: ChatGPT does not return ten blue links. It returns one synthesized response that names some brands and ignores others. SAIV tells you whether your brand is one of the brands it names.
The metric works the same way share of voice did for traditional advertising, but the channel is different. Instead of measuring paid impressions in television and print, SAIV measures unpaid citations inside large language models. Instead of competing for attention against ten ads in a magazine spread, you are competing for attention against every brand the AI considers when constructing its answer.
If buyers ask ChatGPT before they ask Google, the brands cited inside that ChatGPT answer own the category. Share of AI Voice is the scoreboard.
Why Traditional Share of Voice Broke
Legacy share of voice was built for a world where attention was bought. You spent on television, radio, print, and outdoor, and you measured your spend against the total category spend. The math was simple. The data sources were established. Nielsen, Kantar, and comScore made a business out of selling the answer.
That model assumed buyers consumed advertising at predictable touchpoints. When the buying journey moved to search, the metric adapted. Share of search emerged as the digital era replacement: branded search volume divided by total category search volume, pulled from Google Trends or Search Console. It worked because Google was where buyers went first.
Buyers do not all start at Google anymore. A growing share of category research happens inside ChatGPT, Perplexity, and Copilot. The funnel now begins inside an LLM, and traditional share of voice cannot see inside that box. You can buy every billboard in your category and still get zero citations when ChatGPT answers "what is the best CRM for solo founders." The metric that tracks paid presence cannot measure organic AI recommendation. We covered this shift in detail in how AEO differs from SEO fundamentally, but the short version is this: SAIV tracks the part of the funnel that the old metrics never see.
The SAIV Formula
AEO Hunt developed this formula as a standardized measurement framework. There are two views: per platform and aggregate.
Per Platform SAIV
For each platform you measure, the formula is:
SAIV (platform) = (Your brand citations on platform X) ÷ (Total brand citations on platform X for tracked query set) × 100
Worked example. Acme CRM tracks 30 queries on ChatGPT. Across those 30 queries, ChatGPT names 40 distinct brands a total of 142 times. Acme is named 22 of those 142 times. Acme's ChatGPT SAIV = 22 / 142 = 15.5 percent.
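If you prefer to see that arithmetic as code, here is a minimal Python sketch of the per platform formula using the Acme counts above. The function name is mine, not part of any tool.

```python
def saiv_platform(brand_citations: int, total_citations: int) -> float:
    """Per platform SAIV: your citations over total brand citations, as a percentage."""
    if total_citations == 0:
        return 0.0
    return brand_citations / total_citations * 100

# Acme's ChatGPT SAIV from the worked example: 22 of 142 citations.
print(round(saiv_platform(22, 142), 1))  # 15.5
```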
Aggregate SAIV
Most marketing leaders want a single headline number. The aggregate formula weights each platform by its share of your category's AI search volume:
SAIV (aggregate) = Σ (SAIV_platform × traffic_weight_platform)
Default platform weights as of April 2026 for general B2B and consumer categories: ChatGPT 0.55, Google AI Overviews 0.20, Perplexity 0.15, Gemini 0.07, Copilot 0.03. These weights shift as platform usage shifts. We recalibrate them quarterly. If your category skews enterprise, weight Copilot higher. If your category is technical, weight Perplexity higher.
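Here is a minimal sketch of the weighted sum, assuming the default weights above. The per platform scores plugged in are illustrative, not measured numbers.

```python
# April 2026 default weights from above.
WEIGHTS = {
    "chatgpt": 0.55,
    "google_ai_overviews": 0.20,
    "perplexity": 0.15,
    "gemini": 0.07,
    "copilot": 0.03,
}

# Hypothetical per platform SAIV scores, in percent, for illustration only.
saiv_by_platform = {
    "chatgpt": 15.5,
    "google_ai_overviews": 4.0,
    "perplexity": 22.0,
    "gemini": 10.0,
    "copilot": 0.0,
}

aggregate = sum(saiv_by_platform[p] * w for p, w in WEIGHTS.items())
print(round(aggregate, 1))  # about 13.3 percent
```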
The Four Inputs You Need to Measure SAIV
You cannot calculate SAIV without four inputs in place. Skip any of them and the number is meaningless.
- A locked query set. Twenty to fifty queries that your buyers actually ask AI engines. Mix unbranded category questions ("best CRM for solo founders"), comparison queries ("HubSpot vs Pipedrive"), and problem queries ("how to track my sales pipeline"). Lock the set. If you change the queries between measurement cycles, you have not measured a trend, you have measured noise.
- Platform coverage. Decide which engines you measure. ChatGPT and Google AI Overviews are non negotiable. Perplexity, Copilot, and Gemini are recommended. The platforms you skip are platforms where you have no data on whether you exist.
- Citation extraction. A defined process for parsing brand mentions out of each response. A "citation" includes named mentions, hyperlinked references, and recommended brands. We covered the parsing approach in detail in our guide to tracking where your brand appears in AI responses. A minimal parsing sketch follows this list.
- A competitor set. Three to five named competitors you benchmark against. Without a comparison set, your SAIV is a number with no context. With one, you can answer the only question that matters: are we winning or losing the AI conversation in our category?
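To make the citation extraction input concrete, here is a minimal parsing sketch. It only handles named mentions against a hand maintained alias list; linked references and recommendation phrasing need their own rules, and the brands shown are the same illustrative ones used throughout.

```python
import re

# Hand maintained alias map: canonical brand -> lowercase aliases to match.
BRAND_ALIASES = {
    "Acme CRM": ["acme crm", "acme"],
    "HubSpot": ["hubspot"],
    "Pipedrive": ["pipedrive"],
}

def brands_cited(response_text: str) -> set:
    """Return the set of brands named in a single AI response."""
    lowered = response_text.lower()
    cited = set()
    for brand, aliases in BRAND_ALIASES.items():
        if any(re.search(rf"\b{re.escape(alias)}\b", lowered) for alias in aliases):
            cited.add(brand)
    return cited

print(brands_cited("For solo founders, HubSpot and Acme CRM are both solid picks."))
# {'Acme CRM', 'HubSpot'} (order may vary)
```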
How to Calculate Share of AI Voice Step by Step
Here is the full method. It takes a half day to set up and roughly three to four hours per measurement cycle once the system is running.
- Define your query set. Sit down with a notebook and write the 30 questions your buyers actually ask. Mix branded ("Acme CRM reviews"), unbranded ("best CRM for solo founders"), comparison ("Acme vs HubSpot"), and problem ("how to follow up with leads automatically"). Drop branded queries from the SAIV calculation, but keep them logged as a separate visibility check.
- Pick your platforms. ChatGPT, Perplexity, and Google AI Overviews at minimum. Add Copilot and Gemini if you have the bandwidth.
- Run each query at least three times per platform. LLM outputs vary run to run. A single response is a sample of one. Three runs give you enough of a sample to separate signal from noise. For high stakes queries, run five or seven times.
- Log every brand cited. One row per citation. Columns: query, platform, run number, brand name, citation type (named, linked, recommended), position in response. Use a spreadsheet for under 100 queries. Use a database past that. A calculation sketch built on this log follows these steps.
- Count brand citations across the query set. For each platform, sum your brand's citations and the total brand citations across all runs.
- Apply the SAIV formula. Per platform, then aggregate. Report both.
- Compare against the prior cycle. SAIV is most useful as a trend. The first month's number sets the baseline. The second month's number tells you whether your AEO work is moving the needle.
You cannot measure SAIV without a fixed query set. Change the prompts and you change the metric. Lock 30 questions and run them every month. The discipline of using the same query set matters more than choosing the perfect one.
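Here is a minimal Python sketch of the counting and formula steps, working from a citation log shaped like the columns described above. The field layout and brand names are illustrative.

```python
from collections import Counter

# One row per citation: (query, platform, run_number, brand).
log = [
    ("best CRM for solo founders", "chatgpt", 1, "Acme CRM"),
    ("best CRM for solo founders", "chatgpt", 1, "HubSpot"),
    ("best CRM for solo founders", "chatgpt", 2, "HubSpot"),
    ("how to follow up with leads automatically", "chatgpt", 1, "Pipedrive"),
]

def saiv_per_platform(rows, your_brand):
    """Return {platform: SAIV percent} across all runs in the log."""
    totals = Counter()   # all brand citations per platform
    yours = Counter()    # your brand's citations per platform
    for _query, platform, _run, brand in rows:
        totals[platform] += 1
        if brand == your_brand:
            yours[platform] += 1
    return {p: yours[p] / totals[p] * 100 for p in totals}

print(saiv_per_platform(log, "Acme CRM"))  # {'chatgpt': 25.0}
```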
Per Platform vs Cross Platform SAIV
Per platform SAIV tells you where you are winning and where you are losing. Aggregate SAIV tells the leadership team a single headline number. Most teams need both, reported monthly.
Per platform views matter because the same brand can score 30 percent on ChatGPT, 12 percent on Perplexity, and zero on Google AI Overviews. Each platform pulls from different source mixes. ChatGPT leans on its training data and live retrieval. Perplexity weights real time web sources heavily. Google AI Overviews biases toward sources that already rank in classic Google Search. A brand strong in long form blog content might dominate ChatGPT and miss AI Overviews entirely because the underlying pages do not rank in classic Google.
This is why per platform reporting is the operational view. It tells your team what to fix. If your AI Overviews score is zero, the gap is classic SEO. If your Perplexity score is zero, the gap is fresh, citation worthy content on third party sites. If your ChatGPT score is zero, the gap is entity authority and training data presence.
What Is a Good Share of AI Voice Score?
SAIV benchmarks vary by category maturity. Use these as starting brackets and refine based on your specific competitor set.
| Position | SAIV Range | What It Means |
|---|---|---|
| Category leader | 35 to 60 percent | AI engines name you on a clear majority of category queries. Buyers see your brand recommended before they see anyone else's. |
| Strong challenger | 15 to 35 percent | You are in the conversation but not winning it. AI engines mention you alongside the leaders. Tactical SAIV growth is achievable. |
| Established player | 5 to 15 percent | You appear sometimes, on specific queries. Your AI presence is partial. The path to challenger status runs through entity authority and source coverage. |
| Emerging | 1 to 5 percent | You exist in AI training and retrieval data but barely register on tracked queries. Foundational AEO work, not advanced tactics, is the priority. |
| Invisible | Under 1 percent | Your brand has no material entity signal in AI training or retrieval data. AI does not know you exist. Start with the basics. |
One rule of thumb. If your top three competitors collectively hold over 70 percent SAIV, the category is consolidated and you are competing for position four. The strategic question becomes whether you can carve out a sub category where the consolidated leaders do not appear, then dominate that. Your AEO maturity stage is the strongest predictor of what SAIV range is realistic in the next twelve months.
SAIV vs Share of Search vs Share of Voice
Three metrics, three different jobs.
| Dimension | Share of Voice (legacy) | Share of Search | Share of AI Voice |
|---|---|---|---|
| Channel | Paid media, PR | Google Search | ChatGPT, Perplexity, Gemini, Copilot, Google AI Overviews |
| Math | Spend / total category spend | Branded search volume / category total | Your citations / total citations on tracked queries |
| Data source | Nielsen, Kantar, comScore | Google Trends, Search Console | LLM response logs across platforms |
| Refresh cadence | Quarterly | Weekly | Monthly |
| What it measures | Paid presence and earned media | Demand interest | AI recommendation rate |
| When to use | Paid campaign planning | Demand forecasting | Top of funnel AEO strategy |
Share of voice still matters for paid brand campaigns. Share of search remains useful as a Google demand signal. Share of AI Voice is the new top of funnel, because the funnel itself moved.
A 20 percent Share of AI Voice today beats a 60 percent share of search five years from now. Buyer attention moved. The metric that tracks it should too.
How Often to Measure SAIV
Monthly is the right cadence for established brands. Weekly during an active AEO sprint or product launch. Daily measurement adds noise without insight.
Here is why daily fails. LLM outputs vary across runs because the underlying models use temperature settings that introduce randomness. The same query asked twice in a row will return slightly different brand mixes. To detect a real change in citation rate, you need sample size, which means more queries and more runs, not more frequent runs of the same query set. A monthly cadence with 30 queries and three runs per query gives you 90 data points per platform, which is enough to call a trend. A daily cadence with the same query set drowns the signal in run to run variance.
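A back of the envelope check makes the sample size point concrete. Treat each response as a coin flip on whether your brand gets cited, which is a simplification, and assume a true citation rate of 15 percent.

```python
from math import sqrt

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95 percent margin of error for an observed citation rate."""
    return 1.96 * sqrt(p * (1 - p) / n)

p = 0.15  # assumed true citation rate, for illustration
print(round(margin_of_error(p, 90) * 100, 1))   # ~7.4 points with 30 queries x 3 runs
print(round(margin_of_error(p, 450) * 100, 1))  # ~3.3 points with 30 queries x 15 runs
```

More runs per cycle tighten the estimate; rerunning the same 90 point sample every day does not.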
The exception is launch windows. When you ship a major content push, a high authority press placement, or a Wikipedia or Wikidata entry, weekly measurement helps you detect when the citation pickup hits. We have seen brand citation rates double inside ten days of a single high authority Reddit thread or Search Engine Land feature. Weekly tracking catches that. Monthly tracking averages it away.
How to Grow Your Share of AI Voice
Six levers move SAIV. They compound when you stack them.
- Source authority. AI engines cite the sources they already trust. The fastest path to SAIV growth is earning placements on the sites those engines rely on. For most categories that means industry publications, trade media, and high authority Reddit threads, not your own blog.
- Entity clarity. An unambiguous brand identity in structured data and schema markup, plus consistent NAP (name, address, phone) data and sameAs connections across the web, makes it possible for AI engines to recognize you as a distinct entity. Without clarity, your citations get attributed to a different company with a similar name. A minimal markup sketch follows this list.
- Answer density. Publish content that directly answers the questions AI engines are being asked. Not generic blog posts. Specific, structured answers in FAQ format with definition boxes, comparison tables, and numbered steps. Our breakdown of getting cited by ChatGPT covers the format in depth.
- Third party mention density. Coverage on independent sites multiplies your citation rate. Two or three strong mentions per month from authoritative third party domains will move your SAIV faster than ten new pieces on your own blog.
- Forum presence. Reddit, Hacker News, Quora, and category specific communities feed Perplexity and ChatGPT directly. A well argued comment on an active thread can drive citation pickup within days.
- Freshness. AI engines prefer current information. Date your content. Update statistics. Remove stale claims. A 2024 guide loses to a 2026 guide on every query where recency matters.
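For the entity clarity lever, here is a minimal sketch of Organization markup with sameAs links, generated from Python for convenience. Every URL and identifier is a placeholder, not a real endpoint.

```python
import json

entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme CRM",                                  # placeholder brand
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",       # placeholder identifiers
        "https://www.linkedin.com/company/example",
        "https://www.crunchbase.com/organization/example",
    ],
}

# Embed the output in the page head as <script type="application/ld+json">.
print(json.dumps(entity, indent=2))
```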
Paid search spend does not move SAIV. Content depth and entity signals do. We covered the full AEO operating model on the services page.
Common Mistakes When Tracking SAIV
Five traps that kill SAIV measurement quality.
- Tracking branded queries. Queries that name your brand inflate the score and tell you nothing. SAIV measures unbranded category presence. Drop branded queries from the calculation, or track them separately as a brand awareness check.
- Single run measurement. One response per query is a sample of one. LLMs vary. Run each query at least three times.
- Ignoring platform weights. A 50 percent SAIV on Copilot and a 5 percent SAIV on ChatGPT is not a winning position. Weight the aggregate by actual platform usage in your category.
- Changing the query set. The most common mistake. Marketers add new queries every cycle to track the latest content launches. The metric loses comparability. Keep the set locked; if you need to track new queries, add a second locked set each quarter instead of modifying the original.
- Confusing citations with mentions. A brand listed in a "here are the options" sentence is not the same as a brand recommended as the answer. Track citation type. Recommendations carry more weight than mentions, and your scoring system should reflect that.
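To act on that last point, here is a minimal sketch of weighted citation scoring. The weights are illustrative; the principle is that a recommendation counts for more than a passing mention.

```python
# Illustrative weights per citation type.
CITATION_WEIGHTS = {"recommended": 3.0, "linked": 2.0, "named": 1.0}

def weighted_saiv(rows, your_brand):
    """rows: (brand, citation_type) tuples. Returns weighted SAIV as a percentage."""
    total = yours = 0.0
    for brand, citation_type in rows:
        weight = CITATION_WEIGHTS.get(citation_type, 1.0)
        total += weight
        if brand == your_brand:
            yours += weight
    return yours / total * 100 if total else 0.0

rows = [("Acme CRM", "recommended"), ("HubSpot", "named"), ("Pipedrive", "named")]
print(round(weighted_saiv(rows, "Acme CRM"), 1))  # 60.0
```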
The Future of Brand Visibility Metrics
By 2027, Share of AI Voice becomes a standard line item in marketing dashboards next to CAC and LTV. Boards will ask for it. Tools will commoditize it. Agencies will pitch it. The brands that started measuring in 2026 will have eighteen months of trend data while their competitors are still defining their query set.
The reason is structural. AI assistants are absorbing the top of funnel from search engines, and search engines are absorbing AI generated answers into their own results. The boundary between "search" and "AI" is dissolving. The metric that survives that dissolution has to measure what AI says about your brand, not just what users searched for. SAIV is built for that future. Share of search is not.
I expect three things to happen. First, the major analytics platforms will add SAIV style measurement to their default dashboards within twelve to eighteen months. Second, a new generation of dedicated AI visibility tools will emerge to compete on accuracy and platform coverage. Third, the brands that built early SAIV trend data will use that data as evidence in board meetings, IPO filings, and acquisition conversations. The metric becomes a moat. The brands without the metric look like the brands without web analytics looked in 2005.
Getting Your SAIV Baseline
You can run a SAIV baseline yourself with a spreadsheet, ChatGPT, Perplexity, and a free afternoon. The instructions above are the full method.
If you want it done in a day with platform automation across ChatGPT, Perplexity, Google AI Overviews, Copilot, and Gemini, AEO Hunt offers SAIV baseline measurement as part of its AI Visibility and AEO services. The baseline includes:
- Custom 30 to 50 query set built from your buyer journey and competitor research
- Per platform SAIV across ChatGPT, Perplexity, Google AI Overviews, Copilot, and Gemini
- Aggregate weighted SAIV with category specific weights
- Three to five competitor benchmarks side by side
- Citation type breakdown (named, linked, recommended) per platform
- A prioritized roadmap for the next 90 days, ranked by SAIV impact
You walk away with the number, the gap analysis, and the playbook to close it. Tracking your SAIV at scale is part of AI visibility analytics reporting, with monthly trend lines and competitor movement.