A CMO asked me last month how her AEO work was performing. I opened Google Search Console. It showed her organic traffic, which was flat. It said nothing about ChatGPT, Perplexity, Gemini, or AI Overviews. ‘Is that the whole story?’ she asked. It was not.
B2B brands with no AEO measurement stack are flying blind through the biggest channel shift since mobile. Your organic traffic can be flat while your AI citation count is tripling, and you would not know from Google Search Console alone.
73% of B2B buyers use at least one AI engine during research (Gartner 2025). Only 14% of B2B brands have any form of AI citation tracking in place (The Smarketers GEO Audit Q1 2026). The measurement gap is wider than the optimisation gap.
The 4 metrics of AEO
1. Citation count
The number of times your brand’s URL or name is cited in AI engine responses per week. Track separately for ChatGPT, Perplexity, Gemini, and Google AI Overviews. This is the leading indicator: citations rise first, traffic and pipeline follow.
2. Share of Model
Your brand mention share across a defined set of category queries (typically 20 to 100). If you run ‘best B2B CRM’ on ChatGPT and HubSpot is mentioned 8 times in the 20-response sample and you are mentioned 2 times, your Share of Model on that query is 10%. Aggregate across all monitored queries for a category-level score.
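The aggregation step can be sketched in a few lines of Python. The query set, sample size, and brand counts below are hypothetical, matching the worked example above:

```python
from collections import defaultdict

def share_of_model(mentions: dict, responses_per_query: int) -> dict:
    """Aggregate per-query brand mention counts into a category-level
    Share of Model score: total mentions / total responses sampled."""
    totals = defaultdict(int)
    for query, brand_counts in mentions.items():
        for brand, count in brand_counts.items():
            totals[brand] += count
    total_responses = responses_per_query * len(mentions)
    return {brand: count / total_responses for brand, count in totals.items()}

# Hypothetical weekly sample: 20 responses collected per query
sample = {
    "best B2B CRM": {"HubSpot": 8, "YourBrand": 2},
    "top CRM for startups": {"HubSpot": 10, "YourBrand": 4},
}
scores = share_of_model(sample, responses_per_query=20)
print(scores["YourBrand"])  # 6 mentions across 40 sampled responses = 0.15
```

The same function works for competitor benchmarking: log every brand you track into the same per-query counts and compare the resulting scores.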
3. AI referral traffic
Visitors arriving from AI engine referrers. Configure GA4 with a custom channel grouping and monitor weekly. Most B2B brands see AI referral traffic as 3 to 12% of total traffic in 2026, and it is growing.
4. AI-influenced pipeline
Opportunities where AI-sourced touches appear in the attribution model. Configure HubSpot or Salesforce to flag AI referrals as first-touch or assist-touch events. Report quarterly. This is the lagging indicator that ties AEO work to revenue.
The measurement stack
Profound
Most comprehensive multi-engine tracker. Runs thousands of queries weekly across ChatGPT, Perplexity, Gemini, and AI Overviews. Reports brand mention rate, citation URLs, and competitive benchmarking. Best-in-class for enterprise teams; costs around $2,000 to $10,000 per month.
AthenaHQ
ChatGPT-focused, with deeper query monitoring on that engine. Strong for teams whose buyer base is dominated by ChatGPT. More affordable entry point, around $500 to $2,000 per month.
Otterly
Mid-market multi-engine tracker. Covers the big 4 engines at a lower cost, around $200 to $800 per month. Less deep than Profound but sufficient for most B2B teams starting out.
Manual tracking for small teams
If budget is constrained, run a weekly manual audit. Define 10 to 20 category queries. Run each on ChatGPT, Perplexity, and Gemini every Friday. Log citations and brand mentions in a spreadsheet. 2 hours per week. Not as comprehensive as paid tools but produces a defensible trend line.
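A small helper keeps that spreadsheet consistent week to week. This is a minimal sketch assuming a flat CSV layout; the column names and example query are illustrative:

```python
import csv
from datetime import date
from pathlib import Path

COLUMNS = ["date", "query", "engine", "brand_cited", "brand_mentioned", "notes"]

def log_observation(path: str, query: str, engine: str,
                    cited: bool, mentioned: bool, notes: str = "") -> None:
    """Append one manual audit observation; write a header row on first use."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(COLUMNS)
        writer.writerow([date.today().isoformat(), query, engine,
                         int(cited), int(mentioned), notes])

# Friday audit: one row per query per engine
log_observation("aeo_audit.csv", "best B2B CRM", "ChatGPT",
                cited=True, mentioned=True, notes="cited pricing page")
```

Storing 1/0 flags rather than free text makes the trend line trivial to chart after a few weeks of logging.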
If you have to pick one metric, pick Share of Model. Citation count can be inflated by many low-value queries. Traffic and pipeline lag by 8 to 16 weeks. Share of Model is the leading indicator with real competitive signal.
Setting up GA4 for AI traffic
In GA4, create a custom channel grouping with a new channel called ‘AI Search’. Include these referrers: chat.openai.com, chatgpt.com, perplexity.ai, www.perplexity.ai, gemini.google.com, bard.google.com, claude.ai, copilot.microsoft.com. Add bing.com URLs with ‘/chat’ path patterns for Copilot in Bing.
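The same channel rule can be mirrored in code for spot-checking server logs or analytics exports. A sketch, using the referrer list above; the host set will need updating as engines change domains:

```python
from urllib.parse import urlparse

# Referrer hosts from the 'AI Search' channel definition above.
AI_SEARCH_HOSTS = {
    "chat.openai.com", "chatgpt.com", "perplexity.ai", "www.perplexity.ai",
    "gemini.google.com", "bard.google.com", "claude.ai", "copilot.microsoft.com",
}

def classify_referrer(referrer: str) -> str:
    """Return 'AI Search' when a referrer URL matches the channel rule."""
    parsed = urlparse(referrer)
    host, path = parsed.netloc.lower(), parsed.path
    if host in AI_SEARCH_HOSTS:
        return "AI Search"
    if host.endswith("bing.com") and path.startswith("/chat"):
        return "AI Search"  # Copilot in Bing
    return "Other"

print(classify_referrer("https://chatgpt.com/"))           # AI Search
print(classify_referrer("https://www.bing.com/chat?q=x"))  # AI Search
print(classify_referrer("https://www.google.com/search"))  # Other
```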
Report AI Search alongside Organic Search, Paid Search, Direct, Email, Social. Within 2 to 3 months you will have a baseline. Within 6 months you will have trend data that informs content investment decisions.
Setting up HubSpot or Salesforce for AI attribution
Add a custom field to the contact and deal records called ‘First AI Touch’. Populate it via a workflow trigger when a contact’s first-touch source matches AI referrer patterns. Build a dashboard that shows opportunities and closed-won revenue segmented by First AI Touch vs traditional channels.
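The workflow's matching rule reduces to a simple predicate. The sketch below is illustrative: the pattern list mirrors the GA4 referrers, the deal dictionaries are hypothetical, and the actual field and workflow are configured in the HubSpot or Salesforce UI, not in code:

```python
AI_REFERRER_PATTERNS = (
    "chat.openai.com", "chatgpt.com", "perplexity.ai",
    "gemini.google.com", "claude.ai", "copilot.microsoft.com",
)

def is_first_ai_touch(first_touch_source) -> bool:
    """True when a contact's first-touch source matches an AI referrer pattern."""
    src = (first_touch_source or "").lower()
    return any(pattern in src for pattern in AI_REFERRER_PATTERNS)

def segment_revenue(deals: list) -> dict:
    """Split closed-won revenue by First AI Touch vs traditional channels."""
    out = {"AI Search": 0.0, "Traditional": 0.0}
    for deal in deals:
        key = ("AI Search" if is_first_ai_touch(deal.get("first_touch_source"))
               else "Traditional")
        out[key] += deal.get("amount", 0.0)
    return out
```

Running `segment_revenue` over a quarter's closed-won deals gives the CFO-facing split described below.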
This produces a quarterly view of AI-influenced revenue that you can share with the CFO. Expect it to be 8 to 22% of new-customer revenue within 12 months of a focused AEO programme.
Competitive benchmarking
Your Share of Model is only useful in context. Track 3 to 5 competitors on the same query set. Most AEO tools (Profound, AthenaHQ) support competitive tracking natively. For manual tracking, log competitor mentions alongside your own.
Common patterns we see in B2B benchmarking: category leaders hold 25 to 45% Share of Model on brand-generic queries. Mid-market competitors hold 5 to 15%. Invisible brands hold <3%. The gap between ‘<3%’ and ‘10%’ is usually the result of 2 to 3 quarters of structured AEO work.
Weekly AEO reporting cadence
Monday morning report to the marketing team: Citation count this week vs last 4 weeks. Top 5 queries where we gained mentions. Top 5 queries where competitors gained mentions. Share of Model % this week. AI referral traffic this week. Pages cited most often. Pages losing citations.
Monthly report to leadership: Share of Model trend (12-week). AI referral traffic trend. AI-influenced pipeline (quarterly rolling). Top 3 content investments of the month and their citation impact. Competitive benchmark update.
The honest limitation
AEO measurement tools are 18 months old as a category. They are improving fast but still have gaps. Query coverage is finite. Response variability is real (the same query can produce different responses hour-to-hour). Attribution is imperfect.
Treat AEO measurement as a directional signal, not a precision metric. The trend matters more than the point value. 12 weeks of rising Share of Model means the work is working. 12 weeks of flat Share of Model means you need to change something. Do not obsess over weekly fluctuations.
Frequently Asked Questions
What is Share of Model?
Share of Model is the percentage of times your brand is mentioned in AI engine responses for a defined set of category queries. It is the AI-era equivalent of Share of Voice in traditional media. Track weekly for 20 to 100 category queries per brand.
Which tools should I use to track AI citations?
Profound (most comprehensive, ChatGPT + Perplexity + Gemini + AI Overviews), AthenaHQ (strong ChatGPT focus), and Otterly (multi-engine, lower cost). Free alternatives include manual weekly audits with ChatGPT and Perplexity and spreadsheet tracking for small teams.
How do I measure AI referral traffic in GA4?
Create a custom channel grouping in GA4 that captures referrers from chat.openai.com, chatgpt.com, perplexity.ai, gemini.google.com, and claude.ai. Many AI tools also pass referrer headers. Report weekly alongside organic search.
Is there an AEO equivalent of Google Search Console?
Not officially. OpenAI and Perplexity do not publish SEO-equivalent reporting. Third-party tools (Profound, AthenaHQ) reconstruct the picture by running thousands of test queries weekly and logging which sources are cited.
How often should I check AI citation data?
Weekly for citation count and Share of Model. Monthly for trend analysis and competitive benchmarking. Daily is overkill except for brand crisis scenarios.