Every successful GEO programme starts with an audit. Before you invest in content, structured data, or PR, you need to understand your current AI visibility across every dimension — entity recognition, mention rate, sentiment, technical signals, content gaps, and off-site authority. This guide provides the definitive step-by-step methodology for a comprehensive GEO audit, with specific guidance on what to check, how to check it, what good looks like, and what to do when it isn't good. If you're new to GEO, read our introduction to Generative Engine Optimization first.
"A GEO audit is not a one-time activity — it's the baseline for a continuous improvement cycle. Run it quarterly and track every metric against your previous benchmark."
Why a GEO audit should be your first step
Brands that skip the audit and jump straight to GEO tactics make a costly mistake: they invest in the wrong priorities. A brand that invests heavily in content creation before fixing its entity recognition problem will publish dozens of high-quality articles that AI systems cannot reliably associate with the brand. A brand that builds press coverage before correcting its technical signals may earn citations that point back to pages AI retrieval systems cannot reach, because its own content is blocked to crawlers.
The GEO audit prevents this sequencing error. It identifies which of the seven factors of AI visibility are most broken for your specific brand, so you can invest in the highest-leverage actions first. The audit output is a prioritised action plan — not a generic to-do list, but a specific, sequenced roadmap based on your brand's actual situation.
Step 1: Entity recognition check
What to check: Whether AI models have a clear, accurate entity definition for your brand.
How to check it: Ask each of the five major AI models — ChatGPT, Perplexity, Gemini, Claude, and Grok — the following prompts: "What do you know about [brand name]?", "Tell me about [brand name] — what do they do?", and "Is [brand name] a legitimate company?"
What good looks like: Each model provides a clear, accurate description of your brand: correct company name, correct category, correct target audience, correct founding date or approximate age, and generally positive or neutral framing. The descriptions are consistent across all five models.
What bad looks like: One or more models say they have no information about your brand, provide an inaccurate description (wrong category, wrong products, wrong audience), or describe your brand with significant uncertainty ("I think they might be a..."). Different models give contradictory descriptions.
How to fix it: Rewrite your About page as a clear entity brief. Deploy complete Organization schema with sameAs links to Wikipedia and Wikidata. Create or improve your Wikipedia article. Update your Wikidata entry. See our complete guide on optimising your About page for LLM citation.
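As a reference point, the sketch below generates the kind of Organization markup this fix calls for. Every value is a placeholder to replace with your brand's real details, and the sameAs targets only belong in the markup if those pages actually exist.

```python
import json

# Hypothetical example values: replace every field with your brand's
# real details before deploying.
org_schema = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "description": "Example Brand makes project-tracking software for small agencies.",
    "foundingDate": "2018",
    "sameAs": [
        "https://en.wikipedia.org/wiki/Example_Brand",
        "https://www.wikidata.org/wiki/Q000000",
        "https://www.linkedin.com/company/example-brand",
    ],
}

# Emit the JSON-LD to paste into your page's <head>.
print(json.dumps(org_schema, indent=2))
```

Place the output inside a script tag of type application/ld+json on your homepage and About page.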
Step 2: AI mention audit (5 models, 20 prompts)
What to check: Your baseline mention rate and share of voice across all major AI platforms.
How to check it: Design 20 prompts covering: 8 category discovery prompts ("what are the best tools for [your category]?"), 6 problem-solution prompts ("how do I [solve the core problem your brand addresses]?"), 4 comparison prompts ("compare [your brand] to [key competitor]"), and 2 brand-specific prompts. Run each prompt across all five models. Record: mention (yes/no), position (1st, 2nd, 3rd+), competitor mentions. Repeat each prompt 3 times per model to account for response variance. Total: 300 data points minimum.
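If you want to script the data collection rather than run it by hand, a minimal harness might look like the sketch below. The model names, prompt bank, and query_model function are placeholders: wire query_model to whichever API clients you actually use. Note that substring matching is a crude mention test, so position and framing still need a manual pass.

```python
import itertools

MODELS = ["chatgpt", "perplexity", "gemini", "claude", "grok"]
REPEATS = 3  # repeat each prompt to smooth over response variance

# Abbreviated prompt bank: expand to the full 20 prompts
# (8 discovery, 6 problem-solution, 4 comparison, 2 brand-specific).
PROMPTS = {
    "discovery": ["What are the best tools for [your category]?"],
    "problem_solution": ["How do I [solve the core problem your brand addresses]?"],
    "comparison": ["Compare [your brand] to [key competitor]"],
    "brand": ["What do you know about [your brand]?"],
}

def query_model(model: str, prompt: str) -> str:
    """Placeholder: call the model through its API and return the response text."""
    raise NotImplementedError

def run_audit(brand: str, competitors: list[str]) -> list[dict]:
    rows = []
    for model, run in itertools.product(MODELS, range(1, REPEATS + 1)):
        for ptype, prompts in PROMPTS.items():
            for prompt in prompts:
                text = query_model(model, prompt).lower()
                rows.append({
                    "model": model,
                    "run": run,
                    "prompt_type": ptype,
                    "prompt": prompt,
                    "mentioned": brand.lower() in text,
                    "competitors": [c for c in competitors if c.lower() in text],
                })
    return rows  # full bank: 20 prompts x 5 models x 3 runs = 300 rows
```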
What good looks like: Mention rate above 30% for category discovery prompts. A strong position in recommendations (1st or 2nd when mentioned). Consistent appearance across at least 3 of 5 models.
What bad looks like: Mention rate below 10% for category discovery prompts. Absent from responses that include multiple competitors. Visible only in brand-specific prompts, not category queries.
How to fix it: If below 10% mention rate, entity recognition and content authority are the priorities. If 10-30%, content quality and authority building are the levers. If above 30% but with weak positioning (always mentioned last), focus on sentiment and specificity of content. Use Sight to automate this audit at scale →
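Assuming rows shaped like the Step 2 sketch above, translating these thresholds into a priority call takes a few lines:

```python
def discovery_mention_rate(rows: list[dict]) -> float:
    discovery = [r for r in rows if r["prompt_type"] == "discovery"]
    return sum(r["mentioned"] for r in discovery) / len(discovery)

def next_priority(rate: float) -> str:
    if rate < 0.10:
        return "entity recognition and content authority"
    if rate <= 0.30:
        return "content quality and authority building"
    return "sentiment and content specificity"  # mentioned often, positioned weakly
```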
Step 3: Sentiment and framing analysis
What to check: The tone, accuracy, and competitive framing of AI responses that do mention your brand.
How to check it: For every response in your Step 2 audit where your brand was mentioned, categorise: Sentiment (positive, neutral, negative), Framing (recommended, listed, mentioned with caveats), Accuracy (does the description match your actual product and audience?), Position (first, second, third+ in any list), and Competitive context (which competitors are mentioned in the same response?).
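A minimal record structure for this tagging, assuming you log one entry per response that mentioned your brand, could look like the sketch below; the field names and category labels simply mirror the rubric above.

```python
from dataclasses import dataclass, field

@dataclass
class MentionRecord:
    model: str
    prompt: str
    sentiment: str           # "positive" | "neutral" | "negative"
    framing: str             # "recommended" | "listed" | "caveated"
    accurate: bool           # does the description match product and audience?
    position: int            # 1, 2, or 3 for third-or-later in any list
    competitors: list[str] = field(default_factory=list)

def positive_share(records: list[MentionRecord]) -> float:
    """Share of positive-sentiment mentions; the benchmark below targets 0.70+."""
    return sum(r.sentiment == "positive" for r in records) / len(records)
```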
What good looks like: More than 70% positive sentiment. Described in accurate terms that match your actual product positioning. Appearing in the top 2 positions when listed alongside competitors. Infrequent negative caveats.
What bad looks like: Neutral or negative framing ("some users report issues with..."). Inaccurate descriptions (wrong product category, wrong audience, outdated feature set). Appearing consistently last in recommendation lists. Described with high uncertainty ("[brand name] might be worth considering if...").
How to fix it: For accuracy issues, the About page and entity-definition content are the levers. For sentiment issues, the fix is managing the third-party content landscape — ensuring negative content is countered by positive coverage and reviews. For positioning, content depth and authority building are required. See our guide on building brand authority for AI assistants.
Step 4: Technical and structured data review
What to check: Whether your website's technical signals support AI visibility — indexing, structured data, crawl access.
How to check it: Check Bing Webmaster Tools for indexing status (critical for Perplexity and ChatGPT browsing). Review your robots.txt to ensure PerplexityBot, Bingbot, Googlebot, and other major crawlers are not blocked. Validate the Organization schema on your homepage and About page with the Schema.org validator (validator.schema.org), and use Google's Rich Results Test to confirm your Article and FAQ markup is eligible for rich results. Check every major blog post for Article or BlogPosting schema and every FAQ page for FAQPage schema. Review canonical URL consistency.
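The crawler-access part of this review is easy to script with Python's standard library, as in the sketch below. The user-agent tokens listed are assumptions based on each vendor's published crawler names; verify them against current documentation.

```python
from urllib.robotparser import RobotFileParser

# Crawler tokens to verify; exact user-agent strings change over time,
# so confirm each one against the vendor's docs.
CRAWLERS = ["PerplexityBot", "bingbot", "Googlebot", "GPTBot"]

def check_crawl_access(site: str) -> dict[str, bool]:
    rp = RobotFileParser()
    rp.set_url(f"{site}/robots.txt")
    rp.read()  # fetches and parses the live robots.txt
    return {bot: rp.can_fetch(bot, f"{site}/") for bot in CRAWLERS}

print(check_crawl_access("https://www.example.com"))
```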
What good looks like: All major crawlers permitted in robots.txt. Bing index shows all major pages. Organization schema is valid and complete with sameAs URLs. All content pages carry appropriate Article schema with a current dateModified. FAQPage schema on all FAQ content. Zero schema validation errors.
What bad looks like: Bing index shows missing pages. Structured data validation errors. Missing sameAs property in Organization schema. No schema on blog posts. Blocked crawlers. Inconsistent canonical URLs. For implementation details, see our guide on structured data for LLMs.
Step 5: Content gap analysis
What to check: Whether your existing content library covers the AI-relevant query types that matter for your category.
How to check it: Build a query bank of 40-60 prompts targeting your category. Categorise them by type: definitional (what is X), comparative (X vs Y), problem-solving (how do I X), and discovery (best tools for Y). For each category, identify: do you have content that directly answers this? Is it structured for AI citation (FAQPage schema, clear headings, factual claims)? Is it recent (published or updated within 12 months)? Then map this against what your competitors have published.
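One lightweight way to make the gap visible is to score each query in the bank on coverage, structure, and freshness, then aggregate by type. The entries below are purely hypothetical placeholders:

```python
QUERY_TYPES = ["definitional", "comparative", "problem_solving", "discovery"]

# Hypothetical query bank: each entry records whether a page directly
# answers the query, whether that page is structured for citation, and
# whether it was published or updated within the last 12 months.
query_bank = [
    {"type": "definitional", "query": "what is [core concept]",
     "covered": True, "structured": True, "fresh": False},
    {"type": "comparative", "query": "[your brand] vs [competitor]",
     "covered": False, "structured": False, "fresh": False},
    # ... extend to the full 40-60 prompts
]

def coverage_by_type(bank: list[dict]) -> dict[str, float]:
    out = {}
    for qtype in QUERY_TYPES:
        rows = [q for q in bank if q["type"] == qtype]
        ok = [q for q in rows if q["covered"] and q["structured"] and q["fresh"]]
        out[qtype] = len(ok) / len(rows) if rows else 0.0
    return out
```

Any type scoring near zero is a gap; compare the same scores against your competitors' published content to set priorities.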
What good looks like: Content covering all 4 major prompt types. At least 5 pieces of definitional content on core category concepts. FAQ content for the top 20 questions in your category. Comparison guides for top competitive matchups. All content structured with appropriate schema.
What bad looks like: Content library dominated by promotional or product-focused content with no definitional or FAQ content. No comparison content. No statistical or data-driven content. Content older than 18 months without updates. Missing schema on existing content. For the content strategy framework, see our article on content strategies that drive AI mentions.
Step 6: Off-site authority audit
What to check: The quality and quantity of third-party citations that validate your brand's authority in AI training data.
How to check it: Wikipedia — does your brand have an article? Is it accurate and well-cited? Wikidata — is your entity listed with complete attributes? Press — count significant press mentions in the last 24 months, categorised by publication authority tier (Tier 1: major national/tech press; Tier 2: industry publications; Tier 3: blogs/niche sites). Review platforms — how many reviews, on which platforms, with what average rating and recency? Industry associations — which associations is your brand listed in? Academic/government — any citations in published research or official documents? Podcast/video — any transcript-producing media appearances?
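The Wikidata check is one of the few items here you can automate directly: the public wbsearchentities endpoint returns any entity matching your brand name. A stdlib-only sketch follows; the User-Agent string is an arbitrary placeholder, which Wikimedia's API etiquette asks you to set.

```python
import json
import urllib.parse
import urllib.request

def wikidata_entity(brand: str) -> dict | None:
    """Search Wikidata for the brand; return the top match or None."""
    params = urllib.parse.urlencode({
        "action": "wbsearchentities",
        "search": brand,
        "language": "en",
        "format": "json",
    })
    req = urllib.request.Request(
        f"https://www.wikidata.org/w/api.php?{params}",
        headers={"User-Agent": "geo-audit-script/0.1"},  # placeholder identifier
    )
    with urllib.request.urlopen(req) as resp:
        results = json.load(resp).get("search", [])
    return results[0] if results else None

match = wikidata_entity("Example Brand")
if match:
    print(match["id"], match.get("description", "(no description set)"))
else:
    print("No Wikidata entry found")
```

A match with a missing or vague description is the "incomplete entry" failure mode described below.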
What good looks like: Wikipedia article exists and is accurate. Wikidata entry is complete. 5+ Tier 1 or Tier 2 press mentions in the last 24 months. 100+ verified reviews across G2, Trustpilot, or category-specific review platforms. 2+ industry association listings. Any academic or government citation is a significant bonus.
What bad looks like: No Wikipedia article. Wikidata entry absent or incomplete. Fewer than 2 significant press mentions in 24 months. Low review count or poor recency. No industry association listings. Zero third-party academic or institutional references.
Turning your audit into an action plan
The six audit steps above typically reveal 3-5 major areas for improvement. The sequencing of your action plan matters: fix entity recognition first (it unlocks everything else), then technical signals, then content, then off-site authority. Trying to build off-site authority before your entity is clear and your technical signals are correct is inefficient — the press coverage will be harder to earn and less impactful when it is earned.
Prioritise by impact and effort. Entity and technical fixes are typically high-impact and relatively low-effort. Content creation is medium-effort with compounding returns. Off-site authority building is highest-effort but delivers the strongest long-term compounding benefit.
Set quarterly audit cadence: run the full 6-step audit every quarter, track every metric against the previous quarter's baseline, and adjust your action plan based on what's improving and what isn't. AI visibility is a continuous improvement programme, not a one-time project. For measurement methodology between audits, see our guide on tracking AI share of voice.
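If you keep each quarter's metrics in a flat dictionary, the quarter-over-quarter comparison takes a few lines; the metric names and numbers here are purely illustrative.

```python
def quarter_over_quarter(current: dict[str, float],
                         baseline: dict[str, float]) -> dict[str, float]:
    """Delta for each audit metric against the previous quarter's baseline."""
    return {k: round(current[k] - baseline.get(k, 0.0), 3) for k in current}

q1 = {"discovery_mention_rate": 0.12, "positive_sentiment": 0.55}
q2 = {"discovery_mention_rate": 0.21, "positive_sentiment": 0.63}
print(quarter_over_quarter(q2, q1))
# {'discovery_mention_rate': 0.09, 'positive_sentiment': 0.08}
```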
To run your GEO audit faster and with greater statistical confidence, use Sight to automate Steps 2 and 3 at scale — hundreds of prompts, continuous monitoring, and competitive benchmarking in a single dashboard. Start your free GEO audit with Sight →