The AI Visibility Stack

Three layers. Each depends on the one below.

The AI Visibility Stack is the conceptual model AIVZ uses to organize all 93 AEO factors into three dependency layers — Access, Understanding, Extractability. It's the most important methodology choice in the product. It tells you what to fix first, and why fixing things out of order is wasted work.

The Framework

The dependency stack.

The layers stack vertically, and the dependency runs upward: each layer depends on the one beneath it. A failure at any layer collapses the layers above it.

Layer 3 — Extractability
Output Layer

Can AI cleanly isolate and reuse the answer?

When this layer fails, AI reads you but quotes someone else. The most common citation-failure mode for content-heavy sites.

FAQ Summary Content Richness Content Quality Speakable
Failure mode: AI reads you but quotes someone else.
↑ depends on
Layer 2 — Understanding
Semantic Layer

Can AI parse the structure and meaning?

When this layer fails, AI reaches your content but can't reliably interpret it. Citations don't fire because the system isn't confident about the source identity.

Schema Structure Entity Schema Accuracy Author E-E-A-T
Failure mode: AI reaches you but can't reliably interpret you.
↑ depends on
Layer 1 — Access
Foundation

Can AI crawlers physically reach the content?

Without this layer working, no other layer matters. AI can't optimize for content it can't reach.

Crawl Access Feed & Discovery Bot Blocking
Failure mode: AI can't reach you at all.

If your robots.txt blocks GPTBot, your perfectly structured FAQ schema does literally nothing for ChatGPT visibility. The schema is correct. The content is excellent. ChatGPT will never see it.
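Whether a given bot is blocked can be checked mechanically. Here is a minimal sketch using Python's standard-library robots.txt parser; the robots.txt content is illustrative:

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt that blocks GPTBot while allowing all other bots.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

def bot_can_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if `user_agent` may fetch `url` under this robots.txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

for bot in ("GPTBot", "ClaudeBot", "PerplexityBot"):
    print(bot, bot_can_fetch(ROBOTS_TXT, bot, "https://example.com/faq"))
```

Run this against your real robots.txt before investing in any upper-layer work.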

The Stack tells you what to fix first. Bottom-up. Always.

The Structural Argument

Independent vs dependent factor models.

Most AEO tools treat factors as independent contributors. The Stack treats them as dependent. The difference changes how you sequence work.

Independent factor model says

"Your page scored 47. To improve, fix any of these 23 issues — they each contribute to the total."

User picks the easiest fix. Adds FAQ schema. Score moves 47 → 51. Marginal gain.

Dependent factor model says

"Your page scored 47. The score is capped at 65 because Layer 1 is failing — your robots.txt is blocking GPTBot. Fix that first. Projected score after Layer 1 fix: 78."

User fixes Access. Score moves 47 → 78. Step-function gain — because the upstream work was already done; it just wasn't counting.

The dependency model captures something real about how AI systems behave: they fail absolutely at the boundaries you expect (no access → no citation), and they compound success where multiple signals reinforce each other (clean schema + structured answer + named author = high citation rate). A flat scoring model can't represent this. The Stack can.

The Foundation

Layer 1: Access.

The question this layer answers: Can AI crawlers physically reach your content?

Layer 1 measures whether AI crawl agents — GPTBot, ClaudeBot, PerplexityBot, GoogleOther, Bingbot — can access, discover, and render your pages. Without this layer working, no other layer matters.

Three sub-scores

Sub-score · What it measures

  • Crawl Access: Whether AI bots can fetch your pages — server response codes, redirect chains, JavaScript rendering requirements, page render time.
  • Feed & Discovery: Whether AI bots can find your content — XML sitemaps, RSS/Atom feeds, internal linking, llms.txt declarations.
  • Bot Blocking: Whether bot-protection infrastructure (robots.txt, WAF rules, CDN configurations) is permitting or blocking AI crawlers.

Most common failures

  • robots.txt blocks AI crawlers (often unintentionally)
  • WAF or CDN bot-protection challenges or blocks AI crawlers
  • JavaScript-only rendering with no SSR or pre-rendering
  • Slow TTFB — bots time out before reading the page
  • Missing or stale XML sitemaps
  • No llms.txt published
  • Aggressive rate-limiting that throttles AI bots disproportionately

Layer 1 fixes

  • Audit robots.txt for AI bot disallows; explicitly allow if discovery is desired
  • Review WAF and CDN rules; whitelist AI crawler user agents
  • Implement SSR or pre-rendering for content that matters
  • Optimize TTFB to under 600ms for crawl-priority pages
  • Maintain accurate XML sitemaps; submit to search consoles
  • Publish llms.txt at site root with crawl preferences
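Because the llms.txt standard is still settling, treat the following as a sketch of the shape proposed at llmstxt.org: a markdown file at the site root with a title, a blockquote summary, and link sections. Every name and URL here is a placeholder.

```markdown
# Example Company

> B2B SaaS platform for widget analytics. The key documentation and
> product pages for AI crawlers are listed below.

## Docs

- [Product overview](https://example.com/product): What the platform does
- [Pricing](https://example.com/pricing): Plans and tiers

## Optional

- [Blog](https://example.com/blog): Long-form articles
```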

Confidence: Most Layer 1 factors are Established — well-supported by web standards. A few (e.g., specific AI-bot user-agent identification, llms.txt adoption) are Emerging / Experimental because the standards are still settling.

The Semantic Layer

Layer 2: Understanding.

The question this layer answers: Can AI parse the structure and meaning of your content?

Layer 2 measures whether AI systems can identify what your page is about, who wrote it, what entities it discusses, and whether the metadata accurately represents the content. This is where structured data does its work.

Five sub-scores

Sub-score · What it measures

  • Schema: Presence and correctness of JSON-LD structured data — Organization, Person, Article, FAQPage, and other Schema.org types.
  • Structure: Document structure — heading hierarchy, semantic HTML, content organization.
  • Entity: Entity grounding — sameAs linking, Wikidata presence, knowledge graph alignment, cross-page consistency.
  • Schema Accuracy: Whether structured data matches the actual content — no hallucinated fields, no stale metadata, no schema-content drift.
  • Author E-E-A-T: Author identification — named authors, credentials, bio pages, expertise signals, publication history.

Most common failures

  • No JSON-LD structured data at all
  • Schema present but with errors (missing required fields, type errors)
  • No Organization schema (entity grounding fails for the publisher)
  • No Person schema for authors
  • No sameAs linking to authoritative profiles
  • Articles without identifiable authors
  • Schema drift — schema metadata doesn't match what's on the page
  • Inconsistent entity naming across the site

Layer 2 fixes

  • Add comprehensive JSON-LD structured data to every meaningful page
  • Validate schema with Google's Rich Results Test and Schema.org's validator
  • Implement Organization schema sitewide; Person schema for every author
  • Link entities via sameAs to Wikidata, LinkedIn, Crunchbase
  • Audit schema-content alignment — fix any drift
  • Standardize entity naming across the site
  • Add author bio pages with credentials and external profile links

Confidence: JSON-LD and core Schema.org types are Established. Entity grounding via sameAs is Strongly Inferred. E-E-A-T author signals are Strongly Inferred for most platforms, Established for Google-derived AI surfaces.

The Structural Layer

Layer 3: Extractability.

The question this layer answers: Can AI cleanly isolate and reuse the answer?

Layer 3 measures whether AI systems can extract a clean, citable answer block — the kind of structured, concise, fact-dense passage that AI includes in generated responses. When Layer 3 fails, AI reads you but quotes someone else.

Five sub-scores

Sub-score · What it measures

  • FAQ: Presence and quality of FAQ structures — proper <dl>, FAQPage schema, scannable Q&A blocks.
  • Summary: Definitional and summary density — explicit definitions, summary paragraphs, key-takeaway blocks at the top of long content.
  • Content Richness: Density of citation-friendly elements — lists, tables, statistics with sources, structured comparisons, numbered procedures.
  • Content Quality: Readability, factual accuracy, originality, depth on topic, citation formatting.
  • Speakable: Voice-extractability — speakable schema, voice-friendly answer length, conversational phrasing.

Most common failures

  • Answer is buried in paragraph 4 instead of front-loaded
  • Answer blocks are 200+ words instead of the 40–60 word sweet spot
  • Headings don't ask the questions readers ask
  • Skipped heading levels (H1 → H3) break document parsing
  • No FAQ structure on pages that should have it
  • Statistics presented without sources
  • Comparison content in prose paragraphs instead of HTML tables
  • No speakable schema (voice assistants can't extract)
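The skipped-heading failure is easy to detect mechanically. A sketch using Python's standard-library HTMLParser:

```python
from html.parser import HTMLParser

class HeadingAudit(HTMLParser):
    """Collect heading levels and flag skipped levels (e.g. h1 -> h3)."""

    def __init__(self):
        super().__init__()
        self.levels = []

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag names, so "h1".."h6" match directly.
        if len(tag) == 2 and tag[0] == "h" and tag[1].isdigit():
            self.levels.append(int(tag[1]))

    def skipped_levels(self):
        # A jump of more than one level downward breaks document parsing.
        return [
            (prev, cur)
            for prev, cur in zip(self.levels, self.levels[1:])
            if cur > prev + 1
        ]

# Illustrative page fragment with an h1 -> h3 skip.
audit = HeadingAudit()
audit.feed("<h1>Guide</h1><h3>Details</h3><h2>FAQ</h2>")
print(audit.skipped_levels())
```

An empty list means the hierarchy descends one level at a time; any tuple in the output is a skip to fix.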

Layer 3 fixes

  • Front-load direct answers — first 40–60 words of any answer-bearing page should be the answer
  • Refactor verbose answer blocks down to citation-extractable lengths
  • Convert decorative headings to question-based headings
  • Audit and fix heading hierarchy
  • Add FAQ structures with proper FAQPage schema
  • Source every statistic; format citations consistently
  • Convert comparison content from prose to <table> markup
  • Add speakable schema with voice-friendly answer phrasing
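The speakable fix can be sketched as a Schema.org SpeakableSpecification pointing at the page's short answer blocks. The selectors and URL below are placeholders, and platform support for speakable remains uneven:

```python
import json

# Hypothetical page; the CSS selectors and URL are placeholders.
speakable_jsonld = {
    "@context": "https://schema.org",
    "@type": "WebPage",
    "name": "What Is Widget Analytics?",
    "url": "https://example.com/what-is-widget-analytics",
    "speakable": {
        "@type": "SpeakableSpecification",
        # Point voice assistants at the short, front-loaded answer blocks.
        "cssSelector": [".answer-summary", ".faq-answer"],
    },
}

print(json.dumps(speakable_jsonld, indent=2))
```

The selectors should target blocks already refactored to the 40–60 word range, so the voice extract and the citation extract are the same text.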

Confidence: Front-loaded answers, concise blocks, and HTML structure factors are Strongly Inferred. FAQPage schema is Established. Speakable schema is Emerging / Experimental — the standard is settling and platform support is uneven.

The Composite Output

How layer scores aggregate.

Each layer produces its own score (0–100). The three aggregate into a composite AI Visibility Score. The aggregation is not a simple average — lower-layer weakness caps the contribution of upper layers.

Score · Tier · What it means

  • 90–100 · AI Authority: Healthy across all three layers. AI systems can find you, understand you, and cite you reliably. Maintenance mode.
  • 70–89 · AI Extractable: Strong foundation. Layer 3 needs refinement — you're being read but not always cited. Most fixes are content-structural.
  • 40–69 · AI Readable: AI can access and partly understand you, but answers aren't extractable cleanly. Most sites land here on first scan.
  • 0–39 · Invisible to AI: Failing in early layers. AI can't reliably reach or interpret your content. High-leverage fix opportunity — large gains available with focused work.
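AIVZ's exact aggregation isn't specified here, but the capping behavior can be sketched. The min-based caps and equal weights below are illustrative assumptions, not the product's real formula; the tier thresholds follow the table above:

```python
def composite_score(access: float, understanding: float, extractability: float) -> float:
    """Sketch of a dependency-capped composite over three 0-100 layer scores.

    Assumption: each upper layer can contribute no more than the health of
    the layer beneath it allows. Illustrative, not AIVZ's published formula.
    """
    capped_understanding = min(understanding, access)
    capped_extractability = min(extractability, capped_understanding)
    # Equal weighting of the three capped layers is also an assumption.
    return round((access + capped_understanding + capped_extractability) / 3, 1)

def tier(score: float) -> str:
    """Map a composite score to its tier, per the thresholds above."""
    if score >= 90:
        return "AI Authority"
    if score >= 70:
        return "AI Extractable"
    if score >= 40:
        return "AI Readable"
    return "Invisible to AI"

# A blocked Layer 1 caps everything above it:
print(composite_score(20, 90, 90))  # low despite strong upper layers
print(composite_score(95, 90, 90))  # same upper layers, healthy foundation
```

Note how the first call wastes the strong Layer 2 and Layer 3 scores entirely, which is the step-function dynamic described earlier.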

The score is a leading indicator. Citation outcomes — whether AI systems actually cite you — are the lagging indicator. Improvement in the score should be followed by improvement in observed citation rate, on a lag of weeks to months as AI systems re-crawl and re-evaluate.

One Stack, Multiple Platforms

The platforms don't agree.

Different AI systems weight the three layers differently. A single composite score under-represents what's actually happening — the platforms surface different visibility patterns from the same underlying content.

Platform · Tends to weight

  • ChatGPT: Training-data exposure (older, well-cited content); structured answer extractability; entity grounding
  • Google AI Overviews: E-E-A-T signals; existing search ranking; Schema.org compliance; Google-aligned freshness
  • Perplexity: Live crawl access (Layer 1 disproportionately); citation-friendly formatting; recency
  • Microsoft Copilot: Bing-derived ranking; Office-integration content surfaces; entity disambiguation
  • Gemini: Google-derived signals + Gemini's specific extraction model; multimodal content readiness
  • Voice assistants: Speakable schema; concise answer blocks; conversational phrasing

A site can be in the AI Extractable tier overall but Invisible to AI on Voice — typical for content-heavy sites that haven't added speakable schema. The composite hides the gap; the per-platform breakdown surfaces it.

See per-platform factors and readiness scoring →
A Real Score, Explained

What a 47 looks like.

The AI Visibility Score by itself is a number. The interesting part is what produced it. Here's how a real (anonymized) domain scored on first scan.

Anonymized worked example

Domain: B2B SaaS company, ~200 pages, content-heavy blog, professionally produced. Mid-market. Composite score: 47 (AI Readable)

Layer 1 — Access
92

Verdict: Layer 1 was not the bottleneck; most of the composite's strength came from this layer being healthy. One small WAF issue with GoogleOther was fixed in 30 minutes.

Layer 2 — Understanding
58

Verdict: The primary bottleneck. Article schema present but inconsistent. Author identification on ~60% of posts. No sameAs. No Wikidata entity. The score capped because Layer 2 weakness propagated upward.

Layer 3 — Extractability
31

Verdict: Structurally underdeveloped. Answers buried in paragraph 4–6. Few question-based headings. No FAQ structures. No tables. No speakable schema. Capped by Layer 2 — until that improved, fixing this wouldn't show.

The fix sequence

  1. Layer 2 work first. Standardized Article schema. Person schema for every author. Added sameAs to LinkedIn. Added Wikidata entity for the company (manual edit; took two weeks to verify). Fixed three pages with stale metadata.
  2. Layer 3 work second. Refactored top 30 posts to front-load answers. Converted three comparison posts from prose to tables. Added FAQ structures to ten pages. Added speakable schema.
  3. Re-scan after 6 weeks (allowing AI re-crawl).
47 → 78
Composite score · 6 weeks · Tier upgrade to AI Extractable
Layer 1: 94 · Layer 2: 84 · Layer 3: 71

Step-function gain. The Layer 3 work hadn't been counting until Layer 2 gave it something to anchor to. A flat scoring system would have surfaced 23 issues at 47 and let the team pick whichever was easiest. The Stack told them: Layer 2 first, Layer 3 second, and only a 30-minute touch-up on Layer 1.

Communication Patterns

How agencies and in-house teams use the Stack.

The Stack is a methodology framework, but it also serves as a communication framework. It changes how AEO work gets sold internally — to executives, to clients, to budget owners.

Pattern 01 · Triage call

"Your composite score is 42 and the limiter is Layer 1. Specifically, your robots.txt is blocking GPTBot. This is a 30-minute fix. After we ship it, your score will likely jump 15–25 points without changing anything else, because all the existing Layer 2 and Layer 3 work suddenly counts."

Pattern 02 · Quarterly review

"Last quarter: 56 average across the client portfolio. This quarter: 73. Most of the gain came from a focused Layer 2 schema standardization push across 14 client sites. The Layer 3 work is in flight; we project another 10–12 points next quarter."

Pattern 03 · Pitch / sales call

"We measure visibility across three dependency layers — Access, Understanding, Extractability. Each depends on the one before. We can show you exactly where in this stack your visibility is breaking, what the highest-leverage fix is, and what the projected score gain looks like. Want to run the scan?"

See multi-tenant agency dashboards, white-label portals, and stakeholder-ready reporting →
Your Next Step

Three paths from here.

See your own score

Get the composite score, the three-layer breakdown, the failing factors per layer, and the prioritized fix path — in 60 seconds.

Run a free scan

Read the full taxonomy

The 93 factors that compose the three layers — organized into 9 categories with confidence labels.

Read the 93-factor taxonomy

The Citation Core 11

The eleven factors with disproportionate impact on citation outcomes — drawn from across all three layers.

Read the Citation Core 11

Or — return to the methodology hub: Read the AEO methodology →

Ready When You Are

See where you stand on each layer.

Run a free scan. Get the three-layer breakdown plus per-platform readiness across the 6 AI surfaces — in under 60 seconds.

Enter your domain
Free · No signup · Results in 60 seconds