Answer Engine Optimization is the discipline of optimizing for AI answer engines — ChatGPT, Google AI Overviews, Perplexity, Gemini, Microsoft Copilot, voice assistants. AIVZ measures it across 93 factors organized into a three-layer dependency stack. This page is the methodology canon.
Search Engine Optimization answers the question "how do I rank for this query?" The output is a ranked list of links. The user clicks. The user reads.
Answer Engine Optimization answers a different question: "how do I become the source AI cites when generating an answer?" The output is not a list. It's a synthesized response. The user may never click through — but the cited source gets the brand mention, the authority signal, and the implicit recommendation.
These are different problems. They share infrastructure — both run on crawlable, structured, well-authored web content — but the signals AI systems weight when selecting citation sources are not the same signals search engines weight when ranking results.
Want the deeper breakdown? AEO vs SEO

| What SEO optimizes for | What AEO optimizes for |
|---|---|
| Keyword rankings | Citation selection |
| Click-through rate | Answer extraction |
| Backlink count | Entity recognition |
| Page authority | Schema clarity |
| Search intent matching | Answer structure |
| Crawl budget | AI bot access |
| Featured snippets | Generated answer attribution |
In 2023, "search" meant Google. By 2026, the question "where do people get information?" has multiple right answers. ChatGPT alone handles hundreds of millions of weekly conversations. Google AI Overviews appear above the traditional results for an increasing share of queries. Perplexity has built a search-replacement product. Microsoft Copilot is embedded in Office, Bing, and Edge. Voice assistants — Alexa, Google Assistant, Siri — return answers, not links.
The behavioral pattern has shifted. Users ask questions in natural language. AI systems generate answers. The answers cite sources. Visibility now means being the cited source.
This is the surface AEO addresses. It is not a replacement for SEO — most of your traffic will still come from search results for a long time — but it is a new optimization surface that operates on partially overlapping but distinct signals.
The search pattern: three keywords typed into a search bar, ten ranked links, the user clicks.
The answer-engine pattern: a natural-language question, a synthesized paragraph, two to four cited sources.
The metrics that matter: citation rate · AI mention rate · brand surface in generated answers.
The AI Visibility Stack organizes all 93 AEO factors into three dependency layers. You can't fix Extractability if Understanding is broken. You can't fix Understanding if Access is broken.
Extractability. Failure mode: AI reads you but quotes someone else.
Front-loaded answers · concise blocks · question-based headings · heading hierarchy · definition density · FAQ structure · stats with sources · bullet lists · HTML tables · citation formatting · speakable schema
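Several Extractability factors can be seen in a single fragment of markup. This is an illustrative sketch, not a required template; the heading text, answer copy, and source URL are placeholders.

```html
<!-- Question-based heading with a front-loaded, concise answer block -->
<h2>What is Answer Engine Optimization?</h2>
<p>Answer Engine Optimization (AEO) is the practice of structuring content
   so AI answer engines can extract and cite it. It optimizes for citation
   selection, not ranked links.</p>

<!-- A stat paired with an inline source is easy for an engine to lift and attribute -->
<p>ChatGPT handles hundreds of millions of conversations per week
   (<a href="https://example.com/source">source</a>).</p>
```

The pattern is the same across factors: the answer appears in the first sentence under the question, in a block an engine can quote without editing.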
Understanding. Failure mode: AI reaches you but can't reliably interpret you.
JSON-LD structured data · Organization/Person/Article schema · FAQPage schema · sameAs linking · schema graph completeness · entity density · named author presence · author bio & credentials · freshness
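A compact JSON-LD sketch touching several of these factors at once — Article schema, a named author, and sameAs links. All names, dates, and URLs are placeholders, not a canonical AIVZ example.

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What Is Answer Engine Optimization?",
  "datePublished": "2026-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "sameAs": "https://www.linkedin.com/in/example"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "sameAs": ["https://en.wikipedia.org/wiki/Example"]
  }
}
```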
Access. Failure mode: AI can't reach you at all.
robots.txt AI bot permissions · WAF/CDN bot blocking · SSR/pre-rendered HTML · TTFB · XML sitemap · canonical tags · RSS/Atom feeds · llms.txt
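The robots.txt side of this layer is plain text. The user-agent tokens below are the documented crawler names for OpenAI, Perplexity, and Google's AI training opt-out; which bots you allow is a policy decision, not a recommendation here.

```text
# robots.txt — explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

llms.txt, served at the site root, is a separate and still-evolving convention: a short markdown summary of the site for LLM consumers, consistent with its Emerging confidence treatment in this methodology.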
Most AEO tools surface flat checklists. The Stack tells you what to fix first, and why fixing later-layer factors before their prerequisites is wasted work.
No competitor has published or operationalized anything comparable. Most surface 30–60 factors, often without dependency models or confidence calibration.
Can AI bots reach your content? robots.txt, WAF blocking, JS rendering, page speed, sitemaps, llms.txt.
Can machines understand your content? JSON-LD, Schema.org types, sameAs linking, schema completeness.
Can AI isolate clean answers? Front-loaded answers, FAQ structure, headings, lists, tables, citations.
Does AI recognize your entities? Density, KG presence, Wikidata, disambiguation, cross-page consistency.
Does AI trust your source? Author credentials, freshness, original research, YMYL handling, factual accuracy.
Do external sources validate you? Authority signals across organizational and personal levels — the AuthorityGraph surface.
Does your content match how people ask? Conversational alignment, topical depth, intent matching.
How ready are you per platform? ChatGPT, Google AIO, Perplexity, Gemini, Copilot, voice.
Can you track and improve over time? AI crawler analytics, citation simulation, score history.
These are the eleven factors most directly correlated with citation outcomes in observed AI generation — the ones where presence-or-absence makes the largest difference.
AEO is a young discipline. Every factor in the taxonomy carries a confidence label. We don't bury uncertainty in marketing copy.
| Label | Meaning |
|---|---|
| Established | Well-supported by web standards, platform documentation, or broadly accepted technical practice. JSON-LD, robots.txt, schema.org types fall here. |
| Strongly Inferred | Not always formally documented, but strongly supported by research or repeated industry observation. Front-loaded answers, concise answer blocks, citation patterns. |
| Indirect / Correlated | Likely influences AI visibility indirectly through search prominence, authority, or trust. Off-site authority signals, social presence, brand mention frequency. |
| Emerging / Experimental | New or evolving factors not yet stable or universally adopted. Speakable schema treatment, IndexNow, platform-specific freshness weighting, NavBoost-class signals. |
When the methodology evolves — and it will — the Emerging factors are where the change lands first. We update factor confidence labels in the public changelog as evidence accumulates.
If you know SEO, you have most of what you need. The infrastructure overlaps substantially — but the signals that get weighted and the outcomes that count are different enough that pure SEO playbooks underperform AEO over time.
| Carries over directly | Partially carries over | Doesn't carry over |
|---|---|---|
| Crawlability fundamentals | Keyword research → entity research | Keyword density |
| Site speed and Core Web Vitals | Backlinks → off-site authority | SERP-rank tracking |
| Mobile-friendliness | Featured snippets → answer blocks | Click-through rate optimization |
| HTTPS and security | Page authority → entity authority | Title tag optimization for CTR |
| Internal linking | Schema markup (expanded scope) | Meta description for SERPs |
| XML sitemap basics | E-A-T → E-E-A-T → AI trust | URL slug keyword stuffing |
| Indexability | Topic clusters → semantic matching | Bounce rate signals |
The 93-factor taxonomy, the AI Visibility Stack, the Citation Core 11, the confidence labels — these are documented in the open. The methodology is the canon, and the canon is public.
Every factor we measure is documented. Every confidence label is calibrated against evidence. Every scoring layer has a published rationale.
Methodology changes are versioned and announced. When confidence labels move, when factors are added or deprecated, the changelog records it.
Public changelog

Every score in AIVZ is paired with the factors that produced it. See why a page scored what it scored, click through to the underlying factor explanation, verify the methodology against the result.
When we can score a factor with regex, DOM parsing, schema validation, or readability rules, we do. LLM-judged scoring is reserved for semantic questions, weighted lower in composite outcomes.
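A deterministic factor check of this kind can be a few lines of standard-library code. The sketch below tests whether robots.txt blocks GPTBot from the site root; it is deliberately simplified (no wildcard paths, no crawl-delay, last duplicate group wins) and is not AIVZ's actual implementation.

```python
import re

def gptbot_allowed(robots_txt: str) -> bool:
    """Illustrative deterministic check: does robots.txt block GPTBot
    from the site root? Simplified on purpose; a production parser must
    handle wildcard paths and grouped user-agent records."""
    blocks = {}
    for block in re.split(r"(?im)^user-agent:\s*", robots_txt)[1:]:
        lines = block.splitlines()
        if not lines:
            continue
        agent = lines[0].strip().lower()
        blocks[agent] = lines[1:]
    # The most specific matching group wins; fall back to the "*" group.
    rules = blocks.get("gptbot", blocks.get("*", []))
    blocked = any(re.match(r"(?i)^disallow:\s*/\s*$", ln.strip())
                  for ln in rules)
    return not blocked
```

Checks like this are cheap, reproducible, and produce an explanation a user can verify by opening the file themselves.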
Every factor is inexpensive to compute, fast at scale, and produces a user-facing explanation. We don't measure things we can't explain.
Recommendations are ordered Layer 1 → Layer 2 → Layer 3. We don't tell you to add FAQ schema if your robots.txt is blocking GPTBot.
Single-page signals before site-wide crawl signals. Most citation outcomes are page-level; site-level signals are aggregations of page-level work.
We only plan around publicly available, stable APIs. If a platform doesn't expose data we'd need to verify a factor, we don't claim to measure it.
On-page signals live in the scanner. Off-site authority lives in AuthorityGraph. Different methodologies, different infrastructure, different surfaces.
Every factor carries one of four confidence labels. We don't ship factors without confidence calibration.
Every factor we ship has been tested against real citation outcomes from real AI platforms. We don't ship measurement methodology that hasn't been correlated against outcome data.
Category-defining page. Plain-English definition with concrete example. Disambiguates AEO from GEO, LLMO, AI SEO. Start here if AEO is new.
Three-layer dependency model in depth. Sub-scores per layer, failure modes, per-platform variation, and a worked example walking a real composite score from 47 to 78.
Per-category factor breakdown. Confidence labels per category. Implementation phasing across five build phases. Tier-coverage map.
Per-factor depth on the highest-impact eleven. What it is, why it matters, what compliance looks like in HTML, common mistakes, and how to fix.
Run a free scan. Get your AI Visibility Score across 6 platforms. See the top 3 blockers and the prioritized fix path — in under 60 seconds.