AIVZ is a Command Center, not a measurement tool. Every scan produces a prioritized fix path — and where the adapter is Live, AIVZ runs the fix itself. Where deeper content work is needed, AIVZ delegates execution via MCP to Claude, GPT, or Gemini.
A scan output without a fix path is a problem report. AIVZ surfaces fixes alongside the score — every factor that scored below threshold appears in the recommendation queue with the context needed to execute the work or delegate it.
Each recommendation in the queue carries:

- Factor: which of the 93 factors triggered the recommendation.
- Confidence label: Established / Strongly Inferred / Indirect-Correlated / Emerging-Experimental.
- Stack layer: Layer 1 / 2 / 3; drives the prioritization order.
- Factor score: the 0–100 score for that specific factor at scan time.
- Score-impact estimate: how much the composite score will move when this factor is fixed.
- Recommendation text: what to do, in plain language. No vague directional advice.
- Direct-execute: where the surface adapter is Live (one-click fix).
- Generate-instructions: for fixes requiring manual implementation or content work.
- Delegate via MCP: for content work that should go to Claude / GPT / Gemini via MCP.
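As a rough sketch, a single queue entry can be pictured as a small structured record. The field and type names below are illustrative assumptions, not AIVZ's published API:

```typescript
// Hypothetical shape of one recommendation record; names are illustrative, not AIVZ's actual API.
type ConfidenceLabel =
  | "Established"
  | "Strongly Inferred"
  | "Indirect-Correlated"
  | "Emerging-Experimental";

type ImplementationPath = "direct-execute" | "generate-instructions" | "delegate-via-mcp";

interface Recommendation {
  factorId: string;            // which of the 93 factors triggered the recommendation
  confidence: ConfidenceLabel; // evidence strength behind the factor
  stackLayer: 1 | 2 | 3;       // Access / Understanding / Extractability
  factorScore: number;         // 0–100 score for the factor at scan time
  scoreImpact: number;         // estimated composite-score lift when fixed
  recommendation: string;      // plain-language instruction, never directional advice
  path: ImplementationPath;    // how the fix gets executed
}
```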
What you won't see. AIVZ does not surface vague directional advice ("improve content quality"; "add more structure"). Every recommendation traces to a specific factor with a specific scoring dimension and a specific implementation path. If a recommendation can't be made concrete, it doesn't surface. This is the discipline that separates the recommendation surface from generic AEO consulting output.
The AI Visibility Stack establishes that Layer 1 (Access) precedes Layer 2 (Understanding) precedes Layer 3 (Extractability). The recommendation queue enforces this order — Layer 1 fixes appear above Layer 2 fixes above Layer 3 fixes, regardless of score-impact magnitude.
- Layer 1 (Access): fix crawl access, bot policies, server response, render mode. Without L1, nothing downstream gets read.
- Layer 2 (Understanding): schema, structure, entity, schema accuracy, author E-E-A-T. Once read, content needs to parse cleanly.
- Layer 3 (Extractability): FAQ, summary, content richness, content quality, speakable. Once understood, answers need extraction.
A Layer 3 fix on a site with broken Layer 1 is wasted work. If AI bots can't crawl the page, beautifully formatted answer blocks don't get cited because they're not getting read. The score-impact of the L3 fix is real — but only after L1 is fixed.
Reordering by score-impact alone would generate fix queues that look maximally productive but produce minimal actual citation lift.
When score-impact and Stack layer disagree (a high-impact L3 fix and a low-impact L1 fix), the L1 fix takes priority by Stack discipline. AIVZ surfaces this explicitly: the L3 fix is visible but marked "deferred until Layer 1 fixes complete." Practitioners can see the full picture; the order enforces the methodology.
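A minimal sketch of that ordering rule, assuming each queue item carries the layer and score-impact fields from the earlier sketch (the function names are mine, not AIVZ's): layer wins over impact, impact breaks ties within a layer, and higher-layer fixes stay visible but deferred while lower-layer fixes remain open.

```typescript
// Minimal sketch of Stack-ordered prioritization (assumed, not AIVZ's actual code).
type QueueItem = { stackLayer: 1 | 2 | 3; scoreImpact: number };

function orderQueue<T extends QueueItem>(queue: T[]): T[] {
  return [...queue].sort((a, b) =>
    a.stackLayer !== b.stackLayer
      ? a.stackLayer - b.stackLayer    // Layer 1 before Layer 2 before Layer 3
      : b.scoreImpact - a.scoreImpact  // then highest estimated lift first within a layer
  );
}

// A higher-layer fix stays visible but deferred while any lower-layer fix is still open.
function isDeferred(item: QueueItem, queue: QueueItem[]): boolean {
  return queue.some((other) => other.stackLayer < item.stackLayer);
}
```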
The Stack methodology behind this prioritization

The execution capability surfaces through platform-specific adapters. Adapter coverage varies by platform; AIVZ discloses status honestly. Live adapters carry full execution claims; Beta and Roadmap adapters disclose their status visibly.
| Platform | Status | What AIVZ executes natively |
|---|---|---|
| WordPress | Live | Schema markup (Organization, Article, FAQPage, Speakable, Person, HowTo, Product); meta descriptions; FAQ blocks; summary blocks; llms.txt manifests; robots.txt updates; sitemap optimization; structured data validation; AEO score widgets; content rewrites for L3 factors; per-page recommendations panel in editor. |
| Shopify | Beta | Product schema; collection schema; FAQ blocks for product detail pages; meta description optimization; structured data validation. (Subset of WordPress capability; deeper coverage on roadmap.) |
| Wix | Beta | Schema markup; meta descriptions; structured content blocks via the Wix Studio integration. (Subset.) |
| Webflow | Beta | Schema markup via embedded JSON-LD; FAQ blocks via component substitution; meta description optimization. (Subset.) |
| BigCommerce | Beta | Product/category schema; meta description optimization; FAQ blocks for product pages. (Subset.) |
| Headless / Custom | Beta | Via MCP server + CLI integration; bring-your-own-stack execution path. Output is structured-data updates, content recommendations, and orchestration calls — applied by the consuming application. |
| Squarespace | Roadmap | Native adapter planned; not yet shipped. Recommendations available; native execution not yet operational. |
| Marketplaces | Documented | Marketplace-listing optimization framework documented; native adapter build pending. Recommendations surface; execution requires manual implementation. |
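To make "schema markup" and "FAQ blocks" concrete, here is a minimal sketch of the kind of JSON-LD a Live adapter might inject into a page. The question and answer text are placeholder values, not AIVZ output, and the variable names are assumptions:

```typescript
// Minimal sketch of a schema.org FAQPage block an adapter might inject as JSON-LD.
// Question/answer text is placeholder content, not generated by AIVZ.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "What does the Live adapter execute?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Schema markup, FAQ blocks, meta descriptions, and summary blocks, applied natively to the page.",
      },
    },
  ],
};

// Serialized into a <script type="application/ld+json"> tag in the page markup.
const scriptTag = `<script type="application/ld+json">${JSON.stringify(faqJsonLd)}</script>`;
```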
The "We Do It" claim grammar is inviolable canon. AIVZ does not claim execution capability on a platform where the adapter is Beta — it discloses Beta status explicitly and surfaces the actual capability scope. This protects the credibility of every other claim AIVZ makes. When AIVZ says "Live" on WordPress, that means full execution; when AIVZ says "Beta" on Shopify, the disclosure is part of the claim.
Full platform integration directory

Native execution covers structural fixes — schema, FAQ blocks, meta descriptions, summary blocks. Some AEO work requires deeper content production: long-form rewrites, brand-voice adjustments, complex content reorganization. AIVZ delegates this work via MCP server integration to your LLM of choice — Claude, GPT, or Gemini.
1. AIVZ identifies a content-rewrite or brand-voice fix triggered by a specific factor failure.
2. AIVZ assembles a context package: the AEO factor, the original content, the target output structure, brand-voice constraints, and style-guide constraints.
3. The package goes to your configured LLM (Claude / GPT / Gemini) via the MCP protocol.
4. The LLM produces the rewrite or content based on the constraints you supplied.
5. AIVZ validates the LLM output against the original AEO factor that triggered the work.
6. If the output passes validation, it's surfaced for user approval. If not, it loops back.
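A minimal sketch of that loop, with hypothetical function names standing in for the MCP call and the factor validator (none of these are AIVZ's actual interfaces, and the bounded retry count is an assumption; the workflow above only says the output loops back):

```typescript
// Hypothetical delegation loop: package -> LLM via MCP -> validate -> approve or retry.
interface ContextPackage {
  factorId: string;
  originalContent: string;
  targetStructure: string;
  brandVoiceConstraints: string[];
  styleGuideConstraints: string[];
}

async function delegateContentFix(
  pkg: ContextPackage,
  callLLM: (pkg: ContextPackage) => Promise<string>,       // MCP call to Claude / GPT / Gemini
  validate: (factorId: string, draft: string) => boolean,  // re-check the triggering AEO factor
  maxAttempts = 3                                           // assumed bound; not specified by AIVZ
): Promise<string | null> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const draft = await callLLM(pkg);
    if (validate(pkg.factorId, draft)) {
      return draft; // passes validation: surface for user approval
    }
    // otherwise loop back to the LLM with the same constraints
  }
  return null; // no passing draft within the attempt budget
}
```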
Capable doesn't mean unbounded. AIVZ explicitly refuses three categories of content work — even when the underlying scan suggests they'd technically improve scoring on some factor. These aren't capability gaps; they're discipline boundaries.
Social content. AEO is structural-content optimization for AI citation. Social posts (LinkedIn, Twitter/X, Instagram, Facebook) are an adjacent content category with different optimization mechanics, different platform behaviors, and different brand-voice requirements. AIVZ doesn't cross into social-content work.
If you need social content with AEO awareness, dedicated social-content tools (Buffer, Hootsuite, Sprout Social, Later) are the right surface. AIVZ stays in its lane.
Fabricated statistics. A scan might note Statistics with Sources is failing — but the fix is to add real, attributed statistics, not to generate plausible-sounding numbers. AIVZ refuses to generate statistics, percentages, or quantitative claims even when delegated execution would technically produce them.
AI systems are increasingly able to detect fabricated stats. Even when fabricated stats temporarily improve a factor score, they degrade long-term citation likelihood (and credibility) once detected.
Sales and ad copy. Marketing copy where the goal is sales conversion (paid ad headlines, landing-page sales prose, email-sequence pitch copy) is a distinct discipline. AEO factor optimization affects citation behavior; ad copy optimization affects conversion behavior. The two don't cleanly overlap.
If you need ad copy with AEO awareness in the underlying landing page, the AEO work happens on the landing page. The ad copy stays with copywriters.
The end-to-end workflow from scan trigger to fix verification runs through six stages. Most users execute the workflow per-page; agencies and enterprises run it across batches of pages or entire domains.
1. Scan: user initiates a scan via dashboard, scheduled scan, API call, MCP request, or CLI invocation. The scan runs against a single URL or batched URLs.
2. Score: scanner produces composite AI Visibility Score, three-layer breakdown, all 93 factor scores, six per-platform readiness scores, and (at Agency tier+) Authority Rank. Confidence labels attached.
3. Recommend: failed factors trigger recommendations. Each includes factor identity, confidence label, Stack layer, score-impact estimate, recommendation text, and implementation path. Queue sorted in Stack dependency order.
4. Execute: Live adapter present → direct-execute. Beta adapter or manual fix → generate-instructions. Content work → delegate-to-LLM via MCP.
5. Verify: AIVZ re-scans the affected URL(s) to verify the score-impact estimate matches the actual outcome. Confirmed fixes move to the completed queue.
6. Monitor: Citation Event Monitoring continues running across the AI platforms. As citation events are detected on the fixed URLs, they surface in the dashboard.
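As an illustrative sketch of the routing decision in stage 4, under assumed type and function names (not AIVZ's actual interfaces):

```typescript
// Hypothetical routing rule: adapter status and fix type decide the execution path.
type AdapterStatus = "Live" | "Beta" | "Roadmap" | "Documented";
type ImplementationPath = "direct-execute" | "generate-instructions" | "delegate-via-mcp";

function routeFix(adapterStatus: AdapterStatus, isContentWork: boolean): ImplementationPath {
  if (isContentWork) return "delegate-via-mcp";           // long-form rewrites, brand-voice work
  if (adapterStatus === "Live") return "direct-execute";  // one-click native execution
  return "generate-instructions";                         // Beta adapters and manual fixes
}
```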
| Capability | Free | Pro | Agency | Enterprise |
|---|---|---|---|---|
| Recommendations queue (per scan) | High-impact subset | Full | Full | Full |
| Stack-ordered prioritization | ● | ● | ● | ● |
| Score-impact estimates | — | ● | ● | ● |
| Native execution: WordPress | ● | ● | ● | ● |
| Native execution: Beta surfaces | — | ● | ● | ● |
| Delegated execution via MCP | — | ● | ● | ● |
| Brand-voice prompt configuration | — | — | ● | ● |
| Auto-execute scheduled fixes | — | — | ● | ● |
| Verification re-scans | — | ● | ● | ● |
| Bulk fix execution (cross-page) | — | — | ● | ● |
| Custom adapter development | — | — | — | ● |
Top fixes are prioritized in Stack order, with direct-execute buttons where the WordPress adapter is Live and generate-instructions buttons for everything else.