Portland, OR · National AEO practice

The answer engine optimization agency
built to be cited.

Ad-Apt is the Portland AEO and GEO agency for brands that want to show up inside ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews — not just on a blue link. Schema engineering, llms.txt, AI crawler allowlisting, and citation-first content. Founded 2011 in Lake Oswego, Oregon.

Founded
2011
Brands shipped
50+
AI crawlers allowlisted
14+
AEO Toolkit
Live on wp.org
The definition

What answer engine optimization actually is.

Answer Engine Optimization is the practice of structuring a website so a large language model retrieves it, trusts it, and cites it when answering a user's question. The "answer engine" is any LLM-powered search surface — ChatGPT Search, Perplexity, Claude with web access, Google's AI Overviews, Gemini, Microsoft Copilot, and the agentic browsers replacing tabs.

The classic SEO loop is query → ranked list of blue links → click. The AEO loop is query → synthesized answer with a handful of cited sources → sometimes a click, sometimes not. The answer is the SERP. Citation is the new ranking.

SEO and AEO overlap but are not the same job. SEO optimizes for a deterministic ranker. AEO optimizes for a probabilistic retriever that re-ranks chunks, summarizes them, and decides per query which sources to surface. The signals overlap (quality, schema, authority). The mechanics, formatting, and measurement do not.

AEO vs GEO

Generative Engine Optimization (GEO) is the broader sibling. AEO targets retrieve-and-cite engines that link out. GEO targets what the model says about a brand at all, including answers where no link surfaces and answers grounded in training data. AEO is measurable in citations; GEO in brand-mention share of voice across an LLM panel. We run them together. Longer breakdown: SEO, AEO, and GEO — what the acronyms actually mean.

Under the hood

How LLMs find, rank, and cite your content.

Every modern AI assistant that links out runs the same loop: a user query is rewritten into one or more search queries, the assistant hits an index (its own, a partner's, or a real-time fetcher), retrieves the top chunks, re-ranks them, and writes an answer grounded in what it kept. That stack is retrieval-augmented generation, or RAG. Citation is what the model emits when a sentence in its answer is directly traceable to a chunk it retrieved.
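
That loop can be sketched in a few lines of Python. This is a toy illustration, not any engine's actual pipeline: a bag-of-words "embedding" stands in for the learned vector models real retrievers use, and a two-page dict stands in for a web-scale index.

```python
# Toy sketch of the retrieve-and-cite loop: embed the query, rank indexed
# chunks by similarity, keep the top-k, ground the answer in what survived.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding': a stand-in for a real vector model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: dict[str, str], k: int = 2) -> list[str]:
    """Rank indexed chunks against the (rewritten) query, keep top-k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda cid: cosine(q, embed(chunks[cid])), reverse=True)
    return ranked[:k]

# Two hypothetical indexed pages standing in for the citation pool.
index = {
    "ad-apt.com/aeo": "answer engine optimization structures a site so llms cite it",
    "example.com/seo": "classic seo targets ranked blue links on the serp",
}
cited = retrieve("what is answer engine optimization", index, k=1)
print(cited)  # the chunk the answer is grounded in, and the citation emitted
```

The practical takeaway: everything downstream of retrieval depends on your content being in the index and scoring well chunk-by-chunk against the rewritten query.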

The retrievers differ in important ways:

  • ChatGPT Search — OpenAI's own crawl (GPTBot, OAI-SearchBot, ChatGPT-User) plus a Bing fallback. Citations render as inline links with a side panel of every source consulted.
  • Perplexity — its own crawl (PerplexityBot, Perplexity-User), freshness-weighted. Numbered footnotes plus source cards across the top of each answer.
  • Claude — Anthropic's ClaudeBot plus Brave Search as a partner index. Web-enabled answers cite inline with named links.
  • Google AI Overviews — sits on the Google index used for organic search, governed separately by the Google-Extended token. Citation cards stack alongside the overview.
  • Gemini — Google index, sources surfaced beneath answers.

A brand is cited when its content is indexed, retrievable, and structurally easy to lift. Most AEO failures happen at one of those three layers.

Layer 1 — Foundation

Structured data is the highest-confidence signal a model has.

An LLM parses two streams in parallel: unstructured prose and machine-readable schema. Schema is the cheat sheet — it tells the model what entity the page describes, how it relates to other entities, and which facts are canonical. Sites with comprehensive schema get cited more often because their content is easier to map into an answer. The schema types we ship on every Ad-Apt AEO build: Organization, Service, FAQPage, HowTo, and Article, wired into a single @graph with stable @id references.
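
As a concrete reference, here is a minimal sketch of that pattern: one @graph carrying an Organization and a Service, cross-linked by a stable @id. The URLs, @id values, and sameAs entries are illustrative placeholders, not the production markup.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://ad-apt.com/#org",
      "name": "Ad-Apt",
      "foundingDate": "2011",
      "sameAs": [
        "https://www.wikidata.org/entity/EXAMPLE",
        "https://www.crunchbase.com/organization/EXAMPLE"
      ]
    },
    {
      "@type": "Service",
      "@id": "https://ad-apt.com/#aeo-service",
      "serviceType": "Answer Engine Optimization",
      "provider": { "@id": "https://ad-apt.com/#org" }
    }
  ]
}
```

The @id cross-reference is the point: every node resolves to one canonical entity, so the model never has to guess whether two mentions are the same thing.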

Layer 2 — Access

llms.txt and robots.txt — the allowlist nobody publishes.

llms.txt at the root

llms.txt is an emerging convention — a plain-markdown file at /llms.txt that describes the site to language models the way sitemap.xml describes it to crawlers. It lists canonical pages, products, pricing, and policies. Ad-Apt ships ad-apt.com/llms.txt as a live working example, and deploys a tailored llms.txt on every client engagement.
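
A minimal sketch in the emerging format (H1 title, blockquote summary, H2 sections of annotated links). The paths and descriptions here are illustrative, not the contents of the live file.

```markdown
# Ad-Apt

> Portland AEO and GEO agency, founded 2011. Schema engineering, llms.txt,
> AI crawler allowlisting, and citation-first content.

## Services

- [Answer Engine Optimization](https://ad-apt.com/aeo): citation-first AEO builds

## Tools

- [AEO Toolkit](https://ad-apt.com/aeo-toolkit): free WordPress plugin for the foundation layer
```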

The AI crawler allowlist

Default robots.txt files written for traditional SEO say nothing about AI agents, and some hosting platforms block them outright. The result is invisible: the site never enters the citation pool, and ChatGPT answers questions about the brand using a competitor's content instead. The fix is an explicit allowlist of the major AI bots:

  • OpenAI: GPTBot, ChatGPT-User, OAI-SearchBot
  • Anthropic: ClaudeBot, anthropic-ai, Claude-Web
  • Perplexity: PerplexityBot, Perplexity-User
  • Google: Google-Extended, Googlebot
  • Apple: Applebot-Extended, Applebot
  • Amazon & Meta: Amazonbot, Meta-ExternalAgent
  • Mistral & Cohere: MistralAI-User, cohere-ai
  • Long tail: Bytespider, CCBot, Diffbot, DuckAssistBot

Allowlisting these tokens is a five-minute change. Failing to allowlist them is the single most common reason a brand is invisible in AI search.
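
In robots.txt terms, the allowlist is a run of User-agent blocks. A trimmed sketch covering a few of the tokens above (extend the same pattern for the rest):

```
# AI crawler allowlist: extend with the remaining tokens above
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```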

Layer 3 — Format

Citable content is built in liftable chunks.

An LLM does not read a page front to back. It chunks the document, embeds each chunk into vector space, retrieves the chunks closest to the query, and writes its answer from those. A 2,000-word essay that buries the answer in paragraph nine loses to a 200-word FAQ block that puts the answer in the first line. The rules that drive citation rate: answer-first openings, self-contained Q&A blocks, comparison tables, named sources, and date-stamped claims.
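
The chunk-level mechanics can be sketched with toy scoring (word overlap standing in for vector similarity). The page text and query are made up; the point is that retrieval compares chunks, not whole pages, so the chunk that states the answer outright is what gets lifted.

```python
# Why answer-first formatting wins: scoring happens per chunk, so the
# direct-answer chunk beats the narrative chunk regardless of page order.
def split_chunks(text: str) -> list[str]:
    """Naive sentence-level chunking; real systems chunk by headings and blocks."""
    return [s.strip() for s in text.split(". ") if s.strip()]

def overlap(query: str, passage: str) -> int:
    """Count shared words between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

page = (
    "AEO means structuring pages so answer engines retrieve and cite them. "
    "Our long agency history begins well before that term existed."
)
query = "what does aeo mean"
best = max(split_chunks(page), key=lambda c: overlap(query, c))
print(best)  # the answer-first chunk wins retrieval
```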

Layer 4 — Trust

Authority is what makes a model pick you.

When two pages answer a query equally well, the model picks the source the rest of the web agrees on. Authority is built across three surfaces:

  • Cross-source agreement. Wikipedia, Crunchbase, G2, Clutch, named-author mentions on third-party publications. The cheapest filter a model has for "real."
  • Entity identity. Clean Organization schema with sameAs links to every authoritative profile. Wikidata punches above its weight.
  • Named experts. Author bylines with named Person entities and credentialed bios. Anonymous content loses to attributed content.

Authority is the slowest AEO layer but it compounds. Brands with a six-month head start on Wikipedia and third-party profiles routinely outrank larger competitors in citation share.

The method

The seven-step Ad-Apt AEO implementation.

This is the playbook we run on every engagement. The HowTo schema embedded in this page mirrors the steps exactly, so an AI assistant reading it sees the same playbook a human does.

Audit current AI visibility.

Query ChatGPT, Perplexity, Claude, and Google AI Overviews for the brand, the category, and the top ten buyer questions. Record cited sources, brand mentions, and zero-mention queries. The free AEO audit tool automates this for a single domain.

Lay the structured-data foundation.

Organization, Service, FAQPage, HowTo, Article schema in a single @graph with stable @id references. Wire sameAs into every authoritative profile.

Publish llms.txt.

Plain markdown at the root, indexing the canonical surfaces of the site. Our own llms.txt is the working template.

Allowlist AI crawlers in robots.txt.

The full list above. Audit every six months — new tokens land regularly.

Restructure content for citation.

Answer-first openings, Q&A blocks, comparison tables, named sources, date-stamped claims.

Build third-party authority.

Wikipedia (where eligible), Wikidata, Crunchbase, G2, Clutch, named-author placements. Long-lever, compounds for years.

Measure and iterate monthly.

Citation tracking against a fixed query panel, AI Overview impressions in Search Console, referrer traffic from chatgpt.com, perplexity.ai, claude.ai, and gemini.google.com.

Measurement

How to actually measure AEO performance.

AEO measurement is harder than SEO because the SERP is generative — two users asking the same question get different answers and different cited sources. The right approach is a fixed query panel re-run on a schedule.

  • Citation rate. Percentage of a fixed 50-200 query panel where the brand surfaces as a cited source in each assistant. Tracked monthly.
  • Brand-mention share of voice. When the model answers without citing, does it name the brand — head-to-head against competitors?
  • AI Overview impressions in Search Console. Google reports these within the standard Performance report rather than as a separate filter. Compare against total impressions over time.
  • Referrer traffic. GA4 sessions from chatgpt.com, perplexity.ai, claude.ai, gemini.google.com, copilot.microsoft.com. Low volume, high intent.
  • Branded search lift. The downstream signal — brand queries rising on Google as the brand becomes a default AI answer.
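
The citation-rate metric reduces to a simple calculation once a panel run is logged. A sketch, with hypothetical logged results (one assistant shown; in practice the panel is re-run per assistant, per month):

```python
# Fixed-panel citation rate: share of panel queries where the domain
# appeared among the assistant's cited sources.
def citation_rate(results: dict[str, list[str]], domain: str) -> float:
    """results maps each panel query to the domains cited in the answer."""
    hits = sum(1 for sources in results.values() if domain in sources)
    return hits / len(results)

# Hypothetical monthly panel run for one assistant.
panel_run = {
    "what is answer engine optimization": ["ad-apt.com", "wikipedia.org"],
    "best aeo agency portland": ["ad-apt.com"],
    "aeo vs seo": ["example.com"],
    "how to write llms.txt": ["example.com", "wikipedia.org"],
}
print(citation_rate(panel_run, "ad-apt.com"))  # 0.5: cited on 2 of 4 queries
```

Track the number month over month against the same panel; the trend matters more than any single run, because generative answers vary per query.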

We package the panel, dashboard, and monthly review into every Ad-Apt AEO retainer. For brands running their own, the free AEO Toolkit WordPress plugin (live on wp.org) ships the schema and llms.txt layers; the Pro tier adds citation tracking against a custom query panel.

Proof

What our own AEO work has shipped.

Four representative engagements — including the AEO work we run on our own site.

ad-apt.com — dogfood case

Full AEO rebuild on our own site: @graph schema everywhere, llms.txt at the root, AI crawler allowlist, citable content structure. The reference implementation we ship to clients.

Read our work →

AEO Toolkit plugin — wp.org

Our free WordPress plugin distributes the AEO foundation layer to any WP site — schema, llms.txt, AI crawler allowlist. Pro tier adds citation tracking. Live in the WP directory.

AEO Toolkit →

Carcin Connector — AEO infra on WP

Write-capable plugin for enterprise WordPress that lets our agentic systems deploy schema, llms.txt, and citation-ready content programmatically.

Read the work →

Circle K — knowledge-base build

Structured-data overhaul and citation-engineered FAQ build across a multi-property enterprise footprint — built to surface in AI Overviews on category and location queries.

Read case study →
Why Ad-Apt for AEO

The case in five lines.

  • 01
    We dogfood every layer. Our own site, our own llms.txt, our own AEO Toolkit live on wp.org. The reference implementation is public.
  • 02
    Marketing plus engineering. AEO is a content problem and an infrastructure problem. We ship both, in-house, without a vendor handoff.
  • 03
    15 years, 50+ brands. Founded in Portland in 2011. Enterprise references on request — Circle K, Netflix, True Blue Car Wash, TastyTrade.
  • 04
    Free tools, not gated leads. The AEO audit and AEO Toolkit free tier exist because most foundation work shouldn't be retainer-priced.
  • 05
    Outcome-obsessed. We report on citation rate, AI Overview impressions, and pipeline — not vanity rank trackers.
FAQ

Questions buyers actually ask.

What is Answer Engine Optimization (AEO)?

Answer Engine Optimization is the practice of structuring a website so large-language-model search assistants — ChatGPT, Perplexity, Claude, Gemini, and Google AI Overviews — retrieve and cite its content when answering user questions. It uses structured data, llms.txt, citable Q&A formatting, and third-party authority signals to make a brand the source the model picks.

AEO vs SEO — do I need both?

Yes. SEO targets blue-link rankings. AEO targets citations inside AI-generated answers. The underlying surface area — schema, quality, authority — overlaps roughly 70 percent, but the optimization targets and measurement systems are different. Most enterprise programs now run them as one practice. See our deeper write-up: SEO, AEO, GEO — what the acronyms mean.

AEO vs GEO?

AEO focuses on retrieve-and-cite assistants — the ones that link out (ChatGPT Search, Perplexity, AI Overviews). GEO is the broader goal of influencing what the model says about your brand, including answers with no citation and answers grounded in memorized training data. AEO is measurable in citations; GEO is measurable in brand-mention share of voice across an LLM panel. In practice we run them together.

Will AI search replace Google?

No, not on the timeline most predictions imply. Google still processes the majority of search queries and now ships AI Overviews on a growing share of them. The real shift is that the SERP itself is becoming an answer engine. Brands optimize for citation inside Google's AI Overview the same way they optimize for ChatGPT — both run on retrieval-augmented generation against indexed content.

How do I know if my content is being cited by ChatGPT?

Three methods. Manually query ChatGPT, Perplexity, Claude, and Gemini for your brand and category questions and log cited sources. Use third-party AI-visibility tools that automate the panel. Monitor referrer traffic from chatgpt.com, perplexity.ai, claude.ai, gemini.google.com in GA4. Ad-Apt's free AEO audit runs the first method against your domain in under a minute.

Do I need to block AI bots to protect my content?

No. Blocking AI crawlers removes the site from the citation pool entirely. The model still answers questions about the brand — using a competitor's content. Allowlist AI agents, then structure content for citation rather than for copy-paste.

How is AEO pricing structured at Ad-Apt?

AEO is scoped to the site, the content surface, and how it's bundled with SEO. The free AEO audit and the free AEO Toolkit cover most foundation work for a single site. Talk to a strategist for a tailored proposal.

The close

Tell us where you should be
cited and aren't.

Ten-minute intro call. We'll run a live AEO query against your domain on the call and tell you honestly whether Ad-Apt is the right team for the job.

Talk to a strategist