CASE STUDY · DEVTOOLS · 12-WEEK AEO BUILD

3x organic mentions across ChatGPT, Claude and Perplexity.

A Series B DevTools company was investing heavily in SEO and getting nothing from AI search. We ran a 12-week AEO build that combined structured content, evaluator-grade citations, and on-platform presence to get them quoted by the LLMs developers actually ask.

Industry
DevTools / Series B
Engagement
12 weeks
Outcome
3x AI-search mentions
Surface
ChatGPT · Claude · Perplexity

Challenge

The client ranked well on Google for their category — but Google traffic to category pages had plateaued, and an increasing share of their developer audience was discovering tools through ChatGPT and Perplexity instead. Their competitors were getting cited by name in AI-search answers. They weren't.

Their internal hypothesis was that AI search needed "more SEO." It didn't. AEO and SEO share substrate but not strategy. SEO optimizes for crawl, parse, rank. AEO optimizes for citation: did the model actually quote you when a developer asked the question that puts you in consideration?

Approach

Audit the AEO surface as it exists

We started by sampling. Twenty representative developer questions. Each asked through ChatGPT (with web search on/off), Claude, Perplexity, and Google AI Overviews. We logged who got cited, where citations originated (product docs? blog post? Reddit? G2?), and what kind of answer the model preferred. The pattern was specific: the client's blog wasn't being cited because it was thin and self-promotional. Their docs were cited rarely because they were unstructured. Reddit and Stack Overflow mentions of the client were dated.

Re-architect for citation

We rebuilt the citation surface in three places:

  • Product docs — restructured to put the answer first, code sample second, caveats third. Schema on every page. Each how-to became something an LLM could quote in two paragraphs.
  • Comparative content — honest, well-structured "X vs Y" pages that the client had refused to publish. Models prefer these to vendor monologues.
  • Third-party presence — fresh, on-topic answers on Stack Overflow and developer subreddits, written by named engineers, not anonymous accounts. Real content, not seeding.

Schema discipline

Every published page got Article, FAQPage, and HowTo schema where applicable. We validated the JSON-LD post-hydration in CI, because Google's structured-data report and the actual rendered DOM diverge often enough to be a problem. We also added schema.org speakable markup on the parts of pages that lend themselves to voice and AI output.
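To make the validation step concrete: a post-hydration check can separate the pure JSON-LD parsing (testable without a browser) from the Playwright fetch of the hydrated DOM. The sketch below is illustrative, not the client's actual pipeline; the URL and required types are placeholders.

```python
import json

def types_from_jsonld_blocks(blocks: list[str]) -> set[str]:
    """Collect every @type appearing in a list of raw JSON-LD script bodies."""
    types: set[str] = set()
    for raw in blocks:
        data = json.loads(raw)
        items = data if isinstance(data, list) else [data]
        for item in items:
            t = item.get("@type", [])
            types.update([t] if isinstance(t, str) else t)
    return types

def fetch_rendered_jsonld(url: str) -> list[str]:
    """Grab JSON-LD script bodies from the hydrated DOM (requires Playwright)."""
    from playwright.sync_api import sync_playwright  # local import keeps the parser testable without a browser
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Wait for client-side hydration before reading the DOM.
        page.goto(url, wait_until="networkidle")
        blocks = page.eval_on_selector_all(
            'script[type="application/ld+json"]',
            "nodes => nodes.map(n => n.textContent)",
        )
        browser.close()
    return blocks

# A CI assertion might then look like:
# assert {"Article", "FAQPage"} <= types_from_jsonld_blocks(fetch_rendered_jsonld(url))
```

The point of checking the rendered DOM rather than the served HTML is exactly the divergence described above: what Google's report shows and what the browser actually hydrates are not always the same document.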

Measure citations, not impressions

We built a weekly citation tracker — a defined set of 80 prompts run through ChatGPT, Claude and Perplexity, with named-mention detection and source-link logging. Citation rate, not impression count, was the goal. The client could see week-over-week whether the work was moving the actual number we were paid on.

"AEO is not SEO with extra schema. It's a different unit of measurement. The metric is whether the model said your name out loud."

What we built

  • AEO audit document — 20 representative prompts, baseline citation rate, gap analysis vs four named competitors.
  • Doc rewrite — top 30 product-doc pages restructured for answer-first format and tagged with schema.
  • Comparative content suite — 12 honest competitor-comparison pages, each with structured pro/con tables.
  • Engineer-bylined content — eight long-form articles authored by named engineers, published with author schema.
  • Third-party presence — sustained, named contributions on developer forums (no anonymous accounts, no spammy links).
  • Citation tracker — automated weekly run of 80 prompts across three AI surfaces, named-mention detection, source-link logging, dashboarded.
  • JSON-LD validators in CI — hand-off so the in-house team can keep schema discipline after we step off.
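For readers wiring this up themselves, FAQPage generation reduces to a small template. The sketch below is a generic example of the kind of JSON-LD generator listed above, not the client's implementation:

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Render (question, answer) pairs as FAQPage JSON-LD per schema.org."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in pairs
        ],
    }
    return json.dumps(doc, indent=2)
```

The output goes in a `<script type="application/ld+json">` tag; generating it from the same structured content that renders the visible Q&A keeps markup and page from drifting apart.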

Results

By week 12, the citation tracker showed a 3x increase in named mentions across the three AI surfaces vs the baseline. The client's category-comparative pages started appearing as sources in Perplexity answers. ChatGPT (with web search) began naming the client in tool-recommendation queries where it had previously omitted them. Claude reliably surfaced the client in their narrowest category, where they have a genuine technical edge.

SEO traffic also rose — the structural rewrites helped Google as well — but that wasn't the engagement's metric. The bigger result is a measurement habit: the client now has a weekly AEO dashboard owned by their content lead.

Stack & tools

Schema
JSON-LD generators (Article, FAQPage, HowTo)
Validation
Playwright post-hydration JSON-LD checks in CI
Citation tracking
Custom prompt suite, three AI surfaces, weekly cron
Content CMS
Client's existing CMS + structured-content templates
SEO baseline
GSC + Ahrefs (kept for context, not as primary metric)
Forum presence
Engineer-led, named, contributory (no spam)

What this engagement looks like for you

If you sell into developers, security engineers, finance ops, or any audience that has migrated to LLM-search for tool discovery, your SEO investment is plateauing for the same reasons. AEO is the next compounding loop — and it is measurable, if you measure citation rather than impression.

◆ START GROWING

Want similar results?

AEO audits + citation builds for B2B SaaS, AI tools, DevTools. NDA available.