AI Agents Are Learning to Remember. AEO Just Got a Lot More Competitive.
Most businesses still think of AEO as a content formatting problem. Add FAQ sections. Use question headings. Get schema markup in place. That is a fine start — but it is describing last year's game.
Right now, a structural shift is happening underneath the surface of AI search. Agents are developing persistent memory. The research has been published. The code is open source. And the brands that understand what this means — and act on it now — are going to be significantly harder to displace in 12 months.
Key Takeaways
- 80% of the URLs that AI models cite do not rank in Google's top 100, meaning SEO and AEO are two separate games running simultaneously
- AI agents today have no memory between sessions, but Google's ReasoningBank (April 2026) and parallel research from Mem0 and others make clear that is changing
- Once agents carry persistent memory, sources that consistently help them succeed will get a structural citation advantage
- Content with data, citations, and quotations already earns 30 to 40% higher visibility in AI responses — that gap will widen
- The brands that get established in AI citation patterns now are training the models to keep them there
The Search Game Has Already Split in Two
*Image: AEO vs. traditional search, two different games*
Here is the number that should stop most marketing teams cold: there is only a 12% overlap between the brands AI models cite and the brands ranking in Google's top 10, based on a study of 15,000 prompts using Ahrefs Brand Radar. For ChatGPT specifically, that overlap drops to 8%.
This means that almost everything a business has done to win in traditional search has no direct connection to whether it shows up in AI answers. Two separate games, two separate scoreboards.
The scale of the AI channel makes this urgent. ChatGPT alone now handles over 2 billion queries daily. Google AI Overviews reach 1.5 billion monthly users. AI referral traffic grew 527% year over year for publishers who optimized for it. And the traffic quality is different — AI-driven visitors convert at 4.4 times the rate of standard organic visitors and spend 68% more time on site.
The brands capturing that traffic are not necessarily the ones with the strongest SEO. They are the ones whose content AI models can extract, trust, and cite.
Why Current AEO Strategies Have a Ceiling
The basics of AEO are well established at this point. Direct-answer openings. Question-based headings. FAQ schema. Short, modular sections. These are the table stakes.
But research is showing that structural optimization alone hits a ceiling pretty quickly.
Content with statistics, citations, and quotations achieves 30 to 40% higher visibility in AI responses compared to content without them. Pages updated within two months earn 28% more citations than older content. And cross-platform analysis shows citation rates for the same brand can vary by a factor of up to 615 between platforms: a brand that appears consistently on Perplexity might be virtually invisible on Claude or Grok.
| AEO Signal | Visibility Impact |
|---|---|
| FAQ schema + structured headings | Baseline |
| Data-backed content with citations | +30 to 40% |
| Fresh content updated within 60 days | +28% |
| Strong third-party presence across directories and reviews | Compounds all other signals |
| Multi-platform optimization | Required — platforms behave very differently |
The ceiling is not schema. The ceiling is whether your content is genuinely useful enough that an AI model trusts it, returns clean answers from it, and names it as a source.
AI Agents Are Starting to Learn From Experience
This is the part of the story most AEO coverage has not gotten to yet.
AI agents — the systems that power search tools like Perplexity, ChatGPT Search, and Google AI Mode — currently start every session fresh. They have no record of which sources gave them good information before. Every source competes on equal terms every single time.
That is changing. In April 2026, Google Research published ReasoningBank, a memory framework that allows agents to distill lessons from past tasks — including both successes and failures — into structured reasoning patterns they carry into future sessions. Tested on web browsing and software engineering benchmarks using Gemini 2.5 Flash, it produced an 8.3% improvement in task success rates and reduced the steps required per task by nearly 3 — which matters because every step is an LLM call with a cost attached.
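The memory items ReasoningBank distills are small structured records, roughly a short title, a one-line description of when the lesson applies, and the reasoning content itself, extracted from both successes and failures. A minimal sketch of that shape (the field names and the `distill` logic here are illustrative, not the paper's implementation, which uses an LLM to write these fields):

```python
from dataclasses import dataclass

@dataclass
class MemoryItem:
    """A distilled lesson an agent carries into future sessions."""
    title: str         # short handle used for retrieval
    description: str   # one-line summary of when the lesson applies
    content: str       # the reusable reasoning strategy itself
    from_success: bool # failures are distilled too, as pitfalls to avoid

def distill(task: str, succeeded: bool, notes: str) -> MemoryItem:
    """Turn one finished task into a reusable lesson (illustrative only)."""
    kind = "strategy" if succeeded else "pitfall"
    return MemoryItem(
        title=f"{kind}: {task}",
        description=notes,
        content=f"When handling tasks like '{task}', remember: {notes}",
        from_success=succeeded,
    )

bank: list[MemoryItem] = []
bank.append(distill("find pricing page", True, "vendor docs resolved this in one step"))
bank.append(distill("extract spec table", False, "marketing pages forced costly rewrites"))
```

The point of the structure is retrieval: on a new task, the agent pulls in matching items from the bank before it starts, so lessons from your pages ride along into sessions that never touched your domain.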
This is not a Google-only direction. The Mem0 research team published a comprehensive benchmark at ECAI 2025 comparing ten distinct AI memory approaches against the LOCOMO dataset. The conclusion: agent memory architecture is now consequential enough that the right choice produces up to 15-point accuracy differences on complex queries. And Deloitte estimates that by 2027, 50% of companies using generative AI will be running agentic AI pilots — up from 25% today — all of which will need production-grade persistent memory.
The research consensus across multiple teams and institutions points in the same direction: agents are going to remember.
What Agent Memory Does to Citation Dynamics
Right now, being cited by an AI model is a real-time evaluation. The model reads your page in the moment and decides whether it answers the query well enough to cite.
Once agents carry memory across sessions, that evaluation becomes cumulative. When an agent retrieves information from your page, uses it to complete a task, and assesses that the task went well, that positive outcome gets stored. Future sessions on related queries will start with your source already trusted.
The inverse is also true. A source that consistently produces ambiguous answers, forces the agent to rewrite dense prose, or leads to failed task completion will accumulate a negative signal.
| Content Approach | Current Behavior | With Agent Memory |
|---|---|---|
| Direct, task-resolving answers | Gets cited when retrieved | Builds a trust record across sessions |
| Schema-optimized but vague | Sometimes cited | Trust record stays flat or declines |
| Dense but accurate prose | Rarely extracted cleanly | Harder to build positive memory signal |
| Thin, keyword-optimized | Rarely cited | No utility, no memory signal |
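Under a memory regime, the table above collapses into a running per-source score. A toy sketch of that dynamic, assuming a simple success/failure tally (no published system scores sources exactly this way; the shape is purely illustrative):

```python
from collections import defaultdict

class SourceTrust:
    """Toy per-domain trust record: task successes raise the score, failures lower it."""
    def __init__(self):
        self.record = defaultdict(lambda: {"success": 0, "failure": 0})

    def log(self, domain: str, task_succeeded: bool) -> None:
        key = "success" if task_succeeded else "failure"
        self.record[domain][key] += 1

    def score(self, domain: str) -> float:
        r = self.record[domain]
        total = r["success"] + r["failure"]
        # Unseen sources start neutral at 0.5: the pre-memory status quo,
        # where every source competes on equal terms every session.
        return 0.5 if total == 0 else r["success"] / total

trust = SourceTrust()
for outcome in [True, True, True, False]:
    trust.log("clear-answers.example", outcome)   # mostly resolves tasks
trust.log("vague-schema.example", False)          # schema present, answer vague

# A consistently useful source now outranks an untested one by default.
assert trust.score("clear-answers.example") > trust.score("never-seen.example")
```

The asymmetry is the strategic point: a flat or negative record is worse than no record, because the neutral starting position is gone.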
This is why the brands establishing strong citation patterns now are not just winning today's traffic. They are building the memory record that will give them structural priority once agent memory becomes standard.
The Window That Is Actually Closing
AEO adoption data tells a pretty clear story about where things stand.
Early 2025 was the experimental phase — brands testing what worked, mostly by accident. Mid-2025 became the awareness phase — deliberate early movers starting to pull ahead. By early 2026, patterns are calcifying. In competitive categories, two or three brands have established consistent citation presence. The rest are largely absent.
NerdWallet's revenue rose 35% despite a 20% decline in traditional search traffic because they diversified into AI-visible content early. Walmart saw ChatGPT account for 20% of total referral traffic between June and August 2025. Ahrefs' own data showed that while AI search represented just 0.5% of their visits, it drove 12.1% of their signups.
The other side of the ledger: Chegg saw a 49% decline in non-subscriber traffic. Business Insider lost 55% of its organic traffic. Forbes dropped 50% year over year. These were not companies that failed at SEO. They were companies that succeeded at SEO in a market that shifted beneath them.
Getting into AI citation patterns now is relatively straightforward. Getting in six months from now, against established citation positions — and eventually against an agent memory bank that does not include you — is a materially harder problem.
What to Do Right Now
The starting point is understanding where you currently stand in AI answers, not in Google rankings.
A basic self-audit:
- Run your 10 most important category queries across ChatGPT, Perplexity, Gemini, and Google AI Mode
- Log which brands are being cited and how they are described
- Pull one of those pages and look at what makes it extractable — it is almost always more direct and specific than equivalent pages that are not cited
- Build a Week 1 baseline to measure against
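The audit above produces a small dataset you can tally by hand or with a few lines of code. A sketch, assuming you record each observed citation as a (platform, query, brand) row (the brands and queries here are placeholders):

```python
from collections import Counter

# Manually recorded observations from the 10-query audit:
# (platform, query, brand cited in the answer)
observations = [
    ("ChatGPT",    "best crm for smb",       "BrandA"),
    ("ChatGPT",    "best crm for smb",       "BrandB"),
    ("Perplexity", "best crm for smb",       "BrandA"),
    ("Gemini",     "crm pricing comparison", "BrandC"),
    ("Perplexity", "crm pricing comparison", "BrandA"),
]

def citation_share(obs):
    """Fraction of all logged citations each brand captured."""
    counts = Counter(brand for _, _, brand in obs)
    total = sum(counts.values())
    return {brand: n / total for brand, n in counts.items()}

baseline = citation_share(observations)  # the Week 1 baseline to measure against
```

Re-running the same queries in later weeks against this baseline is what turns the audit from a snapshot into a trend line, and it surfaces the per-platform gaps the 615x variance figure predicts.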
From there, the work splits into three tracks that consistently drive citation rate improvement:
Restructure content for extraction. Every key page should open with a direct, two- to three-sentence answer to the main question. Headings should be questions or direct statements. Each section should work as a standalone answer, not a chapter that only makes sense in context.
Build on-site credibility signals. Named authors with credentials. Outbound citations to verifiable sources. Schema markup — FAQPage and HowTo at minimum. Accurate publish dates and regular refreshes. AI models evaluate trustworthiness the same way a careful reader would.
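On the schema piece, FAQPage markup is a small JSON-LD block embedded in a `<script type="application/ld+json">` tag. Generating it programmatically keeps the structure valid as your FAQ content changes; this sketch uses the standard schema.org FAQPage shape, with placeholder questions:

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }, indent=2)

markup = faq_jsonld([
    ("What is AEO?",
     "Answer engine optimization: structuring content so AI systems can extract and cite it."),
])
# Embed the result in the page head:
# <script type="application/ld+json">{markup}</script>
```

The same pattern extends to HowTo markup by swapping the `@type` and entity structure for schema.org's HowTo/HowToStep types.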
Expand third-party presence. AI models do not limit their retrieval to your domain. They read Reddit, review platforms, directories, LinkedIn, and earned media. The brands that get cited most consistently have a presence across multiple independent sources all reinforcing the same narrative.
Indexy's AI Visibility Monitoring tracks citation rates automatically across all major AI platforms. The Content Engine builds and restructures content to the format that earns citations. Third-Party Authority builds the off-site presence AI models use to validate who you are.
Related Reading
- What Is AEO? The Complete Guide to Answer Engine Optimization in 2026
- How to Get Your Business Recommended by ChatGPT, Perplexity, and Gemini in 2026
- Why Your Website Is Invisible to AI Search (And How to Fix It in 2026)
Frequently Asked Questions
What is agent memory and why does it matter for AEO? Agent memory refers to frameworks that allow AI agents to retain and apply lessons from past sessions rather than starting fresh every time. Research published in 2025 and 2026 from Google, Mem0, and others shows this is the clear direction of agent development. Once deployed at scale, agents with memory will preferentially retrieve sources that have helped them succeed before — which changes citation dynamics significantly.
Is AEO the same as GEO? AEO (answer engine optimization) and GEO (generative engine optimization) are two labels for the same discipline. Both refer to optimizing content so it gets cited in AI-generated answers. The goal, tactics, and metrics are the same.
How much does content format actually matter? Research from multiple sources shows content with data, citations, and quotations earns 30 to 40% higher AI visibility compared to content without them. Fresh content updated within two months earns 28% more citations than older content. Format matters, but usefulness matters more.
How quickly can citation rates improve? Most businesses that make structured changes — direct-answer content, question headings, FAQ schema, third-party presence — start seeing measurable citation improvements within 30 to 60 days when tracking systematically across platforms.
Is it too late to start? Not too late, but the window for easy gains is shrinking. In most competitive categories, two or three brands have established consistent citation patterns. Starting now is significantly more straightforward than starting in six months with established competitors and eventually agent memory working against you.
Noah Kanji
Team Indexy
The Indexy editorial team covers AI search visibility, generative engine optimisation, and the strategies brands use to get cited and selected in AI answers.
Start being the answer.
AI selects a few sources. Indexy helps you become one of them.