Optimising for compare X to Y prompts in AI search
Compare X to Y prompts are where buyers form preferences. Here's how we structure pages and signals to win them across LLM surfaces.
There is a particular type of prompt that decides more deals than people realise: “Compare X to Y.”
This post is the playbook we use with clients to win the compare X to Y prompts across ChatGPT, Claude, Gemini, Perplexity and Microsoft Copilot. It is opinionated and we will be honest about what is still unsettled.
Why these prompts matter disproportionately
A single compare X to Y query sits closer to the purchase decision than nearly any other query a buyer makes. The buyer has done discovery, they have heard of both vendors, they are pressure-testing which one fits. The model’s answer at this point shapes the next sales conversation.
The traditional SEO equivalent has been a “vs” page. Vendors publish them, sometimes well, often badly. In AI search, the dynamics shift in three ways:
The model summarises rather than ranking pages. The first comparison page on Google still gets the click. In AI search, the model lifts content from multiple sources and synthesises, so being one of three or four cited sources is the realistic ambition.
Independent sources weigh more. G2 comparison pages, Reddit threads and trade press evaluations get cited alongside or above vendor-authored pages. The vendor’s own content has to be accurate or it gets ignored.
The prompt is conversational. A buyer rarely asks just one comparison question. They ask follow-ups, and the model uses the same retrieved sources to answer the whole conversation. So your content has to hold up across a sequence, not just a single query.
For the broader citation mechanics, our piece on how LLMs choose what to cite is the foundation.
What a comparison page that gets cited looks like
We have done enough of these to recognise the structure that works. The pattern is consistent across categories:
A clear positioning paragraph at the top. Two or three sentences that state honestly when this product fits better and when the alternative does. Models lift this paragraph, so the words have to be quotable and accurate.
A side-by-side comparison table. Feature, capability or attribute on the left, both products’ positions on the right. Avoid the temptation to list every feature. Twelve to twenty rows is the right ballpark. The model reads the table.
Use case framing. A section that names the buyer profiles where each product fits best. “If you are a 200-seat firm needing X, this product is the right fit. If you need Y at enterprise scale, the alternative is better.” This honesty is what gets the page cited rather than dismissed.
A clear call on price and packaging. Buyers ask. Models pull. Saying “starts at £15 per seat per month” is more useful than saying “competitive enterprise pricing”.
A strengths and limitations summary. Both products. Yours and theirs. If you only list your strengths and their limitations, the model recognises the bias and weights independent sources higher.
Customer evidence. A short section linking to relevant case studies or testimonials. The model uses this to ground the qualitative claims.
The structure is non-negotiable. We have tried variations and the pages that get cited reliably are the ones that follow this shape.
On honesty in comparisons
This is the hardest part for most B2B marketing teams, and the most important. A comparison page that is transparently biased toward your product gets cited less in AI search, not more.
We have run controlled tests with clients where we shipped a balanced comparison page in place of a one-sided one. Citation share rose. The intuition is simple. Models recognise patterns of biased framing and weight balanced sources higher when summarising for a prospect who is genuinely comparing.
The corollary, which sales teams hate to hear, is that there are some comparisons where you should not publish a page. If your product genuinely loses to the alternative for the majority of buyers, do not write a comparison page that pretends otherwise. The model will source the truth from elsewhere and your page will hurt.
Our piece on comparison content that ranks covers the wider editorial principles.
Beyond the page itself
The vendor page is one input to the model’s answer. Often not even the dominant one. The other inputs we work on:
G2 and Capterra comparison views. G2 builds vendor-vs-vendor pages automatically. These are cited heavily for compare X to Y prompts. Our piece on why G2 and Capterra matter more for AI than for SEO covers the platform side. Make sure your profile is complete, reviews are recent and the comparison page is populated.
Reddit threads. Threads asking “X or Y for our use case” get cited disproportionately. Being present in the thread, with disclosure of affiliation, often shifts the answer. Our piece on why Reddit is now critical to AI search citations covers the playbook.
Trade press comparisons. Industry publications writing head-to-head reviews are gold. These are slow to land but cited for years.
Independent blog reviews. A consultant or community leader writing about both products independently. Hard to seed, but valuable when it lands.
The combined picture is what gets cited. The page on your own site has to hold its corner of that picture without overreaching.
Prompts you should be tracking
For each material competitor, we recommend tracking at least these prompts in your monthly audit:
- “Compare X to Y”
- “X vs Y”
- “X vs Y for [use case]”
- “Is X or Y better for [use case]”
- “Alternatives to X”
- “Switch from Y to X”
- “Is X worth it”
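For a monthly audit, this prompt set can be expanded programmatically for each competitor pair. The sketch below is a hypothetical helper, not a standard tool; the template wording mirrors the prompts listed above, and the product names are placeholders.

```python
# Hypothetical helper: expand the tracked comparison prompts for each
# competitor ahead of a monthly audit. Template wording follows the
# prompt list above; product names here are illustrative.
TEMPLATES = [
    "Compare {x} to {y}",
    "{x} vs {y}",
    "{x} vs {y} for {use_case}",
    "Is {x} or {y} better for {use_case}",
    "Alternatives to {x}",
    "Switch from {y} to {x}",
    "Is {x} worth it",
]

def audit_prompts(product, competitors, use_case):
    """Return the full prompt set to run against each LLM surface."""
    prompts = []
    for rival in competitors:
        for template in TEMPLATES:
            prompts.append(template.format(x=product, y=rival, use_case=use_case))
    return prompts

for p in audit_prompts("Acme CRM", ["RivalSoft"], "mid-market sales teams"):
    print(p)
```

With two competitors this yields fourteen prompts per use case, which is a manageable monthly run across the five surfaces named above.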
The last three sit on the displacement edge of comparison and discovery. They are some of the highest-leverage prompts in AI search and most teams under-track them. Our piece on tracking AI citations through Profound versus manual prompt audits covers the tooling.
What the comparison page should not do
A few common mistakes we see and have stopped doing:
Listing only your strengths in the comparison table. Models read this as marketing copy and weight it down.
Generic “we’re better” prose. Specific claims with evidence get cited. Generic ones do not.
Hidden pricing. If the page says “contact us for pricing” while the competitor states a number, the model uses the number. Even an approximate range beats nothing.
Missing schema. Comparison pages benefit from FAQ and Product schema. Our piece on structured data for AI search covers the patterns.
Stale content. A comparison page from 2023 with old features and old pricing gets cited but for the wrong claims. Either keep it fresh or unpublish it.
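On the schema point above, a minimal sketch of the FAQPage JSON-LD we would add to a comparison page looks like this. The schema.org types (`FAQPage`, `Question`, `Answer`) are real; the helper function, product names and answer text are illustrative assumptions.

```python
import json

# Build FAQPage JSON-LD for a comparison page. The schema.org types are
# real; the question/answer content below is illustrative only.
def faq_schema(qa_pairs):
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }

schema = faq_schema([
    ("How does Acme CRM pricing compare to RivalSoft?",
     "Acme CRM starts at £15 per seat per month; RivalSoft publishes "
     "enterprise pricing on request."),
])
print(json.dumps(schema, indent=2))
```

The output goes in a `<script type="application/ld+json">` block on the page; the same pattern extends to Product schema for each product in the comparison.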
How follow-up prompts shape strategy
A buyer who asks “compare X to Y” usually asks two or three follow-ups: “What’s the pricing difference”, “Which has better support”, “Which integrates with [tool]”. The model answers each from the same retrieved sources.
Write the comparison page knowing those follow-ups are coming. If your page anticipates them with sectioned content, the model keeps using your page across the conversation. If it does not, the model switches sources and you lose the thread. This is the under-discussed difference between SEO comparison content and AI-search comparison content. The latter has to support a sequence, not just one query.
Where comparison content fits in a wider programme
Comparison pages do not exist in isolation. They sit at the bottom of a content stack that includes category positioning, sector-specific landing pages and product detail. If your category positioning is unclear, the comparison page has to do too much work and tends to fail.
For the wider stack, our AI search optimisation primer and our piece on SEO for SaaS product pages are the foundational reads.
What we are still calibrating
The exact weighting models give to vendor versus aggregator versus community pages varies by category and shifts over time. We monitor it but we do not pretend to have a stable formula. The interaction between branded search and comparison content is muddier than it used to be, and our piece on how AI search shifts the branded/unbranded query split covers some of the consequences.
The pragmatic position is that comparison content is one of the highest-leverage AI search investments a B2B tech firm can make, and firms doing it well are getting cited where deals are decided.
If you’d like a second opinion on your comparison content or your AI search strategy, drop us a line. You can also see how we approach this on our content marketing services page or AI SEO services page.
Frequently asked questions
Does an honest comparison page really get cited more than a one-sided one?
Yes. In controlled tests with clients, replacing a one-sided comparison page with a balanced one raised citation share. Models recognise biased framing and weight balanced sources higher when summarising for a genuinely comparing buyer.
What does a comparison page that AI search will cite actually look like?
An honest positioning paragraph at the top, a twelve-to-twenty-row side-by-side table, use case framing for each buyer profile, a clear call on price and packaging, a strengths and limitations summary covering both products, and customer evidence.
Which comparison prompts should we track in our monthly audit?
At minimum, for each material competitor: “Compare X to Y”, “X vs Y”, “X vs Y for [use case]”, “Is X or Y better for [use case]”, “Alternatives to X”, “Switch from Y to X” and “Is X worth it”.