ChatGPT keeps recommending your competitor. Here's how we fix that
A practitioner's guide to displacing the competitor ChatGPT keeps naming when prospects ask for recommendations in your category.
A founder we work with ran the obvious test on a Monday morning. He asked ChatGPT, “Who are the best managed IT providers for mid-market firms in the UK?” His company was not in the list. A direct competitor was. He had spent six years building the better business and the model had quietly decided otherwise.
This is the conversation we now have most weeks. The good news is that displacing a wrongly-cited competitor is achievable. The bad news is that it is not a single fix, and most of the work happens off your own site.
Why ChatGPT picked them, not you
Before you can change the answer, you need to understand how ChatGPT arrived at it. There are two paths. The model either has the recommendation baked into its training data, in which case the citation appears without any web retrieval, or it ran a live search and pulled the answer from a handful of pages it found. You can usually tell which path applied by looking at whether sources are linked beneath the response. If they are, you are dealing with retrieval-time citation, which is the easier problem to solve.
In our experience, four factors decide who gets named:
- The volume and quality of third-party mentions of the competitor in places the model trusts.
- Whether their name appears alongside the category language a prospect would actually use.
- The presence of comparison and listicle content that places them inside curated sets.
- Whether their own site reinforces those signals with clear positioning, schema and quotable copy.
If you have read our piece on how LLMs choose what to cite, you will recognise this picture. The mechanics have not changed in the last twelve months. The leverage has just shifted further away from your homepage and further toward the broader web.
Audit the actual prompts that matter
The first mistake we see is teams optimising for a prompt no buyer ever uses. “Best managed IT provider in the UK” sounds important. In practice, prospects ask narrower questions: “Best MSP for a 200-seat law firm in Manchester”, “Who do I use for Microsoft 365 security if I’m regulated by the FCA”, “Alternatives to X”, where X is the competitor ChatGPT keeps naming.
Build a list of forty to sixty prompts that match how your real prospects ask. Run them across ChatGPT, Claude, Gemini, Perplexity and Microsoft Copilot. Log who gets cited, in what order and what sources the model used. Our walkthrough on auditing visibility across Copilot and ChatGPT covers the methodology in detail. The output of this exercise is your real target list, not what your sales team thinks you should be ranking for.
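To make the logging concrete, here is a minimal sketch of an audit record. The field names, CSV layout and example data are our own illustration, not a standard; in practice you would fill the rows by hand from each chat session, or from a vendor API where one exists.

```python
import csv
from dataclasses import dataclass, field

@dataclass
class AuditRow:
    """One observation: a prompt run against one engine on one date."""
    date: str    # ISO date the prompt was run
    engine: str  # e.g. "chatgpt", "claude", "gemini", "perplexity", "copilot"
    prompt: str  # the exact prospect-style question asked
    brands_cited: list = field(default_factory=list)    # in the order named
    sources_linked: list = field(default_factory=list)  # URLs under the answer

def write_audit_log(rows, path="prompt_audit.csv"):
    """Flatten the observations into a CSV you can diff month on month."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["date", "engine", "prompt", "rank", "brand", "sources"])
        for row in rows:
            # One line per cited brand, so rank changes show up in a diff
            for rank, brand in enumerate(row.brands_cited, start=1):
                writer.writerow([row.date, row.engine, row.prompt,
                                 rank, brand, " ".join(row.sources_linked)])

# Illustrative data only; "CompetitorCo" and the URL are placeholders
rows = [
    AuditRow("2026-01-05", "chatgpt",
             "Best MSP for a 200-seat law firm in Manchester",
             brands_cited=["CompetitorCo", "OtherMSP"],
             sources_linked=["https://example.com/roundup"]),
]
write_audit_log(rows)
```

The rank column matters: over a quarter you want to see not just whether you are cited, but where you sit in the list relative to the competitor.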
Work backwards from the citations
Once you have the prompt list, look at the source pages the models linked to. Patterns emerge fast. If your competitor is being cited, you will usually see one or more of these sources doing the heavy lifting:
- A G2, Capterra or TrustRadius listing where they sit in a curated set
- A Reddit thread where someone asked a similar question and they were named in the replies
- A category roundup on a publisher site, often eighteen months old or older
- Their own comparison pages, written for “X alternatives” or “X vs Y” queries
- A podcast transcript, conference write-up or partner blog that mentioned them in passing
This is the actual map. Whatever is on that map needs to start mentioning you, ideally in the same phrasing and category.
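One way to draw that map from an audit log is to tally the domains behind the source URLs the models linked, so you can see which handful of pages are doing the heavy lifting. A minimal sketch, with illustrative URLs rather than real citations:

```python
from collections import Counter
from urllib.parse import urlparse

def top_citing_domains(source_urls, n=5):
    """Count how often each domain appears among the pages the models linked."""
    domains = [urlparse(u).netloc.removeprefix("www.") for u in source_urls]
    return Counter(domains).most_common(n)

# Placeholder URLs for illustration only
urls = [
    "https://www.g2.com/categories/managed-it-services",
    "https://www.reddit.com/r/msp/comments/abc123",
    "https://www.g2.com/products/competitorco/reviews",
    "https://publisher.example/best-msps-2024",
]
print(top_citing_domains(urls))
# first entry is ("g2.com", 2)
```

In our experience the resulting list is short. Two or three domains usually account for most of a competitor's citations, which is what makes the outreach tractable.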
Get added to the third-party sets
The fastest win for most of our clients is getting added to the curated lists already being cited. That means claiming and properly populating G2 and Capterra profiles, asking happy customers for reviews on those platforms within a defined time window and pitching publisher roundups for inclusion. We will go deeper on this in our forthcoming post on why G2 and Capterra matter more for AI than for SEO, but the headline point is straightforward. If a list ranks high enough to be cited and you are not on it, you cannot be cited.
Reddit deserves its own paragraph. Models pull from Reddit threads disproportionately, partly because the data licensing deal with OpenAI made that content first-class for ChatGPT. We are not suggesting you spam subreddits. We are suggesting that the people in your team who know your category should be participating in genuine conversations there, and that customer success should be flagging when a thread asks the kind of question your product answers. We cover the playbook in why Reddit is now critical to AI search citations.
Tighten your own positioning
External work moves the needle most, but your own site has to be ready when retrieval brings the model to your door. Three things we check on every client engagement:
Category language. Do your homepage and key service pages use the words a prospect actually searches with? Many tech firms describe themselves in vendor-speak that no LLM will ever associate with the prompt “best MSP for a 200-seat law firm”. Fix that first.
Quotable claims. LLMs lift sentences. If your differentiators are buried in marketing prose, they will not be lifted. State the claim, attribute it where you can and structure the page so the claim is near the top.
Comparison content. If a prospect asks “X vs Y” and your competitor has a page answering it, they own that prompt. Build comparison pages that name names, even when it feels uncomfortable. Our piece on comparison content that ranks walks through the structure we use.
You will also want schema sorted. Organisation, Product and FAQ schema all help the retrieval layer pick you up cleanly. If you need a primer, structured data for AI search covers the patterns we use.
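As an illustration of the Organisation pattern, here is a minimal JSON-LD sketch generated in Python. Every value is a placeholder, and the exact properties worth including will vary by business; the sameAs links are where you point at the G2 and Capterra profiles discussed above, which helps the retrieval layer reconcile your site with those third-party listings.

```python
import json

# Minimal Organisation JSON-LD sketch; every value below is a placeholder
organisation = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleMSP",
    "url": "https://www.example-msp.co.uk",
    "description": "Managed IT services for mid-market and regulated UK firms.",
    # sameAs ties the site to the third-party profiles discussed above
    "sameAs": [
        "https://www.g2.com/products/example-msp",
        "https://www.capterra.com/p/000000/example-msp/",
    ],
}

# Wrap as the <script> tag you would place in the page <head>
snippet = ('<script type="application/ld+json">\n'
           + json.dumps(organisation, indent=2)
           + "\n</script>")
print(snippet)
```

Product and FAQ schema follow the same shape with their own required properties; the schema.org documentation lists them.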
How long does this take?
Honestly, longer than anyone wants to hear. We see meaningful citation movement in eight to twelve weeks once the third-party work starts landing. Recommendations baked into training data are slower, because they only shift when models retrain or refresh their underlying corpora. The pragmatic answer is to focus on the retrieval-time prompts first, because those respond to fresh signals, and to treat training-data presence as a longer compounding game.
We are also conscious that the field is genuinely young. We are honest with clients about the bits we do not yet have clean data on, and we update our approach as the engines change. If you want a baseline view of the discipline, our AI search optimisation primer is the place to start.
What to ask your agency or in-house team
A short checklist if you are pressure-testing your current programme:
- Do we have a documented prompt set we audit monthly across at least four LLM surfaces?
- Do we know which third-party sources are being cited for our category?
- Do we have a workstream actively pursuing inclusion in those sources?
- Are our G2 and Capterra profiles current, with reviews in the last quarter?
- Does someone own monitoring the Reddit and forum conversations in our space?
- Do our own pages use the prospect’s language, not ours?
If most of those answers are no, that is your roadmap.
This is one of the most common briefs we are taking on at the moment, and it never quite looks the same twice. The shape of the answer depends on your category, your existing footprint and your competitor’s head start. If a competitor keeps showing up where you should, tell us about your business and we will take a look at where the gap actually sits. You can also see how we approach this work on our AI SEO services page.
Frequently asked questions
How long does it take to displace a competitor in ChatGPT recommendations?
In our experience, retrieval-time citations start moving in eight to twelve weeks once the third-party work lands. Recommendations baked into training data shift more slowly, typically only when models retrain or refresh their underlying corpora.
How do I know if ChatGPT is using training data or live retrieval to recommend my competitor?
Look beneath the response. If sources are linked, the model ran a live search and you are dealing with retrieval-time citation, which responds to fresh signals. If no sources appear, the recommendation is likely baked into the training data, which is the slower problem to shift.
Should we publish a comparison page that names our competitor directly?
Yes. If a prospect asks “X vs Y” and only your competitor has a page answering it, they own that prompt. Naming names can feel uncomfortable, but comparison pages are how you contest those queries.
More on AI SEO
- Google AI Overviews: how MSPs can win the cited-source spot. What we've learned about getting MSPs cited in Google AI Overviews in 2026, including page structure, schema and the local search dimension. By Paul Clapp.
- AI search optimisation for IT services firms. How MSPs and IT services firms can show up in AI search answers, with a practical playbook covering pages, citations and the bits we still don't know. By Paul Clapp.
- AI search optimisation: a 2026 primer for tech marketers. A grounded primer on AI search optimisation for B2B technology marketers in 2026, covering what's known, what's emerging and where to focus first. By Paul Clapp.