
AI search optimisation for B2B technology companies: the definitive guide

Technical buyers now research with AI assistants before they touch Google. Being the source LLMs cite is the new top of funnel. Here's how we get our clients there, with all the caveats of a discipline that's still being defined.

A finance director researching SAP partners no longer opens ten tabs. They ask Claude to shortlist three. An IT manager scoping a co-managed service provider asks ChatGPT what good looks like before a single sales call. We have watched this shift play out across our client base over the last eighteen months, and the pattern is consistent: technical buyers are starting their research inside AI assistants and arriving on supplier sites with the questions already half answered.

That changes what we have to optimise for. Ranking on page one of Google still matters, but it is not enough. If the LLM does not know your client exists, or knows them but does not cite them, the buyer never sees the link to click. AI search optimisation is the discipline of fixing that, and it is genuinely new. There is no twenty-year playbook. There is what we have tested, what we have measured, and what we are honest enough to call uncertain.

Why AI search matters for B2B tech right now

We have always argued that B2B technology marketing rewards specificity. Generic SEO playbooks aimed at e-commerce or local services rarely transfer cleanly to a firm selling Microsoft 365 governance, NetSuite implementations or co-managed SOC services. AI search amplifies that. Buyers ask LLMs questions they would never type into a Google search box: “we are a 250-seat manufacturer in the Midlands moving off on-prem Exchange, who should we shortlist for a managed migration?” The answer is a list of three to five companies, with reasoning, and the buyer often takes that shortlist seriously.

We covered the full case in our AI search optimisation primer for B2B tech, but the short version: when an LLM cites your client, it functions as a recommendation from a trusted advisor. When it cites a competitor instead, you have lost the deal before the RFP goes out. This is happening today, in measurable volume, across every MSP and SaaS client we have instrumented for it.

The implication for marketing leaders is straightforward. AI search is not an optional channel. It is becoming the layer that sits above Google for considered B2B purchases, and unlike search rankings, you cannot just write more content and hope. The mechanics are different.

How LLMs choose what to cite

There are two things to separate. There is the model itself, trained on a snapshot of the public web plus licensed data, which carries an opinion about your client baked into its weights. And there is the retrieval layer, which fetches live sources in response to a query. Most consumer-facing AI assistants now use both: Claude with web search, ChatGPT with browsing, Perplexity natively, Gemini with grounding, Microsoft Copilot via Bing. The retrieval part is what we can most directly influence.

Our working model, refined across roughly thirty B2B tech sites we have audited, is that LLMs cite sources that are technically accessible, semantically relevant to the prompt, factually consistent across sources, and recognised by the underlying search index they are grounded in. We dig into the mechanics in how LLMs choose which sources to cite, but it is worth noting that different assistants behave very differently. We compared two of the most prominent in Bing Chat vs ChatGPT citations and found citation behaviour, source diversity and freshness all diverged meaningfully.

Here is the rough shape of how the major assistants currently treat citations, based on our own audit work:

Assistant | Retrieval source | Citation density | Freshness bias
ChatGPT (with browsing) | Bing index plus partner deals | Medium, often 2 to 4 per answer | Moderate
Claude (with web) | Brave plus partner sources | Lower, sometimes 1 to 2 | Moderate
Perplexity | Mixed, own crawler plus partners | High, 5 to 10 | Strong
Gemini | Google index | Variable | Strong
Copilot | Bing index | Medium | Moderate

These numbers shift quarter on quarter. What does not shift much is the underlying logic: the assistant looks for sources that confirm the answer it wants to give, with preference for ones it perceives as authoritative.

What “optimising for AI” actually means

Most of the advice we read on this topic is either vague or recycled SEO content with the word “AI” pasted on. The honest version is more boring and more useful. Optimising for AI means content an LLM can quote cleanly, structure a retrieval system can parse, and a brand footprint that confirms the cited claims across multiple independent sources.

We worked through the writing side in writing content LLMs actually cite. The patterns we see rewarded with citations include: clear definitional sentences early in the page, named-entity precision (write “Microsoft 365 Business Premium”, not “the Premium plan”), structured comparisons, original data with sourcing, and direct answers to the specific questions buyers ask. We also found that many tech sites have huge gaps in the topics they cover well enough to be cited, which we mapped in how to find AI search content gaps.

A practical rule we apply for client work: every important page should contain at least one paragraph that, lifted verbatim, would be a defensible answer to a real prompt. If you cannot identify that paragraph, the page is unlikely to be cited.

The other side is structural. LLMs tend to be better than search engines at reading messy HTML, but they are not infinitely patient. Pages that bury the answer behind tabs, accordions and JavaScript-heavy layouts get cited less. We see this consistently and it surprises clients who have invested heavily in interactive layouts.

Structured data is one of the few areas where AI SEO and traditional SEO genuinely overlap. Schema markup gives any retrieval layer, whether search engine or LLM, an unambiguous statement of what a page is about. We covered the practical setup in structured data for AI search and went deeper on SaaS-specific schema in schema markup for SaaS websites, which sits in our broader SEO guide.

For B2B technology sites we typically deploy:

  • Organization with full sameAs references to LinkedIn, Crunchbase, G2 and any industry directories
  • Service schema for each productised offering, with provider, areaServed and serviceType filled out properly
  • Product schema for SaaS plans, including offers and pricing where it is public
  • FAQPage on questions where the answer would be cited
  • Article and Author markup on every long-form piece, with proper sameAs on authors
  • BreadcrumbList because it is cheap and helps

We do not believe schema alone gets you cited. We do believe the absence of it makes it materially harder for LLMs to confirm what you do, who you serve and how authoritative you are. It is table stakes.
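
To make that concrete, here is the shape we mean for the first two items, as a single JSON-LD block for a hypothetical MSP. The company, URLs and profile links are invented, so treat this as a sketch rather than a drop-in snippet:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@graph": [
        {
          "@type": "Organization",
          "@id": "https://www.example-msp.co.uk/#org",
          "name": "Example MSP Ltd",
          "url": "https://www.example-msp.co.uk/",
          "sameAs": [
            "https://www.linkedin.com/company/example-msp",
            "https://www.crunchbase.com/organization/example-msp",
            "https://www.g2.com/products/example-msp"
          ]
        },
        {
          "@type": "Service",
          "serviceType": "Co-managed IT support",
          "provider": { "@id": "https://www.example-msp.co.uk/#org" },
          "areaServed": "North West England"
        }
      ]
    }
    </script>

Validate whatever you ship with the Schema.org validator or Google's Rich Results Test; malformed JSON-LD is worse than none.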

llms.txt: the emerging standard

llms.txt is a proposed plain-text file, modelled loosely on robots.txt, that gives a curated map of a site’s most important content for LLMs to consume. It is not a standard yet in the formal sense. It is an emerging convention, and adoption is uneven. We have written about it at length in llms.txt for tech sites and worked through the harder cases in complex llms.txt for large sites.

Our current view, and we will update this if the evidence shifts, is that llms.txt is worth implementing properly for any client serious about AI search. The file does not yet seem to drive citation directly, but it does two useful things. It forces a discipline of identifying which pages on your site you most want the model to read, and it positions you well if and when the major retrieval layers begin honouring it more formally.

A workable llms.txt for a B2B tech site is short. A homepage description, a list of core service pages with one-line descriptions, a list of pillar guides, a few flagship case studies, and links to the most important comparison and pricing pages. Avoid dumping the whole sitemap. The point is editorial selection.
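
For illustration, the proposed format is markdown-shaped: an H1 name, a one-line blockquote summary, then sections of annotated links. A minimal sketch for the same hypothetical MSP (all URLs invented):

    # Example MSP Ltd
    > Co-managed IT, Microsoft 365 and security services for UK mid-market organisations.

    ## Services
    - [Co-managed IT support](https://www.example-msp.co.uk/services/co-managed-it): who it suits and how engagements work
    - [Microsoft 365 governance](https://www.example-msp.co.uk/services/m365-governance): licensing, security baselines, tenant management

    ## Guides
    - [Choosing a co-managed IT provider](https://www.example-msp.co.uk/guides/choosing-co-managed-it): buyer's guide with pricing ranges

    ## Case studies
    - [250-seat Exchange migration](https://www.example-msp.co.uk/case-studies/manufacturer-migration): manufacturing client, on-prem to Microsoft 365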

Brand mentions, third-party sources and listings

This is where AI search diverges most clearly from classic SEO. Backlinks still matter, but for LLM citation purposes, unlinked brand mentions in authoritative sources are doing serious work. We unpicked the mechanism in brand mentions vs backlinks for AI search and we think it is one of the most important shifts to understand.

When an LLM is asked to recommend, say, three SAP partners in the UK, it composes its answer by aggregating evidence from many sources. A G2 listing with reviews, a Capterra entry, a Reddit thread on r/sysadmin, a Computer Weekly article, a podcast appearance, the supplier’s own site. The more independent sources confirm the same claim (“X is a credible SAP partner with manufacturing experience”), the more likely the model is to surface it.

That makes third-party listings unusually valuable. We covered the specifics for B2B software in G2, Capterra and AI search citations. And we devoted a piece to a source LLMs cite far more than most clients expect: Reddit’s role in AI search. Reddit is in the training data of every major model and continues to be retrieved heavily. For MSPs in particular, presence in r/msp and r/sysadmin discussions, even just unlinked mentions, shows up in our audits.

This does not mean astroturfing Reddit. That backfires. It means making sure that when your client is mentioned organically, the surrounding context is accurate, and that your client is genuinely active in communities where their buyers spend time.

Optimising for comparison and shortlist prompts

A specific subset of prompts deserves its own treatment because the commercial value is so concentrated. Comparison prompts (“compare Datto to N-able for managed backup”) and shortlist prompts (“best ERP consultancies for mid-market manufacturing in the UK”) are where deals are won or lost in AI search.

We dug into the comparison side in optimising for compare X to Y prompts and into the painful experience of losing the slot in why your competitor is being cited instead. The high-leverage move for both prompt types is the same: publish credible, specific, original comparison content on your own domain and make sure third-party listings reinforce the same positioning.

For one of our SAP clients we mapped fifty real comparison prompts a buyer might issue, identified which ones the LLMs were already answering, and which had no clear cited source. The gaps were the opportunity. Within two quarters we had moved from being cited in roughly fifteen per cent of relevant prompts to over forty per cent, mainly by writing the comparisons that did not yet exist anywhere credible.

AI Overviews and how to win the cited-source spot

Google AI Overviews sit slightly apart from the conversational assistants because they are bolted onto an existing search results page. The mechanics of getting cited there are closer to traditional SEO than to ChatGPT optimisation, but with a meaningful twist. We worked through a real example in winning the AI Overview citation slot for an MSP.

The pattern we see: AI Overviews tend to cite a small number of sources, usually high-ranking organic results that contain a direct, well-structured answer to the query. If your page ranks fourth but answers the question more cleanly than the page ranking first, you can often take the cited spot. Schema helps. Heading hierarchy helps. Putting the answer in the first hundred words helps a lot.

Where AI Overviews behave unlike traditional SERPs is in click-through. Buyers often read the overview and do not click. That is a real cost, and it changes the value calculus on producing content purely for top-of-funnel queries. We pick this up later in the section on KPIs.

AI search by sector: IT services, SaaS, infrastructure

The mechanics of AI search are general, but the priorities shift by sector. We wrote a sector-specific deep dive in AI search for IT services and MSPs, and the contours apply more broadly.

For IT services and MSPs, geographic and vertical specificity dominates. Buyers ask LLMs for “MSPs serving manufacturing in the North West” and the model needs to be able to confirm both the geography and the vertical from public sources. Case studies with named industries, location pages with actual content rather than templated filler, and accreditations like Cyber Essentials Plus listed in machine-readable form all matter.

For SaaS, comparison content and pricing transparency are the dominant levers. LLMs are constantly being asked to compare tools, and they cite pages that lay out feature differences clearly. We have seen SaaS clients gain citation share simply by publishing honest, specific comparison pages, the kind we covered in comparison content that ranks.

For infrastructure (data centre, hosting, network), proof-points are the bottleneck. LLMs need to verify uptime claims, certifications, locations and capacity from independent sources. Press coverage, industry awards and detailed case studies do disproportionate work here.

Measurement: tracking AI citations and traffic

This is the part of AI SEO most clients underestimate. Measurement is hard, the tooling is immature, and we have changed our minds twice in the last year about which methods are worth the effort.

Our current measurement stack for client work is layered. We pull referral traffic by source from GA4, segmenting out chatgpt.com (and the legacy chat.openai.com), perplexity.ai, copilot.microsoft.com, gemini.google.com and the assorted Bing chat referrers. We use Cloudflare logs where we have them to catch bot crawls from GPTBot, ClaudeBot and PerplexityBot (Google-Extended is a robots.txt control token rather than a distinct crawler, so it will not show up as its own user agent). We run scheduled prompt audits, manually or via tooling, to track which queries surface the client and which surface competitors. And we cross-reference branded search volume in Search Console because it is one of the best leading indicators of LLM-driven brand exposure.
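
For teams that want to script the first two layers, a minimal Python sketch. The hostnames and crawler names are the ones we currently see in audits; both drift over time, so treat the lists as a starting point rather than a spec:

    AI_REFERRERS = {
        "chatgpt.com": "ChatGPT",
        "chat.openai.com": "ChatGPT",  # legacy hostname, still appears
        "perplexity.ai": "Perplexity",
        "copilot.microsoft.com": "Copilot",
        "gemini.google.com": "Gemini",
    }

    AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot")

    def classify_referrer(hostname: str) -> str | None:
        """Map a GA4 referral hostname to an AI assistant, or None."""
        return AI_REFERRERS.get(hostname.lower().removeprefix("www."))

    def count_ai_crawls(log_lines) -> dict[str, int]:
        """Count raw access-log hits per known AI crawler user agent."""
        counts = {bot: 0 for bot in AI_CRAWLERS}
        for line in log_lines:
            for bot in AI_CRAWLERS:
                if bot in line:
                    counts[bot] += 1
        return counts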

We covered the practical set-up in tracking AI search traffic for B2B tech and the harder problem of mapping the visibility surface in auditing AI visibility on Copilot and ChatGPT.

There are dedicated tools now. Profound and AthenaHQ are the two we have used most. They automate the prompt-tracking work and give comparable visibility metrics across assistants. They are useful but not magic. We compared the trade-offs in Profound vs manual audits. Our view: tooling is helpful at scale, manual audits are still indispensable for high-stakes client work, and the discipline of writing and rewriting your own prompt set is itself a form of insight.
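
The arithmetic behind the prompt-audit layer is simple once the log exists. A minimal sketch, assuming a hand-maintained audit log; the row structure and field names are ours, not any tool's:

    from collections import defaultdict

    # Each row records which assistant answered which prompt, and which
    # domains the answer cited. In practice this lives in a spreadsheet.
    audit_rows = [
        {"prompt": "best ERP consultancies for UK manufacturing",
         "assistant": "ChatGPT", "cited": ["example-erp.co.uk", "g2.com"]},
        {"prompt": "compare Datto to N-able for managed backup",
         "assistant": "Perplexity", "cited": ["reddit.com", "n-able.com"]},
    ]

    def citation_share(rows, domain):
        """Per assistant: fraction of audited prompts that cite the domain."""
        seen, hits = defaultdict(int), defaultdict(int)
        for row in rows:
            seen[row["assistant"]] += 1
            if domain in row["cited"]:
                hits[row["assistant"]] += 1
        return {assistant: hits[assistant] / seen[assistant] for assistant in seen}

    print(citation_share(audit_rows, "example-erp.co.uk"))
    # {'ChatGPT': 1.0, 'Perplexity': 0.0}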

Method | What it tells you | Confidence | Effort
GA4 referral segmentation | Direct traffic from AI assistants | High | Low
Cloudflare bot logs | Crawl frequency by AI vendor | High | Low
Manual prompt audits | Citation share for chosen prompts | High | High
Profound/AthenaHQ | Citation share at scale across assistants | Medium-High | Medium
Search Console branded queries | Indirect demand signal | Medium | Low
Buyer surveys | Whether AI shaped the shortlist | High | Medium

No single method is sufficient. Triangulation is the only honest answer.

The branded vs non-branded distinction is even more important in AI search than it is in classic SEO. We worked through it in branded vs non-branded queries in AI search, and it parallels our broader take in branded vs non-branded SEO.

For branded prompts (“tell me about Codestone’s NetSuite practice”), the LLM is essentially summarising what it knows about your client. The optimisation here is about ensuring the public-facing summary is accurate, complete and reflects current positioning. Every LLM is reading your homepage, your About page, the LinkedIn page, the latest press coverage. If those are inconsistent, the model’s summary is mush.

For non-branded prompts (“which IT support providers are good for higher education in the UK”), you are competing for inclusion. This is where the bulk of the strategic work sits, and where comparison content, third-party listings and topical authority pay off.

A practical split we recommend for clients: spend roughly twenty per cent of AI SEO effort on the branded surface (it is high leverage and quick to fix) and eighty per cent on non-branded visibility (it is slower but where new pipeline comes from).

What changes about content KPIs

If a buyer reads an AI Overview, gets the answer, and never clicks through, your content has done its job, and your traffic graph has not noticed. That is uncomfortable for marketers whose KPIs are pegged to sessions and pageviews. We argued in content KPIs in the AI search era that the dashboard needs reworking.

The metrics we now run for AI-savvy clients include citation share for a defined prompt set, branded search volume trend, AI assistant referral traffic in absolute and relative terms, qualified pipeline attributed to AI-discovered first touch, and content-level “answer extractability” scores from our own audits. Pageviews are still on the dashboard. They are no longer the headline.

This is also where content marketing strategy more broadly has to shift. The pieces we cover in our content marketing guide need rebalancing toward formats that are quotable in machine-mediated answers, not just readable by humans. Original research, structured comparisons, authoritative explainers and customer case studies do disproportionate work. Generic thought-leadership essays, less so.

What we do not know yet

We are eighteen months into a discipline that did not exist three years ago. There is plenty we have working theories on but cannot yet defend with the kind of evidence we would want before selling it to clients with confidence. Some honest uncertainties:

  • We do not know how heavily authorship and author authority weigh in citation decisions across assistants. We see correlations. We do not have causation.
  • We do not know how durable current llms.txt behaviour is, or whether the major assistants will converge on it as a standard.
  • We do not know whether paid licensing deals between LLM vendors and publishers are quietly distorting citation patterns in ways that disadvantage smaller B2B sources. We suspect they are.
  • We do not know how aggressively models will deduplicate near-identical content across sites, which has implications for syndication and PR.
  • We do not know what happens when assistants begin transacting on behalf of users. The shortlist prompt today becomes the procurement decision tomorrow. The strategic implications are large and largely unresearched.

We also do not know how much of what works today will still work in twelve months. Models retrain, retrieval layers change, search partners come and go. Anyone selling you a definitive five-year AI SEO strategy is selling you confidence they do not have. We try not to.

Adjacent disciplines also matter and we cover them properly elsewhere. Site architecture and Core Web Vitals still feed LLM perception of authority, which we tackle in our web design guide and the case for hand-coded websites in 2026. Migration risk is non-trivial because LLMs cache learned associations between URLs and entities, which we touched on in our B2B website migration guide.


Our agency view, after running this discipline across a portfolio that includes managed services firms, ERP consultancies, IT support specialists and SaaS businesses, is that AI SEO is real, measurable, valuable and unfinished. The clients we have moved from invisibility to consistent citation in their core prompt set are seeing pipeline effects we can defend. The discipline that gets them there is partly old (technical SEO, schema, content quality, brand building) and partly new (prompt audits, llms.txt, citation-share measurement). What does not work is treating it as a one-off project. AI SEO is now a quarterly rhythm, like any serious marketing programme.

If you want help building that rhythm into your own marketing, the AI SEO service page covers how we work. Or get in touch and we can have a sensible conversation about where your visibility stands today and what is worth doing about it.

Frequently asked questions

Is AI search actually driving B2B tech buyers, or is it hype?
It's real, but nuanced: the volume is still small while the influence is large. We see anywhere between two and twelve per cent of new business enquiries citing ChatGPT, Claude, Gemini or Perplexity as part of their research path, depending on the client and the buyer profile. The number is growing month on month. More importantly, AI assistants are over-indexed in the discovery phase: the buyer asks an LLM who they should shortlist before they ever reach Google. If you're not in the citation, you don't enter the shortlist. The volume number understates the strategic stakes.
How do we measure something we can't see in Google Analytics?
Imperfectly, but more is possible than most teams assume. Direct AI traffic shows up in GA4 as referrals from chatgpt.com, perplexity.ai, gemini.google.com and similar. We layer on form-question triggers ('how did you hear about us'), call recording analysis, manual citation logs and tooling like Profound, Otterly and AthenaHQ that tracks LLM mentions across branded and unbranded queries. None of it is as clean as Google Search Console. Together they paint a real-enough picture to defend continued investment. We share what we measure, what we infer and what we don't yet know in monthly reporting.
Will our existing SEO work translate to AI search?
Mostly yes, but with caveats. The fundamentals overlap: clear semantic structure, accurate structured data, factual content, authoritative citations from third-party sources. But AI search rewards a few things classical SEO doesn't: extractable answer formatting (clean lists, definitions, tables), entity-level clarity (who you are, what you do, where you operate, expressed in unambiguous language) and a pattern of being cited by sources LLMs already trust. We build on a solid SEO foundation and layer AI-specific work on top. Skip the SEO floor and the AI work has nothing to stand on.
What's the difference between AI SEO and generative engine optimisation?
Mostly marketing. The terms AI SEO, GEO (generative engine optimisation), AEO (answer engine optimisation) and LLMO (large language model optimisation) all describe the same broad discipline: making sure AI assistants discover, understand and cite your content accurately. Different agencies use different acronyms, often to differentiate themselves. The substance is the same. We use 'AI SEO' because it's the term most clients already use and because it makes clear that this is an evolution of search optimisation rather than something separate. The work doesn't change because the acronym does.
Do we need our own LLM-readable feed (NLWeb, llms.txt)?
It's becoming worth doing, especially for B2B tech firms competing on technical depth. NLWeb is Microsoft's natural-language web framework that lets LLMs query structured site content directly. The llms.txt convention is a lightweight cousin that lists key URLs in a machine-readable format. Neither is a silver bullet. Together they make your site easier for AI agents to discover, parse and cite, and both are early enough that adopting them gives a small differentiation advantage. We deploy both for clients where the content surface is large enough to justify the engineering work, typically anything from 50 substantial pages upward.

Last updated 29 April 2026

