Negative keyword strategy for B2B tech advertisers
How we build and maintain negative keyword lists for B2B technology accounts on Google and Microsoft Ads, with categories, tools and review cadence.
The single fastest way to improve a B2B tech paid search account is rarely a new keyword, a new ad or a new landing page. It is a serious negative keyword review. Most accounts we audit have either no negatives at all beyond the obvious (“free”, “jobs”), or thousands of historical negatives applied at the ad-group level with no governance and no logic. Both are expensive.
Negative keywords matter more in B2B tech than in most categories because the search vocabulary is full of overlap. “ERP” pulls academic queries. “Cloud” pulls weather. “Service desk” pulls furniture. Without a structured approach to negatives, smart bidding will confidently spend the budget on the wrong audience. Below is the framework we apply when rebuilding accounts.
The four categories of negatives we build
We sort negatives into four buckets, and review each on a different cadence. Treating them as one undifferentiated list is how things drift.
- Universal negatives: irrelevant under any circumstances (“free”, “salary”, “course”, “tutorial”, “wikipedia”, “reddit”).
- Industry-mismatch negatives: terms that pull the wrong sector (a managed IT firm wants to exclude “construction”, “estate agent”, “automotive” if they do not serve those industries).
- Funnel-stage negatives: terms that signal the wrong stage of the journey for a given campaign (“what is”, “definition”, “examples” excluded from BoFu campaigns).
- Brand-protection negatives: terms that should not pull through Performance Max, Demand Gen or broad-match search (“[your brand]” excluded from non-brand campaigns).
Each list is applied at the right level. Universal negatives go on the account-level shared list. Industry-mismatch goes at the campaign level. Funnel-stage negatives go at the campaign level, often with cross-campaign exclusions to prevent BoFu and ToFu cannibalising each other. Brand-protection goes on PMax, Demand Gen and any non-brand search campaign.
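To keep that governance visible, we sometimes write the buckets down as data before touching the platforms. A minimal Python sketch, with illustrative terms and application levels rather than a complete list:

```python
# Illustrative sketch: the four negative buckets and where each is applied.
# Terms, levels and the brand name are examples, not a recommended list.
NEGATIVE_BUCKETS = {
    "universal": {
        "level": "account_shared_list",
        "terms": ["free", "salary", "course", "tutorial", "wikipedia", "reddit"],
    },
    "industry_mismatch": {
        "level": "campaign",
        "terms": ["construction", "estate agent", "automotive"],
    },
    "funnel_stage": {
        "level": "campaign",  # plus cross-campaign exclusions, covered below
        "terms": ["what is", "definition", "examples"],
    },
    "brand_protection": {
        "level": "pmax_demand_gen_and_nonbrand_search",
        "terms": ["acme corp"],  # hypothetical brand, excluded from non-brand campaigns
    },
}

for bucket, config in NEGATIVE_BUCKETS.items():
    print(f"{bucket}: {len(config['terms'])} terms applied at {config['level']}")
```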
Building the universal list
Some negatives are non-negotiable for B2B tech accounts. We start every account with a baseline list of around 200 to 400 terms covering:
- Job-seeking modifiers: “jobs”, “salary”, “career”, “vacancy”, “recruitment”
- Education and DIY: “course”, “tutorial”, “training”, “certification”, “exam”, “learn”
- Discovery and definitions: “what is”, “meaning”, “definition”, “examples”, “wikipedia”
- Pricing irrelevance: “free”, “open source” (unless the offer matches)
- Reseller and second-hand: “used”, “refurbished”, “second hand”
- Comparison sites and aggregators: “g2”, “capterra”, “trustradius” (unless you specifically want comparison traffic)
- Geographic exclusions: regions or countries you do not serve, with the geo trade-offs covered more fully in Google Ads for MSPs and geo targeting
This is a starting point, not a finished list. The real work is the next step.
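One way to keep the baseline maintainable is to store it as category groups and generate the flat list from them. A hypothetical sketch that writes a simple CSV; the column layout is ours, not an official bulk-upload template:

```python
import csv

# Category groups for the baseline universal list (abridged examples).
BASELINE = {
    "job_seeking": ["jobs", "salary", "career", "vacancy", "recruitment"],
    "education_diy": ["course", "tutorial", "training", "certification", "exam", "learn"],
    "definitions": ["what is", "meaning", "definition", "examples", "wikipedia"],
    "second_hand": ["used", "refurbished", "second hand"],
}

def flatten(groups: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Deduplicate across categories, pairing each term with its category."""
    seen: set[str] = set()
    rows = []
    for category, terms in groups.items():
        for term in terms:
            if term not in seen:
                seen.add(term)
                rows.append((term, category))
    return rows

with open("universal_negatives.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["negative_keyword", "category"])  # our own layout, not a platform template
    writer.writerows(flatten(BASELINE))
```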
Mining the search terms report
Once the baseline is in place, we move to the search terms report. For an account at moderate scale, we would aim to review the report weekly for the first two months of a rebuild, then fortnightly once the account is stable. The mechanics are simple: sort by spend, look for queries that converted poorly or burned budget without conversion, and add the irrelevant root term as a negative.
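The triage is easy to script against a CSV export of the report. A sketch assuming columns named search_term, cost, clicks and conversions; real export headers vary, so rename to match before running:

```python
import pandas as pd

# Assumed columns: "search_term", "cost", "clicks", "conversions".
df = pd.read_csv("search_terms_report.csv")

# Surface the expensive non-converters first: queries with a fair number
# of clicks but no conversions, sorted by spend.
candidates = (
    df[(df["conversions"] == 0) & (df["clicks"] >= 10)]
    .sort_values("cost", ascending=False)
    .head(50)
)

# A human decides which root term becomes the negative;
# the script only produces the shortlist.
print(candidates[["search_term", "cost", "clicks"]].to_string(index=False))
```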
The judgement call is around match types. We default to phrase-match negatives for most cases because exact-match negatives leave too many adjacent variants alive. A phrase-match negative on “free trial” will block “best free trial software” and “free trial of [category]” but will leave “trial” alone, which is what we want.
For terms that should never appear under any combination (“free”, “jobs”), we use exact and phrase together. Belt and braces.
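The behaviour is easy to sanity-check in code. A simplified simulator of negative matching, treating a phrase negative as a contiguous, order-sensitive span and ignoring punctuation and plurals (negatives do not match close variants):

```python
def tokens(text: str) -> list[str]:
    return text.lower().split()

def blocked_by_exact(negative: str, query: str) -> bool:
    """Exact-match negative: blocks only the identical query."""
    return tokens(negative) == tokens(query)

def blocked_by_phrase(negative: str, query: str) -> bool:
    """Phrase-match negative: blocks queries containing the words,
    in order, as a contiguous span (a simplification)."""
    neg, q = tokens(negative), tokens(query)
    return any(q[i:i + len(neg)] == neg for i in range(len(q) - len(neg) + 1))

assert blocked_by_phrase("free trial", "best free trial software")
assert blocked_by_phrase("free trial", "free trial of itsm software")  # hypothetical category
assert not blocked_by_phrase("free trial", "trial of itsm software")   # "trial" alone survives
assert not blocked_by_exact("free trial", "best free trial software")  # exact leaves variants alive
```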
The cross-campaign exclusion problem
A subtle problem we see in nearly every multi-campaign B2B account is keywords competing across campaigns. The BoFu “ITSM software” exact campaign and the MoFu “ITSM” phrase campaign both fire on a query like “ITSM software for finance”. The auction picks one, and it might not be the one with the right landing page.
The fix is cross-campaign negatives. Every keyword that lives in BoFu as exact also goes in as an exact-match negative on MoFu and ToFu. Every keyword in MoFu as phrase also goes in as a phrase-match negative on ToFu. The result is a clean hierarchy: queries land in the most specific campaign that targets them, and the broader campaigns only pick up genuinely upper-funnel variants.
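The hierarchy can be generated mechanically rather than maintained by hand. A sketch with hypothetical campaign names and keywords:

```python
# Hypothetical funnel keyword lists; campaign names and terms are examples.
CAMPAIGNS = {
    "bofu_exact": ["itsm software", "itsm software pricing"],
    "mofu_phrase": ["itsm"],
    "tofu_broad": ["it service management"],
}

def cross_campaign_negatives(campaigns: dict[str, list[str]]) -> dict[str, list[tuple[str, str]]]:
    """Every BoFu exact keyword becomes an exact negative on MoFu and ToFu;
    every MoFu phrase keyword becomes a phrase negative on ToFu."""
    negatives: dict[str, list[tuple[str, str]]] = {name: [] for name in campaigns}
    for kw in campaigns["bofu_exact"]:
        negatives["mofu_phrase"].append((kw, "exact"))
        negatives["tofu_broad"].append((kw, "exact"))
    for kw in campaigns["mofu_phrase"]:
        negatives["tofu_broad"].append((kw, "phrase"))
    return negatives

for campaign, negs in cross_campaign_negatives(CAMPAIGNS).items():
    print(campaign, negs)
```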
This hierarchy is closely related to the funnel structure we cover in Google Ads for SaaS. Without it, smart bidding cannot tell the difference between a research query and a buying query.
Negatives for Performance Max and Demand Gen
PMax and Demand Gen are particularly vulnerable to cross-campaign cannibalisation. By default, PMax will happily eat brand traffic, BoFu category traffic and any other search query it can match against your asset groups. Without negative discipline, your brand campaign’s metrics will look strong because PMax is stealing the brand auction.
Our default PMax exclusions:
- Brand keyword list as account-level negatives (now that Google supports them on PMax)
- Competitor brand names if you do not want to bid on them
- Job-related and free-trial-related terms via the same account-level negative list
Demand Gen has fewer levers, but at minimum we exclude existing customer audiences and brand searchers from prospecting campaigns, then allow them back in at the retargeting layer. The retargeting logic is covered in retargeting tech buyers without burning the brand.
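We find it useful to write the exclusion matrix down as data so it can be audited at a glance. An illustrative checklist only, not an API payload; the campaign and audience labels are examples:

```python
# Default exclusions per campaign type, written as reviewable data.
DEFAULT_EXCLUSIONS = {
    "pmax": [
        "brand terms via the account-level negative list",
        "competitor brand names, if not bidding on them",
        "job- and free-trial-related terms",
    ],
    "demand_gen_prospecting": [
        "existing customer audiences",
        "brand searchers",
    ],
    "demand_gen_retargeting": [],  # the audiences excluded above are allowed back in here
}

for campaign_type, exclusions in DEFAULT_EXCLUSIONS.items():
    print(campaign_type, "->", exclusions or "no default exclusions")
```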
Tooling and review cadence
We use a combination of Google’s native search terms report, n-gram analysis (clustering search terms by recurring tokens to spot themes faster), and where the account scale justifies it, third-party tools that surface query-level patterns automatically. The point is not the tool. It is the cadence.
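The n-gram step itself is only a few lines. A sketch that reuses the search terms export from earlier and aggregates spend by one- and two-word tokens to surface wasteful themes:

```python
from collections import defaultdict

import pandas as pd

df = pd.read_csv("search_terms_report.csv")  # assumed columns as in the earlier sketch

def ngrams(text: str, n: int) -> list[str]:
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

# Aggregate cost and conversions per n-gram across all queries.
stats: dict[str, list[float]] = defaultdict(lambda: [0.0, 0.0])  # ngram -> [cost, conversions]
for _, row in df.iterrows():
    for n in (1, 2):
        for gram in ngrams(str(row["search_term"]), n):
            stats[gram][0] += row["cost"]
            stats[gram][1] += row["conversions"]

# Expensive n-grams with no conversions are the negative candidates to review.
candidates = sorted(
    ((gram, cost) for gram, (cost, convs) in stats.items() if convs == 0),
    key=lambda item: item[1],
    reverse=True,
)
for gram, cost in candidates[:20]:
    print(f"{gram}: £{cost:.2f} with zero conversions")
```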
Our standard review schedule:
| Frequency | Action |
|---|---|
| Weekly | Search terms report mining for the first eight weeks of a new build |
| Fortnightly | Search terms report mining once the account is stable |
| Monthly | n-gram review and cross-campaign cannibalisation check |
| Quarterly | Universal negative list audit, removing anything no longer relevant |
Without the cadence, lists go stale. A negative added to block a 2023 trend is still excluding queries in 2026, sometimes blocking legitimate ones.
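Staleness can be flagged mechanically if the date each negative was added is recorded. A hypothetical sketch; the terms, dates and threshold are examples:

```python
from datetime import date, timedelta

# Hypothetical negative list with the date each entry was added.
negatives = [
    {"term": "chatgpt plugin", "added": date(2023, 6, 1)},  # blocked a 2023 trend
    {"term": "jobs", "added": date(2024, 1, 15)},
]

REVIEW_AFTER = timedelta(days=365)  # our own threshold, tune to taste
today = date.today()

for neg in negatives:
    age = today - neg["added"]
    if age > REVIEW_AFTER:
        print(f"Review: '{neg['term']}' has been live for {age.days} days")
```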
The trap of over-negation
A final note. We have seen accounts so heavily negated that the keyword footprint becomes invisible. Smart bidding is starved of impressions, learning resets and CPCs rise as the auction comes to treat the account as low-quality. Negation is a precision tool, not a defensive reflex. The volume-starvation problem is one we tackle from the bidding side in bidding for low-volume B2B keywords.
Our test before adding any negative: “Does this term, under any reasonable interpretation, signal someone who could plausibly buy?” If the answer is “maybe”, we leave it in and let the bid strategy decide. If the answer is a clear “no”, it becomes a negative. The line moves with conversion data, not gut feel.
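“The line moves with conversion data” can be made precise. One rule we might encode, offered as an assumption rather than a fixed policy: only negate on performance once a term has accumulated enough zero-conversion clicks to be confidently below the account's baseline conversion rate. A sketch of that reasoning:

```python
import math

def clicks_needed(baseline_cvr: float, confidence: float = 0.95) -> int:
    """Zero-conversion clicks needed before a term is confidently
    underperforming baseline: solve (1 - cvr)^n < 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - baseline_cvr))

def should_negate(clicks: int, conversions: int, baseline_cvr: float) -> bool:
    """A 'clear no' only when the data says so; 'maybe' stays in."""
    return conversions == 0 and clicks >= clicks_needed(baseline_cvr)

# At a 3% baseline conversion rate, roughly 99 zero-conversion clicks
# are needed before negating on performance alone.
print(clicks_needed(0.03))          # 99
print(should_negate(40, 0, 0.03))   # False: not enough evidence yet
print(should_negate(120, 0, 0.03))  # True
```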
If your search account is generating clicks but no pipeline, the chances are the negative keyword work has slipped. We usually find a 15 to 30 per cent budget recovery in the first two months of a rebuild before we touch any positive keyword work. That budget then gets reallocated into BoFu campaigns or Microsoft Ads, which often outperforms the source campaign on cost per opportunity.
If you’d like a second opinion on your negative keyword lists or wider account structure, drop us a line. You can also see how we run paid search programmes on our paid media service page.
Frequently asked questions
How often should we review the search terms report?
Weekly for the first eight weeks of a new build, fortnightly once the account is stable, with a monthly n-gram and cross-campaign cannibalisation review on top.
Should we use phrase-match or exact-match negatives by default?
Phrase match. Exact-match negatives leave too many adjacent variants alive; for terms that should never appear at all, we apply exact and phrase together.
Can you over-negate a Google Ads account?
Yes. Over-negated accounts starve smart bidding of impressions, reset learning and push CPCs up. Only add a negative when the term clearly cannot signal a buyer.