Server-side tagging for B2B tech: setup and pitfalls
How we set up server-side tagging for B2B tech firms, including GTM Server containers, consent handling, match rates and the implementation pitfalls.
Server-side tagging is one of those topics where the marketing community has split into two camps. One group treats it as an essential 2026 setup, the other as an over-engineered solution to a problem most B2B firms do not have. Both views are too clean. The reality, in our experience, is that server-side tagging is genuinely useful for B2B tech accounts above a certain volume, and a reasonable waste of money below it.
We have set up server-side GTM for a number of B2B tech clients and walked others through why they did not need it yet. Below is the framework we use to decide, and the implementation pitfalls that catch most teams when they do go ahead.
What server-side tagging actually does
A standard client-side tagging setup has the user’s browser firing pixels and tags directly to Google, LinkedIn, Meta and any other platforms in scope. Each tag is a separate request from the browser, blockable by privacy extensions, ad blockers, ITP/ETP browser restrictions and increasingly aggressive cookie policies.
A server-side setup routes those events through a server you control (typically a Google Cloud Run container running GTM Server) before they go on to the platforms. The browser sends one request to your server, the server does the enrichment and forwarding. Match rates improve, you can hash and clean data before it leaves your infrastructure, you control the cookie domain and you can manage consent more cleanly.
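To make the routing idea concrete, here is a minimal sketch of the fan-out step. Real GTM Server is Google's managed container, not code you write yourself, and the destination field names below are our illustrative assumptions, not actual platform schemas.

```python
# Illustrative sketch only: one incoming browser event is mapped to
# several outbound per-platform payloads on the server, rather than the
# browser firing a separate blockable request per platform.
def fan_out(event: dict) -> dict:
    """Map one incoming browser event to per-platform payloads."""
    base = {"event_name": event["name"], "timestamp": event["ts"]}
    return {
        "ga4": {**base, "client_id": event["cid"]},
        "google_ads": {**base, "gclid": event.get("gclid")},
        # Field names here are hypothetical placeholders
        "linkedin_capi": {**base, "source": "server"},
    }

payloads = fan_out({"name": "demo_request", "ts": 1700000000,
                    "cid": "123.456", "gclid": "abc"})
```

The point is architectural: enrichment, hashing and consent gating all happen once, server-side, before anything reaches a platform.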
The technical setup is not the hard part. The hard parts are the decisions about what to track, how to consent and how to defend the change to a privacy review.
When server-side tagging is worth doing
We typically recommend server-side tagging when three conditions are met.
First, the account has enough volume that match-rate improvements meaningfully change the bidding outcome. For Google Ads accounts already running offline conversion imports, server-side enrichment of the user data signals (Enhanced Conversions for Web with the user data hashed and sent server-side) reliably improves match rates by 15 to 30 per cent. On a £20,000 a month account, that is real money. On a £3,000 a month account producing twelve conversions, the impact is statistical noise.
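The "hashed user data" part of Enhanced Conversions follows a documented normalisation step before the SHA-256 hash. A sketch of that step, based on Google's published rules (trim, lowercase, and strip dots from the local part of gmail.com / googlemail.com addresses):

```python
import hashlib

def normalise_email(raw: str) -> str:
    """Normalise an email per Google's Enhanced Conversions rules:
    trim whitespace, lowercase, and remove periods in the local part
    of gmail.com / googlemail.com addresses."""
    email = raw.strip().lower()
    local, _, domain = email.partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.replace(".", "")
    return f"{local}@{domain}"

def sha256_hex(value: str) -> str:
    """SHA-256 hex digest, the format Enhanced Conversions expects."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

hashed = sha256_hex(normalise_email(" Jane.Doe@GMail.com "))
```

Getting the normalisation wrong (hashing before lowercasing, for instance) silently destroys the match rate the project was meant to improve, which is why we test this step explicitly.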
Second, the firm has the technical capacity to maintain it. Server-side GTM is a real piece of infrastructure. Containers need updating, hosting costs need monitoring, debugging is harder than client-side and the person who built it cannot leave without documentation. Firms that do not have either an internal developer or an external partner committed to ongoing maintenance should not run it.
Third, the firm cares about the privacy posture. Server-side tagging makes consent enforcement, data minimisation and IP anonymisation much easier to defend. For B2B tech firms whose clients are regulated (financial services, healthcare, public sector), this is often the strongest reason to bother. Our page-speed checklist for tech sites is a useful adjacent read, since fewer client-side tags also helps performance.
When it is not worth doing
Conversely, we have advised clients against server-side tagging when their volume is low (under 20 to 30 conversions a month), their tracking is otherwise clean, their consent handling is already well-managed at the client side and the maintenance overhead would distract from work that would actually move pipeline.
Some agencies push server-side tagging because it is a chargeable project. We are honest with clients that the work has a real ceiling of value, and below that ceiling, the right answer is “not yet”.
A typical setup, end to end
For a B2B tech client where we have decided to go ahead, the build looks roughly like this.
- Provision GTM Server on Google Cloud Run, with a custom subdomain (say metrics.client.com) that resolves to the container
- Migrate Google Analytics 4, Google Ads conversion linker and conversion tags through the server container
- Migrate the LinkedIn Insight Tag via the LinkedIn Conversions API (CAPI) flow
- Add Microsoft Ads UET via its server-side equivalent where available
- Set up Meta Conversions API where Meta is in scope
- Configure consent mode v2 (or equivalent) so consent state propagates correctly to all destinations
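The consent propagation step can be thought of as a gate in front of the fan-out. The consent mode v2 signal names below are real; the mapping of signals to destinations is our illustrative assumption and would need checking against each platform's requirements:

```python
# Hedged sketch: which destinations may receive an event for a given
# consent mode v2 state. The REQUIRED mapping is an assumption for
# illustration, not an authoritative per-platform ruleset.
REQUIRED = {
    "ga4": {"analytics_storage"},
    "google_ads": {"ad_storage", "ad_user_data"},
    "linkedin_capi": {"ad_storage", "ad_user_data"},
}

def allowed_destinations(consent: dict) -> set:
    """Return the destinations whose required signals are all granted."""
    granted = {signal for signal, state in consent.items() if state == "granted"}
    return {dest for dest, needs in REQUIRED.items() if needs <= granted}
```

A test plan then becomes a table of consent states against expected destination sets, which is exactly the walk-through we recommend below under consent pitfalls.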
The Conversions API destinations are the meaningful upgrade. Once events are flowing server-side with hashed user data, the platforms can match them to ad clicks even when the client-side cookie has been blocked or expired.
This connects to the work in our conversion tracking for long B2B sales cycles piece, which covers the offline side. Server-side handles the online events. Offline imports handle the CRM-to-platform loop. Together they account for most of the data quality improvements available to B2B tech advertisers.
Pitfalls that catch teams out
Five things we see go wrong on B2B server-side implementations.
Consent mode misconfiguration
Consent mode v2 has tightened the rules on what data can be sent before user consent. Implementations that relied on consent mode's "advanced" features without thinking through the legal basis often end up either over-blocking (no events at all) or under-blocking (events fire regardless of consent). Both failure modes cause problems. We recommend a consent management platform (CMP) that integrates cleanly with GTM Server, and a test plan that walks through every consent state.
Domain and cookie scope
Setting up the server container on a subdomain (metrics.client.com) rather than a separate domain is essential for cookie persistence on Safari and Firefox. Implementations that put the container on a third-party domain to “make it easier” lose much of the match-rate improvement that justified the project.
Forgetting the consent linker
Cross-domain tracking still needs the GA4 cross-domain or conversion linker logic. Server-side does not magically solve cross-domain. We see implementations that drop the linker and then wonder why pipeline attribution fell apart between the marketing site and a separate booking subdomain.
Tag duplication during migration
The migration period, where some tags are client-side and some are server-side, is risky. A common mistake is leaving the original client-side LinkedIn Insight Tag firing alongside the new CAPI events. The platform de-duplicates only some events, and the discrepancy can either inflate or deflate reported conversions for weeks. We run a strict cutover plan with a test environment before any tag is permanently turned off.
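The deduplication contract during the overlap window is worth spelling out: the client-side tag and the server-side call must share an event identifier so the platform can drop the duplicate. The sketch below follows the event_id convention used by Meta's CAPI deduplication; treat the payload shapes as illustrative:

```python
# Sketch of the dedup contract: the same event_id is generated once and
# attached to both the browser-fired and server-fired copies of an event,
# so the platform can recognise them as one conversion.
import uuid

def build_event_pair(name: str) -> tuple[dict, dict]:
    event_id = str(uuid.uuid4())  # generated once, shared by both paths
    client = {"event_name": name, "event_id": event_id, "source": "browser_pixel"}
    server = {"event_name": name, "event_id": event_id, "source": "server_capi"}
    return client, server

client_evt, server_evt = build_event_pair("Lead")
```

If the two paths generate independent IDs, deduplication fails silently and reported conversions drift, which is the discrepancy described above.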
Not measuring the lift
The point of the project is usually match-rate improvement and resilience to browser-side blocking. If the team does not measure those things, leadership has no way to evaluate whether the project worked. We capture baseline match rates per platform (Google Ads Enhanced Conversions match rate, LinkedIn CAPI match rate, Meta CAPI match rate) before the migration, and track the lift afterwards. Our attribution models for multi-touch B2B piece covers how the resulting data feeds into reporting.
Cost and ongoing overhead
Hosting a small GTM Server container on Google Cloud Run typically costs £30 to £150 a month depending on traffic. The build itself is a 30 to 80-hour project for a competent agency or developer, plus another 10 to 20 hours of CMP work and testing.
Ongoing, the maintenance is real. Containers need updating quarterly, new tag templates appear, platform APIs change. We typically build in a couple of hours a month of maintenance into the retainer for clients running server-side GTM.
The reporting bonus
A side effect that often surprises clients: server-side tagging makes downstream reporting cleaner. Because all events are landing on a single server endpoint before they are forwarded to the platforms, you can also send a copy to a data warehouse (BigQuery, Snowflake) for honest cross-channel reporting. This is genuinely useful for the kind of pipeline-tracking work we cover in the LinkedIn-specific piece.
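The warehouse copy can be as simple as batching events into newline-delimited JSON, the format BigQuery's batch load jobs accept. A minimal sketch, with field names as our assumptions:

```python
# Sketch: serialising a copy of each server-side event as newline-
# delimited JSON (NDJSON) for a BigQuery batch load. Field names are
# illustrative, not a required schema.
import json

def to_ndjson(events: list[dict]) -> str:
    """One JSON object per line, keys sorted for stable output."""
    return "\n".join(json.dumps(e, sort_keys=True) for e in events)

batch = to_ndjson([
    {"event": "demo_request", "channel": "linkedin"},
    {"event": "pricing_view", "channel": "google_ads"},
])
```

Streaming inserts or a Pub/Sub hop are equally valid routes; the design point is that the warehouse copy is written before any platform-side sampling or modelling touches the data.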
It also makes tracking AI search traffic easier, because the raw event data can be attributed to AI referrer sources before any platform-side reporting compression kicks in.
When to start, when to wait
A pragmatic decision rule we use with clients: if you are spending less than £10,000 a month on paid media, the maintenance overhead of server-side tagging is likely to outweigh the benefit. If you are spending more than that and you have not yet implemented Enhanced Conversions or LinkedIn CAPI, the right next step is the basic offline conversion plumbing first. Server-side comes after that, when the data flows are otherwise stable.
If you are spending more than £30,000 a month on paid and you are still on a fully client-side stack with adblocker-affected match rates, server-side tagging is genuinely overdue and is probably the single biggest data-quality lever available to you.
If you’d like a second opinion on attribution or budget split, drop us a line. The broader paid context sits on our paid media service page.
Frequently asked questions
When is server-side tagging worth implementing for B2B tech?
When the account has enough conversion volume for match-rate gains to change bidding outcomes, the firm can maintain the infrastructure, and privacy posture matters. Below roughly 20 to 30 conversions a month, the overhead usually outweighs the benefit.
How much does server-side GTM cost to run?
Hosting a small container on Google Cloud Run typically costs £30 to £150 a month, with the initial build at 30 to 80 hours plus another 10 to 20 hours of CMP work and testing.
What is the most common server-side tagging implementation mistake?
Leaving client-side tags firing alongside their new server-side equivalents during migration, which inflates or deflates reported conversions until a strict cutover is completed.