Attribution models for tech companies with multi-touch journeys
How we choose attribution models for B2B tech firms with long multi-channel journeys, comparing data-driven, position-based and modelled approaches.
Attribution is the question that ends most paid media conversations badly. The CFO wants a clean number for “what does Google Ads produce”. The marketing team explains that the buyer touched LinkedIn, Google, Microsoft, an SDR email, a webinar, two retargeting ads and a third-party review site before converting. The CFO repeats the question. Everyone leaves the room frustrated.
For B2B tech firms with multi-channel journeys spanning six to eighteen months, no single attribution model gets it right. The work is choosing models that are useful for different decisions, being honest about what each one can and cannot tell you, and structuring reporting so the conversation moves forward instead of in circles. Below is how we approach it.
What attribution can and cannot do
It is worth being clear about scope. Attribution models distribute credit for a conversion across the touchpoints that preceded it. They do not measure incremental impact. A model can tell you that LinkedIn contributed to 35 per cent of opportunities last quarter. It cannot tell you what would have happened if you had cut LinkedIn entirely.
For incremental measurement you need geo holdouts, audience holdouts or proper marketing mix modelling, none of which are quick or cheap. For most B2B tech firms, multi-touch attribution is the practical compromise: not perfect, but informative enough to make budget decisions that are better than gut feel.
The mistake to avoid is treating an attribution model’s output as truth. It is a lens. Different lenses show different pictures of the same journey.
The models worth understanding
The models that come up in real B2B tech reporting are a smaller set than the academic literature suggests.
Last-click. The simplest, the default in Google Ads’ single-touch reporting until recently, and almost always misleading for B2B. Credits the final touchpoint, ignores everything that warmed the buyer.
First-click. Credits the first touchpoint. Useful for understanding which channels source new audience, useless for conversion optimisation.
Linear. Splits credit equally across all touchpoints. Easy to understand, easy to explain to a CFO, but tends to overweight low-value touches.
Time-decay. More credit to recent touches, less to older ones. Reasonable for shorter cycles, less so for the six-to-twelve-month journeys typical in enterprise software.
Position-based (40-20-40 or U-shaped). Gives 40 per cent of credit each to the first and last touch, with the remaining 20 per cent shared across the middle touches. Acknowledges that opening and closing channels do different jobs.
Data-driven (DDA). Google Ads’ machine-learned model that distributes credit based on observed conversion paths in your account. Now the default for Google’s reporting, with the caveat that it requires conversion volume to learn properly.
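The mechanical difference between these models is easy to see in code. A minimal sketch, with invented channel names, of how linear, time-decay and position-based (40-20-40) models split a single conversion's credit across an ordered journey:

```python
def distribute_credit(touches, model="position"):
    """Split one conversion's credit across an ordered list of touchpoints.

    Illustrative only: real platforms implement these models with more
    nuance (lookback windows, dedupe rules), but the weighting logic is this.
    """
    n = len(touches)
    if n == 1:
        return {touches[0]: 1.0}

    if model == "linear":
        weights = [1 / n] * n
    elif model == "time_decay":
        # Later touches weighted more heavily; credit halves per step back.
        raw = [2 ** i for i in range(n)]
        weights = [w / sum(raw) for w in raw]
    elif model == "position":
        # 40% to first, 40% to last, 20% shared across the middle.
        if n == 2:
            weights = [0.5, 0.5]
        else:
            middle = 0.2 / (n - 2)
            weights = [0.4] + [middle] * (n - 2) + [0.4]
    else:
        raise ValueError(f"unknown model: {model}")

    credit = {}
    for touch, w in zip(touches, weights):
        credit[touch] = credit.get(touch, 0.0) + w
    return credit

journey = ["LinkedIn", "Google Search", "Webinar", "Retargeting", "Brand Search"]
print(distribute_credit(journey, "position"))
# LinkedIn and Brand Search get 0.4 each; the three middle touches share 0.2
```

Run the same journey through all three models and the "different lenses" point becomes concrete: the channel rankings change even though the underlying path is identical.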
We tend to lead with DDA inside Google Ads for the campaigns it can see, and use a position-based or custom modelled view for the cross-channel picture. Linear works well as a sense-check.
Why no single platform’s attribution is enough
The structural problem with relying on Google Ads’ attribution (or any single platform’s) is that it only sees what it can see. Google Ads sees Google clicks and on-site events. It does not see the LinkedIn touch, the Microsoft Ads click, the SDR email or the offline conversation that closed the deal.
Each platform reports a flattering, partial view of its own contribution. Add up the reported attribution across Google, LinkedIn, Microsoft, Meta and HubSpot and you will routinely find yourself crediting 200 per cent of the actual revenue. Each platform claims the conversion under its own logic.
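A toy example of the double-counting, with an invented deal value. Each platform observed a touch on the same deal and claims the conversion in full under its own logic:

```python
# One closed deal worth £50,000, touched by three platforms.
# Each platform's own reporting claims the conversion it observed.
platform_claims = {"Google Ads": 50_000, "LinkedIn": 50_000, "HubSpot": 50_000}

claimed = sum(platform_claims.values())
actual = 50_000
print(f"Claimed across platforms: £{claimed:,} ({claimed / actual:.0%} of actual)")
# Claimed across platforms: £150,000 (300% of actual)
```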
The fix is a single source of truth that sees all channels. For most B2B tech firms that means GA4 (with proper UTM hygiene), or for clients running larger spends, a dedicated attribution layer like Dreamdata, Bizible (now part of Adobe) or HubSpot’s revenue attribution. The platform-level numbers then become inputs to the consolidated view, not standalone reports.
What we actually report on
The reporting we tend to land on across most B2B tech clients has three layers.
First, channel-level cost per opportunity, modelled multi-touch. This is the headline number we use for budget decisions. It uses the consolidated attribution view, with credit distributed across touches, mapped to opportunities created in the CRM.
Second, channel-level pipeline-influenced revenue. A binary view: did this channel touch the buying committee at any point in the journey? This complements the modelled view because it answers the “did this channel help?” question without having to argue about credit weighting.
Third, single-touch views for diagnostic purposes only. Last-click and first-click reports are useful for understanding what the channels look like at the extremes, but they do not drive budget decisions on their own.
The third layer matters because there are still platform optimisation conversations that need single-touch views. Smart bidding inside Google Ads, for instance, optimises against Google’s data-driven attribution within the account. That is an in-platform decision, not a cross-channel one.
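The arithmetic behind the first layer is simple once the consolidated view exists: fractional multi-touch credit per channel, summed across all journeys in the period, divided into spend. Spend figures and credit fractions below are invented for illustration:

```python
# Illustrative figures only. Credited opportunities are fractions of
# opportunities assigned to each channel by the multi-touch model,
# summed across every journey in the reporting period.
spend = {"LinkedIn": 30_000, "Google Search": 20_000, "Retargeting": 5_000}
credited_opps = {"LinkedIn": 12.4, "Google Search": 9.1, "Retargeting": 3.5}

for channel in spend:
    cpo = spend[channel] / credited_opps[channel]
    print(f"{channel}: £{cpo:,.0f} per credited opportunity")
```

Note what this does to the expensive-CPC channels: LinkedIn's high click cost can still land at a reasonable cost per opportunity once its opening and middle touches are credited.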
Tying CRM data into the attribution layer
The work that makes any of this possible is the CRM-to-platform plumbing. Every form fill, every CRM stage progression and every closed-won deal needs to flow back into the attribution view, with the original touchpoints preserved. The mechanics are covered in detail in our conversion tracking guide for long B2B sales cycles.
The non-trivial parts are usually:
- GCLID, MSCLKID, fbclid and LinkedIn member ID capture on the original visit
- UTM persistence across multi-session journeys (a buyer who first lands via LinkedIn and converts six weeks later via direct traffic should still have the LinkedIn touch credited)
- Multi-contact attribution at account level, not just contact level. A B2B opportunity has a buying committee, not a single buyer
- Offline conversion imports from the CRM back to ad platforms, so smart bidding optimises against opportunities rather than raw form fills, including pushing pipeline data back to LinkedIn where the channel justifies it
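The UTM-persistence point above can be sketched as a pure function: keep the first visit's parameters for the life of the journey, record each new visit separately. Parameter names follow the standard URL parameters; the storage mechanism (first-party cookie, CRM field) is deliberately out of scope:

```python
def merge_touch(stored_first_touch, new_visit_params):
    """Decide what to persist when a visitor lands again.

    A minimal sketch of first-touch persistence: the first visit's UTMs
    and click IDs are never overwritten, while the latest touch is
    updated on every visit. A visit with no tracked parameters is
    recorded as direct.
    """
    tracked = ("utm_source", "utm_medium", "utm_campaign",
               "gclid", "msclkid", "fbclid", "li_fat_id")
    touch = {k: v for k, v in new_visit_params.items() if k in tracked and v}
    touch = touch or {"utm_source": "direct"}

    if stored_first_touch:
        # Six weeks later via direct traffic: first touch stays as captured.
        return {"first_touch": stored_first_touch, "latest_touch": touch}
    return {"first_touch": touch, "latest_touch": touch}

# First visit via LinkedIn, later visit with no parameters (direct):
first = merge_touch(None, {"utm_source": "linkedin", "li_fat_id": "abc"})
later = merge_touch(first["first_touch"], {})
print(later["first_touch"]["utm_source"])  # linkedin
```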
The attribution layer is only as honest as this plumbing is complete.
Picking a primary model for budget decisions
If we had to recommend one model as the “primary” lens for a typical B2B tech firm’s budget decisions, it would be a position-based model (40-20-40) on the cross-channel view, sense-checked against linear and DDA. The reasons:
- Position-based gives meaningful credit to the channels that source new audience (often demand-gen channels) without making them dominant
- It also rewards the channels that close (often retargeting and brand search) without giving them all the credit
- The middle 20 per cent acknowledges that nurture touchpoints matter without overstating their impact
For accounts where data-driven modelling has enough volume to be reliable, DDA on the cross-channel view is a stronger choice. Below the volume threshold (roughly 1,000 conversion paths per month for a stable DDA model), position-based is the safer default.
The relationship between content and attribution is closer than it looks. When buyers consume valuable content during their journey, those touches feed the model. We expand on this in content KPIs for the AI search era.
What this changes in the budget conversation
When attribution is set up properly, three things change.
First, the demand-gen versus lead-gen budget conversation becomes data-led rather than philosophical. Demand-gen channels show their contribution as opening and middle-position touches. Lead-gen channels show theirs as closing touches. The ratio adjusts based on what the model says, not what the loudest voice in the meeting wants.
Second, channel mix decisions get made on cost per opportunity, not cost per click. Channels that look expensive on a CPC basis (LinkedIn, programmatic) often look reasonable on a CPO basis when the multi-touch picture is honest.
Third, retargeting gets sized correctly. The retargeting layer almost always looks like it is producing magic under last-click. Under multi-touch, it looks like the harvester it is. Budget moves accordingly.
If your attribution conversation has been stuck in last-click reporting and the budget decisions are being made on the back of it, the fix is upstream of the media work. We’ve also written about diagnosing the next layer down in auditing a paid plateau. If you’d like a second opinion on attribution or budget split, drop us a line. You can also see how we run attribution work alongside the wider paid programme on our paid media service page.
Frequently asked questions
Which attribution model should we use as our primary lens for budget decisions?
For most B2B tech firms, a position-based (40-20-40) model on the cross-channel view, sense-checked against linear and data-driven attribution. Where conversion volume supports it (roughly 1,000 conversion paths per month), data-driven attribution on the cross-channel view is a stronger choice.
Why does our attribution add up to more than 100 per cent across platforms?
Because each platform only sees its own touchpoints and claims conversions under its own logic. Summing the self-reported numbers from Google, LinkedIn, Microsoft, Meta and HubSpot double-counts the same deals. The fix is a single consolidated attribution view that all channels feed into.
Can attribution measure the incremental impact of a channel?
No. Attribution distributes credit across the touchpoints it observes; it cannot tell you what would have happened without a channel. Incrementality needs geo holdouts, audience holdouts or marketing mix modelling.