The 30/30/40 content mix for B2B tech
How we split a B2B tech content programme between thought, evidence and demand-capture, with examples and a planning template you can reuse.
Most B2B tech content programmes have a balance problem. Either they over-index on opinion pieces that nobody asked for, or they fill the calendar with bottom-of-funnel content that ranks for vendor-name searches and ignores everything earlier. The result is either an audience that respects the brand but never converts, or a pipeline that depends on people who already knew the product existed.
We have built and run content programmes for managed service providers, SaaS firms and ERP partners. The ones that compound tend to settle on a mix that looks something like 30 percent thought, 30 percent evidence, 40 percent demand-capture. It is not a magic ratio. It is a planning prompt that forces a programme to do all three jobs.
What the three buckets actually mean
The labels matter less than the work each bucket does. Here is how we use them.
Thought (30 percent). Pieces that take a position. Opinion essays, reaction pieces, predictions, frameworks the writer has invented. The audience is industry peers and the buyers who care about how the vendor thinks. These pieces rarely rank well on their own but they do the heavy lifting on brand, on inbound speaking invitations and on senior buyer recall.
Evidence (30 percent). Case studies, customer interviews, benchmark reports, original research, technical deep-dives where the vendor shows their working. The audience is buyers in active evaluation. These pieces close deals, get linked from sales emails and turn up in the final shortlist meeting.
Demand-capture (40 percent). Pages that match the questions buyers are searching for. Comparison content, how-to guides, glossary pages, problem-symptom posts. The audience is buyers in research mode, often not yet aware of the vendor. These pieces drive most of the organic traffic and feed the rest of the programme.
The percentages are weighted toward demand-capture because that is the work that compounds. A good comparison post earns traffic for years. A good opinion piece earns attention for two weeks. Both matter, but the maths is different.
Why most programmes are out of balance
The most common imbalance we see is 70 percent thought, 20 percent evidence, 10 percent demand-capture. The marketing team is led by someone with a strong point of view and the calendar reflects it. The pieces are interesting. The pipeline is not. Search Console shows a few hundred clicks a month and most of them are branded.
The second most common imbalance is 10 percent thought, 10 percent evidence, 80 percent demand-capture. The team has discovered SEO and is publishing weekly comparison posts and how-to guides. Traffic is growing. But every conversation with sales reveals that buyers find the brand impossible to distinguish from three competitors. There is no point of view, no proof and no recall.
Both imbalances are fixable. The question we use in the first planning session is simple: of the next ten pieces on the calendar, how many would still earn traffic in 18 months, how many would close a deal in the next quarter, and how many would change a buyer's mind about the category? If any of those numbers is zero, the mix is broken.
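That planning-session check can be sketched as a few lines of Python. This is a minimal illustration, not a tool we actually run; the ten-piece calendar and its bucket tags are hypothetical:

```python
from collections import Counter

# Hypothetical next ten pieces on the calendar, each tagged with its bucket.
calendar = [
    "thought", "demand-capture", "demand-capture", "evidence",
    "demand-capture", "thought", "demand-capture", "evidence",
    "thought", "demand-capture",
]

# The default target mix from the article.
target = {"thought": 0.30, "evidence": 0.30, "demand-capture": 0.40}

counts = Counter(calendar)
for bucket, share in target.items():
    actual = counts[bucket] / len(calendar)
    flag = " <- missing entirely, the mix is broken" if counts[bucket] == 0 else ""
    print(f"{bucket}: {actual:.0%} planned (target {share:.0%}){flag}")
```

Anything sitting at zero fails the test above outright; anything drifting far from its target share is the prompt for the next planning conversation.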
How the buckets feed each other
The reason the mix works is that the buckets are not parallel. They feed each other.
A thought piece on, say, the failure mode of co-managed IT contracts attracts attention from a CIO. The CIO clicks through to a case study where exactly that failure mode is described in the “before” section. The case study links to a comparison page on co-managed versus fully managed IT, which closes the loop on the question the CIO had not realised they had. The thought piece earned attention the comparison page never could have, the case study converted attention into trust and the comparison page captured the demand.
If any one bucket is missing, the chain breaks. We see this all the time when teams ask why their evidence content does not convert. The answer is usually that nobody arrived at the case study with enough trust for it to mean anything. The thought layer was missing.
Picking the right ratio for your stage
The 30/30/40 split is a default, not a rule. We adjust it based on where the programme is.
| Stage | Thought | Evidence | Demand-capture |
|---|---|---|---|
| Brand new, no rankings | 20% | 20% | 60% |
| Established, weak brand | 25% | 25% | 50% |
| Established, strong brand | 35% | 35% | 30% |
| Mature, defending category | 40% | 40% | 20% |
A new programme should over-weight demand-capture because it has no organic surface yet. A mature programme should over-weight thought and evidence because the demand-capture estate is already in place. The mistake is staying on the same ratio for years.
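Treated as data, the stage table above is just a lookup from stage to ratio. A minimal sketch of how it could translate into a quarterly piece count; the stage labels mirror the table and the twelve-piece quarter is an assumed example:

```python
# Ratios from the stage table, as (thought, evidence, demand-capture) shares.
STAGE_MIX = {
    "brand new, no rankings":     (0.20, 0.20, 0.60),
    "established, weak brand":    (0.25, 0.25, 0.50),
    "established, strong brand":  (0.35, 0.35, 0.30),
    "mature, defending category": (0.40, 0.40, 0.20),
}

def pieces_per_bucket(stage: str, pieces_per_quarter: int) -> dict:
    """Turn a stage's ratio into rough piece counts for a quarterly plan."""
    thought, evidence, capture = STAGE_MIX[stage]
    return {
        "thought": round(pieces_per_quarter * thought),
        "evidence": round(pieces_per_quarter * evidence),
        "demand-capture": round(pieces_per_quarter * capture),
    }

# An assumed twelve-piece quarter at the 25/25/50 stage.
print(pieces_per_bucket("established, weak brand", 12))
# -> {'thought': 3, 'evidence': 3, 'demand-capture': 6}
```

The point of writing it down this way is the one the section makes: the ratio is an input that should change as the programme matures, not a constant.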
Planning a quarter against the mix
We plan in quarters, not months. A quarter gives the demand-capture content enough time to compound, the thought content enough cadence to feel like a point of view and the evidence content enough lead time to clear approvals. The planning template we use sits in Notion and tracks every piece against its bucket.
Each quarter we ask three questions. What is the one thought piece we want to be cited for? What is the one piece of evidence we want sales to walk into a pitch with? What is the one demand-capture cluster we want to own by year end? Three answers, then the calendar fills in around them. Our note on editorial calendars for tech marketing teams covers the mechanics of running the calendar week by week, and our content strategy primer walks through how the strategy lands in the first quarter.
Where each bucket should live
Thought pieces sit on the blog and get pushed via LinkedIn and the newsletter. Evidence pieces sit in a dedicated case studies area and get used in sales decks, pitch emails and webinar segments. Demand-capture lives in a content hub structured around topic clusters, ideally with a strong pillar page at the centre and supporting posts feeding it.
The placement matters because each bucket has a different distribution job. Thought is pushed. Evidence is handed over. Demand-capture is found. Confusing the channels is how programmes burn budget.
Measuring the mix
The mistake we see most often in measurement is judging every piece by the same metric. A thought piece judged by organic traffic looks like a failure even when it has done its job. A demand-capture piece judged by social engagement looks like a failure even when it ranks page one.
The metrics we track per bucket:
- Thought. LinkedIn impressions, brand search volume in Search Console, inbound speaking and podcast invitations, sales reps quoting it back in calls.
- Evidence. Time on page, sales deck inclusions, pipeline-stage acceleration where attribution is workable.
- Demand-capture. Organic clicks, ranking positions, conversions to next step. This is where Ahrefs, Semrush and GA4 actually earn their keep.
Our measuring content marketing ROI post covers the full attribution discussion and how we feed it back into the content marketing service work we do for clients. If your programme is heavily paid-driven, the same logic applies, with paid media carrying the demand-capture load instead.
What to drop when budget is tight
When budget is tight, the temptation is to cut the thought layer first because it is the hardest to attribute. We push back on this. The thought layer is what makes the rest of the programme worth reading. The piece to cut is the eighth comparison post in a cluster that already has seven, not the quarterly point-of-view piece that gets cited in pitch decks.
If anything has to be cut, we would rather cut frequency than mix. Two thoughtful pieces a month across all three buckets beats four pieces a month in one bucket and silence in the others.
If you are trying to work out whether your content mix is doing all three jobs, drop us a line and we will walk through your last quarter with you.
Frequently asked questions
Is 30/30/40 a fixed ratio or should we adjust it?
It is a default, not a rule. Adjust it for the programme's stage: over-weight demand-capture when the programme is new and has no organic surface, and shift toward thought and evidence as the demand-capture estate matures.
Why is the demand-capture share usually the largest?
Because demand-capture is the work that compounds. A good comparison post earns traffic for years, while a thought piece earns attention for a couple of weeks, so demand-capture carries most of the organic traffic and feeds the rest of the programme.
Where do most B2B tech programmes get the mix wrong?
In one of two directions: a calendar dominated by opinion pieces that earn respect but never convert, or one dominated by SEO content that grows traffic while leaving the brand indistinguishable from its competitors.