techmarketing.agency
SEO · 2 Apr 2026

Core Web Vitals 2026: what still matters

Where Core Web Vitals stand in 2026, what Google has quietly changed and what actually matters for B2B tech site performance and search rankings.

Core Web Vitals have been part of Google’s ranking signal mix for nearly five years now, and the conversation has settled into two extremes. Some marketers treat the green lights in PageSpeed Insights as a religion. Others, having watched Google insist Web Vitals are “a small ranking factor”, have stopped paying attention entirely.

The truth in 2026 is more interesting than either position. Some things genuinely matter more than they used to. Others have quietly become non-issues. Here’s where we think a B2B tech marketing team should focus.

A brief reset on what the metrics measure

Three core metrics, each measuring a different aspect of user experience:

  • Largest Contentful Paint (LCP). How quickly the largest element above the fold renders. Target: 2.5 seconds or less at the 75th percentile of real users.
  • Interaction to Next Paint (INP). How responsive the page feels when a user clicks, taps or types. Replaced First Input Delay (FID) in March 2024. Target: 200ms or less at the 75th percentile.
  • Cumulative Layout Shift (CLS). How much the page jumps around as it loads. Target: 0.1 or below at the 75th percentile.

The numbers Google publishes in Search Console and the CrUX report come from real Chrome users, not from your laptop running Lighthouse. That distinction matters more than most marketers realise.

INP is the metric that catches most tech sites

When INP replaced FID, a lot of sites that had been sitting comfortably in the green dropped into the amber or red. The reason is simple: INP measures every interaction, not just the first. And tech marketing sites tend to be heavy with JavaScript that runs on every click.

The usual culprits we find:

  • Tag managers with 30+ tags firing on every interaction. Every “track” call costs a few milliseconds; stack twenty of them and INP climbs past the 200ms threshold.
  • Chat widgets that re-render on focus events.
  • Cookie banners that respond to consent toggles by mutating the DOM extensively.
  • React or Vue product pages with expensive rerenders triggered by trivial state changes.
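Before fixing anything, it helps to see which interactions are actually slow. The sketch below uses the browser’s Event Timing API (the same entries INP is computed from); `worstInteractions` is our own illustrative helper, and the entry shape shown is what the browser reports.

```javascript
// Sketch: surface the slowest interactions from Event Timing entries.
// worstInteractions is an illustrative helper; the entry fields
// (name, duration, interactionId) match the browser's Event Timing API.

function worstInteractions(entries, limit = 3) {
  return entries
    .filter((e) => e.interactionId)           // keep real user interactions only
    .sort((a, b) => b.duration - a.duration)  // slowest first
    .slice(0, limit)
    .map((e) => ({ target: e.name, ms: e.duration }));
}

// In the browser, feed it from a PerformanceObserver (guarded so it
// no-ops in environments without Event Timing support):
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('event')) {
  const seen = [];
  new PerformanceObserver((list) => {
    seen.push(...list.getEntries());
    console.table(worstInteractions(seen));
  }).observe({ type: 'event', durationThreshold: 16, buffered: true });
}
```

Run this in DevTools on a page, click around, and the slowest handlers identify themselves.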

The fix is rarely a single magic change. It’s usually a sequence: prune tags, defer non-essential scripts until idle, replace synchronous handlers with async ones, fix the worst React rerender offender. We treat INP audits the same way we treat code performance work. Measure, identify the worst offender, fix, re-measure.

For the broader page-level perspective, our page speed checklist for tech sites has the workflow we follow.

LCP: the easy wins are mostly done

If you’ve been paying attention to Web Vitals at all, your LCP is probably in decent shape. The remaining wins on B2B tech sites tend to be:

  • fetchpriority="high" on the LCP image. Browsers now respect this aggressively and it can shave 300ms off LCP.
  • Preloading the LCP image when it’s discoverable late, especially on JavaScript-rendered pages.
  • Self-hosted fonts with proper font-display: swap and preload hints. Third-party font CDNs are no longer the safe choice they used to be.
  • Avoiding hero video unless you genuinely need it. A poster image plus an interaction-triggered video almost always wins on LCP and on INP.
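The first three wins above come down to a few lines of markup. A minimal sketch, with placeholder file paths:

```html
<head>
  <!-- Preload the hero image so the browser discovers it before any
       rendering JavaScript runs (paths are placeholders) -->
  <link rel="preload" as="image" href="/img/hero.avif" fetchpriority="high">
  <!-- Self-hosted font: preload here, font-display: swap in the @font-face rule -->
  <link rel="preload" as="font" type="font/woff2" href="/fonts/body.woff2" crossorigin>
</head>
<body>
  <img src="/img/hero.avif" fetchpriority="high"
       width="1200" height="630" alt="Product screenshot">
</body>
```

The width and height attributes on the image do double duty: they let the browser reserve layout space, which also protects CLS.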

What no longer matters as much: aggressive image optimisation beyond a sensible baseline. Modern formats (AVIF, WebP) and responsive images via srcset are now table stakes. Going from “good” to “perfect” image delivery rarely shifts the needle on field LCP.

CLS: still the easiest to fix, still routinely broken

Of the three metrics, CLS is the one most likely to fail because of a single, identifiable element: a cookie banner that loads late and pushes content down, an embedded social feed that resizes after first paint, a newsletter signup that appears when JavaScript runs.

The fix:

  • Reserve space for everything that loads asynchronously. Use aspect-ratio on images and videos. Use min-height on banner containers.
  • Avoid content that loads above the fold after first paint. If it’s above the fold, render it server-side or hold the layout for it.
  • Test on slow connections. CLS issues often only manifest when JavaScript is delayed.
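The first two fixes are mostly CSS. A minimal sketch, with placeholder class names and an example banner height you would measure on your own site:

```css
/* Keep images and videos fluid without reintroducing shift; the HTML
   width/height attributes give the browser the aspect ratio. */
img, video {
  max-width: 100%;
  height: auto;
}

/* Embeds with a known ratio: reserve the box before the iframe loads. */
.embed-16x9 {
  aspect-ratio: 16 / 9;
}

/* Late-loading banner: hold its eventual height so it can't push content.
   72px is a placeholder; measure the rendered banner on your site. */
.cookie-banner {
  min-height: 72px;
}
```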

CLS regressions are usually introduced by marketing tag changes. Whoever owns the tag manager should know about Web Vitals; most don’t. Layout shift on or near the primary CTA is one of the silent reasons “request a quote” CTAs fail on otherwise decent pages.

What’s changed quietly in the past year

Two things worth knowing.

First, Google’s CrUX dataset now feeds into a wider range of search experience signals than it did initially. The “page experience” badge has gone, but the underlying field data still influences how Google evaluates a site. We’ve seen sites where fixing INP correlated with a broader uplift across non-branded queries.

Second, the threshold-based scoring (green/amber/red) is being applied more strictly. A site that was “needs improvement” two years ago may now be effectively in the red bucket because Google has tightened how it interprets borderline scores. The targets in PageSpeed Insights haven’t changed, but the practical impact of being on the wrong side of them has.

Field data vs lab data: choose wisely

Lighthouse runs are useful for diagnosing issues. They are not useful for measuring whether your site passes Core Web Vitals.

We always start with the CrUX report (via PageSpeed Insights, the Chrome UX Report API or the Search Console Core Web Vitals report) for the field reality. Then we use Lighthouse to diagnose where a problem comes from.
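The CrUX API part of that workflow is scriptable. The sketch below queries the real CrUX endpoint and classifies each metric’s p75 against Google’s published thresholds; the API key handling and `fetchFieldData` wiring are our own illustrative assumptions.

```javascript
// Google's published Core Web Vitals thresholds (p75 of real users).
const THRESHOLDS = {
  largest_contentful_paint:  { good: 2500, poor: 4000 }, // ms
  interaction_to_next_paint: { good: 200,  poor: 500 },  // ms
  cumulative_layout_shift:   { good: 0.1,  poor: 0.25 }, // unitless
};

function classify(metric, p75) {
  const t = THRESHOLDS[metric];
  if (p75 <= t.good) return 'good';
  if (p75 <= t.poor) return 'needs improvement';
  return 'poor';
}

// Illustrative wrapper around the CrUX API (requires your own API key).
async function fetchFieldData(origin, apiKey) {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ origin, formFactor: 'DESKTOP' }),
    }
  );
  const { record } = await res.json();
  return Object.fromEntries(
    Object.entries(record.metrics).map(([name, m]) =>
      [name, classify(name, Number(m.percentiles.p75))])
  );
}
```

Wired into a weekly cron job, this turns “are we passing?” from a manual PageSpeed Insights check into a monitored number.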

Reporting Lighthouse scores to executives is misleading. A site with a 95 Lighthouse score and failing field metrics is not actually performing well. A site with a 60 Lighthouse score that passes all three field metrics at p75 is fine.

The hand-coded advantage

One thing we’ve consistently found over the past few years: hand-coded or carefully built static sites pass Web Vitals more easily than the average WordPress install. Less JavaScript, fewer plugins, simpler render paths. Our post making the case for hand-coded websites in 2026 covers this trade-off in more detail.

For sites that need to remain on WordPress or another CMS, the work is harder but achievable. It usually means dropping page builders, auditing plugins ruthlessly and accepting that not every animation or widget is worth its performance cost.

The structured data and SEO connection

Web Vitals interact with the rest of your technical SEO work. A site with poor INP also tends to be the kind of site where rendering issues affect indexing. We’ve seen JavaScript-heavy product pages where Googlebot was indexing the loading skeleton, not the content. Web Vitals are a useful proxy for general technical health.

Our technical SEO audit checklist covers how we tie performance into the broader audit. And our work on schema markup for SaaS websites shows how performance and structured data both contribute to richer SERP results.

What we’d actually prioritise

If you have limited engineering budget for performance work in 2026, in this order:

  1. Get INP under 200ms at p75. This is where most tech sites are failing.
  2. Fix any specific CLS issues caused by late-loading widgets.
  3. Apply fetchpriority and preload hints to LCP elements.
  4. Audit your tag manager and remove anything not actively delivering value.
  5. Move on to other SEO work.

Going from “good” to “perfect” on Web Vitals rarely justifies the cost. Going from “poor” to “good” almost always does.

If you’re staring at a red Search Console graph and not sure which thread to pull on first, get in touch and we’ll share what we’re seeing on similar accounts. Our web design service page has more on how we build sites that pass Web Vitals from day one.

Frequently asked questions

Are Core Web Vitals still a real ranking factor in 2026?
Yes, but the lift is smaller than most marketers think and it kicks in mainly when two competing pages are otherwise close on relevance and authority. We see Web Vitals make a measurable difference on competitive head terms where SERPs are tightly contested. They rarely rescue a page with weak content. Treat them as a tiebreaker that also happens to improve conversion rate, because the same fixes that help LCP and INP also help bounce rate and form completions.
How do we test INP properly when it varies so much?
Use real user data from the Chrome User Experience Report (CrUX) or your own RUM tooling, not Lighthouse. Lighthouse estimates responsiveness from Total Blocking Time during a single scripted page load, so it misses most of what real users experience. We pull CrUX data through PageSpeed Insights or BigQuery, then segment by URL pattern and device. Tools like SpeedCurve and DebugBear give continuous monitoring with the slowest interactions captured. The 75th percentile of real users is the number Google uses, so that’s the number we optimise for.
Which Web Vitals fix gives us the biggest improvement on a tech marketing site?
Pruning the tag manager. Most B2B tech sites carry 20 to 40 tags in GTM, many fired on every interaction. Auditing the container and removing unused tags typically lifts INP by 100 to 300ms at the 75th percentile. Deferring non-essential scripts to requestIdleCallback and replacing synchronous third-party scripts with async loaders adds another tier of improvement. LCP fixes are usually image and font work. INP fixes are almost always JavaScript discipline.

Want help putting this into practice?

We work with technology companies on exactly this kind of programme. Tell us about yours.