Website Performance Budgets: Set Them, Enforce Them, Hit Them in 2026

How to set a real website performance budget in 2026 — metrics that matter, CI enforcement, and the numbers that protect your Core Web Vitals from regressions.

Cody New

TheBomb® Editorial

The median desktop page now ships 2.6 MB of content and the median mobile page isn’t far behind, according to the 2024 HTTP Archive Web Almanac. That number has climbed roughly every year since the Almanac started tracking it. If you don’t have a website performance budget in place, you are — mathematically — getting slower every quarter. Not because you’re shipping bad code, but because the web’s default gravity is bloat: new image, new tracking pixel, new “just one more” dependency.

A performance budget is the only tool that reverses that gravity. Set it once, enforce it in CI, and every pull request has to justify the weight it adds. No budget, no brakes.

At TheBomb®, we’ve watched sites gain 400 KB in a single sprint because nobody was counting. This is how you stop the bleeding — the metrics that matter, the thresholds that actually bite, and how to wire the whole thing into your pipeline so it runs without a human babysitter.


What Is a Website Performance Budget?

A website performance budget is a set of numerical limits on how much your pages can weigh, how fast they must render, and how responsive they must feel — enforced automatically on every build. If a change pushes any metric past its ceiling, the build fails. Simple as that.

Google’s performance team has advocated budgets since 2017, and web.dev’s own guidance frames them in three categories:

  • Quantity-based: total page weight, number of requests, JavaScript KB, image KB.
  • Milestone-based: time to a user-visible event (LCP, First Contentful Paint).
  • Rule-based: a Lighthouse score floor, an accessibility threshold, a specific audit that must pass.

The point is not to pick “nice” numbers. The point is to codify what “acceptable” means before a deadline forces someone to negotiate it. A budget you can talk your way around is not a budget — it’s a suggestion.
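All three categories can live in one file: Lighthouse's budget.json format expresses size ceilings, request counts, and timing milestones together. A sketch, not a recommendation — the paths and numbers below are illustrative, and you would derive your own from the anchoring strategies later in this piece:

```json
[
  {
    "path": "/*",
    "resourceSizes": [
      { "resourceType": "script", "budget": 170 },
      { "resourceType": "image", "budget": 500 },
      { "resourceType": "total", "budget": 1024 }
    ],
    "resourceCounts": [
      { "resourceType": "third-party", "budget": 10 }
    ],
    "timings": [
      { "metric": "largest-contentful-paint", "budget": 2500 },
      { "metric": "total-blocking-time", "budget": 200 }
    ]
  }
]
```

Size budgets are in KB; timing budgets in milliseconds. Lighthouse's performance-budget audit reads this file and flags every line that goes over.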


Which Metrics Belong in Your Budget?

Not every metric deserves a seat. A bloated budget dashboard is ignored just as fast as no dashboard at all. These are the ones that actually predict user experience and SEO outcomes in 2026.

Core Web Vitals (non-negotiable)

Google replaced First Input Delay with Interaction to Next Paint (INP) in March 2024, and it remains a ranking input via page experience signals. The three vitals, per web.dev’s current thresholds:

  • Largest Contentful Paint (LCP): good ≤ 2.5s, poor > 4.0s
  • Interaction to Next Paint (INP): good ≤ 200ms, poor > 500ms
  • Cumulative Layout Shift (CLS): good ≤ 0.1, poor > 0.25

Budget these at the “good” threshold, measured at the 75th percentile of real users — the same bar Google uses.
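The 75th percentile means the sorted samples, not the average — one slow outlier can't hide behind a fast mean. A quick illustration with a hypothetical helper (your RUM tool does this for you):

```javascript
// Compute the 75th-percentile LCP from a batch of real-user samples (ms).
// Hypothetical helper for illustration — nearest-rank method.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  // The value at the 75% position of the sorted list.
  const index = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[index];
}

const lcpSamples = [1800, 2100, 2300, 2600, 3400, 1900, 2200, 2400];
console.log(p75(lcpSamples)); // 2400 — under the 2.5s "good" ceiling
```

Note that the mean of those samples looks similar, but p75 is what Google scores: 25% of your users can be having a bad time before the number moves.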

Lab metrics that catch regressions early

Real-user data takes 28 days to stabilise. Lab metrics catch issues in seconds:

  • Total Blocking Time (TBT): ≤ 200ms in lab conditions correlates with good INP in the field.
  • First Contentful Paint (FCP): ≤ 1.8s.
  • Time to First Byte (TTFB): ≤ 600ms — anything slower and you’ve lost before the race starts.

Weight-based ceilings

Metrics tell you how it feels. Weight tells you why:

  • JavaScript: ≤ 170 KB compressed on mobile for initial load (Alex Russell’s long-standing “performance inequality” math — updated annually).
  • CSS: ≤ 50 KB compressed.
  • Total page weight: ≤ 1 MB on mobile for content sites, ≤ 2 MB for complex apps.
  • Image payload per page: ≤ 500 KB.
  • Third-party requests: ≤ 10 domains.

How Do You Pick Numbers That Are Actually Tight Enough?

Vanity budgets are the most common failure mode — a team sets a 3 MB ceiling on a 2.8 MB site, calls it a day, and congratulates itself for being “data-driven.” That’s not a budget, that’s a participation ribbon.

Real budgets use one of three anchoring strategies:

  1. Beat your fastest competitor. Run Lighthouse or WebPageTest against the top three competitors in your SERP. Set your budget 20% tighter than their median. You cannot out-content a faster site indefinitely.
  2. Use the “good” Core Web Vitals thresholds as hard ceilings, then derive weight budgets from what it takes to hit them on a mid-tier Android device over slow 4G. This is the methodology the Chrome team recommends.
  3. Ratchet down from current baseline. Measure your current p75, set the budget 10% tighter, and hold. When you pass, tighten another 10%. Ratcheting never breaks the site but always improves it.

In our 12+ years building and maintaining sites, the ratchet approach wins for teams without dedicated performance engineers. It is boring, slow, and unreasonably effective.
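The ratchet is simple enough to express in a few lines. A sketch — the 10% step and the "good"-threshold floor are team choices, not doctrine:

```javascript
// Ratchet a budget: 10% tighter than the current measured p75,
// but never below the "good" Core Web Vitals floor for that metric.
// Illustrative sketch — step size and floor are tuning decisions.
function ratchet(currentP75, goodThreshold, step = 0.10) {
  const tightened = currentP75 * (1 - step);
  return Math.max(Math.round(tightened), goodThreshold);
}

// A 3,000 ms LCP p75 ratchets to 2,700 ms; once you're near
// the 2,500 ms "good" threshold, the floor holds there.
console.log(ratchet(3000, 2500)); // 2700
console.log(ratchet(2600, 2500)); // 2500 — 2340 would undershoot the floor
```

Run the measurement-and-tighten cycle on a fixed cadence (monthly works) so the ratchet turns without anyone having to remember it.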


Enforcing Budgets in CI

A budget that lives in a Google Doc is already dead. Enforcement is what separates real budgets from wishful ones. Three tools dominate the 2026 stack:

Lighthouse CI

Google’s Lighthouse CI runs full audits on every pull request, compares against configured assertions, and fails the build on violations. A minimal lighthouserc.json looks like this:

{
  "ci": {
    "assert": {
      "assertions": {
        "categories:performance": ["error", {"minScore": 0.9}],
        "largest-contentful-paint": ["error", {"maxNumericValue": 2500}],
        "interaction-to-next-paint": ["error", {"maxNumericValue": 200}],
        "cumulative-layout-shift": ["error", {"maxNumericValue": 0.1}],
        "total-byte-weight": ["error", {"maxNumericValue": 1048576}]
      }
    }
  }
}

Wire it into GitHub Actions and every PR gets a pass/fail within three minutes.
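A minimal workflow might look like the sketch below — the job name, Node version, and build command are placeholders for your project:

```yaml
name: lighthouse-ci
on: [pull_request]
jobs:
  lhci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm run build   # your build command here
      - run: npm install -g @lhci/cli
      - run: lhci autorun              # reads lighthouserc.json, fails on violations
```

`lhci autorun` picks up the assertions file automatically, so the budget and the enforcement live in the same repository as the code they constrain.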

bundlesize / size-limit

Lighthouse catches rendered-page regressions. Bundle-level tools like size-limit catch the cause — a dependency that doubled in size overnight. Configure it per-route:

{
  "size-limit": [
    { "path": "dist/home.js", "limit": "60 KB" },
    { "path": "dist/product.js", "limit": "85 KB" }
  ]
}

If someone adds a 40 KB date-picker library to the home bundle, the PR dies before a reviewer sees it.

SpeedCurve / Calibre for field data

Lab tests lie by omission. SpeedCurve and Calibre pull real-user metrics from your analytics and alert on budget breaches in production — the place budgets actually matter. Use them to catch third-party script regressions that never show up in CI.


What to Do When the Budget Breaks

It will break. That is the point — a budget that never fails is too loose. When it does, you have three options and exactly three:

  1. Cut the feature. The most common, the least popular, the most correct. If a new embed costs 80 KB and delivers a 0.3% engagement lift, the math is not close.
  2. Offset the cost. Remove equivalent weight elsewhere. New 40 KB library? Delete the 45 KB library you replaced. Net zero.
  3. Raise the budget — with a written justification. If the feature genuinely ships revenue, raise the ceiling in code, log the decision in the PR description, and set a review date to re-tighten. This path must be visible and auditable, or it becomes the default.

Google Search Central confirms Core Web Vitals remain a ranking signal in 2026 — so a “temporary” budget raise that tanks your field LCP has real traffic consequences. Treat ceiling increases like you treat production deploys.


The Non-Engineering Budget — Images, Fonts, Third-Party Scripts

Most performance regressions are not shipped by engineers. They are shipped by marketing uploading a 4 MB hero PNG, a designer adding three new font weights, or an analyst dropping a new tag into Google Tag Manager. Your budget must cover all three.

Images

Enforce a CMS-level max upload size (500 KB compressed). Auto-convert to AVIF or WebP. Use loading="lazy" below the fold. The 2024 HTTP Archive media chapter found images are still ~40% of total page weight — the single biggest budget line.
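In markup, those rules combine like this — a sketch, with placeholder file names and dimensions:

```html
<!-- AVIF first, WebP fallback, lazy-loaded because it sits below
     the fold. File names and dimensions are placeholders. -->
<picture>
  <source srcset="feature.avif" type="image/avif">
  <source srcset="feature.webp" type="image/webp">
  <img src="feature.jpg" alt="Feature illustration"
       width="1200" height="630" loading="lazy" decoding="async">
</picture>
```

Explicit width and height attributes matter here too: they reserve layout space and protect your CLS budget while the image loads.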

Fonts

One family, two weights, font-display: swap, subsetted. Every additional weight is 15-30 KB and an additional render-blocking request. WOFF2 only.
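As CSS, the whole font policy fits in two declarations — family name and file paths below are placeholders:

```css
/* One family, two weights, WOFF2 only, swap to avoid invisible text.
   Family name and paths are placeholders; files are subsetted. */
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/bodyfont-subset.woff2") format("woff2");
  font-weight: 400;
  font-display: swap;
}
@font-face {
  font-family: "BodyFont";
  src: url("/fonts/bodyfont-bold-subset.woff2") format("woff2");
  font-weight: 700;
  font-display: swap;
}
```

Anything beyond these two blocks should have to argue its way past the budget.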

Third-party scripts

Audit quarterly. Every GTM container, chat widget, and A/B testing snippet has a cost that compounds. Tools like Third Party Web rank the slowest offenders by category — use it to justify cuts to stakeholders who love their tools.


Is Your Performance Budget Actually Working?

If you’re not sure where to start — or your current budget is a spreadsheet nobody opens — we can help:

  • Development — budget-first builds with Lighthouse CI wired in from day one.
  • Maintenance & performance — quarterly audits that keep your budget tight as the site grows.
  • SEO strategy — aligning Core Web Vitals with the keyword targets that actually move traffic.
  • Portfolio — sites we’ve built to hit “good” on all three vitals at the 75th percentile.

Stop shipping slower code by accident. Book a performance audit and we’ll tell you exactly which line item is eating your LCP.


Key Takeaways

  • Budgets are guardrails, not goals. The number isn’t the point — the CI check that fails a PR is the point.
  • Budget the metrics that predict UX and SEO: LCP, INP, CLS at “good” thresholds, plus TBT and weight ceilings as early-warning lab checks.
  • Enforce in CI or it doesn’t count. Lighthouse CI + size-limit + a field-data monitor is the 2026 baseline stack.
  • Ratchet, don’t guess. Measure current p75, tighten 10%, hold, repeat. Beats any “aspirational” ceiling.
  • Cover the non-engineering paths — images, fonts, and third-party scripts cause most regressions in production.
