
TV Incrementality at the DMA Level: What Geo-Based Lift Actually Looks Like

March 10, 2026


Everyone’s talking about incrementality. 73% of marketing leaders now view incrementality testing as essential, up from 41% in 2023. Over half of US brand and agency marketers are already running incrementality tests. Ad Age named it one of the metrics that will matter most in 2026.

The appetite is there. The methodology is sound. The problem is granularity.

Most incrementality testing happens at the national level — whether it’s TV, radio, programmatic, or CTV. Run a campaign. Measure aggregate lift. Report a number. That number tells you whether the media “worked” in general. It doesn’t tell you where it worked, where it didn’t, or why. And if you’re spending millions across dozens of markets, “it worked in general” isn’t a useful answer.

The more specific question is: what did this stimulus do in this market, at this time, in this context? That’s incrementality at the DMA level. It applies to any medium with geographic variation — broadcast, radio, outdoor, direct mail, even programmatic with geo-targeting. It’s harder than national measurement. It’s also the only version that’s actually actionable.


What incrementality actually means

Incrementality is the cleanest question in advertising measurement: what would have happened if the ad hadn’t run?

The baseline is the activity that occurs without advertising — the organic traffic, the seasonal demand, the weather-driven calls, the brand awareness from years of market presence. Incrementality measures the lift above that baseline. Not the total activity during the campaign. The additional activity caused by the campaign.

This is fundamentally different from attribution, which asks “which ad gets credit for this conversion?” Incrementality asks “did advertising produce conversions that wouldn’t have happened otherwise?”

The distinction matters because attribution models can overclaim. Every platform self-attributes, and the total claimed conversions routinely exceed actual sales. Incrementality sidesteps that problem entirely by measuring the delta — the difference between what happened with advertising and what would have happened without it.

The standard approach is geographic holdout testing: advertise in some markets, hold out others, compare the results. Geo-based incrementality testing has become the industry’s preferred method because it’s privacy-safe, doesn’t require user-level tracking, and produces causal evidence — not correlational estimates.
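The arithmetic behind a geo holdout is simple to sketch. All market names and session counts below are illustrative, not real campaign data:

```python
# Minimal sketch of geo-holdout arithmetic (illustrative numbers).

# Sessions during the flight: test markets saw the ads, control did not.
test = {"Denver": 12_400, "Phoenix": 9_800}
control = {"Portland": 8_000, "Sacramento": 7_600}

# Pre-flight sessions, used to scale the control group to the test group.
test_pre = {"Denver": 10_000, "Phoenix": 9_000}
control_pre = {"Portland": 8_100, "Sacramento": 7_500}

# Counterfactual: apply the control group's growth rate to the test
# group's pre-flight level to estimate "what would have happened".
control_growth = sum(control.values()) / sum(control_pre.values())
expected = sum(test_pre.values()) * control_growth
observed = sum(test.values())

lift = (observed - expected) / expected
print(f"incremental lift: {lift:.1%}")
```

The control markets stand in for the counterfactual; everything the test markets do above that scaled expectation is attributed to the campaign.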


Why national lift isn’t enough

The 12% that hides everything

Here’s the scenario I’ve seen dozens of times. An advertiser runs linear TV across 40 markets. The aggregate lift measurement shows 12% incremental web sessions above baseline. The report says “TV works.” The budget gets renewed.

But inside that 12% average, the reality is wildly uneven. Some markets are producing 40% lift. Others are flat. A few are actually negative — the advertising appears to be producing less response than the baseline would predict.

The national number hides all of this. It’s like measuring the average temperature across the United States and using it to decide whether to bring a coat in Minnesota. The average is meaningless when the variance is the story.

Every medium varies by geography

This is true for any medium with geographic delivery. A local TV affiliate covers a specific DMA. A radio station covers a specific metro. An outdoor campaign covers specific corridors. A national cable or programmatic buy delivers across every market, but the programming context, competitive landscape, and audience composition vary by geography. The same stimulus in the same daypart can produce dramatically different response in Minneapolis versus Phoenix — because the market conditions are different.

If your incrementality measurement operates at the national level, you can’t see the geographic variation. You can’t identify the markets where TV is driving massive lift and double down. You can’t identify the markets where TV is producing nothing and reallocate. You’re making one decision for 40 markets based on one number that’s an average of 40 different realities.

I’ve seen this lead to the wrong decision in both directions. Advertisers who cut TV spend because the national lift looked modest — when in reality, half their markets were producing outstanding results and the other half were dragging down the average. And advertisers who increased TV spend nationally because the aggregate looked great — when the lift was concentrated in a handful of markets and the rest were wasting money.

Both decisions are wrong. Both are rational given the data they had. The problem isn’t the decision-making. It’s the resolution of the data.


What DMA-level incrementality reveals

When you measure incrementality at the DMA level — comparing each market’s ad-exposed activity against its own baseline — the picture changes completely.

Here’s what the data typically shows. In a national campaign across 40 markets, you find a cluster of markets — maybe 8 to 12 — producing the majority of the incremental lift. These are markets where the programming context aligns with the advertiser’s audience, where competitive presence is lower, where the creative resonates with local demographics. The lift in these markets can be 3x to 5x the national average.

You find another cluster — maybe 10 to 15 markets — performing near the average. These are the middle of the distribution. TV is working, but not dramatically.

And you find a third cluster — maybe 5 to 10 markets — where lift is minimal or absent. These are markets where the ads are airing but the audience isn’t responding digitally. The money is being spent. The impressions are being delivered. But the incremental impact is near zero.

That third cluster is where the budget opportunity lives. Those are the Linear Dead Zones — markets where linear TV spend isn’t producing incremental digital response. I wrote a separate piece about Dead Zones and what to do about them. But the point here is that you can’t find them without DMA-level measurement. They’re invisible at the national level because the high-performing markets mask their underperformance in the average.
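The three-cluster pattern is easy to make concrete. A sketch of how per-market lift results might be bucketed, where the thresholds and lift figures are illustrative assumptions rather than real thresholds from any product:

```python
# Bucket per-DMA lift into the three clusters described above.
# Thresholds and lift values are illustrative assumptions.
lift_by_dma = {
    "Minneapolis": 0.42, "Denver": 0.38, "Atlanta": 0.13,
    "Boston": 0.11, "Phoenix": 0.01, "Tampa": -0.02,
}

clusters = {"high": [], "average": [], "dead_zone": []}
for dma, lift in lift_by_dma.items():
    if lift >= 0.25:        # well above the national average
        clusters["high"].append(dma)
    elif lift >= 0.05:      # middle of the distribution
        clusters["average"].append(dma)
    else:                   # minimal or negative lift: a candidate Dead Zone
        clusters["dead_zone"].append(dma)
```

Averaging `lift_by_dma` back into a single national number would land near the middle cluster and hide both tails, which is exactly the problem described above.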


The baseline problem

Defining “what would have happened”

The hardest part of incrementality measurement isn’t calculating the lift. It’s establishing the baseline.

What would web traffic look like in this market if no ad had aired? You can’t just use “zero” — there’s always organic activity. Brand awareness from prior campaigns. Seasonal demand. Weather effects. Competitive activity. All of these produce web sessions, phone calls, and conversions that have nothing to do with the current campaign.

The standard approach — geographic holdout — works for national campaigns with enough markets to split into test and control groups. But it requires withholding advertising from control markets, which means forfeiting potential revenue. And for advertisers who can’t afford to go dark in any market, it’s not practical.

A continuous baseline by market

The alternative is what we’ve built: a continuous baseline measured at the DMA level. Before any ad airs, the system knows the expected activity in each market — the pre-airing baseline. When an ad airs, the system measures the response above that baseline. The lift is the difference.

This works because of the three pillars. Context tells you what else was happening — weather, programming, day of week, competitive activity — so the baseline accounts for non-advertising influences. Geography tells you which market to measure. Time tells you when the ad aired and when the response occurred — with the response shape calibrated against nearly a million web sessions so the system knows what genuine response looks like versus coincidental traffic.

The result is continuous incrementality measurement. Not a test you run once a quarter. A persistent measurement that runs on every ad, in every market, every day.
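The per-airing arithmetic can be sketched in a few lines. The minute-level session counts, airing minute, and response window below are illustrative assumptions, not the calibrated response shape described above:

```python
# Sketch: lift above a pre-airing baseline for one spot in one DMA.
# Session counts, airing minute, and window length are illustrative.
sessions = [50, 48, 52, 49, 51, 50,   # pre-airing minutes (the baseline)
            90, 120, 85, 60, 55]      # post-airing minutes (the response)
AIR = 6       # index of the minute the spot aired
WINDOW = 5    # minutes of post-airing response to attribute to the spot

baseline = sum(sessions[:AIR]) / AIR          # expected sessions per minute
observed = sum(sessions[AIR:AIR + WINDOW])    # sessions after the airing
expected = baseline * WINDOW                  # counterfactual for the window

incremental_sessions = observed - expected
```

A production system would replace the flat pre-airing average with a baseline that accounts for weather, seasonality, and competitive activity, but the lift is always the same delta: observed minus expected.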


The time problem at geographic scale

Here’s a technical challenge most people don’t appreciate: measuring incrementality at the DMA level means reconciling time across hundreds of geographic units simultaneously — regardless of the medium.

Streaming surpassed linear TV in total US viewership in 2025 — 44.8% for streaming vs. 44.2% for linear. But linear TV still operates on broadcast time, which runs 6 AM to 6 AM and is timezone-naive to the local market. A “prime time” ad airing at 8 PM happens at four different UTC times across the four US time zones. The same syndicated program airs at different local times in different markets.

When you’re measuring lift at the DMA level, you need to compare each market’s response against its own baseline at the correct local time. A web session at 8:02 PM Eastern is relevant to an 8 PM ad in the New York DMA but irrelevant to the same ad’s airing in the Denver DMA, where it’s 6:02 PM and the ad hasn’t aired yet.
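That reconciliation can be sketched with Python's standard `zoneinfo` database. The DMA-to-timezone mapping and the timestamp are illustrative assumptions:

```python
# Sketch: does a given web session fall after the local 8 PM airing
# in each DMA? The DMA-to-timezone mapping is an illustrative assumption.
from datetime import datetime
from zoneinfo import ZoneInfo

DMA_TZ = {"New York": "America/New_York", "Denver": "America/Denver"}
AIR_LOCAL = (20, 0)  # the spot airs at 8:00 PM local time in every market

# One web session, recorded in UTC (8:02 PM EDT on March 10, 2026).
session_utc = datetime(2026, 3, 11, 0, 2, tzinfo=ZoneInfo("UTC"))

for dma, tz in DMA_TZ.items():
    local = session_utc.astimezone(ZoneInfo(tz))
    after_airing = (local.hour, local.minute) >= AIR_LOCAL
    print(dma, local.strftime("%H:%M"),
          "counts toward lift" if after_airing else "ad not aired yet")
```

The same UTC timestamp lands at 8:02 PM in New York but 6:02 PM in Denver, so it counts toward lift in one DMA and not the other. Note that `zoneinfo` also handles the daylight-saving question automatically.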

Most measurement platforms operate at the national level precisely because the time reconciliation at the market level is genuinely hard. You need to know the local broadcast schedule, the timezone, whether daylight saving applies, and the precise offset between every data source — all at the individual DMA level, across 254 TV markets.

We built a 60-year broadcast calendar (1990 through 2050) with daypart classification and timezone handling for exactly this reason. The time infrastructure exists because without it, DMA-level incrementality measurement is impossible.


What you can do with DMA-level data

Once you have incrementality measured at the market level, the optimization decisions become specific and actionable.

Shift spend where it works

Reallocate spend. Move budget from low-lift markets to high-lift markets within the same flight. This isn’t a quarterly strategic decision — it’s a continuous optimization based on observed performance. The markets where TV produces 40% lift get more impressions. The markets where it produces 2% get supplemented with digital or reduced.

Identify programming effects by market. The same programming genre produces different lift in different markets. News might drive 5x lift in one DMA and 2x in another — because the competitive landscape, demographics, and viewing habits differ. DMA-level incrementality lets you optimize the programming mix by market, not just nationally.

Fill the gaps and prove the value

Activate Dead Zones. Markets where linear TV shows minimal incremental lift become programmatic and CTV targeting opportunities. The audience is there — they live in the DMA, they match the demographic profile — but they’re not responding to linear. Supplementing with digital in those specific markets is more efficient than increasing linear weight that isn’t working.

Feed the results back. When you know which markets produce the highest incremental revenue, that information flows back to your buying platforms. Google Ads gets geographic bid modifiers based on real lift data — bid more aggressively in high-lift markets, conservatively in low-lift markets. Programmatic campaigns get geographic targeting lists based on actual incrementality, not estimated reach. Facebook and your DSP learn which geographies produce real outcomes. The entire media mix adjusts to the geographic reality that only DMA-level measurement can reveal.
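As a sketch, that feedback step can be as simple as scaling each market's bid by its lift relative to the average. The scaling rule, clamp bounds, and lift figures below are illustrative assumptions, not any platform's actual API or limits:

```python
# Sketch: turn DMA-level lift into geographic bid adjustments.
# Scaling rule, clamp bounds, and lift figures are illustrative.
lift_by_dma = {"Minneapolis": 0.42, "Atlanta": 0.13, "Phoenix": 0.01}
NATIONAL_AVG = 0.12

def bid_adjustment(lift, avg, clamp=(-0.50, 2.00)):
    """Bid up or down in proportion to the market's lift vs. the average."""
    raw = lift / avg - 1.0
    return max(clamp[0], min(clamp[1], raw))

adjustments = {dma: round(bid_adjustment(lift, NATIONAL_AVG), 2)
               for dma, lift in lift_by_dma.items()}
# Minneapolis bids up (clamped at +200%), Phoenix bids down sharply.
```

The clamp keeps a single outlier market from swinging bids to extremes; the actual bounds would come from whatever the buying platform allows.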

Prove TV’s value where it matters. In high-lift markets, you can show the advertiser exactly what TV is producing — not a national estimate, but the specific incremental response in their specific markets. That’s the proof that keeps TV budgets from getting cut when someone asks “is this working?” The answer isn’t “yes, nationally.” It’s “here are the 15 markets where it’s driving 30%+ incremental response, here are the 8 where we should adjust, and here’s the plan.”


The measurement the industry is asking for

TV’s measurement gap

Linear TV ad spending is projected to decline to its lowest share since 2005. Marketers are reallocating to CTV and digital because those channels can show per-impression performance metrics. Linear TV has been losing the measurement argument for a decade — not because it doesn’t work, but because it couldn’t prove how well it works at the granularity buyers demand.

DMA-level incrementality changes that argument. It shows exactly which markets produce incremental lift, exactly how much, and exactly what the response looks like in each geography. It holds TV to the same standard of geographic precision that digital claims — and often exceeds it, because broadcast delivery footprints are physical boundaries, not probabilistic estimates.

The advertiser who sees their DMA-level lift data — the specific markets where TV is driving 40% incremental response, the specific dayparts and programming genres that outperform, the specific Dead Zones where budget should shift — makes fundamentally different decisions than the advertiser who sees a national average.

The infrastructure already exists

Same media. Same spend. Different intelligence. And the intelligence comes from measuring incrementality where it actually varies: at the market level.

The technology to do this exists. The geographic infrastructure — 254 TV markets, zip-level resolution, actual broadcast delivery footprints — is built. The time infrastructure — broadcast calendar, timezone reconciliation, microsecond event ordering — is built. The response model — calibrated against nearly a million web sessions, validated by five published academic studies — is built.

The question isn’t whether DMA-level incrementality is possible. It’s whether your current measurement vendor can provide it. Ask them. Ask for the market-level lift data. If they can’t show it to you, you’re making market-by-market decisions based on a national average. And a national average is just a number that’s equally wrong everywhere.