Why Does Attribution Suck So Much?
I’ve been in this industry for 26 years. I’ve watched every version of attribution come and go — last-click, first-click, multi-touch, algorithmic, probabilistic, deterministic. And I keep coming back to the same conclusion: the whole thing is built on the wrong question.
Not the wrong data. Not the wrong algorithms. The wrong question.
Every attribution model I’ve seen starts with: “Which channel gets credit?” That framing poisons everything downstream. It turns measurement into a court case where every vendor is both the defendant and the judge. And it’s why, after two decades of supposedly better technology, 75% of marketers still say their measurement systems are falling short.
The numbers have never added up
This isn’t a 2026 problem. I’ve been watching this same scene play out since the early 2000s.
Google says 50 conversions. Facebook says 40. Your TV vendor says 30. You had 60 actual sales. Add up the claimed conversions and you get 120, double what actually happened. I’ve seen this exact scenario play out hundreds of times with real clients spending real money.
Every platform reports from its own vantage point using its own model. They’re not lying — they’re just each grading their own homework. And the math was never designed to add up across platforms.
Forrester found that 64% of marketing leaders don’t trust their own measurement for making decisions. Not 64% of junior analysts — 64% of the people making million-dollar budget calls. AdWeek literally called it a “measurement shit show.” Ad Age says AI won’t fix it if the foundation is broken.
None of this surprises anyone who’s been in the room when the numbers get presented. The CMO looks at the spreadsheet, knows it doesn’t add up, and makes the budget decision anyway — because it’s the only data they have. That’s not measurement. That’s guessing with extra steps.
Everyone is grading their own homework
The structural problem is obvious once you name it: the platform spending your money is the same platform telling you how well that money was spent.
Incentives baked into architecture
Google’s attribution model is optimized to show that Google works. Facebook’s model is optimized to show that Facebook works. Your TV vendor’s model is optimized to show that TV works. Each platform has a structural incentive to overclaim — because the numbers they report determine whether you keep spending with them.
This isn’t corruption. It’s architecture. Every platform was built to optimize and report on its own activity. Cross-platform truth was never part of the design spec. When you stitch together five vendor reports, each built to make its channel look good, the result isn’t a full picture. It’s a collection of competing press releases.
Nobody trusts the numbers
The IAB’s 2026 measurement priorities report found that 75% of buy-side leaders say core measurement approaches — attribution, incrementality tests, marketing mix modeling (MMM) — are underperforming. Not one method. All of them. The industry built a measurement ecosystem where nobody trusts anyone’s numbers, including their own.
And the response has been predictable: in a 2025 analysis of more than a thousand ad accounts, 68% of multi-touch attribution models over-credited digital channels by 30% or more. When the models systematically over-credit, the budget follows the over-crediting. TV gets defunded. Digital gets overfunded. And the advertisers who can’t see the full picture spend more on what claims the most, not on what works best.
Where the budget actually goes
I’ve sat in enough budget meetings to know what happens next. The CMO asks why TV spend is declining. The analytics team shows the multi-touch attribution report. The report shows that Google drove 70% of conversions. Nobody in the room mentions that Google is grading its own homework. The TV budget gets cut. Digital spend goes up. And nobody asks what actually influenced the customer to search in the first place.
The models break physics
Here’s the one that’s bothered me for years, and the reason I eventually built something different.
The most common approach to measuring TV’s effect on digital behavior uses exponential decay — a model that assigns maximum attribution weight at the exact instant the ad airs. Time zero. The weight drops from there.
Think about that. A TV ad airs at 8:15 PM. The model says the highest probability of someone visiting a website is at 8:15:00 PM. Not 8:17. Not 8:18. The literal instant the ad hits the screen.
That can’t happen. The person has to notice the ad. Register the brand or the offer. Pick up their phone. Open a browser or tap the search bar. Type something. Wait for a page to load. That sequence takes time. It’s a physical process involving a human being, and it doesn’t happen in zero seconds.
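Here is a minimal sketch of what that weighting looks like in practice. The time constant is an illustrative placeholder, not any vendor's published value; the point is only that the curve's maximum lands at the instant of airing.

```python
# A minimal sketch of exponential-decay weighting. The time constant tau
# is an illustrative placeholder, not a value any vendor has published.
import math

def decay_weight(seconds_after_ad: float, tau: float = 150.0) -> float:
    """Attribution weight assigned to a web session t seconds after the ad airs."""
    return math.exp(-seconds_after_ad / tau)

print(decay_weight(0))    # 1.00 -- maximum credit at the literal instant of airing
print(decay_weight(150))  # 0.37 -- less credit when genuine response is most likely
```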
What the data actually shows
We’ve observed the actual pattern across decades of data, calibrated against nearly a million web sessions: response starts at zero, ramps up as people register the stimulus and act, peaks at approximately 150 seconds, then decays. By five minutes, roughly 90% of the response has occurred. By ten minutes, virtually all of it.
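For illustration, one simple curve with those properties is a gamma shape. The parameters below are chosen only to reproduce the description above (a peak near 150 seconds, roughly 90% of response inside five minutes); they stand in for the calibrated model, they are not it.

```python
# An illustrative delayed-peak curve with the shape described above. The
# gamma parameters (shape=5, scale=37.5 s) are chosen to reproduce a peak
# near 150 seconds and ~90% of response within five minutes; they are a
# stand-in for the calibrated model, not the model itself.
from scipy.stats import gamma

response = gamma(a=5, scale=37.5)   # mode = (5 - 1) * 37.5 = 150 seconds

for t in (0, 60, 150, 300, 600):    # seconds after the ad airs
    print(f"t={t:>3}s  weight={response.pdf(t):.5f}  cumulative={response.cdf(t):.0%}")
# t = 0 gets zero weight; response builds to the 150-second peak,
# reaches ~90% by five minutes, and is essentially complete by ten.
```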
Five published academic studies independently confirm this shape — Veverka & Holy (2024) in Applied Stochastic Models, Liaukonyte, Teixeira & Wilbur (2015) in Marketing Science, Joo, Wilbur, Cowgill & Zhu (2014) in Management Science, Lewis & Reiley (2013) at EC ‘13, and Shapiro, Hitsch & Tuchman (2021) in Econometrica. Different teams, different methodologies, different product categories. Same shape: response doesn’t start at maximum. It builds, peaks, and decays.
When your model starts at maximum, it gives credit to whatever web traffic happened to be there anyway. Coincidental traffic gets counted as ad-driven. The numbers look good. They’re systematically wrong.
It’s not just exponential decay
Here’s what I discovered when we researched what competitors actually do: the TV attribution landscape is even messier than the exponential decay problem suggests.
Flat windows miss the shape
Most dedicated TV attribution vendors don’t use exponential decay at all. They use flat time windows — typically five to fourteen minutes — where every web session within the window gets equal credit regardless of when it occurred. A session 30 seconds after the ad (almost certainly coincidental) gets the same weight as a session three minutes after (likely genuine response). Rockerbox uses a 5-minute spike window. Tatari uses a 5-minute immediate window plus a 30-day “DragFactor”. Others use 7- or 14-day flat windows with no time weighting at all.
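For contrast with the curves above, here is what a flat window amounts to in code. The 300-second cutoff and the function itself are illustrative, not any vendor's implementation.

```python
# A flat attribution window, as described above: every session inside the
# window gets identical credit. The 300-second cutoff is one common choice;
# this function is an illustration, not any vendor's actual code.
def flat_window_weight(seconds_after_ad: float, window: float = 300.0) -> float:
    return 1.0 if 0 <= seconds_after_ad <= window else 0.0

print(flat_window_weight(30))    # 1.0 -- almost certainly coincidental, full credit
print(flat_window_weight(180))   # 1.0 -- likely genuine response, same credit
print(flat_window_weight(301))   # 0.0 -- one second past the window, zero credit
```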
Nobody will show you the math
And here’s the part that should concern everyone: most vendors don’t disclose their methodology. Innovid says they use “time decay” but their documentation shows they give “higher weightings to the closest touchpoint” — meaning the most recent ad gets the most credit. That’s functionally the same as exponential decay’s problem: maximum weight where genuine response is least probable.
Neither the MRC nor the IAB prescribes a specific decay model. The MRC requires that attribution windows be “empirically supported” — but doesn’t mandate how. The result is an industry where everyone claims to do attribution, nobody agrees on how, and most won’t show you the math.
The wrong question
Here’s the deeper problem. Attribution was built to answer: “Which channel gets credit for this sale?”
That question assumes one channel caused it. It assumes the platform telling you the answer doesn’t have an incentive to lie. It assumes the model is physically sound. None of those assumptions hold.
I stopped asking that question years ago. The question I care about: “What actually influenced this behavior?”
Not which channel claims it. What stimuli — advertising, weather, news, competitive activity, demographics, seasonal patterns — were present when the response occurred? Can we verify that the stimulus could have reached this person, at the right time, in the right context?
Three things must be true
That’s tracing influence. It’s different from claiming credit. And it requires three things to be true simultaneously.
Geography — Could the stimulus have reached this person? If a TV ad didn’t air in your market, it can’t get credit for your web visit. This sounds obvious, but most attribution systems either ignore geography entirely or approximate it at a level too coarse to be meaningful.
Time — Did the response happen after the stimulus, in the right order? And was the time difference consistent with how humans actually behave? A web session that started before the ad aired cannot have been caused by it. A session that started at the exact instant of the ad is almost certainly coincidental. A session that started two to three minutes later, through a plausible channel like organic search, deserves investigation.
Context — What was the environment? Two identical ads on two different programs produce dramatically different response. News programming drives significantly more digital engagement than entertainment — viewers in a learning mindset act differently than viewers checking out. If your attribution model ignores context, it’s averaging signal with noise.
Geography proves where. Time confirms when. Context reveals why. Without all three, you’re measuring noise and calling it signal.
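To make the idea concrete, here is a minimal sketch of those three checks. The field names, the zip-level coverage set, and the 30-to-600-second plausibility window are hypothetical illustrations, not a production schema.

```python
# A minimal sketch of the three checks: geography, time, context. Field
# names and thresholds are hypothetical illustrations, not a real schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Session:
    zip_code: str
    seconds_after_ad: float   # negative means the session began before the ad
    referrer: str             # e.g. "organic_search"

@dataclass
class Airing:
    covered_zips: set         # zip codes the broadcast could actually reach
    program_context: str      # e.g. "news" or "entertainment"

def trace_influence(session: Session, airing: Airing) -> Optional[dict]:
    # Geography: the stimulus must have been able to reach this person.
    if session.zip_code not in airing.covered_zips:
        return None
    # Time: the response must follow the ad, within a humanly plausible window
    # (the 30-600 second bounds are illustrative, drawn from the curve above).
    if not (30 <= session.seconds_after_ad <= 600):
        return None
    # Context: keep the programming environment attached, so news and
    # entertainment responses are never averaged together into one number.
    return {
        "context": airing.program_context,
        "referrer": session.referrer,
        "seconds_after_ad": session.seconds_after_ad,
    }
```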
Context changes everything
Here’s an example from real data. An advertiser running broadcast in the upper Midwest saw that news programming drove dramatically more digital response than entertainment — the same creative, the same markets, different programming context. Viewers in a news-watching mindset are already in an information-seeking mode. They see an ad and they act on it. Entertainment viewers are in a different cognitive state. They’re checking out, not checking in. A context-blind attribution model averages these together and tells you a meaningless number.
That single insight — which programming context drives response for your specific category — is worth more than an overall “TV drove X conversions” number. But you can’t see it if your model doesn’t track context. And most don’t.
What the answer looks like
The answer isn’t better attribution. It’s a different question entirely.
Instead of “which channel gets credit,” ask: what actually influenced this behavior? Then verify it. Did the stimulus reach this market? Did the response happen in the right time window? Was the context consistent with genuine influence rather than coincidence?
That’s what tracing influence means. It requires geographic verification — down to the zip code, not just the DMA. It requires temporal precision — ordering events to the microsecond, not relying on platform-reported timestamps. It requires separating advertising influence from everything else happening in the world — weather, seasonal patterns, competitive activity, demographic shifts.
And it requires feeding the truth back. When platforms like Google Ads and Facebook learn from actual revenue events instead of pixel fires, their optimization algorithms improve. Every channel in the mix gets better when the training data reflects reality instead of each platform’s self-serving view.
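As a rough sketch of what that feedback loop means in practice: report the verified revenue event to the platform rather than letting a pixel fire stand in for it. The endpoint and payload below are hypothetical placeholders; the real integrations (Google Ads, Meta's Conversions API) each define their own schemas and authentication.

```python
# A rough sketch of feeding verified revenue back to an ad platform. The
# endpoint and payload are hypothetical placeholders; real integrations
# (Google Ads, Meta's Conversions API) define their own schemas and auth.
import requests

def send_verified_conversion(order_id: str, revenue: float, occurred_at: int) -> None:
    payload = {
        "event": "purchase",
        "order_id": order_id,
        "value": revenue,           # actual revenue from the sale, not a pixel's guess
        "event_time": occurred_at,  # unix timestamp of the verified transaction
    }
    # Hypothetical endpoint standing in for a platform's conversions API.
    requests.post("https://ads-platform.example.com/conversions", json=payload, timeout=10)
```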
The difference between “tracing influence” and “claiming credit” might sound semantic. It’s not. It’s the difference between a system designed to tell you who to thank and a system designed to tell you what’s true. One of those is useful. The other is what the industry has been doing for twenty years.
That’s the problem the Insights & Data Engine was built to solve. The methodology is published. The math is available.
The crisis isn’t going away
MarTech Series put it well: “If 2025 was the year marketers stopped believing in perfect attribution, 2026 will be the year they build systems of truth.” The industry is moving from “which channel wins” to “what actually happened” — from credit-claiming to influence-tracing.
The trends are all pointing the same direction. Performance TV became the number-one investment channel in 2026, with 71% of marketers increasing their budgets. CTV ad spending is projected at $38 billion, up 14% year over year. Netflix and Comcast launched Conversion APIs. The IAB published CTV conversion guidelines. Everyone wants TV to prove itself like digital does — but the measurement infrastructure hasn’t caught up.
73% of marketing leaders now view incrementality testing as essential, up from 41% in 2023. The industry is hungry for measurement it can trust. But incrementality testing without geographic precision, temporal accuracy, and context is just another number that looks rigorous and might not be.
That shift can’t happen if the underlying models assume physics that don’t exist. It can’t happen if every vendor grades their own homework. It can’t happen if nobody will show you how the numbers are computed.
Attribution has a credibility problem because it was designed to take credit, not to represent reality. That design flaw doesn’t get fixed with better algorithms or more data or AI. It gets fixed by asking a different question.
What actually influenced the behavior? Start there. Everything else follows.