What actually influenced my customers?
Not what ran. Not who claims credit. What actually influenced behavior.
Everyone claims credit. Nobody agrees.
Add up what every platform claims, and collectively they've credited themselves with three times your actual revenue. Each one is measuring from its own vantage point, using its own model. The totals were never meant to reconcile.
You don't need another platform claiming credit. You need a truthful representation of what stimuli — advertising and otherwise — actually drove the response.
The industry calls this a measurement crisis. Seventy-five percent of buy-side leaders say their current measurement approaches underperform (IAB 2026). The reason is structural: every platform self-attributes, every vendor has an incentive to overclaim, and no one has built a system that describes what happened without also trying to take credit for it.
One taxonomy. One picture of reality.
NEXT90's Insights & Data Engine ingests signals from every source — advertising, weather, demographics, search behavior, CRM, phone calls, news events — and normalizes them through one vendor-agnostic taxonomy. Devices resolved to households. Every event ordered to the microsecond.
A TV airing in Minneapolis, a programmatic impression in Phoenix, and a heat wave in Dallas are described in the same terms, analyzed with the same methodology, and held to the same standard.
The three pillars — context, geography, time — applied consistently regardless of medium or signal type.
How the unified taxonomy works
The IDE's taxonomy is a three-level hierarchy: Channel, Platform, Product. It covers linear broadcast, streaming, radio, programmatic display, digital out-of-home, direct mail, web analytics, phone calls, and CRM systems — dozens of product types across platforms and channels, with built-in detection rules for automatic source identification.
What that means in practice: a linear TV spot on a local NBC affiliate, a programmatic video impression served through a demand-side platform, a radio spot on a drive-time FM station, and a direct mail piece that hits mailboxes in the same zip code all share the same data model. Each event carries a media type (linear, streaming, response) and an attribution role (awareness, engagement, response, conversion). The taxonomy is extensible — when a new medium arrives, it slots into the existing hierarchy without re-architecting the system.
This matters because cross-media measurement is impossible without structural normalization. If your TV data uses one naming convention, your programmatic data uses another, and your CRM uses a third, you cannot compare them. The taxonomy eliminates that problem at the ingestion layer, before any analysis begins.
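The shared data model described above can be sketched in a few lines. This is an illustrative sketch only, not NEXT90's actual schema; the field names and example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaxonomyNode:
    channel: str   # top level, e.g. "broadcast" or "digital"
    platform: str  # e.g. "linear_tv" or "programmatic_video"
    product: str   # e.g. "local_affiliate_spot" or "dsp_video_impression"

@dataclass
class Event:
    source: TaxonomyNode
    media_type: str        # "linear" | "streaming" | "response"
    attribution_role: str  # "awareness" | "engagement" | "response" | "conversion"

# A local TV spot and a programmatic impression land in the same structure,
# so cross-media comparison is possible before any analysis begins.
tv_spot = Event(TaxonomyNode("broadcast", "linear_tv", "local_affiliate_spot"),
                "linear", "awareness")
dsp_imp = Event(TaxonomyNode("digital", "programmatic_video", "dsp_video_impression"),
                "streaming", "awareness")
```

Because both events share one model, a new medium only needs a new `TaxonomyNode`, not a new schema.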
Householding: from devices to people
A household has multiple devices. A phone, a laptop, a tablet, a smart TV. Ad platforms see each of these as separate entities. The IDE resolves them into households using deterministic identity resolution — device fingerprints, IP and geographic clustering at the zip level, authenticated user data where available. Each identity link carries a confidence score from 0 to 100, with the match type recorded.
This means when a TV ad airs in a market and someone in that market picks up a phone and searches, the IDE can connect the device that searched to the household where the TV was on — not through probabilistic guessing, but through a layered identity graph that starts with what it can confirm and scores what it infers.
The result: stimulus and response connect at the household level, not the device level. That distinction matters. A click on a phone is not a person. A household with a confirmed TV market, a confirmed zip code, and a confirmed device cluster is closer to one.
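The layered scoring idea, confirm what you can, score what you infer, can be sketched roughly like this. The match types and weights are invented for illustration; they are not the IDE's actual identity graph.

```python
# Hypothetical weights: deterministic signals score high, inferred signals lower.
MATCH_WEIGHTS = {
    "authenticated": 100,      # confirmed login across devices
    "device_fingerprint": 70,  # stable device-level match
    "ip_zip_cluster": 40,      # shared IP plus zip-level geography
}

def score_link(match_types):
    """Confidence 0-100 for one identity link; the strongest signal sets the score."""
    if not match_types:
        return 0
    return min(100, max(MATCH_WEIGHTS.get(m, 0) for m in match_types))
```

A link backed only by IP clustering stays a low-confidence inference; an authenticated match is treated as confirmed.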
The stimulus-response-conversion chain
Every influence follows a chain: a stimulus occurs, a response follows, a conversion completes.
Stimulus
A TV ad airs, a programmatic impression serves, a weather event happens.
Response
Someone searches, visits a website, picks up a phone.
Conversion
A form fills, a job books, revenue records.
The IDE traces this chain from origin to outcome. A TV ad detection triggers a response window. Within that window, a web session appears — arriving through organic search, direct visit, or referral. The session is tagged by the IDE's first-party tracking. If the visitor calls, the call is matched to the session. If the call becomes a booked job in the CRM, the revenue attaches to the chain.
Each link in the chain uses a different identity key — session ID connects the web visit, GCLID connects the search click, phone number connects the call, CRM contact ID connects the revenue. The IDE holds these keys together so the full journey is visible, not just the last touch or the first touch, but the actual sequence of events.
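Holding those keys together can be pictured as a simple join across records. The record shapes and values below are hypothetical, a sketch of the stitching logic rather than the IDE's pipeline.

```python
# Each record carries a different identity key, as in the chain above.
session = {"session_id": "s-123", "gclid": "g-456"}                    # web visit
call    = {"phone": "+1-612-555-0100", "session_id": "s-123"}          # matched call
job     = {"contact_id": "c-789", "phone": "+1-612-555-0100",
           "revenue": 4200.0}                                          # CRM outcome

def stitch(session, call, job):
    """Walk the chain key by key so the full journey stays visible."""
    chain = {"web": session["session_id"]}
    if call.get("session_id") == session["session_id"]:
        chain["call"] = call["phone"]          # call matched to the session
        if job.get("phone") == call["phone"]:
            chain["revenue"] = job["revenue"]  # revenue attached to the chain
    return chain

chain = stitch(session, call, job)
```

If any key fails to match, the chain simply stops at the last confirmed link instead of guessing.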
What you see
A unified dashboard where every influence sits side by side. Same dimensions. Same time references. Same geographic precision.
For broadcast: by market, daypart, network, creative, programming genre. A local NBC affiliate in Minneapolis and a national cable buy on ESPN appear in the same view, but with their actual geographic footprints respected — the local affiliate reaches its TV market, the national buy reaches all 254 markets.
For programmatic: by publisher, page content, creative, audience segment. Not just "your ad ran on a website" but which publisher, which page, which content context surrounded the impression.
For non-advertising signals: by geography, time period, and correlation with response. Weather, demographics, seasonal patterns — anything that influences behavior, described in the same terms as advertising.
For phone calls: matched to sessions, matched to CRM outcomes, with revenue attached. The call that came from a TV-driven search is visible alongside the call that came from a paid click and the call that came from a direct visit.
All in one place. Your data. Your dashboard. An AI assistant you can ask anything — in plain language, answered from your actual data.
How the IDE verifies influence
Every connection is tested against context (does the environment explain the response?), geography (could the stimulus have reached this person?), and time (did the response follow in the right order, to the microsecond?). Overlapping stimuli are resolved through conflict resolution — the strongest influence wins proportionally, not by default.
Geography
The IDE maintains geo shapes for all 254 TV markets across North America — 210 US markets and 44 Canadian broadcast markets with geocartography shapes built from the ground up. Every response resolves through a full geographic stack: zip code, city, metro, county, TV market, country, time zone. If a stimulus didn't reach the market where the response occurred, it cannot receive credit. Period.
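That hard eligibility rule reduces to a simple gate. This is an assumed, simplified check; the real system resolves against the full geographic stack and market boundary shapes.

```python
# Hypothetical geographic stack, ordered fine to coarse as in the text.
GEO_STACK = ("zip", "city", "metro", "county", "tv_market", "country", "time_zone")

def eligible(stimulus_markets, response):
    """A stimulus that never reached the response's TV market gets zero credit."""
    return response["tv_market"] in stimulus_markets
```

Everything downstream, time curves and context weighting, only runs on stimuli that pass this gate.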
Time
Every data source uses a different time reference — broadcast time for TV, UTC for web sessions, caller local time for phone calls, business time zone for CRM data. The IDE converts every event to UTC for ordering, then applies a response curve calibrated against decades of real-world observation and validated against nearly a million web sessions. The curve reflects how people actually behave: no one responds in zero seconds, response ramps up as people register and act, and nearly all response is captured within minutes.
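Both steps, normalizing to UTC and weighting by lag, can be sketched as follows. The curve shape here is illustrative (a ramp-and-decay form that is zero at lag zero and peaks shortly after); NEXT90's calibrated curve is not published, so treat the function and its `peak` parameter as assumptions.

```python
import math
from datetime import datetime, timezone, timedelta

def to_utc(local_dt, tz):
    """Normalize a source-local timestamp (broadcast, caller, CRM) to UTC for ordering."""
    return local_dt.replace(tzinfo=tz).astimezone(timezone.utc)

def response_weight(lag_s, peak=90.0):
    """Illustrative response curve: zero at t=0, maximum at `peak` seconds, then decay."""
    if lag_s <= 0:
        return 0.0  # no one responds in zero seconds
    x = lag_s / peak
    return x * math.exp(1 - x)  # equals exactly 1.0 at lag_s == peak

central = timezone(timedelta(hours=-5))  # broadcast time, e.g. Central daylight time
airing  = to_utc(datetime(2025, 7, 4, 18, 30, 12), central)
session = datetime(2025, 7, 4, 23, 31, 40, tzinfo=timezone.utc)  # web sessions arrive in UTC
lag = (session - airing).total_seconds()  # both events now share one clock
```

Once every event is on one clock, ordering and lag weighting are mechanical.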
Context
Two identical stimuli in two different environments can produce dramatically different responses. News programming drives substantially higher digital response than entertainment programming for certain advertisers — a real signal about viewer mindset, not noise. The IDE tracks content context as a first-class dimension: programming genre, network, daypart, creative message on the stimulus side; channel, device, page content on the response side.
When multiple stimuli compete for credit on the same response, the IDE applies conflict resolution rules. A national cable ad and a local affiliate ad can share credit proportionally. Two local ads in the same market compete — the one with the stronger time signal wins. The system does not default to last-touch or first-touch. It uses the physics of the situation.
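Proportional sharing, as opposed to winner-take-all, can be sketched in a few lines. The strength values are hypothetical inputs standing in for whatever geographic, temporal, and contextual evidence the system computes.

```python
def split_credit(candidates):
    """Share credit in proportion to evidence strength; no last-touch default."""
    total = sum(c["strength"] for c in candidates)
    return {c["id"]: round(c["strength"] / total, 4) for c in candidates}

# A national cable spot and a local affiliate spot competing for one response.
shares = split_credit([
    {"id": "national_espn_spot", "strength": 0.3},
    {"id": "local_nbc_spot",     "strength": 0.7},  # stronger time signal
])
```

The stronger signal wins more of the credit, but the weaker one is not zeroed out by position in the journey.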
Let's look at your data
Every organization's influence picture is different. Let's see what yours looks like.
Start the Conversation