From Dawn to Dusk.
To Dawn Again.
Ads are one signal. Influence has thousands.
The Insights & Data Engine
We call it the IDE. It ingests every signal that influences your business, unifies those signals into a single picture of reality, and makes that picture actionable.
The IDE isn't a dashboard. It isn't a report. It's a continuous cycle — each pass through the system makes the next one smarter.
Ingest
Unify
Connect
Articulate
Feed Back
Optimize
Predict
Every signal that touches your business. Broadcast airings, web sessions, phone calls, search activity, weather patterns, news events, CRM records, demographic data, viral trends. If it influences behavior, it belongs in the picture.
Tracing the Complete Journey
Google doesn't motivate a search. It enables it. No one randomly types a brand name into a search bar. Something started the journey — a broadcast spot, a radio ad, a billboard, a news segment, a conversation, a change in the weather.
Advertising, weather, news events, seasonal patterns, competitive activity
Search, social engagement, website visits, app activity
Calls, forms, appointments, purchases, revenue
64% of TV-attributed web sessions arrive via Google Organic. Google captured the action. TV caused it.
Most systems credit the last touchpoint. The IDE uses geographic precision, time ordering, and the gamma response curve to trace back to what actually started the journey.
The Science Behind It
The framework. The math. The proof.
Proving It Wasn't Coincidence
Knowing the shape of response isn't enough. You also need to prove that a specific stimulus could have reached a specific person, at the right time, in the right context. Otherwise you're crediting noise.
Context
Does the environment explain the response?
Two identical stimuli in two different contexts produce different results. News programming drives dramatically different response than entertainment — viewers in a learning mindset act differently than viewers checking out.
Context includes what was on, what was advertised, the weather, the news cycle, the demographics of the area, and the content on the page where someone responded. The IDE considers both the context of the stimulus and the context of the action.
Every new signal added to the system expands what context means — making the picture more complete with each cycle.
Geography
Could this stimulus have reached this person?
The IDE resolves every event to its actual delivery footprint — not an approximation, not a model. From the TV market level down to the zip code, the postal code, the individual household. Over a million geographic entities across the US and Canada, including broadcast market boundaries we built ourselves because they didn't exist. Cable zone sub-markets. Custom geo-fences for outdoor and digital out-of-home. If it has a delivery footprint, we can map it.
If a stimulus never reached a geography, no one there could have been influenced by it. This filter eliminates impossible connections before any weighting occurs.
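The geography filter above can be sketched in a few lines. This is a minimal illustration, not the IDE's actual schema: the `Stimulus` class, the zip-code footprints, and the function name are all hypothetical stand-ins for whatever the real system uses.

```python
# Hypothetical sketch: a stimulus can only be credited for a response if the
# response's location falls inside the stimulus's delivery footprint.
from dataclasses import dataclass

@dataclass
class Stimulus:
    name: str
    footprint: frozenset  # zip codes the stimulus actually reached

def geographically_possible(stimuli, response_zip):
    """Drop any stimulus whose footprint does not contain the response's zip."""
    return [s for s in stimuli if response_zip in s.footprint]

spots = [
    Stimulus("dallas_spot", frozenset({"75201", "75202"})),
    Stimulus("chicago_spot", frozenset({"60601", "60602"})),
]
# A web session from a Dallas zip code can only be credited to the Dallas spot;
# the Chicago spot is eliminated before any weighting happens.
candidates = geographically_possible(spots, "75201")
```

Because impossible pairings are removed up front, every later step (time ordering, response-curve weighting) only ever compares stimuli that could actually have reached the person.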
Time
Did the response follow the stimulus, in the right order?
Every event is ordered to the microsecond. Broadcast time runs 6 AM to 6 AM and is timezone-naive. Web sessions arrive in UTC. Phone calls arrive in local time. Platform clicks arrive with a date but no timestamp — our tracking tag adds the precision.
Morning TV airs at 7 AM in four different time zones. Syndicated programs air at different times in different markets. A weather front moves across the country over hours. The IDE reconciles all of it — because if you get the order wrong, you credit the wrong stimulus.
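The clock reconciliation described above can be sketched for the broadcast-log case. Assumptions here are mine, not the IDE's: that each station's time zone is known, and that the 6 AM-to-6 AM broadcast day means an airing logged before 6 AM belongs to the next calendar day. All names are illustrative.

```python
# Hypothetical sketch: normalize a timezone-naive broadcast-day timestamp to UTC
# so it can be ordered against web sessions (UTC) and calls (local time).
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def broadcast_to_utc(broadcast_date: str, clock: str, station_tz: str) -> datetime:
    """Convert a 6 AM-to-6 AM broadcast-day timestamp to an absolute UTC time."""
    local = datetime.fromisoformat(f"{broadcast_date} {clock}")
    if local.hour < 6:                # before 6 AM: next calendar day
        local += timedelta(days=1)
    return local.replace(tzinfo=ZoneInfo(station_tz)).astimezone(timezone.utc)

# A 1:30 AM airing on the May 1 broadcast day in Dallas is really May 2 local,
# which is 06:30 UTC (US Central daylight time is UTC-5).
t = broadcast_to_utc("2024-05-01", "01:30", "America/Chicago")
```

Once every signal lives on one UTC timeline, "did the response follow the stimulus?" becomes a simple comparison instead of a cross-timezone guess.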
How People Actually Respond
Most approaches to tracing influence assume maximum response at the instant a stimulus occurs. That's physically impossible. No one sees a TV ad and visits a website in zero seconds.
The gamma response curve: starts at zero, ramps up as people register the stimulus, peaks at the most probable response time, then decays.
The naive alternative: assumes maximum response the instant the ad airs, credits coincidental activity, never starts at zero.
This isn't a theoretical assumption. It's the shape we observed across decades of real-world data and validated against nearly a million web sessions.
The shape matters because it determines what gets credit. A model that starts at zero and peaks where people actually act filters out the noise naturally.
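The curve described above can be sketched as a gamma probability density over response lag. This is a minimal illustration of the shape, not the fitted model: the shape and scale values below are placeholder assumptions, not the IDE's actual parameters.

```python
# Hypothetical sketch: a gamma-shaped response weight over the lag between a
# stimulus and a response. Zero at the instant of airing, ramps to a peak,
# then decays, the curve contrasted with the naive instant-spike model.
import math

def gamma_weight(lag_seconds: float, shape: float = 2.0, scale: float = 60.0) -> float:
    """Gamma probability density evaluated at a response lag in seconds."""
    if lag_seconds <= 0:
        return 0.0                    # no response before (or at) the stimulus
    x = lag_seconds
    return (x ** (shape - 1) * math.exp(-x / scale)) / (math.gamma(shape) * scale ** shape)

# With these placeholder parameters the peak sits at (shape - 1) * scale = 60 s:
# activity one minute after an airing outweighs activity five seconds after it,
# and both outweigh activity ten minutes later.
```

Because the weight is literally zero at lag zero, coincidental activity at the instant of airing earns no credit, which is the filtering property the text describes.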
When Multiple Sources Compete for Credit
Even after context, geography, and time filter out the impossible, there are moments when two or more stimuli legitimately overlap — a national broadcast spot and a local affiliate airing in the same market, minutes apart. A programmatic impression and a radio spot reaching the same household in the same window.
The overcounting problem: give everyone full credit
Most systems give every platform full credit for the same conversion. Add up the numbers and they exceed your actual sales. Nobody's lying — everybody's overcounting. The math was never designed to add up across platforms.
The simplification problem: pick one arbitrarily
Last-touch wins. Or first-touch wins. Or the biggest channel wins by default. The decision is made by a rule that ignores everything about the actual stimuli — when they arrived, where they reached, and how strongly they influenced behavior.
Proportional, on equal terms
Not every overlap is a conflict. A local spot running in Dallas and another running in Chicago? Geography resolves it instantly — each ad only reaches its own market. No conflict, no tie-break needed.
A national ABC prime-time spot and a local diginet airing in the same market, three minutes apart? Now it matters. The response shape tells you where each stimulus sits on its probability curve at the moment the session arrived. The one closer to its peak had more influence. Geographic precision confirms both actually reached the market — not an approximation, the actual delivery footprint. Publisher strength factors in: a prime-time network spot carries different weight than a low-power diginet airing.
Two national spots from the same advertiser, overlapping in the same window? The unified taxonomy means every stimulus — TV, radio, programmatic, CTV — speaks the same language and competes on equal footing. The system doesn't pick a winner. It weights each proportionally based on the actual data: response timing, geographic reach, and signal strength.
The strongest influence wins proportionally. Not winner-take-all. Not split evenly. Not by a rule that ignores the data. By the data itself.
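The proportional weighting above can be sketched as a normalization over the surviving candidates. The combination rule (response-curve value times publisher strength) and every number below are illustrative assumptions, not the IDE's actual formula.

```python
# Hypothetical sketch: after geography and time eliminate impossible stimuli,
# each survivor gets credit in proportion to its score, never winner-take-all,
# never an even split.

def proportional_credit(candidates):
    """candidates: list of (name, curve_value_at_lag, publisher_strength)."""
    scores = {name: curve * strength for name, curve, strength in candidates}
    total = sum(scores.values())
    return {name: score / total for name, score in scores.items()}

# A prime-time network spot near its response peak vs. a diginet airing further
# from its peak: both receive credit, weighted by the data rather than a rule.
credit = proportional_credit([
    ("network_prime", 0.006, 1.0),   # near its curve's peak, strong publisher
    ("local_diginet", 0.003, 0.4),   # further from peak, weaker signal
])
```

The shares always sum to one, so credit across platforms adds up to actual conversions, which is exactly what the overcounting approach cannot guarantee.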
Validated by Published Research
Five published academic studies independently confirm the same response pattern. Our parameters are fit to our own data; the academic literature confirms the behavior.
Confirms the ramp-up → peak → decay shape using e-commerce response data. Read the study →
Demonstrated web traffic lift within a two-minute post-ad window across 20 brands. Read the study →
Measured TV advertising's effect on branded search queries — TV drives specific search behavior. Read the study →
Detected search spikes within 15 seconds of ad conclusion, persisting up to an hour. Read the study →
The most rigorous econometric study of TV's behavioral impact — confirmed measurable response across 288 brands, even where ROI varies. Read the study →
Request a Methodology Briefing
We share our full methodology under NDA with qualified prospects. The gamma curve. The validation data. The conflict resolution approach. It's documented — and we'll walk you through it.
We publish ours. If your current vendor doesn't publish theirs, ask why.