Pixeltree

Analytics

Multi-Touch Attribution Infrastructure for DTC

Attribution infrastructure that blends MMM, incrementality tests, and MER so the finance team and the media team read from the same source of truth.

What you get

Deliverables, not deliverable-ish.

Scoped plan

Written scope with success criteria, not a vague retainer.

Senior execution

The person scoping the work is the person doing the work.

Measurable output

Deliverables you can point at. Dashboards, flows, code, docs.

Clean handoff

Documentation and training so the work lives inside your team.

How we work

Our approach.

The problem attribution infrastructure solves

Your CMO asks whether to cut TikTok or double it, and five different numbers answer. Meta Ads Manager says ROAS is 4.1. GA4 last-click says 2.3. Triple Whale last-click says 3.6. Triple Whale pixel says 2.9. Your MER says blended is 1.8. Everybody is right and nobody agrees. The meeting ends with a gut call and a promise to revisit next quarter.

This is attribution without infrastructure. The problem is not the numbers themselves, which are each measuring a real thing. The problem is that the organization has never agreed on which number answers which question. Platform ROAS answers whether the ad account is healthy relative to itself over time. Last-click GA4 answers whether a given landing page and campaign are converting. MER answers whether the business is profitable at the current spend level. Incrementality answers whether a given channel is actually driving new revenue versus harvesting existing demand. When all four are used interchangeably, every meeting becomes a negotiation rather than a decision.

The consequence compounds. The media team learns to quote whichever number justifies the budget they want. The finance team loses trust and starts running a parallel model in Excel. The CEO loses patience and makes intuition-based calls. Good channels get cut because their last-click number looks weak in a dashboard that was never designed to measure them. Bad channels survive because their platform number looks healthy. The brand burns eighteen months of growth before anyone names the real problem, which is infrastructure, not skill.

Our approach

We build attribution infrastructure in five steps, and we run it as an eight-week engagement because attribution is a system problem, not a tool problem.

Step one is decision mapping. We interview every stakeholder who makes a spending decision and document which decisions they own and which data they currently use to make them. Typical decisions: budget reallocation between channels, budget scaling up or down, creative iteration calls, landing page iteration calls, new channel tests, geo expansion. Each decision maps to a data source and a cadence. This document becomes the backbone of the entire engagement.

Step two is the MER framework. Marketing Efficiency Ratio, defined as total revenue divided by total paid media spend, is the number the finance team and the CEO actually care about. We build MER daily, weekly, and monthly, with rolling seven-, thirty-, and ninety-day windows, broken out by new versus returning customers where useful. We document break-even MER based on contribution margin. See our guide on MER versus ROAS for the underlying logic and break-even ROAS for the math on profit floors.
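The MER layer above can be sketched in a few lines of pandas. This is a minimal illustration, not our production build; the daily figures, column names, and the 55 percent contribution margin are all assumptions for the example:

```python
import pandas as pd

# Illustrative daily revenue and paid-spend series (numbers are made up).
df = pd.DataFrame({
    "revenue": [12000, 13500, 11000, 14200, 12800, 15000, 13900],
    "paid_spend": [7000, 7400, 6900, 7800, 7100, 8200, 7600],
})

# Daily MER: total revenue divided by total paid media spend.
df["mer_daily"] = df["revenue"] / df["paid_spend"]

# Rolling 7-day MER: sum both series first, then divide.
# Averaging the daily ratios would overweight low-spend days.
df["mer_7d"] = df["revenue"].rolling(7).sum() / df["paid_spend"].rolling(7).sum()

# Break-even MER from contribution margin: at a 55% margin,
# every $1 of spend needs about $1.82 of revenue to break even.
contribution_margin = 0.55
breakeven_mer = 1 / contribution_margin
print(round(breakeven_mer, 2))  # → 1.82
```

The sum-then-divide detail is the design choice that matters: it is what makes the rolling window a true blended ratio rather than an average of noisy daily ratios.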

Step three is the MMM layer. For brands under forty million in revenue we ship a pragmatic spreadsheet MMM with weekly granularity, channel-level spend and revenue, seasonality adjustments, and adstock decay. For brands above that threshold we integrate with Recast, Lifesight, Prescient, or similar. The MMM outputs channel contribution estimates and marginal ROAS curves, which is what the reallocation decision actually needs.
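Adstock decay, the piece of the spreadsheet MMM that trips most teams up, is simple to state in code. A sketch of the geometric carryover transform applied to each channel's weekly spend before regression; the 0.5 decay rate is an illustrative assumption and would normally be fitted per channel:

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric adstock: each week's effective spend carries over
    a decaying share of all previous weeks' spend."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

# A single burst of spend keeps working for several weeks afterward.
weekly_spend = np.array([100, 0, 0, 0])
print(list(adstock(weekly_spend, decay=0.5)))  # → [100.0, 50.0, 25.0, 12.5]
```

In the regression step, weekly revenue is modeled against the adstocked series for each channel plus seasonality terms; the fitted coefficients give channel contribution, and the slope of modeled revenue with respect to an extra dollar of spend gives the marginal ROAS curve.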

Step four is the incrementality program. We design geo holdout tests for Meta and Google on a rotating schedule, typically one test per channel every six months. Each test defines a control region, a test region, a budget change, a duration, and a readout method. Results feed back into the MMM as coefficient calibration.
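One common readout method, a difference-in-differences against the control region's trend, can be sketched with illustrative numbers (the figures and the simple two-region setup are assumptions; real readouts depend on the test design):

```python
# Geo holdout readout via difference-in-differences (illustrative numbers).
# The control region keeps spend flat; the test region pauses the channel.

pre_test, post_test = 100_000, 88_000    # test-region revenue before / after
pre_ctrl, post_ctrl = 100_000, 102_000   # control-region revenue before / after

# Counterfactual: what the test region would have done
# had it followed the control region's trend.
expected_test = pre_test * (post_ctrl / pre_ctrl)

incremental = post_test - expected_test
lift = incremental / expected_test
print(round(incremental), round(lift, 3))  # → -14000 -0.137
```

Here revenue dropped about 13.7 percent below the counterfactual when the channel was paused, which is evidence the channel was driving incremental revenue rather than harvesting existing demand.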

Step five is the decision dashboard. Every stakeholder group gets a dashboard tuned to their decisions. Media gets platform and pixel data with cost and creative breakdowns. Finance gets MER, contribution margin, and MMM channel attribution. The CEO gets a single-page weekly summary. Nobody gets forced to use somebody else's view, which is how you get buy-in.

What you get

▸ Decision map document linking every recurring spending decision to the data source that answers it.
▸ MER framework implemented in your BI tool with documented break-even thresholds by channel.
▸ MMM layer, either a spreadsheet model or an integration with a commercial tool, producing channel contribution and marginal ROAS curves.
▸ Geo holdout test design document with a twelve-month rolling calendar.
▸ Incrementality readouts for the first two tests run during the engagement.
▸ Decision dashboards tuned per stakeholder: media, finance, leadership.
▸ Reconciliation query between platform-reported, GA4 last-click, MMM-attributed, and Shopify gross revenue.
▸ Quarterly attribution review template so the team can run reviews without us after the engagement ends.
▸ Recorded training for each stakeholder group on how to use their dashboard and which questions it answers.
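The reconciliation deliverable in the list above can be prototyped in pandas before it moves into your warehouse. A sketch with illustrative weekly numbers; the column names and figures are assumptions:

```python
import pandas as pd

# Weekly revenue as each source reports it (all numbers illustrative).
recon = pd.DataFrame({
    "week": ["2024-W01", "2024-W02"],
    "platform_reported": [120_000, 131_000],
    "ga4_last_click": [68_000, 74_000],
    "mmm_attributed": [98_000, 104_000],
    "shopify_gross": [100_000, 107_000],
})

# Express every source as a share of Shopify gross, the financial anchor.
for col in ["platform_reported", "ga4_last_click", "mmm_attributed"]:
    recon[col + "_pct"] = recon[col] / recon["shopify_gross"]

print(recon.filter(like="_pct").round(2))
```

A platform share persistently above 100 percent of Shopify gross signals double-counting or view-through inflation; a GA4 share far below it signals multi-session purchase paths. The gaps themselves are the diagnostic.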

Timeline

Four phases across eight weeks.

Weeks one and two are discovery. We run stakeholder interviews, document the decision map, audit the current state of every data source, and agree on definitions for MER and contribution margin.

Weeks three and four are the MER and reconciliation build. We wire daily MER in your BI tool, we build the reconciliation query against Shopify, and we deliver the first version of the finance dashboard.

Weeks five and six are the MMM layer. We build the spreadsheet model or integrate the commercial tool, we calibrate against the last twelve to eighteen months of data, and we deliver the first channel contribution readout.

Weeks seven and eight are incrementality test design and dashboard handoff. We design the first two geo holdout tests, we launch the first one before sign-off, and we train each stakeholder group on their dashboard.

Mini case anatomy

A skincare brand in the twenty-five to forty million revenue range had five sources of truth and a CMO fighting a quarterly budget battle with the CFO. Meta reported a 4.2 ROAS. GA4 last-click reported 1.9. Triple Whale pixel reported 2.7. Shopify showed revenue roughly flat quarter over quarter despite twenty percent more spend. The brand was about to cut Meta by half because GA4 said it was not working.

We ran the full eight-week engagement. MER calibration showed blended at 1.6 against a break-even of 1.4, so the brand was profitable but thin. MMM showed that Meta was actually contributing about thirty-three percent of revenue, consistent with Meta's own reported number once deduplication against email and direct was accounted for. GA4 last-click was understating Meta by roughly fifty percent because most purchases happened several sessions after the original ad click, so last-click credit went to later touchpoints.

The brand did not cut Meta. Instead the MMM identified that TikTok had higher marginal ROAS at current spend levels, so incremental budget shifted there. Six months later blended MER was at 1.9, revenue was up twenty-two percent, and the quarterly budget battle had been replaced by a quarterly attribution review using the template we left behind. For the underlying logic see our posts on attribution for DTC using MER and MER versus ROAS.

Attribution works best when it sits on a clean data foundation, which is why we usually recommend pairing this engagement with GA4 implementation and server-side tagging. For the complete picture, see our analytics and reporting hub and our guide on break-even ROAS.

FAQ

Questions we hear most.

Is multi-touch attribution dead?
Pure last-click MTA is dead for paid social. Blended approaches that combine platform-reported data, a lightweight MMM, periodic incrementality tests, and MER as the finance-level truth still work well. That is what we build.

Do you integrate with MMM tools like Recast or Lifesight?
For brands above forty million in revenue, yes, and we implement against whichever you choose. Under that threshold a well-built spreadsheet MMM in Google Sheets or a simple Python notebook works, and we ship that as part of the engagement.

How do you validate the model?
We run geo holdout tests on Meta and Google twice a year per channel. Results calibrate the MMM coefficients and validate platform-reported ROAS. Without periodic incrementality you are building a model on vibes.

What attribution window do you recommend?
We recommend a one-day click and one-day view window as the default, and we report both platform numbers and the MMM-calibrated numbers in the finance dashboard. The gap is itself a diagnostic.

Can we keep the attribution tools we already use, like Triple Whale?
Yes. Attribution tools are inputs to the system, not the system itself. We wire them as data sources alongside GA4 and BigQuery, and we reconcile their numbers against MER and incrementality results.

Let's see if we're a fit.

15 minutes. We'll tell you whether this service fits where you are. If not, we'll name what does.

Book a 15-min call