
How to Read a Marketing Mix Model Report Without Getting Snowed

[Illustration: a marketer holding a flashlight to peek inside an oversized wooden contraption full of brass gears, springs, and tangled wires, a "look under the hood" metaphor for reading a Marketing Mix Model.]

There are four questions that will tell you whether a Marketing Mix Model is doing serious work or whether you're being sold a forecast in a lab coat. Most marketers in an MMM presentation never ask any of them. The consultancy walks out with a renewal; the brand walks out with a $4M reallocation recommendation it can't defend in front of the CFO.


The model under the hood is usually fine — the snowing happens in the translation from regression output to budget recommendation. Asked in order, these four questions surface where that translation went thin.


Question 1: What's the base-versus-incremental split, and is it plausible?


Every MMM starts by decomposing total sales into base (what you'd have sold with zero marketing) and incremental (what marketing actually drove). For most CPG brands the split lands around 70–75% base / 25–30% incremental. For a young DTC brand it might be 30 / 70. For an enterprise SaaS where most revenue is contracted renewals, it could be 90 / 10.
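To make the decomposition concrete, here's a minimal sketch of how the split falls out of an additive MMM, where weekly sales are modeled as base (intercept plus trend and seasonality) plus per-channel contributions. Every number below is hypothetical, chosen to land near the CPG ballpark above.

```python
# Minimal sketch of a base-vs-incremental split from an additive MMM.
# Assumes weekly sales = base + sum of modeled channel contributions.
# All figures are hypothetical.
import numpy as np

weeks = 104
base = np.full(weeks, 800_000.0)          # intercept + trend + seasonality ($/week)
contributions = {                          # modeled weekly channel lift ($)
    "paid_search": np.full(weeks, 120_000.0),
    "paid_social": np.full(weeks, 70_000.0),
    "ctv":         np.full(weeks, 40_000.0),
}

incremental = sum(c.sum() for c in contributions.values())
total = base.sum() + incremental

print(f"base share:        {base.sum() / total:.0%}")   # ~78%
print(f"incremental share: {incremental / total:.0%}")  # ~22%
```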


Look at the number on slide 7 and ask if it matches the business you actually run. If a CPG client is told their incremental share is 55%, somebody tuned the model to overweight marketing's contribution, usually because the consultant needs marketing to look impactful enough to justify the next engagement. If the incremental share is 8% on a brand that runs $40M a year of paid media, the model underfit, and the downstream recommendations are being driven by noise.


The split is the foundation. If the foundation reads wrong, every percentage on every later slide is wrong by the same factor.


Question 2: What's the channel ROI, and what's the confidence interval around it?


The middle of the deck will show ROAS or ROI by channel: paid search 4.2x, paid social 2.1x, CTV 1.4x, podcasts 0.9x. Headlines like "shift budget out of podcasts" come straight from this slide.


But every one of those numbers is a regression coefficient with a standard error around it. A point estimate of 1.4x for CTV with a 95% confidence interval of [0.6x, 2.2x] is statistically indistinguishable from "we have no idea." A point estimate of 4.2x for paid search with a CI of [3.9x, 4.5x] is something you can act on.
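If you want to sanity-check the deck yourself, the arithmetic is simple: under the usual normal approximation, the 95% interval is the point estimate ± 1.96 × standard error. A minimal sketch, with hypothetical channel names and standard errors chosen to reproduce the two examples above:

```python
# Minimal sketch: turning a channel ROAS point estimate and its standard
# error into a 95% confidence interval. All numbers are hypothetical.
channels = {
    # name: (point ROAS, standard error of the coefficient)
    "paid_search": (4.2, 0.15),
    "paid_social": (2.1, 0.40),
    "ctv":         (1.4, 0.41),
    "podcasts":    (0.9, 0.55),
}

for name, (roas, se) in channels.items():
    lo, hi = roas - 1.96 * se, roas + 1.96 * se   # normal approximation
    actionable = lo > 1.0 or hi < 1.0             # CI clear of break-even
    print(f"{name:12s} {roas:.1f}x  CI [{lo:.1f}x, {hi:.1f}x]"
          + ("" if actionable else "  <- indistinguishable from break-even"))
```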


If the deck doesn't show confidence intervals, ask. If the consultant says the intervals are "directionally fine," they're hiding something. The honest version of an MMM presentation prints the CI next to every channel number — and acknowledges that for any channel where spend has been roughly flat for the last 18 months, the model has almost no signal to work with and the CI will be huge.


A useful gut check: any channel that's contributed less than 5% of total spend in the modeling window should be flagged as low-confidence regardless of what the point estimate says.
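That check takes four lines; a minimal sketch with hypothetical spend figures:

```python
# Flag channels under 5% of total spend in the modeling window.
# Spend figures ($M) are hypothetical.
spend = {"paid_search": 18.0, "paid_social": 12.0, "ctv": 7.0, "podcasts": 1.5}
total = sum(spend.values())

for name, s in spend.items():
    if s / total < 0.05:
        print(f"{name}: {s / total:.1%} of spend -> treat its ROI as low-confidence")
```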


Question 3: What do the response curves say about saturation?


Response curves are the most underused output in any MMM deck. They show, for each channel, how the next dollar of spend translates to incremental revenue. The curve has a shape — usually steep at low spend and flattening as spend climbs.


The interesting decision isn't where the channel sits on the ROAS slide; it's where the channel sits on its own response curve. A channel at 80% saturation means the next dollar buys you maybe 30 cents of incremental revenue — even if its average ROAS still looks healthy. A channel at 40% saturation has room to grow before diminishing returns kick in.


The cover-slide reallocation recommendation should come from these curves, not from average ROAS. If the deck recommends "move budget from paid search to CTV" and you can't find a slide comparing the marginal-return curves of those two channels at current spend levels, the recommendation is being driven by averages, not margins. That's the textbook way to under-invest in your strongest channel and over-invest in a weaker one.
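The average-versus-marginal distinction fits in a few lines. Here's a minimal sketch using a Hill-type saturation curve, one common functional form for MMM response curves (your vendor's may differ); the parameters and spend level are hypothetical.

```python
# Minimal sketch: average vs. marginal ROAS on a Hill-type response curve,
# revenue(s) = Vmax * s^a / (s^a + k^a). Parameters are hypothetical.
def revenue(spend, vmax, k, a=1.0):
    return vmax * spend**a / (spend**a + k**a)

def marginal_roas(spend, vmax, k, a=1.0, eps=1e-4):
    # Numerical derivative: incremental revenue from the next dollar.
    return (revenue(spend + eps, vmax, k, a) - revenue(spend, vmax, k, a)) / eps

vmax, k = 40.0, 5.0      # revenue ceiling ($M), half-saturation spend ($M)
current_spend = 12.0     # $M

avg = revenue(current_spend, vmax, k) / current_spend
marg = marginal_roas(current_spend, vmax, k)
sat = revenue(current_spend, vmax, k) / vmax

print(f"saturation:    {sat:.0%}")     # how far up the curve the channel sits
print(f"average ROAS:  {avg:.1f}x")    # what the ROAS slide shows
print(f"marginal ROAS: {marg:.2f}x")   # what the next dollar actually buys
```

Run it and the trap is visible: a channel 71% saturated still shows a healthy 2.4x average ROAS while the next dollar returns only 0.69x.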


Question 4: How was the model validated against actual ground truth?


This is the question that separates real MMM from expensive curve-fitting. A model built on three years of historical data can be made to fit that history almost arbitrarily well — and still be useless for predicting the next quarter.


Two validations matter:


  • Backtest on a holdout period. The model is trained on, say, January 2023 through June 2025 and then asked to predict July through December 2025 without seeing it. The deck should show predicted-versus-actual sales for the holdout with the mean absolute percentage error (MAPE). Anything under 10% MAPE is good; 10–15% is workable; over 20% means the model has not earned the right to recommend anything (a sketch of both checks follows this list).

  • Calibration against incrementality tests. If the brand has run any geo holdout tests, ghost ads, or platform-side conversion lift studies in the modeling window, the MMM's channel-level lift estimates should roughly match those experimental results. When the MMM says paid social drives 18% lift and a clean Meta conversion lift study from the same window said 3%, the MMM is wrong, not the experiment.
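Both checks reduce to a few lines; a minimal sketch with hypothetical numbers:

```python
# Minimal sketch of the two validation checks. Monthly holdout sales ($M)
# and lift estimates are hypothetical.
import numpy as np

actual    = np.array([9.8, 10.4, 11.1, 9.5, 12.0, 13.2])   # holdout actuals
predicted = np.array([10.1, 9.9, 11.8, 10.2, 11.4, 14.1])  # model predictions

mape = np.mean(np.abs((actual - predicted) / actual))
print(f"holdout MAPE: {mape:.1%}")  # <10% good, 10-15% workable, >20% reject

# Calibration: MMM lift vs. a clean experiment from the same window.
mmm_lift, experiment_lift = 0.18, 0.03   # hypothetical paid-social estimates
if abs(mmm_lift - experiment_lift) > 0.05:  # tolerance is a rule of thumb
    print("MMM and experiment disagree -> trust the experiment")
```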


If the consultant skipped both validations, the model is a guess in a lab coat. The recommendations might still be right, but they might also be expensively wrong.


What to do this week


Before your next MMM presentation, send the consultant four questions in writing:


  • The base/incremental split with a one-sentence justification of why it's plausible for your business.

  • Channel ROIs with 95% confidence intervals, plus a flag on any channel under 5% of spend.

  • Response curves for the three biggest channels, with the current-spend marker visible on each.

  • The holdout-period MAPE plus any incrementality tests used for calibration.


If they can answer all four cleanly, the model is probably worth its fee. If they hedge on any of them, you've just saved yourself a quarter of misallocated budget — and learned something about who you should hire next time.
