Gain Market Share by Disrupting Bad Ad Measurement

"It is difficult to get a man to understand something when his salary depends upon his not understanding it."
- Upton Sinclair

Do you ever get the feeling that your advertising performance metrics are mostly baloney? If so, this article is for you.

The advertising industry desperately needs brave, skeptical individuals willing to demand better evidence of what actually works—from media partners, agencies, measurement firms, and industry bodies.

But does anyone really care? Right now, over 90% of media practitioners either don't grasp the shortcomings of their current practices, face conflicting incentives, or are content with CYA reporting theater. The result is poor investment decisions.

Can you trust big tech companies like Google and Meta to tell you what's effective? After all, they've got those smarty-pants kids from MIT and Stanford, right? Surely you jest. Do you really expect them — who already receive most of your budget — to suggest you spend more on Radio, Outdoor, or Snap? Read the receipts.

What about your ad agency — isn't that their job? Actually, no. Agencies get paid based on a percentage of ad spend, so why would they ever suggest spending less? And if they're compensated on a variable rate, e.g., 15% of programmatic spend vs. 5% of linear, it's no surprise they're steering budgets toward easier and higher-margin channels.

How about your in-house analytics experts? Well, first they'd need to admit they've been measuring iROAS (incremental return on ad spend) poorly for years. Awkward. The head of search marketing, whose budget might shrink if real measurement exposed poor performance? Unlikely. The CMO? Only if she's new and ready to shake things up; she's your best hope for disruption.

As Upton Sinclair aptly wrote, "It is difficult to get a man to understand something when his salary depends upon his not understanding it."

We're calling on:

  • Private equity firms seeking hockey-stick growth

  • CFOs skeptical of ad metrics, viewing marketing as a cost center

  • Media sellers, large and small, being strangled by Google and Meta’s duopoly

  • Mid-tier DTC brands not yet hypnotized by synthetic control methods or captive to big tech

Today's "performance" measurement is largely flawed, built on bad signals, lazy modeling, and vanity metrics. The consequences are poor decisions, wasted budgets, and lost market share.

Avoid these unreliable measurement approaches:

  • Matched market testing

  • Synthetic control methods

  • Attribution models

  • Quasi-experiments

  • Black-box "optimization" solutions like PMax and Advantage+

None of these offer solid evidence. Some might be "better than nothing," but that's hardly the standard you want for multimillion-dollar decisions. At worst, they're self-serving illusions from platforms guarding their own margins, not advertiser interests.

Our profession embraces these weaker standards even when better measurement is easily attainable. Marketing scientists have never met a quasi-experimental method they didn't like (dense statistics are so much fun!). But randomized controlled trials (RCTs), which science regards as the strongest evidence of causality and whose math is straightforward, are falsely labeled "too hard."

RCTs remain the only reliable method to isolate causal impact and identify true sales lift. Claims about their complexity or expense are myths perpetuated by those benefiting from the status quo.
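
To show how little machinery a geo RCT actually needs, here's a minimal sketch in Python. Every number in it is an illustrative assumption (the DMA count, the sales figures, the spend, the simulated 2% lift); the point is the shape of the analysis, not the output.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Illustrative only: randomize ~210 US DMAs to treatment/control,
# then estimate lift as a simple difference in means.
rng = np.random.default_rng(42)
dmas = [f"DMA_{i:03d}" for i in range(210)]
assignment = pd.Series(
    rng.permutation(["treatment", "control"] * 105), index=dmas, name="group"
)

# Simulated sales over the test window, with an assumed 2% true lift
# in treated DMAs (all figures hypothetical).
baseline = rng.normal(1_000_000, 150_000, size=len(dmas))
sales = pd.Series(
    baseline * np.where(assignment == "treatment", 1.02, 1.00),
    index=dmas,
    name="sales",
)

treated = sales[assignment == "treatment"]
control = sales[assignment == "control"]

# Causal lift estimate: treated vs. control means, scaled to group size.
incremental_sales = (treated.mean() - control.mean()) * len(treated)
t_stat, p_value = stats.ttest_ind(treated, control)

incremental_spend = 5_000_000  # assumed media spend in treated DMAs
print(f"incremental sales ≈ ${incremental_sales:,.0f}")
print(f"iROAS ≈ {incremental_sales / incremental_spend:.2f} (p = {p_value:.3f})")
```

Randomization at the DMA level is what makes this a cluster RCT: because assignment is random, a plain difference in means is an unbiased estimate of causal lift, with no model to defend.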

Today's trendiest method, Synthetic Control Method (SCM), is essentially matched market testing (the designated market area, or DMA, tests familiar since the 1950s) boosted by statistical steroids. You hand-pick DMAs to serve as the test group (the first mistake: non-random selection invites bias) and construct a Frankenstein control from a weighted mashup of donor DMAs.
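
To make that "weighted mashup" concrete, here's a hedged sketch of how SCM-style weights are typically fit: minimize pre-period error subject to non-negative weights that sum to one. The data is simulated and the setup deliberately simplified.

```python
import numpy as np
from scipy.optimize import minimize

# Simulated pre-period data: 52 weeks of sales for one hand-picked
# test DMA and 15 candidate donor DMAs (all numbers illustrative).
rng = np.random.default_rng(0)
donors = rng.normal(100, 10, size=(52, 15))
test_dma = donors @ rng.dirichlet(np.ones(15)) + rng.normal(0, 2, size=52)

def pre_period_error(w):
    """Squared error between the test DMA and its synthetic twin."""
    return np.sum((test_dma - donors @ w) ** 2)

n_donors = donors.shape[1]
result = minimize(
    pre_period_error,
    x0=np.full(n_donors, 1 / n_donors),
    bounds=[(0, 1)] * n_donors,                                # non-negative
    constraints={"type": "eq", "fun": lambda w: w.sum() - 1},  # sum to one
)
weights = result.x  # the "Frankenstein" blend of donor DMAs

# In the post-period, donors @ weights becomes the counterfactual, and
# any gap between it and actual sales gets attributed to the campaign.
```

Count the researcher choices baked in: the donor pool, the pre-period window, the loss function, the constraints. Change any one and the "control" changes with it, which is the replicability problem in a nutshell.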

SCM excels when an RCT truly isn't possible — like assessing minimum wage impacts, gun laws, or historical political policies such as German reunification. These scenarios can't ethically or practically be randomized. But advertising? Advertising is perhaps the easiest, most benign environment for RCTs. Countless campaigns run daily, media is easily manipulated, outcomes (sales) are straightforward, quick to measure, and economically significant. There are few valid reasons not to conduct RCTs in advertising.

The first rule of holes is "Stop digging." The first rule of quasi-experiments is "Use them when RCTs are unethical or infeasible." That's almost never true for ad campaigns.

Synthetic Control Method is problematic because it:

  • Offers weaker causal evidence than RCTs

  • Is underpowered for ad measurement: the academic SCM literature cites social policies with effect sizes above 10%, far larger than the lifts large marketers can expect from advertising (see the back-of-envelope calculation after this list)

  • Lacks transparency (requires advanced statistical knowledge)

  • Is not easily explainable to non-statisticians

  • Lacks replicability (each instance is a unique snowflake dependent on many choices)

  • Lacks generalizability (blending 15 DMAs to mirror Pittsburgh still doesn't reflect national performance)
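
To put the power problem in numbers, here's a back-of-envelope minimum detectable effect (MDE) calculation for a simple two-group geo test. The sales mean and between-DMA variability are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy import stats

# MDE for a two-sample test at 5% significance and 80% power.
alpha, power = 0.05, 0.80
z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)  # ≈ 2.80

mean_sales = 1_000_000  # assumed average DMA sales over the test window
sd_sales = 200_000      # assumed between-DMA standard deviation

for n_per_group in (5, 20, 105):
    std_error = sd_sales * np.sqrt(2 / n_per_group)
    mde = z * std_error / mean_sales
    print(f"{n_per_group:>3} DMAs per group -> MDE ≈ {mde:.1%}")

# Under these assumptions: 5 DMAs per group detects only ~35% lifts,
# 20 per group ~18%, 105 per group ~8%. The >10% effects in the SCM
# literature look nothing like the 1-5% lifts advertising delivers.
```

In practice, pre-period covariate adjustment can shrink these numbers, but the direction of the argument holds: small, hand-picked market sets can only detect effects far larger than advertising typically produces.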

I've yet to hear a compelling reason for choosing SCM over an RCT for ad measurement that wasn't self-interested rationalization. For more on the how, why, and when of geo RCTs, see my previous essay in this series.

As a global economic downturn looms, many advertisers will unwisely slash budgets without knowing what's genuinely effective. Don't make mistakes that could threaten your company's future. Measure properly — cluster randomized trials are your best path to true advertising ROI.
