Jim Miller:
Now let’s do a quick exercise.
What you’re seeing on the screen are eight postcards, each shown front and back, front on top. The assignment from this client was to develop a concept that would outperform the current control in market. You can see the top-left card is the existing control. The goal was also to improve conversion among current customers who own a second vehicle, encouraging them to bring in that second vehicle for service as well as the first.
Each design you see met brand standards and featured the exact same offer. I want to give you a moment to take a look and, on your own, decide which — if any — might beat the control or perform best in market. I’ll give you about 10 seconds.
[Pause]
Hopefully, you’ve picked the one you think would win. Now let’s walk through the actual in-market results.
And the winner is: “Two cars, double the deals, too easy.”
Now look, everyone on this call is, to some degree, a pro. I’ve done this same exercise nearly 100 times, and on average, about 80% of participants guess incorrectly. Even the client couldn’t accurately predict the results.
The point is this: Professional experience alone consistently fails to identify the top performer. That’s why we need to let the data guide us. The data tells us what will resonate most with our intended audience.
So how do we find the right data? Especially when we’re talking about creative?
It’s not demographic, geographic or transactional data. We’re talking about something different — data that informs creative direction.
And the answer is pretty simple: We test.
Now, I’m confident most of you already have a robust testing strategy in place. I’m also confident that A/B testing is part of that mix — and it should be. It’s one of the most accurate ways to test, but it’s also one of the most expensive. Each test cell needs to run at a statistically significant volume, which drives up costs. And with channels like direct mail, where packaging is involved, those costs can really balloon.
Another hidden cost of A/B testing is time. Even when you’re getting incremental lift, it can take many test cycles before a challenger actually beats the control and sets a new benchmark.
Our anecdotal research at Quad shows that A/B testing replaces performance benchmarks just 5% to 15% of the time — regardless of industry, product or channel. That’s not exactly inspiring.
So what’s the alternative?
Shannon and I are strong advocates for pre-market testing because we’ve seen it predict real-world behavior again and again. At Quad, we use both virtual and experiential pre-market testing — and the results are incredibly compelling.
Over the past seven years, when clients have used these methods before going to market, we’ve achieved successful, benchmark-replacing results about 85% of the time. That’s a 6x improvement over A/B testing.
Think about it this way: if your typical testing cycle takes three months, and A/B testing produces a winner only about one-sixth as often, you could be looking at six rounds, roughly 18 months, to achieve the same breakthrough you might get from one round of pre-market testing.
That’s a year and a half of waiting — versus three months of insight and immediate gains.
To me, that’s pretty compelling.