The Upside to A/B Tests That Fail
Opinion
5 min read
There is an unfortunate tendency to frame A/B testing in binary “win” or “lose” terms. If none of the test variants outperforms the control, there is no successful optimization to implement, and therefore the test was an utter failure. That’s often how the conversation goes when a test fails to produce the desired outcome. But that’s not how I look at it. While it’s true that the variants in your test will either “beat” one another or not, the experiment itself is always a win.
Another way to look at it is to imagine you’ve made a soup for a dinner party. It’s pretty good the way it is. But you have this idea that adding some smoked salt might take it to the next level. You ladle some of the soup from the pot into a bowl, add some smoked salt, and give it a taste. Well, it’s not better. It may even be slightly worse for those with sensitive palates, because the smokiness of the salt could overpower them. You decide to stick with the soup as it was and serve it to your guests. Do you view this test as a failure?
You probably don’t, and neither do I. So why do so many people think of it that way when it’s a webpage and not a soup? A number of good things come out of running tests, whether on soup or webpages, and whether or not there is a clear “winner.”
Test Framing
When you run tests under the belief that they have to produce a winner, it changes the dynamics of the tests you’re willing to run. The fear of failure can keep you from running a test that doesn’t seem worthwhile but that, had you run it, would have surprised you and proven fruitful. For example, if you were only willing to run tests that win, would you ever conduct one where all you do is increase your font size by a couple of pixels? Dylan Ander, who runs a CRO agency called Split Testing, did just that and discovered that for one of his clients, increasing the font size from 16px to 20px produced a 40% increase in conversion rate. If you approach tests like this from a position of curiosity instead of fear, you’ll discover more things that surprise you.
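If you want to sanity-check whether a surprise like that is real or just noise, a simple two-proportion z-test is a reasonable first pass. Here’s a minimal sketch in Python using only the standard library; the visitor and conversion counts are hypothetical, chosen to mirror a 40% relative lift:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate
    between a control (A) and a variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: a 4.0% control vs. a 5.6% variant (a 40% relative lift).
p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=560, n_b=10_000)
print(f"p-value: {p:.2g}")  # a small p-value suggests the lift isn't noise
```

Whatever tool you use, the point is the same: the math doesn’t care whether the hypothesis sounded worthwhile beforehand.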
Learning from Failed Tests
Optimization stems from learning, and learning stems from testing. Even when you run tests that fail, you’re learning something new: what doesn’t resonate with your users. If you conduct a test that fails to beat the control, don’t just discard it. Try to understand why, and file it away in your memory bank. I even share some losses with others in the company, so that they too can learn about things that don’t work and why. Teaching others about what didn’t work is also helpful when conflicting opinions come up later, because you now have data to help navigate those forks in the road.
Innovating Through Failure
If you run enough tests that tell you what doesn’t resonate with your users, you’re going to get a lot better at designing future tests that do. When you run a lot of tests, many of them will fail to produce a winner, and over time you’re going to see common themes among them. As you identify those commonalities, you’ll naturally steer yourself toward new, different, and better ideas.
Limiting Your Losses
Even when you run a test and the control comes out victorious, you’ve still reduced the negative impact relative to deploying the change without a test. That might feel like merely a moral victory, but had you skipped the test and pushed the change straight to production, you would have sent 100% of your traffic to the “loser” instead of just 50%.
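The arithmetic is easy to make concrete. A tiny sketch, with hypothetical rates and traffic, showing how a 50/50 split halves the conversions you lose to a losing variant:

```python
# Hypothetical numbers: the variant converts 10% worse than the control.
control_rate = 0.040
variant_rate = 0.036
visitors = 20_000                  # visitors during the test window

# Shipping without a test: everyone sees the loser.
lost_without_test = visitors * (control_rate - variant_rate)

# Testing with a 50/50 split: only half the traffic sees the loser.
lost_during_test = (visitors / 2) * (control_rate - variant_rate)

print(f"conversions lost without a test: {lost_without_test:.0f}")  # 80
print(f"conversions lost during the test: {lost_during_test:.0f}")  # 40
```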
Sometimes, No Negative Impact Is Awesome
There are occasions when a test showing no impact from the tested change is actually a wonderful thing you should celebrate. Consider a scenario that probably every web marketer has faced at some point: the legal team requires that a disclaimer be “conspicuously placed” somewhere near a call to action. Before you simply publish that awkward legalese right up near that precious “Buy Now” button, wouldn’t it be great to know that doing so has no materially negative impact on your conversions? That’s a win in my book, not a failure.
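One way you might check for “no material harm” is to compute a confidence interval on the difference in conversion rates and compare it against a tolerance margin. A minimal sketch, where the counts and the margin are assumptions for illustration rather than a prescription:

```python
from math import sqrt
from statistics import NormalDist

def diff_confidence_interval(conv_a, n_a, conv_b, n_b, confidence=0.95):
    """Confidence interval for (variant rate - control rate)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

# Hypothetical counts: current page (A) vs. page with the disclaimer (B).
low, high = diff_confidence_interval(conv_a=2_080, n_a=50_000,
                                     conv_b=2_050, n_b=50_000)

margin = -0.005  # largest drop we'd tolerate: half a point (an assumption)
if low > margin:
    print("No materially negative impact detected; ship the disclaimer.")
```

If even the low end of the interval sits above the worst drop you can live with, you can publish the disclaimer with confidence instead of a guess.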
Conclusion
It’s true that the point of A/B testing is to get more conversions so you can make more money, just as it’s true that the point of going to school is to get good grades so you can make more money. But that is overly reductive and skips a critical step: learning. The point of going to school is to learn. When you do a good job of learning, you do well on tests. And when you do well on tests, you get the good grades you need to make more money. Similarly, the point of A/B testing is, first and foremost, to learn. Do a good job of that, and you’ll understand your customers better. When you understand your customers better, you get the conversions you need to make more money. And to me, that sounds like a win.
© 2024 Keith Mura