Daniel Burstein

A/B Testing: Split tests are meaningless without the proper sample size

January 7th, 2013

A/B testing can be extremely influential. Heck, it even convinced the leader of the free world to send some curious emails (“It doesn’t have to be this way”).

This is because numbers are powerful. Seeing one call-to-action button receive a 10% clickthrough rate while another button receives only 8% is pretty convincing.

Just know this …

Numbers can be misleading, as well. And one of the most frequent mistakes I see inexperienced testers make is not having a sufficient sample size.

This problem is so counterintuitive. After all, you can clearly see that 10% is more than 8%. How can that button not be better?

A quick story from my own life to explain …


Getting to the heart of testing

So my wife has a (don’t worry, it’s minor) heart thing. And she was telling me how her heart felt. And I said …

“Don’t worry, it’s perfectly normal. I get that all the time, and I’m fine.”

Well, the punch line is, I wasn’t (don’t worry, I’m all fixed now).

And that, my friends, is random chance. A sample size of one (me) proved nothing. That's why you want to make sure you ask enough people (or, in the case of A/B testing, have enough visits and conversions in your split test) to defeat random chance, and to know with a pretty high level of confidence that your control and treatment are actually different.
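To see how this plays out in the numbers, here's a minimal sketch of a standard two-proportion z-test applied to the 10% vs. 8% button example from earlier. (The Python code and the visit counts are illustrative assumptions on my part, not output from any MECLABS tool.)

```python
import math

def two_proportion_z_test(conversions_a, visits_a, conversions_b, visits_b):
    """Test whether two observed conversion rates differ by more
    than random chance would plausibly produce."""
    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    # Pooled rate under the null hypothesis (no real difference)
    p_pool = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 10% vs. 8% on only 100 visits per button: z ≈ 0.49, p ≈ 0.62,
# which could easily be random chance -- no winner here
print(two_proportion_z_test(10, 100, 8, 100))

# The same rates on 5,000 visits per button: z ≈ 3.49, p ≈ 0.0005,
# a difference very unlikely to be chance alone
print(two_proportion_z_test(500, 5000, 400, 5000))
```

The clickthrough rates are identical in both runs; only the sample size determines whether 10% vs. 8% actually means anything.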


Here are a few resources to help you with sample size sufficiency in your online testing (plus a quick back-of-the-envelope sketch after the list):

A/B Testing: Working with a very small sample size is difficult, but not impossible

Marketing Optimization: How to determine the proper sample size

Online Marketing Tests: How do you know you’re really learning anything?

Determining if a Data Sample is Statistically Valid

Sample Size Calculator

Marketing Optimization: How to design split tests and multi-factorial tests
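And if you'd rather compute a rough answer than rely on a calculator, here's the promised back-of-the-envelope sketch using the standard two-proportion sample size formula (assuming a two-sided test at 95% confidence and 80% power; the function name and example rates are illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Visits needed in each arm of a split test to reliably
    detect a change in conversion rate from p1 to p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 at 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# Telling an 8% control apart from a 10% treatment takes roughly
# 3,200 visits per arm -- over 6,400 visits for the whole test
print(sample_size_per_arm(0.08, 0.10))
```

Plug in your own baseline rate and the smallest lift you care about; the smaller the difference you want to detect, the more traffic the test needs.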


About Daniel Burstein

Daniel Burstein, Director of Editorial Content, MECLABS Institute. Daniel oversees all editorial content coming from the MarketingExperiments and MarketingSherpa brands while helping to shape the editorial direction for MECLABS – working with our team of reporters to dig for actionable information while serving as an advocate for the audience. Daniel is also a frequent speaker and moderator at live events and on webinars. Previously, he was the main writer powering the MarketingExperiments publishing engine – from Web clinics to Research Journals to the blog. Prior to joining the team, Daniel was Vice President of MindPulse Communications – a boutique communications consultancy specializing in IT clients such as IBM, VMware, and BEA Systems. Daniel has more than 15 years of experience in copywriting, editing, internal communications, sales enablement and field marketing communications.

Categories: Analytics & Testing

  1. January 8th, 2013 at 19:49

    Thank you! I was just debating this with a tester on a different blog who claimed to be some sort of “statistics expert” … even though he thought his test showing a 47% confidence level was valid.

    Good timing 😉
