If you are not careful when planning your split tests, they can quickly undermine the quality of your optimization work.
Always remember that testing (A/B or multivariate) is only one component of conversion optimization.
We have seen many companies rely completely on testing software without deeply analyzing what they were actually testing. Our article on the case against multivariate testing illustrates the problem:
Let’s do some simple math.
Say you want to test six different elements on a page (headers, benefits list, hero shots, call to action, etc.).
For each element, you will test four different options. This gives you a total of 4^6 = 4,096 possible scenarios to test.
As a general rule of thumb, you will need around 100 conversions per scenario to ensure the data you are collecting is statistically significant. This translates into 4,096 * 100 = 409,600 conversions.
If your website converts at around 1%, you will need 409,600 * 100 = 40,960,000 visitors before you can start gaining confidence in your testing results.
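The arithmetic above can be sketched in a few lines, which also makes it easy to plug in your own numbers. The figures below simply mirror the example (six elements, four options each, 100 conversions per scenario, a 1% conversion rate):

```python
# Back-of-the-envelope traffic estimate for a full multivariate test.
elements = 6                     # page elements under test
options = 4                      # variants per element
conversions_per_scenario = 100   # rule-of-thumb significance threshold
conversion_rate = 0.01           # 1% site-wide conversion rate

scenarios = options ** elements                            # 4^6 = 4,096
conversions_needed = scenarios * conversions_per_scenario  # 409,600
visitors_needed = conversions_needed / conversion_rate     # 40,960,000

print(scenarios, conversions_needed, int(visitors_needed))
# → 4096 409600 40960000
```

Halving the options per element (from four to two) drops the scenario count from 4,096 to 64, which is why trimming variants is the single biggest lever in an MVT plan.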
If testing 4,096 variations sounds difficult, imagine how much more complicated matters get once you add variations in campaigns, offers, products, and keywords. Running this many test scenarios is not unusual for larger websites.
When creating a multivariate (MVT) test, keep these potential problems in mind:
• Be aware of the dangers of creating the test without paying close attention to the hypothesis behind it;
• Be aware of the number of variables you are testing and their dependency on one another;
• Be aware of the length of time it will take to run the test to statistical significance.
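The last point, test length, follows directly from the traffic math. A minimal sketch: given the 40,960,000 visitors computed above, how long would the test run at a given traffic level? The 50,000 visitors/day figure is an assumption for illustration only, not from the article:

```python
# Hypothetical duration estimate for the 4,096-scenario test above.
visitors_needed = 40_960_000   # from the earlier calculation
daily_visitors = 50_000        # ASSUMED traffic level for illustration

days = visitors_needed / daily_visitors
print(f"{days:.0f} days (~{days / 365:.1f} years)")
# → 819 days (~2.2 years)
```

Even at a healthy 50,000 visitors a day, the full test would run for over two years, which is exactly why the hypothesis and variable count deserve scrutiny before the test is built.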