“Should I run an A/B test or a Multivariate test?”
This simple yet fundamental question always comes up when planning for conversion rate optimization. In this post, you will learn the advantages and disadvantages of A/B testing and multivariate testing, and when you should use each.
A/B testing is the default and most common procedure in CRO programs. In some cases, though, running a multivariate test can add significant value. Other times, you have room to use both types of test interchangeably.
The great benefit of testing? You give your visitors a voice in the design process.
When implemented correctly, testing removes the guesswork from conversion optimization and you move to the stage where every action you take is a result of an informed decision.
In previous chapters of our guide, we presented the conversion framework principles and plenty of practical applications to use on your website. Always remember that every website is unique: what works for one site may not work for yours.
Increasing conversion rates starts with understanding your target market, creating personas to represent that market, and, then, applying the elements of the conversion framework within the context of these personas.
These best practices are great in theory, but their actual application is no walk in the park. However, the more time you invest in understanding them, the more likely you are to get accurate results.
Before starting with A/B and multivariate tests, keep in mind the following few guidelines. When implementing and launching tests, you are looking for a design that increases conversions:
- For an e-commerce website, you are looking for a design that generates more sales.
- For a lead generation website, you are looking for a design that generates more leads.
- For a subscription website, you are looking for a design that generates more subscribers.
You start any test by defining the action you want to enhance as a result of the new designs.
In some instances, you might look for a design that increases the average order value, or a design that keeps visitors more engaged, or even one that generates more social shares.
It is typical for team members to disagree on the best website or landing page design, the best visitor flow, or the best sales copy. Stakeholders usually have different views of what changes you should make to your website, and resolving those differences is challenging. Your goal is to use the designs, copy, and visitor flows that persuade more visitors to convert.
But how do you determine which design, copy and visitor flow increase conversions?
You test any modifications you introduce to your website against the original and compare the impact on conversions. Split testing software carries out the comparison for you.
By testing two or more variations of a page against each other, you can observe which designs result in higher conversions. For example, split testing software lets you determine which of two home page designs converts better. If the home page receives 15,000 visitors per day, the software can direct 7,500 visitors to one design and 7,500 to the other, then record which of the two generated more orders.
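The article does not name a specific testing tool, but the 50/50 split described above can be sketched in a few lines. A common technique is to hash each visitor's ID into a bucket, which gives a stable, roughly even split so that a returning visitor always sees the same design (the visitor IDs and design names below are hypothetical):

```python
import hashlib

def assign_variation(visitor_id: str, variations: list) -> str:
    """Deterministically bucket a visitor into one variation.

    Hashing the visitor ID gives a stable, roughly even split,
    so the same visitor always sees the same design.
    """
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

# Simulate a day of 15,000 visitors split across two designs
designs = ["original", "new_design"]
counts = {d: 0 for d in designs}
for i in range(15000):
    counts[assign_variation(f"visitor-{i}", designs)] += 1
```

Because the assignment is deterministic, no server-side session storage is needed to keep a visitor's experience consistent across page loads.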
So, what are the differences between A/B and multivariate testing?
What Is A/B Testing?
A/B testing, or split testing, is a much simpler procedure than multivariate testing. It typically involves testing one element at a time, so you can easily tell which of the two variants had the better effect on visitor behavior by comparing the conversion rates of the treatment and the control. For example, you might measure the effectiveness of a small CTA button against a full-width CTA button in terms of conversion rate.
A/B tests let you run a baseline design against a variation to determine which one converts better and declare it the winner.
Note that if you are testing more than two designs, you are conducting an A/B/N test.
Let’s see how A/B testing works in practice.
Years back, when we first started working with clients, we helped one of them optimize their cart page. Since 62% of visitors were lost between the cart page and the checkout page, macro conversions at the bottom of the funnel suffered.
When analyzing the cart page, we noticed the lengthy page had competing CTAs and a cluttered design.
Our hypothesis: conversions would increase if we introduced a prominent checkout button, moved the coupon code fields into drop-downs, and deleted buttons for extra actions, such as “clear shopping cart.”
You can see below the original page design at that time:
The changes to the page also included adding security badges and reducing clutter. Our hypothesis was that a clean design with a prominent CTA would increase trust and confidence, improve the user experience, and ultimately have a positive effect on conversion rates.
Based on our hypothesis, we prepared two variations to test. Here is an overview of the changes we made within these variations:
V1 – In variation 1, we created two prominent checkout buttons in a contrasting color, turned “update shopping cart” into a text link, removed the “clear shopping cart” button, collapsed the donation/coupon/gift code areas into drop-downs, placed “secure shopping” text above the second checkout button, and added security badges under it.
V2 – In variation 2, we designed just one prominent checkout button and kept all the other changes: a text link for “update shopping cart,” no “clear shopping cart” button, drop-downs for the donation/coupon/gift code areas, “secure shopping” text above the checkout button, and security badges under it.
After relaunching the test, variation V2 was the winner in both runs, with a 12.51% improvement in conversions at a confidence level of 85.17%.
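The article does not say how its software computed that confidence level, but a common approach is a one-sided two-proportion z-test on the raw visitor and conversion counts. The sketch below uses only the standard library, and the counts passed in at the end are purely hypothetical, not the case study's actual data:

```python
import math

def confidence_level(visitors_a, conv_a, visitors_b, conv_b):
    """One-sided two-proportion z-test: confidence that B beats A."""
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    # Pool the conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    if se == 0:
        return 0.5
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function (no SciPy needed)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical counts: 5,000 visitors per variation,
# 8% vs. 9% conversion rates
conf = confidence_level(5000, 400, 5000, 450)
```

Note that with identical conversion rates the function returns exactly 0.5, i.e. a coin flip, which is the intuitive baseline before any evidence accumulates.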
Advantages of A/B Testing
- Making careful, incremental changes that improve the user experience over time
- Continuous improvement and, consequently, better conversion rates
- A considerably shorter test period than multivariate testing
- A better learning process, since the elements being tested can be isolated and examined individually
Disadvantages of A/B Testing
- Upfront planning: all the elements involved should be planned and measured carefully to get reliable results.
- Test duration is not up to you: it depends mainly on reaching statistical significance, which is why tests should be planned carefully in advance.
What Is Multivariate Testing?
While A/B testing lets you test one element at a time, multivariate tests are designed to test multiple elements of a single page simultaneously, which makes them more complicated than A/B tests.
Multivariate testing certainly has a significant role in conversion optimization, but not a leading one.
Testing software allows you to test different headlines, images, buttons, or any other elements on a single page to measure their impact on your conversion rates, as the image below shows:
Image source: Invesp
In this product page from Apple.com, you can see the tests you could run at once: different variations of the headline, different CTA colors and texts, different pricing, and a two-products-per-line display.
Since you are testing multiple combinations on the same page, keep in mind that for multivariate tests you need to isolate the variables being tested and evaluate the winner for each variable separately.
The Advantages of Multivariate Tests
- Testing multiple distinct variables at once
- More detailed results with in-depth analytics
- Comprehensive: you can combine several distinct elements on a single page and test them together to understand their effects, with no need to run several A/B tests on the same page toward a shared goal
- A great way to keep increasing conversions after an A/B test, especially for websites with substantial traffic volume
The Disadvantages of Multivariate Tests
- They require a large number of visitors to reach statistical significance, since traffic is split into quarters, fifths, or even much smaller segments depending on the number of variables being tested.
- They involve complex data analysis and may leave you unable to determine which variable had the decisive effect on conversion rates.
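The traffic-splitting arithmetic behind that first disadvantage is worth making explicit: the number of page versions in a full-factorial multivariate test is the product of the variant counts of each element, and your traffic is divided among all of them. The element counts and visitor numbers below are hypothetical:

```python
from math import prod

# Hypothetical multivariate test: 3 headlines x 2 CTA colors x 2 layouts
variants_per_element = [3, 2, 2]

# Full-factorial design: every combination is a separate page version
combinations = prod(variants_per_element)

# Traffic is split evenly across all combinations
monthly_visitors = 60000
visitors_per_combination = monthly_visitors // combinations
```

Here 60,000 monthly visitors yield only 5,000 per combination, which shows how quickly adding one more element (or one more variant of an existing element) starves each version of traffic.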
1. When Should I Use A/B or Multivariate Testing?
The answer depends on your specific situation and on the number of conversions your website generates per month. Here are some guidelines to help you decide:
1st Rule: If your website, landing page or campaign gets less than 200 conversions per month, you should first focus on increasing your website visitors using different channels (SEO, PPC, social, etc.). If you do want to start testing, then you should use A/B testing with micro conversions.
2nd Rule: If your website, landing page or campaign gets more than 200 conversions but less than 500 conversions per month, you can start A/B testing, introducing two to three variations against the original.
3rd Rule: If your website, landing page or campaign gets more than 500 conversions but less than 1,000 conversions per month, you can start A/B testing, introducing five to eight variations against the original.
4th Rule: If your website, landing page or campaign gets more than 1,000 conversions per month, you can choose between expanded A/B tests and limited multivariate (MVT) tests.
5th Rule: If your data shows that you can start testing and you have not tested before on your website, start with A/B testing.
2. For How Long Should I Run My Tests?
- At a minimum, you should NOT stop your test before the original design and the winner have each collected 100 conversions. In our practice, we require a minimum of 500 conversions per variation.
- We also recommend that you allow testing to run for a minimum of one week (preferably two weeks).
- If the test takes longer than four weeks to conclude, then you should consider declaring a no-winner.
- You should aim at a test confidence level of more than 95%.
- We recommend a minimum confidence level of 90%. There are few instances, though, where 85% is acceptable.
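The stopping guidelines above can be collapsed into a simple checklist function. This is a sketch using the thresholds stated in this section (our practice of 500 conversions per variation, a one-week minimum, and a 90% minimum confidence level):

```python
def can_stop_test(days_running: int,
                  conversions_per_variation: list,
                  confidence: float) -> bool:
    """Checklist: may we call this test? Thresholds from the guide:
    >= 7 days, >= 500 conversions in every variation, >= 90% confidence."""
    enough_data = all(c >= 500 for c in conversions_per_variation)
    return days_running >= 7 and enough_data and confidence >= 0.90
```

A test at day 14 with 620 and 580 conversions at 96% confidence passes all checks; the same test stopped at day 5, or with a variation stuck at 400 conversions, does not.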
3. How Should I Run My Tests?
Keep in mind that the process of gathering relevant data and information sits at the heart of conversion optimization. A/B and MVT tests are great tools for this process if launched properly:
- Planning: Knowing what to expect from running tests is crucial and has a huge impact on results. Testing that is just a mass of experiments, without goals set in advance and a clear idea of the results you want to harvest, is probably a waste of time, money, and, most importantly, effort.
- Traffic: One of the biggest challenges of running a multivariate test is the amount of traffic needed to obtain significant results. Multivariate tests are traffic-hungry and require large numbers of visitors to participate. So, before launching this kind of test, make sure you have enough traffic. For A/B testing, traffic is not an issue as long as you can drive enough visitors to your sample.
- Time length: Knowing when to stop your tests matters and affects your results. Tests should not exceed a month, but always allow enough time to reach statistical significance. For both A/B and MVT tests, do not rush to stop a test the moment your conversion rate spikes, or you risk false positives. While running an MVT test, you will be more tempted to stop early than in an A/B test.
In our article on A/B Testing vs. Multivariate Testing, we previously set out the five steps to follow when conducting any test on your website. They can be summarized as follows:
- Finding the right page to optimize. If you have a large website, finding these pages will probably take a while. The best page to optimize is the one that is leaking the most visitors.
- Check the number of visitors on that page, since those are the ones who will actually participate in the test.
- Never run your tests for more than four weeks.
- Determine your goal, which doesn’t necessarily have to be increasing the macro conversion. Any test on a smaller scale can still have a huge impact on your macro conversion rate.
- Be selective in terms of the elements you are going to test, pick only the ones with the most impact.