Multivariate and A/B Testing Best Practices

Khalid Saleh

Conducting a multivariate test is exciting.

By using the right tool, you can quickly develop different designs for your webpage, split visitors between these designs and observe the conversion rates for each variation.

Done incorrectly, A/B testing can result in wasted money, misused staff-hours, and, even worse, a decrease in your conversion rates.

Here are ten best practices you must follow when conducting a multivariate test.

A/B testing best practices during the planning stage

1.  Set expectations correctly

Wrong expectations lead to disappointment and lost investment.

Many marketers start split testing because they read or watched a case study in which a company achieved an incredible increase in conversion rates. They jump into conversion optimization and testing looking for significant uplifts, but their excitement slowly fades as they fail to achieve the results they were hoping for.

Setting reasonable goals to increase conversion rates will save you a lot of heartburn.

Here are two approaches you can take to set the right expectations:

1st approach: Think of a reasonable annual goal for your CRO program. A conversion program should achieve a 20% to 30% annual increase in conversions. Is that increase in conversions enough to cover all the costs of running the program for your company?

2nd approach: Calculate the total investment for the conversion optimization program. This total should include time for both the marketing and development teams, as well as the testing software. What return on investment do you expect? Are you looking to make $3 or $5 for every dollar you invest?

Let’s say your total investment in CRO adds up to $80,000 and that you are expecting to get a 3X return on your investment.

That means you will have to increase your sales by $240,000 to justify your program. What percentage increase in online sales will generate $240,000? That depends on your current sales volume (a quick calculation is sketched after the list below):

  • If you are currently doing $1,000,000 in annual online sales, you will have to increase your conversion rate by 24%.
  • If you are currently doing $10,000,000 in annual online sales, you will have to increase your conversion rate by merely 2.4%.
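As a sanity check, here is a minimal sketch of that back-of-the-envelope math in Python; the investment, ROI target, and sales figures are the illustrative numbers from the example above, not benchmarks.

```python
# Back-of-the-envelope ROI math using the illustrative numbers from the example above.
investment = 80_000        # total annual CRO investment ($)
target_roi = 3             # desired return: $3 for every $1 invested
required_extra_revenue = investment * target_roi   # $240,000

for annual_online_sales in (1_000_000, 10_000_000):
    required_lift = required_extra_revenue / annual_online_sales
    print(f"${annual_online_sales:,} in annual sales -> {required_lift:.1%} lift needed")
# $1,000,000 in annual sales -> 24.0% lift needed
# $10,000,000 in annual sales -> 2.4% lift needed
```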

2. Understand your technical limitations

Since 2006, our team has worked with hundreds of organizations across the globe in many different industries. We observed that many CRO programs fail because the project owners do not assign the proper technical resources to ensure quick and efficient implementation.

If you are running an in-house conversion program, then you must allocate enough technical resources to implement two to four tests per month. If you are hiring an outside firm to handle your CRO program, then make sure that they will manage the full implementation of all the tests they will deliver.

3. Create a “research opportunities” list: how do you select which page and what elements to test?

Your first task is to identify potential conversion problems on your website. To do so, you should do the following:

  1. Create buyer personas for your site visitors
  2. Analyze your quantitative data (analytics, heat maps, and session videos)
  3. Conduct qualitative studies to identify areas where visitors struggle with the website (one on one interviews, focus groups, online polls and surveys)
  4. Conduct a competitive analysis of your site against your competitors
  5. Conduct a heuristic assessment of your website
  6. Conduct several usability tests on your website

By following these six steps, you will end up with an extensive list of items that you can test on your platform. We refer to this list as the “research opportunities” list.

Each line item on our research opportunities list includes:

  1. A problem statement describing it
  2. How the issue was identified
  3. An initial hypothesis of how we can fix the problem

We will use each of these three points to prioritize items on the research opportunities list. Each item on the list includes additional data such as the page it was identified on and the device type.
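As an illustration of how such a list can be kept, here is a minimal sketch of a single research-opportunity record; the field names and the sample entry are hypothetical, not taken from an actual roadmap.

```python
# A minimal sketch of one "research opportunities" list entry (hypothetical fields and values).
from dataclasses import dataclass

@dataclass
class ResearchOpportunity:
    problem_statement: str   # what is wrong and where
    identified_by: str       # e.g. "online poll", "heatmap", "heuristic review"
    initial_hypothesis: str  # first stab at how to fix the problem
    page: str                # page the issue was identified on
    device_type: str         # desktop, mobile, or tablet

item = ResearchOpportunity(
    problem_statement="Visitors hesitate at checkout because shipping costs appear late",
    identified_by="session recordings + exit survey",
    initial_hypothesis="Showing shipping costs on the cart page will reduce checkout abandonment",
    page="/cart",
    device_type="mobile",
)
```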

On quantitative and qualitative analysis

Before optimizing any page on your website, you should consult your analytics to determine the percentage of your overall website visitors who make it to that page. If a particular page gets 20% of your visitors, then you are only optimizing for 20% of the overall website traffic. The remaining 80% is an untapped goldmine.

For detailed quantitative analysis, you should create several analytics goals for your website. Your objective is to measure the percentage of traffic that flows from one section of the website to the next.

For an e-commerce website, set up the following funnels/goals (a quick abandonment-rate calculation follows the list):

  1. Visitors flow from homepage to order confirmation
  2. Visitors flow from category pages to order confirmation
  3. Visitors flow from product pages to cart page
  4. Visitors flow from product pages to order confirmation
  5. Visitors flow from product pages to category pages
  6. Checkout abandonment rate
  7. Cart abandonment rate
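To illustrate the last two goals, cart and checkout abandonment are simple ratios of sessions that drop out of the funnel. The sketch below uses the standard definitions with made-up visitor counts; your analytics tool may label these segments differently.

```python
# Standard abandonment-rate arithmetic with made-up session counts.
sessions_with_cart = 12_000    # sessions that added at least one item to the cart
sessions_in_checkout = 4_800   # sessions that reached the checkout flow
completed_orders = 3_000       # sessions that reached order confirmation

cart_abandonment = 1 - completed_orders / sessions_with_cart        # 75.0%
checkout_abandonment = 1 - completed_orders / sessions_in_checkout  # 37.5%

print(f"Cart abandonment:     {cart_abandonment:.1%}")
print(f"Checkout abandonment: {checkout_abandonment:.1%}")
```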

For a lead generation website, set up the following funnels/goals:

  1. Visitors flow from homepage to contact confirmation page
  2. Visitors flow from landing pages to contact confirmation page
  3. Visitors flow from different services pages to contact form page
  4. Visitors flow from different services pages to contact confirmation page

The goal of each of these funnels is to start dissecting visitor behavior across the website.

This quantitative research gives you half of the picture.

You will also need to conduct qualitative usability analysis where you perform one-on-one meetings with customers/visitors, focus groups, online polls, and email surveys asking for feedback on your website.

You should ask participants what worked well for them on your website and what did not, what persuaded them to convert, and what made them leave the site.

4. Prioritize items on the research opportunities list

By prioritizing the research opportunities list, you will decide which item to tackle first. We use 18 different factors to prioritize the research opportunities list (click here to download our prioritization sheet). Our evaluation criteria include the following (a toy scoring sketch follows the list):

  • The potential impact of the item on the conversion rate
  • How the item was identified (qualitative, quantitative, expert review, etc.)
  • The elements which the proposed hypothesis addresses
  • The percentage of traffic the page receives
  • The type of change
  • Ease of implementation

Prioritizing items on the “research opportunities” list creates a six to eight-month conversion roadmap for the project. That does not mean you are done with conversion optimization after six or eight months; it means you will have to go through the exercise of creating the list periodically.

Here is a partial screen capture for a conversion roadmap on one of our CRO projects:

[Image: conversion roadmap example]

Split testing best practices during the implementation phase

5. Create your testing hypothesis

A hypothesis is a predictive statement about a possible change on the page and its impact on your conversion rate.

Every item on the prioritized research opportunities list includes an initial hypothesis of how to fix the conversion problem. As we start tackling issues on our list, we develop that initial hypothesis into a concrete hypothesis that we can use in our testing.

An initial hypothesis is our team’s first stab at how to address a potential problem on the website or webpage.

Example of initial hypothesis:

Adding social proof will enhance the visitor trust in the website and increase conversions.

Example of concrete hypothesis:

Based on qualitative data collected from online polling, we observed that website users do not trust the brand and are unaware of how many people are using it. Adding social proof on the homepage will increase visitors’ trust and improve conversion rates by 10%.

What do you notice about the concrete hypothesis?

  • It states how we identified the issue
  • It indicates the problem identified on the page
  • It states the potential impact of making the fix

A coherent, concrete hypothesis must drive every test you create. The hypothesis might impact multiple elements on a web page. Resist the temptation to change elements that do not relate to the hypothesis.

6. Determine the sample size to achieve statistical significance

There are two steps that you must take:

A. Determine how many unique visitors go through the page(s) you intend to test

The fact that your website gets 1,000,000 visitors does not mean that all of these visitors will go through a particular test. Examine your analytics to determine the total number of unique visitors that will go through the particular page(s) you plan to test in a month.

B. Determine how many visitors you will need to include in your test

Before launching any split test, you must determine the total number of visitors who need to go through the test before you can conclude it. The goal is to fix, ahead of time, the point at which a decision will be made, i.e., after a certain number of visitors have gone through the test. This is referred to as fixed-horizon testing. To do this calculation, you will need the following numbers (a minimal calculation sketch follows the list):

  • The current conversion rate of the control
  • Number of visitors who will view each variation
  • Number of variations
  • Expected conversion uplift from running the test (referred to as the MDE: minimum detectable effect)
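Here is a minimal sketch of that fixed-horizon calculation, assuming a standard two-proportion z-test at 95% confidence and 80% power; the 3% baseline conversion rate and 10% MDE used in the example are illustrative inputs, not recommendations.

```python
# Fixed-horizon sample-size estimate per variation for a two-proportion z-test
# (95% confidence, 80% power by default). Example inputs are illustrative.
from scipy.stats import norm

def visitors_per_variation(baseline_cr, mde_relative, alpha=0.05, power=0.80):
    p1 = baseline_cr
    p2 = baseline_cr * (1 + mde_relative)  # expected conversion rate of the variation
    z_alpha = norm.ppf(1 - alpha / 2)      # two-sided significance threshold
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(round((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2))

n = visitors_per_variation(baseline_cr=0.03, mde_relative=0.10)
print(f"~{n:,} visitors per variation")   # roughly 53,000 with these inputs
```

Multiplying the per-variation number by the number of variations (including the control) and dividing by the monthly unique visitors from step A gives a rough estimate of how long the test will need to run.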

7. Create design variations based on test hypothesis

Once you have the hypothesis, the next step is to create new page designs that will validate it.

You must be careful when creating new designs. Yes, your webpage might have several problems, but your test must be driven by the hypothesis you created. Before creating the final designs for a test, we like to use pen and paper to mock up these designs and evaluate them. Doing so forces everyone to focus on the changes involved in the test. Our team focuses on the elements we are testing and sets aside details such as colors, fonts, and other items that do not relate to the test. After these mockups are approved, we then use software to create the final designs.

8. Limit the number of variations

Do not go overboard with creating variations. Split testing software allows you to create millions of variations for a single page, but keep in mind that validating each new variation requires a certain number of conversions. This approach of throwing things at the wall rarely works; even when it appears to work in one test, it does not hold up in the long run.

You should avoid letting the split testing software think for you. What you are looking for are sustainable and repeatable results. The more variations you introduce in a test, the harder it becomes to attribute the impact to any individual change.

For most websites, we like to limit the number of variations to fewer than seven. Adding more variations only muddies the analysis and increases the possibility of running into statistical errors.
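One way to see why extra variations increase the chance of statistical errors is the family-wise error rate: if each variation is compared with the control at a 5% significance level, the probability of at least one false positive grows quickly with the number of variations. The quick calculation below assumes independent comparisons, which is a simplification.

```python
# Probability of at least one false positive when comparing k variations
# against the control, assuming independent tests at alpha = 0.05.
alpha = 0.05
for k in (1, 3, 6, 10):
    family_wise_error = 1 - (1 - alpha) ** k
    print(f"{k:>2} variations -> {family_wise_error:.1%} chance of a false winner")
# 1 -> 5.0%, 3 -> 14.3%, 6 -> 26.5%, 10 -> 40.1%
```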

A/B testing best practices during the post-test analysis

9. Re-test winning designs against control

You are not done when you determine a winning design in a split test. A best practice is to run your original page against the winning design in a head-to-head (one-on-one) test.

This will help you solidify your conclusion about the winning page and confirm that external factors did not pollute the testing data.

10. Look for lessons learned

The real power of conversion optimization happens when you discover marketing insights from your testing to apply across verticals and channels.

Always be on the lookout for actionable marketing insights from your test. These are an excellent way to move forward with your next test.

Khalid Saleh

Khalid Saleh is CEO and co-founder of Invesp. He is the co-author of Amazon.com bestselling book: "Conversion Optimization: The Art and Science of Converting Visitors into Customers." Khalid is an in-demand speaker who has presented at such industry events as SMX, SES, PubCon, Emetrics, ACCM and DMA, among others.
