
A/B Testing Best Practices You Should Know

Simbar Dube




Have you ever wondered why some A/B tests skyrocket conversions while others fall flat? The secret isn’t just having the right tools—it’s following proven best practices.

Done incorrectly, A/B testing can waste money, misuse staff hours, and, even worse, decrease conversion rates.

But when done right, A/B testing lets you easily experiment with different web page designs, split traffic to see how each performs, and gain valuable insights into what truly resonates with your audience.

Here are 11 best practices you must follow to ensure your A/B tests deliver tangible results.

A/B testing best practices during the planning stage

1. Set expectations correctly

Wrong expectations lead to disappointment and lost investment. Many marketers conduct A/B tests because they read or watched a case study where a company increased conversion rates. 

While success stories are inspiring, they shouldn’t be your sole benchmark. Every website is different, and your results will depend on various factors.

Here are two approaches you can take to set the right expectations: 

  • Set a reasonable annual goal for your CRO program. A conversion program should achieve a conversion increase of 20% to 30% per year, and this increase should cover all the costs of running the program for your company.
  • Return on investment (ROI). Calculate your total investment in CRO (including staff time, tools, etc.). Then, determine your desired ROI. For example, if you invest $80,000 and want a 3X return, you’d need to generate an additional $240,000 in sales.

    Continuing that example (see the calculation sketch below):
    • If your annual online sales are $1 million, a 3X ROI on an $80,000 investment would require a 24% conversion rate increase. 
    • If your sales are $10 million, that ROI only requires a 2.4% increase.
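
If you want to run the same numbers for your own site, here is a minimal Python sketch of that calculation. The figures are the hypothetical ones from the example above, and it assumes revenue scales linearly with conversion rate:

```python
def required_conversion_lift(annual_sales, cro_investment, target_roi_multiple):
    """Return the extra revenue and the relative conversion-rate lift
    needed to hit a target ROI on a CRO investment.

    Assumes revenue scales linearly with conversion rate, i.e. traffic
    and average order value stay constant.
    """
    extra_revenue_needed = cro_investment * target_roi_multiple
    relative_lift = extra_revenue_needed / annual_sales
    return extra_revenue_needed, relative_lift


# Hypothetical figures from the example above: $80,000 invested, 3X target ROI.
for sales in (1_000_000, 10_000_000):
    extra, lift = required_conversion_lift(sales, cro_investment=80_000, target_roi_multiple=3)
    print(f"Annual sales ${sales:,}: need ${extra:,.0f} extra revenue, "
          f"a {lift:.1%} conversion-rate increase")
```

Running it reproduces the figures above: $240,000 in extra revenue, which is a 24% lift on $1 million in sales and a 2.4% lift on $10 million.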

2. Understand your technical limitations

Since 2006, our team has worked with hundreds of global organizations in many industries. 

Over the years, we’ve observed a common pitfall in many CRO programs: a lack of dedicated technical resources.

This often leads to slow implementation and missed opportunities.

Before you run an A/B test, consider the following: 

  • In-house programs: When running an in-house conversion program, ensure you have dedicated front-end developers or a development team that can implement two to four A/B tests monthly. This will keep your experimentation momentum high. You’ll also need access to a web analytics tool like Google Analytics or an A/B testing platform to measure results. 
  • Outsourcing: When hiring a CRO specialist or a reputable CRO agency, make sure they can manage the full implementation of all the tests they deliver.

Why does it matter? Without the right technical skills or resources, A/B testing can quickly become a bottleneck. 

Some tests need complex code changes or integrations with other tools, so it’s crucial to understand these needs from the start. This helps set realistic timelines, allocate resources wisely, and ensure your testing program runs smoothly.

3. Identify and prioritize testing opportunities

Your first task in effective A/B testing is identifying potential website conversion problems. 

Here’s a structured approach to building your “research opportunities” list:

Step #1. Gather enough data and insights:

  • Create buyer personas. Understand your target audience and their preferences, pain points, and needs by creating buyer personas. 
  • Analyze your quantitative data. Dive into analytics, heat maps, and session videos to identify drop-off points, high-exit pages, and other friction areas. 
  • Conduct qualitative research. Identify areas where visitors struggle with the website using one-on-one interviews, focus groups, and online polls and surveys. 
  • Benchmark against competitors. Analyze how your website compares to your competitors in design, functionality, and user experience. 
  • Conduct a heuristic assessment of your website. Evaluate your website against established usability principles to discover design flaws and potential improvements. 
  • Conduct usability tests. Use session replay tools like FigPii to see real users interacting with your website and identify specific pain points and areas of confusion.

Step #2. Build your research opportunities list:

Based on your research, you will end up with an extensive list of items you can test on your platform. We refer to this list as the “research opportunities” list. 

For each item, include:

  • A clear problem statement
  • How the issue was identified (e.g., analytics, user feedback, heuristic evaluation)
  • A hypothesis for how to fix the problem

We will use these three points to prioritize items on the research opportunities list. Each item on the list includes additional data, such as the page it was identified on and the device type.

Step #3. Prioritize your list:

Not all tests are created equal. Prioritize your list based on factors like:

  • Potential impact on conversions: Focus on changes likely to have the most significant positive impact.
  • Traffic volume: Prioritize web pages that receive a significant amount of traffic.
  • Ease of implementation: Start with relatively easy-to-implement tests to build momentum.

4. Analyze quantitative and qualitative data to prioritize testing opportunities

While your research opportunities list gives you a broad overview of potential areas for improvement, you’ll need to analyze data more deeply to prioritize your A/B tests effectively. 

This involves analyzing both quantitative and qualitative data.

Quantitative analysis: Unveiling traffic patterns and bottlenecks

Before optimizing any page, understand how much traffic it receives. For example, if a page receives only 20% of your visitors, optimizing it addresses at most 20% of the potential improvement. 

Utilize your web analytics to track user journeys and identify where users drop off.

For an e-commerce website, set up the following funnels/goals (see the sketch after this list):

  • Visitors flow from the home page to order confirmation
  • Visitors flow from category pages to order confirmation
  • Visitors flow from product pages to cart page
  • Visitors flow from product pages to order confirmation
  • Visitors flow from product pages to category pages
  • Checkout abandonment rate
  • Cart abandonment rate
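
To make these goals concrete, here is a minimal Python sketch that turns visit counts for each funnel step into step-by-step drop-off and abandonment rates. The step names and numbers are illustrative assumptions, not data from a real project:

```python
# Hypothetical monthly visit counts for each e-commerce funnel step.
funnel = [
    ("Product page", 50_000),
    ("Cart page", 12_000),
    ("Checkout started", 6_000),
    ("Order confirmation", 3_600),
]

# Step-by-step continuation and drop-off rates.
for (step, visits), (next_step, next_visits) in zip(funnel, funnel[1:]):
    rate = next_visits / visits
    print(f"{step} -> {next_step}: {rate:.1%} continue, {1 - rate:.1%} drop off")

# Cart and checkout abandonment, as listed in the goals above.
orders = funnel[3][1]
cart_abandonment = 1 - orders / funnel[1][1]      # added to cart but never ordered
checkout_abandonment = 1 - orders / funnel[2][1]  # started checkout but never ordered
print(f"Cart abandonment: {cart_abandonment:.1%}")
print(f"Checkout abandonment: {checkout_abandonment:.1%}")
```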

For a lead generation website, set up the following funnels/goals:

  • Visitors flow from the homepage to the contact confirmation page
  • Visitors flow from landing pages to contact confirmation page
  • Visitors flow from different services pages to the contact form page
  • Visitors flow from different services pages to the contact confirmation page

Each of these goals helps you start dissecting user behavior on your website. However, this quantitative research gives you only half the picture; you need qualitative analysis to complete it. 

Qualitative analysis: Gaining user insights

Quantitative data helped you uncover what users are doing. Now, you must conduct a qualitative usability analysis to understand the why.

For qualitative analysis, gather feedback directly from users through:

  • One-on-one meetings
  • Focus groups
  • Online polls
  • Email surveys asking for feedback on your website. 

Ask about their experiences, pain points, motivations, and what factors influenced their decisions to convert or abandon the website.

5. Prioritize items on the research opportunities list

To determine which item to tackle first, prioritize your research opportunities list. 

We use 18 factors to prioritize the research opportunities list (click here to download our prioritization sheet). 

Our evaluation criteria include (a simplified scoring sketch follows this list):

  • The potential impact of the item on the conversion rate
  • How the item was identified (qualitative, quantitative, expert review, etc.)
  • The elements that the proposed hypothesis addresses
  • The percentage of website traffic the page receives
  • The type of change
  • Ease of implementation
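
Invesp’s full sheet uses 18 factors; purely as an illustration, here is a simplified Python sketch of a weighted scoring model for a few of the criteria above. The weights and 1-5 scores are hypothetical assumptions, not the values from our prioritization sheet:

```python
# Hypothetical weights for a few of the criteria above (they sum to 1.0).
WEIGHTS = {
    "impact": 0.35,         # potential impact on the conversion rate
    "evidence": 0.20,       # strength of how the item was identified
    "traffic_share": 0.25,  # share of site traffic the page receives
    "ease": 0.20,           # ease of implementation
}

def priority_score(item):
    """Weighted average of the item's 1-5 scores for each criterion."""
    return sum(WEIGHTS[key] * item[key] for key in WEIGHTS)

# Illustrative items from a research opportunities list, scored 1-5.
opportunities = [
    {"name": "Add social proof to homepage", "impact": 4, "evidence": 4, "traffic_share": 5, "ease": 4},
    {"name": "Rewrite product page copy", "impact": 3, "evidence": 2, "traffic_share": 3, "ease": 5},
    {"name": "Redesign checkout flow", "impact": 5, "evidence": 5, "traffic_share": 2, "ease": 1},
]

for item in sorted(opportunities, key=priority_score, reverse=True):
    print(f"{item['name']}: {priority_score(item):.2f}")
```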

Prioritizing items on the “research opportunities” list creates a six- to eight-month conversion roadmap for the project. 

This doesn’t mean optimization stops after this period—instead, you’ll repeat this process regularly to ensure ongoing improvement.

Here is a partial screen capture for a conversion roadmap on one of our CRO projects:

conversion roadmap

Split testing best practices during the implementation phase

6. Create a strong testing hypothesis

Don’t just guess—test with a hypothesis!

A hypothesis is a predictive statement about a possible change on the page and its impact on your conversion rate. 

Each item on your prioritized research opportunities list should include an initial hypothesis—a starting point for addressing a potential problem. As you delve deeper into each issue, you’ll refine this initial hypothesis into a concrete, actionable one that you can use to design your test.

Example: Evolution of a Hypothesis

  • Initial Hypothesis: Adding social proof will enhance visitor trust and increase conversions.
  • Concrete Hypothesis: Based on qualitative data collected from online polls, we observed that website visitors need to trust the brand and want to know how many other customers use it. Adding social proof to the homepage will increase visitor trust and improve conversion rates by 10%.

Notice how the concrete hypothesis is more specific and actionable. It:

  • States how we identified the issue (through online polls)
  • Indicates the problem identified on the page (lack of social proof on the page)
  • States the potential impact of making the change (a 10% increase in conversions)

A coherent, concrete hypothesis should drive every test you create. Avoid the temptation to change elements unrelated to your hypothesis, as this can muddy your results and make it difficult to draw meaningful conclusions.

7. Determine the sample size to achieve statistical significance

To ensure your A/B test results are reliable, determine the minimum number of visitors required to achieve statistical significance. 

This involves two key steps:

1. Determining unique visitors for the test pages:

Don’t assume all website visitors will encounter your test. Use your analytics to determine the total number of unique visitors going through the particular page(s) you plan to test in a month. 

2. Determining how many visitors you must include in your test:

Before launching any split test, determine how many visitors must complete the test before you can draw statistically valid conclusions. This is called “fixed horizon testing,” and it ensures you don’t end your test prematurely or drag it on unnecessarily.

To calculate this, you’ll need the following information:

  • The current conversion rate of the control
  • Number of visitors who will view each variation
  • Number of variations
  • Expected conversion uplift from running the test, referred to as the minimum detectable effect (MDE)

Many online calculators and statistical tools can help you determine the required sample size based on these inputs.

A/B test sample size calculator
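
If you prefer to compute the number yourself rather than rely on an online calculator, here is a minimal Python sketch using the standard two-proportion sample-size approximation. The 3% baseline rate and 10% relative MDE are hypothetical inputs:

```python
from scipy.stats import norm

def sample_size_per_variation(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Visitors needed per variation for a two-sided test of two proportions.

    baseline_rate: control conversion rate, e.g. 0.03 for 3%
    mde_relative:  minimum detectable effect as a relative lift, e.g. 0.10 for +10%
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = norm.ppf(1 - alpha / 2)   # significance threshold
    z_beta = norm.ppf(power)            # statistical power
    n = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
    return int(round(n))

# Hypothetical inputs: 3% baseline conversion rate, 10% relative MDE.
n = sample_size_per_variation(0.03, 0.10)
print(f"~{n:,} visitors per variation, ~{2 * n:,} total for a two-variation test")
```

With these inputs the sketch lands at roughly 53,000 visitors per variation; a higher baseline rate or a larger MDE brings that number down quickly.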

Why does this matter? Testing with an insufficient sample size can lead to inaccurate conclusions and wasted resources. By calculating the A/B test sample size upfront, you can ensure your test results are statistically significant and provide actionable insights for your optimization efforts.

8. Create design variations based on test hypothesis

Once you have a clearly defined hypothesis, the next step is to design a new page to validate it. However, be careful to avoid getting sidetracked by other design elements; focus solely on the changes related to your hypothesis.

Follow this simple two-step design process: 

  • Start with pen and paper. Before creating the final designs for a test, we like to use pen and paper to mock up these designs and evaluate them. Doing so forces everyone to focus on the changes involved in the test rather than getting distracted by colors, call-to-action buttons, fonts, and other items.
  • Get approval. Once the mockups are finalized and approved, you can create the digital designs using your preferred software.

9. Limit the number of variations

While your A/B testing software may allow you to create millions of variations for a single page, don’t go overboard with creating variations. 

Validating each new variation requires a certain number of conversions, so throwing multiple options at the wall and hoping something sticks is seldom a successful strategy. Instead, focus on creating a limited number of well-thought-out variations that address specific hypotheses.

Remember, the goal is to uncover sustainable and repeatable results. The more variations you introduce in a test, the harder it becomes to isolate the impact of each one. 

For most websites, we recommend limiting the number of variations to fewer than seven. This simplifies analysis and reduces the risk of statistical errors.
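
To see why extra variations make a test so much heavier, here is a rough Python sketch that reuses the same sample-size approximation and scales it by the number of variations. The Bonferroni-style correction for multiple comparisons is one common approach and an assumption here, not necessarily what your testing tool does:

```python
from scipy.stats import norm

def visitors_per_variation(p1, relative_lift, alpha, power=0.8):
    """Same two-proportion sample-size approximation as the earlier sketch."""
    p2 = p1 * (1 + relative_lift)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2

baseline, lift = 0.03, 0.10  # hypothetical: 3% baseline rate, 10% relative MDE
for variations in (2, 4, 7, 10):
    comparisons = variations - 1              # each challenger compared against the control
    adjusted_alpha = 0.05 / comparisons       # Bonferroni-style correction (an assumption)
    total = variations * visitors_per_variation(baseline, lift, adjusted_alpha)
    print(f"{variations} variations: ~{total:,.0f} total visitors needed")
```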

A/B testing best practices during the post-test analysis

10. Retest winning designs against control

Determining a winning design in a split test doesn’t mean you’re done. A best practice is to run your original page against the winning design in a head-to-head test. 

Why Re-test?

  • Validate Results: This helps confirm that your initial findings were accurate and not influenced by external factors (e.g., seasonal fluctuations, promotional ad campaigns, viral marketing campaigns, etc.).
  • Solidify Conclusions: You gain additional confidence in the winning design’s performance by running a second test.
  • Mitigate Risk: If the follow-up test reveals different results, you can re-evaluate before fully committing to the change.

A/B testing is an iterative process. Continuously validating your results will help you make data-driven decisions and optimize your website for maximum performance.

11. Look for lessons learned

The real power of conversion optimization emerges when you discover marketing insights from your testing that you can apply across verticals and channels. 

Always look for actionable marketing insights from each test; they make excellent starting points for your next one.

Conclusion: Your Path to A/B Testing Success

Follow these 11 best practices to transform your A/B testing process from a guessing game into a tool for conversion rate optimization. Remember, successful A/B testing is an ongoing journey of experimentation, learning, and optimization.

If you’re looking for expert guidance and support to unlock the full potential of A/B testing, Invesp’s team of seasoned CRO specialists is here to help.