What to Do When Your A/B Tests Keep on Losing

Simbar Dube

Simba Dube is the Growth Marketing Manager at Invesp. He is passionate about marketing strategy, digital marketing, content marketing, and customer experience optimization.
Reading Time: 5 minutes

Do you know what’s easy to do? 

Setting up and running A/B tests. 

Do you know what’s difficult to do?

Coming up with winning A/B test ideas! 

Yes. Anyone can come up with A/B testing ideas, but not everyone can come up with winning A/B testing ideas.

This is why only 1 out of 7 tests succeeds.

When it comes to A/B testing, you don’t just need the right tools; you also need to do research and analysis before you implement the test.

The essence of A/B testing is understanding your target audience. So, without adequate research and analysis, you will find it hard to produce huge gains.

Okay, suppose you’ve done all the necessary research and analysis but your A/B tests keep on failing – what’s the next step?

Well, that question is the reason we put together this article – so that you get ideas on how to improve your A/B testing success rate.

Let’s dive in…

1. Wait at least two weeks before declaring a result

One of the most damaging A/B testing mistakes you can make is declaring a result too soon. Even if you have enough traffic and you’ve reached statistical significance, A/B testing requires a lot of patience.

Statistical significance is the likelihood that the difference you see between variations is real, rather than the result of random chance.

It’s important to make sure that your test reaches statistical significance. But it would be a mistake to stop a test simply because it reached significance within a few days.
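To make “statistical significance” concrete, here is a minimal Python sketch – with made-up numbers, not real test data – of the two-proportion z-test that many A/B testing tools use to compute it:

```python
# A minimal sketch of the two-proportion z-test that many testing tools
# run under the hood. All numbers below are made up for illustration.
from math import sqrt
from scipy.stats import norm

def ab_test_significance(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))              # two-sided p-value
    return p_value, p_value < alpha

# Control: 200 conversions from 10,000 visitors (2.0%)
# Variation: 245 conversions from 10,000 visitors (2.45%)
p_value, significant = ab_test_significance(200, 10_000, 245, 10_000)
print(f"p-value: {p_value:.4f}, significant at 95%: {significant}")
```

Notice that if you re-run this check every day while the test is live, the result can flicker in and out of significance – which is exactly why stopping the moment the p-value first dips below 0.05 is dangerous.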

If you end a test too soon, there’s a high chance you will draw the wrong conclusion. You may think your variation won when, over a full testing cycle, it would have lost.

When A/B testing, it is important to wait at least two weeks before announcing a winning variation, as this allows the results of both variations to absorb the natural fluctuations in your website’s traffic.

This also reduces bias caused by visitors behaving differently on different days of the week or at different times of day.

And if you happen to change anything on your website while your test is running, you should also wait – at least seven more days – before declaring a winner. Those extra days give your testing tool time to evaluate whether the new changes had any impact.

We always recommend that you don’t change anything on your website while the test is running, so that you don’t pollute the test in any way. But if you have changed something while a test was running, you’d have to re-run the test for at least two weeks.
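Keep in mind that two weeks is a floor, not a formula: how long a test really needs depends on your baseline conversion rate, the lift you want to detect, and your traffic. As a rough illustration – a standard power calculation, not any particular vendor’s methodology – here’s a hypothetical sketch estimating how many visitors each variation needs:

```python
# A rough sketch of a standard sample-size (power) calculation for a
# two-proportion z-test; numbers are hypothetical, not a firm rule.
from math import sqrt, ceil
from scipy.stats import norm

def visitors_per_variation(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Visitors needed per arm to detect the given relative lift."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)       # conversion rate if the lift is real
    z_alpha = norm.ppf(1 - alpha / 2)          # 1.96 for 95% confidence
    z_beta = norm.ppf(power)                   # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Example: 2% baseline conversion rate, hoping to detect a 20% relative lift
n = visitors_per_variation(0.02, 0.20)
print(f"~{n:,} visitors per variation")        # roughly 21,000 per arm
```

Divide that figure by your daily traffic per variation to estimate how many days the test needs – and if the math says fewer than two weeks, run it for two full weeks anyway so that every day of the week is represented.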

2. Make sure your test is based on a research-backed hypothesis

No matter what your testing velocity looks like, every test you run has to be based on a research-backed hypothesis – and this is where most people fail.

Without proper conversion research, whatever you test is a random idea. And to be frank, testing random ideas is a waste of time and traffic.

What exactly is a hypothesis, you ask? 

A hypothesis is a research-based statement that aims to explain an observed trend and propose a solution that will improve the result. It is an educated, testable prediction about what will happen.

Hypothesizing is something we take seriously at Invesp. In fact, it is the second step in our SHIP optimization process.

Conversion research can help you identify problem areas on your site, and from there you can come up with a hypothesis to support your test. 

The other benefit of basing your test on a research-backed hypothesis is that, win or lose, you learn something about your target audience. Remember, learning from your target audience also helps you come up with better testing ideas.

3. Don’t review the test results by yourself

Once you have gathered enough data, reached statistical significance, and run your test long enough, it’s time to analyze your results.

Regardless of the outcome, reviewing the results is an important stage that you shouldn’t skip, as it determines what you will test next.

Here’s the thing: the way you analyze your results doesn’t only determine what you test next – it can also reshape your entire testing strategy.

Another important thing to remember when reviewing your test results is that any outcome is a learning opportunity that will help you understand your audience better.

It’s even more important to review the results as a team if the test didn’t produce a winner. This is something we always do at Invesp – and it helps us brainstorm better testing ideas.

4. Make the element you’re testing more prominent

Sometimes when your test loses, it’s not because you tested the wrong element or had a bad testing idea.

Sometimes it’s because you did not position the element you’re testing correctly, or you did not make it prominent enough for visitors to notice.

Elements such as call-to-action buttons, sidebars, trust badges, and even contact details can easily be missed if they are not positioned correctly.

Positioning and prominence are important when it comes to A/B testing. If it makes sense, consider moving the elements you’re testing above the fold and re-running the test.

If you do this and your test still bears no fruit, at least you now know what matters to your audience and what doesn’t – and you can move on to testing other elements.

5. Make sure you’re not testing useless elements

Not every element on your website is worth testing. 

That “test everything” advice is BS. 

When you have obvious problems on your website – like usability issues – there’s no need to launch a test. Usability issues have to be fixed right away; we usually refer to these scenarios as stopping the bleeding.

A/B testing things like the color and size of a CTA button can be a waste of time and traffic. I know there are a bunch of case studies that promise huge gains if you test the size of your CTAs – but that’s a no-brainer; just follow the best practices and implement them.

The thing is, your visitors won’t be persuaded to make a purchase just because you changed the color of your button from orange to green, or from yellow to blue. It’s rarely about the color of the button; it’s the visual hierarchy that matters most.

So, yes…your test is probably going to lose if you test useless elements. 

6. Make radical changes to your variations

Incremental, small-scale changes to your variations have their place in A/B testing. But more often than not, small changes are the reason why your tests keep on losing.

In other words, if the variation(s) you’re testing are too similar to the control, visitors may not notice the difference, and these kinds of tests can take a long time to produce actionable results. This is something we see time and time again.

So, instead of going with incremental tests, try radical tests. 

A radical test is when you make big changes to your variations or test many web elements at the same time.

So, instead of testing small things like color changes, you can run a radical test on elements like value propositions, hero images, and form placements.

The downside of radical testing is that you won’t know exactly which elements persuaded visitors to make a purchase, since you will be testing many changes at the same time. Radical testing is a high-risk, high-reward tactic.

Conclusion

Unless you’re a conversion optimization expert, be prepared to see most of your A/B testing ideas tank. This doesn’t mean your testing ideas are no good – it means A/B testing is tricky. Sometimes you follow all the best practices and implement your test the right way, and your tests still keep on losing. If that sounds like you, try the tips in this article and see how they work for your website. Good luck.
