
An A/B Testing Process That Guarantees Success

Simbar Dube


A successful testing program requires people who can design test scenarios, analyze results accurately, and create meaningful follow-up experiments.

Poorly designed experiments rarely provide concrete insights into why visitors behave a particular way on your website. You need criteria to determine, for example, which elements on a page to test, which external and internal factors could affect the results, and how to design the next phases of your testing program.

As much as testing is essential to any conversion optimization project, it should only be conducted after completing equally essential stages of optimization work, such as persona development, site analysis, design, and copy creation. Each element provides a building block for a highly optimized website that converts visitors into clients.

Below are four steps to follow in creating a successful split test. Please refer to this article for a more detailed guide on how we conduct our conversion optimization projects.

Problem identification

Before considering elements on the page to test, start by analyzing different problem areas on your website.

How do you do that? Several conversion optimization methodologies can help you. Invesp uses the Conversion Framework for page analysis.

Conversion Framework

The Conversion Framework analyzes seven different areas on the page.

These seven areas affect whether visitors stay on your website or leave. You must remember that different elements have diverse impacts based on the type of page you are evaluating.

Using the Conversion Framework, a conversion optimization expert can quickly pinpoint 50 to 150 problems on a webpage.

We do NOT believe you should attempt to fix all these simultaneously. Prioritize and focus on the top three to seven problems to get started.
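One way to narrow 50 to 150 identified problems down to a short list is to score each one and rank. The sketch below uses a simple impact/confidence/ease scoring model; the problem names and ratings are illustrative assumptions, not part of the Conversion Framework itself.

```python
# Illustrative problem list with 1-10 ratings for impact, confidence, and ease.
problems = [
    {"name": "No assurances on cart page", "impact": 9, "confidence": 8, "ease": 7},
    {"name": "Unclear shipping costs",     "impact": 8, "confidence": 6, "ease": 9},
    {"name": "Slow page load",             "impact": 7, "confidence": 9, "ease": 3},
    {"name": "Weak headline copy",         "impact": 5, "confidence": 4, "ease": 8},
]

def score(p):
    # Average the three ratings into a single priority score.
    return (p["impact"] + p["confidence"] + p["ease"]) / 3

# Work on only the top three problems first.
top_three = sorted(problems, key=score, reverse=True)[:3]
for p in top_three:
    print(f'{p["name"]}: {score(p):.1f}')
```

The exact weighting matters less than having a consistent, repeatable way to decide which three to seven problems to tackle first.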

Test hypothesis

A hypothesis is a predictive statement about the impact of removing or fixing one of the problems identified on a webpage.

The image below shows the original design of a shopping cart for one of our clients, who sells nursing uniforms. When our team examined the analytics data for the client, we noticed high checkout abandonment rates:

Client Shopping Cart

Original Design of the Shopping Cart

Abandonment rates for un-optimized checkout usually range from 65% to 75%.

This client reported checkout abandonment rates close to 82%. Nothing on the checkout page explained this high rate.

Our team then conducted a usability test. Nurses were invited to place an order on the site while the optimization team observed and conducted exit interviews to gather participant information. The nurses revealed that the biggest problem was the fear of paying too much for a product. As nurses are price-conscious, they know they can buy the same item from other competing websites or brick-and-mortar stores.

Our client was aware of the price sensitivity issue, and that price played a significant role in deciding whether visitors purchased a uniform. The client’s website offered money-back guarantees and a 100% price match.

The problem was that these assurances were displayed only on the site’s main homepage, while most visitors landed on category and product pages. Visitors did not know about these assurances.

The hypothesis for this particular test: the usability study revealed that visitors are sensitive to price, so adding assurances to the cart page will reduce visitor price concerns and cut cart abandonment by 20%.

The image below shows the new design of the shopping cart:

Client-shopping-cart-new-design

The team added an “assurance center” on the left-hand navigation of the cart page, reminding visitors of the 100% price match and the money-back guarantee.

The new version of the page resulted in a 30% reduction in shopping cart abandonment.
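A quick arithmetic check of what that result means, assuming the 30% figure is a relative reduction applied to the client's roughly 82% baseline abandonment rate:

```python
# A 30% relative reduction applied to an 82% baseline abandonment rate.
baseline_abandonment = 0.82
relative_reduction = 0.30

new_abandonment = baseline_abandonment * (1 - relative_reduction)
print(f"New abandonment rate: {new_abandonment:.1%}")  # 57.4%
```

That would bring the client from well above the typical 65% to 75% range down below it.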

A hypothesis that works for one website may not succeed or, even worse, deliver negative results for another site.

After the previous client’s test results were published in the Internet Retailer magazine, another client approached us to test an assurance center on their site. This client was also looking for a way to reduce their cart abandonment rate.

The image below shows the original design of the cart page:

landing page

The following image shows the new design of the cart page with the assurance center added to the left navigation:

Landing Page 2

This test used the same hypothesis as the last one: most online visitors did not convert on the site because of price FUD (fear, uncertainty, and doubt), and adding assurances on the cart page would ease the shoppers’ concerns.

The results revealed an entirely different outcome when we tested the new version with the assurance center against the control. The new assurance center caused the website conversion rate to drop by 4%. So, while the assurance helped one client, it negatively impacted another.

Can we say with absolute certainty that adding an assurance center for the second client would always produce negative results? No.

Several elements could have influenced this design and caused the drop in conversion rates. The assurance center design, copy, or location could have been the real reason for the drop in conversions.

Validating the hypothesis through testing and creating a follow-up hypothesis is at the heart of conversion optimization. In this case, we needed to test many elements around the assurance center before deciding its impact on conversions.
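One more reason for caution before declaring a winner or a loser: a small observed difference, such as a 4% relative drop, may not even be statistically significant. A minimal two-proportion z-test sketch (the visitor and conversion counts are illustrative, not the client's actual data):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the two samples to estimate the standard error under H0.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: 500 conversions out of 10,000 visitors (5.0%).
# Variation: 480 out of 10,000 (4.8%, a 4% relative drop).
z, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=480, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these numbers the p-value is far above 0.05, so a drop of that size on its own would not justify a firm conclusion about the assurance center.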

Tests that increase conversion rates are excellent at validating our initial assumptions about visitors and our hypothesis.

We do not mind tests that reduce conversion rates because we can learn about our hypothesis from these tests.

We do worry about tests that do not produce any increases or decreases in conversion rates.

Create variations based on the test hypothesis

Once you have the hypothesis, the next step is to create new page designs to validate it.

You must be careful when you are creating new designs. Do not go overboard with creating new variations. Most split-testing software allows you to create thousands, if not millions, of variations for a single page. You must remember that validating each new variation requires a certain number of conversions.

We limit page variations to fewer than seven for high-converting websites, and to two or three new variations for smaller sites.

Let visitors be the judge: test the new designs

How do you judge the quality of the new designs you introduced to test your hypothesis? You let your visitors be the judge through AB or multivariate testing.

Remember the following procedures when conducting your tests:

  • Select the right AB testing software to speed up the test implementation process. Technology should help you implement the test faster and not slow you down.
  • Do not run your test for less than two weeks. Several factors could affect your test results, so allow the testing software to collect data long enough before concluding the test.
  • Do not run your test for longer than four weeks. Several external factors could pollute your test results, so limit the impact of these factors by limiting the test length.
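The two-week minimum and four-week maximum above can be combined with a sample-size estimate into a simple duration check. The traffic and sample-size figures here are illustrative assumptions:

```python
from math import ceil

daily_visitors = 4_000          # visitors entering the test per day (assumed)
variations = 2                  # control + one challenger
needed_per_variation = 25_000   # from a sample-size calculation (assumed)

days_needed = ceil(needed_per_variation * variations / daily_visitors)
# Clamp to the two-to-four-week window recommended above.
days_to_run = min(max(days_needed, 14), 28)
print(f"Raw estimate: {days_needed} days; run for {days_to_run} days")
```

If the raw estimate comes out well beyond four weeks, the better fix is usually fewer variations or a bolder design change, not a longer test.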