Editor Note: We highly recommend that you implement the ideas in this blog post through AB testing. Use the guide to conduct AB testing and figure out which of the ideas in this article work for your website visitors and which don’t. Download Invesp’s “The Essentials of Multivariate & AB Testing” now to start your testing program on the right foot.
First, do not rely solely on testing software to create successful tests.
You need people who can design efficient test scenarios, analyze results accurately, and create meaningful follow-up tests.
Poorly designed experiments might not provide concrete insights into conversion rate optimization. You need criteria to determine, for example, which elements on a page you should test, which external factors could affect the results, and in which ways to rearrange the designs for new phases of the test.
As much as testing is essential to any optimization project, it should only be conducted after the completion of equally important stages of optimization work such as persona development, site analysis, design and copy creation. Each of these elements provides a building block towards a highly optimized website that converts visitors into clients.
Find below four steps to follow in creating a successful split test.
Before thinking about elements on the page to test, start by analyzing different problem areas.
How do you do that? Several conversion optimization methodologies can help you. Invesp uses the Conversion Framework for page analysis.
Examples of other methods include the LIFT model and Marketing Experiments heuristics.
The Conversion Framework analyzes seven different areas on the page:
• Trust and confidence
• Buying stage
• Sales complexity
Elements of these areas affect whether visitors stay on your website or leave. You must keep in mind that different elements have diverse impacts based on the type of page you are evaluating.
A conversion optimization expert can easily pinpoint 50 to 150 problems on a webpage.
We do NOT believe you should attempt to fix all of these at once. Prioritize and focus on the top three to seven problems to get started.
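One lightweight way to prioritize is to score each problem on estimated impact and ease of fixing, then work from the top of the list. A minimal sketch follows; the problem names and scores are purely hypothetical, for illustration only:

```python
# Hypothetical prioritization of problems found during a page audit.
# Impact and ease scores (1-10) are illustrative assumptions, not real audit data.
problems = [
    {"issue": "no trust seals near checkout button", "impact": 8, "ease": 9},
    {"issue": "vague headline on product page",      "impact": 6, "ease": 7},
    {"issue": "slow page load on mobile",            "impact": 9, "ease": 3},
    {"issue": "cluttered left navigation",           "impact": 4, "ease": 5},
    {"issue": "no money-back guarantee shown",       "impact": 7, "ease": 8},
]

# Rank by combined score and keep only the top three to start with.
ranked = sorted(problems, key=lambda p: p["impact"] * p["ease"], reverse=True)
top_three = ranked[:3]
for p in top_three:
    print(p["issue"], p["impact"] * p["ease"])
```

However you weight the scores, the point is the same: fix a handful of high-value problems first rather than attacking all 50 to 150 at once.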
A hypothesis is a predictive statement about the impact of removing or fixing one of the problems identified on a webpage. Successful testing begins by creating a hypothesis to explain why visitors react to certain elements on a page.
The image below shows the original design of a shopping cart for one of our clients who sells nursing uniforms. When our team examined the analytics data for the client, we noticed high checkout abandonment rates.
Abandonment rates for un-optimized checkout usually range from 45% to 60%.
This client reported checkout abandonment rates close to 82%. Nothing in the checkout page explained this alarming rate.
Our team then conducted a usability test. Nurses were invited to place an order on the site while the optimization team observed and conducted exit interviews to gather information from participants. The interviews revealed that the visitors’ biggest problem was the fear of paying too much for a product. As nurses are price conscious, they are aware they can buy the same item from competing websites or brick-and-mortar stores.
So, price played a big role in deciding where to purchase a uniform. Our client was previously aware of the price sensitivity issue. The client’s website already offered money-back guarantees and 100% price match. The problem is that these assurances were only displayed on the main homepage of the site while most of the visitors landed on category and product pages. Visitors did not know about these assurances.
The hypothesis for this particular test: online visitors are sensitive to price, and adding assurances can counter the fears, uncertainties, and doubts (FUDs) the visitors have due to price concerns. The image below shows the new design of the shopping cart.
The team added an “assurance center” on the left-hand navigation of the cart page reminding visitors of the 100% price match and the money-back guarantee.
The new version of the page resulted in a 30% reduction in shopping cart abandonment.
New Design of the Shopping Cart
A hypothesis that works for one website may not succeed for another site or, even worse, may deliver negative results.
After the results of the previous client’s test had been published in the Internet Retailer online magazine, another client approached us to test an assurance center on their site. This client was also looking for a way to reduce the cart abandonment rate.
Original Shopping Cart Page
The above image shows the original design of the shopping cart.
New Shopping Cart Page with Assurance Center
This image shows the new design of the cart page with the assurance center added to the left navigation.
This test had the same hypothesis as the last one, that most online visitors did not convert on the site due to the price FUD and that adding assurances on the cart page would ease the shoppers’ concerns.
When we tested the new version with the assurance center against the old version, the results pointed to an entirely different outcome. The new assurance center caused the website conversion rate to drop by 4%. So, while the assurance center helped one client, it produced a negative impact for another.
Can we say with absolute certainty that adding an assurance center for this client would always produce negative results? No. Several elements could have influenced this particular design and caused the drop in conversion rates. The assurance center design, copy or location could have been the real reason for the drop in conversions.
Validating a hypothesis against test data and creating a follow-up hypothesis is at the heart of conversion optimization. In this case, we needed to test many different elements around the assurance center before we could decide its impact on conversions.
Tests that produce increases in conversion rates are excellent at validating initial assumptions and hypotheses.
We do not mind tests that result in reducing conversion rates, because we can learn something about our hypothesis from these tests.
We do worry about tests that do not produce any increases or decreases in conversion rates.
Create Variations Based On the Test Hypothesis
Once you have the hypotheses, the next step is to start creating new page designs that will validate them.
You must be careful when you are creating new designs. Do not go overboard with creating variations. Testing software allows you to create millions of variations for a single page. You must keep in mind that validating each new variation requires a certain number of conversions.
For high-converting websites, we like to limit page variations to fewer than 30. For smaller websites, we like to limit the test to fewer than five new variations or designs.
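The conversions requirement is easy to underestimate. As a rough sketch using the standard two-proportion sample-size formula (95% confidence, 80% power; the baseline rate and lift below are assumptions for illustration, not client data), each variation needs on the order of:

```python
import math

def sample_size_per_variation(p1, p2):
    """Approximate visitors needed per variation to detect a change
    from conversion rate p1 to p2 (two-sided two-proportion z-test,
    normal approximation, 95% confidence, 80% power)."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Detecting a 20% relative lift on a 3% baseline conversion rate:
n = sample_size_per_variation(0.03, 0.036)
print(n)  # roughly 14,000 visitors per variation
```

Multiply that figure by the number of variations in the test, and it becomes clear why dozens of variations are impractical for a low-traffic site.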
Let Your Visitors Be The Judge: Test The New Designs
How do you judge the quality of the new designs you introduced to test your hypotheses? You let your visitors be the judge through AB or multivariate testing.
Remember the following procedures when conducting your tests:
• Select the right technology platform to speed up the process of implementing the test. Technology should help you implement the test faster and should NOT slow you down;
• Do not run your test for less than five days. Several factors could affect your test results, so allow the testing software to collect data long enough before concluding the test;
• Do not run your test for longer than four weeks. Several external factors could pollute your test results, so try to limit the impact of these factors by limiting the test length.
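When deciding whether a test has collected enough data to conclude, one simple check is whether the observed difference between versions is statistically significant. Here is a minimal two-proportion z-test sketch; the visitor and conversion counts are hypothetical, for illustration only:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z statistic, approximate p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical results: control converts 120 of 4,000 visitors,
# the variation converts 156 of 4,000 visitors.
z, p = two_proportion_z_test(120, 4000, 156, 4000)
print(round(z, 2), round(p, 3))  # significant at the 5% level if p < 0.05
```

Even with a significant result, respect the minimum and maximum durations above: a test that "wins" after two days often regresses once a full business cycle of traffic has been sampled.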