Editor Note: We highly recommend that you implement the different ideas in this blog post through AB testing. Use the guide to conduct AB testing and figure out which of these ideas work for your website visitors and which don't. Download Invesp's "The Essentials of Multivariate & AB Testing" now to start your testing program on the right foot.
Conducting a multivariate test is exciting.
By using the right tool, you can quickly develop different designs for your website, direct visitors to each design and watch the conversions for each variation.
Done incorrectly, AB testing can result in a waste of money, a misuse of man-hours, and, even worse, a decrease in your conversion rates.
Here are nine best practices you must follow when conducting a multivariate test.
1. Set expectations correctly
Wrong expectations translate into disappointment and lost investment.
Many marketers start testing after reading or watching a case study in which a company achieved an incredible increase in conversion rates. They jump into conversion optimization and testing looking for significant uplifts, but this excitement slowly disappears as they fail to achieve the results they were hoping for.
Setting reasonable goals to increase conversion rates will save you a lot of heartburn.
Here are two approaches you can take to set your goals:
Approach 1: Think of a reasonable annual goal for your testing program. A testing program should be able to achieve a 30% increase in conversions with conservative estimates. Is that 30% increase in conversions enough to cover all the costs related to conversion optimization?
Approach 2: Calculate the total investment for the testing program. This total should include time for both marketing and development teams, as well as the cost of the testing software. What reasonable return on investment do you expect? Are you looking to make $3 or $5 for every dollar you invest? Let's say your total investment in testing comes to $80,000 and that you expect to get $3 for every dollar you invest. That means you will have to increase sales by $240,000 to justify your testing program. The final step is to calculate what $240,000 represents in terms of conversion rate increase.
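The arithmetic behind Approach 2 can be sketched in a few lines. The investment and ROI figures come from the example above; the average order value, monthly traffic, and baseline conversion rate are hypothetical assumptions added for illustration:

```python
# Sketch of Approach 2: translating a testing budget into a required
# conversion-rate lift. Traffic, order value, and baseline rate are
# hypothetical assumptions, not figures from the article.

annual_investment = 80_000        # total testing program cost ($)
target_roi = 3                    # desired return per dollar invested
avg_order_value = 100             # hypothetical average order value ($)
monthly_visitors = 100_000        # hypothetical unique monthly visitors
baseline_conv_rate = 0.03         # hypothetical current conversion rate

required_revenue = annual_investment * target_roi            # $240,000
extra_orders_per_year = required_revenue / avg_order_value   # 2,400 orders
extra_orders_per_month = extra_orders_per_year / 12          # 200 orders

# How much the conversion rate must rise, in percentage points
required_lift_pts = extra_orders_per_month / monthly_visitors
relative_lift = required_lift_pts / baseline_conv_rate

print(f"Required extra revenue: ${required_revenue:,.0f}")
print(f"Conversion rate lift needed: {required_lift_pts:.2%} points "
      f"({relative_lift:.0%} relative)")
```

Under these assumed numbers, the program only needs to move the conversion rate from 3.00% to about 3.20%, which is a useful sanity check before committing to a budget.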
2. Understand your technical limitations
Most CRO programs fail because the project owners do not assign the proper technical resources to ensure quick and efficient implementation. After ten years of working with organizations across the globe and in many different industries, this continues to be the biggest reason we see projects fail.
You must allocate the proper resources to be able to handle one to two tests per month. You will probably need to change other projects’ priorities to manage these tests, but that is the only way you will succeed.
3. Determine the correct page to test
Some companies conduct testing in a random way. One month, they test the homepage; the next month, the product pages; and, on the third, they look at category pages. However, successful programs require investing time upfront to determine which pages of the website are leaking visitors, which pages need the most improvement, and which pages should be left alone.
Prioritizing the different pages of the website and coming up with a conversion roadmap is a must. Good conversion roadmaps typically cover six months spanning anywhere from eight to fifteen different tests.
4. Determine the sample size
The fact that your website gets 100,000 visitors does not mean that all of these visitors will go through a particular test. You must look closely at your analytics to determine the total number of unique visitors that will go through a page or class of pages (product pages or category pages) over a selected period.
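Once you know how many unique visitors reach the tested pages, you can estimate how many of them each variation needs. A rough sketch of the standard two-proportion power calculation follows, using only the Python standard library; the baseline rate, expected lift, significance level, and power are all hypothetical assumptions:

```python
# Rough per-variation sample-size estimate for a two-variant test,
# using the standard two-proportion power calculation.
# All input numbers below are hypothetical assumptions.
import math
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to detect a lift from p1 to p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# e.g. detecting a lift from a 3% baseline to 3.6% (a 20% relative lift)
n = sample_size_per_variation(0.03, 0.036)
print(f"{n:,} visitors per variation")
```

With these assumptions the answer lands around 14,000 visitors per variation, which is why a page that only receives a few thousand visitors a month cannot support a many-variation test.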
5. Start with a hypothesis
A hypothesis turns your test from a gambling exercise into a meaningful marketing study.
After spotting a possible problem on the page or visitor flow, a hypothesis creates a predictive statement on how removing or minimizing that problem would increase conversion rates.
A sample hypothesis from a test we conducted recently on a subscription page for a content website was:
“Presenting visitors with fewer obstacles, less noise, and a cleaner design will lead to a higher CR across all subscription packages.”
A good hypothesis will make you think more about your online visitors: what they are struggling with on a particular page, where they should go next, and how to address any of their fears, uncertainties and doubts.
The image above shows the original design of the shopping cart for a website that sells nursing uniforms. When our team examined the client's analytics data, we noticed high checkout abandonment rates.
Abandonment rates for un-optimized checkout usually range from 45% to 60%.
This client reported checkout abandonment rates close to 82%. Nothing in the checkout page explained this alarming rate.
Our team then conducted a usability test. Nurses were invited to place an order on the site while the optimization team observed, and exit interviews were conducted to gather information from participants. The interviews revealed that the visitors' biggest problem was the fear of paying too much for a product. Nurses are price conscious, and they know they can buy the same item from competing websites or brick-and-mortar stores.
So, price played a big role in deciding where to purchase a uniform. Our client was previously aware of the price sensitivity issue. The client’s website already offered money-back guarantees and 100% price match. The problem is that these assurances were only displayed on the main homepage of the site while most of the visitors landed on category and product pages. Visitors did not know about these assurances.
The hypothesis for this particular test: online visitors are sensitive to price, so adding assurances to the cart page can counter the price-related FUDs (fears, uncertainties, and doubts).
The team added an “assurance center” on the left-hand navigation of the cart page reminding visitors of the 100% price match and the money back guarantee.
The new version of the page resulted in a 30% reduction in shopping cart abandonment.
A hypothesis that works for one website may not succeed for another site or, even worse, may deliver negative results.
After the results of the previous client’s test had been published in the Internet Retailer online magazine, another client approached us to test an assurance center on their site. This client was also looking for a way to reduce the cart abandonment rate.
The above image shows the original design of the shopping cart.
This image shows the new design of the cart page with the assurance center added to the left navigation.
This test had the same hypothesis as the last one, that most online visitors did not convert on the site due to the price FUD and that adding assurances on the cart page would ease the shoppers’ concerns.
When we tested the new version with the assurance center against the old version, the results pointed to an entirely different outcome. The new assurance center caused the website conversion rate to drop by 4%. So, while the assurance center helped one client, it produced a negative impact for another.
Can we say with absolute certainty that adding an assurance center for this client would always produce negative results?
No. Several elements could have influenced this particular design and caused the drop in conversion rates. The assurance center design, copy or location could have been the real reason for the drop in conversions.
Analyzing the validation of a hypothesis through test data and creating a follow-up hypothesis is at the heart of conversion optimization. In this case, we needed to test many different elements around the assurance center before we could decide its impact on conversions.
Tests that produce increases in conversion rates are excellent for validating initial assumptions and hypotheses.
We do not mind tests that result in reducing conversion rates because we can learn something about our hypothesis from these tests.
We do worry about tests that do not produce any increases or decreases in conversion rates.
6. Create design variations based on test hypothesis
Once you have a hypothesis, the next step is to start creating new page designs that will validate it.
You must be careful when you are creating new designs. Do not go overboard with creating variations. Testing software allows you to create millions of variations for a single page. You must keep in mind that validating each new variation requires a certain number of conversions.
For high-converting websites, we like to limit a test to fewer than 30 page variations. For smaller websites, we limit a test to fewer than five new variations or designs.
7. Limit the number of variations
Some companies skip the analysis process by testing millions of designs against the original; the multivariate testing software available today allows them to do so. This throw-things-at-the-wall approach rarely works, and even when it works for one test, it fails in the long run.
You should avoid letting software do the thinking for you. What you are looking for are sustainable and repeatable results. The more variations you introduce in a test, the harder it is to attribute the results to any single change.
8. Re-test winning against control
A good practice after a test concludes is to run your original page against the winning design in a head-to-head (one-on-one) test. This helps you solidify your conclusion about the winning page and confirm that the testing data was not polluted by external factors.
9. Look for lessons learned
The real power of conversion optimization comes when you discover marketing insights from your tests that you can apply across verticals and channels.
Always be on the lookout for actionable marketing insights from your test. These are an excellent way to move forward with your next test.