Editor Note: We highly recommend that you validate the ideas in this blog post through A/B testing. Use this guide to conduct A/B testing and figure out which of these ideas work for your website visitors and which don’t. Download Invesp’s “The Essentials of Multivariate & AB Testing” now to start your testing program on the right foot.

More and more companies are turning to AB testing to increase their online conversion rates.

The scene has changed a lot since we first started doing conversion optimization over ten years ago. Back then, it was difficult to explain to companies the impact testing could have on website performance. Even today, however, most companies invest primarily in visitor-driving activities and set aside smaller budgets for converting those visitors into customers. CRO and split-testing budgets in these cases typically amount to about 10% of the resources devoted to driving visitors.

If your company is starting with A/B or multivariate testing, the following guide will help you avoid the 14 common mistakes we have seen companies fall into during their first year of testing.

 

Testing The Wrong Page

What page should you test? If you have a large website, the possibilities are endless.

For an e-commerce website, you can start at the top of the funnel, with pages such as the home page or category pages. You can also start at the bottom of the funnel, with pages such as the cart or checkout pages.

If you are a lead generation website, you can start by optimizing your home page, landing pages, or contact form pages.

When you choose the wrong page, you invest time and money in a page that might not have a real impact on your bottom line. Over ten years ago, most companies knew little about CRO and overlooked the importance of selecting the right page to test.

How do you determine where to start? You should rely on both qualitative and quantitative analyses to make that decision.

For quantitative analysis, first determine the percentage of visitors who arrive at the page you are considering improving.

If you want to optimize product pages, consult your analytics to determine what percentage of visitors reach them. You might discover that only 20% of your visitors get to your product pages, which means you are optimizing for only 20% of the website traffic. The remaining 80% is an untapped goldmine.

For a detailed quantitative analysis, you should create several analytics goals for your website. Your objective is to measure the percentage of traffic that flows from one section of the website to the next.

For an e-commerce website, set up the following funnels:

1.    Visitor flow from the home page to order confirmation
2.    Visitor flow from category pages to order confirmation
3.    Visitor flow from product pages to the cart page
4.    Visitor flow from product pages to order confirmation
5.    Visitor flow from product pages to category pages
6.    Checkout abandonment rate
7.    Cart abandonment rate

For a lead generation website, set up the following funnels:

1.    Visitor flow from the home page to the contact confirmation page
2.    Visitor flow from landing pages to the contact confirmation page
3.    Visitor flow from different services pages to the contact form page
4.    Visitor flow from different services pages to the contact confirmation page

The goal of each of these funnels is to start dissecting visitor behavior around the website.
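If you prefer to compute these flows from a raw analytics export rather than configuring goals in the UI, the calculation is straightforward. Below is a minimal sketch in Python with pandas; the file name and column names are hypothetical, so adapt them to your own export format.

```python
import pandas as pd

# Hypothetical export: one row per session, with 0/1 flags for the pages
# that session reached (the column names are assumptions, not a real schema).
sessions = pd.read_csv("sessions.csv")

def funnel_rate(from_page: str, to_page: str) -> float:
    """Percentage of sessions that reached `from_page` and later `to_page`."""
    reached_from = sessions[sessions[from_page] == 1]
    if len(reached_from) == 0:
        return 0.0
    return 100 * (reached_from[to_page] == 1).mean()

print(f"Home -> order confirmation:     {funnel_rate('home', 'order_confirmation'):.1f}%")
print(f"Category -> order confirmation: {funnel_rate('category', 'order_confirmation'):.1f}%")
print(f"Product -> cart:                {funnel_rate('product', 'cart'):.1f}%")
```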

This quantitative analysis gives you half of the picture.

You will also need to conduct a bit of qualitative analysis. This is where you conduct one-on-one meetings with visitors, focus groups, and email surveys asking for visitor feedback on your website.

You will ask participants what worked well for them on your website and what did not: what persuaded them to convert, and what made them leave.

By combining the results of your qualitative and quantitative analyses, you will be able to create a conversion roadmap that covers four to six months of testing.

 

Testing Without Creating Your Website Personas

Testing gives your visitors a voice in your website design process. It validates what works on your website and what does not. But, before you start testing, you must understand your visitors at an intimate level to create tests that appeal to them.

We have talked about the process of creating personas in several of our webinars and our book Conversion Optimization: The Art and Science of Converting Prospects to Customers.

Most companies have a decent knowledge of their target market. The challenge is how to translate that knowledge into actionable marketing insights on your website.

Personas play a crucial role in this process of translation.

Let’s say you are an e-commerce website that sells gift baskets online. You have worked with your marketing team to define two different segments within your target markets:

  • B2C segment: white females, ages 38 to 48, college educated, with annual income above $75,000. Your average order value for this segment is $125.
  • B2B segment: corporate clients, with the purchase decision made by an executive. These companies generate between $10 and $50 million in annual revenue. Your average order value for this segment is $930.

This overgeneralized format of market segments raises the central question of marketing design: how can you design your website to appeal to two distinctly different segments?

Creating personas will help you identify with each of the segments. At the end of the persona creation process for this website, you could end up with eight different personas. Let’s take one of them as an example:

[Persona profile: Suzan]

As you can see, Suzan resembles your target market. However, she removes the abstract nature of the marketing data. As you start designing different sections of the website, you will be thinking of Suzan and how she would react to them.

You might be thinking to yourself, “this is all great, but how does that impact my testing?”

Good, successful testing uses personas to create design variations that challenge your existing baseline.

How would you create a home page test when you are thinking of Suzan?

• She is a caring persona, so you can test different headlines that appeal to her.
• She is looking for unique gifts, so you can test different designs that emphasize the uniqueness of the products.
• Price is an important motivation for Suzan, so you can test different designs that emphasize pricing.

 

Testing Without A Hypothesis

A testing hypothesis is a predictive statement about possible problems on a web page, and the impact that fixing them might have on your KPIs.

Most testers dismiss hypotheses as a luxury. So, they create tests that generate results (positive or negative), but when you ask them about the rationale behind a test, they cannot explain it.

Getting disciplined about creating test hypotheses will magnify the impact of your test results.

But how do you come up with a test hypothesis in the first place?

The process of evaluating a web page to create a test starts with analyzing the page to determine possible conversion problems on it. To do so, you will have to conduct both qualitative and quantitative analyses.

Of course, you can always ignore the process, throw things at the wall, and pray that one of your challengers will win. And yes, it might work some of the time. But it will not work most of the time. And it will certainly NOT work if you are looking for repeatable and sustainable results.

To determine possible problems on the page, we use the Conversion Framework for page assessment.

[Image: The Conversion Framework by Invesp]

The Conversion Framework focuses on two sets of elements that impact your website conversion rate: website-centric factors and visitor-centric factors.

 

1. Website-centric factors

These are factors you control on your website. You can change them and adapt them to your visitors’ needs to generate more conversions. Website-centric factors include the following four major components:

a. Trust:  if the visitor does not trust your business, he will not interact with you. Trust translates into over 70 different elements that you should evaluate on a page.

b. FUDs:  fears, uncertainties, and doubts might stop your visitor from interacting with your website or converting on it.

c. Incentives: how do you incentivize the visitor to act right away? You can use price, urgency and scarcity strategies to get the visitor to act.

d. Engagement: engaged visitors are more likely to convert.

 

2. Visitor-centric factors

These factors depend on the visitors who land on your website, their personas, and the nature of their purchase. Visitor-centric factors are broken into the following subgroups:

a.  Visitor persona

b. Complexity of the sale: the complexity of the sale will impact how fast visitors convert. B2B enterprise sales differ from B2C small item purchases.

c. The buying stage: visitors at different buying stages require different types of design and content. Visitors early in the buying funnel require different information compared to visitors late in the funnel.

Before creating any test, evaluate each of the above elements on your page.

 

Not Considering Mobile Traffic

Most websites had to fix their mobile presentation because Google announced that mobile-friendliness is an important ranking factor.

But there is an even more significant story.

More and more websites are getting a higher percentage of their traffic from mobile devices. Most of our European clients report anywhere from 40% to 60% of their traffic coming to the website on a mobile device.

You can only expect these numbers to grow over the next few years.


So, what should you do?

1.    Determine the percentage of your website traffic that is using mobile devices to browse your website.
2.    Evaluate the behavior of mobile traffic compared to desktop traffic across all of the website funnels.
3.    Determine the top ten devices visitors are using to browse your website.

These three steps should give you plenty of action points for your website.

Let’s see an example:

#   Source     Medium     Device    Category to Product   Product to Cart   Cart to Checkout
1   Google     Organic    Desktop   40%                   18%               40%
    Google     Organic    Mobile    43%                   18%               22%
2   Google     Paid       Desktop   31%                   15%               33%
    Google     Paid       Mobile    35%                   13%               18%
3   Bing       Paid       Desktop   48%                   22%               44%
    Bing       Paid       Mobile    52%                   22%               25%
4   Facebook   Paid       Desktop   32%                   14%               37%
    Facebook   Paid       Mobile    36%                   13%               18%
5   Email      Internal   Desktop   50%                   35%               30%
    Email      Internal   Mobile    55%                   33%               18%

For a CRO expert, the data above provides a wealth of information. Looking at the flow from category to product pages, Bing paid traffic outperforms all other paid traffic and comes second only to the website’s email campaigns. Facebook paid traffic, on the other hand, underperforms.

For the category-to-product flow, mobile traffic outperforms desktop for all traffic sources and mediums. This indicates that the category page design works acceptably on mobile. Notice, however, the drop in mobile performance once visitors get to product pages.

Things get even worse when we evaluate mobile checkout. The numbers tell us that mobile checkout has a much higher abandonment rate than desktop.
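To make desktop-versus-mobile gaps like these explicit, you can compute them directly from the funnel rates. Here is a small sketch using the example numbers from the table above (pandas is assumed; in practice you would load the rates from your own analytics export).

```python
import pandas as pd

# Funnel rates from the example table above (percentages).
data = [
    ("Google",   "Organic",  "Desktop", 40, 18, 40),
    ("Google",   "Organic",  "Mobile",  43, 18, 22),
    ("Google",   "Paid",     "Desktop", 31, 15, 33),
    ("Google",   "Paid",     "Mobile",  35, 13, 18),
    ("Bing",     "Paid",     "Desktop", 48, 22, 44),
    ("Bing",     "Paid",     "Mobile",  52, 22, 25),
    ("Facebook", "Paid",     "Desktop", 32, 14, 37),
    ("Facebook", "Paid",     "Mobile",  36, 13, 18),
    ("Email",    "Internal", "Desktop", 50, 35, 30),
    ("Email",    "Internal", "Mobile",  55, 33, 18),
]
cols = ["source", "medium", "device", "to_product", "to_cart", "cart_to_checkout"]
df = pd.DataFrame(data, columns=cols)

# Put desktop and mobile side by side for each source/medium,
# then compute the mobile-minus-desktop gap at each funnel step.
pivot = df.pivot_table(index=["source", "medium"], columns="device",
                       values=["to_product", "to_cart", "cart_to_checkout"])
for step in ["to_product", "to_cart", "cart_to_checkout"]:
    pivot[(step, "gap")] = pivot[(step, "Mobile")] - pivot[(step, "Desktop")]

print(pivot.round(1))
```

Large negative gaps late in the funnel (cart to checkout) point straight at the mobile checkout experience.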

 

Not Running Separate Tests For New Vs. Repeat Visitors

Repeat visitors are loyal to your website. They are used to it with all of its conversion problems!

Humans are creatures of habit. In many instances, we find that repeat visitors convert at a lower rate when we introduce new and better designs.

For this reason, we always recommend testing new website designs with new visitors.

Before you test new designs, you need to assess how repeat visitors interact with your website compared to new visitors. If you run Google Analytics, you can view visitor behavior by adding a visitor segment to most reports.

Let’s examine how visitors view different pages on your website.

After you log in to Google Analytics, navigate to Behavior > Site Content > All Pages.

Google will display the page report showing different metrics for your website.

[Screenshot: Google Analytics All Pages report]

To view how repeat vs. new visitors interact with your website, apply segmentation to the report:

[Screenshot: applying a segment to the report in Google Analytics]

Select “New Users” and “Returning Users”:

[Screenshot: selecting the “New Users” and “Returning Users” segments]

Google will now display the same report segmented by the type of user:

[Screenshot: All Pages report segmented by user type]

Notice the difference for this particular website in terms of bounce and exit rates for repeat visitors compared to new visitors.

Most websites show anywhere from a 15% to 30% difference in metrics between the two segments. If your website shows less than a 10% difference, you should examine your design carefully to understand why repeat visitors are acting the same way as new visitors.
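If you export the segmented report, you can compute the gap between the two segments for every page and flag pages that fall under that 10% threshold. A rough sketch follows; the export file, column names, and segment labels are assumptions.

```python
import pandas as pd

# Hypothetical export of the All Pages report, segmented by user type.
# Expected columns: page, segment, bounce_rate, exit_rate.
pages = pd.read_csv("pages_by_segment.csv")

# Put the two segments side by side for each page.
wide = pages.pivot(index="page", columns="segment",
                   values=["bounce_rate", "exit_rate"])

# Relative difference between returning and new visitors, per metric.
for metric in ["bounce_rate", "exit_rate"]:
    wide[(metric, "pct_diff")] = 100 * (
        wide[(metric, "Returning Users")] - wide[(metric, "New Users")]
    ) / wide[(metric, "New Users")]

# Pages where the two segments behave almost identically (< 10% apart)
# deserve a closer look, per the rule of thumb above.
suspicious = wide[wide[("bounce_rate", "pct_diff")].abs() < 10]
print(suspicious)
```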

Next, create a test for new visitors. Most testing software will allow you to segment visitors.

If your testing software does not support this feature, then switch to something else!

In the Pii A/B testing engine, at the last step of creating a test, you can select which visitor segment to run the test for:

[Screenshot: selecting a visitor segment in Pii]
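For context, the check a tool like this performs under the hood is simple: a visitor with no first-party cookie from a previous visit is treated as new, and only those visitors are enrolled in the test. Below is a minimal server-side sketch of the idea; Flask, the cookie names, and the 50/50 split are illustrative assumptions, not a reference implementation.

```python
import random
from flask import Flask, request, make_response

app = Flask(__name__)

@app.route("/")
def home():
    # A visitor with no prior first-party cookie is treated as "new".
    is_new_visitor = "returning" not in request.cookies

    # Keep the assignment sticky so a visitor always sees the same variant.
    variant = request.cookies.get("variant")
    if variant is None:
        if is_new_visitor:
            # Enroll only new visitors: 50/50 between original and challenger.
            variant = random.choice(["original", "challenger"])
        else:
            # Returning visitors are excluded and always see the original.
            variant = "original"

    resp = make_response(f"rendering the {variant} home page")
    resp.set_cookie("variant", variant, max_age=60 * 60 * 24 * 30)
    # Mark the visitor so future visits count as returning.
    resp.set_cookie("returning", "1", max_age=60 * 60 * 24 * 365)
    return resp
```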

Not Considering Your Traffic Sources

Visitors land on your website from diverse traffic sources and mediums. You will notice that visitors from different sources interact with your website in different ways.

Let’s consider the conversion rate uplift formula.

Trust is one of the first and largest influences on whether a visitor is persuaded to convert on your website. One of the sub-elements of trust is continuity.

Continuity means you must maintain consistent messaging and design from the traffic source and medium through to the landing page.

Running the same test for all traffic sources ignores the fact that visitors might have seen different messaging or designs before landing on your website.

To assess how traffic sources can impact your website, follow these three steps:

1.    Understand how different traffic sources/mediums interact with your website.
2.    Analyze reasons for different visitor interaction (if any).
3.    Create separate tests based on the traffic sources/mediums.

Let’s see how this is done in Google Analytics.

First, generate the traffic source/medium report. To do this, navigate to Acquisition > All Traffic > Source/Medium.

[Screenshot: navigating to the Source/Medium report]

Google will generate the report for you. It is difficult to assess all of your traffic sources, so we recommend assessing either your top ten or any sources that drive more than 50,000 visitors to your website.

As you analyze the report, examine the following metrics:

Bounce rate

Exit rate

Pages/Session

Conversion rate for different goals


Ask questions such as:

Do you see high bounce rates for paid traffic?

Which traffic sources are driving the lowest bounce and exit rates?

What traffic sources are driving the highest conversions?

So far, these steps help in determining whether there is a difference in visitor behavior across traffic sources.

Next, you need to determine the causes of such different behavior.

This will require examining each traffic source:

What are visitors seeing prior to landing on your website?
Do you have control over the display/messaging that visitors see?
Can you change your landing page to maintain continuity?
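To make the first two steps concrete, here is a rough sketch that flags traffic sources whose bounce rate is well above the site-wide average, using a hypothetical export of the Source/Medium report (the file name, column names, and 25% threshold are assumptions).

```python
import pandas as pd

# Hypothetical export of the Source/Medium report.
# Expected columns: source, medium, sessions, bounce_rate, conversion_rate.
report = pd.read_csv("source_medium_report.csv")

# Focus on the top ten sources by traffic, as recommended above.
top = report.nlargest(10, "sessions")

# Sessions-weighted site-wide bounce rate as the baseline.
site_bounce = (report["bounce_rate"] * report["sessions"]).sum() / report["sessions"].sum()

# Flag sources bouncing well above the site average; these are the
# candidates for a continuity problem between ad and landing page.
flagged = top[top["bounce_rate"] > site_bounce * 1.25]

print(f"Site-wide bounce rate: {site_bounce:.1f}%")
print(flagged[["source", "medium", "sessions", "bounce_rate", "conversion_rate"]])
```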

 

 

Trying To Do Too Much In One Test

This is one of the mistakes we fell into the first year we conducted A/B and MVT testing. The clients wanted to see large-scale tests. They were not convinced by small tests. And instead of explaining to them what we were trying to accomplish, we created large tests where we changed too many things.

Most of our testing produced excellent results.

As a matter of fact, in 2007, 82% of our tests generated an uplift in conversions, and 78% of our tests produced more than a 12% increase in conversion rates.

These are amazing results. So, what was the problem?

Since we were making too many changes on a page in every test, we could not isolate what exactly was causing the uplift. Our team could only guess: each test changed seven to nine different factors, and the increase could have come from any one of them.

This approach might be fine if you are looking to run two to three tests and be done with testing. But if you are looking for a long-term testing program that takes a company from a 2% conversion rate to a 9% conversion rate, that approach will definitely fail.

Our approach ten years later looks tremendously different. Our testing programs today are sharply focused.

Every test we perform now relies on a hypothesis and introduces small changes backed by research.

Let’s take a recent evaluation we did for a top IRCE 500 retailer. Their product pages suffered from high bounce rates: visitors were clicking on PPC ads, getting to the product pages, checking the prices, and leaving. They were just comparison shopping.

Instead of doing a single test for the product pages, we did five rounds of testing:

1.    Test focused on the value proposition of the website
2.    Test focused on price-based incentives
3.    Test focused on urgency-based incentives
4.    Test focused on scarcity-based incentives
5.    Test focused on social proof

The results?

The website increased revenue (not conversions!) by over 180%.

 

Running A/B Tests When You Are Not Ready

Everyone is talking about A/B and multivariate testing. The idea of being able to increase your website revenue without having to drive more visitors to the website is amazing.

But A/B testing might not work for every website. MVT testing is for SURE not for every website.

Testing might not work for you in two instances: when you do not have enough conversions, or when you do not have the mindset for running a testing program.

1. You do not have enough conversions

If you do not have enough traffic coming to your website, testing might not work for you.

A small A/B test that pits one challenger against the original design requires your website to have a minimum of 200 conversions per month. If you are getting fewer than 200 conversions, your tests might run too long without reaching a conclusion.

We typically do not start A/B testing with a client unless the website has 400 conversions per month.

Multivariate testing requires more conversions and more traffic. Do not consider MVT testing unless your website has 2,000 conversions per month.
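These conversion thresholds map directly onto how long a test must run. As a sanity check, you can estimate the required sample size with the standard two-proportion approximation, sketched below at roughly 95% confidence and 80% power; the baseline rate, target uplift, and traffic figure are hypothetical.

```python
def sample_size_per_variant(baseline_rate: float, relative_uplift: float) -> int:
    """Approximate visitors needed per variant for a two-proportion test
    at ~95% confidence and ~80% power: n ~= 16 * p * (1 - p) / delta^2,
    where 16 ~= 2 * (1.96 + 0.84)^2."""
    p = baseline_rate
    delta = p * relative_uplift          # absolute lift to detect
    return int(16 * p * (1 - p) / delta ** 2)

# Example: 2% baseline conversion rate, hoping to detect a 20% relative uplift.
n = sample_size_per_variant(0.02, 0.20)
monthly_visitors = 25_000                # hypothetical traffic level
print(f"~{n:,} visitors per variant, "
      f"roughly {2 * n / monthly_visitors:.1f} months for a two-variant test")
```

At low conversion rates, the required sample size balloons quickly, which is exactly why low-conversion sites see tests run for months without concluding.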

2.  You do not have the mindset for running a testing program

While the first problem is easy to identify (you just look at your monthly conversions), the second is more difficult to deal with.

The truth is that not every organization or business is ready for testing.

Testing requires you to admit that visitors may hate your website design.

Testing requires you to admit that some designs which you hate will actually generate more sales for you.

Testing requires surrendering the final design decision to your visitors.

On the surface, every business owner or top executive will say that they are focused on their revenue. But after running 400+ conversion optimization projects comprising 3,000+ tests, we can simply state that this is not the case.

We have seen business owners reject the results of testing that generated 32% uplift in conversions with 99% confidence because they liked the original design.


We have seen executives reject the results of testing that generated 25% uplift in conversions with 95% confidence because they hated the winning design.

We have also seen testing programs fail because, while the CEO of the company was committed to testing, the team was not sold on the idea.

Split testing requires a complete culture change for many companies. To make sure its results have a direct and significant impact on your bottom line, everyone – and we do mean everyone – must be completely committed to it.

 

Calling The Test Too Soon

You run the test and, a few days later, your testing software declares a winning design. Everyone is excited about the uplift. You stop the test and make the winning challenger your default design.

You expect to see your conversion rate increase. That does not happen.

Why?

Because the test was called too soon. Most testing software declares winners after achieving a 95% confidence level.

They do not take into account the number of conversions recorded by the original design and its variations. If the test is allowed to run long enough, you will often notice that the recorded uplift slowly disappears.

So, how do you deal with this?

1.    Regardless of the default settings in the testing software you use, adjust it to require a minimum of 100 conversions for both the original design and the winning challenger. If you use Pii, this is the default setting.

2.    Run the test for a minimum of seven days so that it covers every day of the week. That will account for fluctuations that might happen between different days of the week.
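Both rules are easy to encode as an explicit stopping check. The sketch below combines the minimum-conversion rule, the full-week rule, and a two-proportion z-test for significance; the thresholds follow the recommendations above, and scipy is assumed for the normal distribution.

```python
from math import sqrt
from scipy.stats import norm

def ready_to_call(conv_a, n_a, conv_b, n_b, days_running,
                  min_conversions=100, min_days=7, confidence=0.95):
    """Declare a winner only if both variants have enough conversions,
    the test has covered every day of the week, and significance is reached."""
    if conv_a < min_conversions or conv_b < min_conversions:
        return False, "not enough conversions yet"
    if days_running < min_days:
        return False, "has not run a full week yet"

    # Two-proportion z-test on the pooled conversion rate.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))

    if p_value > 1 - confidence:
        return False, f"not significant yet (p = {p_value:.3f})"
    return True, f"winner at {100 * (1 - p_value):.1f}% confidence"

# Example: 120/5000 vs 160/5000 conversions after nine days.
print(ready_to_call(conv_a=120, n_a=5000, conv_b=160, n_b=5000, days_running=9))
```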

Why doesn’t most testing software require a minimum of 100 conversions?

To be fair, decent testing software allows adjusting the minimum number of conversions. Our guess is that requiring 100 conversions slows down testing, and this could negatively impact your view of the testing software.

But it should not.

 

Calling The Test Too Late

You have no control over external factors when you run a split test. These factors could pollute the results of your testing.

Three different categories of external factors could impact your results:

–    General market trends: a sudden downturn of the economy, for example
–    Competitive factors: a competitor running a large marketing campaign
–    Traffic factors: organic or paid traffic quality change

All of these factors could negatively impact the results of your testing through no fault of the testing program itself. For this reason, we highly recommend limiting the time span of any split test to no longer than 30 days.

We have seen companies require tests to run for two to three months, trying to achieve confidence on a test. In the process, they allow their testing data to get polluted.

Remember that achieving 95% confidence is not a goal set in stone. Confidence levels provide a general trend line indicating that the test results are positive and consistent. If a test consistently shows a positive improvement for 30 days with a confidence level of 88%, that should be good enough to call the test, as opposed to letting it run for an additional 30 days.

 

When Technology Becomes A Problem

The goal of testing is to increase your conversion rates and your revenue. Developers sometimes struggle with this focus, especially when they get fascinated by a certain piece of software that complicates implementing a test.

As a goal, most split tests should not take longer than three days to implement.

As a matter of fact, if you follow what we recommended in conducting small tests, most of your tests should not take longer than one day to implement.

You must keep in mind that the first two tests will take a little longer to implement as your development team gets used to whatever testing platform you selected.

However, if you notice that, over a six-month period, all of your tests are taking over a week to implement, then you MUST assess the cause of the delay:

–    Is the testing platform too complicated and an overkill for the type of testing you are doing?
–    Is your website or application code developed without good standards, causing the delays?
–    Does your development team have a good handle on implementing tests or are they struggling with every test?

Golden rule: A good testing program that will generate increases in revenue should deploy two tests per month.

 

Running Simultaneous Tests

Some companies try to do too much with testing and launch simultaneous tests.

This might be fine if the traffic for each test does not intersect. In this case, tests run in separate swim lanes. However, most of the time, this is not the case.

If the same visitors are navigating through your website and seeing different tests, you are cannibalizing your own testing data.

Imagine the scenario of running a test on the home page, with three challengers to the original design. A visitor might view any of the following designs:

H0: Original home page design
H1: Challenger 1
H2: Challenger 2
H3: Challenger 3

At the same time, you run a test on the product pages, with two challengers to the original design. A visitor might view any of the following designs:

P0: Original product page design
P1: Challenger 1
P2: Challenger 2

In this scenario, as the visitor navigates from the home page to the product pages, he can see any of the following combinations of designs:

1.  H0, P0: Original home page, original product page
2.  H0, P1: Original home page, product page challenger 1
3.  H0, P2: Original home page, product page challenger 2
4.  H1, P0: Home page challenger 1, original product page
5.  H1, P1: Home page challenger 1, product page challenger 1
6.  H1, P2: Home page challenger 1, product page challenger 2
7.  H2, P0: Home page challenger 2, original product page
8.  H2, P1: Home page challenger 2, product page challenger 1
9.  H2, P2: Home page challenger 2, product page challenger 2
10. H3, P0: Home page challenger 3, original product page
11. H3, P1: Home page challenger 3, product page challenger 1
12. H3, P2: Home page challenger 3, product page challenger 2

Two simple separate tests end up impacting each other.
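A common way to keep simultaneous tests in separate swim lanes is deterministic bucketing: hash each visitor ID into a fixed number of buckets and give each test a disjoint range, so no visitor can ever enter two tests at once. Below is a minimal sketch; the bucket split is an assumption for illustration.

```python
import hashlib

def bucket(visitor_id: str, buckets: int = 100) -> int:
    """Deterministically map a visitor to one of `buckets` buckets."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return int(digest, 16) % buckets

def assign_test(visitor_id: str) -> str:
    """Disjoint bucket ranges keep the two tests mutually exclusive."""
    b = bucket(visitor_id)
    if b < 50:
        return "home_page_test"      # buckets 0-49: home page test only
    elif b < 80:
        return "product_page_test"   # buckets 50-79: product page test only
    return "no_test"                 # buckets 80-99: held out entirely

print(assign_test("visitor-12345"))
```

Because the hash is deterministic, the same visitor always lands in the same bucket, and each test analyzes only its own lane of traffic.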

 

Missing The Insights

The real impact of conversion optimization takes place after you conclude each of your split tests, and it is by no means limited to the impact on your conversion rates.


Yes, seeing an increase in conversion rates is awesome!

But there is a secret to multiplying the results of any test significantly: deploying the marketing lessons you learned across markets and verticals.

Let’s put this in perspective.

We worked with one of the largest satellite providers in North America, helping them test different landing pages for their PPC campaigns. The testing program was very successful, generating significant increases in conversion rates.

As we concluded the testing program, the director of digital marketing called us and asked if we could apply the same lessons to their newspaper advertising.

This was a new challenge.

Would offline buyers react to the advertising the same way online buyers did?

There was only one way to find out. We had to test it.

Each test we created was built on a hypothesis, applying to the newspaper advertising the lessons we had learned from the previous tests. We ran three different tests.

Each test generated an uplift in conversions.

But things did not stop there.

We then applied the same lessons to the snail mailers the company was sending.

Again, we saw uplifts in conversions.

If you follow a conversion optimization methodology, then you will be able to take lessons from your testing and apply them again and again.

 

Not Documenting Everything

A conversion optimization program is documentation intensive. You should document every little detail.

You must document:

Your qualitative analysis research and findings
Your quantitative analysis research and findings
Every page analysis you conduct
Every hypothesis you make
Images of every design you deploy
Testing data

Many companies do not pay close attention to the importance of documentation. They discover its importance when they come back to it a few months later. When there is no documentation, or when it is scattered across emails, they struggle to remember why they made a particular change. But by then it is too late.

From the very start of any CRO project, decide on the method you will use to document everything, and apply it consistently.

Invesp can help you!