{"id":10100,"date":"2017-11-16T18:14:29","date_gmt":"2017-11-16T23:14:29","guid":{"rendered":"https:\/\/www.invespcro.com\/blog\/?p=10100"},"modified":"2024-09-03T14:53:11","modified_gmt":"2024-09-03T14:53:11","slug":"aa-tests","status":"publish","type":"post","link":"https:\/\/www.invespcro.com\/blog\/aa-tests\/","title":{"rendered":"What Is An AA Test And Why You Should Run AA Tests"},"content":{"rendered":"<span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 10<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span><p><span style=\"font-weight: 400;\">Did you know that 80% to 90% of A\/B tests do not produce a statistically significant result? Only<\/span><a href=\"https:\/\/www.invespcro.com\/blog\/the-state-of-ab-testing\/\"><span style=\"font-weight: 400;\"> 1 out of 8<\/span><\/a><span style=\"font-weight: 400;\"> A\/B tests shows a significant difference.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When done incorrectly, <\/span><a href=\"https:\/\/www.invespcro.com\/ab-testing\/vs-multivariate-testing\/\"><span style=\"font-weight: 400;\">A\/B testing<\/span><\/a><span style=\"font-weight: 400;\"> leads some marketers to question its value\u2014their A\/B test reports an uplift of 20%, yet the increase reported by the AB testing software never seems to translate into improvements or profits.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The reason? 
\u201cMost winning A\/B test results are illusory.\u201d <\/span><a href=\"http:\/\/www.datascienceassn.org\/sites\/default\/files\/Most%20Winning%20A-B%20Test%20Results%20are%20Illusory.pdf\"><span style=\"font-weight: 400;\">(Source: Qubit)<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">Furthermore, most arguments that call for running A\/A testing consider it a sanity check <\/span><a href=\"https:\/\/www.invespcro.com\/ab-testing\/\"><span style=\"font-weight: 400;\">before you run an A\/B test<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In this post, we\u2019ll examine the arguments for and against A\/A testing, suggest other ways to look at A\/A tests, and explain why we run them regularly on our <\/span><a href=\"https:\/\/www.invespcro.com\/blog\/creating-a-conversion-roadmap-how-to-prioritize-conversion-problems-on-your-website\/\"><span style=\"font-weight: 400;\">CRO projects<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Let\u2019s dive right in!<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">What is an AA test?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">An A\/A test is essentially an A\/B test where both variations are identical. 
Instead of comparing different versions of a webpage or marketing material to see which performs better, an A\/A test compares two identical variations against each other.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In theory, AA tests are designed to help marketers examine the reliability of the <\/span><a href=\"http:\/\/www.figpii.com\/\"><span style=\"font-weight: 400;\">A\/B testing tool<\/span><\/a><span style=\"font-weight: 400;\"> used to run them, aiming to find \u201cno difference between the control and variant.\u201d<\/span><\/p>\n<figure id=\"attachment_98787\" aria-describedby=\"caption-attachment-98787\" style=\"width: 600px\" class=\"wp-caption alignnone\"><img fetchpriority=\"high\" decoding=\"async\" class=\"size-full wp-image-98787\" src=\"https:\/\/www.invespcro.com\/blog\/images\/blog-images\/image1-9.jpg\" alt=\"What Is an AA Test? \" width=\"600\" height=\"433\" srcset=\"https:\/\/www.invespcro.com\/blog\/images\/blog-images\/image1-9.jpg 600w, https:\/\/www.invespcro.com\/blog\/images\/blog-images\/image1-9-300x217.jpg 300w\" sizes=\"(max-width: 600px) 100vw, 600px\" \/><figcaption id=\"caption-attachment-98787\" class=\"wp-caption-text\">AA Testing<\/figcaption><\/figure>\n<p><span style=\"font-weight: 400;\">Now, since you\u2019re running the original page against itself (or multiple versions of itself), it\u2019s logical to expect that visitors will react the same way to all the different test recipes. Thus, the A\/B testing software will not be able to declare a winner.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The theory is that the difference in <\/span><a href=\"https:\/\/www.invespcro.com\/ab-testing\/process\/\"><span style=\"font-weight: 400;\">conversion rates between variations<\/span><\/a><span style=\"font-weight: 400;\"> will not reach statistical significance.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Other versions of AA tests include running an AABB test. 
In this case, you will have the control, an identical variation of the control, a challenger, and an identical copy of the challenger.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Then, you will <\/span><a href=\"https:\/\/www.figpii.com\/ab-testing\"><span style=\"font-weight: 400;\">run your A\/B test<\/span><\/a><span style=\"font-weight: 400;\"> as you usually do with an original and a challenger, but you will also add two sanity-check versions to measure the accuracy of the testing software.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">How does AA testing work?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Running an AA test is much like running AB tests, except in this case, the two groups of users randomly chosen for each variation are given the same experience.<\/span><\/p>\n<p><b>Here\u2019s a quick breakdown of how AA testing works:<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Create two identical versions:<\/b><span style=\"font-weight: 400;\"> Duplicate your existing webpage, email, or other marketing material exactly.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Divide traffic: <\/b><span style=\"font-weight: 400;\">Split your test traffic equally between the two identical versions.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Monitor results: <\/b><span style=\"font-weight: 400;\">Track key performance indicators (KPIs) like click-through rates, baseline conversion rate, or revenue generated.<\/span><\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">If your A\/B testing tool is working correctly, you should see no statistically significant difference between the two identical versions. 
Any significant difference indicates a potential issue with the tool, your experiment setup, or data quality.<\/span><\/p>\n<p><b>Note: <\/b><span style=\"font-weight: 400;\">You will also want to integrate your AB testing tool with your analytics to compare conversions and revenue reported by the testing tool to those reported by analytics\u2014they should correlate.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Purpose of AA Testing<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Whether to conduct an A\/A test or not invites conflicting opinions. Some companies include running an A\/A test as part of any engagement, while others consider it a waste of time and resources.<\/span><\/p>\n<p><b>Here are some reasons for companies to run AA tests:\u00a0<\/b><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Validate testing setup:<\/b><span style=\"font-weight: 400;\"> Ensure the testing tool, data collection, and analysis processes are working correctly.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Identify biases: <\/b><span style=\"font-weight: 400;\">Detect any inherent biases in the testing methodology or data.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><b>Establish baseline metrics:<\/b><span style=\"font-weight: 400;\"> Determine expected performance levels for future AB tests.<\/span><\/li>\n<\/ul>\n<h2><span style=\"font-weight: 400;\">Arguments against AA Testing<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">There are three main arguments against running AA testing:<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">1. 
Running an AA test is a waste of time and resources that you could use for something that generates better ROI<\/span><\/h4>\n<p><a href=\"https:\/\/conversionxl.com\/blog\/aa-testing-waste-time\/\"><span style=\"font-weight: 400;\">Craig Sullivan<\/span><\/a><span style=\"font-weight: 400;\">, one of the early CROs, doesn\u2019t recommend A\/A testing.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Not because he thinks it is wrong; rather,<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">\u201cMy experience tells me there are better ways to use your time when testing.\u00a0 The volume of tests you start is important, but even more so is how many you *finish* every month and how many from those that you *learn* something useful from.\u00a0 Running A\/A tests can eat into \u2018real\u2019 testing time.\u201d<\/span><\/i><\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">For Craig, the issue is not philosophical but practical. This makes total sense in an industry focused on delivering the most value for clients.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">While you could technically run an A\/A test in parallel with an A\/B test, doing so would make the process more statistically complex.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">It will take longer for the test to complete, and you\u2019ll have to discard your <\/span><a href=\"https:\/\/www.invespcro.com\/ab-testing\/\"><span style=\"font-weight: 400;\">A\/B test results<\/span><\/a><span style=\"font-weight: 400;\"> if the A\/A test shows that your tools aren\u2019t properly calibrated.<\/span><\/p>\n<h4><span style=\"font-weight: 400;\">2. Declaring a winner in an A\/A test does not tell you a lot<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">Uncertainty is inherent in any type of split or multivariate testing. 
The fact that an A\/B testing engine declares a winner with 99% confidence does not mean that you are certain that you found a true winner.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A statistical significance of 95% means that there is a 1 in 20 chance of seeing a difference like the one in your test purely by random chance, even when no real difference exists.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">As a matter of fact,\u00a0<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">\u201cAfter running thousands of A\/B tests and hundreds of A\/A tests, I expected to see different testing platforms regularly declare a winner in an A\/A test. I have seen this in Test &amp; Target, <\/span><\/i><a href=\"https:\/\/www.invespcro.com\/blog\/google-optimize-the-good-the-bad-and-the-ugly\/\"><i><span style=\"font-weight: 400;\">Google Website Optimizer<\/span><\/i><\/a><i><span style=\"font-weight: 400;\"> (while it lasted), Optimizely, and VWO.\u201d<\/span><\/i><\/p><\/blockquote>\n<h4><span style=\"font-weight: 400;\">3. 
A\/A tests require a large sample size to reach a conclusion<\/span><\/h4>\n<p><span style=\"font-weight: 400;\">The final argument against running A\/A tests is that they require a <\/span><a href=\"https:\/\/www.invespcro.com\/blog\/calculating-sample-size-for-an-ab-test\/\"><span style=\"font-weight: 400;\">large sample size<\/span><\/a><span style=\"font-weight: 400;\"> to prove that there is no significant bias.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here\u2019s a vivid example Qubit shared in their phenomenal white paper titled \u201c<\/span><a href=\"http:\/\/www.qubit.com\/sites\/default\/files\/pdf\/mostwinningabtestresultsareillusory_0.pdf\"><span style=\"font-weight: 400;\">Most Winning A\/B Test Results Are Illusory<\/span><\/a><span style=\"font-weight: 400;\">\u201d:<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">Imagine you are trying to determine whether there is a difference between men&#8217;s and women&#8217;s heights.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">If you measured only a single man and a single woman, you would risk not detecting the fact that men are taller than women.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">Why is this? Because random fluctuations mean you might choose an especially tall woman or an especially short man just by chance.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">However, if you measure 10,000 people, the average for men and women will eventually stabilize, and you will detect the difference between them. That\u2019s because statistical power increases with the size of your sample.<\/span><\/i><\/p>\n<h2><span style=\"font-weight: 400;\">The Unexpected Benefits of A\/A Testing in CRO<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">At Invesp, we run A\/A tests as part of our <\/span><a href=\"https:\/\/www.invespcro.com\/services\/\"><span style=\"font-weight: 400;\">CRO services<\/span><\/a><span style=\"font-weight: 400;\">. 
We typically run these tests for the first 1-2 weeks at the start of the project, and then every 4 to 6 months, as we gather different data on the website and its customers.<\/span><\/p>\n<p><b>Our rationale:<\/b><\/p>\n<h3><span style=\"font-weight: 400;\">1. We want to benchmark the performance of different pages or funnels on the website<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">How many visitors or conversions come to the homepage, cart page, product page, etc.?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When we do that, we are not worried about whether we are going to find a winner; we are looking for general trends for a particular page.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">These tests help us understand questions such as: What is the macro conversion rate for the home page?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">How does that conversion rate break down between different visitor segments?\u00a0 How does that conversion rate break down between different device segments?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A\/A tests provide us with a baseline that we can examine when preparing new tests for any part of the website.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Can we get the same data from the analytics platforms on the website? Yes, and no!<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Since our A\/B testing tool is ultimately the one used to declare a winner (even while sending data to Google Analytics or doing external calculations), we still want to see the website metrics as that tool reports them.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">2. 
We decide on a minimum sample size and expected time to run a test<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Determining the required sample size is very <\/span><a href=\"https:\/\/www.invespcro.com\/blog\/ab-testing-statistics-made-simple\/\"><span style=\"font-weight: 400;\">important<\/span><\/a><span style=\"font-weight: 400;\"> for an A\/B test.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If the sample size is too small, little information can be obtained from the test to draw meaningful conclusions.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">On the other hand, if it is too large, the test will collect more information than needed, wasting time and money.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When we conduct an A\/A test for different areas of the funnel, we look closely at the number of visitors the A\/B testing platform is capturing, the number of conversions, conversion rates, etc.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This data helps us determine the minimum sample size required to run an A\/B test on a particular website funnel and how long we need to run our regular A\/B tests.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">3. We want to get a general sense of how long it takes to deploy the simplest, most straightforward A\/B test on the website<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">You have to agree that an A\/A test is the easiest and fastest test you can deploy on a website. It is amazing how many technical challenges appear when you run a simple A\/A test. This is especially the case when the client is just starting with a <\/span><a href=\"https:\/\/www.invespcro.com\/cro\/\"><span style=\"font-weight: 400;\">CRO<\/span><\/a><span style=\"font-weight: 400;\"> project.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They have never deployed a test on their website. 
The more complicated the technical architecture of the client website, the more helpful AA tests will be in identifying possible technical issues before we launch the actual program.\u00a0<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Scripts are not installed correctly, GTM needs to be configured to capture additional data, there are issues around third-party conversions, and the list goes on.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">4. Never trust the machine: check the accuracy of the A\/B testing tool<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">Before running A\/B tests, it\u2019s important to make sure your tools are configured correctly and working the way they should.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Running these tests helps us check the accuracy of the A\/B testing tools we\u2019re using.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">What\u2019s more?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Companies that are about to purchase an A\/B testing tool or want to switch to new testing software may run an A\/A test to ensure the new software works fine and to see if it has been set up properly.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">I recall one project where all tests run by the client on a particular testing platform ended up with a loss. All 170 tests. Mind you, I am used to running tests that do not generate any improvement. 
But running 170 tests with no result is unusual.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">When we switched the client to another platform and re-ran some of the ten most promising tests, 6 of them produced a winner with 99% confidence.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Chad Sanderson, co-founder and CEO of <\/span><a href=\"https:\/\/www.gable.ai\/\"><span style=\"font-weight: 400;\">Gable.ai<\/span><\/a><span style=\"font-weight: 400;\">, had great insights into this:<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">It isn\u2019t wise to downplay the danger of system errors. Most A\/B testing solutions use slightly different algorithms that may or may not result in major discrepancies the harder the program is pushed (Think 10 \u2013 20 \u2013 30 variants). This might seem like an outlier issue, but it also might indicate a deeper underlying problem with either A.) the math\u00a0 B.) the randomization mechanism, or C.) the browser cookie. Tools break (quite often, actually), and putting blind trust in any other product is asking for trouble.<\/span><\/i><\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">We have to admit that doubting the reliability of your AB testing software is a scary thought. If you are using that AB testing software to determine the winner of your tests, and then you question the reliability of the software, you are effectively questioning your whole program.<\/span><\/p>\n<p><a href=\"https:\/\/twitter.com\/chadjsanderson?lang=en\"><span style=\"font-weight: 400;\">Chad Sanderson<\/span><\/a><span style=\"font-weight: 400;\"> adds:<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">If a program doesn\u2019t generate an overwhelming amount of type I errors (95% confidence), it doesn\u2019t mean it still can\u2019t be flawed. 
Thanks to the statistical mechanisms behind A\/A tests (P Values are distributed uniformly under the null hypothesis), we can analyze test data the same way we\u2019d determine whether or not a coin is fair or weighted: by examining the likelihood of observing a certain set of outcomes.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">For example, after flipping a fair coin 10 times, we could expect to see 10 heads in a row only once out of 1024 attempts (50\/50 chance per flip). In the same way, if we run a 10 variant A\/A test and see all 10 values have a p-value over .5, the probability of this happening would be the same (50\/50 chance per test). Without going too deep into Bayesian statistics, the next step would be to ask yourself if it\u2019s more likely that you observed a rare result on your first attempt or that something is wrong with the tool.<\/span><\/i><\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">If you\u2019re a CRO, let me suggest a new idea. Do not rely on your A\/B testing software to declare winners for the next month. Send your testing data to Google Analytics, pull the numbers for each variation from analytics, and do the analysis yourself!<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Considerations when conducting A\/A testing<\/span><\/h2>\n<h3><span style=\"font-weight: 400;\">1. 
What should you do if your A\/A test shows a winner?<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">When running an A\/A test, it\u2019s important to remember that finding a <\/span><a href=\"https:\/\/www.invespcro.com\/cro\/\"><span style=\"font-weight: 400;\">difference in conversion rate<\/span><\/a><span style=\"font-weight: 400;\"> between two identical versions is always possible.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">This doesn\u2019t necessarily mean the A\/B testing platform is inefficient or poor, as there is always an element of randomness in testing.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Now, what should you do if you find a winner?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Let me give you an example posted by <\/span><a href=\"https:\/\/www.linkedin.com\/in\/chad-sanderson\/\"><span style=\"font-weight: 400;\">Chad Sanderson <\/span><\/a><span style=\"font-weight: 400;\">using Adobe Target, running 10 variants, all default vs. default.<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">The period being looked at here was 3 weeks (note the number of orders on the left and confidence to the right) on a Desktop \u2013 New Visitors \u2013 Cart segment. Running these through a one-tailed statistical calculator, even with a strict family error correction, still yields 7 significant results.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">If you used a Bayesian interpretation, it would be 10\/10. All variants had more than enough power. Yikes.<\/span><\/i><\/p><\/blockquote>\n<h3><span style=\"font-weight: 400;\">2. If you perform A\/A tests and the original wins regularly<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">There is a good chance that challengers are losing due to performance issues (it takes a moment to load a variation, and that delay works in favor of the original).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Therefore, evaluate your testing platform. 
We have seen this happen more with some testing platforms than others.<\/span><\/p>\n<h3><span style=\"font-weight: 400;\">3. If you perform A\/A tests regularly and one of the variations wins:<\/span><\/h3>\n<p><span style=\"font-weight: 400;\">It is a fact of life! In my experience, testing engines will declare a winner in 50% to 70% of AA tests (95% to 99% significance).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">But then, look at the kind of uplifts and drops you&#8217;re seeing: if one of the variations regularly wins by 3-4%, that could be a signal that you should aim for uplifts higher than that when running your tests.<\/span><\/p>\n<p><a href=\"https:\/\/www.linkedin.com\/in\/chad-sanderson\/\"><span style=\"font-weight: 400;\">Chad Sanderson<\/span><\/a><span style=\"font-weight: 400;\"> adds a great point:<\/span><\/p>\n<blockquote><p><i><span style=\"font-weight: 400;\">Another great use of A\/A Tests is as physical evidence for or against testing a page or element. Let\u2019s say you\u2019ve run an A\/A test and, after one month, have observed a difference between means that is still large, perhaps over 20%. While you could determine this just as easily from a sample size calculation, presenting numbers without context to stakeholders who REALLY want to run tests on that page may not get the job done.<\/span><\/i><\/p>\n<p><i><span style=\"font-weight: 400;\">It\u2019s far more effective to show someone the actual numbers. 
If they see that the difference between two variations of the same element is drastic even after a month, it\u2019s far easier to understand why observing test results on such a page would mean only a result far greater than what was observed would be possible.<\/span><\/i><\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">In a setup like this, you\u2019ll have the control, the challenger, and an identical copy of the control.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">Final Thoughts:<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Whether you believe in A\/A testing or not, you should always run a winner from an A\/B test against the original in a head-to-head test to validate wins.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Here\u2019s an example:<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You run an A\/B test with 4 challengers against the original. V2 wins with a 5% uplift, so you create a new test with the original against V2 in a head-to-head test.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Done correctly, A\/A testing can help prepare you for a successful AB testing program by providing data benchmarks on different areas of the website and checking for any discrepancies in your data, say, between the number of visitors you see in your testing tool and your web analytics tool.<\/span><\/p>\n<p><i><span style=\"font-weight: 400;\">What\u2019s your perception of A\/A tests? Are you running them on your website? 
I\u2019d love to hear your thoughts, so let me know in the comment section below!<\/span><\/i><\/p>\n<h2><span style=\"font-weight: 400;\">Additional Resources<\/span><\/h2>\n<ol>\n<li><a href=\"https:\/\/www.invespcro.com\/cro\/\"><span style=\"font-weight: 400;\">What Is Conversion Rate Optimization?<\/span><\/a><\/li>\n<li><a href=\"https:\/\/www.invespcro.com\/ab-testing\/\"><span style=\"font-weight: 400;\">What is AB Testing?<\/span><\/a><\/li>\n<li><a href=\"https:\/\/www.invespcro.com\/blog\/what-is-multivariate-testing\/\"><span style=\"font-weight: 400;\">What Is Multivariate Testing?<\/span><\/a><\/li>\n<li><a href=\"https:\/\/www.invespcro.com\/ab-testing\/tools\/\"><span style=\"font-weight: 400;\">What Are The Features Of A Good AB Testing Tool?<\/span><\/a><\/li>\n<\/ol>\n","protected":false},"excerpt":{"rendered":"<p><span class=\"span-reading-time rt-reading-time\" style=\"display: block;\"><span class=\"rt-label rt-prefix\">Reading Time: <\/span> <span class=\"rt-time\"> 10<\/span> <span class=\"rt-label rt-postfix\">minutes<\/span><\/span>Did you know that 80% to 90% of A\/B tests do not produce a statistically significant result? Only 1 out of 8 A\/B tests shows a significant difference. When done incorrectly, A\/B testing leads some marketers to question its value\u2014their A\/B test reports an uplift of 20%. 
Yet, the increase reported by the AB testing [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":10101,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[116,36],"tags":[357,87,109],"class_list":["post-10100","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ab-testing","category-cro","tag-advanced","tag-general","tag-resource"],"_links":{"self":[{"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/posts\/10100","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/comments?post=10100"}],"version-history":[{"count":2,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/posts\/10100\/revisions"}],"predecessor-version":[{"id":98788,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/posts\/10100\/revisions\/98788"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/media\/10101"}],"wp:attachment":[{"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/media?parent=10100"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/categories?post=10100"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.invespcro.com\/blog\/wp-json\/wp\/v2\/tags?post=10100"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}