Spotting patterns: the difference between making and losing money in A/B testing

Kyle Hearnshaw

Wrongly interpreting the patterns in your A/B test results can lose you money. It can lead you to make changes to your site that actually harm your conversion rate.

Correctly interpreting the patterns in your A/B test results will mean you learn more from each test you run. It will give you confidence that you are only implementing changes that will deliver real revenue impact, and it will help you turn any losing tests into future winners.

At Conversion.com we’ve run and analyzed hundreds of A/B and multivariate tests. In our experience, the result of a test will generally fall into one of five distinct patterns. We’re going to share these five patterns here, and we’ll tell you what each pattern means in terms of what steps you should take next. Learn to spot these patterns, follow our advice on how to interpret them, and you’ll make the right decision more often, making your testing efforts more successful.

To illustrate each of the patterns, we’ll imagine we have run an A/B test on an e-commerce site’s product page and are now looking at the results. We’ll be looking at the increase/decrease in conversion rate that the new version of this page delivered compared to the original page. We’ll measure this on a page-by-page basis across the four steps of the checkout process the visitor goes through to complete their purchase (Basket, Checkout, Payment and finally Order Confirmation).

To see the pattern in our results in each case, we’ll plot a simple graph of the conversion rate increase/decrease to each page. We’ll then look at how this increase/decrease in conversion rate has changed as we move through our site’s checkout funnel.
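To make the mechanics concrete, here’s a minimal Python sketch of that per-step calculation. All of the visitor counts in it are hypothetical, invented purely for illustration; you would substitute the numbers from your own analytics or testing tool.

```python
# A sketch of the per-step analysis described above. All visitor counts
# are hypothetical, invented purely for illustration; substitute the
# numbers from your own analytics or testing tool.

FUNNEL_STEPS = ["Basket", "Checkout", "Payment", "Order Confirmation"]

# Visitors reaching each step, assuming an even traffic split between
# the original page (control) and the new version (variation).
control = {"Basket": 1000, "Checkout": 620, "Payment": 430, "Order Confirmation": 310}
variation = {"Basket": 1060, "Checkout": 655, "Payment": 450, "Order Confirmation": 324}

def relative_change(control_count, variation_count):
    """Percentage increase/decrease of the variation relative to control."""
    return (variation_count - control_count) / control_count * 100

# This is the series you would plot for each of the patterns below.
for step in FUNNEL_STEPS:
    print(f"{step}: {relative_change(control[step], variation[step]):+.1f}%")
```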

1. The big winner

This is the type of test result we all love. Your new version of a page converts x% more visitors to the next step than the original, and this x% increase continues uniformly all the way to Order Confirmation.

The graph of our first result pattern would look like this.

The big winner

We see 10% more visitors reaching each step of the funnel.

Interpretation

This pattern is telling us that the new version of the test page successfully encourages 10% more visitors to reach the next step, and from there onwards they convert just as well as visitors to the original page. The overall result would be a 10% increase in sales. The logical next step is to implement this new version permanently.

2. The big loser

The negative counterpart of the big winner, where each step shows a roughly equal decrease in conversion rate, is a sign that the change you made has had a clear negative impact. All is not lost, though: an unsuccessful test can often be more insightful than a straightforward winner, as the negative result forces you to re-evaluate your initial hypothesis and understand what went wrong. You may have stumbled upon a key conversion barrier for your audience, and addressing this barrier in the next test could lead to the positive result you have been looking for.

Graphically this pattern will look like this.

The big loser

We see 10% fewer visitors reaching each step of the funnel.

Interpretation

As the opposite of the big winner, this pattern is telling us that the new version of the test page causes 10% fewer visitors to reach the next step, and from there onwards they convert just as well as visitors to the original page. The overall result would be a 10% decrease in sales. You would not want to implement this new version of the page.

3. The clickbait

“We increased click-throughs by 307%!” You’ve probably seen sensational headlines like this thrown around in the optimization industry. Hopefully, like us, you’ve developed a strong sense of cynicism when you read results like these. The first question I always ask is “But how much did sales increase by?” Chances are, if the reported result fails to mention the impact on final sales, then what they actually saw in their test was the pattern we’ve affectionately dubbed “the clickbait”.

Test results that follow this pattern show a large increase in the conversion rate to the next step, but the improvement quickly fades away in the later steps, until finally there is little or no improvement to Order Confirmation.

Graphically this pattern will look like this.

The clickbait

Interpretation

This pattern catches people out, as the large improvement to the next step feels as if it should be a positive result. However, often this pattern merely shows that the new version of the page is pushing a large number of visitors through to the next step who have no real intention of purchasing. This is illustrated by the sudden large drop in the conversion rate improvement at the later steps, when all of the unqualified extra traffic abandons the funnel.

As with all tests, whether this result can be deemed a success depends on the specifics of the site you are testing on and what you are looking to achieve. If there are clear improvements to be made on the next step(s) of the funnel that could help to convert the extra traffic from this test, then it could make sense to address those issues first and then re-run this test. However, if these extra visitors are clicking through by mistake or because they are being misled in any way then you may find it difficult to convert them later no matter what changes you make. Instead, you could be alienating potential customers by delivering a poor customer experience. You’ll also be adding a lot of noise to the data of any tests you run on the later pages as there are a lot of extra visitors on those pages who are unlikely to ever purchase.

4. The qualifying change

This fourth pattern is almost the reverse of the third, in that here we actually see a drop in conversion to the next step but an overall increase in conversion to Order Confirmation.

Graphically this pattern looks like this.

The qualifying change

Interpretation

Taking this pattern as a positive can seem counter-intuitive because of the initial drop in conversion to the next step. Arguably, though, this type of result is as good as, if not better than, the big winner from pattern 1. Here the new version of the test page is having what’s known as a qualifying effect: visitors who would otherwise have abandoned at a later step in the funnel are leaving at the first step instead. Those visitors who do continue past the test page, on the other hand, are more qualified and therefore convert at a much higher rate. This explains the positive result to Order Confirmation.

Implementing a change that causes this type of pattern means visitors remaining in the funnel now have expressed a clearer desire to purchase. If visitors are still abandoning at a later stage in the funnel, the likelihood now is that this is being caused by a specific weakness on one of those pages. Having removed a lot of the noise from our data, in the form of the unqualified visitors, we are left with a much more reliable measure of the effectiveness of the later steps in the funnel. This means identifying weaknesses in the funnel itself will be far easier.

As with the clickbait, there are circumstances where a result like this may not be preferable. If you already have very low traffic in your funnel, then reducing it further could make it even more difficult to get statistically significant results when testing the later pages of the funnel. You may want to run tests that drive more traffic to the start of your funnel before implementing a change like this.

5. The messy result

This final pattern is often the most difficult to extract insight from as it describes results that show very little pattern whatsoever. Here we often see both increases and decreases in conversion rate to the various steps in the funnel.

The messy result

Interpretation

First and foremost, the lack of a discernible pattern in your split-test results can be a tell-tale sign of insufficient data. In the early stages of an experiment, when data levels are low, it is not uncommon to see results fluctuating up and down, and reading too much into them at this stage is a common pitfall. Resist the temptation to check your experiment results too frequently, if at all, in the first few days. Even apparently strong patterns that emerge at these early stages can quickly disappear with a larger sample.
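If you want a quick sanity check before reading anything into a result, a two-proportion z-test on a single funnel step can tell you whether the difference you’re seeing is distinguishable from random noise. Here’s a minimal sketch in Python; the counts are hypothetical and it assumes the statsmodels package is installed.

```python
# A sanity check on a single funnel step: a two-proportion z-test.
# Counts are hypothetical; requires the statsmodels package.
from statsmodels.stats.proportion import proportions_ztest

orders = [300, 318]        # control vs variation: visitors reaching Order Confirmation
visitors = [10000, 10000]  # visitors entering each side of the test

z_stat, p_value = proportions_ztest(count=orders, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

# A large p-value (e.g. above 0.05) means the apparent lift could easily
# be random fluctuation, so wait for more data before trusting the pattern.
```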

If your test has a large volume of data and you’re still seeing this type of result, then the likelihood is that your new version of the page is delivering a combination of the effects from the clickbait and the qualifying change patterns: qualifying some traffic while simultaneously pushing more unqualified traffic through the funnel. If your test involved making multiple changes to a page, try testing the changes separately to pinpoint which individual changes are causing the positive impact and which are causing the negative impact.

Key takeaways

The key point to take from all of these patterns is the importance of tracking and analyzing the results at every step of your funnel when you A/B test, rather than just the next step after your test page. It is easy to see how, if only the next step were tracked, many tests could be falsely declared winners or losers. In short, that mistake is losing you money.

Detailed test tracking will allow you to pinpoint the exact step in your funnel at which visitors are abandoning, and how that differs for each variation of the page you are testing. This can help to answer the more important question of why they are abandoning. If the answer is not obvious, running some user tests or watching recorded user sessions of your test variations can help you to develop these insights and come up with a successful follow-up test.
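As an illustration of what that detailed tracking makes possible, the sketch below compares step-to-step conversion rates within each variation to flag where the funnel leaks. As before, the counts are hypothetical.

```python
# A sketch of locating the leak: step-to-step conversion rates within
# each variation. Counts are hypothetical; a sharp drop at one
# transition in one variation points at the page to investigate.

FUNNEL_STEPS = ["Basket", "Checkout", "Payment", "Order Confirmation"]

visitors_per_step = {
    "control":   [1000, 620, 430, 310],
    "variation": [1000, 680, 435, 312],
}

for name, steps in visitors_per_step.items():
    print(name)
    for i in range(1, len(steps)):
        rate = steps[i] / steps[i - 1] * 100
        print(f"  {FUNNEL_STEPS[i - 1]} -> {FUNNEL_STEPS[i]}: {rate:.1f}%")
```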

There is a lot more to analyzing A/B tests than just reading off a conversion rate increase to any single step in your funnel. Often, the pattern of the results can reveal greater insights than the individual numbers. Avoid jumping to conclusions based on a single increase or decrease in conversion to the next step and always track right the way through to the end of your funnel when running tests. Next time you go to analyze a test result, see which of these patterns it matches and consider the implications for your site.
