
The Perception Gap: Can we ever really know what users want?

Have you ever heard of Mazagran? A coffee-flavoured bottled soda that Starbucks and Pepsi launched back in the mid-1990s? No, you haven’t, and there is a good reason for that!

Starbucks had collected market research telling them that customers wanted a cold, sweet, bottled coffee beverage they could conveniently purchase in stores.

So surely Mazagran was the answer?

Evidently not! Mazagran was not what consumers actually wanted. The failure of this product came down to the asymmetry between what customers wanted and what Starbucks believed they wanted.

Despite Starbucks conducting market research, a gap in communication, often known as the perception gap, still occurred. Luckily for Starbucks, Mazagran was a stepping stone to the huge success that came with bottled Frappuccinos: what consumers actually wanted.

What is the perception gap and why does it occur?

Perception is seen as the (active) process of assessing information in your surroundings. A perception gap occurs when you attempt to communicate this assessment of information but it is misunderstood by your audience.

How you assess information in your surroundings is strongly influenced by communication. Because humans communicate in different ways, a perception gap can occur when someone's communication style differs from your own. These gaps also vary in size, depending on the value that you, or your customers, attach to each factor. In addition, natural cognitive biases can widen the perception gap by leading us to believe we know what other people are thinking more than we actually do.

Perception gaps in ecommerce businesses

Perception gaps mainly occur in social situations, but they can also heavily impact ecommerce businesses, from branding and product to marketing and online experience.

Perception gaps within ecommerce mainly appear because customers form opinions about your company and products based on their broader experiences and beliefs. One thing is for sure: perception gaps certainly occur between websites and their online users. Unfortunately, they are often the start of vicious cycles, where small misinterpretations of what the customer wants or needs are made worse when we try to fix them. Ultimately, this means we lose out on turning visitors into customers.

Starbucks and Pepsi launching Mazagran was an example of how perception gaps can lead to the failure of new products. McDonald's launching their “Good to Know” campaign is an example of how understanding this perception gap can lead to branding success.

This myth-busting campaign was launched off the back of comprehensive market research using multiple techniques. McDonald's understood the difference between what they thought of themselves, e.g. fast food made with high-quality ingredients, and what potential customers thought of McDonald's, e.g. chicken nuggets made of chicken beaks and feet. Understanding that this perception gap existed allowed them to address these misconceptions in their campaign, which has successfully changed users' perceptions of their brand.

For most digital practices, research plays an important part in allowing a company or brand to understand their customer base. However, conducting and analysing research is often where the perception gap begins to form.

For example, say you are optimising a checkout flow for a retailer. You decide to run an on-site survey to gather insight into why users may not be completing the forms and therefore not purchasing. After analysing the results, it seems the top reason users are not converting is that they find the web form confusing. Now this is where the perception gap is likely to form. Do users want the form to be shortened? Do they want more clarity or explanation around form fields? Is it the delivery options that they may not understand?

Not being the user means we will never fully understand the situation the user is in. Making assumptions about it only widens the perception gap.

Therefore, reducing the perception gap is surely a no-brainer when it comes to optimising our websites. But is it as easy as it seems? 

In order to reduce the perception gap you need to truly understand your customer base. If you don’t, then there is always going to be an asymmetry between what you know about your customers and what you think you know about your customers.

How to reduce perception gaps

Sadly, perception gaps are always going to exist due to our interpretation of the insights we collect and the fact that we ourselves are not the actual user. However, the following tips may help you get the most out of your testing and optimisation by reducing the perception gap:

  1. Challenge assumptions – too often we assume we know our customers, how they interact with our site and what they are thinking. Unfortunately, these assumptions can get cemented over time into deeply held beliefs about how users think and behave. Challenging these assumptions leads to true innovation and new ideas that may not have been thought of before. With this in mind, assumptions can be tested by the research we conduct.
  2. Always optimise based on two supporting sources of evidence – the perception gap is more likely to occur when research into a focus area is limited or based on a single source of insight. Taking a multiple-measure approach means insights are likely to be more valid and reliable.
  3. Read between the lines – research revolves around listening to your customers but more importantly it is about reading between the lines. It is the difference between asking for their responses and then actually understanding them. As Steve Jobs once said “Customers don’t know what they want”; whether you believe that or not, understanding their preferences is still vital for closing the perception gap.
  4. Shift focus to being customer-led – being customer-led, as opposed to product-led, places a higher value on researching your customers. With more emphasis on research, this should lead to greater knowledge and understanding of your customer base, which in turn should reduce the perception gap that has the potential to form.


The perception gap is always going to exist and is something we have to accept. Conducting research, and a lot of it, is certainly a great way to reduce the perception gap that will naturally occur. However, experimentation is really the only way to confirm whether the research and insight you collected about your customer base are valid and significantly improve the user experience. One quote that has always made me think is from Flint McGlaughlin, who said “we don't optimise web pages, we optimise for the sequence of thought”. Taking this customer-led view of experimentation can only result in success.

5 steps to kick-start your experimentation programme with actionable insights

Experimentation has to be data-driven.

So why are businesses still kicking off their experimentation programmes without good data? We all know running experiments on gut-feel and instinct is only going to get you so far.

One problem is the ever-growing number of research methods and user-research tools out there. Prioritising what research to conduct is difficult, especially when you are trying to maximise the success of your initial experiments and need to get those experiments out of the door quickly to show ROI.

We are no strangers to this problem. And the solution, as ever, is to take a more strategic approach to how we generate our insight. We start every project with what we call the strategic insights phase: a structured, repeatable approach to planning user research that we have developed, which consistently generates the most actionable insight whilst minimising effort.

This article provides a step-by-step guide to how we plan our research strategy so that you can replicate something similar yourself and set your future experiments up for greater success.

The start of an experimentation programme is crucial. The pressure of getting stakeholder buy-in or achieving quick ROI means the initial experiments are often the most important. A solid foundation of actionable insight from user research can make a big difference to how successful your early experiments are.

With hundreds of research tools enabling many different research methods, the challenge is choosing which method will generate the most impactful and actionable insight. Formulating a research strategy for how you're going to generate your insight is therefore crucial.

When onboarding new clients, we run an intense research phase for the first month. This allows us to get up to speed on the client's business and customers. More importantly, it provides us with data to start building our experimentation framework – identifying where experimentation can make the most impact and what it should focus on. We find dedicating this time to insights sets our future experiments up for bigger wins and therefore a rapid return on investment.

Our approach: Question-led insights

When conducting research to generate insight, we use what we call a question-led approach. Any piece of research we conduct must have the goal of answering a specific question. We identify the questions we need to answer about a client’s business and their website and then conduct only the research we need to answer them. Taking this approach allows us to be efficient, gaining impactful and actionable insights that can drive our experimentation programme.

Following a question-led approach also means we don’t fall into the common pitfalls of user-research:

  • Conducting research for the sake of it
  • Wasting time down rabbit holes within our data or analytics
  • Not getting the actionable insight you need to inform experimentation

There are 5 steps in our question-led approach.

1. Identify what questions you need, or want, to answer about your business, customers or website

The majority of businesses still have questions about their customers that they don't have the answers to. Listing these questions provides a brain-dump of everything you don't know but that, if you did know, would help you design better experiments. Typically these questions will fall into three main categories: your business, your customers and your website.

Although one size does not fit all, we have provided some of the typical questions that we need to answer for clients in ecommerce or SaaS.

SaaS questions:

  • What is the current trial-to-purchase conversion rate?
  • What motivates users on the trial to make a purchase? What prevents users on the trial from making a purchase?
  • What is the distribution between the different plans on offer?
  • What emails are sent to users during their trial? What is the life cycle of these emails?
  • What are the most common questions asked to customer services or via live chat?

We typically end up with a list of 20-30 questions, so the next step is to prioritise what we need to answer first.

2. Prioritise what questions need answering first

We want our initial experiments to be as data-driven and successful as possible. Therefore, we need to tackle the questions that are likely to bring about the most impactful and actionable insights first.

For example, a question like “What elements in the navigation are users interacting with the most?” might be a ‘nice to know’. However, if we don't expect to run a navigation experiment any time soon, this may not be a ‘need to know’ and therefore wouldn't be high priority. On the other hand, a question like “What's stopping users from adding products to the basket?” is almost certainly a ‘need to know’. Answering this is very likely to generate insight that can be directly turned into an experiment. The rule of thumb is to prioritise the ‘need to know’ questions ahead of the ‘nice to know’.

We also need to get the actionable insight quickly. Therefore, it is important to ensure that we prioritise questions that aren’t too difficult or time consuming to answer. So, a second ranking of ‘ease’ can also help to prioritise our list.
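As a minimal sketch, this two-axis prioritisation can be expressed as a simple sort. The question list, the 1-3 scores and the function name here are all hypothetical, purely for illustration:

```javascript
// Hypothetical sketch: rank research questions by 'need to know' first,
// then by ease of answering. Scores are illustrative (3 = high, 1 = low).
const questions = [
  { text: "What's stopping users from adding products to the basket?", need: 3, ease: 2 },
  { text: "What elements in the navigation are users interacting with the most?", need: 1, ease: 3 },
  { text: "What motivates users on the trial to make a purchase?", need: 3, ease: 1 },
];

// Sort descending by 'need'; break ties with descending 'ease'.
function prioritise(list) {
  return [...list].sort((a, b) => (b.need - a.need) || (b.ease - a.ease));
}

const ranked = prioritise(questions);
```

The basket question ends up first: it is both a ‘need to know’ and relatively easy to answer.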

3. Decide the most efficient research techniques to answer these questions

There are many types of research you could use to answer your questions. Typically we find the majority of questions can be answered by one or more of web analytics, on-site or email surveys, usability testing or heatmaps/scrollmaps. There may be more than one way to find your answer.

However, one research method could also answer multiple questions. For example, one round of usability testing might be able to answer multiple questions focused on why a user could be dropping off at various stages of your website. This piece of research would therefore be more impactful, as you are answering multiple questions, and would be more time efficient compared to conducting multiple different types of research.

For each question in our now prioritised list we decide the research method most likely to answer it. If there are multiple options you could rank these by the most likely to get an answer in the shortest time. In some cases we may feel the question was not sufficiently answered by the first research method, so it can be helpful to consider what you would do next in these cases.

4. Plan the pieces of research you will carry out to cover the most questions

You should now have a list of prioritised questions you want to answer and the research method you would use to answer each. From this you can select the pieces of research to carry out based on which would give you the best coverage of the most important questions. For example, you might see that 5 of your top 10 questions could be answered through usability testing. Therefore, you should prioritise usability testing in the time you have, and the questions you need to answer can help you design your set of tasks.

After your first round of research, revisit your list of questions and for each question evaluate whether or not you feel it has been sufficiently answered. Your research may also have generated more questions that should be added to the list. Periodically you might also need to re-answer questions where user behaviour has changed due to your experimentation. For example, if initially users were abandoning on your basket page due to a lack of trust, but successful experiments have fixed this, then you may need to re-ask the question to discover new problems on the basket page.

On a regular basis you can then repeat this process: prioritising the questions, deciding the best research methods and planning your next set of research.

5. Feed these insights into your experimentation strategy

Once your initial pieces of research have been conducted and analysed, it is important to compile the insight from them in one place. This has two benefits. First, it makes it easier to visualise and discover themes that may be emerging across multiple sources of insight. Second, it gives you a single source of information that can be shared with others in your business.

As your experimentation programme matures it is likely you will be continuously running research in parallel to your experiments. The insight from this research will answer new questions that will naturally arise and can help inform your experimentation.

Taking this question-led approach means you can be efficient with the time you spend on research, while still maximising your impact. Following our step-by-step guide will provide a solid foundation that you can work upon within your business:

  1. Identify what questions you need, or want, to answer about your business, customers or website
  2. Prioritise what questions need answering first
  3. Decide the most efficient research techniques to answer these questions
  4. Plan the pieces of research you will carry out to cover the most questions
  5. Feed these insights into your experimentation strategy

For more information on how to kick-start experimentation within your business, get in touch here.

Who, When and Where, but what about the Why? Understanding the value of Qualitative Insights: On-site Surveys

Within our data-driven industry, many gravitate towards relying heavily on quantitative data to guide and inform experimentation. With Google Analytics and the metrics it measures (e.g. conversion rate, bounce rate and exit rate) often dominating our focus, many undervalue or forget about the other research we can run.

Numbers, rates and statistics are great for finding out what's happening on your site and where the opportunities for testing lie. However, quantitative insights can only take us so far within conversion rate optimisation. Understanding where to target our tests for the best impact is necessary, but it does not tell us what our tests should entail. This is where qualitative research takes centre stage.

Qualitative research provides us with insight into the “why” behind quantitative research. It provides a deeper understanding into your visitors and customers, which is vital to understanding why they behave and engage with your site in a particular way. Conducting qualitative research is ideal for discovery and exploration and a great way of generating insights which can be used to guide future experimentation.

In this series, we will cover the qualitative research we run for our clients, including case studies of when qualitative research has helped with our tests and some of our best practices and tips!

On-site Surveys

By on-site surveys we are referring to targeted pop-up surveys which appear on websites, asking users either a single question or a series of questions to gather insights. Hotjar and Qualaroo are two of our favourite data collection tools for this type of insight.

On-site surveys let you target questions to any visitor, or subset of visitors, on any page within your website. These surveys can be prompted to appear in a number of ways: after a certain time has elapsed, on specific user behaviour such as a clicked element or exit intent, or via custom triggers using JavaScript. Understanding the behaviour and intent of website visitors allows us to interpret more effectively the motivations and barriers they may face on your site. These insights can then guide our data-driven tests, which aim to emphasise the motivations whilst eliminating the barriers.
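Tools like Hotjar and Qualaroo expose these triggers through their own configuration UIs, but the underlying logic can be sketched in a few lines. The function names and the pixel threshold below are illustrative, not the API of any particular tool:

```javascript
// Time-elapsed trigger: show the survey after `delayMs` on the page.
function onTimeElapsed(delayMs, showSurvey) {
  return setTimeout(showSurvey, delayMs);
}

// Exit-intent check: the cursor moving to the very top of the viewport is
// a common proxy for the user reaching for the close button or URL bar.
function isExitIntent(mouseEvent, threshold = 10) {
  return mouseEvent.clientY <= threshold;
}

// Custom behaviour trigger: show the survey when a specific element is
// clicked (fires at most once).
function onElementClick(selector, showSurvey) {
  document.querySelector(selector)
    .addEventListener("click", showSurvey, { once: true });
}
```

In practice `isExitIntent` would be wired to a `mouseout` or `mousemove` listener on `document`; the 10px threshold is an assumption you would tune.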

On-site surveys have many benefits: they are non-intrusive, immediate in collecting data, and anonymous, which allows for higher ‘ecological’ validity of responses. Most importantly, they gather real-time feedback from users ‘in the moment’, while they are engaging with your site.

Don’t underestimate the power of an exit survey. An exit survey is triggered when a user shows intent to leave a website, for example, when they move their cursor towards the top of the page. Exit surveys are a non-intrusive qualitative method that provides crucial insight into why visitors are not converting or why your website may have a high exit rate. They often outperform other website surveys in terms of response rates because they minimise the annoyance a survey causes, especially when the user was already planning on leaving the site.

But what questions should you be asking in these on-site surveys? Well, that really depends on what you want to get out of this piece of insight. Below are a few examples of the types of questions you can ask:

  • Investigating user intent and bounce rate
    • Why did you come to this site today?
    • What were you hoping to find on this page?
    • Did this page contain the information you were looking for?
  • Understanding usability challenges
    • Were you able to complete your task today? (If yes, why? If no, why not?)
    • Is there anything on the site that doesn’t work the way you expected it to?
  • Uncovering missing content
    • What additional information would you like to see on this page?
    • Did this article answer your question?
  • Identifying potential motivations
    • What persuaded you to purchase from us?
    • What convinced you to use us rather than a competitor?
    • What was the one thing that influenced you to complete your task/purchase?
  • Identifying potential barriers
    • What is preventing you from completing your task?
    • What is stopping you from completing your checkout?  
    • What concerns do you have about purchasing from us?

When launching a survey, it can be difficult to know how long to run it for or how many responses you actually need. Large sample sizes matter when collecting quantitative data; here, however, we are more concerned with gaining an in-depth understanding of your users while looking for ideas and inspiration. Therefore we look for thematic saturation, the point at which the data is providing no significant new information, instead of a large sample size. For more information about the sample size required to run an on-site survey and how many responses are necessary, check out our article about on-site surveys and thematic saturation.
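Thematic saturation can be sketched as a simple check over batches of coded responses: once recent batches stop surfacing themes you haven't already seen, you can stop collecting. The theme coding itself is manual; this helper and its names are hypothetical:

```javascript
// Each batch is an array of theme labels coded from a group of responses.
// We consider the survey saturated once `quietBatches` consecutive batches
// have added no new themes to the running set.
function isSaturated(batches, quietBatches = 2) {
  const seen = new Set();
  let quiet = 0;
  for (const batch of batches) {
    const before = seen.size;
    batch.forEach((theme) => seen.add(theme));
    quiet = seen.size === before ? quiet + 1 : 0;
  }
  return quiet >= quietBatches;
}
```

So if two batches in a row only repeat themes like "price" and "trust", the check passes; if every batch still introduces a new theme, keep the survey running.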

We ensure we are continuously collecting both qualitative and quantitative insights for our clients. On-site surveys are just one of the methods that help to guide and back up our data-driven hypotheses.

One example of an on-site survey guiding tests that added revenue comes from a client in the online pharmacy industry. Our on-site survey asked users what stopped them from choosing a particular product at a specific stage of their user journey. The insights demonstrated that users found it difficult to choose products by themselves, with no assistance or help. This informed a test on these specific pages that implemented signposting to particular products through recommendations. We believed that including elements to aid users at the product selection stage would make it easier for them to select a product, eliminating the barrier we found via our on-site survey. Making this change produced an uplift of 3.48% in completed purchases for new users.

Look out for our next article in this series where we discuss the importance of competitor analysis.  

How we increased revenue by 11% with one small change

Split testing has matured and more and more websites are testing changes. The “test everything” approach has become widespread and this has been a huge benefit for the industry. Companies now know the true impact of changes and can avoid costly mistakes. The beauty of testing is that the gains are permanent, and the losses are temporary.

Such widespread adoption of testing has brought the challenge that many tests have small or no impact on conversion rates. Ecommerce managers are pushing for higher conversion rates with the request:

“We need to test bigger, more radical things”

Hoping that these bigger tests bring the big wins that they want.

Unfortunately, big changes don’t always bring big wins, and this approach can result in bigger, more complex tests, which take more time to create and are more frustrating when they fail.

How a small change can beat a big change

To see how a well-thought-out small change can deliver a huge increase in conversion rates where a big change had delivered none, we can look at a simple example.

This site offers online driver training courses, allowing users to have minor traffic tickets dismissed. Part of the process gives users the option to obtain a copy of their “Driver Record”. The page offering this service to customers was extremely outdated:

Wireframe to demonstrate the original page layout for the driver record upsell

Conversion and usability experts will panic at this form with its outdated design, lack of inline validation and no value proposition to convince the user to buy.

The first attempt to improve this form was a complete redesign:

Wireframe to show the initial test designed to increase driver record upsells

Although aesthetically more pleasing, featuring a strong value proposition and using fear as a motivator, the impact of this change was far from what was expected. Despite the entire page being rebuilt, the split test showed no statistically significant increase or decrease.

This test had taken many hours of design and development work, with no impact on conversion, so what had gone wrong?

To discover the underlying problem, the team placed a small Qualaroo survey on the site. This popped up on the page, asking users “What’s stopping you from getting your driver record today?”

Small on-page surveys like this are extremely valuable in delivering great insights about users, and this was no exception. Despite many complaints about the price (out of scope for this engagement), users repeatedly said that they were having trouble finding their “Audit Number”.

The audit number is a mandatory field on the form, and users could find it on their Driver’s License. Despite there already being an image on the page showing where to find this, users clearly weren’t seeing it.

The hypothesis for the next version of this test was simple.

“By presenting guidance about where to find the audit number in a standard, user friendly way at the time that this is a problem for the user, fewer users will find this to be an issue when completing the form.”

The test made an extremely small change to the page, adding a small question mark icon next to the audit number field on the form:

Wireframe to show the small addition of a tooltip to the test design

This standard usability pattern would be clear to users who were hesitating at this step. The lightbox which opened when the icon was clicked simply reiterated the same image that was already on the page.


Despite this being a tiny change, the impact on users was enormous. The test delivered an 11% increase in conversions against the version without the icon. By presenting the right information, at the right time, we delivered a massive increase in conversions without making a big change to the page.

An approach to big wins

So was this a fluke? Were we lucky? Not at all. This test demonstrated the application of a simple but effective approach to testing which can give great results almost every time. There’s often no need to make big or complex changes to the page itself. You can still make radical, meaningful changes with little design or development work.

When looking to improve the conversion rate for a site or page, by following three simple steps you can create an effective and powerful test:

  1. Identify the barrier to conversion.
    A barrier is a reason why a user on the page may not convert. It could be usability-related, such as broken form validation or a confusing button. It could be a concern about your particular product or service, such as delivery methods or refunds. Equally, it could be a more general concern, such as the user not being sure whether your service or product is the right solution to their problem. By using qualitative and quantitative research methods, you can discover the main barriers for users.
  2. Find or create a solution.
    Once you have identified a barrier, you can work to create a solution. This could be a simple change to the layout of the site; a change to your business practices or policies; supporting evidence or information; or compelling persuasive content such as social proof or urgency messaging. The key is to find a solution which directly targets the barrier the user is facing.
  3. Deliver it at the right time.
    The key to a successful test is to deliver your solution to the user when it is most relevant to them. For example, price promises and guarantees should be shown when pricing is displayed; delivery messaging on product pages and at the delivery step in the basket; social proof and trust messaging early in the process; and urgency messaging when the user may hesitate. To be effective, a message must be displayed on the right page, in the right area, at the right time for the user to see it and respond to it.

By combining these three simple steps, you can develop tests which are more effective and have more chance of delivering a big result.

Impact and Ease

Returning to the myth that big results need big tests: you should treat the impact of a test and its size as almost completely separate things. When you have a test proposal, think carefully about how much impact you believe it will have, and look independently at how difficult it will be to build.

We assess all tests for Impact and Ease and plot them on a graph:

Graph showing tests plotted by Impact against Ease

Clearly the tests in the top right corner are the ones you should be aiming to create first. These are the tests that will do the most for your bottom line, in the shortest amount of time.
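As an illustrative sketch (not our actual tooling), scoring ideas independently on the two axes and filtering for that top-right corner might look like this; the test names, the 1-5 scale and the cut-offs are all hypothetical:

```javascript
// Surface the "top right corner": ideas rated high on BOTH impact and
// ease (1-5 scales), sorted so the best combined score comes first.
function topRight(tests, minImpact = 4, minEase = 4) {
  return tests
    .filter((t) => t.impact >= minImpact && t.ease >= minEase)
    .sort((a, b) => (b.impact + b.ease) - (a.impact + a.ease));
}

// Example ideas with assumed scores.
const ideas = [
  { name: "Audit number tooltip", impact: 4, ease: 5 },
  { name: "Full checkout redesign", impact: 4, ease: 1 },
  { name: "Headline test", impact: 5, ease: 5 },
];
```

Here the full redesign drops out despite its decent impact score, because it is hard to build; the two small, high-impact ideas survive.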

More impact, more ease

So how do you make sure that you can deliver smaller tests with bigger impact?

Firstly, maximise the impact of your test by targeting the biggest barriers for users. Taking a data-driven approach to identifying these already gives your test a much higher chance of success. With a strong data-backed hypothesis, you know you are tackling a real problem for your users.

You can increase the impact by choosing the biggest barriers. A barrier that affects 30% of your users will have far more impact than one mentioned by only 5% of them. Impact is mostly driven by the size of the issue, as overcoming a bigger issue helps more users.

To get the biggest impact from smaller tests, you need to look at how you can make tests easier to create. By choosing solutions which are simple, you can iterate much more quickly and find winners. Effective ways of developing simple tests include:

  • Headline testing – headlines are a great way to have a huge impact on a user’s behaviour with very little effort. They are the first part of the page a user will read and allow you to set their mindset for the rest of the session
  • Tooltips and callouts – In forms these can be hugely effective. They are small changes but capture the user’s attention when they are thinking about a specific field. By matching security messaging to credit card fields, privacy messaging to email and phone number fields and giving guidance to users when they have to make difficult selections, it is easy to have an impact on their behaviour with a very small change.
  • Benefit bars – a very effective way of delivering a strong message without a major change to a site. With a huge potential impact (being delivered on every page) but a small impact on page design and layout (usually slotting in below the navigation), benefit bars can be very effective in getting your core messaging across to a user
  • Copy testing – by changing the copy at critical parts of the site you can impact the user’s feelings, thoughts and concerns without any complex design or development work

A simple approach for big wins with small tests

By following this simple three-step process, you can greatly increase the impact and success rate of your tests without having to resort to big, radical, expensive changes:

  1. Identify the barrier to conversion.
  2. Find or create a solution.
  3. Deliver it at the right time.

The impact of your testing programme is driven more by the size of the issues you are trying to overcome and the quality of your hypotheses than by the complexity or radicalness of your tests. Time spent discovering those barriers will pay off many times more than time spent in design and development.

5 questions you should be asking your customers

On-site survey tools provide an easy way to gather targeted, contextual feedback from your customers. Analysis of user feedback is an essential part of understanding motivations and barriers in the decision-making process.

It can be difficult to know when and how to ask the right questions in order to get the best feedback without negatively affecting the user experience. Here are our top 5 questions and tips on how to get the most out of your on-site surveys.

On-site surveys are a great way to gather qualitative feedback from your customers. Available tools include Qualaroo and Hotjar.

1. What did you come to < this site > to do today?

Where: On your landing pages

When: After a 3-5 second delay

Why: First impressions are important and that is why your landing pages should have clear value propositions and effective calls to action. Identifying user intentions and motivations will help you make pages more relevant to your users and increase conversion rates at the top of the funnel.

2. Is there any other information you need to make your decision?

Where: Product / pricing pages

When: After scrolling 50% / when the visitor attempts to leave the page

Why: It is important to identify and prioritise the information your users require to make a decision. It can be tempting to hide extra costs or play down parts of your product or service that are missing but this can lead to frustration and abandonment. Asking this question will help you identify the information that your customers need to make a quick, informed decision.

3. What is your biggest concern or fear about using us?

Where: Product / pricing pages

When: After a 3-5 second delay

Why: Studies have found that “…fear influences the cognitive process of decision-making by leading some subjects to focus excessively on catastrophic events.” Asking this question will help you identify and alleviate those fears, and reduce the negative effect they may be having on your conversion rates.

4. What persuaded you to purchase from us today?

Where: Thank you / confirmation page

When: Immediately after purchase. Ideally embedded in the page (try Wufoo forms)

Why: We find that some of our most useful insights come from users who have just completed a purchase. It’s a good time to ask what specifically motivated a user to purchase. Asking this question will help you identify and promote aspects of your service that are most appealing to your customers.

5. Was there anything that almost stopped you buying today?  

Where: Thank you / confirmation page

When: Immediately after purchase

Why: We find that users are clearer about what would have stopped them purchasing once they have made a purchase. Asking this question can help you identify the most important barriers that are preventing users from converting. Make sure to address these concerns early in the user journey to avoid surprises and reduce periods of uncertainty.

What questions have you asked your customers recently? Have you asked anything that generated valuable insights? Share in the comments below!

Spotting patterns – the difference between making and losing money in A/B testing

Wrongly interpreting the patterns in your A/B test results can lose you money. It can lead you to make changes to your site that actually harm your conversion rate.

Correctly interpreting the patterns in your A/B test results will mean you learn more from each test you run. It will give you confidence that you are only implementing changes that will deliver real revenue impact, and it will help you turn any losing tests into future winners.

We’ve run and analysed hundreds of A/B and multivariate tests. In our experience, the result of a test will generally fall into one of five distinct patterns. We’re going to share these five patterns here, and we’ll tell you what each pattern means in terms of what steps you should take next. Learn to spot these patterns, follow our advice on how to interpret them, and you’ll make the right decision more often – making your testing efforts more successful.

To illustrate each of the patterns, we’ll imagine we have run an A/B test on an e-commerce site’s product page and are now looking at the results. We’ll be looking at the increase/decrease in conversion rate that the new version of this page delivered compared to the original page. We’ll be looking at this on a page-by-page basis for the four steps in the checkout process that the visitor goes through in order to complete their purchase (Basket, Checkout, Payment and finally Order Confirmation).

To see the pattern in our results in each case, we’ll plot a simple graph of the conversion rate increase/decrease to each page. We’ll then look at how this increase/decrease in conversion rate has changed as we move through our site’s checkout funnel.
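As a rough sketch of this kind of analysis, the snippet below computes the relative change in the share of visitors reaching each checkout step between a control and a variant. The visitor counts and the `step_changes` helper are hypothetical, invented purely for illustration:

```python
# Sketch: per-step conversion-rate change through a checkout funnel.
# All figures are hypothetical examples, not real test data.

FUNNEL = ["Basket", "Checkout", "Payment", "Order Confirmation"]

# Visitors reaching each step, out of 10,000 entering each variation
control = {"Basket": 3000, "Checkout": 1800, "Payment": 1200, "Order Confirmation": 1000}
variant = {"Basket": 3300, "Checkout": 1980, "Payment": 1320, "Order Confirmation": 1100}

def step_changes(control, variant, entrants=10_000):
    """Relative change (%) in the share of entrants reaching each funnel step."""
    changes = {}
    for step in FUNNEL:
        ctrl_rate = control[step] / entrants
        var_rate = variant[step] / entrants
        changes[step] = (var_rate - ctrl_rate) / ctrl_rate * 100
    return changes

for step, change in step_changes(control, variant).items():
    print(f"{step}: {change:+.1f}%")
```

With these example numbers every step shows a uniform +10%, which is exactly the first pattern described below; plotting the four values per step makes the shape of any of the five patterns immediately visible.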

1. The big winner

This is the type of test result we all love. Your new version of a page converts x% more visitors to the next step than the original, and this x% increase continues uniformly all the way to Order Confirmation.

The graph of our first result pattern would look like this.

The big winner

We see 10% more visitors reaching each step of the funnel.


This pattern is telling us that the new version of the test page successfully encourages 10% more visitors to reach the next step and from there onwards they convert equally as well as existing visitors. The overall result would be a 10% increase in sales. It is clearly logical to implement this new version permanently.

2. The big loser

The negative version of this pattern, where each step shows a roughly equal decrease in conversion rate, is a sign that the change has had a clear negative impact. All is not lost, though: often an unsuccessful test can be more insightful than a straightforward winner, as the negative result forces you to re-evaluate your initial hypothesis and understand what went wrong. You may have stumbled upon a key conversion barrier for your audience, and addressing this barrier in the next test could lead to the positive result you have been looking for.

Graphically this pattern will look like this.

The big loser

We see 10% fewer visitors reaching each step of the funnel.


As the opposite of the big winner, this pattern is telling us that the new version of the test page causes 10% fewer visitors to reach the next step and from there onwards they convert equally as well as existing visitors. The overall result would be a 10% decrease in sales. You would not want to implement this new version of the page.

3. The clickbait

“We increased clickthrus by 307%!” You’ve probably seen sensational headlines like this being thrown around by people in the optimisation industry. Hopefully, like us, you’ve developed a strong sense of cynicism when you read results like this. The first question I always ask is “But how much did sales increase by?”. Chances are, if the result being reported fails to mention the impact on final sales then what they actually saw in their test results was this pattern that we’ve affectionately dubbed “The clickbait”.

Test results that follow this pattern will show a large increase in the conversion rate to the next step but then this improvement quickly fades away in the later steps and finally there is little or no improvement to Order Confirmation.

Graphically this pattern will look like this.

The clickbait


This pattern catches people out as the large improvement to the next step feels as if it should be a positive result. However, often this pattern is merely showing that the new version of the page is pushing a large number of visitors through to the next step who have no real intention of purchasing. This is illustrated by the sudden large drop in the conversion rate improvement at the later steps, when all of the unqualified extra traffic abandons the funnel.

As with all tests, whether this result can be deemed a success depends on the specifics of the site you are testing on and what you are looking to achieve. If there are clear improvements to be made on the next step(s) of the funnel that could help to convert the extra traffic from this test, then it could make sense to address those issues first and then re-run this test. However, if these extra visitors are clicking through by mistake or because they are being misled in any way then you may find it difficult to convert them later no matter what changes you make. Instead, you could be alienating potential customers by delivering a poor customer experience. You’ll also be adding a lot of noise to the data of any tests you run on the later pages as there are a lot of extra visitors on those pages who are unlikely to ever purchase.

4. The qualifying change

This fourth pattern is almost the reverse of the third, in that here we actually see a drop in conversion to the next step but an overall increase in conversion to Order Confirmation.

Graphically this pattern looks like this.

The qualifying change


Taking this pattern as a positive can seem counter-intuitive because of the initial drop in conversion to the next step. Arguably, this type of result is actually as good as, if not better than, the big winner from pattern 1. Here the new version of the test page is having what’s known as a qualifying effect. Visitors who may otherwise have abandoned at a later step in the funnel are leaving at the first step instead. Those visitors who do continue past the test page, on the other hand, are more qualified and therefore convert at a much higher rate. This explains the positive result to Order Confirmation.

Implementing a change that causes this type of pattern means the visitors remaining in the funnel have now expressed a clearer desire to purchase. If visitors are still abandoning at a later stage in the funnel, the likelihood is that this is being caused by a specific weakness on one of those pages. Having removed a lot of the noise from our data, in the form of the unqualified visitors, we are left with a much more reliable measure of the effectiveness of the later steps in the funnel. This means identifying weaknesses in the funnel itself will be far easier.

As with pattern 3, there are circumstances where a result like this may not be preferable. If you already have very low traffic in your funnel, then reducing it further could make it even more difficult to get statistically significant results when testing on the later pages of the funnel. You may want to look at tests to drive more traffic to the start of your funnel before implementing a change like this.

5. The messy result

This final pattern is often the most difficult to extract insight from as it describes results that show very little pattern whatsoever. Here we often see both increases and decreases in conversion rate to the various steps in the funnel.

The messy result


First and foremost, a lack of a discernible pattern in the results of your split-test can be a tell-tale sign of insufficient data. At the early stages of experiments, when data levels are low, it is not uncommon to see results fluctuating up and down. Reading too much into the results at this stage is a common pitfall. Resist the temptation to check your experiment results too frequently – if at all – in the first few days. Even apparently strong patterns that emerge at these early stages can quickly disappear with a larger sample.
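One quick way to sanity-check whether an early fluctuation is anything more than noise is a two-proportion z-test on the conversion counts. The sketch below uses Python’s standard library only; the conversion figures are hypothetical examples:

```python
# Sketch: two-sided two-proportion z-test for an A/B conversion difference.
# The conversion counts below are hypothetical, for illustration only.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Early in a test: 55 vs 45 conversions from 500 visitors each looks like a
# 22% uplift, but the p-value shows it is well within the range of chance.
print(two_proportion_p_value(45, 500, 55, 500))       # ≈ 0.29 – not significant
# The same rates with ten times the data tell a very different story.
print(two_proportion_p_value(450, 5000, 550, 5000))   # < 0.001 – significant
```

The same apparent uplift flips from noise to a significant result purely because of sample size, which is why strong-looking early patterns so often evaporate.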

If your test has a large volume of data and you’re still seeing this type of result, then the likelihood is that your new version of the page is delivering a combination of the effects from the clickbait and the qualifying change patterns: qualifying some traffic while simultaneously pushing more unqualified traffic through the funnel. If your test involved making multiple changes to a page, try testing the changes separately to pinpoint which individual changes are causing the positive impact and which are causing the negative impact.

Key takeaways

The key point to take from all of these patterns is the importance of tracking and analysing the results at every step of your funnel when you A/B test, rather than just the next step after your test page. It is easy to see how, if only the next step were tracked, many tests could be falsely declared winners or losers. In short, that mistake loses you money.

Detailed test tracking will allow you to pinpoint the exact step in your funnel that visitors are abandoning, and how that differs for each variation of the page that you are testing. This can help to answer the more important question of why they are abandoning. If the answer to this is not obvious, running some user tests or watching some recorded user sessions of your test variations can help you to develop these insights and come up with a successful follow up test.

There is a lot more to analysing A/B tests than just reading off a conversion rate increase to any single step in your funnel. Often, the pattern of the results can reveal greater insights than the individual numbers. Avoid jumping to conclusions based on a single increase or decrease in conversion to the next step and always track right the way through to the end of your funnel when running tests. Next time you go to analyse a test result, see which of these patterns it matches and consider the implications for your site.