
How to build an experimentation, CRO or A/B testing framework

Everyone approaches experimentation differently. But there’s one thing companies that are successful at experimentation all have in common: a strategic framework that drives experimentation.

In the last ten years we’ve worked with start-ups through to global brands like Facebook, the Guardian and Domino’s Pizza, and the biggest factor we’ve seen impact success is having this strategic framework to inform every experiment.

In this post, you’ll learn:

    • Why a framework is crucial if you want your experimentation to succeed
    • How to set a meaningful goal for your experimentation programme
    • How to build a framework around your goal and create your strategy for achieving it

We’ll be sharing the experimentation framework that we use day in, day out with our clients to deliver successful experimentation projects. We’ll also share some blank templates of the framework at the end, so after reading this you’ll be able to have a go at completing your own straight away.

Why use a framework? Going from tactical to strategic experimentation

Using this framework will help you mature your own approach to experimentation, make a bigger impact, get more insight and have more success.

Having a framework:

      • Establishes a consistent approach to experimentation across an entire organisation, enabling more people to run more experiments and deliver value
      • Allows you to spend more time on the strategy behind your experiments and less time on the “housekeeping” of trying to manage your experimentation programme
      • Enables you to transition from testing tactically to testing strategically

Let’s explore that last point in detail.

In tactical experimentation every experiment is an island – separate and unconnected to any others. Ideas generally take the form of solutions – “we should change this to be like that” – and come from heuristics (aka guessing), best practice or copying a competitor. There is very little guiding which experiments run where, when and why.

Strategic experimentation on the other hand is focused on achieving a defined goal and has clear strategy for achieving it. The goal is the starting point – a problem with potential solutions explored through the testing of defined hypotheses. All experiments are connected and experimentation is iterative. Every completed experiment generates more insight that prompts further experiments as you build towards achieving the goal.

If strategic experimentation doesn’t already sound better to you then we should also mention the typical benefits you’ll see as a result of maturing your approach in this way.  

    • You’ll increase your win rate – the % of experiments that are successful
    • You’ll increase the impact of each successful experiment – on top of any conversion rate uplifts, experiments will generate more actionable insight
    • You’ll never run out of ideas again – every conclusive experiment will spawn multiple new ideas

Introducing the Conversion.com experimentation framework

As we introduce our framework, you might be surprised by its simplicity. But all good frameworks are simple. There’s no secret sauce here. Just a logical, strategic approach to experimentation.

Just before we get into the detail of our framework, a quick note on the role of data. Everything we do should be backed by data. User-research and analytics are crucial sources of insight used to build the layers in our framework. But the experiments we run using the framework are often the best source of data and insight we have. An effective framework should therefore minimise the time it takes to start experimenting. We cannot wait for perfect data to appear before we start, or try to get things right first time. The audiences, areas and levers that we’ll define in our framework come from our best assessment of all the data we have at a given time. They are not static or fixed. Every experiment we run helps us improve and refine them, and our framework and strategy are updated continuously as more data becomes available.

Part 1 – Establishing the goal of your experimentation project

The first part of the framework is the most important by far. If you only have time to do one thing after reading this post it should be revisiting the goal of your experimentation.

Most teams don’t set a clear goal for experimentation. It’s as simple as that. Any strategy needs to start with a goal – otherwise, how can you differentiate success from wasted effort?

A simple test of whether your experimentation has a clear goal is to ask everyone in your team to explain it. Can they all give exactly the same answer? If not, you probably need to work on this. 

Don’t be lazy and choose a goal like “increase sales” or “growth”. We’re all familiar with the importance of goals being “SMART” (specific, measurable, achievable, relevant, time-bound) when setting personal goals. Apply this when setting the goal for experimentation.

Add focus to your goal with targets, measures and deadlines, and wherever possible be specific rather than general. Does “growth” mean “increase profit” or “increase revenue”? By how much? By when? A stronger goal for experimentation would be something like “Add an additional £10m in profit within the next 12 months”. There will be no ambiguity as to whether you have achieved that or not in 12 months’ time.

Ensure your goal for experimentation is SMART

Some other examples of strong goals for experimentation:

    • “Increase the rate of customers buying add-ons from 10% to 15% in 6 months.”
    • “Find a plans and pricing model that can deliver 5% more new customer revenue before Q3.”
    • “Determine the best price point for [new product] before it launches in June.”

A clear goal ensures everyone knows what they’re working towards, and what other teams are working towards. This means you can coordinate work across multiple teams and spot any conflicts early on.

Part 2 – Defining the KPIs that you’ll use to measure success

When you’ve defined the goal, the next step is to decide how you’re going to measure it. We like to use a KPI tree here – working backwards from the goal to identify all the metrics that affect it.

For example, if our goal is “Add an additional £10m in profit within the next 12 months” we construct the KPI tree of the metrics that combine to calculate profit. In this simple example let’s say profit is determined by our profit per order times how many orders we get, minus the cost of processing any returns.

Sketching out a KPI tree is an easy way to decide the KPIs you should focus on

These 3 metrics then break down into smaller metrics and so on. You can then decide which of the metrics in the tree you can most influence through experimentation. These then become your KPIs for experimentation. In our example we’ve chosen average order value, order conversion rate and returns rate as these can be directly impacted in experiments. Cost per return on the other hand might be more outside our control.
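To make the arithmetic concrete, here is a minimal sketch of that example KPI tree in code. The visitor, order value and returns numbers are made-up assumptions, not benchmarks – the point is simply how the KPIs we choose to experiment on roll up into the profit goal.

```python
# A minimal sketch of the example KPI tree, with illustrative (made-up) numbers.
# Profit = (profit per order x number of orders) - returns processing cost.

visitors = 4_000_000            # annual site visitors (assumption)
order_conversion_rate = 0.025   # KPI: % of visitors who order
average_order_value = 80.0      # KPI: revenue per order (GBP)
margin = 0.30                   # profit as a share of order value
returns_rate = 0.12             # KPI: % of orders returned
cost_per_return = 6.0           # largely outside our control

orders = visitors * order_conversion_rate
profit_per_order = average_order_value * margin
returns_cost = orders * returns_rate * cost_per_return

profit = orders * profit_per_order - returns_cost
print(f"Profit: £{profit:,.0f}")

# Moving a KPI shows up directly in the goal metric, e.g. an experiment that
# lifts the order conversion rate from 2.5% to 2.75%:
orders_uplift = visitors * 0.0275
profit_uplift = orders_uplift * profit_per_order - orders_uplift * returns_rate * cost_per_return
print(f"Profit with uplift: £{profit_uplift:,.0f} (+£{profit_uplift - profit:,.0f})")
```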

When you’re choosing KPIs, remember what the K stands for. These are key performance indicators – the ones that matter most. We’d recommend choosing at most 2 or 3. Remember, the more you choose, the more fragmented your experimentation will be. You can track more granular metrics in each experiment, but the overall impact of your experiments will need to be measured in these KPIs.

Putting that all together, you have the first parts of your new framework. This is our starting point – and it is worth the time to get this right as everything else hinges on this.

We present our framework as rows to highlight the importance of starting with the goal and working down from there.

Part 3 – Understanding how your audience impacts your KPIs and goal

Now we can start to develop our strategy for impacting the KPIs and achieving the goal. The first step is to explore how the make-up of our audience should influence our approach.

In any experiment, we are looking to influence behaviour. This is extremely difficult to do. It’s even more difficult if we don’t know who we’re trying to influence – our audience.

We need to understand the motivations and concerns of our users – and specifically how these impact the goal and KPIs we’re trying to move. If we understand this, then we can then focus our strategy on solving the right problems for the right users.

So how do we go about understanding our audience? For each of our KPIs the first question we should ask is “Which groups of users have the biggest influence on this KPI?” With this question in mind we can start to map out our audience.

Start by defining the most relevant dimensions – the attributes that identify certain groups of users. Device and Location are both dimensions, but these may not be the most insightful ways to split your audience for your specific goal and KPIs. If our goal is to “reduce returns by 10% in 6 months”, we might find that there isn’t much difference in returns rate for desktop users compared to mobile users. Instead we might find returns rate varies most dramatically when we split users by the Product Type that they buy.

For each dimension we can then define the smaller segments – the way users should be grouped under that dimension. For example, Desktop, Mobile and Tablet would be segments within the Device dimension.

You can have a good first attempt at this exercise in 5–10 minutes. At the start, accuracy isn’t your main concern. You want to generate an initial map that you can then start validating using data – refining your map as necessary. You might also find it useful to create 3 or 4 different audience maps, each splitting your audience in different ways, that are all potentially valid and insightful for your goal.

Map out your audiences by thinking about the relevant dimensions that could have the greatest influence on your KPIs and overall goal.
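As a rough illustration, here is one possible audience map written out as plain data for a returns-focused goal. The dimensions, segments and returns rates are hypothetical – the point is that each dimension groups users into segments you can then size and validate with your own analytics data.

```python
# One possible audience map for a "reduce returns" goal: each dimension
# groups users into segments we can later validate with analytics data.
# Dimensions, segments and numbers are illustrative assumptions.
audience_map = {
    "Device": ["Desktop", "Mobile", "Tablet"],
    "Product Type": ["Footwear", "Dresses", "Accessories"],
    "Customer Type": ["New customer", "Returning customer"],
}

# The validation step is then to size each segment and compare the KPI across it,
# e.g. returns rate by Product Type pulled from an analytics export.
returns_rate_by_product_type = {  # made-up numbers for illustration
    "Footwear": 0.28,
    "Dresses": 0.22,
    "Accessories": 0.06,
}
for segment, rate in sorted(returns_rate_by_product_type.items(), key=lambda kv: -kv[1]):
    print(f"{segment}: {rate:.0%} returns rate")
```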

Once you have your potential audiences the next step would then be to use data to validate the size and value of these audiences. The aim here isn’t to limit our experiments to a specific audience – we’re not looking to do personalisation quite yet. But understanding our audiences means when we come to designing experiments we’ll know how to cater to the objections and concerns of as many users as possible.

We add the audience dimensions we feel are most relevant to our goal and KPIs to the framework. If it’s helpful you can also show the specific segments below.

Part 4 – Identifying the areas with the greatest opportunity to make an impact

Armed with a better understanding of our audience, we still need to choose when and where to act to be most effective. Areas is about understanding the user journey – and focusing our attention on where we can make the biggest impact.

For each audience, the best time and place to try and influence users will vary. And even within a single audience, the best way to influence user behaviour is going to depend on which stage of their purchase journey the users are at.

As with audiences, we need to map out the important areas. We start by mapping the onsite journeys and funnels. But we don’t limit ourselves to just onsite experience – we need to consider the whole user journey, especially if our goal is something influenced by behaviours that happen offsite. We then need to identify which steps directly impact each of our KPIs. This helps to limit our focus, but also highlights non-obvious areas where there could be value.

Sketch out your entire user journey, including what happens outside the website. Then highlight which areas impact each of your KPIs.

As with audiences, you can sketch out the initial map fairly quickly, then use analytics data to start adding more useful insights. Label conversion and drop-off rates to see where abandonment is high. Don’t just do this once for all traffic, do this repeatedly, once for each of the important audiences identified in the previous step. This will highlight where things are similar but crucially where things are different.

Once you have your area map you can start adding clickthrough and drop-off rates for different audiences to spot opportunities.
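If it helps, here is a minimal sketch of that labelling exercise: computing step-to-step drop-off for each audience from funnel counts. The journey steps, audiences and numbers are all assumptions for illustration.

```python
# Step-to-step drop-off per audience, from hypothetical funnel counts.
funnel_steps = ["Product page", "Basket", "Checkout", "Order confirmation"]
visits = {  # users reaching each step, per audience (made-up numbers)
    "Footwear buyers":    [50_000, 14_000, 9_000, 6_300],
    "Accessories buyers": [30_000, 12_000, 9_500, 8_200],
}

for audience, counts in visits.items():
    print(audience)
    for step_from, step_to, n_from, n_to in zip(funnel_steps, funnel_steps[1:], counts, counts[1:]):
        drop_off = 1 - n_to / n_from
        print(f"  {step_from} -> {step_to}: {drop_off:.0%} drop-off")
```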

So with a good understanding of our audiences and areas we can add these to our framework. Completing these two parts of the framework is easier the more data you have. Start with your best guess at the key audiences and areas, then go out and do your user-research to inform your decisions here. Validate your audiences and areas with quant and qual data.

Add your audiences and areas to your framework. You may have more than 4 of each but that’s harder for us to fit in one image!

Part 5 – Identifying the potential levers that influence user behaviour

Levers are the factors we believe can influence user behaviour: the broad themes that we’ll explore in experimentation. At its simplest, they’re the reasons why people convert, and also the reasons why people don’t convert. For example, trust, pricing, urgency and understanding are all common levers.

To identify levers, first we look for any problems that are stopping users from converting on our KPI – we call these barriers to conversion. Some typical barriers are lack of trust, price, missing information and usability problems.

We then look for any factors that positively influence a user’s chances of converting – what we call conversion motivations. Some typical motivations are social proof (reviews), guarantees, USPs of the product/service and savings and discounts.

Together, the barriers and motivations give us a set of potential levers that we can “pull” in an experiment to try to influence behaviour. Typically we’ll try to solve a barrier or make a motivation more prominent and compelling.

Your exact levers will be unique to your business. However, there are some levers that come up very frequently across different industries and can make for good starting points.

Ecommerce – Price, social proof (reviews), size and fit, returns, delivery cost, delivery methods, product findability, payment methods, checkout usability

SaaS – Free trial, understanding product features, plan types, pricing, cancelling at the end of trial, monthly vs annual pricing, user onboarding

Gaming – welcome bonuses, ongoing bonuses, payment methods, popular games, odds

Where do levers come from? Data. We conduct user-research and gather quantitative and qualitative data to look for evidence of levers. You can read more about how we do that here.

When first building our framework it’s important to remember that we’re looking for evidence of levers, not conclusive proof. We want to assemble a set of candidate levers that we believe are worth exploring. Our experiments will then validate the levers and give us the “proof” that a specific lever can effectively be used to influence user behaviour.

You might start initially with a large set of potential levers – 8 or 10 even. We need a way to validate levers quickly and reduce this set down to the 3–4 most effective. Luckily we have the perfect tool for that in experiments.

Add your set of potential levers to your framework and you’re ready to start planning your experiments.

Part 6 – Defining the experiments to test your hypotheses

The final step in our framework is where we define our experiments. This isn’t an exercise we do just once – we don’t define every experiment we could possibly run from the framework at the start – but using our framework we can start to build the hypotheses that our experiments will explore.

At this point, it’s important to make a distinction between a hypothesis for an experiment and the execution of an experiment. A hypothesis is a statement we are looking to prove true or false. A single hypothesis can then be tested through the execution of an experiment – normally a set of defined changes to certain areas for an audience.

We define our hypothesis first before thinking about the best execution of an experiment to test it, as there are many different executions that could test a single hypothesis. At the end of the experiment the first thing we do is use the results to evaluate whether our hypothesis has been proven or disproven. Depending on this, we then evaluate the execution separately to decide whether we can iterate on it – to get even stronger results – or whether we need to re-test the hypothesis using a different execution.  

The framework makes it easy to identify the hypothesis statements that we will look to prove or disprove in our experiments. We can build a hypothesis statement from the framework using this simple template:

“We believe lever [for audience] [on area] will impact KPI.”

The audience and area here are in square brackets to denote that it’s optional whether we want to specify a single audience and area in our hypothesis. Doing so will give us a much more specific hypothesis to explore, but in many cases we may also be interested in testing the effectiveness of the lever across different audiences and areas – so we may want to leave the audience and area unspecified until we define the execution of the experiment.
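As a quick illustration of the template, the sketch below assembles hypothesis statements from framework entries, with the audience and area left optional exactly as described above. The levers, audiences, areas and KPIs used are hypothetical.

```python
from typing import Optional

# Build hypothesis statements from the framework template:
# "We believe [lever] [for audience] [on area] will impact [KPI]."
def hypothesis(lever: str, kpi: str, audience: Optional[str] = None, area: Optional[str] = None) -> str:
    parts = [f"We believe {lever}"]
    if audience:
        parts.append(f"for {audience}")
    if area:
        parts.append(f"on {area}")
    parts.append(f"will impact {kpi}")
    return " ".join(parts) + "."

# Hypothetical framework entries
print(hypothesis("improving size and fit guidance", "returns rate",
                 audience="first-time footwear buyers", area="the product page"))
print(hypothesis("making delivery costs clearer", "order conversion rate"))
```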

The framework allows you to quickly create hypotheses for how you’ll impact your KPIs and achieve your goal.

Using the framework

Your first draft of the completed framework will have a large number of audiences, areas and levers, and even multiple KPIs. You’re not going to be able to tackle everything at once. A good strategy should have focus. Therefore you need to do two things before you can define a strategy from the framework.

Prioritise KPIs, audiences and areas

We’ll soon be publishing a detailed post on how this framework enables an alternative approach to prioritisation, compared with typical experiment prioritisation.

The core idea is that you first prioritise the KPI from your framework that you most need to impact in order to achieve your goal. Then evaluate your audiences to identify the groups that are the highest priority to influence if you want to move that KPI. Then, for that audience, prioritise the areas of the user-journey that offer the greatest opportunity to influence their behaviour.

This then gives you a narrower initial focus. You can return to the other KPIs at a later date and do the same prioritisation exercise for them.

Validate levers

You need to quickly refine your set of levers and identify the ones that have the greatest potential. If you have run experiments before you should look back through each experiment and identify the key lever (or levers) that were tested. You can then give each lever a “win rate” based on how often experiments using that lever have been successful. If you haven’t yet started experimenting, you likely already have an idea of the potential priority order of your levers based on the volume of evidence for each that you found during your user-research.
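A minimal sketch of that look-back exercise, assuming you keep (or can reconstruct) a simple log of past experiments tagged with the lever tested and the outcome – the levers and results below are made up.

```python
from collections import defaultdict

# Hypothetical log of past experiments: (lever tested, was it a winner?)
past_experiments = [
    ("social proof", True), ("social proof", True), ("social proof", False),
    ("urgency", False), ("urgency", False),
    ("delivery cost", True), ("delivery cost", False),
]

wins = defaultdict(int)
runs = defaultdict(int)
for lever, won in past_experiments:
    runs[lever] += 1
    wins[lever] += int(won)

# Rank levers by win rate to help decide which to validate or prioritise next
for lever in sorted(runs, key=lambda l: wins[l] / runs[l], reverse=True):
    print(f"{lever}: {wins[lever]}/{runs[lever]} wins ({wins[lever] / runs[lever]:.0%} win rate)")
```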

However, the best way to validate a lever is to run an experiment to test the impact it can have on our KPI. You need a way to do this quickly. You don’t want to invest significant time and effort testing hypotheses around a lever that turns out never to have been valid. Therefore, for each lever you should identify what we call the minimum viable experiment.

You’re probably familiar with the minimum viable product (MVP) concept. In a minimum viable experiment we look to design the simplest experiment we can that will give us a valid signal as to whether a lever works at influencing user behaviour.

If the results of the minimum viable experiment show a positive signal, we can then justify investing further resource on more experiments to validate hypotheses around this lever. If the minimum viable experiment doesn’t give a positive signal, we might then de-prioritise that lever, or remove it completely from our framework. We’ll also be sharing a post soon going into detail on designing minimum viable experiments.

Creating a strategy

How you create a strategy from the framework will depend on how much experimentation you have done before and therefore how confident you are in your levers. If you’re confident in your levers then we’d recommend defining a strategy that lasts for around 3 months and focuses on exploring the impact of 2–3 of your levers on your highest priority KPI. If you’re not confident in your levers, perhaps having not tested them before, then we’d recommend an initial 3–6 month strategy that looks to run the minimum viable experiment on as many levers as possible. This will enable you to validate your levers quickly so that you can take a narrower strategy later.

Crucially at the end of each strategic period we can return to the overall framework, update and refine it from what we’ve learnt from our experiments, and then define our strategy for the next period.

For one quarter we might select a single KPI and a small set of prioritised audiences, areas and levers to focus on and validate.

Key takeaways

You can have a first go at creating your framework in about 30 minutes. Then you can spend as much or as little time as you like refining it before you start experimenting. Remember, your framework is a living thing that will change and adapt over time as you learn more and get more insight.

  1. Establish the goal of your experimentation project
  2. Define the KPIs that you’ll use to measure success
  3. Understand how your audience impacts your KPIs and goal
  4. Identify the areas with the greatest opportunity to make an impact
  5. Identify the potential levers that influence user behaviour
  6. Define the experiments to test your hypotheses

The most valuable benefit of the framework is that it connects all your experimentation together into a single strategic approach. Experiments are no longer islands, run separately and with little impact on the bigger picture. Using the framework to define your strategy ensures that every experiment is playing a role, no matter how small, in helping you impact those KPIs and achieve your goal.

Alongside this, using a framework also brings a large number of other practical advantages:

  • It’s clear – your one diagram can explain any aspect of your experimentation strategy to anyone who asks, or whenever you need to report on what you’re doing
  • It acts as a sense check – any experiment idea that gets put forward can be assessed on how well it fits within the framework. If it doesn’t fit, it’s an easy rejection with a clear reason why
  • It’s easy to come back to – things have a nasty habit of getting in the way of experimentation, but with the framework, even if you leave it for a couple of months, it’s easy to pick up where you left off
  • It’s easier to show progress and insight – one of the biggest things teams struggle with is documenting the results of all their experiments and what was learnt. Because the framework updates and changes over time, you know that your previous experiment results have all been factored in and that you’re doing what you’re doing for a reason

As we said at the start of this post, there is no secret sauce in this framework. It’s just a logical approach that breaks down the key parts of an experimentation strategy. The framework we use is the result of over 10 years of experience running experimentation and CRO projects, and it looks the way it does because it’s what works for us. There’s nothing stopping you from creating your own framework from scratch, or taking ours and adapting it to suit your business or how your teams work. The important thing is to have one, and to use it to go from tactical to strategic experimentation.

You can find a blank Google Slide of our framework here that you can use to create your own.

Alternatively you can download printable versions of the framework if you prefer to work on paper. These templates also allow for a lot more audiences, areas, levers and experiments than we can fit in a slide.

If you would like to learn more, get in touch today!

Introducing: The 9 experimentation principles

At Conversion.com, our team and our clients know first-hand the impact experimentation can have. But we also see all too often the simple mistakes, misconceptions and misinterpretations organisations make that limit the impact, effectiveness and adoption of experimentation.

We wanted to put that right. But we didn’t just want to make another best-practice guide to getting started with CRO or top 10 tips for better experiments. Instead, inspired by the simple elegance of the UK government design principles, we set ourselves the challenge of defining a set of core experimentation principles.

Our ambition was to create a set of principles that, if followed, should enable anyone to establish experimentation as a problem solving framework for tackling any and all problems their organisation faces. To distill over 10 years of experience in conversion optimisation and experimentation down to a handful of principles that address every common mistake, every common misconception and misinterpretation of what good experimentation looks like.

Many hours of discussion, debate and refinement later, we’re happy to be able to share the end product – the 9 principles of experimentation.

Here are the principles in their simplest form. You can also download a pdf of the experimentation principles that also includes quotes and stories we’ve gathered from experimentation experts at companies such as Just Eat, Booking.com, Microsoft and Facebook. A few snippets of those quotes are included below as a taster.


1 – Challenge assumptions, beliefs and doctrine

Experimentation should not be limited to optimising website landing pages, funnels and checkouts. Use experimentation as a tool to challenge the widely held assumptions, ingrained beliefs and doctrine of your organisation. It’s often by challenging these assumptions that you’ll see the biggest returns. Don’t accept “that’s the way it’s always been done” – to do so is to guarantee you’ll get the results you’ve always had. Experimentation provides a level playing field for evaluating competing ideas, scientifically, without the influence of authority or experience.

It was only when we were willing to question our core assumptions through interviews, data collection, and rigorous experimentation that we found answers to why growth had slowed…

– Rand Fishkin, CEO and Co-founder, SparkToro

2 – Always start with data

It sounds trite to say you should start with data. Yet most people still don’t. Gut-feel still dominates decision making and experiments based on gut-feel rarely lead to meaningful impact or insight. Good experimentation starts with using data to identify and understand the problem you’re trying to solve. Gather data as evidence and build a case for the likely causes of those problems. Once you have gathered enough evidence you can start to formulate hypotheses to be proven or disproven through experiments.

3 – Experiment early and often

In any project, look for the earliest opportunity to run an experiment. Don’t wait until you have already built the product/feature to run an experiment, or you’ll find yourself moulding the results to justify the investment or decisions you’ve already made. Experiment often to regularly sense-check your thinking, remove reliance on gut-feel and make better informed decisions.

4 – One, provable hypothesis per experiment

Every experiment needs a single hypothesis. That hypothesis statement should be clear, concise and provable – a cause-effect statement. A single hypothesis ensures the experiment results can be used to evaluate that hypothesis directly. Competing hypotheses introduce uncertainty. If you have multiple hypotheses, separate these into distinct experiments.

5 – Define the success metric and criteria in advance

Define the primary success metric and the success criteria for an experiment at the same time that you define the hypothesis. Doing so will focus your exploration of possible solutions around their ability to impact this metric. Failing to do so will also introduce errors and bias when analysing results—making the data fit your own preconceived ideas or hopes for the outcome.

Any targets drawn after the experiment is run should be called into question. The evidential value of an experiment comes from targets that were drawn before we started the test.

– Lukas Vermeer, Booking.com

6 – Start with the minimum viable experiment, then iterate

When tackling complex ideas the temptation can be to design a complex experiment. Instead, look for the simplest way to run an experiment that can validate just one part of the idea: the minimum viable experiment. Run this experiment to quickly get data or insight that either gives the green light to continue to more complex implementations, or flags problems early on. Then iterate and scale to larger experiments with confidence that you’re heading in the right direction.

7 – Evaluate the data, hypothesis, execution and externalities separately

When faced with a negative result, it can be tempting to declare an idea dead-in-the-water and abandon it completely. Instead, evaluate the four components of the experiment separately to understand the true cause:

  1. The data – was it correctly interpreted?
  2. The hypothesis – has it actually been proven or disproven?
  3. The execution – was our chosen solution the most effective?
  4. External factors – has something skewed the data?

An iteration with a slightly different hypothesis, or an alternative execution could end in very different results. Evaluating against these four areas separately, for both negative and positive results, gives four areas on which you can iterate and gain deeper insight.

8 – Measure the value of experimentation in impact and insight

The value of an experimentation programme is ultimately judged on the impact it delivers and the insight it uncovers. Experimentation can only be judged a failure if it doesn’t give us any new insight that we didn’t have before. Negative results that give us new insight can often be more valuable than positive results that we don’t understand.

9 – Use statistical significance to minimise risk

Use measures of statistical significance when analysing experiments to manage the risk of making incorrect decisions. Achieving 95% statistical significance still leaves a 1 in 20 chance of a false positive – seeing a signal where there is no signal. This might not be acceptable for a very high-risk experiment on something like product or pricing strategy, so increase your requirements to suit your risk appetite. Beware of experimenting without statistical significance – it’s not much better than guessing.
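As a rough illustration, here is a minimal sketch of a two-sided, two-proportion z-test for a simple conversion-rate experiment. The visitor and conversion counts are made up, and this is just one common method – your testing tool may well use a different statistical approach.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical experiment: control vs variation, 20,000 visitors each
p_a, p_b, z, p_value = two_proportion_z_test(conv_a=1_000, n_a=20_000, conv_b=1_120, n_b=20_000)
print(f"Control {p_a:.2%} vs variation {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")
print("Significant at 95%" if p_value < 0.05 else "Not significant at 95%")
```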

The best data scientists are skeptics that double-check, triangulate results, and evaluate the positive and the negative results with the same scientific rigor.

– Ron Kohavi, Microsoft

***

These are the 9 principles we felt most strongly define experimentation, but no doubt we could have added others and made a longer list. If you have experimentation principles that you use at your organisation that we haven’t included here we’d be interested to hear about them and why you feel they’re important.

For more detail and even more insights from some of the world’s leading experts on experimentation, please be sure to download the full experimentation principles.

**

We’re also looking for more stories and anecdotes of both good and bad examples of these principles in action from contributors outside Conversion to include in our further iterations of these principles. If you have something you feel epitomises one of these principles then please get in touch and you could feature in our future posts and content about these principles.

And finally, if you want to be notified when we publish more content about these experimentation principles, drop us an email with your contact details.

For any of the above get in touch at hello@conversion.com.


Talking Shop

As published in ERT Magazine (www.ertonline.co.uk) – October 2017 issue 

Alexa and her friends may be delighting users in the home with how they can make life easier, but some companies are taking the first bold steps into voice controlled e-commerce…

The smart-home revolution is in full swing.

The success of the Amazon Echo and its Alexa ‘skills’ platform and the launch of Google Home have taken the idea of voice control and voice-controlled e-commerce from a novelty concept to a legitimate potential revenue channel for retailers willing to take the risk.

Early brands to explore this opportunity include Uber and Just Eat, and earlier this summer Domino’s Pizza launched its Alexa skill in the UK after over a year of offering the same in the US. This allows you to order pizza with just a few words. We’ve yet to see data on how many sales these brands are generating through their voice-control channels, but the phased deployment from Domino’s certainly suggests they are seeing enough value to justify the investment.

Designing a successful voice-controlled experience isn’t going to be easy. Looking at this from a user experience and conversion rate perspective, voice control is a whole new touch-point and interaction type to understand. In traditional conversion rate optimisation for e-commerce sites, potential reasons why a user might abandon and not complete a purchase fall into two categories – usability and persuasion.

Usability issues would be anything that physically prevents the user from being able to complete their desired action – broken pages, links or problems with completing a form or online checkout.

As for persuasion – even a site with no usability issues wouldn’t convert 100 percent of its visitors. There will always be an element in the user’s decision-making process around persuasion. Have they been sufficiently convinced to purchase this product or service? Typical persuasion issues include failing to describe the benefits of a product.

So what does the future look like in a voice-controlled world?

In traditional e-commerce, the user is free to make their own journey through a website and we enable this freedom by displaying a range of content, products, deals and offers, navigation options and search functionality. With voice control, the possible journeys to purchase are far fewer and almost completely invisible to the user at the outset. So with an Alexa skill, the developer must define the possible trigger phrases that the user can use to take a certain set of defined actions.

Crucial 

Skill and experience in voice interaction design will emerge as a crucial requirement for any team looking to develop this channel. Collecting and analysing data on how users are invoking your app/skill, what exact words and phrases they’re using, how they’re describing your products and service and how they’re talking to your app through their journey, will be an essential part of experience optimisation.

Another area that will dominate user experience for voice control will be how the app responds to user mistakes. Frustration will be the worst enemy of voice-controlled services, far more so than it is with websites now. If you’ve been unlucky enough to have to call an automated helpline that uses voice control, you will know how quickly the frustration builds when something goes wrong.

On a website, if the user gets stuck or confused on their journey, it’s relatively easy for them to go back or to navigate away from the page and try again. With voice-control, this isn’t the case. If the user tries a command that isn’t recognised by the app, then it can only respond with a quick error response. Failure to re-engage the user and keep them trying will quickly result in frustration and even abandonment.

Persuade 

So how do you persuade a user to complete their purchase once they’ve started their voice-controlled interaction? How would you describe the benefits of a certain washing machine, laptop or TV when they can only be spoken, and spoken by a robotic voice at that?

The development of chatbots in the past couple of years has seen a lot of investment and progress in how to make an automated response appear human and more engaging. But this development has all been in how to present text responses rather than voice responses. Voice responses are inherently more complex.

Will developments in Alexa’s AI allow her to improvise responses based on prior knowledge of the user? Personalisation within the voice space could allow Alexa to make tailored recommendations based on my purchase history.

“Alexa, look on Currys for a new kettle.”

“Ok Kyle. There’s a black Breville kettle that would look great with the Breville toaster you bought last month. It’s £39. Is that OK?”

“Sounds good.”

“You bought your last kettle 18 months ago. Shall I add the three-year warranty on this one for an extra £9.99?”

I’m sold.

 

 

Personalisation, what’s the hold up?

The ‘year of personalisation’ has been on the cards for a while now. 

A quick Google search, and you’ll find plenty of articles touting 201X as the year that personalisation will take off. Midway through 2017, we’re still waiting for it to really take hold.

So, what’s holding personalisation back from becoming the norm? Why isn’t every website already perfectly tailored to my individual needs?

There are two main reasons we have yet to see personalisation live up to the great expectations.

The first reason is the expectation itself. The dream of personalisation as it’s sold – a website responsive to the user’s habits – will likely remain just that for all but a handful of organisations that meet very challenging criteria.

The rest of us have more realistic and practical expectations for personalisation where it must prove its worth against many other activities competing for resources.

The second reason is that implementing personalisation is a difficult process, and one where it makes sense to start small and build up. No doubt the majority of organisations are starting to explore personalisation, but the reason we feel it has yet to take off is because they are still in the early stages. Personalisation is hard. It’s not something that can be undertaken lightly and, from a conversion optimisation perspective, it is only possible if you have already reached the higher levels of experimentation maturity.

So, how do I know if my business is ready? 

Before even thinking about technical capabilities, tools or technology, you should evaluate personalisation in three areas: suitability, profitability and maturity.

Certain types of operating model are better suited to personalisation and offer more opportunity and potential. If you maintain an ongoing relationship with your customers and see a high frequency of engagement e.g. if you get a lot of repeat transactions as an e-commerce site, then personalisation is likely to be more suitable. In general, the greater the frequency of your customers’ visits, the more relevant any previous data about that customer is likely to be, and the experience you create for that customer can be more relevant as a consequence.

On the other hand, for websites that focus on a single engagement, where repeated engagement is unlikely or infrequent, personalisation is likely to be far less effective. Those organisations are likely to have limited data about the user and, consequently, it will be more difficult to create highly relevant experiences. Depending on what model your organisation operates, you might decide that your website is more or less suited to personalisation.

Implementing and maintaining personalisation comes with considerable costs. You should only invest in personalisation if you can demonstrate that the benefits will outweigh the costs involved. The underlying hypothesis of personalisation is that delivering a more relevant experience to a user will increase the likelihood of them converting. As with all hypotheses, this should be tested and validated. Experimentation and testing will allow you to prove the value of personalisation for your business, so that is where you should start.

Personalisation requires a deep understanding of user behaviour. More so than in A/B testing, we need to understand not just why users aren’t converting, but also how segments of users vary in their motivation, ability and trigger. If your organisation is still at the lower levels of experimentation and conversion optimisation maturity, then it will be difficult to implement personalisation experimentation in a way that is effective and manageable. A good way to think of it is as a higher level of experimentation maturity that you should explore once you have exhausted the gains that could be had from general experimentation and conversion rate optimisation.

What is a realistic expectation for personalisation for my business?

It doesn’t have to be the 1-to-1 highly granular customisation that people tend to think it is. There are many different ways to approach personalisation and the approach that is best for your business will depend on a number of factors.

In order to start your discussions about personalisation, here are a few different types that you may want to explore:

  • Behaviour-based personalisation – This is a great place to start as it has a low barrier to entry. Generally, this type of personalisation is based on the user’s behaviour on the site during their current session – for example, altering the content that you show the user when they return to the homepage based on the types of pages they have visited in this session (or across multiple sessions, using cookies). See the sketch after this list for a simple example.
  • Context-based personalisation – This is where the user’s experience is personalised based on the context of their visit to the site. A basic example of this is personalising landing pages based on the user’s PPC search term, the email they clicked through, or the display ad they’ve clicked. This is more commonly known as segmentation, but really this is just another type of personalisation. This can be a good step towards defining the important audiences/segments that would then feature in more advanced personalisation.
  • Attribute-based personalisation – This is what most people think of when they think about personalisation: using prior knowledge or attributes about a user to personalise their experience. This type generally requires more advanced technology to connect sources of data about a user together in a way that creates what’s known as a Dynamic Customer Profile for each user. This profile will contain all the possible attributes around which an experience can be personalised to that specific user.
  • User-led personalisation – Not all personalisation has to be invisible to the user. In fact, it could be argued that personalisation is more effective when the user can see it happening and is aware that the site is being customised to them. Netflix users know that movie recommendations are based on what they’ve already watched, just as Amazon’s product recommendations are based on what you’ve previously shown an interest in purchasing. This feels more compelling than if you were just shown recommended products without reason.
  • Personalisation via predictive modelling – This is the realm of AI and machine learning, where models can be used to assign a user to the best guess ‘lookalike’ audience based on their first few actions on the site. For example, users that visit the ‘Sale’ section of a site within the first three clicks could be assumed to fit in a ‘bargain hunter’ audience. Then any previous learnings about how to effectively convert bargain hunters could be applied to personalise the experience for this user.
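To make the first of those types concrete, here is a minimal sketch of behaviour-based personalisation as a simple server-side rule: choose the homepage content block from the page categories the user has viewed this session. The categories and content blocks are hypothetical.

```python
# Behaviour-based personalisation: choose homepage content from the page
# types viewed this session. Categories and content blocks are hypothetical.
DEFAULT_BLOCK = "bestsellers"
CONTENT_BY_CATEGORY = {
    "sale": "current-offers",
    "menswear": "new-in-menswear",
    "womenswear": "new-in-womenswear",
}

def homepage_block(viewed_page_categories):
    """Return the content block for the category viewed most this session."""
    counts = {}
    for category in viewed_page_categories:
        counts[category] = counts.get(category, 0) + 1
    if not counts:
        return DEFAULT_BLOCK
    top_category = max(counts, key=counts.get)
    return CONTENT_BY_CATEGORY.get(top_category, DEFAULT_BLOCK)

print(homepage_block(["sale", "menswear", "sale"]))  # -> "current-offers"
print(homepage_block([]))                            # -> "bestsellers"
```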

So, will 2018 finally be the ‘year of personalisation’?

I expect we will see a lot more case studies emerging of personalisation proving successful as more organisations start seeing the rewards of their investment in this area. If nothing else, I’d expect 2018 to be the year that organisations individually make their decision whether to invest or not invest in personalisation in a serious way.

Personalisation isn’t going to be suitable for everyone. The dream of 1-to-1 personalisation that runs itself might remain a dream for the majority, but taking the first steps towards investigating its potential is an exercise that every organisation should undertake. As preparation, plotting your current position on our experimentation maturity model will help you to plan the steps you need to take to be ready when the time comes.

From quick wins to cultural shifts – understanding the experimentation maturity model

There are many ways you could attempt to measure conversion optimisation and experimentation maturity. At Conversion.com we work with businesses and teams at all levels of conversion maturity – from businesses just starting out with conversion optimisation that have never launched a test, to businesses with growth and optimisation teams of hundreds of people. From this experience we’ve built up a good picture of what defines maturity.

Our model for maturity focuses on measuring strategic maturity. We believe conversion optimisation maturity shouldn’t be limited by the size of your organisation, team or budget. Any organisation, armed with an understanding of what maturity looks like, where they are currently and what level they would like to reach, should be able to reach the higher stages of experimentation maturity.

For this reason our model does not include basic measures of scale, such as the number of tests launched per month or the size of the experimentation team. Nor does it refer to any specific tools or pieces of technology as requirements. In defining this model we wanted to keep things simple: to create a model for maturity that helps to start conversations, both with our clients and in any team serious about putting experimentation at the heart of their business.

Our model measures maturity against three key scales: experimentation goals, experimentation strategy and data and technology.

Experimentation goals

What are the goals of your optimisation programme?
If you’re just starting to explore experimentation and conversion optimisation, you might have the goal for your programme of simply getting a test live. At the other end of the spectrum, more businesses are emerging now where the goal of experimentation is to be a driving force in the overall strategy of the business. The goals that we set for experimentation in our organisations, and our ambition in this area, set the tone for how we approach and deliver experimentation. Organisations that have embraced experimentation set more ambitious goals, and these goals require a more mature approach to achieve them. That’s why evaluating the goals for experimentation within your own organisation is the best place to start when evaluating your place on the maturity scale.

Developing your maturity in this area involves shifting the scope of your goals and developing alignment of the goals of experimentation with the overall goals of your business. Moving from goals being about short-term results and impact on KPIs, towards being about answering business questions and informing business decisions and strategy.

It’s important to make a distinction between reality and ambition when trying to plot your current position in this scale. Consider the role that experimentation currently plays in your organisation and how you are currently setting the goals, rather than what you’d like to be your goal for experimentation in an ideal world. The maturity model is most useful as a tool for assessing where you are now, where you want to be in the future, and what needs to change to close the gap between the two.

Experimentation strategy

Where does your strategy for experimentation come from?
Experimentation goals and experimentation strategy are closely linked, with strategy being how you achieve the goals you’ve set. If you are just starting to explore experimentation, you may not have thought too much yet about an overall strategy. Early on, experimentation strategy tends to be largely tactical in nature, with ideas generated on an ad-hoc basis and experiment prioritisation based on most urgent priority or a simple impact/ease model. Each experiment is treated as an individual exercise.

Advanced optimisation teams plan their strategy for achieving their optimisation goals across both the short-term and long-term. Long-term strategic planning should focus on prioritisation at the high level of goals and priorities. Conversion optimisation is an ongoing process. It’s not possible to do everything at once, and mature teams plan and prioritise the areas that they will focus on right now and those that they’ll focus on later in the year. In this way they can keep their focus narrow and ensure there is a clear plan for achieving their goals.

Advanced optimisation teams view testing not as a tool for increasing conversion rates but as a tool for answering questions. Starting with the big picture, they identify the business questions that need to be answered. They then break these problems down to define the tests and research that they need to complete to validate their hypotheses and answer that question.

As we move up the maturity stages, optimisation strategy becomes more thematic. Experiments are considered now as one tool for exploring a specific theme or conversion lever. At this level, experimentation is organised as a series of projects, each made up of a combination of targeted user-research pieces and experiments. These projects align to business strategy, and experimentation starts to play a leading role in overall business strategy.  

Data & technology strategy

How do you detect and measure the things that matter?
The quality of insight gained from experimentation is directly correlated to the quality of data that you collect about what happened. If your goal is just to get some experiments live there is probably less emphasis on ensuring those experiments have a solid grounding in data. Ensuring the data the experiments produce when they do run is reliable and actionable can often be more of an afterthought. Advanced optimisation teams will be a lot more deliberate, with data and insight playing leading roles in generating test hypotheses, and experiment data being a valuable source of insight for the business and the people in it. Maturity here is being confident in your data so that you can challenge it, ask probing questions of experiment impact, and be able to confidently produce the answers.

Technology plays a key role in this, but is only as good as the strategy for using it. The specific tools you use aren’t as important, for example, as your ability to connect your tools and data sets together. A set of simple but connected tools can deliver greater quality of insight than one advanced but isolated tool. Start with your experimentation tool, and connect it to any other tools you have such as surveys, session recording and heatmaps. In particular, connect it to your back-end reporting systems so that the impact of experiments can be measured against the KPIs that really matter, and that people look at on a daily basis.

Maturity levels and where you place

Now that we’ve explored the 3 scales that we use to measure maturity we can define approximate levels of maturity to give us an overall scale and a tool for evaluating our own place. Really though, maturity is a continuous scale rather than something discretely split into levels. When reviewing the levels below you may place yourself at different levels for each of the 3 scales. This is very common. There is often one part of our approach that we know is probably holding us back – a weak link in the chain. This model should help formalise and pinpoint that weakness and start the conversations about how to overcome it.

Maturity model levels

If you’re looking to develop the maturity of your experimentation and conversion optimisation strategy then we’d be happy to help. Just drop an email to hello@conversion.com and we’ll organise a free maturity consultation with one of our team.

CRO is like poker

Conversion rate optimisation (CRO) and poker have a lot of similarities, and it’s more than just the opportunity to either make or lose a lot of money.

 

Anyone can play

Anyone can take a seat at a poker table and play a few hands. The game is relatively easy to pick up and there really isn’t any prerequisite knowledge needed apart from knowing how a deck of cards works.

The same can be said of CRO. There are plenty of tools out there that will allow you to start doing the basics of CRO in a couple of hours. Your free Google Analytics account can give you a pretty good understanding of where people are abandoning your site. Sign up for an Optimizely account and you can start running your first A/B tests as soon as you add the code to your pages.

The problem is, because it’s so easy to start doing something that feels like CRO, many companies think they’re doing CRO already, so they don’t seek help to do it better. After all, everyone starts playing with the assumption that they will win. But only the players willing to invest adequate time and even money into getting better will make consistent returns in the long run. That might mean reading up on the theory, looking at what others have done to be successful, or even getting professional help.

Anyone can win the odd hand

The reason people get addicted to poker is that from time to time they probably will win a big hand and make some money. The problem is that over the long run the relatively infrequent big wins will be cancelled out by the all-too-frequent losses.

The same is true of CRO. Anyone can run a test and it’s within the realms of possibility that you might just get a winner too, maybe even a big one at that. We know from experience that small changes to sites can have a big impact, so you certainly can stumble upon these impactful changes.

If you want to be making a sustained impact on your conversion rate over time though, you’ll need a CRO strategy in place that can deliver these big wins on a regular basis.

Over time, a data-driven strategy will deliver better results

In poker a beginner’s luck will run out. It doesn’t matter too much what happens hand to hand, it matters what happens over the long-run – over hundreds of hands. A successful poker player adopts strategies that give them statistically better odds of winning. Over time, this statistical advantage is what means they are still there at the final table, with the biggest stack of chips. They may throw a few big plays here and there, but the majority of play is about being smart and using the data available to make good decisions consistently.

In CRO each split-test we run is like a hand of poker for the poker player. Being successful at CRO is not necessarily about getting a big uplift in one test, nor is it about being successful with every test you run. Being successful at CRO is about using the data you have available to you to devise testing strategies that deliver continuous improvement over time. There may be the odd test along the road that does deliver a 20, 30, 40% uplift in conversion rate.

The mark of a good CRO professional, however, is not getting that 40% winner, it’s what they do after that 40% winner to iterate on it and go further. It’s how they learn and adapt when a test doesn’t deliver an uplift to turn the data from that losing hand into a winning hand next time.

Finally, you play your opponent, not the cards

This is a well known mantra of poker and it stems from the fact that you have little control over what cards you’re dealt so can’t rely on good cards to win hands. Instead, by gathering data on your opponent such as their play style – how they play hands in which they win and how they play hands in which they lose for example – you can devise strategies to beat them no matter what hand you’ve been dealt.

This is true in CRO, although I wouldn’t suggest that you think of your potential customers as your opponents necessarily.

You might not have much control over the hand you’re dealt in terms of the product you’re selling or the service you’re offering. What you can control is how you use what you’ve been dealt, and it’s essential to understand how your visitors think so that you can decide how best to influence them using what you have. Likewise, there is only so much that web analytics data can tell you about why visitors are abandoning your checkout. You need to understand the motivations and thought processes of visitors at each stage of your funnel to know how to make them take the action you want.

CRO and poker have the same appeal: the simplicity of the objective – getting people to buy or getting people to fold; the potential for great returns if you’re successful; the thrill of getting that big uplift in a test or winning that big hand. Neither is easy, though, and both need a lot of time and effort invested to do well.

There are a lot more unsuccessful poker players than successful ones as a result, and I think the same is probably true in CRO. Hopefully this post has given you a good idea of what makes the difference.

Mine your spam email – it’s full of tips on how to be more persuasive.

Spam email copywriters have to work hard. They are the illegal street traders of the email world, flogging fake meds and pushing casino offers down the alley that is your spam folder.

You can’t succeed in the cut-throat world of spam without using a few clever tricks and persuasion techniques, and the spam folder can be a veritable gold mine of inspiration and ideas for how to be more persuasive.

To demonstrate, here is a screenshot of my spam folder. This covers about a week.

[Screenshot: my spam folder]

Almost every email is using one or more persuasion techniques to persuade me to click. Here are my favourites:

Making the sender a person

Just under half of these emails claim to be sent from a person rather than a company. The sender column in each case shows the full name of a person. This is an effective persuasive technique for a number of reasons.

  • A person’s full name adds legitimacy, no matter what the content of the email.
  • A person’s name, rather than the company name, suggests this is a specific member of staff getting in touch with me directly.
  • Names have associated familiarity. For example, the second email is from Amber. Perhaps I met someone called Amber recently. This could be her getting in touch with me again. It’s worth a quick click just to be sure.
  • All the names have something in common – they’re women’s names. I’d be surprised if targeting a man with emails from what appear to be women was an accident.

In a sea of emails where the senders are companies, a person’s name immediately distinguishes that email as more worthy of my attention. In the spam email business, attention equals clicks.

Outside of spam emails, giving your business a human face (and name) can be just as effective. On-site customer service is an area where this works well. Live-chat popups now frequently show the name, and sometimes even a friendly picture, of the agent you’ll be talking to. If you’re a lead generation business, a worthwhile test could be to make your contact form more personal, with names and photos of your service team. At Conversion.com we carry out a lot of email surveys, and we’ll always ask for a customer service agent’s name to use as the sender of our emails. It looks less like an automated email, and this generates a higher response rate.

Addressing your customers by name

At some point I have given my first name to the people over at Gala Bingo and 888.com. It is good to see that it’s being put to good use. They have both used my name as the first word in their subject lines.

[Screenshot: 888.com subject line starting with my first name]

[Screenshot: Gala Bingo subject line starting with my first name]

We are all primed to notice mentions of our own name, whether spoken or written. Most of us will at some point have found ourselves suddenly listening in on someone’s conversation because we heard them mention our name. It doesn’t even have to be our name – often a word that just sounds similar has the same effect.

When scanning this long list of emails, my first name is bound to stand out and grab my attention. Spammers know this is an effective strategy. They are so keen to use it that they will even gamble on the part before the @ in your email address being your name and address you by that. My full email address would still stand out – the digital equivalent of my name – and chances are I will read the subject line. Quite an achievement when most of these emails would normally be deleted before they are even seen.

A customer’s name is a powerful persuasive weapon when used effectively. The customer experience immediately feels more personalised when names are used. If you can personalise the content at the same time then you’re in a very strong position.

It’s often stated as best practice when collecting customer information to remove as many form fields as possible. Many sign-up forms are now just an email address and a password, with no name field. Whilst this may get you a few extra sign-ups at first, your effectiveness at converting those sign-ups to sales may suffer if you don’t know the customer’s name. The safest bet is to split-test it and measure the conversion rate to sale of the name vs no-name cohorts.
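If you do run that test, it’s worth checking whether any difference between the cohorts is statistically meaningful before acting on it. Below is a minimal sketch of a two-proportion z-test in Python – the cohort sizes and sale counts are invented purely for illustration, not real results.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test comparing two conversion rates (pooled variance)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_a, rate_b, z, p_value

# Hypothetical cohorts: sign-up form without a name field vs with one.
no_name = {"visitors": 5000, "sales": 380}
with_name = {"visitors": 5000, "sales": 450}

rate_a, rate_b, z, p = two_proportion_z_test(
    no_name["sales"], no_name["visitors"],
    with_name["sales"], with_name["visitors"],
)
print(f"no-name {rate_a:.1%}, with-name {rate_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```

A small p-value would suggest the difference in conversion to sale between the two cohorts is unlikely to be down to chance alone.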

Using a question to generate an answer

The third email down in my list (apparently from Eva Webster) is asking me a direct question.

[Screenshot: subject line from Eva Webster asking a direct question]

The question stands out. This particular question is phrased like a challenge, and the promise of a challenge might actually be sufficient to get my attention. People often check their emails when bored, so it doesn’t take much to get their initial interest. Plus it’s human nature when challenged in some way to want to prove that you are up to the task.

Using questions in your copy is an effective technique in general because, when someone asks a question, you can’t help but instantly think of your answer. In the case of spam email this might just be enough to stop you in your tracks as you scan down your inbox. Using a question as a headline can be an effective way to capture your reader’s attention and establish their mindset as ready to engage with the rest of your content.

Questions work particularly well in certain industries. Take cosmetics, for example. There’s a mould for cosmetics-industry TV adverts where they start with a model asking you a direct question such as “Do you want longer, fuller lashes?”. Starting with a question is so effective here because it plays on the insecurities of the audience. Even if you didn’t want longer, fuller lashes, chances are you’re now aware that maybe you should. Then, luckily for you, the rest of the advert tells you exactly how to get those longer, fuller lashes you didn’t know you needed. It’s a very effective way to capture the customer’s attention and get them thinking about your product.

Using fear of missing out to motivate

From the sheer volume of spam they are sending my way, it does seem like 888.com are determined to try every trick in the book in the hope that one might work on me. Here is an example of them using the scarcity principle to try to provoke a response.

[Screenshot: 888.com email using scarcity]

This is nicely phrased to give the impression that I am wasting a great opportunity. The “Hurry!” at the end is both commanding me to take action and emphasising that there is a limited timeframe involved. This email is much more likely to get my attention than one where there is no sense of urgency.

This fear of missing out is not a new concept, and examples of its use are everywhere. Low stock indicators on ecommerce sites, next-day delivery countdown timers and simple limited time offers are fairly commonplace. Some fashion retailers will even have a “last chance to see” section of the site that only contains items that you might miss out on if you don’t buy them now.

Nearly all of the emails in this list use one technique or another to try to persuade me to click. Some of the best use multiple techniques combined. Here are the four key techniques we’ve seen in just this small selection of emails.

  • Making the sender a person
  • Addressing your customers by name
  • Using a question to generate an answer
  • Using fear of missing out to motivate

Why not take a look through your junk mail folder and see how many different persuasion techniques you can spot being used?

Where else can we see persuasion techniques in action?

We’ve used my spam folder here as an example, but persuasion techniques like these are in use everywhere you look. Next time you find yourself compelled to open a particular email, influenced by a certain advert, or buying something online, ask yourself these quick questions to see which persuasion techniques you were influenced by.

  • What was the first thing about this that caught my attention?
  • What did I see next that made me engage further?
  • What about this eventually made me take action?

When you find persuasion techniques working on you, look for ways you can use them in your own marketing. After all, if they’ve worked on you they will probably work on other people too.

Spotting patterns – the difference between making and losing money in A/B testing.

Wrongly interpreting the patterns in your A/B test results can lose you money. It can lead you to make changes to your site that actually harm your conversion rate.

Correctly interpreting the patterns in your A/B test results will mean you learn more from each test you run. It will give you confidence that you are only implementing changes that will deliver real revenue impact, and it will help you turn any losing tests into future winners.

At Conversion.com we’ve run and analysed hundreds of A/B and multivariate tests. In our experience, the result of a test will generally fall into one of five distinct patterns. We’re going to share these five patterns here, and we’ll tell you what each pattern means in terms of what steps you should take next. Learn to spot these patterns, follow our advice on how to interpret them, and you’ll make the right decision more often – making your testing efforts more successful.

To illustrate each pattern, we’ll imagine we have run an A/B test on an e-commerce site’s product page and are now looking at the results. We’ll compare the increase or decrease in conversion rate that the new version of the page delivered against the original, on a page-by-page basis, for the four steps a visitor goes through to complete their purchase (Basket, Checkout, Payment and finally Order Confirmation).

To see the pattern in each case, we’ll plot a simple graph of the conversion rate increase/decrease to each page, and look at how that change develops as we move through the site’s checkout funnel.
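As a rough illustration of the calculation behind these graphs, here is a short Python sketch that works out the per-step change in conversion rate from raw visitor counts. The numbers are invented for the example; in practice they would come from your testing tool or analytics platform.

```python
# Per-step conversion rate comparison for an A/B test on a product page.
# Visitor counts below are illustrative only.

funnel_steps = ["Basket", "Checkout", "Payment", "Order Confirmation"]

# Visitors who saw each variation of the product page, and how many of
# them reached each subsequent step of the funnel.
control   = {"visitors": 10_000, "reached": [4_000, 2_400, 1_800, 1_200]}
variation = {"visitors": 10_000, "reached": [4_400, 2_640, 1_980, 1_320]}

for step, c, v in zip(funnel_steps, control["reached"], variation["reached"]):
    cr_control = c / control["visitors"]
    cr_variation = v / variation["visitors"]
    uplift = (cr_variation - cr_control) / cr_control * 100
    print(f"{step:<20} control {cr_control:6.1%}  variation {cr_variation:6.1%}  uplift {uplift:+.1f}%")
```

Plotting the uplift column for each step gives the shape of the patterns described below; the made-up numbers here would produce a flat +10% line, i.e. the first pattern.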

1. The big winner

This is the type of test result we all love. Your new version of the page sends x% more visitors to the next step than the original, and this x% increase carries through uniformly all the way to Order Confirmation.

The graph of our first result pattern would look like this.

[Graph: the big winner]

We see 10% more visitors reaching each step of the funnel.

Interpretation

This pattern is telling us that the new version of the test page successfully encourages 10% more visitors to reach the next step, and from there on they convert just as well as existing visitors. The overall result would be a 10% increase in sales. It is clearly logical to implement this new version permanently.

2. The big loser

The negative version of this pattern, where each step shows a roughly equal decrease in conversion rate, is a sign that the change has had a clear negative impact. All is not lost, though: an unsuccessful test can often be more insightful than a straightforward winner, as the negative result forces you to re-evaluate your initial hypothesis and understand what went wrong. You may have stumbled upon a key conversion barrier for your audience, and addressing this barrier in the next test could lead to the positive result you have been looking for.

Graphically this pattern will look like this.

[Graph: the big loser]

We see 10% fewer visitors reaching each step of the funnel.

Interpretation

As the opposite of the big winner, this pattern is telling us that the new version of the test page causes 10% fewer visitors to reach the next step, and from there on they convert just as well as existing visitors. The overall result would be a 10% decrease in sales. You would not want to implement this new version of the page.

3. The clickbait

“We increased clickthroughs by 307%!” You’ve probably seen sensational headlines like this thrown around in the optimisation industry. Hopefully, like us, you’ve developed a healthy sense of cynicism when you read results like this. The first question I always ask is “But how much did sales increase by?”. Chances are, if the result being reported fails to mention the impact on final sales, what they actually saw in their test results was the pattern we’ve affectionately dubbed “the clickbait”.

Test results that follow this pattern show a large increase in the conversion rate to the next step, but the improvement quickly fades away in the later steps, and finally there is little or no improvement to Order Confirmation.

Graphically this pattern will look like this.

[Graph: the clickbait]

Interpretation

This pattern catches people out because the large improvement to the next step feels as if it should be a positive result. However, often this pattern merely shows that the new version of the page is pushing a large number of visitors through to the next step who have no real intention of purchasing. This is illustrated by the sudden large drop in the conversion rate improvement at the later steps, when all of the unqualified extra traffic abandons the funnel.

As with all tests, whether this result can be deemed a success depends on the specifics of the site you are testing on and what you are looking to achieve. If there are clear improvements to be made on the next step(s) of the funnel that could help to convert the extra traffic from this test, then it could make sense to address those issues first and then re-run this test. However, if these extra visitors are clicking through by mistake or because they are being misled in any way then you may find it difficult to convert them later no matter what changes you make. Instead, you could be alienating potential customers by delivering a poor customer experience. You’ll also be adding a lot of noise to the data of any tests you run on the later pages as there are a lot of extra visitors on those pages who are unlikely to ever purchase.

4. The qualifying change

This fourth pattern is almost the reverse of the clickbait: here we actually see a drop in conversion to the next step but an overall increase in conversion to Order Confirmation.

Graphically this pattern looks like this.

[Graph: the qualifying change]

Interpretation

Taking this pattern as a positive can seem counter-intuitive because of the initial drop in conversion to the next step. Arguably, though, this type of result is as good as, if not better than, a big winner from pattern 1. Here the new version of the test page is having what’s known as a qualifying effect: visitors who may otherwise have abandoned at a later step in the funnel are leaving at the first step instead. The visitors who do continue past the test page are more qualified and therefore convert at a much higher rate, which explains the positive result to Order Confirmation.

Implementing a change that causes this type of pattern means the visitors remaining in the funnel have now expressed a clearer desire to purchase. If visitors are still abandoning at a later stage in the funnel, the likelihood is that a specific weakness on one of those pages is causing it. Having removed a lot of the noise from our data, in the form of the unqualified visitors, we are left with a much more reliable measure of the effectiveness of the later steps in the funnel, which makes identifying weaknesses in the funnel itself far easier.

As with the clickbait, there are circumstances where a result like this may not be preferable. If you already have very low traffic in your funnel, then reducing it further could make it even more difficult to get statistically significant results when testing the later pages of the funnel. You may want to look at tests to drive more traffic to the start of your funnel before implementing a change like this.

5. The messy result

This final pattern is often the most difficult to extract insight from, as it describes results that show very little pattern whatsoever. Here we often see both increases and decreases in conversion rate at the various steps in the funnel.

[Graph: the messy result]

Interpretation

First and foremost, a lack of any discernible pattern in your split-test results can be a tell-tale sign of insufficient data. In the early stages of an experiment, when data levels are low, it is not uncommon to see results fluctuate up and down. Reading too much into results at this stage is a common pitfall. Resist the temptation to check your experiment results too frequently – if at all – in the first few days. Even apparently strong patterns that emerge this early can quickly disappear with a larger sample.

If your test has a large volume of data and you’re still seeing this type of result, the likelihood is that your new version of the page is delivering a combination of the effects from the clickbait and the qualifying change patterns: qualifying some traffic while simultaneously pushing more unqualified traffic through the funnel. If your test involved making multiple changes to a page, try testing the changes separately to pinpoint which individual changes are causing the positive impact and which are causing the negative impact.
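Before reading a pattern into noisy results, it can also help to estimate roughly how much data you need. The sketch below uses the standard two-proportion sample-size approximation; the baseline conversion rate and target uplift are illustrative assumptions, not recommendations.

```python
# Rough sample-size estimate: visitors needed per variation before a
# given relative uplift is distinguishable from noise.
from math import ceil
from statistics import NormalDist

def visitors_per_variation(baseline_cr, relative_uplift, alpha=0.05, power=0.80):
    """Approximate sample size for a two-proportion comparison."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_uplift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. a 4% baseline conversion to Order Confirmation and a hoped-for
# 10% relative uplift needs roughly 40,000 visitors per variation.
print(visitors_per_variation(baseline_cr=0.04, relative_uplift=0.10))
```

If your test is nowhere near that kind of volume, a messy pattern is more likely to be sampling noise than a genuine mix of effects.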

Key takeaways

The key point to take from all of these patterns is the importance of tracking and analysing results at every step of your funnel when you A/B test, rather than just the next step after your test page. It is easy to see how, if only the next step were tracked, many tests could be falsely declared winners or losers. In short, that is losing you money.

Detailed test tracking will allow you to pinpoint the exact step in your funnel at which visitors are abandoning, and how that differs for each variation of the page you are testing. This can help to answer the more important question of why they are abandoning. If the answer is not obvious, running some user tests or watching recorded user sessions of your test variations can help you develop these insights and come up with a successful follow-up test.

There is a lot more to analysing A/B tests than just reading off a conversion rate increase to any single step in your funnel. Often, the pattern of the results can reveal greater insights than the individual numbers. Avoid jumping to conclusions based on a single increase or decrease in conversion to the next step and always track right the way through to the end of your funnel when running tests. Next time you go to analyse a test result, see which of these patterns it matches and consider the implications for your site.