
The Perception Gap: Can we ever really know what users want?

Have you ever heard of Mazagran? A coffee-flavoured bottled soda that Starbucks and Pepsi launched back in the mid-1990s? No, you haven’t, and there is a good reason for that!

Starbucks' market research correctly told them that customers wanted a cold, sweet, bottled coffee beverage they could conveniently purchase in stores.

So surely Mazagran was the answer?

Evidently not! Mazagran was not what the consumers actually wanted. The failure of this product was down to the asymmetry that existed between what the customers wanted and what Starbucks believed the customer wanted.

Despite Starbucks conducting market research, this gap in communication – often known as the perception gap – still occurred. Luckily for Starbucks, Mazagran was a stepping stone to the huge success of the bottled Frappuccino: what consumers actually wanted.

What is the perception gap and why does it occur?

Perception is seen as the (active) process of assessing information in your surroundings. A perception gap occurs when you attempt to communicate this assessment of information but it is misunderstood by your audience.

Assessing information in your surroundings is strongly influenced by communication. Because humans communicate in different ways, a perception gap can occur whenever someone's communication style differs from your own. These gaps also vary in size, depending on the different levels of value that you, or your customers, attach to each factor. In addition, a number of natural cognitive biases widen the perception gap by leading us to believe we know what other people are thinking far better than we actually do.

Perception gaps in ecommerce businesses

Perception gaps mainly occur in social situations, but they can also heavily impact ecommerce businesses, from branding and product to marketing and online experience.

Perception gaps within ecommerce mainly appear because customers form opinions about your company and products based on their broader experiences and beliefs. One thing is for sure: perception gaps certainly occur between websites and their online users. Unfortunately, they are often the start of vicious cycles, where small misinterpretations of what the customer wants or needs are made worse when we try to fix them. Ultimately, this means we lose out on turning visitors into customers.

Starbucks and Pepsi launching Mazagran is an example of how perception gaps can lead to the failure of new products. McDonald's launching their "Good to Know" campaign is an example of how understanding this perception gap can lead to branding success.

This myth-busting campaign was launched off the back of comprehensive market research using multiple techniques. McDonald's understood the difference between what they thought of themselves (fast food made with high-quality ingredients) and what potential customers thought of them (chicken nuggets made of chicken beaks and feet). Knowing that this perception gap existed allowed them to address it head-on in the campaign, which successfully changed users' perceptions of their brand.

For most digital practices, research plays an important part in allowing a company or brand to understand their customer base. However, conducting and analysing research is often where the perception gap begins to form.

For example, say you are optimising a checkout flow for a retailer. You decide to run an on-site survey to gather insight into why users may not be completing the forms, and therefore are not purchasing. After analysing the results, it seems the top reason users are not converting is that they find the web form confusing. Now this is where the perception gap is likely to form. Do users want the form to be shortened? Do they want more clarity or explanation around form fields? Is it the delivery options that they may not understand?

Not being the user means we will never fully understand the situation the user is in. Making assumptions about it only widens the perception gap.

Therefore, reducing the perception gap is surely a no-brainer when it comes to optimising our websites. But is it as easy as it seems? 

In order to reduce the perception gap you need to truly understand your customer base. If you don’t, then there is always going to be an asymmetry between what you know about your customers and what you think you know about your customers.

How to reduce perception gaps

Sadly, perception gaps are always going to exist, due to our interpretation of the insights we collect and the fact that we ourselves are not the actual user. However, the following tips may help you get the most out of your testing and optimisation by reducing the perception gap:

  1. Challenge assumptions – too often we assume we know who our customers are, how they interact with our site and what they are thinking. Unfortunately, these assumptions can harden over time into deeply held beliefs about how users think and behave. Challenging them leads to true innovation and ideas that may never have surfaced otherwise. With this in mind, treat every assumption as a question to be answered by the research you conduct.
  2. Always optimise based on two sources of supporting evidence – the perception gap is more likely to occur when research into a focus area is limited or based on a single source of insight. Taking a multiple-measure approach makes insights more valid and reliable.
  3. Read between the lines – research revolves around listening to your customers but more importantly it is about reading between the lines. It is the difference between asking for their responses and then actually understanding them. As Steve Jobs once said “Customers don’t know what they want”; whether you believe that or not, understanding their preferences is still vital for closing the perception gap.
  4. Shift focus to being customer-led – being customer-led, as opposed to product-led, places a higher value on researching your customers. With more emphasis on research, you should build a greater knowledge and understanding of your customer base, which in turn should reduce the perception gap that would otherwise form.


The perception gap is something that is always going to exist and is something we have to accept. Conducting research, and a lot of it, is certainly a great way to reduce the perception gap that will naturally occur. However, experimentation is really the only means to truly confirm whether the research and insight you collected about your customer base are valid and significantly improve the user experience. One quote that has always made me think is by Flint McGlaughlin, who said "we don't optimise web pages, we optimise for the sequence of thought". This customer-led view of experimentation can only result in success.

How to measure A/B tests for maximum impact and insight

One of the core principles of experimentation is that we measure the value of experimentation in impact and insight. We don't expect to get winning tests all the time, but if we test well, we should always expect to draw insights from them. The only real 'failed test' is a test that doesn't win and teaches us nothing.

In our eagerness to start testing, it's common to come up with an idea (hopefully at least based on data, with an accompanying hypothesis!), get it designed and built, and set it live. Most of the thought goes into the design and execution of the idea; often far less goes into how to measure the test to ensure we get the insight we need.

By the end of this article you should have:

  • A strong knowledge of why tracking multiple goals is important
  • A framework to structure your goals, so you know what’s relevant for each test

In every experiment, it's important to define a primary goal upfront – the goal that will ultimately judge the test a win or a loss. It's rarely enough to track just this one goal, though. If the test wins, great – but we may not fully understand why. Similarly, if the test loses and we only track the main goal, the only insight we're left with is that it didn't win. In that case we don't just have a losing test; we also lose the ability to learn – the second key measure of how we get value from testing. And remember, most tests lose!

If we don’t track other goals and interactions in the test we will miss the behavioural nuances and the other micro-interactions that can give us valuable insight as to how the test affected user behaviour. This is particularly important in tests where a positive result on the main KPI could actually harm another key business metric.

One example from a test we ran recently was for a camera vendor. We introduced add to basket CTAs on a product listing page, so that users who knew which product they wanted wouldn’t have to navigate down to the product page to purchase.

This led to a positive uplift in orders; however, it had a negative effect on average order value. The reason was that the product page was an important place for users to discover accessories for their products, including product care packages. Because the test encouraged users to add the main product directly, they were less inclined to buy accessories and add-ons. The margins on accessories and add-ons are far higher than on cameras, so a lower average order value driven by fewer accessory sales is definitely a negative outcome.

Insights from well-tracked tests should be a key part of how your testing strategy develops, as new learnings inform better iterations and open up new areas to test by revealing user behaviour you were previously unaware of.

In any test, there can be an almost endless number of things you could measure and the solution to not tracking enough shouldn’t be to track everything. Measure too much and you’ll potentially be swamped analysing data points that don’t have any value and you’ll curry no favour with your developers who have to implement all the tracking! Measure too little and you may miss valuable insights that could turn a losing test into a winning test. The challenge is to measure the right things for each test.

What to measure?

Your North Star Metric

It should go without saying that every test must align with the strategic goal of testing, and that strategic goal should have a clear, measurable KPI. For an ecommerce site it will likely be orders or revenue; leads for a lead-gen site; pages per visit or page scroll for a content site – and so on. This KPI is the key measure of whether your test succeeds or fails, which is why we call it the North Star metric. Regardless of whatever else happens in the test, if we can't move the needle on this metric, the test doesn't win. Unsurprisingly, this metric should be tracked in every test you run.

You’ll know if the test wins, but what other effects did it have on your site? What effect did it have on purchase behaviour and revenue? Did it lead to a decrease in some other metrics which might be important to the business?

The performance of the North Star metric determines whether or not your hypothesis is proven or disproven. Your hypothesis in turn should be directly related to your primary objective.

Guardrail Metrics

You should also be defining 'guardrail metrics'. These tend to be second-tier metrics tied to key business outcomes; if they perform negatively, they call into question how successful the test really is. Conversely, if the test loses but these perform well, it's probably a sign you're on the right track. They don't, on their own, define success or failure like the North Star metric does, but they contextualise it when reporting on the test.

For an ecommerce site, if we assume the North Star metric is orders, then two obvious guardrail metrics would be revenue and average order value. If we run a test that increases orders but, as a result, users buy fewer items or lower-value items (as in the example above), this would decrease AOV and could harm revenue.
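To make this concrete, here's a minimal sketch (with entirely hypothetical numbers) of summarising the North Star and guardrail metrics for a control and variant side by side – the kind of picture where orders move up while AOV and revenue move down:

```python
# Hypothetical test results: the variant wins on orders,
# but the guardrail metrics tell a fuller story.
control = {"visitors": 10_000, "orders": 400, "revenue": 48_000.0}
variant = {"visitors": 10_000, "orders": 440, "revenue": 46_200.0}

def summarise(cell):
    conversion = cell["orders"] / cell["visitors"]  # North Star: order rate
    aov = cell["revenue"] / cell["orders"]          # guardrail: average order value
    return conversion, aov, cell["revenue"]         # guardrail: revenue

for name, cell in [("control", control), ("variant", variant)]:
    conversion, aov, revenue = summarise(cell)
    print(f"{name}: conversion={conversion:.2%}, AOV=£{aov:.2f}, revenue=£{revenue:,.0f}")
```

With these made-up figures, conversion rises from 4.0% to 4.4%, but AOV falls from £120 to £105 and revenue drops – exactly the scenario where a test that "wins" on the North Star would still be questionable.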

Tests become much more insightful just by adding two more metrics. Not only can we see that the test drove more orders, we can also see the effect our execution had on the value and quantity of products being bought. This gives us the opportunity to change the execution of the test to address any negative impact on our guardrail metrics. In this sense, measuring tests effectively is a core part of an iterative test-and-learn approach.

At a minimum, you should be tracking your North Star metric and your guardrail metrics. These will tell you the impact of the test on the business's bottom line.

Your guardrail metrics will generally be closely related to your North Star metric.

Secondary Metrics

Some tests you run may only impact your North Star metric – a test on the payment step of a funnel is a good example, where the most likely outcome is simply more orders or fewer orders, and not much else. What you'll learn is whether that change pushed users over the line.

Most other tests, however, will have a number of different effects. Your test may radically change the way users interact with the page and measuring your tests at a deeper level than just the North Star and guardrail metrics will help you understand what effect the change has on user behaviour.

We work with an online food delivery company where meal deals are the main way customers browse and shop. Given the number of meal deals they have, one issue we found through our initial insights was that users struggled to navigate through them all to find something relevant. We ran a test introducing filtering options on the meal deal page, covering how many people the deal feeds, the types of food it contains, savings amounts and price points. Along with the key metrics, we also tracked every filter option in the test.

This test didn't drive any additional orders; in fact, few users interacted with the filter at all, suggesting it wasn't very useful in helping users curate the meal deals. However, we did notice that the users who did use it overwhelmingly filtered meal deals by price first, and by how many people the deal feeds second. So a 'flat' test – but now we know two very important pieces of information that users look for when selecting deals.

This in turn led to a series of tests around how we better highlight price and how many people the meal feeds at different parts of the user journey and on the meal deal offers themselves. These insights have helped shape the direction of our testing strategy by shedding light on user preferences. If we had only tracked the North Star and guardrail metrics, these insights would have been lost.

For each test you run, really think through the possible user journeys and interactions that could result from the test, and make sure you track them. This doesn't mean track everything – but start to see tests as a way of learning about your users, not just a way to drive growth.

Secondary metrics help contextualise your North Star and Guardrail metrics, as well as shed light on other behaviours.


If you've managed to track your North Star, guardrail and some secondary metrics in your tests, you're in a great place. One other thing to think about is how to segment your data. Segmenting your test results is hugely important, especially when different user groups respond differently on your site. Device is an obvious segment you should be looking at in every test. We've seen tests with double-digit uplifts on desktop that haven't moved the needle at all on mobile.

If your test involves introducing a new feature or piece of functionality that users can interact with, it's helpful to create a segment for users who interact with that feature. This will help shed light on how interaction with the new functionality affects user behaviour.
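As an illustration of why segmenting matters, here's a small sketch using made-up per-device numbers, showing how a blended result can hide a strong desktop uplift behind a flat mobile result:

```python
# Hypothetical test results, split by device and variant.
cells = [
    {"device": "desktop", "variant": "control", "visitors": 5_000, "orders": 200},
    {"device": "desktop", "variant": "test",    "visitors": 5_000, "orders": 235},
    {"device": "mobile",  "variant": "control", "visitors": 5_000, "orders": 150},
    {"device": "mobile",  "variant": "test",    "visitors": 5_000, "orders": 149},
]

def segment_uplift(rows, segment):
    """Relative uplift in conversion rate for one device segment."""
    rates = {r["variant"]: r["orders"] / r["visitors"]
             for r in rows if r["device"] == segment}
    return rates["test"] / rates["control"] - 1

for device in ("desktop", "mobile"):
    print(f"{device}: uplift = {segment_uplift(cells, device):+.1%}")
```

With these invented figures, desktop shows a +17.5% uplift while mobile is essentially flat – a pattern the blended top-line number would have completely obscured.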

Key takeaways

Successful tests are measured by impact and insight. The only 'failed' test is one that doesn't win and teaches you nothing. Insightful tests let you understand why a test performed the way it did, and mean you can learn, iterate and improve more rapidly, leading to better, more effective testing.

  • Define your North Star metric – The performance of this metric will define if the test succeeds or fails. This should be directly linked to the key goal of the test.
  • Use guardrail metrics – Ensure your test isn’t having any adverse effects on other important business metrics.
  • Track smaller micro-interactions – These don’t decide the fate of your test but they do generate deeper insight into user-behaviour that can inform future iterations.
  • Segment by key user groups – Squeeze even more insight from your tests by looking at how different groups of users react to your changes.

If you would like to learn more about our approach, get in touch today!

How to build an experimentation, CRO or AB testing framework

Everyone approaches experimentation differently. But there’s one thing companies that are successful at experimentation all have in common: a strategic framework that drives experimentation.

In the last ten years we’ve worked with start-ups through to global brands like Facebook, the Guardian and Domino’s Pizza, and the biggest factor we’ve seen impact success is having this strategic framework to inform every experiment.

In this post, you'll learn:

  • Why a framework is crucial if you want your experimentation to succeed
  • How to set a meaningful goal for your experimentation programme
  • How to build a framework around your goal and create your strategy for achieving it

We’ll be sharing the experimentation framework that we use day in, day out with our clients to deliver successful experimentation projects. We’ll also share some blank templates of the framework at the end, so after reading this you’ll be able to have a go at completing your own straight away.

Why use a framework? Going from tactical to strategic experimentation

Using this framework will help you mature your own approach to experimentation, make a bigger impact, get more insight and have more success.

Having a framework:

  • Establishes a consistent approach to experimentation across an entire organisation, enabling more people to run more experiments and deliver value
  • Allows you to spend more time on the strategy behind your experiments and less time on the "housekeeping" of managing your experimentation programme
  • Enables you to transition from testing tactically to testing strategically

Let’s explore that last point in detail.

In tactical experimentation, every experiment is an island – separate and unconnected from any other. Ideas generally take the form of solutions – "we should change this to be like that" – and come from heuristics (aka guessing), best practice or copying a competitor. There is very little guiding which experiments run where, when and why.

Strategic experimentation, on the other hand, is focused on achieving a defined goal and has a clear strategy for achieving it. The goal is the starting point – a problem with potential solutions explored through the testing of defined hypotheses. All experiments are connected and experimentation is iterative. Every completed experiment generates more insight that prompts further experiments as you build towards achieving the goal.

If strategic experimentation doesn't already sound better to you, then we should also mention the typical benefits you'll see as a result of maturing your approach in this way:

  • You'll increase your win rate – the percentage of experiments that are successful
  • You'll increase the impact of each successful experiment – on top of any conversion rate uplifts, experiments will generate more actionable insight
  • You'll never run out of ideas again – every conclusive experiment will spawn multiple new ideas

Introducing the experimentation framework

As we introduce our framework, you might be surprised by its simplicity. But all good frameworks are simple. There’s no secret sauce here. Just a logical, strategic approach to experimentation.

Just before we get into the detail of our framework, a quick note on the role of data. Everything we do should be backed by data. User research and analytics are crucial sources of insight used to build the layers in our framework, but the experiments we run using the framework are often the best source of data and insight we have. An effective framework should therefore minimise the time it takes to start experimenting. We cannot wait for perfect data to appear before we start, or try to get things right first time. The audiences, areas and levers we'll define in our framework come from our best assessment of all the data we have at a given time. They are not static or fixed: every experiment we run helps us improve and refine them, and our framework and strategy are updated continuously as more data becomes available.

Part 1 – Establishing the goal of your experimentation project

The first part of the framework is the most important by far. If you only have time to do one thing after reading this post it should be revisiting the goal of your experimentation.

Most teams don't set a clear goal for experimentation. It's as simple as that. Any strategy needs to start with a goal; otherwise, how can you differentiate success from wasted effort?

A simple test of whether your experimentation has a clear goal is to ask everyone in your team to explain it. Can they all give exactly the same answer? If not, you probably need to work on this. 

Don’t be lazy and choose a goal like “increase sales” or “growth”. We’re all familiar with the importance of goals being “SMART” (specific, measurable, achievable, relevant, time-bound) when setting personal goals. Apply this when setting the goal for experimentation.

Add focus to your goal with targets, measures and deadlines, and wherever possible be specific rather than general. Does “growth” mean “increase profit” or “increase revenue”? By how much? By when? A stronger goal for experimentation would be something like “Add an additional £10m in profit within the next 12 months”. There will be no ambiguity as to whether you have achieved that or not in 12 months’ time.

Ensure your goal for experimentation is SMART

Some other examples of strong goals for experimentation:

  • "Increase the rate of customers buying add-ons from 10% to 15% in 6 months."
  • "Find a plans and pricing model that can deliver 5% more new customer revenue before Q3."
  • "Determine the best price point for [new product] before it launches in June."

A clear goal ensures everyone knows what they’re working towards, and what other teams are working towards. This means you can coordinate work across multiple teams and spot any conflicts early on.

Part 2 – Defining the KPIs that you’ll use to measure success

When you’ve defined the goal, the next step is to decide how you’re going to measure it. We like to use a KPI tree here – working backwards from the goal to identify all the metrics that affect it.

For example, if our goal is “Add an additional £10m in profit within the next 12 months” we construct the KPI tree of the metrics that combine to calculate profit. In this simple example let’s say profit is determined by our profit per order times how many orders we get, minus the cost of processing any returns.

Sketching out a KPI tree is an easy way to decide the KPIs you should focus on

These three metrics then break down into smaller metrics, and so on. You can then decide which of the metrics in the tree you can most influence through experimentation; these become your KPIs for experimentation. In our example we've chosen average order value, order conversion rate and returns rate, as these can be directly impacted in experiments. Cost per return, on the other hand, might be more outside our control.
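As a rough sketch of this KPI tree in code (the figures and the exact decomposition are hypothetical), profit can be expressed in terms of the metrics we can influence through experimentation:

```python
# Hypothetical KPI tree for "add £10m profit in 12 months":
# profit = (orders × profit per order) − cost of processing returns,
# decomposed into the metrics experiments can actually move.
def profit(visitors, conversion_rate, avg_order_value, margin,
           returns_rate, cost_per_return):
    orders = visitors * conversion_rate
    gross = orders * avg_order_value * margin           # profit per order × orders
    returns_cost = orders * returns_rate * cost_per_return
    return gross - returns_cost

baseline = profit(10_000_000, 0.03, 80.0, 0.40, 0.10, 12.0)
# Experiment levers: conversion up 0.2pp, returns rate down from 10% to 8%.
improved = profit(10_000_000, 0.032, 80.0, 0.40, 0.08, 12.0)
print(f"baseline £{baseline:,.0f} → improved £{improved:,.0f}")
```

Writing the tree out this way makes the trade-offs visible: a small lift in conversion rate or a drop in returns rate compounds through the tree, which is exactly why these were chosen as the KPIs for experimentation.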

When you’re choosing KPIs, remember what the K stands for. These are key performance indicators – the ones that matter most. We’d recommend choosing at most 2 or 3. Remember, the more you choose, the more fragmented your experimentation will be. You can track more granular metrics in each experiment, but the overall impact of your experiments will need to be measured in these KPIs.

Putting that all together, you have the first parts of your new framework. This is our starting point – and it is worth the time to get this right as everything else hinges on this.

We present our framework as rows to highlight the importance of starting with the goal and working down from there.

Part 3 – Understanding how your audience impacts your KPIs and goal

Now we can start to develop our strategy for impacting the KPIs and achieving the goal. The first step is to explore how the make-up of our audience should influence our approach.

In any experiment, we are looking to influence behaviour. This is extremely difficult to do. It’s even more difficult if we don’t know who we’re trying to influence – our audience.

We need to understand the motivations and concerns of our users – and specifically how these impact the goal and KPIs we're trying to move. If we understand this, we can then focus our strategy on solving the right problems for the right users.

So how do we go about understanding our audience? For each of our KPIs the first question we should ask is “Which groups of users have the biggest influence on this KPI?” With this question in mind we can start to map out our audience.

Start by defining the most relevant dimensions – the attributes that identify certain groups of users. Device and Location are both dimensions, but these may not be the most insightful ways to split your audience for your specific goal and KPIs. If our goal is to “reduce returns by 10% in 6 months”, we might find that there isn’t much difference in returns rate for desktop users compared to mobile users. Instead we might find returns rate varies most dramatically when we split users by the Product Type that they buy.

For each dimension we can then define the smaller segments – the way users should be grouped under that dimension. For example, Desktop, Mobile and Tablet would be segments within the Device dimension.

You can have a good first attempt at this exercise in 5–10 minutes. At the start, accuracy isn’t your main concern. You want to generate an initial map that you can then start validating using data – refining your map as necessary. You might also find it useful to create 3 or 4 different audience maps, each splitting your audience in different ways, that are all potentially valid and insightful for your goal.

Map out your audiences by thinking about the relevant dimensions that could have the greatest influence on your KPIs and overall goal.

Once you have your potential audiences the next step would then be to use data to validate the size and value of these audiences. The aim here isn’t to limit our experiments to a specific audience – we’re not looking to do personalisation quite yet. But understanding our audiences means when we come to designing experiments we’ll know how to cater to the objections and concerns of as many users as possible.

We add the audience dimensions we feel are most relevant to our goal and KPIs to the framework. If it’s helpful you can also show the specific segments below.

Part 4 – Identifying the areas with the greatest opportunity to make an impact

Armed with a better understanding of our audience, we still need to choose when and where to act to be most effective. Areas are about understanding the user journey – and focusing our attention where we can make the biggest impact.

For each audience, the best time and place to try and influence users will vary. And even within a single audience, the best way to influence user behaviour is going to depend on which stage of their purchase journey the users are at.

As with audiences, we need to map out the important areas. We start by mapping the onsite journeys and funnels. But we don’t limit ourselves to just onsite experience – we need to consider the whole user journey, especially if our goal is something influenced by behaviours that happen offsite. We then need to identify which steps directly impact each of our KPIs. This helps to limit our focus, but also highlights non-obvious areas where there could be value.

Sketch out your entire user journey, including what happens outside the website. Then highlight which areas impact each of your KPIs.

As with audiences, you can sketch out the initial map fairly quickly, then use analytics data to start adding more useful insights. Label conversion and drop-off rates to see where abandonment is high. Don't just do this once for all traffic; do it repeatedly, once for each of the important audiences identified in the previous step. This will highlight where things are similar and, crucially, where they differ.

Once you have your area map you can start adding clickthrough and drop-off rates for different audiences to spot opportunities.

So with a good understanding of our audiences and areas we can add these to our framework. Completing these two parts of the framework is easier the more data you have. Start with your best guess at the key audiences and areas, then go out and do your user-research to inform your decisions here. Validate your audiences and areas with quant and qual data.

Add your audiences and areas to your framework. You may have more than 4 of each but that’s harder for us to fit in one image!

Part 5 – Identifying the potential levers that influence user behaviour

Levers are the factors we believe can influence user behaviour: the broad themes we'll explore in experimentation. At their simplest, they're the reasons why people convert – and the reasons why people don't. For example, trust, pricing, urgency and understanding are all common levers.

To identify levers, first we look for any problems that are stopping users from converting on our KPI – we call these barriers to conversion. Some typical barriers are lack of trust, price, missing information and usability problems.

We then look for any factors that positively influence a user’s chances of converting – what we call conversion motivations. Some typical motivations are social proof (reviews), guarantees, USPs of the product/service and savings and discounts.

Together, the barriers and motivations give us a set of potential levers that we can "pull" in an experiment to try and influence behaviour. Typically we'll try to solve a barrier or make a motivation more prominent and compelling.

Your exact levers will be unique to your business. However there are some levers that come up very frequently across different industries that can make for good starting points.

Ecommerce – Price, social proof (reviews), size and fit, returns, delivery cost, delivery methods, product findability, payment methods, checkout usability

SaaS – Free trial, understanding product features, plan types, pricing, cancelling at the end of trial, monthly vs annual pricing, user onboarding

Gaming – welcome bonuses, ongoing bonuses, payment methods, popular games, odds

Where do levers come from? Data. We conduct user-research and gather quantitative and qualitative data to look for evidence of levers. You can read more about how we do that here.

When first building our framework it’s important to remember that we’re looking for evidence of levers, not conclusive proof. We want to assemble a set of candidate levers that we believe are worth exploring. Our experiments will then validate the levers and give us the “proof” that a specific lever can effectively be used to influence user behaviour.

You might start initially with a large set of potential levers – 8 or 10 even. We need a way to validate levers quickly and reduce this set down to the 3–4 most effective. Luckily we have the perfect tool for that in experiments.

Add your set of potential levers to your framework and you’re ready to start planning your experiments.

Part 6 – Defining the experiments to test your hypotheses

The final step in our framework is where we define our experiments. This isn’t an exercise we do just once – we don’t define every experiment we could possibly run from the framework at the start – but using our framework we can start to build the hypotheses that our experiments will explore.

At this point, it’s important to make a distinction between a hypothesis for an experiment and the execution of an experiment. A hypothesis is a statement we are looking to prove true or false. A single hypothesis can then be tested through the execution of an experiment – normally a set of defined changes to certain areas for an audience.

We define our hypothesis first before thinking about the best execution of an experiment to test it, as there are many different executions that could test a single hypothesis. At the end of the experiment the first thing we do is use the results to evaluate whether our hypothesis has been proven or disproven. Depending on this, we then evaluate the execution separately to decide whether we can iterate on it – to get even stronger results – or whether we need to re-test the hypothesis using a different execution.  

The framework makes it easy to identify the hypothesis statements that we will look to prove or disprove in our experiments. We can build a hypothesis statement from the framework using this simple template:

“We believe lever [for audience] [on area] will impact KPI.”

The audience and area are in square brackets to denote that it's optional whether we specify a single audience and area in our hypothesis. Doing so gives us a much more specific hypothesis to explore, but in many cases we may be interested in testing the effectiveness of the lever across different audiences and areas – so we may not want to specify the audience and area until we define the execution of the experiment.

The framework allows you to quickly create hypotheses for how you’ll impact your KPIs and achieve your goal.
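As a minimal sketch, the template lends itself to a small helper that assembles hypothesis statements, with audience and area left optional to mirror the square brackets (the function and argument names here are our own, purely for illustration):

```python
def build_hypothesis(lever, kpi, audience=None, area=None):
    """Assemble a hypothesis statement from framework parts.

    Audience and area are optional, mirroring the square brackets
    in the template.
    """
    parts = [f"We believe {lever}"]
    if audience:
        parts.append(f"for {audience}")
    if area:
        parts.append(f"on {area}")
    parts.append(f"will impact {kpi}.")
    return " ".join(parts)

print(build_hypothesis("social proof", "checkout conversion rate",
                       audience="new visitors", area="the product page"))
```

Leaving `audience` and `area` unset produces the broader, lever-level hypothesis described above.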

Using the framework

Your first draft of the completed framework will have a large number of audiences, areas and levers, and even multiple KPIs. You’re not going to be able to tackle everything at once. A good strategy should have focus. Therefore you need to do two things before you can define a strategy from the framework.

Prioritise KPIs, audiences and areas

We're going to be publishing a detailed post on how this framework enables an alternative to typical experiment prioritisation.

The core idea is that you first prioritise the KPI you most need to impact in order to achieve your goal. Then evaluate your audiences to identify the groups that are the highest priority to influence if you want to move that KPI. Then, for that audience, prioritise the areas of the user journey that offer the greatest opportunity to influence their behaviour.

This then gives you a narrower initial focus. You can return to the other KPIs at a later date and do the same prioritisation exercise for them.

Validate levers

You need to quickly refine your set of levers and identify the ones that have the greatest potential. If you have run experiments before you should look back through each experiment and identify the key lever (or levers) that were tested. You can then give each lever a “win rate” based on how often experiments using that lever have been successful. If you haven’t yet started experimenting, you likely already have an idea of the potential priority order of your levers based on the volume of evidence for each that you found during your user-research.
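To illustrate the win-rate idea, here is a quick sketch (the experiment log and its structure are hypothetical – in practice this would come from your own experiment records):

```python
from collections import defaultdict

# Hypothetical log of past experiments: (lever tested, did it win?)
experiments = [
    ("trust", True), ("trust", False), ("trust", True),
    ("urgency", False), ("urgency", False),
    ("social proof", True),
]

def win_rates(experiments):
    """Return each lever's share of winning experiments."""
    wins, totals = defaultdict(int), defaultdict(int)
    for lever, won in experiments:
        totals[lever] += 1
        wins[lever] += won
    return {lever: wins[lever] / totals[lever] for lever in totals}

# Levers with the highest win rate first
for lever, rate in sorted(win_rates(experiments).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{lever}: {rate:.0%}")
```

Bear in mind a win rate built on only a handful of experiments is a weak signal, which is why validating levers with fresh experiments matters.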

However, the best way to validate a lever is to run an experiment to test the impact it can have on your KPI. You need a way to do this quickly: you don't want to invest significant time and effort testing hypotheses around a lever that turns out never to have been valid. Therefore, for each lever you should identify what we call the minimum viable experiment.

You’re probably familiar with the minimum viable product (MVP) concept. In a minimum viable experiment we look to design the simplest experiment we can that will give us a valid signal as to whether a lever works at influencing user behaviour.

If the results of the minimum viable experiment show a positive signal, we can then justify investing further resource on more experiments to validate hypotheses around this lever. If the minimum viable experiment doesn’t give a positive signal, we might then de-prioritise that lever, or remove it completely from our framework. We’ll also be sharing a post soon going into detail on designing minimum viable experiments.

Creating a strategy

How you create a strategy from the framework will depend on how much experimentation you have done before, and therefore how confident you are in your levers. If you're confident in your levers, we'd recommend defining a strategy that lasts around 3 months and focuses on exploring the impact of 2-3 of your levers on your highest-priority KPI. If you're not confident in your levers, perhaps having not tested them before, we'd recommend an initial 3-6 month strategy that looks to run the minimum viable experiment on as many levers as possible. This will enable you to validate your levers quickly so that you can take a narrower strategy later.

Crucially at the end of each strategic period we can return to the overall framework, update and refine it from what we’ve learnt from our experiments, and then define our strategy for the next period.

For one quarter we might select a single KPI and a small set of prioritised audiences, areas and levers to focus on and validate.

Key takeaways

You can have a first go at creating your framework in about 30 minutes. Then you can spend as long or as little time as you like refining it before you start experimenting. Remember your framework is a living thing that will change and adapt over time as you learn more and get more insight.

  1. Establish the goal of your experimentation project
  2. Define the KPIs that you’ll use to measure success
  3. Understand how your audience impacts your KPIs and goal
  4. Identify the areas with the greatest opportunity to make an impact
  5. Identify the potential levers that influence user behaviour
  6. Define the experiments to test your hypotheses

The most valuable benefit of the framework is that it connects all your experimentation together into a single strategic approach. Experiments are no longer islands, run separately and with little impact on the bigger picture. Using the framework to define your strategy ensures that every experiment is playing a role, no matter how small, in helping you impact those KPIs and achieve your goal.

Alongside this, using a framework also brings a large number of other practical advantages:

  • It's clear – your one diagram can explain any aspect of your experimentation strategy to anyone who asks, or whenever you need to report on what you're doing
  • It acts as a sense check – any experiment idea that gets put forward can be assessed on how it fits within the framework. If it doesn't fit, it's an easy rejection with a clear reason why
  • It's easy to come back to – things have a nasty habit of getting in the way of experimentation, but even if you leave the framework for a couple of months, it's easy to come back and pick up where you left off
  • It's easier to show progress and insight – one of the biggest things teams struggle with is documenting the results of all their experiments and what was learnt. The framework updates and changes over time, so you know that your previous experiment results have all been factored in and that you're doing what you're doing for a reason

As we said at the start of this post, there is no special sauce in this framework. It simply takes a logical approach, breaking down the key parts of an experimentation strategy. The framework we use is the result of over 10 years of experience running experimentation and CRO projects, and it looks the way it does because it's what works for us. There's nothing stopping you from creating your own framework from scratch, or taking ours and adapting it to suit your business or how your teams work. The important thing is to have one, and to use it to go from tactical to strategic experimentation.

You can find a blank Google Slide of our framework here that you can use to create your own.

Alternatively you can download printable versions of the framework if you prefer to work on paper. These templates also allow for a lot more audiences, areas, levers and experiments than we can fit in a slide.

If you would like to learn more, get in touch today!

Talking PIE over breakfast – our prioritisation workshop

Recently, we continued our workshop series with one of our solutions partners, Optimizely, discussing the prioritisation of experiments.

The workshop session was led by Kyle Hearnshaw, Head of Conversion Strategy at Conversion, with support from Stephen Pavlovich, CEO of Conversion, and Nils Van Kleef, Solutions Engineer at Optimizely.

Our most popular workshop to date, it gathered together over 40 ecommerce professionals, including representatives from brands such as EE, John Lewis and Just Eat, all keen to talk about one of their biggest challenges: prioritisation. Throughout the morning we discussed why we prioritise, popular prioritisation methods and, finally, how we at Conversion prioritise experiments.

For those of you who couldn't make the session, we want to share some insights into prioritisation so you too can apply the learnings next time you are challenged with prioritising experiments. Keep an eye on our blog too, as later in the year we'll be posting a longer step-by-step explanation of our approach to personalisation.

Why Prioritise?

There is rarely a shortage of ideas to test. However, we are often faced with a shortage of resources to build, run and analyse experiments, as well as a shortage of traffic to run them on. We need to make sure we prioritise the experiments that will do the most to help us achieve our goal in the shortest time.

So, what is the solution?

Popular Prioritisation Methods

To identify the tests that will have the maximum impact with the most efficient use of resources, we need to find the most effective prioritisation method. So, let's take a look at what's out there:

1. PIE model

Potential: How much improvement can be made on the pages?

Importance: How valuable is the traffic to the pages?

Ease: How complicated will the test be to implement on the page or template?

We think PIE is simple and easy to use, analysing only three factors. However, some concerns with this model are that the scoring can be very subjective and that there can be overlap between Potential and Importance.
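As a sketch of how PIE scoring typically works (scoring each factor and averaging is the model's usual convention; the 1-10 scale, idea names and scores here are our own illustration):

```python
def pie_score(potential, importance, ease):
    """Average the three PIE factors, each scored from 1 to 10."""
    for factor in (potential, importance, ease):
        if not 1 <= factor <= 10:
            raise ValueError("PIE factors are scored from 1 to 10")
    return (potential + importance + ease) / 3

# Made-up ideas and scores, ranked with the highest PIE score first
ideas = {
    "Simplify checkout": pie_score(8, 9, 5),
    "New homepage hero": pie_score(6, 7, 8),
}
for name, score in sorted(ideas.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.1f}")
```

The simplicity is clear from the code – but so is the weakness: the output is only as objective as the three numbers you feed in.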

2. Idea Scores from Optimizely 

Optimizely's method is an extended version of the PIE model, adding the factor of 'love' to the equation. Again, we commend this model for its simplicity; however, the scoring is still subjective, so the overall score can be too.

3. PXL model from ConversionXL

The PXL model is a lot more complex than the previous two, giving weight to data and insight, which we think is very important. In addition, it goes some way towards eliminating subjectivity by limiting scoring to either 1 or 0 in most columns. One limitation of this model is that it accounts for page traffic but not for differences in page value, nor does it give you a way to factor in learnings from past experiments. It also has the potential to be very time-consuming, and you may not easily be able to complete all columns for every experiment.

Prioritisation at Conversion

When deciding on our prioritisation model we wanted to ensure that we were prioritising the right experiments, making sure the model accounted for insights and results, removing any possibility for subjectivity and allowing for the practicalities of running an experimentation programme. So, we came up with the SCORE model:

The biggest difference with our approach is that prioritisation happens at two separate stages. We want to avoid a situation where we are trying to prioritise a large number of experiments with different hypotheses, KPIs, target audiences and target pages against each other. In our approach, individual experiments are prioritised at the 'Order' stage; however, we minimise the need to directly prioritise experiments against each other by first prioritising at the strategy stage.

We use our experimentation framework to build our strategy by defining a goal, agreeing our KPIs and then prioritising relevant audiences, areas and levers. Potential audiences we can experiment on are prioritised on volume, value and influence. Potential areas are prioritised on volume, value and potential. Levers – the factors that user research has shown could influence user behaviour – are prioritised on win rate (if we've run experiments on the lever before), confidence (how well supported the lever is in our data), or both.

Next, we ensure we cultivate the right ideas for our concepts. We believe structured ideation around a single hypothesis generates better ideas. Again utilising the experimentation framework, we define our hypotheses: "We believe lever [for audience] [on area] will impact KPI." Once the hypothesis has been defined, we brainstorm the execution.

The order of our experiments comes from prioritising the concepts that come out of our ideation sessions. Concepts can be validated quickly by running minimum viable experiments (MVEs), which allow us to test concepts without over-investing and to test more hypotheses in a shorter timeframe.

Next, we create an effective roadmap. We start by identifying the main swimlanes (pairs of audiences and areas that can support experiments), then estimate experiment duration based on a minimum detectable effect. A roadmap should include tests across multiple levers; this gathers more insight and spreads the risk of over-emphasising one area.
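To illustrate the duration estimate, here is a standard two-proportion sample-size approximation (a common textbook calculation, not Conversion's exact method; the baseline rate, lift and traffic figures are made up):

```python
import math

def required_sample_per_variant(base_rate, mde_rel, ):
    """Approximate visitors needed per variant to detect a relative
    lift of `mde_rel` on a baseline conversion rate, using the
    two-proportion normal approximation."""
    z_alpha, z_beta = 1.96, 0.84   # two-sided 5% alpha, 80% power
    p1 = base_rate
    p2 = base_rate * (1 + mde_rel)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# e.g. a 3% baseline conversion rate, aiming to detect a 10% relative
# lift, with 4,000 visitors/day split across 2 variants
n = required_sample_per_variant(0.03, 0.10)
days = math.ceil(2 * n / 4000)
print(f"{n} visitors per variant, ~{days} days")
```

Note how quickly small effects on low baseline rates inflate the duration – which is exactly why swimlane traffic matters when building the roadmap.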

Finally, it’s time to run and analyse the experiments (execution).

We believe our SCORE model is effective for prioritising experimentation projects because it puts more emphasis on prioritising and getting the right strategy first before ever trying to prioritise experiments against each other. It is structured, rewards data and insight and allows for the practicalities of experimentation – we can review and update our strategy as new data comes in. The only limitation is that it takes time to prioritise the strategy effectively. But if we're going to invest time anywhere, we believe it should be on getting the strategy right.

Our conclusions

The workshop was a great success. We had some great feedback from those involved and some actionable ideas for our attendees to take away.

We recommend having a go at using the SCORE prioritisation model. In the next few weeks we'll be sharing a detailed post on our experimentation framework, but you can apply SCORE within your own approach by reviewing how you define and prioritise your experimentation strategy. See whether this helps you to produce a roadmap that is informed by data and insight, free of subjectivity and effective in helping your business test the most valuable ideas first.

If you have any questions or would like to know more, please get in touch.

To attend our future events, keep an eye out here.

5 steps to kick-start your experimentation programme with actionable insights

Experimentation has to be data-driven.

So why are businesses still kicking off their experimentation programmes without good data? We all know running experiments on gut-feel and instinct is only going to get you so far.

One problem is the ever-growing number of research methods and user-research tools out there. Prioritising what research to conduct is difficult. Especially when you are trying to maximise success with your initial experiments and need to get those experiments out the door quickly to show ROI.

We are no stranger to this problem. And the solution, as ever, is to take a more strategic approach to how we generate our insight. We start every project with what we call the strategic insights phase. This is a structured, repeatable approach to planning user-research we’ve developed that consistently generates the most actionable insight whilst minimising effort.

This article provides a step-by-step guide to how we plan our research strategy so that you can replicate something similar yourself and set up your future experiments for greater success.

The start of an experimentation programme is crucial. The pressure of securing stakeholder buy-in and achieving quick ROI means the initial experiments are often the most important. A solid foundation of actionable insight from user research can make a big difference to how successful your early experiments are.

With hundreds of research tools enabling multiple different research methods, a challenge arises with how we choose which research method will generate the insight that’s most impactful and actionable. Formulating a research strategy for how you’re going to generate your insight is therefore crucial.

When onboarding new clients, we run an intense research phase for the first month. This allows us to get up to speed on the client’s business and customers. More importantly, it provides us with data that allows us to start building our experimentation framework – identifying where our experimentation can make the most impact and what our experimentation should focus on. We find dedicating this time to insights sets our future experiments up for the bigger wins and therefore, a rapid return on investment.

Our approach: Question-led insights

When conducting research to generate insight, we use what we call a question-led approach. Any piece of research we conduct must have the goal of answering a specific question. We identify the questions we need to answer about a client’s business and their website and then conduct only the research we need to answer them. Taking this approach allows us to be efficient, gaining impactful and actionable insights that can drive our experimentation programme.

Following a question-led approach also means we don’t fall into the common pitfalls of user-research:

  • Conducting research for the sake of it
  • Wasting time down rabbit holes within our data or analytics
  • Not getting the actionable insight you need to inform experimentation

There are 5 steps in our question-led approach.

1. Identify what questions you need, or want, to answer about your business, customers or website

The majority of businesses still have questions about their customers they don't have the answers to. Listing these questions can provide a brain-dump of everything you don't know but that, if you did, would help you design better experiments. Typically these questions will fall into three main categories: your business, your customers and your website.

Although one size does not fit all with the questions we need to answer, we have provided some of the typical questions that we need to answer for clients in e-commerce or SaaS.

SaaS questions:

  • What is the current trial-to-purchase conversion rate?
  • What motivates trial users to make a purchase? What prevents them from purchasing?
  • What is the distribution between the different plans on offer?
  • What emails are sent to users during their trial? What is the life cycle of these emails?
  • What are the most common questions asked to customer services or via live chat?

We can quite typically end up with a list of 20-30 questions. So the next step is to prioritise what we need to answer first.

2. Prioritise what questions need answering first

We want our initial experiments to be as data-driven and successful as possible. Therefore, we need to tackle the questions that are likely to bring about the most impactful and actionable insights first.

For example, a question like "What elements in the navigation are users interacting with the most?" might be a 'nice to know'. However, if we don't expect to run a navigation experiment any time soon, this may not be a 'need to know' and therefore wouldn't be high priority. On the other hand, a question like "What's stopping users from adding products to the basket?" is almost certainly a 'need to know'. Answering this is very likely to generate insight that can be directly turned into an experiment. The rule of thumb is to prioritise the 'need to know' questions ahead of the 'nice to know'.

We also need to get the actionable insight quickly. Therefore, it is important to ensure that we prioritise questions that aren’t too difficult or time consuming to answer. So, a second ranking of ‘ease’ can also help to prioritise our list.
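One lightweight way to combine the two rankings is a simple two-key sort: 'need to know' first, then easiest to answer (the backlog and scoring scheme here are our own illustration):

```python
# Hypothetical question backlog: (question, need_to_know?, ease 1-5)
questions = [
    ("What's stopping users adding products to the basket?", True, 3),
    ("Which navigation elements get the most clicks?", False, 5),
    ("Why do trial users not go on to purchase?", True, 2),
]

# 'Need to know' questions first, then the easiest to answer
prioritised = sorted(questions, key=lambda q: (not q[1], -q[2]))
for question, need, ease in prioritised:
    print(f"[{'need' if need else 'nice'}] ease={ease}: {question}")
```

An easy 'nice to know' still sorts below a hard 'need to know', matching the rule of thumb above.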

3. Decide the most efficient research techniques to answer these questions

There are many types of research you could use to answer your questions. Typically we find the majority of questions can be answered by one or more of web analytics, on-site or email surveys, usability testing or heatmaps/scrollmaps. There may be more than one way to find your answer.

However, one research method could also answer multiple questions. For example, one round of usability testing might be able to answer multiple questions focused on why a user could be dropping off at various stages of your website. This piece of research would therefore be more impactful, as you are answering multiple questions, and would be more time efficient compared to conducting multiple different types of research.

For each question in our now prioritised list we decide the research method most likely to answer it. If there are multiple options you could rank these by the most likely to get an answer in the shortest time. In some cases we may feel the question was not sufficiently answered by the first research method, so it can be helpful to consider what you would do next in these cases.

4. Plan the pieces of research you will carry out to cover the most questions

You should now have a list of prioritised questions you want to answer and the research method you would use to answer each. From this you can select the pieces of research you should carry out based on which would give you the best coverage of the most important questions. For example, you might see that 5 of your top 10 questions could be answered through usability testing. Therefore, you should prioritise usability testing in the time you have, and the questions you need to answer can help you to design your set of tasks.
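Choosing the research pieces that cover the most high-priority questions is essentially a small set-cover problem, which a greedy pass handles well in practice (the method-to-question mapping below is hypothetical):

```python
# Which prioritised questions each research method could answer (hypothetical)
coverage = {
    "usability testing": {"Q1", "Q3", "Q5", "Q7"},
    "exit survey": {"Q2", "Q5"},
    "analytics deep-dive": {"Q4", "Q6"},
}

def plan_research(coverage, questions):
    """Greedily pick the method answering the most open questions."""
    remaining, plan = set(questions), []
    while remaining:
        method = max(coverage, key=lambda m: len(coverage[m] & remaining))
        answered = coverage[method] & remaining
        if not answered:
            break  # no method answers the remaining questions
        plan.append((method, answered))
        remaining -= answered
    return plan, remaining

plan, unanswered = plan_research(coverage, {f"Q{i}" for i in range(1, 8)})
for method, answered in plan:
    print(method, sorted(answered))
```

Any questions left in `unanswered` are candidates for a different research method, or for the next round of research.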

After your first round of research, revisit your list of questions and for each question evaluate whether or not you feel it has been sufficiently answered. Your research may also have generated more questions that should be added to the list. Periodically you might also need to re-answer questions where user behaviour has changed due to your experimentation. For example, if initially users were abandoning on your basket page due to a lack of trust, but successful experiments have fixed this, then you may need to re-ask the question to discover new problems on the basket page.

On a regular basis you can then repeat this process again of prioritising the questions, deciding the best research methods and then planning your next set of research.

5. Feed these insights into your experimentation strategy

Once your initial research pieces have been conducted and analysed, it is important to compile the insight from them in one place. This has two benefits. First, it makes it easier to visualise and discover themes that may be emerging across multiple sources of insight. Second, it gives you one source of information that can be shared with others in your business.

As your experimentation programme matures it is likely you will be continuously running research in parallel to your experiments. The insight from this research will answer new questions that will naturally arise and can help inform your experimentation.

Taking this question-led approach means you can be efficient with the time you spend on research, while still maximising your impact. Following our step-by-step guide will provide a solid foundation that you can work upon within your business:

  1. Identify what questions you need, or want, to answer about your business, customers or website
  2. Prioritise what questions need answering first
  3. Decide the most efficient research techniques to answer these questions
  4. Plan the pieces of research you will carry out to cover the most questions
  5. Feed these insights into your experimentation strategy

For more information on how to kick-start experimentation within your business, get in touch here.

An evening of experimentation…with Facebook and Conversion

Recently we held what we certainly thought was our best event yet: an evening of experimentation co-hosted by our friends at Facebook.

We were lucky enough to be joined by Vince Darley (Head of Growth at Deliveroo), Brian Hale (Vice President of Growth Marketing at Facebook) and Denise Moreno (Director of Growth Marketing at Facebook), who shared some incredible insights.

Held at the Facebook offices in Rathbone Square, London, we welcomed a variety of guests from top brands across an array of industries, to hear all about the key principles of experimentation.

Proceedings began with our very own Stephen Pavlovich, CEO and Founder of Conversion. If you're working in the marketing or ecommerce space, it's likely that you've come across the basic principles of experimentation. However, Stephen wanted to explain what experimentation REALLY is.

It's not as simple as running a few A/B tests on your site to find out which button colour converts best. Once your business has reached an advanced level of maturity, experimentation can begin to answer the wider, more important questions. For example, which product should you launch next? Or how should you structure your commercial model? We believe that experimentation really should be at the heart of every single business.

Next up, we had Vince Darley, Head of Growth at Deliveroo. Vince has had years of experience leading experimentation teams at huge businesses like King and Ocado, and he wanted to share with our guests some of the knowledge and experience he's gained over many years at the top.

Vince Darley, Head of Growth at Deliveroo

Vince really showed the breadth of his knowledge by making sure there was something for everybody in his 'Experimentation Three-Course Meal'. He began by sharing some basic rules for anybody getting started with experimentation, then drew on his time at King working on applications like the formidable Candy Crush, sharing some of the most important lessons he has learnt, including the best way to conduct high-impact experimentation. Finally, he shared some of his most advanced secrets for experimentation, allowing the audience to leave with indispensable tips on using and interpreting data.

With a tough act to follow, next our audience was treated to a double act by Brian Hale, Vice President of Growth Marketing, and Denise Moreno, Director of Growth Marketing at Facebook.

Brian Hale, Vice President of Growth Marketing at Facebook

They discussed five key issues around creating a growth team and how experimentation should inform the process and ways of working. It was incredible for our audience to hear how one of the most iconic growth teams in the world was formed, and they left with actionable insight to help them drive the expansion of their own internal teams. Finally, Brian and Denise shared some of their own experimentation principles from years of experience, along with real-life examples from Facebook around testing on Messenger and ads from some of the earliest stages of the platform.

Alongside some fantastic content from our speakers, the evening also marked the launch of our experimentation principles project. Inspired by the simple elegance of the UK government design principles, we have decided to collate our 11 years' experience at Conversion and define a set of core experimentation principles. They tackle the simple mistakes, misconceptions and misinterpretations that organisations make that limit the impact, effectiveness and adoption of experimentation.

Our 9 key principles of experimentation


Every member of the audience received a copy of these principles to help elevate their experimentation programme. Luckily, this wasn't limited to our guests: if you'd like a copy of the key principles, with input from our friends at Facebook, Just Eat and Microsoft, you can download yours here today.

If you’d like to hear more about how you can use experimentation to drive growth in your business, then get in touch.

Keep an eye on our events page to make sure you don’t miss out. 

How to make your ideation sessions go down a (brain)storm

We recently kicked off a workshop series with one of our solution partners, Optimizely, starting proceedings off (quite fittingly) with a session on the topic of ideation.

The workshop session was led by Kyle Hearnshaw, Head of Conversion Strategy at Conversion, with support from Stephen Pavlovich, CEO of Conversion, and Nils Van Kleef, Solutions Engineer at Optimizely.

We run ideation sessions all the time, whether between ourselves or in collaboration with our clients. But this one had a slightly different twist. We gathered a group of representatives from multiple businesses, across multiple verticals, in a room to experiment with how well different approaches to ideation perform. Every attendee left with an understanding of the different approaches, and insight into which approach they felt they could apply most effectively with their own teams.

For those of you that couldn’t make the session, we want to share some of those ideas so that you too can run ideation sessions that generate impactful ideas.


But first, why does the ideation process matter?

Ideation injects creativity into a data-driven process

Data is incredibly important in driving all of the decisions we make in experimentation. Data is great at telling us where the problems are, but it isn't good at telling us how to solve them. This is where creativity plays a key role in experimentation. Ideation is our opportunity to inject creativity into what we do, to explore new concepts and experiment with potential solutions as we home in on the optimal solution for each problem.

Our time is limited

For many of us, there are barely enough hours in the day to get through all our emails, let alone spend hours in unproductive brainstorming. We need to optimise how we spend our time in ideation to get a good return on our investment. The ideation process needs to reliably generate high-quality ideas that can immediately be tested through experimentation.

More people doesn’t always mean better ideas

We’ve been taught that collaboration should generate more and better ideas, but this is only true when people can contribute effectively. When more people are in the room, there is often a tendency for people to become protective of their own ideas, as opposed to sharing, discussing and letting their ideas evolve through the input of the rest of the group. Everyone present in an ideation session should be able to, and expected to, contribute effectively.

Subtle differences in execution can have a big impact on results and build time

The difference between an execution of an experiment that wins versus an execution that loses can be extremely small. Two ideas that seem similar can often perform very differently when exposed to real users. One way of testing an idea might require a lot more effort to create than another. Ideation plays a key role in both defining and refining ideas.

Top companies attended to take away some actionable insights from the ideation session

Our top 4 tips for effective ideation, no matter your approach

1. Separate the hypothesis and the execution 

Ideas that come up in an ideation session come in all shapes and sizes. Some will come to you fully formed with a well-defined hypothesis and a sketch of the execution of the experiment. Others will be less-formed, and need to be refined into a clear hypothesis first before ideating around the best execution of that idea. When an idea is raised, identify whether it’s a hypothesis or a specific execution. For hypotheses, you can then ideate on solutions. For executions you can step back and ideate around the underlying hypothesis of the idea.

2. Break ideas up into sequences of experiments – starting with the minimum viable experiment

One way of doing this is to think about how the idea might fit into a sequence of experiments. Think about the idea you want to explore. Is it the ‘minimum viable experiment’? Or is it a more formed exploration of a specific solution in this area? Think about how you might take your idea and iterate on it across a range of experiments, improving it and reaping the results along the way until you finally reach that ‘perfect’ version.

3. Is this idea iterative, innovative or disruptive?

When you reach that stage of an ideation session where the ideas start to dry up, a useful exercise is to group your ideas into 3 types. Which are iterative ideas – tweaks, optimisations or designs? Which are innovative ideas – new experiences, journeys or usability? And which are groundbreaking, disruptive ideas that will affect product, pricing, or even the company proposition? Try and make sure you have a good number of each category.

4. Have a plan to weed out your bad ideas

Remember back at school, when you were told to put your hand up and express your ideas, whatever they were? There’s no such thing as a bad idea, right? Wrong.

Contrary to common belief, there is such a thing as a bad idea. We would encourage you to have a completely open system for idea creation. Allow people to come up with whatever ideas they like. But also have a system for critique and review. Some ideas will simply not be good; others will be good but not feasible.

There has to be a good mix of realistic and ambitious ideas. The last thing anybody wants to do is waste their precious time talking about ideas that are simply ‘dead in the water’. One approach we like for this is The Disney Method.


The 3 approaches to ideation we tested

We’re going to share with you the three approaches that we explored in our ideation workshop.

Unstructured Ideation

Quite often when we talk to people and ask how they ideate, the answer is “Well, we get everyone in a room and talk about some ideas”. This is the most common type of ideation session, though by no means the approach that we would recommend. But it does provide a useful baseline, and if you apply all of our tips above it can, in the right circumstances, still be effective.

That said, it’s pretty much a free-for-all. Everyone shouts their ideas, there is very little focus, it’s unstructured. For this session, we gave attendees a vague goal of increasing conversion rate and a specific page of a website to improve – to mimic the setup of one of these sessions in a conversion optimisation context.


+ Anywhere, anytime

+ Anyone can do it


– Anyone can do it (the input expected from each attendee isn’t clear)

– The ideas generated tend to be unrelated and broad

– Small number of high-quality ideas

Structured Ideation

In most cases, when we talk about ideation we’re talking about the creation of ideas for AB tests and experiments. What many people fail to remember is that experiments are just the end product of a larger strategic process. At Conversion, we build our strategy around our experimentation framework.

Most unstructured ideation sessions tend to be around a loosely defined goal and perhaps a KPI to be improved. However, in order to conduct an effective ideation session we need more structure and focus.

Only after you have defined your Goal and KPIs, then used data to understand and define your Audiences, Areas and Levers, should you start to ideate for experiment concepts. Agree the specific audience, area and lever that you’re going to ideate on, make sure everyone knows this, and ensure everyone has seen any relevant data and research before they attend. The session will then be more structured and focused on solving a specific problem.

For this session we gave attendees a completed experimentation framework and defined the Goal, KPIs, Audience, Area and Lever we wanted them to ideate on.


+ Customer-focused

+ Impactful concepts


– Needs upfront research

– Takes longer to get started

Crazy 8s

This is one of our favourite and most enjoyable methods of ideation. It forces every member of the group to produce 8 ideas in 8 minutes. We’ve adapted the concept from one originally popularised by Google.

Our adaptation of the original Crazy 8s system is to apply it to a structured ideation setup. So again we have defined our Goal, KPIs, Audience, Area and Lever. Then we use the Crazy 8s ideation process on that specific lever to generate a large number of ambitious ideas in a short space of time. Rather than one person generating 8 ideas on one lever, we often rotate the paper so that you have 1 minute to add a new idea on a lever that nobody else has come up with yet. In this way you can cover multiple levers in one session if you’re ambitious.

The purpose of using Crazy 8s is to force everyone in the ideation session to contribute, but also to stretch all of the attendees to contribute more ambitious, creative ideas than they might generate without the added time pressure. It also encourages people to draw ideas to save time, which can bring out new ideas as people get more visual.


+ Large number of ideas

+ Includes all the structured ideation benefits


– Needs an introduction

– Less collaborative


Our conclusions

Overall, we thought the workshop was a great success. We had some great feedback from those involved and some brilliant ideas for our attendees to take away.

The key takeaway was that structure is crucial for effective ideation. And by that we don’t mean the minute-by-minute structure of the session itself; we mean the structure, focus and setup of what you’re ideating about. A structured approach not only generates more ideas but, crucially, generates more impactful, creative and ambitious ideas. As the host of the session you will walk away with both the quantity and quality of ideas you need to design your experiments.

Ideation is critical to experimentation. In order to create an effective experimentation roadmap, you must engage in effective ideation. Following just a few of these techniques will have you well on your way. But of course, if you’d like to know more, do get in touch.


If you’d like to attend future events, keep an eye on our events page.

Conversion hosts… ‘Experimentation Maturity: What advanced testing teams do differently’

We are very excited about the release of our Ecommerce Performance Report for 2018 in partnership with Econsultancy. The report covers various concepts in depth, ranging from the growth of the ecommerce market to the future of experimentation. However, one thing got us talking at HQ: experimentation maturity.

Out of the 400 ecommerce professionals surveyed, 50% stated that they perceived the value of experimentation to be high or very high; however, only 14% stated that their business recognised it as a strategic priority. The disparity between value and strategic prioritisation got us thinking about the roadblocks to experimentation maturity and how we can overcome them.

Kyle Hearnshaw, Head of Conversion Strategy, Stephen Pavlovich, our CEO and James Gray, Senior Optimisation Manager at Just Eat, led our second independent event where we brought practitioners from the industry together to discuss five key themes around experimentation maturity:

  1. What defines a mature approach to experimentation and conversion optimisation?
  2. How can you measure your growth in maturity over time?
  3. How does your organisation stack up compared to 450+ respondents in our report?
  4. What challenges do we face in developing maturity?
  5. How can you overcome these challenges?

Measuring experimentation

In the past we have seen organisations use the size of the experimentation team, the number of experiments launched or the complexity of experimentation as metrics to measure their experimentation maturity.

At Conversion, we believe such metrics should be avoided and that maturity should instead be measured against quality. This means moving the goalposts so that we are benchmarking against experimentation goals, experimentation strategy, and data and technology strategy.

What are you trying to achieve via experimentation? 

Setting goals against which to measure the success of experimentation is pivotal for any organisation. At Conversion, we recommend following three steps to develop robust experimentation goals:

  1. List your key business challenges – the most mature experimentation programmes drive the strategic direction of businesses. If you feel a long way off this, don’t panic. Listing business challenges and making sure experiments have a measurable impact against these is a huge step on the journey to maturity.
  2. Set a specific goal for experimentation – individual experiments need specific goals and KPIs. In order to translate business challenges into specific experimentation goals we recommend using a goal tree map.
  3. Plan a roadmap to develop maturity – recognising your business’ position on the maturity scale is important in identifying the necessary steps to reach maturity. At Conversion, we have created an experimentation maturity assessment so you can easily plot where you sit on the scale; be honest, as a realistic benchmark allows you to identify the key steps to take in order to progress.

How do you organise and deliver experimentation? 

Experimentation strategy often gets forgotten about; as a consequence, holistic experimentation can be derailed and the link between business strategy and goals can break.

In order to prevent this disconnection we recommend that you:

  1. Set regular points to review strategy – stand back at regular intervals to look at the big picture. Involve stakeholders from across the business and brainstorm strategic priorities.
  2. Organise experiments into projects/themes – ad hoc tactical experiments can be chaotic; we therefore recommend defining projects that group related research, experiments and iterations. This allows goals and outcomes to be measured at a project level.
  3. 10x your communication – The goal here is to get more people involved in experimentation. Some great examples were shared at our event – Just Eat talked us through their approach to ensuring experimentation strategy is shared and reviewed across areas of the business with their experimentation forum. Within this forum every product manager pitches their experiments, detailing what they’d like to test and how they plan to measure the results. We think this is an excellent way to create open communication across a company.

How do you measure the impact of experimentation?

Evaluating the impact of experimentation is pivotal in order to determine the success of testing. But what measurements are important?

  1. Define standards on experiment data – at Conversion, we talk about ‘North Star’ metrics. Experiments that have an impact on these metrics should get noticed by senior stakeholders. This tactic resonated with those at our maturity event – organisations voiced that they struggled to get executive buy-in without proving value through results.
  2. Strive for a single customer view – a single customer view is an aggregated, uniform and comprehensive representation of customer data and behaviour. Going beyond single conversion metrics is a huge leap on the path to maturity; however, it is not easily achieved. We recommend integrating testing tools with business intelligence tools in order to gather data such as lifetime value.
  3. Build in segmentation as early as possible – no matter where you are on the maturity scale we highly recommend identifying key audiences in order to lay the foundations for personalisation. With this in place, you can report on audience behaviours within experiments and uncover greater insights from your experiments.

So, what next? 

Identify your organisation’s maturity level using our maturity model. Whether you are just getting started or are an advanced team, there are actions you can take to reach the highest levels of maturity.

We thoroughly enjoyed hosting our second independent event. Insightful discussions were had across a variety of organisations and we are proud to have been able to offer advice to assist organisations on progressing towards their own experimentation maturity.

If you’d like to attend future events, keep an eye on our events page.

Prepare for Launch: Lessons from 1,000 A/B Test Launches

In this article, we provide a guide for the A/B test launch process that will help you to keep your website safe and to keep your colleagues and/or clients happy. 

You’ve spent weeks, maybe months, preparing for this A/B test. You’ve seen it develop from a hypothesis, to a wireframe, through design, build and QA. Your team (or client, if you work agency-side) are excited for it to go live and all that’s left to push is the big red button. (Or the blue one, if you’re using Optimizely). Real users are about to interact with your variation and, hopefully, it’ll make them more likely to convert: to buy a product, to register for an account or simply to make that click.

But for all the hours you’ve put into preparing this test, the work is not over yet. At Conversion, we’ve launched thousands of A/B tests for our clients. The vast majority of those launches have gone smoothly, but launching a test can be intense and launching it properly is crucial. While we’re flexible and work with and around our clients, there are some fixed principles we adhere to when we launch an A/B test.

Get the basics right

Let’s start with the simplest step: always check that you’ve set the test up correctly in your testing platform. The vast majority of errors I have witnessed in the launching of tests have been minor errors in this part of the process. Make sure that you have:

  • Targeted the correct page or pages;
  • Allocated traffic to your Control and Variation/s;
  • Included the right audience in your test.

Enough said.

Map out the user journey

You and your team might know your business and its website better than anyone, but being too close to a subject can sometimes leave you with blinkered vision. By the end of the development process, you’ll be so close to your build that you might not be able to view it objectively.

Remember that your website will have many different users and use cases. Sure, you’re hoping that your user will find their way from the product page, to the basket page, to the payment page more easily in your variation. But, have you considered how your change will impact users who want to apply a voucher? Do returning users do something new users don’t? Could your change alienate them in some way? How does your test affect users who are logged in as well as logged out? (Getting that last one wrong caused my team a sleepless night earlier this year!)

Make sure you have thought about the different use cases happening on your website. Ask yourself:

  • Have I considered all devices? If the test is for mobile users, have you considered landscape and portrait?
  • Does your test apply across all geographies? If not, have you excluded the right ones?
  • Have you considered how a returning user’s journey differs from that of a new user?

One of the best ways to catch small errors is to involve colleagues who haven’t been as close to the test during the QA process. Ask them to try to identify use cases that you hadn’t considered. And if they do manage to find new ones, add these to your QA checklist to make sure future tests are checked for their impact on these users.

Test your goals

No matter how positively your users receive the changes you’ve made in your variation, your A/B test will only be successful if you can report back to your team or client with confidence. It’s important that you add the right goals to your results page, and that they fire as intended.

At Conversion, shortly before we launch a test, we work our way through both the Control and Variation and deliberately trigger each goal we’ve included: pageviews, clicks and custom goals too. We then check that these goals have been captured in two ways:

  1. We use the goals feature in our Optimizely Chrome Extension to see the goal firing in real-time.
  2. A few minutes later, we check to see that the action has been captured against the goal in the testing platform.

This can take a little time (and let’s be honest, it’s not the most interesting task) but it’ll save you a lot of time down the line if you find a goal isn’t firing as intended.

Know your baseline

From the work you’ve done in preparation, you should know how many people you expect to be included in your experiment, e.g. how many mobile users in Scotland you’re likely to get in a two-week period. In the first few minutes and hours after you’ve launched a test, it’s important to make sure that the numbers you’re seeing in your testing platform are close to what you’d expect them to be.

(If you don’t have a clear notion of how many users you expect to receive into your test, use your analytics platform to define your audience and review the number of visits over a comparable period. Alternatively, you could use your testing platform to run an A/A test where you do not make any changes in the variation. That way, you can get an idea of the traffic levels for that page).

If you do find that the number of visits to your test is lower than you’d expect, make sure that you have set up the correct traffic allocation in your testing tool. It may also be worth checking that your testing tool snippet is implemented correctly on the page. If you find that the number of visits to your test is higher than you’d expect, make sure you’re targeting the right audience and not including any groups you’d planned to exclude. (Handy hint: check you haven’t accidentally used the OR option in the audience builder instead of the AND option. It can catch you out!) Also, make sure that you’re measuring like-for-like, i.e. are you looking at unique visits in your analytics tool and comparing them to unique hits to your test?
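As a rough sketch of this expected-versus-observed comparison (the function name and 20% tolerance below are illustrative choices, not part of any testing platform), the sanity check might look like:

```python
def traffic_sanity_check(expected_visits, observed_visits, tolerance=0.2):
    """Compare observed test traffic against the analytics-based forecast.

    Returns a short verdict string; a deviation beyond the tolerance
    suggests a setup problem worth investigating before trusting results.
    """
    if expected_visits <= 0:
        raise ValueError("expected_visits must be positive")
    ratio = observed_visits / expected_visits
    if ratio < 1 - tolerance:
        return "LOW: check traffic allocation and snippet implementation"
    if ratio > 1 + tolerance:
        return "HIGH: check audience targeting (AND vs OR conditions)"
    return "OK: traffic is within the expected range"

# Forecast 5,000 visits over the comparison period; only 3,200 arrived.
print(traffic_sanity_check(5000, 3200))
```

The exact tolerance should reflect how noisy your traffic normally is; a stable, high-traffic page warrants a tighter band than a seasonal one.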

Keep your team informed

At Conversion, our Designers and Developers are involved in the QA process and so they know when a test is about to launch. (We’ve recently added a screen above our bank of desks showing the live test results. That way everyone can celebrate [or commiserate] the fruits of their labour!) When the test has been live for a few minutes, and we’re happy that goals are firing, we let our client know and ask them to keep an eye on it too.

Check the test regularly

So the test is live. Having a test live on a site (especially when you’re managing that for a client) is a big responsibility. Provided you’ve taken all the right steps earlier in the process, you should have nothing to worry about, but you should take precautions nonetheless.

Once you’ve pressed the Play button, go over to the live site and make sure you can see the test. Try and get bucketed into both the Control and Variation to sense check that the test is now visible to real users.

At Conversion, there’ll be someone monitoring the test results, refreshing every few minutes, for the first couple of hours the test is live. We’ll check in on the test every day that it runs. That person also checks that there’s at least one hit against each goal and that the traffic level is as expected.

A couple of hours into the running of a test, we’ll make sure that any segments we have set up (e.g. Android users, logged in users, users who interacted with our new element) are firing. You don’t want to run a test for a fortnight and then find that you can’t report back on key goals and segments.

(Tip: if you’re integrating analytics tools into your test make sure they’re switched on and check inside of those tools soon after the test launches to make sure you have heatmap, clickmap or session recording data coming through).

Make sure you have a way to pause the test if you spot anything amiss, and we’d recommend not launching on a Friday, unless someone can check the results over the weekend.

Finally, don’t be afraid to pause

After all the buildup and excitement of launching, it can feel pretty depressing having to press the pause button if you suspect something isn’t quite right. Maybe a goal isn’t firing or you’ve forgotten to add a segment that would come in very handy when it’s time to report on the results. Don’t be afraid to pause the test. In most cases, it will be worth a small amount of disruption at the start, to have trustworthy numbers at the other end. Hopefully, you’ll spot these issues early on. When this happens, we prefer to reset the results to ensure they’re as accurate as they can be.


Launching an A/B test can be a real thrill. You finally get to know whether that ear-worm of an idea for an improvement will actually work. In the few hours either side of that launch, make sure you’ve done what you need to do to preserve confidence in the results to come and to keep your team and client happy:

  • Get the basics right: it’s easy to make a small error in the Settings. Double check these.
  • Map out the user journey: know how users are likely to be impacted by your changes.
  • Test your goals: make sure you’ve seen some data against each goal from your QA work.
  • Know your baseline: check the initial results against traffic levels in your analytics tools.
  • Keep your team informed: don’t hog all the fun, and let others validate the results with you.
  • Check regularly: don’t go back to a lit firework; do go back to a live test…regularly.
  • Don’t be afraid to pause: pause your test if needed. It needs to be the best version it can be.

Who, When and Where, but what about the Why? Understanding the value of Qualitative Insights: Competitor Analysis

Numbers, rates and statistics are great for finding out what’s happening on your site and where opportunities for testing lie, but quantitative insights can only take us so far. This series covers the importance of the qualitative insights we gather for our clients at Conversion.

Last time, we looked at the value of on-site surveys and just how effective they can be when used correctly. 

Competitor Analysis

Competitor analysis is a vital part of understanding your industry.

Anyone familiar with a SWOT analysis knows that understanding who your competitors are, as well as what they’re doing, can allow you to understand your place in the market, differentiate yourself from the competition and stand out in the right way.

However, when we look online we see that strategies are more often than not a case of the blind leading the blind, whereby we copy elements from others that we like, but with no insight into whether they’re effective, nor what makes them effective.

Of course, we will never truly be able to answer these questions without access to competitor data and a variety of tests exploring the element – but never fear. My hope is that by the end of this article you will be equipped with a framework that gives your competitor analysis…the competitive edge.

Just like in many other aspects of life, knowing yourself is just as important as knowing others. The same applies to competitor analysis. The better understanding you have of your users, their motivations and barriers, as well as the key site areas for improvement, the better you will be able to diagnose competitor elements that may be of use.

Often a competitor analysis can be an exhaustive task, spanning every page of competitor sites with few actionable insights at the end. Therefore, it is always better to focus your competitor analysis on one specific area at a time. Whether it is the landing page, the checkout funnel, or a product page; by focusing on one area it becomes easier to identify where your experience differs and formulate experiments around this.

At Conversion, we begin by mapping out a client’s main customer journey before using insights to identify key levers on the site (these are the key themes we feel can have an impact on conversion rates). Combining this with analytics data shows us where a site may be underperforming, and this is a great place to start looking at competitors.

How do I conduct a competitor analysis?

I will show you an example using a credit card company, Company X.

After examining our quantitative data, we have established that Company X has a low conversion rate on its application form.

We begin by comparing Company X to its closest competitors. In doing so, we realise that many competitors are chunking their application forms into bite-size steps.

Often, this is where many people would stop and act quickly to replicate this experience on their own sites. However, just because everyone else is doing it, does that make it the best way? The reality is, we still don’t know whether this is the best way to present an application form.

In order to find out, it is important now that we look beyond the client’s industry – this is a great exercise to help us think beyond what our close competitors are doing. How does your registration form compare to Amazon? Does your size guide match up to Asos?

Taking industry best practice, combining this with competitor research and then sprinkling on the uniqueness of your site and users, often leaves you with a test idea that is worth prioritising.

Understanding what your competitors do can help you frame your strategy and optimisation efforts. It is an insight-rich exercise that is good for looking at the industry at a macro level, as well as homing in on particular levers and how competitors utilise them.

Competitor analysis template (click to view)

Here is the standard template we use at Conversion when we begin a competitor analysis. This is a great starting point and can be tweaked and framed to suit different industry needs. With such a large scope of potential insights to gain, a one size fits all approach can rarely be taken. That’s why we use four different templates depending on our desired outcomes.

I will now share two frameworks – one for a broad competitor analysis, and another for a more in-depth analysis.

Broad competitor analysis

If you haven’t conducted a competitor analysis before, this is your first step.

You’ll want to identify 4-5 key competitors within your industry – these can range from the low end to the high end of the market and will be useful both for understanding what you do and for cross-referencing it with your market position.

Start with your own site: map out the main user journey as you would a storyboard. An ecommerce site, for example, may have a funnel like this:

Landing page -> Category Page -> Product details page -> Basket page -> Checkout

You get the idea.

Take screenshots of each step and make note of the key elements of each page.

Structured overview of key direct competitors (click to view)

Now do the same for your competitors, noting any clear contrasts in the tone, content or functionality of the sites.

At Conversion, we would use the template above for mapping this out as it creates a strong basis for comparing sites at a later stage and allows us to add small notes to each site as we go along.

You will soon start to establish patterns across the sites, and often these will be the hygiene factors that are consistent within your industry. But most importantly, you should look for the key differences across the sites, as these will help form the basis of future test ideas. Maybe all your competitors have a guest checkout – this test concept could have been at the bottom of your backlog, but now that you have more context on the industry, you will look at your prioritisation differently.
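The site-by-site notes from this mapping exercise can be captured in a simple structure keyed by site and funnel step, making contrasts at any one step easy to pull out (the site names and notes below are purely illustrative):

```python
# Hypothetical competitor-journey notes, keyed by site and then funnel step.
notes = {
    "Our site":     {"Checkout": "Account required before payment"},
    "Competitor A": {"Checkout": "Guest checkout offered"},
    "Competitor B": {"Checkout": "Guest checkout offered"},
}

def differences_at(step, sites):
    """Return each site's note for one funnel step, so contrasts stand out."""
    return {site: pages.get(step, "-") for site, pages in sites.items()}

print(differences_at("Checkout", notes))
```

Even a spreadsheet with this shape (sites as rows, funnel steps as columns) does the job; the point is that the comparison is per-step rather than per-site.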

A step further

Now that we have a better understanding of what your competitors are doing in general, let’s take a more focused look at a key element. Using my earlier example of a guest checkout, here is how we would explore this idea.

Visual map of competitor funnel (click to view)

Once again, we are mapping out the flow – but here we would focus on plotting each step of the guest checkout process, comparing each competitor’s execution at each step. This is a great point to go beyond your competitors and look more broadly at how other companies are addressing this.

Looking further ahead, you may want to do a competitor analysis that looks at a specific lever, e.g. how other sites present social proof to users, or the ways in which you can include trust elements online. The possibilities are (almost) endless. Always remember, though, that a competitor analysis should have a goal or key question that you are seeking to answer.

When combined effectively with other qualitative insights such as usability testing and on-site surveys, a competitor analysis can give you a really focused understanding of how your customers behave as well as inspiration for how to improve your website experience.

Through testing these ideas, you will gain a clear understanding of what works best for your users and how to make your website stand out from the crowd.

Look out for our next article in this series, where we discuss the importance of heat-maps and scroll-maps.

Conversion’s first independent event: What framework for personalisation?

Recently, we hosted our first ever independent event (we know, big deal!) and we chose to kick things off with one of the most en vogue topics of the moment in CRO – personalisation.

Led by Kyle Hearnshaw, Head of Conversion Strategy (…and personalisation expert), and Stephen Pavlovich, our CEO, the event was held in a roundtable format. We wanted to engage with other practitioners from the industry and discuss five key themes:

  1. Cutting through the noise: Debunking personalisation myths
  2. What does personalisation mean to your business?
  3. Where do conversion optimisation, experimentation and personalisation meet?
  4. Is website personalisation right for your business? How can you tell?
  5. A framework for personalisation strategy

What’s up with personalisation?

With personalisation said to be just around the corner since at least 2014, we saw a good opportunity to get a real sense of what stage some leading organisations are at with personalisation, as well as what it means to the business.

We were not surprised to see that most of the companies present at the event were only just getting started with personalisation, while some had it on their radar but were yet to begin. It was clear, however, that no organisation could claim to be well underway with its personalisation programme.

These initial discussions confirmed our hypothesis: everyone thinks that others are doing personalisation, but in reality very few companies are, because of the expectations and complexity it poses.

So, how do you do it?

We created a four-step framework to enable any company to use personalisation within experimentation.

1. Define goals and KPIs

“Why should we run this personalisation campaign?” This is the paramount question you should be asking yourselves. The first step when considering personalisation is to define the goals and KPIs that will be used to measure success. For example, a goal could be to increase repeat customer revenue, with conversion rate and average order value (AOV) as the main KPIs.
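To make step 1 concrete, here is a minimal sketch of how a team might record a campaign's goal and KPIs before any capability work begins. The class and field names are invented for illustration; they are not part of any real personalisation tool.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalisationGoal:
    """Hypothetical record of why a campaign exists and how success is measured."""
    goal: str                                       # the "why" behind the campaign
    kpis: list[str] = field(default_factory=list)   # metrics used to judge success

# The example goal and KPIs from the text above.
repeat_revenue = PersonalisationGoal(
    goal="Increase repeat customer revenue",
    kpis=["conversion rate", "average order value (AOV)"],
)

print(repeat_revenue.goal)
print(", ".join(repeat_revenue.kpis))
```

Writing the goal down in a structured form like this keeps later steps (capability evaluation, audience selection) anchored to a single agreed objective.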

2. Evaluate capability 

The second step is to evaluate capability around our goals and KPIs. We aim to confirm whether it is possible to act on these and how we can do it.

You might wonder why this isn’t the first step of the framework. The reason is that evaluating capability can be a big, time-consuming task. If you don’t have a clear objective to evaluate your capabilities against, you could end up spending a lot of time looking for capabilities that aren’t actually needed. Defining the goals and KPIs keeps us focused on answering whether we have the data required to target specific users and, if so, whether that data is accessible on the site for us to use in testing.

First set the goals you would like to achieve, then evaluate whether they are achievable and how. Don’t decide what is possible first and then shoehorn in a goal that fits.

3. Identify and prioritise audiences

The third step is the big one: this is where you identify and prioritise the audiences or audience groups for your personalisation project.

How do you know who you should target? What matters most here is that your audience is meaningful.

A meaningful audience is one that is identifiable, impactful and shows distinct behaviour. This means that each audience needs a clear profile that defines how a user in that group is identified and targeted. Audiences need enough volume and value to be worth the effort and users should behave differently enough to merit a personalised experience.
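The three criteria above can be applied mechanically when shortlisting candidate audiences. The sketch below is purely illustrative (the audiences, targeting rules, and visitor threshold are all invented), but it shows how "identifiable, impactful and distinct" translate into checks.

```python
def is_meaningful(audience: dict, min_monthly_visitors: int = 5000) -> bool:
    """Apply the three 'meaningful audience' criteria from the text.

    Thresholds and field names are assumptions for this sketch.
    """
    identifiable = audience["targeting_rule"] is not None   # can we identify and target them?
    impactful = audience["monthly_visitors"] >= min_monthly_visitors  # enough volume/value?
    distinct = audience["behaviour_differs"]                # behaves differently enough?
    return identifiable and impactful and distinct

# Invented candidate audiences for illustration.
candidates = [
    {"name": "repeat buyers", "targeting_rule": "orders >= 2",
     "monthly_visitors": 12000, "behaviour_differs": True},
    {"name": "weekend browsers", "targeting_rule": None,     # no way to identify them
     "monthly_visitors": 40000, "behaviour_differs": False},
]

shortlist = [a["name"] for a in candidates if is_meaningful(a)]
print(shortlist)  # only audiences that meet all three criteria survive
```

In practice the "impactful" check would weigh value as well as volume, but even a crude filter like this prevents effort being spent on audiences that can never be targeted.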

4. Experiment 

This is the last step! Now that we have our audiences defined, each audience can be treated as its own conversion optimisation project, where we look to understand the key conversion levers that influence that audience’s behaviour, and then experiment on them.

Realistically, each organisation will have more than one goal and KPI. We gathered from our event that potential metrics for personalisation projects were not limited to the number of orders and amount of revenue; the number of customers who visit the store, or the number of driver downloads on a support site, could also be worthwhile.

What should you do next? 

Now that we have a process tailored to personalisation, we can all start straight away, right?

Well, this depends on where your organisation sits on the maturity model for experimentation and conversion optimisation. Personalisation requires a deeper understanding of your users than A/B testing does, and should only be approached once you have reached the higher levels of experimentation maturity.

If you are just getting started with experimentation, we would recommend you first focus on gaining insights into your users and maximising the gains you can get from general experimentation and conversion rate optimisation. Personalisation is a long-term investment. So, if your organisation isn’t ready today, positioning yourself on the maturity model will help you to plan the steps you need to take to get there.

If your company lives and breathes experimentation, and you are considering optimising conversion further by increasing the relevance of customer experiences through personalisation, it is crucial that you take the time to integrate it in your wider digital strategy. Get support from the business, as it is likely that you will meet similar challenges to the ones that we have heard from clients that are already doing personalisation: lack of resources, difficulty in proving the value of personalisation and internal political issues (e.g. crossover between departments and markets).

Overall, we are extremely proud to have organised our first independent event and glad to know that everyone who attended the event left learning something new and, we are convinced, with plenty of ideas to take back to the office.

Looking to develop your approach to personalisation? If you have a question about how we can help you, then please do get in touch.


Talking Shop

As published in ERT Magazine, October 2017 issue

Alexa and her friends may be delighting users in the home with how they can make life easier, but some companies are taking the first bold steps into voice-controlled e-commerce…

The smart-home revolution is in full swing.

The success of the Amazon Echo and its Alexa ‘skills’ platform and the launch of Google Home have taken the idea of voice control and voice-controlled e-commerce from a novelty concept to a legitimate potential revenue channel for retailers willing to take the risk.

Early brands to explore this opportunity include Uber and Just Eat, and earlier this summer Domino’s Pizza launched its Alexa skill in the UK after over a year of offering the same in the US. This allows you to order pizza with just a few words. We’ve yet to see data on how many sales these brands are generating through their voice-control channels, but the phased deployment from Domino’s certainly suggests they are seeing enough value to justify the investment.

Designing a successful voice-controlled experience isn’t going to be easy. Looking at this from a user experience and conversion rate perspective, voice control is a whole new touch-point and interaction type to understand. In traditional conversion rate optimisation for e-commerce sites, potential reasons why a user might abandon and not complete a purchase fall into two categories – usability and persuasion.

Usability issues would be anything that physically prevents the user from being able to complete their desired action – broken pages, links or problems with completing a form or online checkout.

As for persuasion – even a site with no usability issues wouldn’t convert 100 percent of its visitors. There will always be an element in the user’s decision-making process around persuasion. Have they been sufficiently convinced to purchase this product or service? Typical persuasion issues include failing to describe the benefits of a product.

So what does the future look like in a voice-controlled world?

In traditional e-commerce, the user is free to make their own journey through a website and we enable this freedom by displaying a range of content, products, deals and offers, navigation options and search functionality. With voice control, the possible journeys to purchase are far fewer and almost completely invisible to the user at the outset. So with an Alexa skill, the developer must define the possible trigger phrases that the user can use to take a certain set of defined actions.
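The "defined trigger phrases" idea can be sketched in a few lines. This is not Amazon's actual skill-definition format; the phrases, intent names, and matching logic below are invented purely to show the constraint: any utterance the developer has not anticipated falls through to a fallback.

```python
# Hypothetical mapping from user utterances to a small set of defined intents,
# in the style of a voice-skill interaction model (names are illustrative).
TRIGGER_PHRASES = {
    "order my usual pizza": "OrderUsualIntent",
    "reorder my last order": "OrderUsualIntent",
    "track my order": "TrackOrderIntent",
}

def resolve_intent(utterance: str) -> str:
    """Return the matching intent, or a fallback for anything undefined."""
    return TRIGGER_PHRASES.get(utterance.lower().strip(), "FallbackIntent")

print(resolve_intent("Track my order"))     # a phrase the developer defined
print(resolve_intent("what's for dinner"))  # anything else hits the fallback
```

Real platforms do fuzzier matching than an exact dictionary lookup, but the underlying point stands: the journey space is fixed in advance by the developer, and invisible to the user until they guess a phrase that works.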


Skill and experience in voice interaction design will emerge as a crucial requirement for any team looking to develop this channel. Collecting and analysing data on how users are invoking your app/skill, what exact words and phrases they’re using, how they’re describing your products and service and how they’re talking to your app through their journey, will be an essential part of experience optimisation.

Another area that will dominate user experience for voice control will be how the app responds to user mistakes. Frustration will be the worst enemy of voice-controlled services, far more so than it is with websites now. If you’ve been unlucky enough to have to call an automated helpline that uses voice control, you will know how quickly the frustration builds when something goes wrong.

On a website, if the user gets stuck or confused on their journey, it’s relatively easy for them to go back or to navigate away from the page and try again. With voice-control, this isn’t the case. If the user tries a command that isn’t recognised by the app, then it can only respond with a quick error response. Failure to re-engage the user and keep them trying will quickly result in frustration and even abandonment.
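One way to re-engage rather than repeat a terse error is to escalate the fallback response as failures accumulate. The wording, limits, and structure below are assumptions for this sketch, not a documented platform feature.

```python
# Escalating reprompts: gentle first, then concrete suggestions, then a reset.
# All copy here is invented for illustration.
REPROMPTS = [
    "Sorry, I didn't catch that. Could you say it again?",
    "You can say things like 'order a pizza' or 'track my order'.",
    "I'm still having trouble. Let's start over from the main menu.",
]

def fallback_response(consecutive_failures: int) -> str:
    """Pick a reprompt based on how many commands in a row went unrecognised."""
    # Cap the index so repeated failures reuse the final, most helpful prompt
    # instead of running off the end of the list.
    index = min(consecutive_failures - 1, len(REPROMPTS) - 1)
    return REPROMPTS[index]

print(fallback_response(1))  # first miss: gentle reprompt
print(fallback_response(5))  # persistent misses: offer a reset
```

The design choice here is to give the user more help, not the same error, on each successive failure; the goal is to keep them trying before frustration leads to abandonment.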


So how do you persuade a user to complete their purchase once they’ve started their voice-controlled interaction? How would you describe the benefits of a certain washing machine, laptop or TV when they can only be spoken, and spoken by a robotic voice at that?

The development of chatbots in the past couple of years has seen a lot of investment and progress on how to make an automated response appear human and more engaging. But this development has all been in how to present text responses rather than voice responses. Voice responses are inherently more complex.

Will developments in Alexa’s AI allow her to improvise responses based on prior knowledge of the user? Personalisation within the voice space could allow Alexa to make tailored recommendations based on my purchase history.

“Alexa, look on Currys for a new kettle.”

“Ok Kyle. There’s a black Breville kettle that would look great with the Breville toaster you bought last month. It’s £39. Is that OK?”

“Sounds good.”

“You bought your last kettle 18 months ago. Shall I add the three-year warranty on this one for an extra £9.99?”

I’m sold.