
People are aware of cognitive biases but do we know what to do about them?

Decision making is part of our everyday lives. We ask ourselves, “Should I have a coffee or a tea? Should I take the bus or the tube today? How should I respond to this email?”

But are we really aware of just how many decisions the average human makes in just one day? Go on, have a guess…

On average, we make a staggering 35,000 decisions per day! Taking into account the 8 or so hours we spend asleep, that works out at over 2,100 decisions per waking hour. If we thought consciously about each decision, we would face a debilitating challenge that would prevent us from living our normal lives. Thankfully, our brains have developed shortcuts, called heuristics, which allow us to make judgements quickly and efficiently, simplifying the decision-making process.

Heuristics are extremely helpful in many situations, but they can result in errors in judgement when processing information – this is referred to as a cognitive bias.

How can cognitive biases impact our decisions?

Cognitive biases can lead us to false conclusions and, as a consequence, influence our future behaviour.

In order to illustrate this, I am going to take you through a famous study described by Daniel Kahneman, showing the impact of the anchoring bias. In the experiment, a group of judges, each with over 15 years’ experience, were asked to look at a case in which a woman had been caught shoplifting multiple times.

In between reviewing the case and suggesting a possible sentence, the judges were asked to roll a pair of dice. Unbeknown to the judges, the dice were the “anchor”: they were rigged to always total either 3 or 9.

Astonishingly, the number rolled anchored the judges when making their sentencing recommendations. Those who rolled 3 sentenced the woman to an average of 5 months in prison; those who threw 9 sentenced her to 8 months.

If judges with over 15 years’ experience can be influenced so easily by something so arbitrary, on a decision so important, then what hope do the rest of us have?

Another example of biases impacting important decisions can be found in the Brexit campaigns. We can all remember the “£350 million a week” bus, which suggested that instead of sending that money to the EU, we could use it to fund the NHS.

There were many other examples of false stories published in the British media. These shocking statements are influential because humans have a tendency to treat claims that come readily to mind as more concrete and valid. This is an example of the availability bias.

But how is this relevant for experimentation?

With experimentation, we are tasked with changing the behaviour of users to achieve business goals. The user is presented with a situation and stimuli that shape their emotional responses and dictate which cognitive biases affect their decision making.

When we run experiments without taking this into account, we superficially cover up problems rather than addressing their root causes. In order to truly change behaviour we must change the thought process of the user. This is where our behavioural bias framework comes into play…

Step 1. Ensure you have established your goal. Without a goal you will not be able to determine the success of your experiments.

Step 2. Identify the target behaviours that need to occur in order to achieve your goal. At this point it is important to analyse the environment you have created for your users. What stimulus is there to engage them? What action does the user need to take to achieve the goal? Is there a loyal customer base that returns and carries out the desired actions again and again?

Step 3. Identify how current customers behave. Is there a gap between current behaviours and target behaviours?

Step 4. Now start pairing current negative biases with counteracting biases. At this point, research is imperative. Your customers will behave differently depending on their environmental, social and individual contexts. Research methods you can use include surveys, moderated and unmoderated user testing, and evidence from previous tests, as well as scientific research. Both Google Scholar and DeepDyve are excellent scientific research resources.

Step 5. Which is the best solution to test?

There are three important things to consider at this point (a simple scoring sketch follows below):

  • Value – What is the return for the business?
  • Volume – How many visitors will you be testing?
  • Evidence – Have you proven value in this area in previous tests?
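To make Step 5 concrete, here is a minimal sketch of how the Value–Volume–Evidence check could be turned into a simple score. The 1–5 scales, the unweighted average and the example concepts are illustrative assumptions rather than a fixed part of the framework.

```python
# Minimal sketch: scoring candidate solutions on Value, Volume and Evidence.
# Scales, weighting and example concepts are illustrative assumptions.

candidates = [
    # value: return for the business (1-5)
    # volume: visitors the test will reach (1-5)
    # evidence: prior proof of value in this area (1-5)
    {"name": "Add a fork to product imagery", "value": 4, "volume": 5, "evidence": 3},
    {"name": "Reorder reviews by helpfulness", "value": 3, "volume": 4, "evidence": 2},
    {"name": "Clarify delivery messaging", "value": 2, "volume": 5, "evidence": 4},
]

def priority_score(candidate):
    # Simple unweighted average of the three factors.
    return (candidate["value"] + candidate["volume"] + candidate["evidence"]) / 3

for candidate in sorted(candidates, key=priority_score, reverse=True):
    print(f'{candidate["name"]}: {priority_score(candidate):.1f}')
```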

Joining the dots.

To bring this framework to life I’m going to run through an example…

Let’s pretend I work for a luxury food brand. I have identified my goal, which is purchases, and mapped out how my current users behave on the site. I find that users are exiting the site while browsing product pages. Product pages are one of our highest-priority areas.

I have conducted a website review, which flagged some negative customer reviews. This is not a big issue for us – after all, taste is individual and we have an abundance of positive reviews – but it does seem to be a sticking point for users.

A potential bias at play causing users to exit is the negativity bias. This bias tells us that things of a negative nature have a greater impact than neutral or positive things.

Instead of removing the negative reviews, we are going to maintain the brand’s openness to feedback and leave them on site. Nevertheless, we still want to reduce the exit rate, so we are going to test a counteracting bias: the visual depiction effect.

The visual depiction effect states that people are more inclined to buy a product when it is shown in a way that helps them visualise themselves using it. So in our product images we will now add a fork (this study was actually conducted! Check it out).

The results from the experiment will determine whether our counteracting bias (visual depiction effect) overcame the current one (negativity bias).
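As a rough illustration of how that result might be read, below is a minimal sketch of a pooled two-proportion z-test comparing purchase rates between the control and the fork-image variant. The visitor and purchase figures are hypothetical; in practice your testing tool will report significance for you.

```python
# Minimal sketch: is the variant's purchase rate higher than the control's?
# All figures are hypothetical.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-test; returns z and the one-sided p-value for B > A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # normal CDF via erf
    return z, p_value

# Hypothetical results: purchases out of product-page visitors in each arm.
z, p = two_proportion_z(conv_a=410, n_a=10_000, conv_b=468, n_b=10_000)
print(f"z = {z:.2f}, one-sided p = {p:.3f}")
```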

So, to conclude… the behavioural bias framework should be used to understand the gap between your customers’ current behaviours and your intended goal. This will allow you to hypothesise the potential biases at play and run experiments that bridge the gap between existing and aspirational behaviours.

To find out more about our approach to experimentation, get in touch today!

Reactive or proactive: The best approach to iteration

Iterating on experiments is often reactive and treated as an afterthought. A lot of time is spent producing a ‘perfect’ test, and if the results are unexpected, iterations are run as a last hope of salvaging value from the time and effort already spent. But why try to execute the perfect experiment at the first attempt, and postpone the learnings you could uncover along the way, when you could run a minimum viable experiment and iterate on it?

Experimentation is run at varying levels of maturity (see our Maturity Model for more information). However, we see businesses getting stuck in the infant stages time and time again because of their focus on individual experiments. We see teams wasting time and resource trying to run one ‘perfect’ experiment when the core concept has not been validated.

To validate levers quickly without over-investing resource, we should execute hypotheses in their simplest form – the minimum viable experiment (MVE). From there, a successful MVE gives you the green light to test more complex implementations, while a failure flags problems with the concept or execution early on.

A few years ago, we learnt the importance of this approach the hard way. On the back of a single hypothesis for an online real estate business – ‘Adding the ability to see properties on a map will help users find the right property and increase enquiries’ – we built a complete map view in Optimizely. A significant amount of resource was spent, only for the experiment to show that the map had no impact on user behaviour. What should we have done? Run an MVE requiring the minimum resource needed to test the concept. What would this have looked like? Perhaps a fake door test to gauge user demand for the map functionality.

This blog aims to give:

  • An understanding of the minimum viable approach to experimentation
  • A view of potential challenges and tips to overcome them
  • A clear overview of the benefits of MVEs

The minimum viable approach

A minimum viable experiment looks for the simplest way to run an experiment that validates the concept. This type of testing isn’t about designing ‘small tests’; it is about running specific, focused experiments that give you the clearest signal of whether or not the hypothesis is valid. Of course, it helps that MVEs are often small, so we can test quickly! It is important to challenge yourself by assessing every component of the test and how likely it is to affect the way users respond to the experiment. That way, you will use your resource efficiently while still getting a clear read on the validity of the concept. Running the minimum viable experiment allows you to validate your hypothesis without over-investing in levers that turn out to be ineffective.

If the MVE wins, iterations can be run to find the optimal execution, gaining learnings along the way. If the test loses, look at the execution more thoroughly and determine whether poor execution affected the result. If so, re-run the MVE; if not, bin the hypothesis to avoid wasting resource on unfruitful concepts.

All hypotheses can be reduced to an MVE; see below for a visual example of an MVE testing stream.

Potential challenges to MVEs and tips to overcome them

Although this approach is highly effective, it is often not fully understood, which results in pushback from stakeholders. Stakeholders are invested in the website and protective of their product. As a result, they often expect experimentation to test a perfect execution of a solution that could be implemented immediately should the test win. What is not considered is the huge amount of resource this would require before there is any evidence that the hypothesis is correct or that the style of execution is optimal.

To overcome this challenge, we work with experimentation, marketing and product teams to challenge assumptions around MVEs. This education piece is pivotal for stakeholder buy-in. Over the last 9 months, we have been running experimentation workshops with one of the largest online takeaway businesses in Europe, and a huge focus of these sessions has been the minimum viable experiment.

Overview of the benefits of MVEs

Minimum viable experiments have a multitude of benefits. Here, we aim to summarise a few of these:

Efficient experiments

The minimum viable experiment of a concept allows you to utilise the minimum amount of resource required to see if a concept is worth pursuing further or not.

Validity of the hypothesis is clear

Executing experiments in their simplest form ensures the impact of the changes is evident. As a result, concluding on the validity of the hypothesis is uncomplicated.

Explore bigger solutions to achieve the best possible outcome

Once the MVE has been proven, this justifies investing further resource in exploring bigger solutions. Iterating on experiments allows you to refine solutions to achieve the best possible execution of the hypothesis.

Key takeaways

  • A minimum viable experiment involves testing a hypothesis in its simplest form, allowing you to validate concepts early on and optimise the execution via iterations.
  • Pushback on MVEs is usually due to a lack of awareness of the process and the benefits it yields. Educate teams to show how effective this type of testing is, not only in reaching the best possible final execution but also in using resource efficiently.
  • The main benefit of the minimum viable approach is that you spend time and resource on levers that impact your KPIs.

Talking PIE over breakfast – our prioritisation workshop

Recently, we continued our workshop series with one of our solutions partners, Optimizely, discussing the prioritisation of experiments.

The workshop session was led by Kyle Hearnshaw, Head of Conversion Strategy at Conversion.com, with support from Stephen Pavlovich, CEO and Nils Van Kleef, Solutions Engineer at Optimizely.

In our most popular workshop to date, we gathered over 40 ecommerce professionals, including representatives from brands such as EE, John Lewis and Just Eat, all keen to talk about one of their biggest challenges – prioritisation. Throughout the morning we discussed why we prioritise, popular prioritisation methods and, finally, how we at Conversion.com prioritise experiments.

For those of you who couldn’t make the session, we want to share some insights into prioritisation so you too can apply the learnings next time you are challenged with prioritising experiments. Keep an eye on our blog too, as later in the year we’ll be posting a longer step-by-step explanation of our approach to personalisation.

Why Prioritise?

There is rarely a shortage of ideas to test. However, we are often faced with a shortage of resource to build, run and analyse experiments, as well as a shortage of traffic to run those experiments on. We need to make sure we prioritise the experiments that will do the most to help us achieve our goal in the shortest time.

So, what is the solution?

Popular Prioritisation Methods

To identify the tests that deliver the maximum impact with the most efficient use of resource, we need an effective prioritisation method. So, let’s take a look at what is out there:

1. PIE model

Potential: How much improvement can be made on the pages?

Importance: How valuable is the traffic to the pages?

Ease: How complicated will the test be to implement on page / template?

We think PIE is simple and easy to use, analysing only three factors. However, the scoring can be very subjective and there can be overlap between Potential and Importance.

2. Idea Scores from Optimizely 

Optimizely’s method is an extended version of the PIE model, adding the factor of ‘love’ into the equation. Again, we commend this model for its simplicity; however, the individual scores are still subjective, which means the overall score can be too.

3. PXL model from ConversionXL

The PXL model is a lot more complex than the previous two, giving weight to data and insight, which we think is very important. In addition, the PXL model goes some way towards eliminating subjectivity by limiting scoring to either 1 or 0 in most columns. One limitation is that it doesn’t account for differences in page value, rather than just page traffic, nor does it give you a way to factor in learnings from past experiments. It also has the potential to be very time-consuming, and you may not be able to complete every column for every experiment.

Prioritisation at Conversion

When deciding on our prioritisation model, we wanted to ensure that we were prioritising the right experiments, making sure the model accounted for insights and results, removing any possibility for subjectivity and allowing for the practicalities of running an experimentation programme. So, we came up with the SCORE model: Strategy, Concepts, Order, Roadmap, Execution.

The biggest difference with our approach is that prioritisation happens at two separate stages. We want to avoid a situation where we are trying to prioritise a large number of experiments with different hypotheses, KPIs, target audiences and target pages against each other. In our approach, individual experiments are prioritised at the ‘Order’ stage; however, we minimise the need to prioritise experiments directly against each other by first prioritising at the strategy stage.

We use our experimentation framework to build our strategy by defining a goal, agreeing our KPIs and then prioritising relevant audiences, areas and levers. Potential audiences we can experiment on are prioritised on volume, value and influence. Potential areas are prioritised on volume, value and potential. Levers – what user research has shown could influence user behaviour – are prioritised on win rate (if we’ve run experiments on this lever before), confidence (how well supported the lever is in our data) or both.

Next, we ensure we cultivate the right ideas for our concepts. We believe structured ideation around a single hypothesis generates better ideas. Again utilising the experimentation framework, we define our hypotheses: “We believe [lever] for [audience], on [area] will impact [KPI].” Once the hypothesis has been defined, we brainstorm the execution.
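As an illustration only, that template can be treated as a small structured object, which helps keep ideation anchored to a single lever, audience, area and KPI. The field values below are hypothetical.

```python
# Minimal sketch: the hypothesis template as a structured object.
# Field values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    lever: str     # what we believe influences behaviour
    audience: str  # who we are targeting
    area: str      # where on the site the change applies
    kpi: str       # the metric we expect to move

    def statement(self) -> str:
        return (f"We believe {self.lever} for {self.audience}, "
                f"on {self.area} will impact {self.kpi}.")

h = Hypothesis(lever="social proof", audience="new visitors",
               area="product pages", kpi="purchase rate")
print(h.statement())
```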

The order of our experiments comes from prioritising the concepts that come out of our ideation sessions. Concepts can be validated quickly by running minimum viable experiments. MVEs allow us to test concepts without over-investing and also allow us to test more hypotheses in a shorter timeframe.

Next, we create an effective roadmap. We start by identifying the main swimlanes (pairs of audiences and areas that can support experiments) and then estimate experiment duration based on a minimum detectable effect. A roadmap should include tests across multiple levers; this allows you to gather more insight and spreads the risk of over-emphasising one area.
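As a rough sketch of that duration estimate, the standard two-proportion sample-size formula can translate a baseline conversion rate, a minimum detectable effect and a swimlane’s daily traffic into an approximate run time. The traffic and conversion figures below are illustrative assumptions.

```python
# Minimal sketch: estimating experiment duration from a minimum detectable effect.
# Baseline conversion rate, MDE and daily traffic are illustrative assumptions.
from math import ceil

def visitors_per_variant(baseline_cr, relative_mde, z_alpha=1.96, z_beta=0.84):
    # z_alpha: two-sided 5% significance; z_beta: 80% power.
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_mde)
    numerator = (z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(numerator / (p2 - p1) ** 2)

n = visitors_per_variant(baseline_cr=0.04, relative_mde=0.10)  # detect a 10% uplift
daily_visitors_in_swimlane = 6_000
duration_days = ceil(2 * n / daily_visitors_in_swimlane)       # control + one variant
print(f"{n} visitors per variant, roughly {duration_days} days to run")
```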

Finally, it’s time to run and analyse the experiments (execution).

We believe our SCORE model is effective for prioritising experimentation projects because it puts the emphasis on getting the strategy right first, before ever trying to prioritise experiments against each other. It is structured, rewards data and insight, and allows for the practicalities of experimentation – we can review and update our strategy as new data comes in. The only limitation is that it takes time to prioritise the strategy effectively. But if we’re going to invest time anywhere, we believe it should be in getting the strategy right.

Our conclusions

The workshop was a great success. We had some great feedback from those involved and some actionable ideas for our attendees to take away.

We recommend having a go at using the SCORE prioritisation model. In the next few weeks we’ll be sharing a detailed post on our experimentation framework, but you can apply SCORE within your own approach by reviewing how you define and prioritise your experimentation strategy. See whether this helps you to produce a roadmap that is informed by data and insight, free of subjectivity and effective in helping your business test the most valuable ideas first.

If you have any questions or would like to know more, please get in touch.

To attend our future events, keep an eye out here.

Conversion.com hosts… ‘Experimentation Maturity: What advanced testing teams do differently’

We are very excited about the release of our Ecommerce Performance Report for 2018, in partnership with Econsultancy. The report covers various concepts in depth, ranging from the growth of the ecommerce market to the future of experimentation. However, one thing in particular got us talking at HQ – experimentation maturity.

Of the 400 ecommerce professionals surveyed, 50% stated that they perceived the value of experimentation to be high or very high; however, only 14% stated that their business recognised it as a strategic priority. The disparity between perceived value and strategic prioritisation got us thinking about the roadblocks to experimentation maturity and how we can overcome them.

Kyle Hearnshaw, Head of Conversion Strategy; Stephen Pavlovich, our CEO; and James Gray, Senior Optimisation Manager at Just Eat, led our second independent event, where we brought practitioners from across the industry together to discuss five key themes around experimentation maturity:

  1. What defines a mature approach to experimentation and conversion optimisation?
  2. How can you measure your growth in maturity over time?
  3. How does your organisation stack up compared to 450+ respondents in our report?
  4. What challenges do we face in developing maturity?
  5. How can you overcome these challenges?

Measuring experimentation

In the past we have seen organisations use the size of the experimentation team, the number of experiments launched or the complexity of experimentation as metrics to measure their experimentation maturity.

At Conversion.com, we believe such metrics should be avoided and that maturity should be measured against quality. This means moving the goalposts so that we are benchmarking against experimentation goals, experimentation strategy and data and technology strategy.

What are you trying to achieve via experimentation? 

Setting goals by which to measure the success of experimentation is pivotal for any organisation. At Conversion.com, we recommend following three steps to develop robust experimentation goals:

  1. List your key business challenges – the most mature experimentation programmes drive the strategic direction of businesses. If you feel a long way off this, don’t panic. Listing business challenges and making sure experiments have a measurable impact against these is a huge step on the journey to maturity.
  2. Set a specific goal for experimentation – individual experiments need specific goals and KPIs. In order to translate business challenges into specific experimentation goals we recommend using a goal tree map.
  3. Plan a roadmap to develop maturity – recognising your business’ position on the maturity scale is important in identifying necessary steps to reach maturity. At Conversion.com, we have created an experimentation maturity assessment so you can easily plot where you sit on the scale; be honest, a realistic benchmark allows you to identify the key steps to take in order to progress.

How do you organise and deliver experimentation? 

Experimentation strategy often gets forgotten; as a consequence, holistic experimentation can be derailed and the link between business strategy and experimentation goals can break.

To prevent this disconnect, we recommend that you:

  1. Set regular points to review strategy – stand back at regular intervals to look at the big picture. Involve stakeholders from across the business and brainstorm strategic priorities.
  2. Organise experiments into projects/themes – ad hoc tactical experiments can be chaotic; therefore, we recommend defining projects that group related research, experiments and iterations. This allows goals and outcomes to be measured at a project level.
  3. 10x your communication – The goal here is to get more people involved in experimentation. Some great examples were shared at our event – Just Eat talked us through their approach to ensuring experimentation strategy is shared and reviewed across areas of the business with their experimentation forum. Within this forum every product manager pitches their experiments, detailing what they’d like to test and how they plan to measure the results. We think this is an excellent way to create open communication across a company.

How do you measure the impact of experimentation?

Evaluating the impact of experimentation is pivotal in order to determine the success of testing. But what measurements are important?

  1. Define standards on experiment data – at Conversion.com, we talk about ‘North Star’ metrics. Experiments that have an impact on these metrics should get noticed by senior stakeholders. This tactic resonated with those at our maturity event – organisations voiced that they struggled to get executive buy-in without proving value through results.
  2. Strive for a single customer view – a single customer view is an aggregated, uniform and comprehensive representation of customer data and behaviour. Going beyond single conversion metrics is a huge leap on the path to maturity; however, it is not easily achieved. We recommend integrating testing tools with business intelligence tools in order to gather data such as lifetime value.
  3. Build in segmentation as early as possible – no matter where you are on the maturity scale, we highly recommend identifying key audiences in order to lay the foundations for personalisation. With this in place, you can report on audience behaviours within experiments and uncover greater insights (a minimal sketch of this kind of segment breakdown follows below).
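As a minimal illustration of that segment-level reporting, the sketch below breaks a single experiment’s results down by a new-vs-returning audience split. All figures are hypothetical.

```python
# Minimal sketch: reporting one experiment's results by audience segment.
# Visitor and conversion counts are hypothetical.
results = [
    # (segment, variant, visitors, conversions)
    ("new",       "control", 5_000, 190),
    ("new",       "variant", 5_000, 235),
    ("returning", "control", 3_000, 180),
    ("returning", "variant", 3_000, 177),
]

for segment in ("new", "returning"):
    rates = {variant: conv / n for seg, variant, n, conv in results if seg == segment}
    uplift = rates["variant"] / rates["control"] - 1
    print(f"{segment}: control {rates['control']:.1%}, "
          f"variant {rates['variant']:.1%}, uplift {uplift:+.1%}")
```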

So, what next? 

Identify your organisation’s maturity level using our maturity model. Whether you are just getting started or are an advanced team, there are actions you can take to reach the highest levels of maturity.

We thoroughly enjoyed hosting our second independent event. Insightful discussions were had across a variety of organisations, and we are proud to have offered advice to help them progress towards experimentation maturity.

If you’d like to attend future events, keep an eye on our events page.