Talking PIE over breakfast – our prioritisation workshop


Recently, we continued our workshop series with one of our solutions partners, Optimizely, discussing the prioritisation of experiments.

The workshop session was led by Kyle Hearnshaw, Head of Conversion Strategy at Conversion.com, with support from Stephen Pavlovich, CEO of Conversion.com, and Nils Van Kleef, Solutions Engineer at Optimizely.

Our most popular workshop to date, the session gathered together over 40 ecommerce professionals, including representatives from brands such as EE, John Lewis and Just Eat, all keen to talk about one of their biggest challenges – prioritisation. Throughout the morning we discussed why we prioritise, looked at popular prioritisation methods and, finally, explained how we at Conversion.com prioritise experiments.

For those of you who couldn’t make the session, we want to share some insights into prioritisation so you too can apply these learnings the next time you are challenged with prioritising experiments. Keep an eye on our blog too, as later in the year we’ll be posting a longer step-by-step explanation of our approach to personalisation.

Why Prioritise?

There is rarely a shortage of ideas to test. However, we are often faced with a shortage of resources to build, run and analyse experiments, as well as a shortage of traffic to run those experiments on. We need to make sure we prioritise the experiments that will do the most to help us achieve our goal in the shortest time.

So, what is the solution?

Popular Prioritisation Methods

To identify the tests that will have the maximum impact while making efficient use of resources, we need to find the most effective prioritisation method. So, let’s take a look at what’s out there:

1. PIE model

Potential: How much improvement can be made on the pages?

Importance: How valuable is the traffic to the pages?

Ease: How complicated will the test be to implement on the page or template?

We think PIE is simple and easy to use, analysing only three factors. However, the scoring can be very subjective, and there can be an overlap between Potential and Importance.
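If you want to automate PIE, it is trivial to script. Here’s a minimal sketch in Python, assuming the commonly used 1–10 scale for each factor and a simple average of the three (the convention from WiderFunnel, where PIE originated) – the example ideas and scores are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class PieIdea:
    name: str
    potential: float   # 1-10: how much improvement can be made on the page?
    importance: float  # 1-10: how valuable is the traffic to the page?
    ease: float        # 1-10: how easy is the test to implement?

    @property
    def pie_score(self) -> float:
        # PIE is conventionally the simple average of the three scores.
        return (self.potential + self.importance + self.ease) / 3

ideas = [
    PieIdea("Simplify checkout form", potential=8, importance=9, ease=4),
    PieIdea("Rewrite homepage headline", potential=5, importance=6, ease=9),
]

# Highest PIE score first.
for idea in sorted(ideas, key=lambda i: i.pie_score, reverse=True):
    print(f"{idea.name}: {idea.pie_score:.1f}")
```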

2. Idea Scores from Optimizely 

Optimizely’s method is an extended version of the PIE model, adding the factor of ‘love’ into the equation. Again, we commend this model for its simplicity; however, because each factor is still scored by judgement, the overall score remains subjective.
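Building on the PIE sketch above, the extension amounts to one extra term. The equal weighting below is our assumption for illustration, not a formula specified by Optimizely:

```python
def idea_score(potential: float, importance: float,
               ease: float, love: float) -> float:
    # Hypothetical equal weighting of the four factors; adjust the
    # weights if 'love' should count for more or less in your team.
    return (potential + importance + ease + love) / 4

print(idea_score(potential=8, importance=9, ease=4, love=7))  # 7.0
```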

3. PXL model from ConversionXL

The PXL model is a lot more complex than the previous two, giving weight to data and insight, which we think is very important. In addition, the PXL model goes some way towards eliminating subjectivity by limiting scoring to either 1 or 0 in most columns. One limitation of this model is that it accounts for page traffic but not for differences in page value, nor does it give you a way to factor in learnings from past experiments. It also has the potential to be very time-consuming, and you may not easily be able to complete all columns for every experiment.
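To show the binary-checklist idea in practice, here’s a simplified PXL-style scorer. The criteria below paraphrase a few of ConversionXL’s published columns and should be treated as illustrative rather than the exact spreadsheet (the full model also grades ease of implementation on a wider scale):

```python
# A simplified, illustrative PXL-style scorer: each criterion is
# answered 0 or 1 and the total becomes the priority score.
CRITERIA = [
    "Change is above the fold",
    "Change is noticeable within 5 seconds",
    "Adds or removes an element",
    "Designed to increase user motivation",
    "Runs on a high-traffic page",
    "Supported by qualitative research",
    "Supported by digital analytics",
]

def pxl_score(answers: dict[str, int]) -> int:
    # Missing criteria default to 0, mirroring "if in doubt, score 0".
    return sum(answers.get(c, 0) for c in CRITERIA)

test = {
    "Change is above the fold": 1,
    "Runs on a high-traffic page": 1,
    "Supported by digital analytics": 1,
}
print(pxl_score(test))  # 3
```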

Prioritisation at Conversion

When deciding on our prioritisation model we wanted to ensure that we were prioritising the right experiments, making sure the model accounted for insights and results, removed as much subjectivity as possible and allowed for the practicalities of running an experimentation programme. So, we came up with the SCORE model: Strategy, Concepts, Order, Roadmap, Execution.

The biggest difference with our approach is that prioritisation happens at two separate stages. We want to avoid a situation where we are trying to prioritise a large number of experiments with different hypotheses, KPIs, target audiences and target pages against each other. In our approach, individual experiments are prioritised at the ‘Order’ stage; however, we minimise the need to directly prioritise experiments against each other by first prioritising at the ‘Strategy’ stage.

We use our experimentation framework to build our strategy by defining a goal, agreeing our KPIs and then prioritising relevant audiences, areas and levers. Potential audiences we can experiment on are prioritised on volume, value and influence. Potential areas are prioritised on volume, value and potential. Levers – the factors our user research has shown could influence user behaviour – are prioritised on win rate (if we’ve run experiments on the lever before), confidence (how well supported the lever is in our data), or both.
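To make the strategy-stage prioritisation concrete, here’s a hypothetical sketch that ranks audiences on the three criteria named above. The 1–10 scales, the example audiences and the simple sum are illustrative assumptions rather than a fixed formula:

```python
audiences = {
    # name: (volume, value, influence), each scored 1-10 (assumed scale)
    "Returning customers": (6, 9, 7),
    "New visitors from paid search": (8, 5, 8),
    "Newsletter subscribers": (3, 7, 4),
}

# Rank by the sum of the three criteria; swap in weights if one
# criterion matters more to your goal.
ranked = sorted(audiences.items(), key=lambda kv: sum(kv[1]), reverse=True)
for name, scores in ranked:
    print(name, sum(scores))
```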

Next, we ensure we cultivate the right ideas for our concepts. We believe structured ideation around a single hypothesis generates better ideas. Again utilising the experimentation framework, we define our hypotheses: “We believe [lever] for [audience], on [area], will impact [KPI].” Once the hypothesis has been defined, we then brainstorm the execution.
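If you generate a lot of hypotheses, the template is trivial to automate. The helper below is purely illustrative, with made-up example values:

```python
def hypothesis(lever: str, audience: str, area: str, kpi: str) -> str:
    # Fills in the framework's hypothesis template.
    return (f"We believe {lever} for {audience}, "
            f"on {area}, will impact {kpi}.")

print(hypothesis(
    lever="emphasising free returns",
    audience="first-time visitors",
    area="the product page",
    kpi="add-to-basket rate",
))
```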

The order of our experiments comes from prioritising the concepts that come out of our ideation sessions. Concepts can be validated quickly by running minimum viable experiments (MVEs). MVEs allow us to test concepts without over-investing, and also allow us to test more hypotheses in a shorter timeframe.

Next, we create an effective roadmap. We start by identifying the main swimlanes (pairs of audiences and areas that can support experiments) and then estimate each experiment’s duration based on a minimum detectable effect (sketched below). A roadmap should include tests across multiple levers; this allows you to gather more insight and spreads the risk of over-emphasising one area.
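Estimating duration from a minimum detectable effect is standard power analysis. Here’s a rough sketch for a conversion-rate experiment using the common approximation of n ≈ 16 · p(1−p) / (p · MDE)² visitors per variant, for a 5% significance level and 80% power; the traffic and baseline numbers are illustrative:

```python
def weeks_to_run(baseline_rate: float, relative_mde: float,
                 weekly_visitors: int, variants: int = 2) -> float:
    """Rough experiment-duration estimate for a conversion-rate test.

    Uses the common approximation n ~ 16 * p(1-p) / (p * MDE)^2
    visitors per variant (alpha = 0.05, 80% power).
    """
    p = baseline_rate
    delta = p * relative_mde  # absolute effect we want to detect
    n_per_variant = 16 * p * (1 - p) / delta**2
    return n_per_variant * variants / weekly_visitors

# e.g. 4% baseline, want to detect a 10% relative lift,
# 20,000 visitors/week entering the experiment:
print(f"{weeks_to_run(0.04, 0.10, 20_000):.1f} weeks")  # ~3.8 weeks
```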

Finally, it’s time to run and analyse the experiments (execution).

We believe our SCORE model is effective for prioritising experimentation projects because it puts the emphasis on getting the right strategy first, before ever trying to prioritise experiments against each other. It is structured, rewards data and insight, and allows for the practicalities of experimentation – we can review and update our strategy as new data comes in. The only limitation is that it takes time to prioritise the strategy effectively. But if we’re going to invest time anywhere, we believe it should be on getting the strategy right.

Our conclusions

The workshop was a great success. We received some excellent feedback from those involved, and our attendees left with actionable ideas to take away.

We recommend having a go at using the SCORE prioritisation model. In the next few weeks we’ll be sharing a detailed post on our experimentation framework, but you can apply SCORE within your own approach by reviewing how you define and prioritise your experimentation strategy. See whether it helps you produce a roadmap that is informed by data and insight, free of subjectivity and effective in helping your business test the most valuable ideas first.

If you have any questions or would like to know more, please get in touch.

To attend our future events, keep an eye out here.
