Talking PIE over breakfast - our prioritization workshop

Anna Tiplady

Recently, we continued our workshop series with one of our solutions partners, Optimizely, discussing the prioritization of experiments.

The workshop session was led by Kyle Hearnshaw, Head of Conversion Strategy at Conversion.com, with support from Stephen Pavlovich, CEO of Conversion.com, and Nils Van Kleef, Solutions Engineer at Optimizely.

Our most popular workshop to date gathered over 40 ecommerce professionals, including representatives from brands such as EE, John Lewis and Just Eat, all keen to talk about one of their biggest challenges: prioritization. Throughout the morning we discussed why we prioritize, popular prioritization methods, and finally how we at Conversion.com prioritize experiments.

For those of you who couldn’t make the session, we want to share some insights into prioritization so you too can apply these learnings next time you are challenged with prioritizing experiments. Keep an eye on our blog too, as later in the year we’ll be posting a longer step-by-step explanation of our approach to personalization.

Why Prioritize?

There is rarely a shortage of ideas to test. There is, however, often a shortage of resources to build, run and analyze experiments, and of traffic on which to run them. We need to make sure we prioritize the experiments that will do the most to help us achieve our goal in the shortest time.

So, what is the solution?

Popular Prioritization Methods

To identify the tests that will deliver the maximum impact for the most efficient use of resources, we need an effective prioritization method. So, let’s take a look at what’s out there:

1. PIE model

Potential: How much improvement can be made on the pages?

Importance: How valuable is the traffic to the pages?

Ease: How complicated will the test be to implement on the page or template?

We think PIE is simple and easy to use, scoring only three factors. However, the scoring can be very subjective, and there can be overlap between Potential and Importance.
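To make the scoring concrete, here’s a minimal sketch in Python, assuming the common convention of averaging the three factors on a 1-10 scale; the pages and scores below are made up purely for illustration.

```python
# A quick sketch of PIE scoring, averaging the three factors on a
# 1-10 scale. Pages and scores are illustrative, not real data.
def pie_score(potential, importance, ease):
    """Average of the three PIE factors, each scored 1-10."""
    return (potential + importance + ease) / 3

pages = {
    "checkout": {"potential": 8, "importance": 9, "ease": 4},
    "product page": {"potential": 6, "importance": 7, "ease": 7},
}

# Rank pages by PIE score, highest first
for name, factors in sorted(pages.items(),
                            key=lambda kv: -pie_score(**kv[1])):
    print(f"{name}: {pie_score(**factors):.1f}")
# checkout: 7.0
# product page: 6.7
```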

2. Idea Scores from Optimizely 

Optimizely’s method is an extended version of the PIE model, adding a fourth factor, ‘love’, to the equation. Again, we commend this model for its simplicity; however, the individual scores are still subjective, which means the overall score is too.
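Building on the PIE sketch above, the extra factor might be folded in like this; the equal weighting across all four factors is our assumption for illustration, not Optimizely’s published formula.

```python
# Extending the PIE sketch with a 'love' factor; equal weighting is
# an illustrative assumption, not Optimizely's exact formula.
def idea_score(potential, importance, ease, love):
    """PIE plus 'love', each factor scored 1-10."""
    return (potential + importance + ease + love) / 4

print(f"{idea_score(8, 9, 4, 7):.1f}")  # 7.0
```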

3. PXL model from ConversionXL

The PXL model is a lot more complex than the previous two, giving weight to data and insight, which we think is very important. In addition, it goes some way towards eliminating subjectivity by limiting scoring to either 1 or 0 in most columns. One limitation is that it accounts for page traffic but not for differences in page value, and it gives you no way to factor in learnings from past experiments. It also has the potential to be very time-consuming, and you may not be able to complete every column for every experiment.
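For contrast with the two models above, here’s a rough sketch of a PXL-style scorecard; the column names below are a simplified approximation of the ConversionXL template, not the full set.

```python
# A PXL-style scorecard: most columns are answered 1 (yes) or 0 (no),
# plus an ease-of-implementation score. These columns are a simplified
# approximation of the ConversionXL template.
BINARY_COLUMNS = [
    "above_the_fold",           # is the change above the fold?
    "noticeable_in_5_seconds",  # will users notice it quickly?
    "adds_or_removes_element",  # substantial change, not just a tweak?
    "backed_by_user_testing",   # supported by qualitative research?
    "backed_by_analytics",      # supported by quantitative data?
]

def pxl_score(idea):
    """Sum the binary columns, then add ease (e.g. 0-3, higher = easier)."""
    return sum(idea.get(col, 0) for col in BINARY_COLUMNS) + idea.get("ease", 0)

idea = {
    "above_the_fold": 1,
    "noticeable_in_5_seconds": 1,
    "backed_by_analytics": 1,
    "ease": 2,
}
print(pxl_score(idea))  # 5
```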

Prioritization at Conversion

When deciding on our prioritization model, we wanted to ensure that we were prioritizing the right experiments: the model had to account for insights and results, remove as much subjectivity as possible, and allow for the practicalities of running an experimentation program. So, we came up with the SCORE model: Strategy, Concepts, Order, Roadmap, Execution.

The biggest difference with our approach is that prioritization happens at two separate stages. We want to avoid a situation where we are trying to prioritize a large number of experiments with different hypotheses, KPIs, target audiences and target pages against each other. In our approach, individual experiments are prioritized at the ‘Order’ stage; however, we minimize the need to prioritize experiments directly against each other by first prioritizing at the ‘Strategy’ stage.

We use our experimentation framework to build our strategy by defining a goal, agreeing our KPIs and then prioritizing relevant audiences, areas and levers. Potential audiences we can experiment on are prioritized on volume, value and influence. Potential areas are prioritized on volume, value and potential. Levers – the factors that user research has shown could influence user behavior – are prioritized on win rate (if we’ve run experiments on the lever before), confidence (how well supported the lever is in our data) or both.
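As a sketch of how these criteria might be turned into comparable scores (the 1-10 scales, the simple additive weighting and the sample entries are all illustrative assumptions, not our production model):

```python
# Strategy-stage scoring sketch. Scales (1-10), additive weighting
# and the sample entries are assumptions for illustration.
def audience_score(volume, value, influence):
    """Audiences: prioritized on volume, value and influence."""
    return volume + value + influence

def area_score(volume, value, potential):
    """Areas: prioritized on volume, value and potential."""
    return volume + value + potential

def lever_score(win_rate=None, confidence=None):
    """Levers: win rate from past experiments, confidence from
    supporting data, or the average when both are available."""
    scores = [s for s in (win_rate, confidence) if s is not None]
    return sum(scores) / len(scores) if scores else 0.0

audiences = {
    "returning visitors": audience_score(volume=7, value=8, influence=6),
    "new visitors": audience_score(volume=9, value=5, influence=4),
}
print(max(audiences, key=audiences.get))   # returning visitors
print(lever_score(win_rate=6, confidence=8))  # 7.0
```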

Next, we ensure we cultivate the right ideas for our concepts. We believe structured ideation around a single hypothesis generates better ideas. Again, utilizing the experimentation framework, we define our hypotheses: “We believe [lever] for [audience] on [area] will impact [KPI]” – for example, “We believe social proof for new visitors on product pages will increase add-to-basket rate.” Once the hypothesis has been defined, we then brainstorm the execution.

The order of our experiments comes from prioritizing the concepts that come out of our ideation sessions. Concepts can be validated quickly by running minimum viable experiments (MVEs). MVEs allow us to test concepts without over-investing, and to test more hypotheses in a shorter timeframe.

Next, we create an effective roadmap. We start by identifying the main swimlanes (pairs of audiences and areas that can support experiments) and then estimate each experiment’s duration based on a minimum detectable effect. A roadmap should include tests across multiple levers; this allows you to gather more insight and spreads the risk of over-investing in one area.
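For the duration estimate, a standard sample-size calculation is the usual starting point. The sketch below assumes a two-sided z-test on conversion rates with illustrative defaults of 5% significance and 80% power; the traffic and conversion figures are made up.

```python
# Sketch of estimating experiment duration from a minimum detectable
# effect (MDE). The 5% significance / 80% power defaults and all the
# traffic figures below are illustrative assumptions.
from scipy.stats import norm

def sample_size_per_variant(baseline_rate, relative_mde,
                            alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-sided
    z-test comparing two conversion rates."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p2 - p1) ** 2

def estimated_duration_days(daily_visitors, n_variants,
                            baseline_rate, relative_mde):
    """Days needed to collect enough traffic across all variants."""
    needed = sample_size_per_variant(baseline_rate, relative_mde)
    return needed * n_variants / daily_visitors

# e.g. 4% baseline conversion, detecting a 10% relative uplift, with
# 6,000 eligible visitors/day split across control and one variant:
print(round(estimated_duration_days(6000, 2, 0.04, 0.10)))  # ~13 days
```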

Finally, it’s time to run and analyze the experiments (execution).

We believe our SCORE model is effective for prioritizing experimentation projects because it puts the emphasis on getting the right strategy first, before ever trying to prioritize experiments against each other. It is structured, rewards data and insight, and allows for the practicalities of experimentation – we can review and update our strategy as new data comes in. The only limitation is that it takes time to prioritize the strategy effectively. But if we’re going to invest time anywhere, we believe it should be in getting the strategy right.

Our conclusions

The workshop was a great success. We had excellent feedback from those involved, and our attendees left with actionable ideas to take away.

We recommend having a go at using the SCORE prioritization model. In the next few weeks we’ll be sharing a detailed post on our experimentation framework, but you can apply SCORE within your own approach by reviewing how you define and prioritize your experimentation strategy. See whether this helps you produce a roadmap that is informed by data and insight, free of subjectivity and effective in helping your business test the most valuable ideas first.

If you have any questions or would like to know more, please get in touch.

To attend our future events, keep an eye out here.
