Experimentation Framework

Everyone approaches experimentation differently, but there’s one thing highly successful experimentation teams all have in common: a strategic framework that drives experimentation.

Here’s ours.

The Framework

Our Experimentation Framework is the structure we use to organize, focus, and direct all of our work.

At the heart of everything we do, the framework serves a number of purposes:

  • It allows us to achieve clarity and alignment with our clients.
  • It helps us structure our work, ensuring that every experiment is part of an overarching strategy geared towards achieving our clients’ goals.
  • It allows us to feed learnings from past experiments back into our testing strategy to generate new, high-impact ideas for the future.

The Framework structure

As we introduce our framework, you might be surprised by its simplicity. The framework has a very straightforward, logical structure – which is largely why it’s so effective.

The framework consists of six levels, starting with the most general at the top and working down to the most specific at the bottom: Goals, KPIs, Audiences, Areas, Levers, and Experiments.

We work with each of our clients to populate their framework and start building a roadmap that’s maximally impactful for their business.

To give you a better sense as to how the framework works, we’ll now go through all six of these levels individually.

Note: One thing to know about our framework is that it’s not static. As each experimentation program progresses, our framework and strategy are continuously updated to account for the new data we collect. However, by working with our clients to fill out the entire framework at the outset of a program, we put ourselves in a strong position to start driving results from day one.

Goals

Most teams fail to set a clear goal for experimentation – but without a goal, how can you differentiate success from wasted effort? The first part of our Experimentation Framework, therefore, is about working with you to set a SMART goal for your program.

This gives us something realistic but ambitious to aim for, and helps us remain focused on the kinds of experiments that are actually likely to take us closer to this goal.

An example of a good SMART goal that we might set for a program is: “Add an additional £10m in profit within the next 12 months”.

KPIs

With your SMART goal set, we next drill down into the specific KPIs that we might want to target in order to hit this goal.

Given that we’re a CRO team, it’s difficult for us to influence things like number of visits to a website. We can, however, exert a strong influence on things like AOV and order conversion rate.

As a result, if our overarching goal is to increase revenue, we may choose AOV and order conversion rate as two of the KPIs that we plan to target. 
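
To make the link between these KPIs and revenue concrete, revenue can be decomposed as sessions × order conversion rate × AOV, so a relative uplift in either KPI flows straight through to revenue. Here’s a minimal sketch – all figures are illustrative, not client data:

```python
# Illustrative only: how order conversion rate and AOV roll up into revenue.
sessions = 5_000_000          # annual sessions (hypothetical)
conversion_rate = 0.020       # baseline order conversion rate
aov = 80.0                    # baseline average order value, £

baseline_revenue = sessions * conversion_rate * aov

# A relative uplift on either KPI flows straight through to revenue.
uplifted_revenue = sessions * (conversion_rate * 1.10) * aov

print(f"Baseline revenue: £{baseline_revenue:,.0f}")
print(f"Revenue with +10% conversion rate: £{uplifted_revenue:,.0f}")
print(f"Incremental revenue: £{uplifted_revenue - baseline_revenue:,.0f}")
```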

Audiences

Next, we can start to develop our strategy for impacting the KPIs and achieving the goal. The first step is to explore how the make-up of your audience should influence our approach.

To do this, we first need to ask “which groups of users have the biggest influence on each KPI?” With this question in mind, we can start to map out your audience.

We start by identifying the most relevant dimensions – the attributes that identify certain groups of users, e.g. location, device, new/returning, etc.

For each dimension we can then define the smaller segments – the way users should be grouped under that dimension. For example, Desktop, Mobile and Tablet would be segments within the Device dimension.

Once we’ve decided on your potential audiences, the next step is to use data to validate the size and value of these audiences.
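
As a rough sketch of what this mapping and validation can look like in practice – the dimensions, segments, and figures below are hypothetical rather than real client data – each segment can be sized by its share of sessions and revenue:

```python
# Hypothetical sketch: dimensions map to segments, and each segment is sized
# by its share of sessions and revenue so we can judge where impact is possible.
audience_map = {
    "Device": ["Desktop", "Mobile", "Tablet"],
    "Visitor type": ["New", "Returning"],
    "Location": ["UK", "US", "Rest of world"],
}

# Example analytics export (illustrative figures): sessions and revenue per segment.
segment_stats = {
    ("Device", "Mobile"): {"sessions": 3_100_000, "revenue": 3_400_000},
    ("Device", "Desktop"): {"sessions": 1_600_000, "revenue": 4_100_000},
    ("Device", "Tablet"): {"sessions": 300_000, "revenue": 500_000},
}

total_sessions = sum(s["sessions"] for s in segment_stats.values())
total_revenue = sum(s["revenue"] for s in segment_stats.values())

for (dimension, segment), stats in segment_stats.items():
    print(
        f"{dimension} / {segment}: "
        f"{stats['sessions'] / total_sessions:.0%} of sessions, "
        f"{stats['revenue'] / total_revenue:.0%} of revenue"
    )
```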

The aim here isn’t to limit our experiments to a specific audience – we’re not looking to do personalization quite yet. But understanding your audiences means that, when we come to design experiments, we’ll know how to cater to the objections and concerns of as many users as possible.

Areas

Once we have a better understanding of your audience, we next need to choose when and where to act to be most effective. The Areas part of our framework is about understanding the user journey – and focusing our attention on where we can make the biggest impact.

To begin, we first map out the important areas of your user journey. We start by mapping the onsite journeys and funnels, but we don’t limit ourselves to just onsite experience – we need to consider the whole user journey, especially if our goal is something influenced by behaviors that happen offsite.

As with audiences, we sketch out this initial map fairly quickly, but we then use analytics data to start adding more useful insights. For example, are there areas of your user journey where abandonment is particularly high? Are there places where the opposite is true?

We run this analysis for each of the audiences we identified in the previous step, with the goal of highlighting where behavior is similar across audiences and, crucially, where it differs.
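
As an illustration of this kind of funnel analysis – the steps and counts below are entirely hypothetical – a per-audience drop-off calculation might look like this:

```python
# Hypothetical sketch: step-to-step drop-off along an onsite funnel, split by
# audience, to highlight where abandonment differs between segments.
funnel_steps = ["Product page", "Basket", "Checkout", "Payment", "Confirmation"]

# Illustrative session counts reaching each step, per audience.
funnel_counts = {
    "Mobile":  [100_000, 28_000, 14_000, 9_000, 6_300],
    "Desktop": [60_000, 21_000, 13_500, 11_000, 9_200],
}

for audience, counts in funnel_counts.items():
    print(audience)
    for (step_from, step_to), (n_from, n_to) in zip(
        zip(funnel_steps, funnel_steps[1:]), zip(counts, counts[1:])
    ):
        drop_off = 1 - n_to / n_from
        print(f"  {step_from} -> {step_to}: {drop_off:.0%} drop-off")
```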

Levers

Levers are the factors we believe can influence user behavior: the broad themes that we’ll explore in experimentation. At their simplest, they’re the reasons why people convert – and the reasons why they don’t.

Our Lever Framework is central to everything we do as an agency, so if you’re interested in learning more about it, click here.

To identify levers, first we look for any problems that are stopping users from converting on our KPI – we call these barriers to conversion. Some typical barriers are lack of trust, price, missing information and usability problems.

We then look for any factors that positively influence a user’s chances of converting – what we call conversion motivations. Some typical motivations are social proof (reviews), guarantees, USPs of the product/service and savings and discounts.

Typically, we have two broad means of identifying potentially impactful levers:

  1. Research – we use a range of research methods to identify the barriers and motivations that are influencing user behavior on your website. These research methods vary from program to program, but will typically include an analytics review, competitor analyses, heuristics review, user testing, surveys, heatmapping, scrollmapping, as well as others.
  2. Meta-analysis – as an agency, we work hard to store, tag, and categorize every experiment we run. This means we now have access to a huge database of learnings and experiment results, which we use to inform the direction of each new program (we sketch what this kind of lookup might look like below). For example, if you are a B2B SaaS business, we will analyze past experiment results to identify levers that have been effective for past clients in the same or adjacent industries.

Together the barriers and motivations give us a set of potential levers that we can “pull” in an experiment to try and influence behavior.
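
To illustrate the meta-analysis step mentioned above – using entirely hypothetical records and tags rather than our actual database – a lookup of lever performance by industry might look like this:

```python
# Hypothetical sketch: filtering a tagged experiment archive to see which levers
# have performed well for clients in the same or adjacent industries.
from collections import defaultdict

past_experiments = [
    {"industry": "B2B SaaS", "lever": "Social proof", "winner": True},
    {"industry": "B2B SaaS", "lever": "Social proof", "winner": False},
    {"industry": "B2B SaaS", "lever": "Pricing clarity", "winner": True},
    {"industry": "Ecommerce", "lever": "Urgency", "winner": True},
]

def lever_win_rates(experiments, industry):
    """Win rate per lever for a given industry tag."""
    tally = defaultdict(lambda: {"wins": 0, "total": 0})
    for exp in experiments:
        if exp["industry"] == industry:
            tally[exp["lever"]]["total"] += 1
            tally[exp["lever"]]["wins"] += exp["winner"]
    return {
        lever: counts["wins"] / counts["total"] for lever, counts in tally.items()
    }

print(lever_win_rates(past_experiments, "B2B SaaS"))
# {'Social proof': 0.5, 'Pricing clarity': 1.0}
```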

Experiments

The final step in our framework is where we define our experiments.

Worth noting: this isn’t an exercise we do just once – we don’t define every experiment we could possibly run at the start of a program. Instead, here we’re using our framework to start building the initial hypotheses that we want to explore with our experiments.

We define our hypothesis first, before thinking about the best execution of an experiment to test it, as there are many different executions that could test a single hypothesis. At the end of the experiment, the first thing we do is use the results to evaluate whether our hypothesis has been proven or disproven. Depending on this, we then evaluate the execution separately to decide whether we can iterate on it – to get even stronger results – or whether we need to re-test the hypothesis using a different execution.
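
To illustrate how a hypothesis can be recorded separately from the executions that test it – the field names and example content here are illustrative assumptions, not a prescribed template – a minimal sketch:

```python
# Hypothetical sketch: a hypothesis is recorded separately from the executions
# that test it, so results can be read at both levels.
from dataclasses import dataclass, field

@dataclass
class Execution:
    description: str           # the concrete change shipped in the test
    result: str | None = None  # e.g. "win", "loss", "inconclusive"

@dataclass
class Hypothesis:
    lever: str                 # the lever being pulled, e.g. "Social proof"
    statement: str             # the belief being tested
    kpi: str                   # the KPI the experiment is expected to move
    executions: list[Execution] = field(default_factory=list)

h = Hypothesis(
    lever="Social proof",
    statement="Showing review counts near the CTA will raise order conversion rate",
    kpi="Order conversion rate",
)
h.executions.append(Execution("Add star rating and review count to the product CTA block"))
```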

With the entire framework populated, we’re then in a position to begin prioritizing different experiments, before designing, building, and finally running the experiments themselves.
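
As a purely illustrative sketch of what experiment prioritization can look like – the scoring formula, weights, and candidate experiments below are assumptions rather than a description of our actual process – a simple ranking might be:

```python
# Illustrative only: ranking candidate experiments by a simple weighted score
# of expected impact, confidence, and effort (higher score = higher priority).
candidates = [
    {"name": "Review badges on product pages", "impact": 8, "confidence": 7, "effort": 3},
    {"name": "Simplify checkout form", "impact": 9, "confidence": 6, "effort": 7},
    {"name": "Free delivery threshold banner", "impact": 6, "confidence": 8, "effort": 2},
]

def priority(candidate):
    # Favour high impact and confidence, penalise heavy builds.
    return candidate["impact"] * candidate["confidence"] / candidate["effort"]

for c in sorted(candidates, key=priority, reverse=True):
    print(f"{c['name']}: score {priority(c):.1f}")
```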