Introducing: The 9 experimentation principles

Kyle Hearnshaw

At Conversion.com, our team and our clients know first-hand the impact experimentation can have. But we also see all too often the simple mistakes, misconceptions and misinterpretations organizations make that limit the impact, effectiveness and adoption of experimentation.

We wanted to put that right. But we didn’t just want to make another best-practice guide to getting started with CRO, or another list of top 10 tips for better experiments. Instead, inspired by the simple elegance of the UK government design principles, we set ourselves the challenge of defining a set of core experimentation principles.

Our ambition was to create a set of principles that, if followed, should enable anyone to establish experimentation as a problem-solving framework for tackling any and all problems their organization faces. We wanted to distill over 10 years of experience in conversion optimization and experimentation down to a handful of principles that address every common mistake, misconception and misinterpretation of what good experimentation looks like.

Many hours of discussion, debate and refinement later, we’re happy to be able to share the end product – the 9 principles of experimentation.

Here are the principles in their simplest form. You can also download a PDF of the experimentation principles that includes quotes and stories we’ve gathered from experimentation experts at companies such as Just Eat, Booking.com, Microsoft and Facebook. A few snippets of those quotes are included below as a taster.


1 – Challenge assumptions, beliefs and doctrine

Experimentation should not be limited to optimizing website landing pages, funnels and checkouts. Use experimentation as a tool to challenge the widely held assumptions, ingrained beliefs and doctrine of your organization. It’s often by challenging these assumptions that you’ll see the biggest returns. Don’t accept “that’s the way it’s always been done” – to do so is to guarantee you’ll get the results you’ve always had. Experimentation provides a level playing field for evaluating competing ideas scientifically, free from the influence of authority or experience.

“It was only when we were willing to question our core assumptions through interviews, data collection, and rigorous experimentation that we found answers to why growth had slowed…”

– Rand Fishkin, CEO and Co-founder, SparkToro

2 – Always start with data

It sounds trite to say you should start with data. Yet most people still don’t. Gut feel still dominates decision-making, and experiments based on gut feel rarely lead to meaningful impact or insight. Good experimentation starts with using data to identify and understand the problem you’re trying to solve. Gather data as evidence and build a case for the likely causes of those problems. Once you have gathered enough evidence, you can start to formulate hypotheses to be proven or disproven through experiments.

3 – Experiment early and often

In any project, look for the earliest opportunity to run an experiment. Don’t wait until you have already built the product/feature to run an experiment, or you’ll find yourself moulding the results to justify the investment or decisions you’ve already made. Experiment often to regularly sense-check your thinking, remove reliance on gut-feel and make better informed decisions.

4 – One, provable hypothesis per experiment

Every experiment needs a single hypothesis. That hypothesis statement should be clear, concise and provable – a cause-effect statement. A single hypothesis ensures the experiment results can be used to evaluate that hypothesis directly. Competing hypotheses introduce uncertainty. If you have multiple hypotheses, separate these into distinct experiments.

5 – Define the success metric and criteria in advance

Define the primary success metric and the success criteria for an experiment at the same time that you define the hypothesis. Doing so will focus your exploration of possible solutions around their ability to impact this metric. Failing to do so will introduce errors and bias when analyzing results – making the data fit your own preconceived ideas or hopes for the outcome.
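
To make this concrete, here’s a minimal sketch of what defining success in advance could look like if your team happens to record experiment plans in code. The structure, field names and example values are purely illustrative – use whatever format your team already works with:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ExperimentPlan:
        """A pre-registered experiment definition, written down before any traffic is split."""
        hypothesis: str            # a single cause-effect statement (see principle 4)
        primary_metric: str        # the one metric the hypothesis predicts will move
        success_criterion: str     # what counts as a win, decided up front
        significance_level: float  # the statistical bar the result must clear (see principle 9)

    plan = ExperimentPlan(
        hypothesis="Showing delivery costs on the product page will increase checkout completion",
        primary_metric="checkout_completion_rate",
        success_criterion="Statistically significant uplift in checkout completion rate",
        significance_level=0.95,
    )

Because the plan is fixed before the experiment runs, any analysis that later drifts towards a different metric or a looser criterion is immediately visible.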

“Any targets drawn after the experiment is run should be called into question. The evidential value of an experiment comes from targets that were drawn before we started the test”

– Lukas Vermeer, Booking.com

6 – Start with the minimum viable experiment, then iterate

When tackling complex ideas, the temptation can be to design a complex experiment. Instead, look for the simplest way to run an experiment that can validate just one part of the idea: the minimum viable experiment. Run this experiment to quickly get data or insight that either gives the green light to continue to more complex implementations, or flags problems early on. Then iterate and scale to larger experiments with confidence that you’re heading in the right direction.

7 – Evaluate the data, hypothesis, execution and externalities separately

When faced with a negative result, it can be tempting to declare an idea dead in the water and abandon it completely. Instead, evaluate the four components of the experiment separately to understand the true cause:

  1. The data – was it correctly interpreted?
  2. The hypothesis – has it actually been proven or disproven?
  3. The execution – was our chosen solution the most effective?
  4. External factors – has something skewed the data?

An iteration with a slightly different hypothesis or an alternative execution could end in very different results. Evaluating each of these four components separately, for both negative and positive results, gives you four distinct areas in which to iterate and gain deeper insight.

8 – Measure the value of experimentation in impact and insight

The ultimate judges of the value of an experimentation program are the impact it delivers and the insight it uncovers. Experimentation can only be judged a failure if it doesn’t give us any new insight that we didn’t have before. Negative results that give us new insight can often be more valuable than positive results that we don’t understand.

9 – Use statistical significance to minimise risk

Use measures of statistical significance when analyzing experiments to manage the risk of making incorrect decisions. A 95% significance threshold still accepts a 1 in 20 chance of a false positive – seeing a signal where there is no signal. That might not be acceptable for a very high-risk experiment on something like product or pricing strategy, so raise your threshold to suit your appetite for risk. And beware of experimenting without statistical significance – it’s not much better than guessing.
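
As a purely illustrative sketch (the numbers below are made up), here is one way you might check significance for a simple A/B conversion test, using a pooled two-proportion z-test in Python and the threshold you set in advance:

    from math import sqrt, erfc

    def two_sided_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
        """Two-sided p-value for a difference in conversion rate (pooled two-proportion z-test)."""
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / std_err
        return erfc(abs(z) / sqrt(2))  # normal approximation to the two-sided p-value

    # Illustrative numbers only: 10,000 visitors per variation.
    p = two_sided_p_value(conversions_a=500, visitors_a=10_000,
                          conversions_b=560, visitors_b=10_000)
    significance_level = 0.95  # raise this for higher-risk experiments
    if p < 1 - significance_level:
        print(f"Significant at {significance_level:.0%} (p = {p:.3f})")
    else:
        print(f"Not significant at {significance_level:.0%} (p = {p:.3f}) – treat as inconclusive")

A p-value below 0.05 corresponds to the 95% threshold described above; for a riskier change you might demand 0.01 (99%) or better before acting on the result.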

“The best data scientists are skeptics that double-check, triangulate results, and evaluate the positive and the negative results with the same scientific rigor”

– Ron Kohavi, Microsoft

These are the 9 principles we felt most strongly define experimentation, but no doubt we could have added others and made a longer list. If you have experimentation principles that you use at your organization that we haven’t included here, we’d be interested to hear about them and why you feel they’re important.

For more detail and even more insights from some of the world’s leading experts on experimentation, please be sure to download the full experimentation principles.

We’re also looking for more stories and anecdotes of these principles in action – both good and bad examples – from contributors outside Conversion to include in further iterations of these principles. If you have something you feel epitomises one of these principles, please get in touch and you could feature in our future posts and content.

And finally, if you want to be notified when we publish more content about these experimentation principles, drop us an email with your contact details.

For any of the above get in touch at hello@conversion.com.

