Testing velocity: 6 strategies to ramp up your experiment launch speed

Frazer Mawson

As a battle-hardened experimentation agency with more than fourteen years’ experience running advanced experimentation programs for our clients, we here at Conversion have spent a lot of time thinking about how to make our programs as impactful as possible.

As a result of all this thinking, we’ve identified three key factors (to be discussed in the next section) that we believe determine the success or failure of any given experimentation program. 

In this post, we’re going to zero in on one of these factors – specifically, testing velocity – by explaining what it is and why it’s important. With the basics covered, we’ll then go on to share some of the strategies that we’ve used to increase our agency-wide testing velocity by more than 30%.

So, in summary, throughout this post we’re going to cover:

  1. What is testing velocity?
  2. Why is testing velocity important?
  3. How to increase your testing velocity

What is testing velocity?

Testing velocity is a measure of how quickly you can get your experiments up and running, from research and ideation through to launch. 

It’s one of the three core factors that determine the success of any CRO program – the other two are testing volume, which is the number of experiments you’re able to run in any given time period, and testing value, which is a calculation of your win-rate multiplied by your average conversion uplift. 

So, in essence, the more experiments you can run (volume), the quicker you can launch them (velocity), and the better those experiments perform (value), the greater your ROI will be. Or, expressed as a formula:

Volume x Velocity x Value = experimentation impact!
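
To make the relationship concrete, here’s a minimal Python sketch of how you might score a program on these three factors. How each factor is normalized is an assumption for illustration’s sake – we treat velocity as the inverse of your average weeks-to-launch – and all of the numbers are hypothetical:

```python
# A rough, illustrative scoring of Volume x Velocity x Value.
# The normalization choices and example numbers below are hypothetical.

def experimentation_impact(experiments_per_quarter: int,
                           avg_weeks_to_launch: float,
                           win_rate: float,
                           avg_uplift: float) -> float:
    volume = experiments_per_quarter        # testing volume
    velocity = 1 / avg_weeks_to_launch      # faster launches => higher velocity
    value = win_rate * avg_uplift           # testing value, as defined above
    return volume * velocity * value

# Example: 12 experiments per quarter, a 3-week average launch time,
# a 30% win rate, and a 5% average uplift per winning experiment.
print(experimentation_impact(12, 3.0, 0.30, 0.05))  # 0.06 (a relative score)
```

The absolute number means nothing on its own – the point is that improving any one factor (say, cutting your average launch time from three weeks to two) multiplies straight through to the final score.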

 

One important thing to understand about these factors is that they’re all closely related. That means that when you work to improve one of them, you’re also likely to impact one or both of the others as well. 

For example, let’s say that you decide to improve your testing velocity by cutting down the time you spend on user research. All things being equal, you may find that this change has a negative impact on your win rate, and that this reduced win rate cancels out any of the improvements that came from increasing your testing velocity. 

This example illustrates an important point: when you take measures to increase your testing velocity, you need to make absolutely certain that these measures aren’t negatively affecting either of the other two factors.

All of the strategies we discuss in the final part of this blog post have been designed with this in mind. 

Why is testing velocity important?

On a big picture level, testing velocity is important because it impacts your ROI. 

But why’s that so?

Well, generally speaking, the quicker you can take your experiments from ideation to launch, the more experiments you can run in any given time period, the sooner your winning variations start paying for themselves, and the faster you gather the user insights that feed your next round of tests. All three of these effects compound, and all three feed directly into your ROI.

6 strategies to increase your testing velocity

When trying to improve your testing velocity, the goal should always be to increase your launch speed while ensuring that your experiment quality – or, what we refer to as ‘value’ – stays high.

The strategies we’re going to unveil in this section have allowed us to do exactly that.

In fact, since 2018 we’ve improved our average launch speed by more than 32%.

Also worth noting is that we managed to achieve and maintain this improvement during a period in which our agency more than doubled in size.

Read on to find out how we did it.

Our internal data showing our agency-wide testing velocity improvement over the last four years

1. Data gathering and reporting

Four years ago, we made a conscious effort as an agency to increase our testing velocity.

Unfortunately, what we quickly realised was that, up until then, we hadn’t been gathering much data on the amount of time it took us to go from strategy to launch.

This made it extremely difficult for us to increase our speed, because without some baseline level to compare our progress to, how could we know if we were improving?

As a result, the first thing we did to increase our testing velocity was to actually start tracking it. 

This gave us a strong baseline from which to start optimizing. But data gathering alone isn’t enough to move the needle – you also need to report on that data.

Reporting gives you a chance to keep your goals fresh in your mind, to consciously monitor your progress, and to narrow your focus on those changes that are actually going to make a difference.

Soon after we started tracking our testing velocity, we also started reporting on it internally. Since then, we’ve developed a number of reporting mechanisms – primarily using Google Data Studio – that we’ve used to display and keep track of our testing velocity.

Below is an example of one of the first dashboards we put together to report on our velocity. It contains the number and percentage of experiments that we launched in less than three weeks, and the number and percentage that we launched in less than two. 
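
If you’d like to compute the same metrics from your own records, a minimal sketch might look like the following. The data layout is hypothetical – adapt it to wherever you actually log your experiments:

```python
from datetime import date

# Hypothetical experiment log: (name, strategy start date, launch date).
experiments = [
    ("Exp A", date(2022, 1, 3), date(2022, 1, 17)),
    ("Exp B", date(2022, 1, 10), date(2022, 2, 7)),
    ("Exp C", date(2022, 2, 1), date(2022, 2, 18)),
]

def launched_within(experiments, max_days):
    """Return the count and percentage of experiments launched within max_days."""
    hits = [e for e in experiments if (e[2] - e[1]).days <= max_days]
    return len(hits), 100 * len(hits) / len(experiments)

for weeks in (3, 2):
    count, pct = launched_within(experiments, weeks * 7)
    print(f"Launched in under {weeks} weeks: {count} ({pct:.0f}%)")
```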

Our first testing velocity dashboard from way back when

This kind of reporting setup is a big part of the reason behind our dramatic testing velocity improvement over the last few years.

2. Build smaller experiments

‘The bigger the build, the bigger the uplift,’ is a truism within the world of experimentation, based on the idea that the bigger and more complex the changes you make to a web page, the bigger your conversion rate uplift is likely to be. 

But is this true?

A while ago, we dug into our database of experiments to look at the average uplift for experiments of different sizes. What we found was that large experiments were no more likely to win than minor tweaks – and that they actually had a slightly smaller uplift than tweaks as well (see chart below)!

Our internal data comparing build size with win rate and average uplift
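
If you want to run a similar analysis on your own experiment database, here’s a rough sketch using pandas. The column names and records are hypothetical – the point is simply to group experiments by build size and compare win rates and uplifts:

```python
import pandas as pd

# Hypothetical experiment log; in practice 'build_size' might be derived
# from bands of development hours.
df = pd.DataFrame({
    "build_size": ["small", "small", "small", "medium", "large", "large"],
    "won":        [True,    False,   True,    True,     False,   True],
    "uplift":     [0.04,    0.00,    0.05,    0.03,     0.00,    0.02],
})

summary = df.groupby("build_size").agg(
    experiments=("won", "size"),
    win_rate=("won", "mean"),
    avg_winning_uplift=("uplift", lambda s: s[s > 0].mean()),
)
print(summary)
```

With a real database behind it, a table like this tells you quickly whether your bigger builds are actually earning their extra development time.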

Of course, there may be times when it becomes necessary to spend a good deal of time building out an elaborate experiment: the key to optimizing your testing velocity is working out when this kind of additional time and energy is justified and when it’s simply slowing your process down.

Smaller experiments typically require less time spent on research, ideation, design, and development, which means that you can run more of them over a shorter period of time. 

This is why we always try to begin our CRO programs with a minimum viable experiment (MVE), which we define as the smallest possible experiment that will allow us to validate our hypothesis. 

Generally speaking, we will only invest in experiments that have a large build time if we’ve proven, either through a smaller experiment (possibly an MVE) or comprehensive user research, that the increased build time will be worthwhile. 

This approach allows us to start testing straight away, at the outset of a new project, without needing to wait weeks or even months for research, design, and development to come together. 

Ultimately, this gives us a chance to start impacting conversion rates immediately, while also gathering insights about our clients’ users which we can use to inform future experiment iterations. 

This approach is a large part of the reason why we’re so often able to achieve a positive return on investment for our clients within the first 12 weeks of their programs. 

3. Avoid deep-dive analysis (when appropriate)

Having just completed an experiment, it can sometimes be tempting to spend weeks delving through your results, analyzing various non-primary metrics in the hopes of uncovering insights that will unlock the rest of your experimentation program.

Unfortunately, this almost never happens, which means that this additional analysis, while interesting, turns out to be mostly pointless. 

Instead, we recommend asking yourself the following question: how often do I find that the decision for my next experiment (the iteration) is made based on the analysis of a non-primary metric?

If your answer is ‘rarely’ or ‘never,’ then you might want to rethink the way you do your analysis. 

Our approach often involves focusing solely on our primary KPIs and only digging into non-primary metrics when we wish to gain deeper insight into the ‘why’ behind an experiment’s result.

This approach means that we don’t waste time producing analyses that ultimately have no impact on the success of our programs.

4. Experiment internally

When attempting to work out which parts of your process are essential and which can be cut down, you’ll probably encounter a number of conflicting opinions amongst different members of your team.

Some may feel that extensive research is necessary for every single experiment, no matter how small the intended changes. Others may believe that research can be reduced for certain kinds of experiments, but that your usual design process should always be followed to protect the quality of your experiments. 

How do you decide between these two well-reasoned perspectives?

The same way you decide between two well-designed web pages: you run a series of experiments on them and see which one works better. 

Experimentation needn’t be limited to the conversion rates on your website – it can be applied to every activity within your organization, from your R&D and product development right on through to your internal processes and procedures. 

As an experimentation agency that champions the power of experimentation, we’re constantly running tests on our internal processes, and this has allowed us to significantly improve our testing velocity while maintaining the quality of our experiments. 
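
For instance, suppose half your team keeps the full design process while the other half trials a lighter one, and you compare win rates after a quarter. A minimal sketch of that comparison (the counts below are made up) might use a standard contingency-table test:

```python
from scipy.stats import fisher_exact

# Hypothetical results after one quarter: (wins, total experiments).
full_process = (18, 60)    # 30% win rate with the full design process
light_process = (16, 50)   # 32% win rate with the lighter process

# 2x2 contingency table of wins vs. losses under each process.
table = [
    [full_process[0], full_process[1] - full_process[0]],
    [light_process[0], light_process[1] - light_process[0]],
]

odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # a high p-value here would suggest the lighter
                             # process isn't measurably hurting win rate
```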

5. Automation

We’ve recently launched our own R&D department, which is responsible, among other things, for automating many of the administrative tasks that our internal team does on a daily basis.

So far, this has allowed us to avoid many duplications of work while also cutting down on the time we spend doing menial tasks. As a result, we now have more time to focus on things that matter – like developing the best, most impactful experimentation programs for our clients.

Most of the automation we’ve done so far has been relatively minor, focusing on small, incremental changes, but by cutting out fifteen minutes of work here and half an hour there for every single experiment we run, these changes are starting to add up: shave 45 minutes off each of 200 experiments a year, say, and you’ve freed up 150 hours.

Consequently, automation is beginning to play a bigger and bigger role in our ability to increase our testing velocity. 

6. Gain stakeholder approval early

Whether you’re working in-house or agency side, there’s every likelihood that each of your experiments will need to be signed off by at least one stakeholder – and usually several – before it can go live.

Frustrating as this can be at times, it needn’t in and of itself negatively impact your testing velocity. 

Problems begin to arise, however, when experiments are rejected at the later stages of production. When this happens, it can mean that tens of hours have been sunk into an experiment that will never see the light of day. Not only is this a terrible waste of your resources, but it also badly hurts your testing velocity. 

To avoid this outcome, we always do everything within our power to gain stakeholder approval as early on in our process as we can. If an experiment’s going to be rejected, we want it to be rejected at the ideation stage. If this isn’t possible, then at the very least we want it to happen at the design stage, before hours of development have been put into it. 

There will always be experiments that you design and build out only to find that your stakeholders aren’t happy with the final implementation – the goal here is to make sure that this happens as rarely as possible. 

Final Thoughts

Focusing on improving your testing velocity is one of the best things you can do to increase the ROI of your experimentation program. For us, it’s been a gradual process that continues to this day – and the results have been extremely worthwhile, allowing us to provide more value to our clients than ever before.

If you’re serious about improving the testing velocity of your program, the strategies outlined in this post offer a good place to start. 
