In-house vs. agency experimentation: an honest comparison

There’s one fundamental question that every business leader will have to ask themselves before they can begin to think about conducting their own online, controlled experiments:  

‘Should we hire an agency to run our experimentation programme or should we build out our own in-house capability?’

This turns out to be a harder question to answer than most people realise.  

There are various pros and cons that come with each approach, and what works for one organisation won’t necessarily work for another.  

This complexity can make it difficult to know which option is right for you, so we decided to offer some guidance in the form of this blog post:

Many of our internal team members here at Conversion have worked in both agency and in-house settings, and in this post, we’re going to pool their experiences to give a completely honest and impartial comparison of the two alternatives. 

At the end, we’ll then finish with a quick overview of hybrid experimentation programmes and explain how they can sometimes be a better fit for certain organisations – particularly those at the start of their experimentation journey. 

Quick disclaimer: we ourselves are an experimentation agency, and as hard as we’ve tried, it can be difficult to get rid of all bias when writing blogs like this. If you feel like we’ve not been completely balanced – or if you think something needs adding – send us an email and if we think your feedback is fair, we’ll be sure to add it into the article!


  1. Things to know before we start
  2. The benefits of an agency-led experimentation programme
  3. The benefits of an in-house experimentation programme
  4. The middle way: hybrid experimentation programmes 

Things to know before we start

Before we get into the meat of this post, we thought it worth mentioning a couple of things to give a bit of context to the discussion: 

  1. All agencies and in-house teams are different – this post is focussed on understanding how the typical agency experimentation team differs from the typical in-house team, based on the experiences of people who have worked as part of each. 
  2. Ultimately, there are pros and cons to each set up, and it’s possible to make a success of either approach. What really matters is that you have a robust methodology and a well-researched strategy in place to guide your experimentation efforts. 
  3. Also worth noting: we’ve not bothered to write a complete pros and cons list, because with a comparison like this, the cons of one approach will almost always be the pros of the other. As such, we’ve simply chosen to list the benefits of each, starting with agency-led programmes and moving on to in-house programmes.

And with that out of the way, let’s get into it…

The benefits of an agency-led experimentation programme

1. Higher win rate

A few years ago, Convert released a comprehensive report comparing the average win rate of agencies with that of in-house experimentation teams. The report included data from more than 28,000 experiments, and found that agencies, on average, had a 21% higher win rate than in-house teams. 

Now, of course, win rate isn’t everything.

If we’re running a programme for a client and we see that our win rate is above a certain level, we’ll start to question whether the experiments we’ve been running are bold enough – or if maybe we’re being too safe. 

One of the biggest advantages of experimentation is that it allows you to trial bold and innovative ideas – ideas that have the potential to revolutionise the way you do business. If your win rate is too high, it might mean that you’re implementing too many safe ideas that you already have good reason to believe will win. 

Having said all of this, if win rate is an important consideration for you, then due to the way agency experimentation teams are set up (to be discussed), you may find that this is the route you’d rather go down.  

2. Hit the ground running

When you hire an agency, they’ll usually be able to start experimenting on your website straight away. 

In fact, when we begin working with a new client, we aim to have experiments up and running within the first two weeks of the project’s start date. This allows us to gather insights and learnings as quickly as possible, and means that we’re able to make an impact right from the get go. 

But if you decide to go the in-house route, there’s a lot more that needs to be done before your internal experimentation capability will be fit for purpose. 

Not only do you have to go about the process of hiring each individual member of your team – a notoriously difficult task in today’s job market – but you also need to get everything else together: processes, culture, technology & tools, education, etc.

Getting all of this set up can be an extended process, itself requiring a good deal of expertise and experimentation know-how. 

But if you hire an agency, this stuff will already be in place, so you can start enjoying a return on your investment almost immediately.  

3. Experiment database

Years ago, we decided to start collecting data on every single experiment that we ran. 

As a result, we now have an enormous database of experiment results, with in-depth information about a huge range of experiments, conducted for a huge range of clients, operating in a huge range of industries. 

This database gives us a real competitive edge when it comes to driving results for our clients.  

For example, let’s say that we take on a new client that operates in the financial services sector. 

When we’re looking for research to guide the course of our programme, we’ll look to our experiment database in search of tactics that have worked for similar clients who operate in financial services. 

Of course, there’s no guarantee that what worked for one client will work for another, but at the very least, our probability of success goes up by incorporating this data into our ideation process. 

In-house teams, on the other hand, will almost never have access to this kind of data source, which puts them at a competitive disadvantage when it comes to thinking up experiment concepts and building out experiment roadmaps. 

4. R&D

Generally speaking, an in-house experimentation expert will spend most of their time researching, ideating, running, and analysing experiments – with less time available to think about big picture strategy. 

But with an agency like ours, we’ve got numerous team members dedicated solely to this stuff. These team members spend time optimising our processes, refining our testing philosophy, and building our own models so that we can deliver as much value to our clients as possible. 

For example, we’ve developed our own neurolabs product, which allows us to leverage cutting edge findings from the field of behavioural science to provide deeper insights into our clients’ web visitors. 

In fact, this newly developed product is beginning to play a big part in the work we’re doing for a number of our clients, including The Times.

Another example of the R&D work we’ve done is our Levers framework: as mentioned above, since our inception 14 years ago, we’ve managed to build a huge database of experiment results, and we’ve used this data to develop a systematic theory of the kinds of factors that influence conversion. 

This theory – which we refer to as our Levers framework – allows us to make sense of experiment results more easily and helps us devise more impactful experiment programmes for our clients. 

An overview of our Levers framework

Our neurolabs product and Levers framework are just two examples of the R&D work we’re doing, and innovation of this kind has been a vital ingredient of our success over recent years. 

5. Innovation & creativity

Following on from our point about R&D, many of our internal team members (who have previously worked within in-house teams) also make the case that significantly more innovation takes place within agency-led experimentation programmes. 

Here’s why: 

Firstly, any agency-side consultant will be working with multiple clients at any given time.

This means that they’re often exposed to a far wider range of industries, websites, problems, and solutions. They can then take what they’ve learnt with one client and consider how it might apply to other clients they’re working with. 

In-house consultants, on the other hand, only work for a single client, usually on a single website. This means that the kind of ‘cross-pollination’ just described rarely if ever occurs, which can lead to ideas growing a bit ‘stale’. 

And following on from this point: when agency consultants come up against difficulties, they’re able to draw upon the collective experience, expertise, and creativity of the entire agency to solve their clients’ problems. 

Take an agency like ours: at the time of writing, we have 16 full-time consultants here at Conversion. This means that when one of our consultants comes up against a difficult challenge, they’re able to draw on the support and inspiration of an entire team of skilled colleagues, who are themselves working with a diverse group of clients.

In-house experimentation teams very rarely have access to this kind of knowledge/skill pool, which can make it difficult – though, of course, not impossible – for them to compete in terms of innovation.  

6. Dedicated resources

Generally speaking, in-house experimentation teams are often set up in a less formal way than agency teams:

There will usually be at least one conversion-focussed specialist, who will work alongside the internal design and development teams to deliver the experimentation programme. 

But unfortunately, with much of their time taken up with day-to-day work and high-priority business projects, rarely do these in-house designers and developers have much free capacity to support the experimentation team. 

This means that in-house consultants are sometimes left pulling their hair out waiting for their experiment concepts to be created. 

And it can also mean that, in order to hit their velocity targets, in-house experts are forced to run experiments using highly limiting WYSIWYG editors that only allow them to make relatively minor tweaks to any given page.

With agency teams, on the other hand, the design, development, project management, and QA capability within the agency are all dedicated exclusively to delivering the client’s programme. This means that the work often gets completed much more efficiently, and that potentially high-impact experiments are prioritised.

7. Specialisation

One of the challenges with in-house experimentation is that often, due to resource constraints, individuals within the team are forced to do a bit of everything. 

For example, many in-house CRO roles are merged with UX roles. This can mean that in-house CRO managers are responsible for running both the experimentation programme and the UX optimisation programme, while also working across website execution for campaigns, looking after core web vitals, and supporting website admins when necessary too. (This was the experience of one of our consultants in a previous in-house role.)

With agencies, on the other hand, the individuals responsible for delivering your programme are allowed to specialise:

  • Consultants are able to focus exclusively on strategy, without having to spend their time project managing the programme or chipping in on design/development/QA work. 
  • Designers and developers are able to learn and hone the specific skills required for high-level experimentation – skills which are often subtly but significantly different from those of most typical design and development roles. 
  • Top agencies will also often have a dedicated QA team, whose primary responsibility will be to ensure that your experiments meet certain standards. 

This kind of specialisation means that each of these individuals ends up becoming extremely skilled within their limited range of operations – and it is the client, ultimately, who enjoys the benefits of this specialisation. 

The benefits of an in-house experimentation programme

1. Intimate knowledge of product

When we take on new clients, we take pains to understand them as well as we possibly can: their needs, their expectations, their goals, their products, their services, their websites, their competitors, etc. 

This is always one of our first steps, and it gives us a firm understanding on which we can begin running experiments that respond to and attempt to overcome our clients’ core challenges. 

But as much time and energy as we dedicate to this stage of our process, it’s difficult for us to understand our clients and their products as well as an in-house expert can. 

In-house experts have the luxury of living and breathing your company. 

They can dedicate all of their time to your programme – rather than having to spread themselves between various clients, as is the case with any agency consultant. They will be able to take part in internal meetings and functions, they will be around for informal conversations, and they will have easier access to your training materials and your internal subject matter experts. 

All of this, taken together, means that the in-house expert has a chance to build a deeper, more intuitive understanding of your product and your organisation. 

If you have an extremely complicated product or service that it will be difficult for an agency consultant to get to grips with, it may make sense for you to go down the in-house route. 

2. Lower Cost

If you’re planning to build a one- or two-person experimentation team, you’ll probably find that it costs a good deal less than an agency would. 

Of course, as discussed above, with an agency you get a dedicated designer, developer, project manager, QA engineer, etc., which is where the additional cost comes in. 

But if your internal team has the capacity and capability to support your in-house experimentation expert in running the programme, then you may find that you have little need for all of the additional resources provided by an agency. 

3. Higher testing velocity

In the first section of this post we talked about a report released by Convert in 2019 showing that agency teams had a significantly higher average win rate than in-house teams.  

But what the report also showed was that in-house teams had a higher average testing velocity than agency teams. 

Testing velocity data taken from Convert’s 2019 Optimization Maturity Report

Testing velocity is important for a number of reasons. The more tests you can get up and running,

  1. The more data you can gather
  2. The more you can learn
  3. The quicker you can iterate

This is important, and it’s something that we as an agency have spent a lot of time working on ourselves.

But on the other hand – and as we discuss in our blog post on testing velocity – testing velocity is often inversely correlated with the quality of your experiments.

In other words, it’s easy to run more experiments if you cut down on the amount of time you put into research, design, development, and QA, but the impact of those experiments is likely to fall as well. 

Considering that agencies, on average, have a higher win rate, it might be fair to draw the tentative conclusion that – again, on average – agencies produce higher quality experiments but in-house teams produce more experiments. 

There’s something to be said for each approach, and it’s up to you to decide which might work best for your business. 

4. Easier to gain stakeholder approval

When we asked our consultants about the pros and cons of in-house experimentation, this was one of the pros that came up more often than any other.

When you bring on an agency to run your experimentation programme, there’s only so much they can do to raise awareness about experimentation within your organisation. Instead, much of this burden falls upon their points of contact within the business, who aren’t always particularly well qualified to perform this task. 

A lack of enthusiasm and support for the programme can often mean that it becomes extremely difficult to get approval for experiment concepts, so this is something that it’s important to get right. 

As an agency, we’ve had a huge range of experiences with this. Some of our clients have been unbelievably good at championing our results and gaining the support of senior stakeholders, while others have had more difficulty.

But on the other side of the coin, in-house experts have a chance to build real, human relationships with the people who make the decisions within the organisation. They have a chance to present their results in internal team meetings, to explain the philosophy behind their experimentation efforts, and to generate enthusiasm for the programme. 

This means that it’s often easier for internal teams to gain the stakeholder approval they need to make their programmes as impactful as possible. 

5. Easier to effect cultural change

This point ties in very closely with the one above about stakeholder approval.

An in-house experimentation expert is in a much better position to start building a real culture of experimentation than an agency could ever be. 

They have a chance to build close relationships with senior stakeholders, to attend meetings that an agency’s consultant would never have access to, and they have more power and resources available to them to evangelise and build enthusiasm. 

If you’re serious about building a real culture of experimentation, at a certain point, you’ll probably need people on the ground, within your organisation, to get it done. 

6. Ability to get winning experiments implemented

Here’s a scenario that we, as an agency, occasionally come across:

We’ve conducted extensive research to work out why our client’s users aren’t converting in greater numbers. 

We’ve ideated, designed, built, and quality assured a variation of the webpage, which we’ve run against their existing page in an a/b test. 

The new variation has produced a strong conversion rate uplift, one that’s likely to generate our client significant amounts of additional revenue each year.

We’ve then passed the code for this winning variation on to our client but, for one reason or another, it’s never implemented on their site. 

This can happen for a variety of reasons: 

  • Maybe their developers have no free capacity to implement the code
  • Maybe their senior stakeholders are unwilling to give final sign off
  • Maybe there’s nobody within the business project managing the implementation. 

Whatever the cause, this can be a big problem, completely negating the whole point of experimentation, which is to find out which of your ideas are good and to implement them. 

But in theory, this should never occur with in-house teams, because they can work from within the organisation to make sure that experimentation is understood and that winning experiments are implemented. 

The middle way: hybrid experimentation programmes

If we’ve achieved our goal, you should now understand the important differences between in-house and agency programmes, and have a better sense of which approach is likely to work best for you. 

This whole discussion can basically be summarised in two key takeaways:

  • As a result of increased specialisation and dedicated resources to experimentation, agency teams are often able to be more efficient and innovative, which allows them to achieve better win rates on average.
  • In-house teams can be cheaper to run in the long-run and they come with a higher testing velocity. They’re also often more able to gain strong stakeholder approval for the programme and to impact the culture within the business. 

But beyond the agency/in-house binary, there’s also a third option that we’ve not yet talked about: hybrid experimentation.

This approach involves combining the best of both in-house and agency experimentation to achieve the maximum return on investment.   

For example, we’ve worked with a number of our clients to help set up their own internal experimentation capability, focussing on things like education, processes, culture, technology, personnel, etc. 

This has allowed them to draw upon our experience and expertise – as well as our processes and frameworks – while also reaping many of the benefits that come with in-house experimentation. 

And we’ve also worked alongside preexisting in-house teams to support them in developing new, innovative experiment concepts that help drive the maximum possible impact for the programme. 

The point, then, is this: there’s no one-size-fits-all solution for your experimentation needs. Some organisations may require an agency; some may require an in-house team; and some may require a mixture of the two.  

Having read this post, you’ll now hopefully have a better idea as to which set up might work best for you, but if you’d like to talk through your options with an expert, feel free to get in touch. We’re passionate about experimentation, and we’ll do our best to give you an honest assessment of the option that’s best suited to your organisation’s needs.

The 15 most common CRO mistakes (and how to avoid them)

Generating meaningful results through CRO and experimentation is tough.

While almost anyone can achieve the odd one-off conversion rate uplift, producing wins with any consistency is another challenge entirely.

What’s more, many CRO practitioners find that even when they do achieve the kinds of results they’re after, their winning variations fail to perform when served to 100% of their traffic. 

Thankfully, in our experience, almost all CRO-related difficulties can be traced back to a number of core mistakes. In this post, we’re going to run through a list of the most common (and harmful) ones, explaining what they are and how you can avoid them.

If we’ve done our job right, then by the end of this blog, you should have everything you need to sidestep all of these pitfalls and start driving real, replicable results through CRO and experimentation.

So, to begin: this is our list of mistakes (in no particular order!):

  1. Starting out too big
  2. Average build size is too big
  3. Tests are too small
  4. Chasing winners
  5. No hypothesis
  6. Statistical misunderstandings
  7. No research
  8. The flicker effect
  9. Wrong primary metric
  10. Not tracking guardrail and secondary metrics
  11. Over-reliance on best practices, under-reliance on testing
  12. Not digging into segments
  13. Little or no quality assurance
  14. Noise of existing customers
  15. Sitewide redesign

1. Starting out too big

As a general rule, it’s never good to start an experimentation programme with an ambitious, resource-intensive experiment. Here’s why:

A few years ago, we began working with a new client – a property listing website. Some early research indicated that their property pages might benefit from the inclusion of a maps feature, which would allow their users to see where each property was geographically located on a map. 

This functionality was quite complicated to build, and required a lot of time spent on development and QA to ensure that everything was functional and user-friendly. 

Unfortunately, when we finally launched our experiment, we found that the variation actually produced a negative impact on our primary conversion metric. In fact, many users were actively navigating away from the maps view that we’d created!

This is just the way it goes sometimes. You can never be sure of the impact a change will bring until you’ve tested it. But the mistake we made was sinking too much time and energy into testing a hypothesis that could have been tested using a much simpler, much less resource-expensive experiment.

For example, we could have used a painted door test to gauge demand for this new maps feature. This would have been a much quicker experiment to build, and it would have given us everything we needed to validate – or invalidate – our hypothesis.

If we then found that lots of users were trying to use this feature, we could have gone on to build out the functionality with a reasonable degree of certainty that it would actually improve engagement on the site. 

As a result of this mistake, we’ve since adopted the concept of a minimum viable experiment (MVE) to help guide our experimentation. In essence, an MVE is the smallest possible experiment – in terms of build, design time, etc. – that will allow us to validate our hypothesis. 

We now use MVEs to validate our hypotheses at minimal cost and risk, and to gather information about our users which we can use to guide the future course of our programmes. 

We’re not saying ‘don’t spend time on big builds.’ We’re saying ‘only spend time on big builds when you have a reasonable degree of certainty that those builds are going to be worth it!’ 

2. Average build size is too big

This may sound like a similar point to the one raised above, but it’s slightly different. The last point was about not creating experiments with big builds at the start of your programme, when you don’t have much data to support them. This is about creating experiments with big builds in general. 

Many people within the CRO industry subscribe to the view that ‘the bigger the build, the bigger the uplift.’

The idea behind this is fairly simple: if I make big changes to a web page, the conversion uplift is likely to be bigger than if I only make small changes.  

Anecdotally, though, we’ve always felt that our smaller experiments yielded results that were just as strong as our larger ones – so we decided to dig into our database, made up of thousands of experiment results, to see what it could tell us. 

The chart below shows our findings. 

As you can see, tweaks have just as high a win rate as experiments with a large build-size – and they have a slightly greater average uplift too (6.6% vs. 6.5%). 

This data suggests that there’s no correlation between build size and either win rate or uplift – so if you’re spending all of your time building huge experiments in the hope of an equally huge uplift, you’re probably going to waste a lot of time.  

3. Tests are too small

While it’s important not to spend all of your time focusing on huge experiments, it’s equally important not to spend all of your time testing minor tweaks.  

As discussed in the next section on chasing winners, experimentation gives you a chance to trial some of your boldest and brightest ideas – ideas that have the potential to completely revolutionise the way your business works. 

If all of your experiments are focussed on minor tweaks, you’re missing out on one of the biggest opportunities that CRO offers: taking risks with a safety net.

Ideally, your programme will be a combination of small, low-risk tests with a high probability of winning, and higher-risk tests that have the potential to fail horrendously or succeed spectacularly. 

4. Chasing winners

Following on from our last point: conversion uplifts are important, for sure, but when done right, CRO should also be about gathering deep insights about your customers and trialling bold, innovative ideas with only a fraction of the usual risk. 

As an agency, therefore, if we’re winning too many experiments, we start to ask ourselves if we’re being bold enough.

A high win rate may look good on paper, but we see it as an indication that we’re being too safe, putting ideas into action that we already have good reason to think will work. 

The most value from CRO comes when you learn things that you didn’t already know – this allows you to start achieving bigger, more surprising wins, which you can later use to inform not only your experiment roadmap but also your product, pricing, and business strategies too. 

5. No hypothesis

Many people doing CRO today simply run their tests, analyse the results, look at whether the challenger variation won or lost, and then move on to the next experiment. 

On the one hand, these people should be commended for the fact that they’re running experiments and basing their decisions on empirical evidence. But on the other, their process is missing one of the most important elements of any sound scientific methodology: a hypothesis. 

Put simply, every test should be designed to test a hypothesis. 

This way, even if your test loses, you’re at least learning something, i.e. that your hypothesis was wrong. You can then use this learning to inform future experiments with an improved chance of success. 

As will hopefully be obvious by now, high-level CRO is as much about learning as it is about improving your conversion rate. Creating data-backed hypotheses and then testing them is the key to achieving long-term success. 

6. Statistical misunderstandings

Confusion surrounding a/b testing statistics is a cause of much CRO-related difficulty. 

For example, many people call (i.e. finish) their tests as soon as they’ve reached 90 or 95% significance. 

Mats Einarsen showed why this is a bad idea. 

He simulated 1,000 A/A tests (where the control and the variation are identical) and found that 531 of them reached 95% statistical significance at least once!

What this shows is that if you stop your experiment as soon as it reaches a certain significance level – even if this level is set at 95 or 99% – there’s a reasonable chance that your result will be the product of blind luck. 
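You can reproduce this effect yourself with a quick simulation. The sketch below is not Einarsen’s original code, and the traffic level, conversion rate, and peeking frequency are arbitrary assumptions – but it shows how ‘peeking’ at interim results inflates the false-positive rate well beyond the nominal 5%:

```python
import random
from statistics import NormalDist

def peeking_false_positive_rate(n_tests=200, n_users=2000,
                                check_every=100, p=0.10,
                                alpha=0.05, seed=42):
    """Simulate A/A tests (control and variant identical) and count how
    often a test reaches significance at *any* interim peek."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided threshold
    flagged = 0
    for _ in range(n_tests):
        conv_a = conv_b = 0
        for i in range(1, n_users + 1):
            conv_a += rng.random() < p  # both arms have the same true rate
            conv_b += rng.random() < p
            if i % check_every == 0:  # 'peek' at the running result
                pooled = (conv_a + conv_b) / (2 * i)
                se = (2 * pooled * (1 - pooled) / i) ** 0.5
                if se > 0 and abs(conv_a - conv_b) / i / se > z_crit:
                    flagged += 1  # a 'winner' that is pure noise
                    break
    return flagged / n_tests

print(peeking_false_positive_rate())
```

Even though every test here compares two identical pages, far more than 5% of them cross the 95% significance threshold at some point – which is exactly why stopping as soon as you see significance is unsafe.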

To avoid this mistake, you need to determine your required sample size before you’ve even launched your test – and you need to stick to it.

Here’s a good calculator you can use to work out the sample size you need for your experiment. 
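If you’d rather see the arithmetic, here’s a sketch of the standard two-proportion formula that sample-size calculators of this kind typically use (the 5% baseline rate and 20% relative lift below are made-up example numbers):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_lift,
                            alpha=0.05, power=0.8):
    """Visitors needed in each variant to detect the given relative lift
    in conversion rate, via the standard two-proportion z-test formula."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(numerator / (p2 - p1) ** 2) + 1

# e.g. a 5% baseline conversion rate and a 20% relative uplift
# needs roughly 8,000+ visitors per variant at 80% power
print(sample_size_per_variant(0.05, 0.20))
```

The key point is that the sample size is fixed before launch – you run the test until you’ve collected that many visitors per variant, and only then read the result.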

And here’s a good starting point to help you learn a bit more about a/b testing statistics in general.

7. No research

Having been convinced of the value of CRO and experimentation, the next challenge is deciding what to test. 

Should you change your hero image? Should you make your headline copy more emotive? How do you decide which of these ideas to test? What’s more, how do you determine if either of them is worth testing?

To answer these questions, you need to do your research.

Research comes in many forms – analytics audits, scrollmaps, heatmaps, surveys, user testing, biometric testing, etc. – and it provides you with a good indication as to where and why your web visitors aren’t converting. With this information, you should then have a good idea of which kinds of hypotheses are worth testing and which ought to be pushed further down your list of priorities. 

Ultimately, there are any number of potential hypotheses you might want to test on your website. By prioritising those that are backed by multiple data points, with a mix of qualitative and quantitative research, you’ll be able to home in on the areas of testing that are likely to yield the biggest return.

Learn more about our framework for using research to prioritise a/b tests. 

8. The flicker effect

Sometimes when you’re running an a/b test, the original version of your webpage will appear in the browser before your variant finally ‘flickers’ into place. This phenomenon is known as the flicker effect, or the flash of original content (FOOC), and it can play havoc with your experiments. 

Not only does it ruin your website’s user experience, but by showing your users both versions of your webpage – the control and the variation – it impacts the way they respond to your experiment, invalidating your results.  

Thankfully, there are things you can do to minimise or entirely remove the flicker effect.

Generally speaking, our developers write the code for our clients’ experiments with CRO-specific standards in mind. This ensures that the code is executed as quickly as possible, that any timing issues are accounted for, and that the user only ever sees the version of the webpage that they’re supposed to. 

9. Wrong primary metric

Your primary metric is the metric that you use to decide whether your experiment is a winner or a loser. 

If you have a website that sells shoes, you might set the number of orders as your primary metric. This way, if you run an experiment and it results in a 10% uplift in the number of orders, you’ll class it as a winner. 

But what do you do if you’re optimising a web page that’s a few steps away from your final conversion? For example, maybe you have a four step funnel and you want to optimise the first of these four web pages. 

4 step checkout process - made up of a landing page, a basket page, a checkout page, and a confirmation page

What should you use as your primary metric?

Some people will argue that in this case, you should select the next action you want your user to take as your primary metric, rather than the final conversion. So, in this example, every time a user proceeds from the landing page to the basket page, you would then count it as a conversion. 

But in our experience, this choice of primary metric is a mistake. 

That’s because sometimes, for a variety of reasons, you’ll find that your ‘next action’ conversion rate goes up but your ‘final action’ conversion rate goes down. 

Take this real-world example:

We thought that by making the minibasket easier to use, we would increase the progression rate through to checkout, and that this would ultimately have a positive impact on our final conversion rate. 

However, despite the fact that this experiment increased the progression to checkout rate by 28%, it also increased the dropoff rate on the checkout page by 43%!

This netted out at a 7.7% decrease in final conversions.
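To see how this can happen, here’s a minimal sketch of the funnel arithmetic in Python. The baseline rates below are illustrative assumptions, not the actual figures from this test:

```python
# Illustrative funnel: a higher 'next action' rate can still lose on the
# final conversion. Baseline rates here are assumptions for the sketch.

sessions = 10_000
baseline_progression = 0.30   # landing page -> basket (the 'next action')
baseline_dropoff = 0.40       # drop-off at the checkout page

# Control: progress to checkout, then complete
control_conversions = sessions * baseline_progression * (1 - baseline_dropoff)

# Variant: +28% progression, but +43% checkout drop-off
variant_progression = baseline_progression * 1.28
variant_dropoff = baseline_dropoff * 1.43
variant_conversions = sessions * variant_progression * (1 - variant_dropoff)

relative_change = variant_conversions / control_conversions - 1
print(f"Final conversions: {relative_change:+.1%}")  # -8.7% with these rates
```

With these made-up rates the net effect is roughly -8.7%; the exact figure depends on the baseline progression and drop-off rates, but the direction is the point.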

As a consequence of this result and many others like it, we always recommend using your final conversion as your primary metric.

Note: There are a few occasions when it might actually make sense to set your primary metric as something other than your final conversion. If you’d like to learn about these exceptions to the rule, we discuss this in more detail in our blog post about primary metrics. 

10. Not tracking guardrail and secondary metrics

Selecting the right primary metric is an important first step, but if you want to get as much out of your a/b tests as possible, you should also be tracking certain guardrail and secondary metrics.

This is something that many CRO practitioners fail to do, and it means that they’re leaving all kinds of important insights – insights that could be used to inform their future testing strategy – on the table. 

Guardrail metrics are second-tier metrics linked to key business objectives. They help you ensure that your experiment isn’t inadvertently harming other important business KPIs. 

Here’s an example of the importance of guardrail metrics taken from work we did for one of our clients, a camera vendor:

We introduced an ‘add to basket’ call-to-action (CTA) to the product listing page, which allowed users to make a purchase without having to navigate to the product page.

This test produced a positive uplift on our primary metric – no. of orders – but it had a negative impact on two of our guardrail metrics – average order value (AOV) and revenue. 

If we hadn’t been tracking these guardrail metrics, we would have simply declared this test a winner and recommended that our client serve this variation to 100% of their traffic – costing them a fortune in the process. 

Fortunately, on top of tracking guardrail metrics, we were also tracking a number of secondary metrics. Secondary metrics don’t determine whether your tests win or lose, but they do allow you to monitor things like engagement, scroll depth, secondary KPIs, etc., to help you make sense of your result. 

When we dug into our secondary metrics, we found that far fewer users in the variation were purchasing accessories and add-on items than in the control. This was because these users were being diverted away from the product page, which was where they were usually first exposed to these products. 

Insights gleaned from these guardrail and secondary metrics not only allowed us to avoid rolling out a new version of the web page that would have harmed business objectives, but they also helped inform the future direction of our testing strategy. 
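The trap that the guardrail metrics caught can be sketched numerically. The figures below are hypothetical, not the client’s actual numbers:

```python
# Hypothetical example: the primary metric (orders) 'wins' while the
# guardrail metrics (AOV, revenue) catch a net loss.

control_orders, control_aov = 1_000, 120.0   # AOV in GBP
variant_orders, variant_aov = 1_080, 105.0   # +8% orders, but lower AOV
                                             # (fewer accessories per order)

control_revenue = control_orders * control_aov   # 120,000
variant_revenue = variant_orders * variant_aov   # 113,400

order_uplift = variant_orders / control_orders - 1      # +8.0%: looks like a win
revenue_change = variant_revenue / control_revenue - 1  # -5.5%: guardrail says no
```

With these numbers, declaring the test a winner on orders alone would roll out a 5.5% revenue loss.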

Learn more about how we measure a/b tests for maximum impact and insight. 

11. Over-reliance on best practices, under-reliance on testing

Conversion rate optimisation, when done properly, is all about testing hypotheses and making decisions based on the best available evidence. Unfortunately, many people doing CRO today simply apply certain ‘best practices’ (e.g. CTAs should be in red) to their website, without ever testing whether those best practices are right for them. 

As any good CRO practitioner will know, every website is different. 

Just because something works well for some websites doesn’t mean that it will work equally well – or at all – for others.

Solely relying on best practices is a recipe for disappointment. 

If you’re serious about optimising your website, you need to be testing your hypotheses. 

12. Not digging into segments

Sometimes, for a whole variety of reasons, one variation will perform well with one specific segment but not with another. 

In fact, this is something that we as an agency see all the time.

A test variation will achieve, say, a 2% conversion uplift, but when we dig into the data, we find that the uplift on mobile was actually +12% while the uplift on desktop was -10%. 

This kind of finding has real, quantifiable implications. 

For example, what was it about the variation that mobile users responded so positively to? And why did desktop users respond so poorly? Can we build new experiments to iterate on these findings? Should we serve this new variation page to mobile users but leave the control in place for those using a desktop? Will doing so negatively impact the consistency of the user experience?

These are important questions that need to be answered, but unless you’re analysing your results data and looking at your different segments, you’ll miss them entirely.  
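For illustration, here’s how a small overall uplift can hide opposite segment effects. The traffic mix and the shared 5% baseline conversion rate are simplifying assumptions:

```python
# Blending two opposite segment effects into one overall number.
# Traffic mix and baseline conversion rate are illustrative assumptions.

sessions = {"mobile": 5_500, "desktop": 4_500}
baseline_cr = 0.05                                # same baseline for both segments
segment_uplift = {"mobile": 0.12, "desktop": -0.10}

control = sum(s * baseline_cr for s in sessions.values())
variant = sum(s * baseline_cr * (1 + segment_uplift[seg])
              for seg, s in sessions.items())

overall_uplift = variant / control - 1
print(f"Overall: {overall_uplift:+.1%}")  # about +2.1%: a 'small win' that isn't
```

A topline +2.1% here conceals a variation that is actively hurting nearly half of the traffic.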

13. Little or no quality assurance

No matter how strong your research or how well designed your experiment, if your webpages aren’t appearing as they should be – or if they’re appearing differently across devices, browsers, screen resolutions, etc. – then your results are likely to be skewed. 

One way around this is to invest in rigorous quality assurance (QA). 

As an agency, we involve our QA engineers right from the start of our process, which allows them to familiarise themselves with each experiment long before it is finally launched. We also encourage them to question everything and to assume, by default, that there will be problems.

This is a fairly stringent process, but it ensures that our QA engineers are almost always able to catch bugs and usability issues long before the experiment goes live. 

If you’re planning to run QA of your own, BrowserStack is a good place to start.

14. Noise of existing customers

Sometimes you’ll conduct a tonne of research and build a variation page that you’re confident will win, only to find that when you come to test it, the conversion rate has hardly moved and the result hasn’t reached statistical significance. 

What’s happened?

Well, sometimes, this is just the way it goes. No matter how much research you do, there’s no guarantee that the changes you make will produce their intended effect. This is why testing is so important in the first place.

But having said this, sometimes there’s another explanation:

Let’s say that 90% of the people on your website are existing customers. They’ve already been convinced of your product’s value, so when they visit your website, they’re simply there to reorder.

For these people, the changes you make are unlikely to have much of an impact on their behaviour – they’re going to buy the product regardless of whether you add new imagery, change the headline, etc.

In situations like this, many people make the mistake of running their tests on existing customers and new users together. When existing customers make up a sizable portion of overall traffic, this has a tendency to muddy the waters and make it extremely difficult for you to achieve a definitive result. 

Instead, you often need to find a way to isolate new users to ensure that they’re the only ones being included in your test. 

One way of doing this is to only include new users in your sample (not always possible). Another is to select a primary metric that’s tied exclusively to new user activity – for example, account creations. 
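If your analytics export flags returning visitors, the filtering itself is simple. Here’s a sketch with made-up session records (the field names are assumptions, not any particular tool’s schema):

```python
# Exclude returning customers before computing per-variant conversion rates,
# so their 'buy regardless' behaviour doesn't dilute the result.
# Records and field names below are made up for illustration.

sessions = [
    {"variant": "control", "is_returning": True,  "converted": True},
    {"variant": "control", "is_returning": False, "converted": False},
    {"variant": "b",       "is_returning": True,  "converted": True},
    {"variant": "b",       "is_returning": False, "converted": True},
    # ...in practice, thousands of rows from your analytics export
]

def conversion_rate(rows, variant, new_users_only=True):
    pool = [r for r in rows
            if r["variant"] == variant
            and (not new_users_only or not r["is_returning"])]
    return sum(r["converted"] for r in pool) / len(pool) if pool else None

cr_control = conversion_rate(sessions, "control")  # new users only
cr_variant = conversion_rate(sessions, "b")
```

The same filter can also be applied at targeting time (e.g. via a first-visit cookie check) rather than at analysis time, where your testing tool supports it.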

15. Sitewide redesigns

This one isn’t necessarily a mistake CRO practitioners themselves make, but it’s something that nonetheless damages conversion rates, so we thought we’d include it here anyway. 

Over the years, many companies have come to us with the same problem: feeling that their website’s design had become dated, they’d decided to hire a team of designers and developers to build them something new. They’d spent hundreds of thousands of pounds ensuring that their new website was as sleek and aesthetically pleasing as possible, but when they finally launched it, they found that their conversion rate fell off a cliff.

Many of these redesigns looked great, but for one reason or another, they weren’t performing. 

Ultimately, what’s the point of a fancy website redesign if it harms your bottom line?

When we hear that a company is planning to redesign their website, we like to offer an alternative approach: iterative redesign (also sometimes known as evolutionary redesign). 

This approach involves making changes gradually, often one at a time, and running constant tests to see how they’re affecting the website’s conversion rates. If the changes perform well, we keep them and look to build on them; if they perform badly, we reject them and look to learn from their failure. 

This is the method Amazon uses, and it has allowed them to continually improve their website based on the best available evidence, with little or no risk. 

Final thoughts

Almost anyone can achieve a one-off conversion rate uplift on their website, but generating long-term results through CRO is extremely tough. We hope that this post will give you a good foundational understanding of where you might be going wrong with your CRO efforts and what you can do to start (gracefully!) sidestepping these pitfalls.

If you’re interested in learning more about how you can use CRO to achieve your business goals, we have a biweekly newsletter where we go into more detail on the various strategies, frameworks, philosophies, and approaches that we’ve used to generate more than £1 billion in additional revenue for our clients. 

Sign up below!

Testing velocity: 6 strategies to ramp up your experiment launch speed

As a battle-hardened experimentation agency with more than fourteen years’ experience running advanced experimentation programmes for our clients, we here at Conversion have spent a lot of time thinking about how we can make our programmes as impactful as possible.

As a result of all this thinking, we’ve identified three key factors (to be discussed in the next section) that we believe determine the success or failure of any given experimentation programme. 

In this post, we’re going to home in on one of these factors – specifically, testing velocity – by explaining what it is and why it’s important. With the basics covered, we’ll then go on to share some of the strategies that we’ve used to increase our agency-wide testing velocity by more than 30%.

So, in summary, throughout this post we’re going to cover:

  1. What is testing velocity?
  2. Why is testing velocity important?
  3. How to increase your testing velocity

What is testing velocity?

Testing velocity is a measure of how quickly you can get your experiments up and running, from research and ideation through to launch. 

It’s one of the three core factors that determine the success of any experimentation programme – the other two are testing volume, which is the number of experiments you’re able to run in any given time period, and testing value, which is a calculation of your win-rate multiplied by your average conversion uplift. 

So, in essence, the more experiments you can run (volume), the quicker you can launch them (velocity), and the better those experiments are (value), the greater your ROI will be. Or,

Volume x Velocity x Value = experimentation impact!
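As a back-of-the-envelope illustration of how the three factors combine (all inputs here are hypothetical, and real uplifts won’t compound this cleanly):

```python
# Hypothetical inputs for the volume x velocity x value relationship.

weeks_per_experiment = 2.5          # velocity: idea to launch
parallel_tests = 2                  # concurrent experiment slots
win_rate = 0.25                     # value: share of tests that win...
avg_winning_uplift = 0.04           # ...and the average uplift when they do

tests_per_year = round(52 / weeks_per_experiment) * parallel_tests  # volume: 42
expected_uplift_per_test = win_rate * avg_winning_uplift            # 1% on average

# Treating each test's expected uplift as compounding over a year:
annual_impact = (1 + expected_uplift_per_test) ** tests_per_year - 1
print(f"{tests_per_year} tests/year -> roughly {annual_impact:+.0%}")
```

The point of the sketch: halving your launch time doubles `tests_per_year`, which feeds straight into the compounded result – provided the win rate and average uplift don’t suffer in the process.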

One important thing to understand about these factors is that they’re all closely related. That means that when you work to improve one of them, you’re also likely to impact one or both of the others as well. 

For example, let’s say that you decide to improve your testing velocity by cutting down the time you spend on user research. All things being equal, you may find that this change has a negative impact on your win rate, and that this reduced win rate cancels out any of the improvements that came from increasing your testing velocity. 

This example illustrates an important point: when you take measures to increase your testing velocity, you need to make absolutely certain that these measures aren’t negatively affecting either of the other two factors.

All of the strategies we discuss in the final part of this blog post have been designed with this in mind. 

Why is testing velocity important?

On a big picture level, testing velocity is important because it impacts your ROI. 

But why’s that so?

Well, generally speaking, the quicker you can take your experiments from ideation to launch:

  • the more information you can gather about your users 
  • the greater the number of challenger variations you’ll be able to test in any given time period
  • the greater the number of hypotheses you’ll be able to validate
  • the greater the impact you’ll be able to produce on your sitewide conversion rate

6 strategies to increase your testing velocity

When trying to improve your testing velocity, the goal should always be to increase your launch speed while at the same time ensuring that your experiment quality – or, what we refer to as ‘value’ – stays high. 

The strategies we’re going to unveil in this section have allowed us to do exactly that.

In fact, since 2018 we’ve improved our average launch speed by more than 32%.

Also worth noting is that we managed to achieve and maintain this uplift during a time when our agency more than doubled in size.

Read on to find out how we did it.

A graph showing our testing velocity improvement over the last four years
Our internal data about our agency-wide testing velocity over time

1. Data gathering and reporting

Four years ago, we made a conscious effort as an agency to increase our testing velocity.

Unfortunately, what we quickly realised was that, up until then, we hadn’t been gathering much data on the amount of time it took us to go from strategy to launch.

This made it extremely difficult for us to increase our speed, because without some baseline level to compare our progress to, how could we know if we were improving?

As a result, the first thing we did to increase our testing velocity was to actually start tracking it. 

This gave us a strong foundation on which we could begin looking to optimise. But data gathering alone isn’t enough to move the needle – you also need to start reporting on your data too. 

Reporting gives you a chance to keep your goals fresh in your mind, to consciously monitor your progress, and to narrow your focus on those changes that are actually going to make a difference.

Soon after we started tracking our testing velocity three years ago, we also started reporting on it internally. Since then, we’ve developed a number of reporting mechanisms – primarily using Google Data Studio – that we’ve used to display and keep track of our testing velocity. 

Below is an example of one of the first dashboards we put together to report on our velocity. It contains the number and percentage of experiments that we launched in less than three weeks, and the number and percentage that we launched in less than two. 

A screenshot of the original dashboard we used to start monitoring our testing velocity
Our first testing velocity dashboard from way back when
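The two figures on a dashboard like that reduce to simple threshold counts over launch durations. Here’s a sketch, with made-up durations (in days from strategy sign-off to launch):

```python
# Compute the dashboard's two headline numbers: how many experiments
# launched in under three weeks, and how many in under two.
# The durations below are made-up examples.

launch_durations_days = [9, 12, 25, 14, 31, 10, 18, 13, 22, 16]

def share_under(durations, max_days):
    hits = [d for d in durations if d < max_days]
    return len(hits), len(hits) / len(durations)

under_three_weeks = share_under(launch_durations_days, 21)  # (7, 0.7)
under_two_weeks = share_under(launch_durations_days, 14)    # (4, 0.4)
```

However simple, tracking these two percentages over time is what makes a velocity target measurable at all.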

This kind of reporting setup is a big part of the reason behind our dramatic testing velocity improvement over the last few years.

2. Build smaller experiments

‘The bigger the build, the bigger the uplift’ is a truism within the world of experimentation, based on the idea that the bigger and more complex the changes you make to a web page, the bigger your conversion rate uplift is likely to be. 

But is this true?

A while ago, we dug into our database of experiments to look at the average uplift for experiments of different sizes. What we found was that large experiments were no more likely to win than minor tweaks – and that they actually had a slightly smaller uplift than tweaks as well (see chart below)!

A bar chart comparing experiment build size with average win rate and average uplift
Our internal data comparing build size with win-rate and average uplift

Of course, there may be times when it becomes necessary to spend a good deal of time building out an elaborate experiment: the key to optimising your testing velocity is working out when this kind of additional time and energy is justified and when it’s simply slowing your process down.

Smaller experiments typically require less time spent on research, ideation, design, and development, which means that you can run more of them over a shorter period of time. 

This is why we always try to begin our experimentation programmes with a minimum viable experiment (MVE), which we define as the smallest possible experiment that will allow us to validate our hypothesis. 

Generally speaking, we will only invest in experiments that have a large build time if we’ve proven, either through a smaller experiment (possibly an MVE) or comprehensive user research, that the increased build time will be worthwhile. 

This approach allows us to start testing straight away, at the outset of a new project, without needing to wait weeks or even months for research, design, and development to come together. 

Ultimately, this gives us a chance to start impacting conversion rates immediately, while also gathering insights about our clients’ users which we can use to inform future experiment iterations. 

This approach is a large part of the reason why we’re so often able to achieve a positive return on investment for our clients within the first 12 weeks of their programmes. 

3. Avoid deep-dive analysis (when appropriate)

Having just completed an experiment, it can sometimes be tempting to spend weeks delving through your results, analysing various non-primary metrics in the hopes of uncovering insights that will unlock the rest of your experimentation programme.

Unfortunately, this almost never happens, which means that this additional analysis, while interesting, turns out to be mostly pointless. 

Instead, we recommend asking yourself the following question: how often do I find that the decision for my next experiment (the iteration) is made based on the analysis of a non-primary metric?

If your answer is ‘rarely’ or ‘never,’ then you might want to rethink the way you do your analysis. 

Our approach often involves focussing solely on our primary KPIs and only digging into non-primary metrics when we wish to gain deeper insights into the ‘why’ behind the experiment result.  

This approach means that we don’t waste time producing analyses that ultimately have no impact on the success of our programmes. 

4. Experiment internally

When attempting to work out which parts of your process are essential and which can be cut down, you’ll probably encounter a number of conflicting opinions amongst different members of your team.

Some may feel that extensive research is necessary for every single experiment, no matter how small the intended changes. Others may believe that research can be reduced for certain kinds of experiments, but that your usual design process should always be followed to protect the quality of your experiments. 

How do you decide between these two well-reasoned perspectives?

The same way you decide between two well-designed web pages: you run a series of experiments on them and see which one works better. 

Experimentation needn’t be limited to the conversion rates on your website – it can be applied to every activity within your organisation, from your R&D and product development right on through to your internal processes and procedures. 

As an experimentation agency that champions the power of experimentation, we’re constantly running tests on our internal processes, and this has allowed us to significantly improve our testing velocity while maintaining the quality of our experiments. 

5. Automation

We’ve recently launched our own R&D department, which is responsible, among other things, for automating many of the administrative tasks that our internal team does on a daily basis.

So far, this has allowed us to avoid many duplications of work while also cutting down on the time we spend doing menial tasks. As a result, we now have more time to focus on things that matter – like developing the best, most impactful experimentation programmes for our clients.

Most of the automation we’ve done so far has been relatively minor, focusing on small, incremental changes, but by cutting out fifteen minutes of work here and half an hour there for every single experiment we run, these changes are starting to add up.

Consequently, automation is beginning to play a bigger and bigger role in our ability to increase our testing velocity. 

6. Gain stakeholder approval early

Whether you’re working in-house or agency side, there’s every likelihood that each of your experiments will need to be signed off by at least one stakeholder – and usually several – before it can go live. 

Frustrating as this can be at times, it needn’t in and of itself negatively impact your testing velocity. 

Problems begin to arise, however, when experiments are rejected at the later stages of production. When this happens, it can mean that tens of hours have been sunk into an experiment that will never see the light of day. Not only is this a terrible waste of your resources, but it also badly hurts your testing velocity. 

To avoid this outcome, we always do everything within our power to gain stakeholder approval as early on in our process as we can. If an experiment’s going to be rejected, we want it to be rejected at the ideation stage. If this isn’t possible, then at the very least we want it to happen at the design stage, before hours of development have been put into it. 

There will always be experiments that you design and build out only to find that your stakeholders aren’t happy with the final implementation – the goal here is to make sure that this happens as rarely as possible. 

Final Thoughts

Focussing on improving your testing velocity is one of the best things you can do to increase the ROI of your experimentation programme. For us, it’s been a gradual process that continues to this day – and the results have been extremely worthwhile, allowing us to provide more value to our clients than ever before. 

If you’re serious about improving the testing velocity of your programme, the strategies outlined in this post offer a good place to start. 

How to hire the perfect CRO consultant for your programme

Hiring an expert CRO consultant is complicated – particularly in today’s job market – which can make it difficult to know for certain if you’re choosing the right person for your programme. 

As one of the world’s fastest growing CRO agencies, we here at Conversion are constantly adding skilled consultants to our team, so in the first half of this post, we’re going to give an overview of the kinds of attributes and traits that we look for when we hire our consultants.

In the second half, we’ll then go on to explain why, under certain conditions, it might actually make more sense for you to enlist the help of a full-service CRO agency, which will be able to run your entire, end-to-end experimentation programme for you.

So, in summary, throughout this post we’re going to cover all of the following topics: 

  1. What is a CRO consultant?
  2. What, specifically, does a CRO consultant do?
  3. What doesn’t a CRO consultant do?
  4. 4 things to look for when hiring a CRO consultant
  5. 7 reasons why hiring a full-service CRO agency might make more sense

What is a CRO consultant?

A CRO consultant – sometimes also known as a conversion optimisation consultant, a CRO specialist, a CRO expert, an experimentation consultant, etc. – is the chief strategist behind your conversion rate optimisation programme. Their role is to develop and oversee your programme’s strategy from start to finish, from research right on through to the analysis of experiment results and the application of learnings to new experiments. 

CRO consultants often have some level of expertise in all of the following areas:

  • Multivariate and A/B testing
  • Analytics
  • Behavioural science
  • Statistics
  • UX design
  • Copywriting
  • User research
  • Wireframing

What, specifically, does a CRO consultant do?

A CRO consultant is responsible for stewarding your experimentation programme from start to finish. This will usually involve:

  • Developing a testing strategy – a CRO consultant will set the agenda for your CRO programme and manage a testing roadmap to help deliver long term results 
  • Identifying problems with the customer experience of your website through quantitative and qualitative research. Research will typically include things like user testing, heatmap studies, on-site surveys, data analytics, etc.
  • Ideating test concepts to address issues identified in quantitative and qualitative research
  • Analysing and evaluating experiments
  • Reporting on results and devising iterations on concepts to drive the testing strategy
  • Prioritising insights and testing strategy to align with your goals and deliver a strong ROI
An infographic displaying a typical delivery process for a CRO consultant
Here’s an example of a (somewhat) typical CRO consultant delivery process

What doesn’t a CRO consultant do?

While CRO consultants are often extremely versatile, with expertise in a range of disciplines, rarely do they have the ‘full-stack’ of skills necessary to deliver an advanced CRO programme on their own. 

Specifically, CRO consultants don’t usually have expertise in: 

  • Design
  • Web Development
  • Quality Assurance

This means that you’ll often have to bring in the capabilities of other specialists if you want to create an experimentation programme that yields strong results. 

4 things to look for when hiring a CRO consultant

So, now that you understand exactly what a CRO consultant does (and also what they don’t), you’ll want to know how to select the right consultant for your programme. In this next section, we’ll run over some of the most important things to look for as you go about trying to hire your CRO consultant.

1. Data-driven outlook

Conversion rate optimisation, at its core, is about applying the scientific method to your website changes to improve the rate at which your users convert into customers. 

This means running experiments, collecting data, and using this data to guide your decisions. 

A CRO expert who’s not data-driven is a contradiction in terms, so make sure that whoever you hire is committed to letting the data – rather than bias or opinion or committee – guide the course of your experimentation programme. 

Tip: If you want to gauge how data-driven a candidate is, ask them to present their top 3 concepts for your website. If these concepts are chosen based on intuition and gut-feel, then you may want to consider someone else for the position. If they’re chosen based on research and data, you may be on to a winner. 

2. Proven track record

Running a CRO programme is hard. There are many places where an inexperienced CRO consultant might slip up, so it’s important to make sure that whoever you hire has a proven track record of delivering high-level CRO programmes that produce quantifiable results. 

Case studies are an excellent source of information about a CRO consultant’s past work. If your prospective consultant has a set of strong case studies – possibly even involving a similar company to your own – then you can be fairly confident that they’ll be able to deliver your programme for you. 

A screenshot of our client success page
Here’s a screenshot of our client success page, where we post some of our latest case studies and show the world what we can do

Of course, case studies aren’t a foolproof indication of a consultant’s quality – they’re designed to let the consultant put their best foot forward and present themselves in as favourable a light as possible. But at the very least, they’ll tell you about the kinds of projects your candidate has worked on and show you some of the best results they’ve been able to achieve in the past.

Reviews and testimonials are another great way of gauging the quality of a CRO consultant’s past work. If they have lots of happy clients who were willing to leave them rave reviews and glowing testimonials, there’s a good chance that the service they provided was of a high standard. 

To give you an idea, here’s an example of a testimonial that our friends at Canon left us off the back of an experimentation programme we ran for them:

A positive testimonial we received from Canon

3. Thought leadership content

Many of the foremost experts in the CRO industry spend time producing content that outlines their methodology and that provides a good insight into the way they think and work. 

If you want to get a sense as to the quality of work you can expect from a consultant, their content is a good place to start.

Tip: One good way of judging the merit of a piece of content – aside from reading it yourself! – is to look at the reception it receives from other industry experts. If they’re raving about it, you can be fairly sure that the person who wrote it knows their stuff.

Our blog post about our experimentation framework, for example, written by our very own Stephen Pavlovich, is widely shared within the industry and has been used to inform the CRO methodologies of many brands across the globe – including Facebook and Microsoft.

4. Strong communication skills 

Conversion rate optimisation is still a relatively new discipline, which can sometimes mean that it’s difficult to gain the support of senior stakeholders when trying to roll out innovative or novel experiment concepts. 

If you have a CRO consultant who can effectively articulate their vision for the programme, as well as the rationale behind each of their experiments, then their chances of gaining the support of senior decision makers go way up. This is extremely important if you want your experimentation programme to have a real impact. 

7 reasons why hiring a full-service CRO agency might make more sense

A good, in-house CRO consultant can be extremely useful when it comes to setting up and running your experimentation programme, but there may be situations when it makes more sense to bring in the services of a full-service CRO agency instead. 

In this last section, we’re going to go over a few of the reasons that you might have for hiring a CRO agency rather than an in-house specialist. 

1. Internal limitations

As discussed above, while a good CRO consultant will bring many valuable skills to the table, they’re unlikely to come with the ‘full-stack’ of CRO skills – skills like design, web development, and QA. 

If you have a strong internal team with plenty of free capacity, then it may well make sense for you to hire an in-house consultant, who can work alongside your internal team to deliver your experimentation programme. 

But if, as is more often the case, your internal team is already working at or near full capacity, then it may be more sensible for you to hire a full-service CRO agency, whose consultant, design, development, project management, and QA teams will be able to focus exclusively on experimentation without other priorities taking up capacity. 

2. Specialisation

While your organisation may be chock full of design and development talent, the skills required for CRO will often be subtly but significantly different from those of your internal specialists. For example, it’s extremely rare that an in-house development team will have all of the skills needed to work within experimentation tools. This may mean that, on top of hiring a CRO consultant, you’re also forced to hire a new, CRO-focussed developer as well. 

On the other hand, a good CRO agency will have deep expertise and specialisation in every aspect of CRO, which will often mean that they’re better equipped to support a CRO consultant in building out an experimentation programme than your internal team would be. 

3. Experiment velocity

With an entire team of dedicated experts working on your website, a CRO agency will be able to run a greater number of high-quality experiments than would a single, in-house CRO consultant. 

For organisations with a greater number of opportunities for optimisation and more revenue at stake, it may make more sense to bring in an agency, who will be able to deliver compounding results at a far faster pace. 

4. Higher win rate

A report from 2019 looking at more than 28,000 experiments found that agencies were able to rack up almost 21% more wins than in-house teams. 

Of course, this win rate differential will vary from case to case – there’ll no doubt be in-house specialists who are able to outperform agencies – but if a high win rate is one of your top priorities, you may want to go down the agency route instead.  

5. Access to a huge database of past experiments

In our case, we’ve stored data on more or less every experiment we’ve ever run. This means that we now have a database with thousands of experiments that we’re able to draw upon to inform our experimentation programmes and deliver maximum ROI for our clients. 

As an example, let’s say that we start working with a new client who has an ecommerce website. One of the first steps in our process is to dig through our database and look for the kinds of experiments that were successful – and unsuccessful – for our other clients who operate in a similar market space. Of course, each client’s website is different, and there’s no guarantee that what worked for one website will work for another, but at the very least, this data gives us additional information that we can use to guide our strategy. 

Our database is one of the biggest reasons that we’re so often able to generate a positive return on investment for our clients within the first 12 weeks of working together – and it’s also why our agency-wide win rate is as high as it is. But in-house consultants won’t have access to this kind of database, which puts them at a big disadvantage in this respect.

6. More innovation

If you hire an in-house consultant, your CRO consultant team will consist of one person. If you hire an agency, you’ll be hiring the collective expertise of a whole team of consultants. 

At the time of writing, we have a team of 12 consultants and 6 associate consultants, with these numbers growing all the time. This means that when our consultants run into difficulty, they’re able to draw upon the collective creativity, experience, and knowledge of our entire consulting team. This creates all kinds of synergies and allows us to be far more innovative than almost any single CRO consultant – no matter how brilliant.

7. Strategic experimentation

Most good CRO consultants will be able to deliver an effective experimentation programme that helps you optimise your website, but an experienced CRO agency will often have the capability to extend your experimentation programme far beyond the limits of your website. 

For example, we’ve worked with a number of our clients to experiment on their pricing and product strategies, allowing them to gain extremely valuable information about their customers while significantly reducing the risks associated with developing new products and applying untested pricing strategies. 

Read more about our approach to product experimentation. 

By hiring a CRO agency with advanced experimentation capabilities, you’ll have a chance to explore opportunities that go far beyond the usual remit of conversion rate optimisation. 

Final thoughts

Hiring a CRO consultant can be a complicated process. All CRO consultants – whether in-house or agency-side – will need to have certain skills and attributes if they’re going to drive meaningful results through experimentation. We hope that this blog post will give you everything you need to pick the right person – or persons – for your programme’s needs. 

Press release: Reddico joins the Sideshow Group

We are delighted to announce that Reddico, one of the UK’s leading SEO specialist agencies, is joining the Sideshow Group.

In recent years the agency has enjoyed exceptional growth and has attracted clients such as BlackRock and Direct Line.

Reddico combines expertise in SEO with proprietary technology and tools to help clients gain a competitive advantage. They have received recognition for their outstanding and results-driven work at UK and European Search awards, and also for their positive and people-first culture, including regular appearances in the UK’s best places to work league tables. Reddico is currently pending B Corp approval and has recently committed to plant one million trees by 2030.

Tony Hill from Sideshow says: “It has been great getting to know the team from Reddico. Their integrity shines through, as does the focus they give to relationships and a positive working environment.”

“The investment they have made in bespoke technology sets them up for further growth and they are ambitious and highly capable. We look forward to working together and are excited to be supporting them in their future success.”

Nick Redding from Reddico says: “It’s an amazing opportunity for Reddico and everyone who works here and we are excited for the next step in our journey. We are looking forward to sharing, learning from, and working with other agencies in the group. From the outset what stood out to us about Sideshow was our common belief of putting people first and doing the right thing.” 

Luke Redding from Reddico says: “Over the last decade Reddico has grown from a startup to an award-winning agency, helping household brands all over the world. We’ve built an incredible team, grown quickly and stayed true to our values. I’m so excited that Reddico is now part of the Sideshow Group. It’s clear they share our values, vision and ambition for Reddico, and I’m looking forward to the next part of our journey.”

Carl Hendy from Reddico says: “From our early discussions with Tony we knew our Reddico values aligned with Sideshow’s long term vision. Reddico will continue to deliver a best-in-class SEO service whilst working with Sideshow Group agencies to offer clients an aligned digital experience and marketing service.”

Sideshow Group was advised by Lewis Silkin and Eight Advisory. Reddico was advised by Osborne Clarke.

Press release: Widerfunnel joins the Sideshow Group

Today Sideshow Group is extremely pleased to announce a significant step forward in our ambition to become a global challenger in digital experience and marketing services.

Widerfunnel, one of the world’s leading experimentation agencies, is confirmed as the newest agency in the Sideshow Group. Headquartered in Vancouver, Canada but with most of its sales coming from the US, this is an exciting move into the North American market with an outstanding partner.

Widerfunnel was founded in 2007 as the “anti-agency”, with a promise never to make a marketing recommendation that couldn’t be validated through scientific A/B testing. Today, the Company is regarded as one of the most sophisticated conversion rate optimisation (CRO) and experimentation organisations, known amongst their peers for delivering outstanding return on investment for clients such as Microsoft, HP, The Motley Fool, TaylorMade, and Dollar Shave Club.

This brings further global dimension plus additional expertise, smart proprietary tools and rigorous methodologies to the Group’s model of evidence-driven performance.

Tony Hill of Sideshow Group says: “Having Widerfunnel join the group gives a fantastic boost to our promise of delivering commercially impactful work for our clients. Their experience and expertise will be hugely valuable, and the cultural fit was clear from the outset. We can see a lot of opportunities to collaborate and can’t wait to start working together. It has been very enjoyable getting to know Chris and the team at Widerfunnel, who are super-smart as well as being humble and extremely friendly.”

Chris Goward of Widerfunnel says: “Joining this family of smart companies complements our focused expertise in meaningful ways and gives us new outlets to leverage the experimentation-based insights we’ve gained. We will play a pivotal role in a smart strategy here. I’m excited about the opportunities this will open up for our team and new evidence-driven services for our clients.”

Sideshow Group was advised by Lewis Silkin, Eight Advisory, Miller Thomson and Grant Thornton. Widerfunnel was advised by Sequoia Mergers and Acquisitions, Miller Titerle + Company, and OakTree Accounting.

Conversion’s first independent event: What framework for personalisation?

Recently, we hosted our first ever independent event (we know, big deal!) and we chose to kick things off with one of the most en vogue topics of the moment in CRO – personalisation.

Led by Kyle Hearnshaw, our Head of Conversion Strategy (and personalisation expert), and Stephen Pavlovich, our CEO, the event was held in a roundtable format. We wanted to engage with other practitioners from the industry and discuss five key themes:

  1. Cutting through the noise: Debunking personalisation myths
  2. What does personalisation mean to your business?
  3. Where do conversion optimisation, experimentation and personalisation meet?
  4. Is website personalisation right for your business? How can you tell?
  5. A framework for personalisation strategy

What’s up with personalisation?

With personalisation said to be just around the corner since at least 2014, we saw a good opportunity to get a real sense of what stage some leading organisations are at with personalisation, as well as what it means to the business.

We were not surprised to see that most of the companies present at the event were only just getting started with personalisation, whilst some had it on their radar but were yet to begin. It was clear, however, that no organisation could claim to be well underway with their personalisation programme.

These initial discussions confirmed our hypothesis: everyone thinks that others are doing personalisation, but in reality very few companies are, because of the expectations and complexity it brings.

So, how do you do it?

At Conversion, we created a four-step framework to enable any company to use personalisation within experimentation.

1. Define goals and KPIs

“Why should we run this personalisation campaign?” This is the paramount question you should be asking yourselves. The first step you should take when considering personalisation is to define the goals and KPIs that will be used to measure success. An example of a goal could be to increase repeat customer revenue, in which case the main KPIs might be conversion rate and average order value (AOV). 

2. Evaluate capability 

The second step is to evaluate your capability around these goals and KPIs. The aim is to confirm whether it is possible to act on them and, if so, how.

You might wonder why this isn’t the first step of the framework. The reason is that evaluating capability can be a big, time-consuming task. If you don’t have a clear objective in mind to evaluate your capabilities against, you could end up spending a lot of time looking for capabilities that aren’t actually needed. Defining the goals and KPIs keeps us focused on answering whether we have the right data required to target specific users and, if so, whether this data is accessible on the site for us to use in testing.

First set the goals you would like to achieve, then evaluate whether they are achievable and how. Don’t decide what is possible first and then shoehorn in a goal that fits.

3. Identify and prioritise audiences

The third step is the big one: this is where you identify and prioritise your audiences or audience groups for your personalisation project.

How do you know who you should target? What matters most here is that your audience is meaningful.

A meaningful audience is one that is identifiable, impactful and shows distinct behaviour. This means that each audience needs a clear profile that defines how a user in that group is identified and targeted. Audiences need enough volume and value to be worth the effort and users should behave differently enough to merit a personalised experience.
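To make the three criteria above concrete, here is a minimal sketch of how candidate audiences could be rated and ranked before committing effort to any of them. The audience names, the 1–5 ratings, and the equal weighting are illustrative assumptions, not real client data or Conversion’s actual scoring model:

```python
# Rate each candidate audience 1-5 against the three "meaningful audience"
# criteria, then rank them so the strongest candidates are tackled first.

def audience_score(identifiable: int, impactful: int, distinct: int) -> float:
    """Average the three criteria (each rated 1-5) into a priority score."""
    return (identifiable + impactful + distinct) / 3

candidates = {
    "repeat customers": audience_score(identifiable=5, impactful=4, distinct=4),
    "first-time visitors": audience_score(identifiable=5, impactful=3, distinct=2),
    "high-AOV shoppers": audience_score(identifiable=3, impactful=5, distinct=4),
}

# Highest-scoring audiences come first in the personalisation roadmap.
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```

Even a rough scoring exercise like this forces a team to justify why an audience is identifiable, impactful, and behaviourally distinct before building anything for it.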

4. Experiment 

This is the last step! Now that we have our audiences defined, each audience can be treated as a conversion optimisation project, where we look to understand the key conversion levers that influence each audience’s behaviour, and then experiment on them.

Realistically, each organisation will have more than one goal and KPI. We gathered from our event that it wasn’t only the number of orders and amount of revenue that were potential metrics for personalisation projects, but the number of customers that visit the store, or the number of driver downloads on your support site could also be worthwhile.

What should you do next? 

Now that we have a process tailored to personalisation, we can all start straight away, right?

Well, this depends on where your organisation sits on the experimentation and conversion optimisation maturity model. Personalisation requires a deeper understanding of your users than A/B testing does, and should only be approached if you have already reached the highest levels of experimentation maturity.

If you are just getting started with experimentation, we would recommend you first focus on gaining insights into your users and maximising the gains you can achieve from general experimentation and conversion rate optimisation. Personalisation is a long-term investment, so if your organisation isn’t ready today, positioning yourself on the maturity model will help you to plan the steps you need to take to get there.

If your company lives and breathes experimentation, and you are considering optimising conversion further by increasing the relevance of customer experiences through personalisation, it is crucial that you take the time to integrate it into your wider digital strategy. Get support from the business, as it is likely that you will meet similar challenges to the ones we have heard from clients that are already doing personalisation: lack of resources, difficulty in proving the value of personalisation, and internal political issues (e.g. crossover between departments and markets). 

Overall, we are extremely proud to have organised our first independent event, and glad to know that everyone who attended left having learnt something new and, we are convinced, with plenty of ideas to take back to the office.

Looking to develop your approach to personalisation? If you have a question about how we can help you, then please do get in touch with us.

The Optimizely Customer Workshop: How do the UK’s biggest brands approach experimentation?

The Optimizely Customer Workshop, hosted by Phil Nayna (Enterprise Account Executive at Optimizely) and Stephen Pavlovich (Founder/CEO of Conversion), brought together representatives from some of the UK’s biggest brands to share their thoughts and insights on Conversion Rate Optimisation (CRO). The workshop took shape in the form of a roundtable where talk topics included: “Building a lean testing programme”, “Applying testing to business challenges” and, the buzzword of the moment, “Personalisation”.


Building a lean testing programme

Experience in testing and experimentation amongst attendees in the room ranged from businesses who were just starting out, to those that had already produced mature testing programmes. This range of experience provided the basis for a profound discussion. For UK brands just starting their testing, it was emphasised that obtaining buy-in from stakeholders was key to building a testing programme within their companies. For brands with more testing experience, the biggest challenge in building this lean programme came with shifting their culture. Key to adopting a testing culture is acknowledging – and leveraging – the focus on short-term testing and validation over long-term planning. That’s why the attendees all agreed that a short-term iterative roadmap is far better than a long-term rigid roadmap.

Here at Conversion, experimentation is at the heart of everything we do and who we are. We believe that building a lean testing programme and cultivating a testing culture relies on two key factors: education and sharing. Educating your employees to understand your philosophy on experimentation and its benefits is key. This allows your employees to view experimentation as far more than just the potential value it yields with winning tests. At Conversion we value education highly. We run our own CRO training programme for new associate consultants that educates them and allows them to think creatively and with ambition when it comes to experimentation and CRO. When sharing experiment results with clients, it’s crucial not just to share what was tested and what the results were, but more importantly why we tested it and what it can teach us about their users. This means that with every experiment, we learn more about their users, allowing us to refine and improve our testing strategy – while delivering measurable uplift.


Applying testing to business challenges (prioritisation of your testing roadmap)

Strategies for prioritising testing roadmaps varied extensively within the workshop, with all brands favouring a different approach or primary metric. One major UK supermarket brand stated that their approach was very data-driven, something we value highly at Conversion. They prioritised ease of implementation, lack of organisational friction in getting the test launched, the potential impact of the test, and the data or evidence supporting the hypothesis. Other primary metrics included cost impacts: one UK brand, lacking development resource, favoured ease of implementation as a priority, as it allowed them to test despite this barrier.  

At Conversion, we believe that the data driving a test is most important when prioritising our tests. This data informs us of the impact that the test is likely to have. Secondary factors, such as the ease of building the test and getting sign-off – as well as the other tests and hypotheses we have running in parallel – allow us to see how and when this test fits into our roadmap. However, it is important to note that prioritisation can be limited. There are finite swimlanes to test and finite resources, meaning prioritisation and planning have to be coherent. Understanding that testing roadmaps have to be flexible and adaptive is key. This allows us to easily change our roadmap according to the performance of previous tests and as our understanding of users improves.
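The weighting described above – evidence first, with impact and ease as secondary factors – can be sketched as a simple scoring model. The weights, the 1–5 ratings, and the example test names are illustrative assumptions for the sketch, not Conversion’s actual prioritisation formula:

```python
# Weighted prioritisation of test ideas: the evidence behind a test counts
# most, with expected impact and ease of build/sign-off as secondary factors.

WEIGHTS = {"evidence": 0.5, "impact": 0.3, "ease": 0.2}

def priority(evidence: int, impact: int, ease: int) -> float:
    """Weighted score for a test idea; each factor is rated 1-5."""
    return (WEIGHTS["evidence"] * evidence
            + WEIGHTS["impact"] * impact
            + WEIGHTS["ease"] * ease)

roadmap = [
    ("Simplify checkout form", priority(evidence=5, impact=4, ease=3)),
    ("New homepage hero", priority(evidence=2, impact=4, ease=5)),
    ("Add delivery estimates", priority(evidence=4, impact=3, ease=4)),
]

# Re-sorting as new results and insights arrive is what keeps the
# roadmap flexible and adaptive rather than fixed months in advance.
for name, score in sorted(roadmap, key=lambda t: t[1], reverse=True):
    print(f"{score:.1f}  {name}")
```

A team that lacked development resource could simply raise the `ease` weight, which mirrors how the brands at the workshop adapted their prioritisation to their own constraints.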



Personalisation

Personalisation is the buzzword of the moment in CRO, and this topic divided our workshop audience. Some UK brands stated that they had banned the word completely; instead, they refer to creating more relevant customer experiences and concentrating on more targeted journeys. All representatives agreed that their personalisation journey was at its early stages, believing it was important to keep personalisation simple and start getting tests live in order to gain momentum. However, we believe that this could increase the risk of companies starting personalisation too early and, as a result, missing valuable opportunities to increase their conversion rate with all-audience A/B testing. With personalisation being such a hot topic, it is critical that companies take the time to integrate it into their wider digital strategies rather than implementing it without consideration for other key areas of CRO.

At Conversion, we view personalisation as optimising conversion by increasing the relevance of experiences for specific audiences. Although we see personalisation as a great and exciting new opportunity to test, we believe it is important to assess carefully when to start personalisation. By its nature, it forces you to focus on a subset of users, potentially diminishing the impact of experiments as well as complicating future all-audience experiments.


The Optimizely Customer Workshop was the perfect setting for valuable discussions and an insight into how the UK’s biggest brands approach experimentation. From the workshop the key takeaways were:

  • Education in CRO needs to be more highly valued within businesses in order to promote a shift in testing culture.
  • Visibility of testing programmes via sharing of content allows employees to understand the value of testing beyond just the potential value of winning tests.
  • Roadmaps should be as flexible and adaptive as possible to allow for test and learn iterations to occur.
  • Personalisation should be undertaken when its potential overtakes all-audience testing and should integrate with – rather than replace – typical A/B testing for CRO.