The Conversion Methodology: an internal training case study

Frazer Mawson

Since opening our doors in 2007, we’ve used our experimentation methodology to generate more than $2 billion in added revenue for our clients.

So far, we’ve only ever talked about isolated parts of this methodology publicly – things like the Levers Framework, Mixed Methods Experimentation, and our Experiment Repository.

In this post, for the first time ever, we’re going to show how these various puzzle pieces fit together to form a coherent system that drives consistent value for our clients.

To do this, we’re going to use an internal training case study to focus on a series of experiments that we ran for one of our clients, a leading iGaming company looking to increase sign-ups for its subscription offering.

Our hope is that by blending theory and practice, we can offer insight into how our methodology works for us and our clients – and how it can do the same for your experimentation program too.

Note: this case study is going to be a good deal more granular than our typical stuff. To protect our client’s confidentiality, we’ve chosen to disguise some of the superficial details of the case, but everything relevant to the way we approached these problems has been preserved.

Initial Research

As with most tests we run, the first test in this sequence was initially supported by a number of insights from our research.

When we collect research observations, we aim to cluster our specific observations into what we term insights: the overarching themes under which observations can be grouped. In this case, for example, the observations we’d collected could all be grouped under the insight ‘users lack information about how the product works.’

From here, we then seek to map each of the insights we’ve collected to a specific lever from our Levers™ Framework.

We won’t go into detail about the Levers Framework now (here’s a link to our blog, webinar, and white paper), but to explain briefly: Levers are the user experience features that influence user behavior, and the Levers Framework is our comprehensive, standardized taxonomy of the huge range of Levers that exist.

For the purposes of this example, we can think of the relationship between Levers and Insights as follows:

Insight = the problem identified by our research; Lever = how we intend to solve it.
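
To make this concrete, here’s a minimal, purely illustrative sketch of how that observation → insight → lever relationship might look in code. The class, field names, and example observations below are invented for this post, not our actual data model:

```python
# Illustrative sketch only: our shorthand for the
# observation -> insight -> lever relationship.
# The observation texts are invented examples.
from dataclasses import dataclass, field

@dataclass
class Insight:
    theme: str                  # the problem identified by our research
    lever: str                  # how we intend to solve it
    observations: list[str] = field(default_factory=list)

insight = Insight(
    theme="Users lack information about how the product works",
    lever="Comprehension > Product Understanding",
    observations=[
        "Session recordings show users toggling between FAQs and sign-up",
        "Survey respondents ask what the subscription actually includes",
    ],
)
print(f"{len(insight.observations)} observations -> 1 insight -> lever: {insight.lever}")
```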

So, returning to our case study, in this instance we can ask ourselves the following question:

‘Which lever do we need to pull in order to solve the problem that “users lack information about how the product works”?’

Clearly, to begin with, our problem is one relating to Comprehension – by improving the user’s comprehension in some way, we can alleviate the issue. Drilling deeper, it becomes clear that this specific problem relates to Product Understanding – by improving the user’s understanding of the product, our problem – in theory – should be mitigated.

Based on this research, we therefore hypothesized that by optimizing Product Understanding, we could bring about a positive impact on our primary KPI: sign-ups.

(Note: we structure our hypotheses in a very specific way that ties in closely with our Levers Framework. Click here to learn more about our hypothesis framework.)

At this point, it’s important to emphasize that for us a hypothesis is not an experiment execution.

A hypothesis is a theory we want to prove; an experiment execution is what we will change on the website in order to test the hypothesis. Put another way, there are many different ways that we can validate a hypothesis – and some are better than others.
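
To illustrate the distinction, here’s a simple sketch: one hypothesis, expressed against a lever and a KPI, alongside several candidate executions that could test it. The structure below is illustrative shorthand rather than our formal hypothesis framework (see the link above):

```python
# Rough illustration: one hypothesis, several candidate executions.
# Field names and wording are shorthand invented for this post.
hypothesis = {
    "lever": "Product Understanding",
    "statement": "Improving users' understanding of the product will lift sign-ups",
    "primary_kpi": "sign-ups",
}

candidate_executions = [
    "How-it-works interstitial at the start of the sign-up funnel",
    "Explainer copy inline on the first funnel step",
    "Rewritten 'How it Works' page using text and icons",
]

# If one execution loses, that alone doesn't falsify the hypothesis;
# it may simply be a suboptimal way of pulling the lever.
```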

Having identified Product Understanding as a high-priority Lever, the next step in our process was to define and refine our experiment execution.

To do this, we conducted a competitor analysis to identify how our client’s competitors were approaching the Product Understanding Lever themselves. Here we discovered that many competitors were using ‘how-it-works’ interstitials at the start of their sign-up funnels as a means of conveying important product information.

Here at Conversion, we don’t believe in blindly copying our clients’ competitors – but we do believe in combining data from competitor analysis with other research methods to inform our overall approach to a problem.

Having unearthed this insight from our competitor analysis, we next wanted to understand how effective interstitials are generally at driving impact – and what specific interstitial executions tend to perform the best.

To do this, we looked to our Experiment Repository.

As some of you may know, here at Conversion we tag and store every experiment we’ve ever run in a centralized repository. This repository is now made up of more than 10,000 experiments, and it allows us to unearth macro-trends across clients and industries over time.

(Incidentally, this is also the data that we trained our machine-learning-assisted prioritization tool, Confidence AI, on. More on this here.)

From our database research, we discovered that across industries, interstitials have a win rate of 45% (n=45). This is significantly higher than the average win rate we see for components, adding further evidence in favor of a Product Understanding interstitial.
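
To show how a repository figure like this can be sanity-checked, here’s a small sketch that puts a Wilson confidence interval around the win rate. The 20/45 split is inferred from the figures quoted above (45% of 45 is roughly 20 wins), and the ~20% portfolio average in the comments is a placeholder rather than our published number:

```python
import math

def wilson_interval(wins: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial win rate."""
    p = wins / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 45% of n=45 implies roughly 20 winning experiments.
low, high = wilson_interval(wins=20, n=45)
print(f"win rate ~45%, 95% CI ~[{low:.0%}, {high:.0%}]")  # ~[31%, 59%]
# Even the lower bound clears a (placeholder) ~20% portfolio-average
# win rate - the kind of check that supports the claim above.
```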

By drilling deeper into our experiment database, we were also able to identify a number of additional macro-level insights that further supported this execution.

To give one example: one of the main concerns our clients generally have around adding additional steps to their funnels is that doing so will negatively impact conversion rates. We were able to show that if executed effectively, comprehension-related interstitials at the start of funnels actually often work to weed out low-intent traffic early on, while driving additional conversions overall.
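
Here’s a toy worked example of that mechanic, with invented numbers: the interstitial filters out some low-intent entrants, but the better-informed users who remain convert at a high enough rate to lift sign-ups overall:

```python
# Toy numbers, invented purely to illustrate the mechanic described above.
visitors = 10_000

# Without the interstitial: more users enter the funnel, but many are
# low-intent and drop out downstream.
entered_control = int(visitors * 0.30)          # 3,000 funnel entries
signups_control = int(entered_control * 0.10)   # 300 sign-ups

# With the interstitial: fewer (but better-informed) users enter,
# and they convert at a higher rate.
entered_variant = int(visitors * 0.24)          # 2,400 funnel entries
signups_variant = int(entered_variant * 0.14)   # 336 sign-ups

print(signups_control, signups_variant)  # 300 vs 336 -> +12% overall
```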

Interstitial Experiment #1

On the basis of all of this evidence, we decided to run a how-it-works interstitial at the front of our client’s sign-up funnel. This interstitial contained basic content about how the subscription works.

Unfortunately, when we finally ran the test, the result was inconclusive – albeit with a positive trend in sign-ups, our primary KPI.

This was a surprise given the amount of evidence we’d collected in support of this concept – but we weren’t willing to give up on this Lever quite yet.

Generally speaking, a test can fail – or be inconclusive – for one of two broad reasons:
1. The Lever is ineffective
2. The Lever is right, but the execution is suboptimal

Before we’re willing to disregard a Lever, we will usually run several tests on that Lever to ensure that the Lever itself – rather than the specific execution – is at fault for the result.

In this instance, we had trouble intuitively understanding the why behind the result – so we decided to draw upon one of the most powerful tools at our disposal:

Mixed Methods Experimentation.

Mixed Methods

Mixed Methods Experimentation is a core part of our methodology here at Conversion.

We’re not going to go into too much detail on the theory behind Mixed Methods now (read more here), but the main point is this:

Every research method – including experimentation – has its strengths and its weaknesses. By combining these different methods in insightful ways, we can leverage their individual strengths and offset their individual weaknesses. This ultimately allows us to unearth novel insights that would have been inaccessible with any single research method taken alone.

In this instance, our interstitial experiment gave us strong data about what had happened with our experiment – it was inconclusive – but it did not tell us why this result had come about.

To fill this gap in our understanding, we decided to supplement the quantitative data from this test with the qualitative data from a user research study.

The study unearthed a number of key insights – most importantly, a much clearer picture of the questions users actually wanted answered before signing up.

With all of this research in hand, we were more confident than ever that Product Understanding was an important Lever on this website.

We also had some good evidence as to why the first interstitial had failed to produce a positive result: namely, that it was too generic and had answered the wrong kinds of questions.

By mixing research methods, we had been able to understand the why behind our result – and we were now in a much stronger position to develop the next test iteration in this sequence.

Interstitial Experiment #2

For our second experiment on the Product Understanding Lever, we opted to recreate the initial interstitial – but to use the insights unearthed during user testing to refine the execution.

Specifically, we completely reimagined the copy, zeroing in on the questions that had recurred most frequently among our user testing participants.

We were eager to re-run this interstitial on the sign-up funnel, but because the relevant swimlanes (the mutually exclusive traffic allocations we use to run tests in parallel without interference) were blocked at the time, we were unable to do so.

We could have waited for a month for the swimlanes to clear, but a large part of our philosophy here at Conversion is about gathering high-quality insights at speed. This allows us to explore different avenues of testing at minimal cost and effort, with the ultimate goal of quickly identifying high-value levers that we can iterate on to drive future wins for our clients. For more on this aspect of our methodology, click here.

Rather than wait around for the swimlanes to become unblocked, we therefore chose to turn our attention to another area of the site:

During user testing, many users complained about the format of the content on the ‘How it Works’ page. Specifically, they expressed a dislike of the existing video content and suggested that they would prefer the content to be presented concisely as text and icons.

With this insight in hand, we decided to run the newly developed interstitial on the ‘How it Works’ page instead. This is a lower-traffic page with higher MDEs (minimum detectable effects), but we were hopeful that it would at least give us some insight into the performance of the new interstitial relative to the existing ‘How it Works’ page.
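
As a rough sketch of why lower traffic translates into a higher MDE, here’s a standard normal-approximation calculation for a two-proportion test. The baseline rate and sample sizes are hypothetical, chosen only to show the scaling:

```python
import math

def relative_mde(baseline: float, n_per_arm: int) -> float:
    """Approximate relative MDE for a two-proportion test
    (normal approximation, alpha=0.05 two-sided, 80% power)."""
    z_alpha, z_beta = 1.96, 0.84
    se = math.sqrt(2 * baseline * (1 - baseline) / n_per_arm)
    return (z_alpha + z_beta) * se / baseline

# Hypothetical traffic levels over the same test window:
print(f"sign-up funnel:      {relative_mde(0.10, 50_000):.1%}")  # ~5.3%
print(f"'How it Works' page: {relative_mde(0.10, 5_000):.1%}")   # ~16.8%
```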

With this test, when users landed on the ‘How it Works’ page, a full-page pop-up would appear with our newly created interstitial. From there, users had the option to enter the sign-up flow or to proceed to the main ‘How it Works’ page.

We ran this new version of the interstitial against the standard version of the ‘How it Works’ page (i.e. without an interstitial), and the interstitial resulted in a statistically significant 5.5% increase in AOV (average order value).

While our sign-up KPI still hadn’t reached significance, it was trending strongly upwards – and we’d also generated a completely novel insight: namely, that by explaining the subscription options better upfront (a source of confusion identified in our user testing), we could convince users to spend more money with our client.

Though the sign-up data still wasn’t conclusive, taken together these findings provided strong evidence that we really were onto something here with the Product Understanding lever.

Interstitial Experiment #3

Given all of the research conducted up to this point – including the positive signs shown in the previous interstitial test – we were growing increasingly confident that our newly designed interstitial was an effective execution of the Product Understanding lever on this site.

As a result, we were extremely eager to run this interstitial on the sign-up flow – so as soon as the swimlanes were clear, this is exactly what we did.

As in the first test presented above, users who entered the sign-up funnel were presented with the how-it-works interstitial ahead of the first step in the flow.

We then ran this version of the funnel against the original version (without the interstitial), and the new experience resulted in a strong 3.7% uplift in sign-ups, our primary metric.
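
For readers curious what ‘reaching significance’ means mechanically, here’s a minimal two-proportion z-test. The counts are hypothetical, shaped to mimic a result of this size rather than drawn from the client’s real data:

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pool * (1 - pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts shaped like the result above (~3.7% relative uplift):
z = two_proportion_z(conv_a=5_400, n_a=60_000,   # control: 9.00% sign-up rate
                     conv_b=5_600, n_b=60_000)   # variant: 9.33% (+3.7%)
print(f"z = {z:.2f}")  # 2.00 > 1.96 -> significant at the 5% level
```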

Finally, Product Understanding had been validated beyond any shadow of a doubt.

Next steps: Exploiting Product Understanding

These results were extremely pleasing to our client, but here at Conversion, our approach is about continuous iteration – not one-off, isolated wins.

Put another way, when we find something that works, we double down on it – hard.

(In fact, with one of our clients, we’ve run 46 iterations on a single Lever, and it still delivers results to this day!)

The next test in the series involved moving this newly optimized interstitial to another funnel on the website, which resulted in a further 5.4% uplift in sign-ups.

We then sought to apply the Product Understanding Lever within the funnel itself, providing additional information that our user testing study had flagged as lacking. This too resulted in an uplift in sign-ups – this time of 4%.

As things stand, we have a number of additional Product Understanding experiments in various stages of development, including one that involves a complete redesign of the ‘How it Works’ page based on insights from our user testing and experiment data.

We will continue pulling this lever until it stops delivering results for our client – but it’s important to make one final point about our methodology here:

From the way we’ve written this post, you’d be forgiven for thinking that our entire process is about identifying a single effective lever and then single-mindedly exploiting that lever at the expense of all else.

This isn’t quite right.

Granted, we do look to identify and exploit viable Levers, but we also allocate a good deal of our energy to exploring other levers that have the potential to deliver even more value in the future.

To give an example: while the experiments presented above were running, we were concurrently testing a wide range of other levers, including the Value Statement Lever. Value Statement turned out to be another powerful lever on this website, and in later experiments we were able to combine the Product Understanding and Value Statement levers to create another iteration of the interstitial that drove an even larger uplift in sign-ups.

Ultimately, by diversifying our experiments across different levers, we’re able to ensure we’re never overly invested in any single lever. This allows us to achieve an optimal balance between exploration and exploitation, which means we can consistently deliver results for our clients both now and well into the future.
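
We haven’t spelled out exactly how we strike that balance, but a simple epsilon-greedy policy is one way to picture it: most testing capacity exploits the strongest known lever, while a reserved share keeps exploring. The win rates below are hypothetical, and only Product Understanding and Value Statement come from this case study:

```python
import random

# Hypothetical win rates per lever; "Social Proof" and "Urgency"
# are invented for illustration.
win_rates = {
    "Product Understanding": 0.45,
    "Value Statement": 0.30,
    "Social Proof": 0.20,
    "Urgency": 0.15,
}
EPSILON = 0.3  # share of testing capacity reserved for exploration

def next_lever() -> str:
    if random.random() < EPSILON:
        return random.choice(list(win_rates))  # explore another lever
    return max(win_rates, key=win_rates.get)   # exploit the best so far

print([next_lever() for _ in range(10)])
```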
