Mixed Methods Experimentation

Frazer Mawson

Quantitative research methods give us robust data about what users are doing on a website, but they offer little indication as to why users behave as they do.

Qualitative research methods, on the other hand, help explain why specific users behave as they do, but they lack the generalizability needed for understanding user behavior in the aggregate.

Mixed Methods Experimentation is a way of fusing data from both of these methodology types to generate deeper user insights that unlock novel avenues of testing.

In this post, we’re going to explain what Mixed Methods Experimentation is and how you can use it to drive immense value for your own experimentation program.

What is Mixed Methods Experimentation?

As optimizers, our primary goal is to confidently answer business questions using data. To support us in this endeavor, we have access to a broad array of tools and methods. Unfortunately, in isolation, every research method is imperfect in one way or another.

At a high level, Mixed Methods Experimentation is a technique for strategically combining different research methods to overcome the limitations of each individual methodology taken on its own. This process allows us to ‘triangulate on the truth’ and unearth profound insights about our clients’ website visitors – insights which we can use to drive immense business value.

(If you’d like to see typical examples of Mixed Methods Experimentation in practice, feel free to skip ahead to the next section.)

To better understand Mixed Methods Experimentation and the rationale behind it, consider the graph below, which plots a range of research methods along two dimensions: 1. From most attitudinal to most behavioral; 2. From most quantitative to most qualitative.

Different research methods plotted from most to least behavioral and from most to least qualitative

Note: here when we say a research method has a behavioral emphasis, we mean that it focuses on how users actually behave rather than what they say. When we say that a research method is attitudinal, we mean the opposite.

To give an example, a/b tests have a behavioral emphasis because the data they generate is about the observed behavior of users, e.g. did they buy a product? Did they proceed to the next step in the journey? etc. Contextual interviews, on the other hand, have an attitudinal emphasis because they rely solely on the verbal responses of users to specific questions.

If we focus our research on methodologies in the top left quadrant of this graph, e.g. a/b testing or analytics, we gain a strong understanding of what users are doing on a website. What’s more, the large sample sizes associated with these methodologies mean we can be confident that the behaviors we are observing really do represent the broader population of users that we’re trying to optimize for.

Unfortunately, due to their behavioral focus, these research methodologies give us very little insight into the why behind these observed behaviors – why are these users behaving as they are? Without such information, we are forced to draw our own conclusions about the meaning of these results, which can introduce bias.

If we focus instead on activities in the bottom right quadrant of this graph, these methods allow us to unearth deep insights about the motivations and contexts of individual users. This can be extremely useful for understanding the why behind our results.

Unfortunately, as is well understood in behavioral science, what people say does not always translate into what they actually do. A user may say they like a specific component on a page, but when they actually use the website, we may find they ignore it altogether.

Equally important, the smaller sample sizes of qualitative research methods mean it is not always possible to generalize their findings to the user population as a whole.
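
For intuition, here's a minimal sketch in Python (standard library only, with an illustrative preference rate rather than real study data) showing how wide the uncertainty around a qualitative finding is at typical UX-research sample sizes, and how it shrinks as the sample grows:

```python
from statistics import NormalDist

def margin_of_error(p: float, n: int, confidence: float = 0.95) -> float:
    """Half-width of a normal-approximation confidence interval
    for an observed proportion p measured on n participants."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * (p * (1 - p) / n) ** 0.5

# Suppose 60% of participants voice a given preference:
for n in (5, 30, 1000, 50000):
    print(f"n = {n:>6}: 60% ± {margin_of_error(0.6, n):.1%}")
# n =      5: 60% ± 42.9%
# n =     30: 60% ± 17.5%
# n =   1000: 60% ± 3.0%
# n =  50000: 60% ± 0.4%
```

With five participants, a '60% preference' is compatible with almost anything; with fifty thousand, it pins the population down tightly. This is why qualitative findings are best treated as hypotheses to test quantitatively, not as conclusions.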

Research methods plotted based on those that explain what is happening vs. why it is happening

As this discussion hopefully demonstrates, every research method type has its strengths and its weaknesses. Mixed Methods Experimentation is about strategically combining different research methodologies with the goal of preserving each methodology's strengths while neutralizing its weaknesses.

By gathering different types of data from different types of methodologies, Mixed Methods Experimentation allows us to gain a deep understanding of both the what and the why behind user behavior, ultimately giving us a 360-degree view from which to make intelligent business decisions.

This is essential if we want to truly understand a website’s users and how we can engineer user experiences that meet their unique needs and preferences.

Using mixed methods experimentation to triangulate on the truth

4 Examples of Mixed Methods Experimentation in action

Mixed Methods Experimentation is a highly versatile technique with a broad range of applications.

In the last section, we gave an overview of the broad reasoning behind the technique, explaining what it is and why it works. Here, we show how the technique works in practice, sharing four of the most typical use-cases in which Mixed Methods Experimentation generates value.

1. Understanding test results: Test hypothesis > a/b test > user experience research

The most basic use-case for Mixed Methods Experimentation relates to interpreting test results.

A/B tests are the gold standard for generating high-quality data about the causal relationship between two variables. Unfortunately, as alluded to above, they offer no explanation as to why these observed causal relationships exist.

For example, an a/b test may tell us that a newly introduced reviews carousel increases our conversion rate by 10%, but it doesn’t tell us why this relationship exists. Is it because users distrust the brand? Is it because users distrust the industry more generally? Is it because the website doesn’t seem credible? etc.
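
To make the quantitative half concrete, here's a minimal sketch in Python (standard library only, with hypothetical visitor and conversion counts rather than real client data) of the kind of two-proportion z-test commonly used to check whether an uplift like this is statistically significant:

```python
from statistics import NormalDist

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Pooled two-proportion z-test comparing control (A) and variant (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical: 10,000 visitors per arm, 10% vs. 11% conversion
z, p = two_proportion_ztest(conv_a=1000, n_a=10000, conv_b=1100, n_b=10000)
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 2.31, p = 0.021
```

Note what the test output contains: a direction and a p-value. Nothing in it speaks to trust, credibility, or any other explanation; that's the gap qualitative research fills.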

By using complementary qualitative research methods to understand the results of our tests, we can gain far deeper insight into the why behind these results. This puts us in a much stronger position to make informed decisions regarding the next steps for our experimentation program.

To see what this looks like in practice, consider this example, taken from work we’ve done with a global technology corporation that sells hardware and software solutions.

At the beginning of Covid, one of the client's senior stakeholders wanted to replace all focused product imagery on the PLP (product listing page) with lifestyle imagery.

This seemed like a sensible hypothesis – so we tested it.

We ran the original product-focused imagery against a range of different types of lifestyle imagery. Unfortunately, every time we did this, the lifestyle imagery tanked the conversion rate.

(Note: the image below shows shoes, but in reality the client was focused mainly on selling laptops.)

In this situation, a/b tests gave us good data about the causal relationship between two variables – namely, lifestyle imagery and the conversion rate. Unfortunately, they offered little insight into why this result had occurred.

Looking to delve deeper and establish an explanation for this result, we decided to run a UX research study on both versions of this page.

In line with the a/b test results, UX research showed that the product-focused imagery was strongly preferred to the lifestyle imagery. Unlike the a/b test, however, UX research was also able to shed light on why this result had occurred.

By mixing a/b testing with complementary UX research, we were finally able to explain our initial experiment result. Without Mixed Methods Experimentation, this would have been extremely difficult – which would, in turn, have made it equally difficult to work out how to iterate on the result.

2. Diagnosis and prioritization: User experience research > test hypothesis > a/b test

Experimentation at its core is about testing hypotheses – but how do we know which hypotheses to test?

Mixed Methods Experimentation is an invaluable technique for both identifying hypotheses worth testing and for prioritizing the hypotheses we already have.

Consider, for example, the generic product description page shown below.

If we were optimizing this page, where would we start? Which hypotheses would be worth testing first?

One option might be to use our expert intuitions or our frameworks to create and prioritize our hypotheses – but Mixed Methods Experimentation offers a better way:

By using qualitative research methods to understand how a small sample of real-world users interact with a page, we can begin to diagnose specific issues that might be blocking conversion. Equally important, we can also identify what might be working well on a page – with an eye to doing more of it.

With this data in hand, we’re then able to generate an extensive list of hypotheses – supported by our research – that we can test. And we’re also able to prioritize our hypotheses based on those that are most strongly supported by the research.

As our own internal data shows (see chart below), test hypotheses that are supported by qualitative research methods have a significantly higher success rate than those that aren’t.

By strategically using qualitative research methodologies to generate and prioritize tests, we’re putting ourselves in the strongest possible position to create successful tests that move the needle for our clients.
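
As one illustration, here's a hypothetical sketch in Python (the scoring scheme is illustrative, not an industry standard) of an ICE-style prioritization in which the confidence dimension is driven by how strongly qualitative research supports each hypothesis:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    impact: int      # 1-10: expected effect on the KPI if the test wins
    confidence: int  # 1-10: how strongly qualitative research supports it
    ease: int        # 1-10: how cheap the execution is to build and run

def ice_score(h: Hypothesis) -> float:
    """Simple ICE-style average; research-backed hypotheses score
    higher because their `confidence` rating is higher."""
    return (h.impact + h.confidence + h.ease) / 3

backlog = [
    Hypothesis("Surface delivery costs earlier", impact=7, confidence=9, ease=6),
    Hypothesis("Animate the add-to-cart button", impact=4, confidence=2, ease=9),
]
for h in sorted(backlog, key=ice_score, reverse=True):
    print(f"{ice_score(h):.1f}  {h.name}")
```

However the scores are weighted, the principle is the same: hypotheses grounded in observed user behavior rise to the top of the backlog.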

3. Informing key decisions: Test hypothesis > a/b test + user experience research

Experimentation is about more than simply optimizing funnels and boosting conversion rates; it’s about generating high-quality data to inform key decisions across all areas of a business.

The more important a decision, the higher the threshold for evidence, which is where Mixed Methods Experimentation comes in:

By mixing research methods, we're able to bring a variety of different data types to bear on any given decision. If all of these methods point toward the same course of action, then our client can pursue it with confidence.

Here’s an example, taken from our own work, of the way Mixed Methods Experimentation supports key decision making:

One of our clients was looking to completely overhaul the search experience on their website. They'd spoken to a number of search vendors, but before making a decision, they were keen to evaluate how each vendor's offering affected the user experience.

To begin, we helped the client a/b test these experiences to see how they each impacted key metrics. Alongside this a/b test, we also ran an extensive UX research study to understand the results of the test and to identify further potential avenues for optimization (regardless of the specific vendor they eventually went with).
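
For a sense of the quantitative side, here's one way an A/B/n comparison like this can be summarized, sketched in Python with entirely hypothetical numbers (this is an illustration using SciPy's chi-square test of independence, not the client's actual data or tooling):

```python
from scipy.stats import chi2_contingency

# Hypothetical conversions vs. non-conversions per search experience
observed = [
    [520, 9480],  # current search (control)
    [585, 9415],  # vendor A
    [555, 9445],  # vendor B
]
chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A significant result says the experiences differ on this metric;
# the UX study running alongside tells us *why*, and what to improve
# regardless of which vendor is chosen.
```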

Ultimately, by mixing methods, we were able to help our client make a much more informed decision, based on a variety of different evidence types, while also unearthing findings that led to further KPI uplifts in the future.

4. Execution sharpening: Test hypothesis > user experience research > a/b test

One of the most important – and most often neglected – steps in any experimentation process is the step from hypothesis to execution.

A hypothesis is a theory we want to confirm or refute.

An execution is what we will change on a website in order to test that hypothesis.

Our hypothesis, for example, might be ‘optimizing the Social Proof Lever will increase sales’, but there are many potential executions we could use to test it: adding a customer reviews carousel, surfacing star ratings on product listings, or highlighting how many people have recently purchased, to name just a few.

Unfortunately, not all executions are created equal. In fact, some tests fail solely because the execution was poor – even when the hypothesis itself was correct.

When we have a particularly elaborate a/b test, it is therefore extremely important that we create an execution that allows us to test our hypothesis optimally. If we don’t, we may find that we’ve spent huge amounts of time and energy building a flawed execution that doesn’t generate solid data about our hypothesis.

This is where Mixed Methods comes in:

By conducting qualitative research before we launch our test, Mixed Methods Experimentation allows us to check that our designs resonate with users and that our execution gives the hypothesis a fair test.

Having done this, we can then allocate resources to the test, safe in the knowledge that the execution has been validated with real users.

To see an example of this in practice, consider the images below:

Our client, a provider of storage solutions, wanted us to build a wizard to capture information about their customers’ current garages and their preferences regarding future garage purchases.

Before building out this functionality, we created a number of low-fidelity designs for the wizard, which we placed in front of real-life users to gain a better understanding of the best way to present certain questions.

Ultimately, this allowed us to iterate on these designs multiple times, progressively improving the execution before finally moving forward with the build of the wizard test.
