Strategy Archives | Conversion.com

Reactive or proactive: The best approach to iteration

Iterating on experiments is often reactive and conducted as an afterthought. A lot of time is spent producing a ‘perfect’ test, and if results are unexpected, iterations are run as a last hope to gain value from the time and effort spent on the test. But why try to execute the ‘perfect’ experiment in the first instance, postponing the opportunity to uncover learnings along the way, when you could run a minimum viable experiment and iterate on it?

Experimentation is run at varying levels of maturity (see our Maturity Model for more information on this); however, we see businesses time and time again getting stuck in the infant stages due to their focus on individual experiments. Teams waste time and resource trying to run one ‘perfect’ experiment when the core concept has not been validated.

To validate levers quickly without over-investing resource, we should ensure hypotheses are executed in their simplest form – the minimum viable experiment (MVE). From here, success of an MVE gives you the green light to test more complex implementations, while failure flags problems with the concept or execution early on.

A few years ago, we learnt the importance of this approach the hard way. Off the back of one hypothesis for an online real estate business – ‘Adding the ability to see properties on a map will help users find the right property and increase enquiries’ – we built a complete map view in Optimizely. A significant amount of resource was used, only to find out within the experiment that the map had no impact on user behaviour. What should we have done? Run an MVE requiring the minimum resource needed to test the concept. What would this have looked like? Perhaps a fake door test to gauge user demand for the map functionality.

This blog aims to give:

  • An understanding of the minimum viable approach to experimentation
  • A view of potential challenges and tips to overcome them
  • A clear overview of the benefits of MVEs

The minimum viable approach

A minimum viable experiment looks for the simplest way to run an experiment that validates the concept. This type of testing isn’t about designing ‘small tests’; it is about running specific, focused experiments that give you the clearest signal of whether or not the hypothesis is valid. Of course, it helps that MVEs are often small, so we can test quickly! It is important to challenge yourself by assessing every component of the test and its likelihood of impacting the way the user responds to the experiment. That way, you will use resource efficiently while still proving (or disproving) the validity of the concept. Running the minimum viable experiment allows you to validate your hypothesis without over-investing in levers that turn out to be ineffective.

If the MVE wins, then iterations can be run to find the optimal execution – gaining learnings along the way. If the test loses, you can look at the execution more thoroughly and determine whether poor execution impacted the test. If so, re-run the MVE with a better execution. If not, bin the hypothesis to avoid wasting resource on unfruitful concepts.

All hypotheses can be reduced to an MVE; see below a visual example of an MVE testing stream.

Potential challenges to MVEs and tips to overcome them

Although this approach is the most effective, it is not often fully understood, resulting in pushback from stakeholders. Stakeholders are invested in the website and, moreover, protective of their product. As a result, the expectation from experimentation is that a perfect solution to a problem will be tested, which could be implemented immediately should the test win. What is not considered, however, is the huge amount of resource this would require without any validation that the hypothesis was correct or that the style of execution was optimal.

To overcome this challenge, we work with experimentation, marketing and product teams to challenge assumptions around MVEs. This education piece is pivotal for stakeholder buy-in. Over the last 9 months, we have been running experimentation workshops with one of the largest online takeaway businesses in Europe, and a huge focus of these sessions has been on the minimum viable experiment.

Overview of the benefits of MVEs

Minimum viable experiments have a multitude of benefits. Here, we aim to summarise a few of these:

Efficient experiments

The minimum viable experiment of a concept allows you to utilise the minimum amount of resource required to see if a concept is worth pursuing further or not.

Validity of the hypothesis is clear

Executing experiments in their simplest form ensures the impact of the changes is evident. As a result, concluding the validity of the experiment is uncomplicated.

Explore bigger solutions to achieve the best possible outcome

Once the MVE has been proven, this justifies investing further resource in exploring bigger solutions. Iterating on experiments allows you to refine solutions to achieve the best possible execution of the hypothesis.

Key takeaways

  • A minimum viable experiment involves testing a hypothesis in its simplest form, allowing you to validate concepts early on and optimise the execution via iterations.
  • Pushback on MVEs is usually due to a lack of awareness of the process and the benefits it yields. Educate teams to show how effective this type of testing is, not only in reaching the best possible final execution for tests but also in using resource efficiently.
  • The main benefit of the minimum viable approach is that you spend time and resource on levers that impact your KPIs.

SCORE: A dynamic prioritisation framework for AB tests from Conversion.com

Why prioritise?

With experimentation and conversion optimisation, there is never a shortage of ideas to test.

In other industries, specialist knowledge is often a prerequisite. It’s hard to have an opinion on electrical engineering or pharmaceutical research without prior knowledge.

But with experimentation everyone can have an opinion: marketing, product, engineering, customer service – even our customers themselves. They can all suggest ideas to improve the website’s performance.

The challenge is how you prioritise the right experiments.

There’s a finite number of experiments that we can run – we’re limited both by the resource to create and analyse experiments, and also the traffic to run experiments on.

Prioritisation is the method to maximise impact with an efficient use of resources.

Where most prioritisation frameworks fall down

There are multiple prioritisation frameworks – PIE (from WiderFunnel), PXL (from ConversionXL), and more recently the native functionality within Optimizely’s Program Management.

Each framework has a broadly consistent approach: prioritisation is based on a combination of (a) the value of the experiment, and (b) the ease of execution.

WiderFunnel’s PIE framework uses three factors, scored out of 10:

  • potential (how much improvement can be made on the pages?)
  • importance (how valuable is the traffic to the page?) and
  • ease (how complicated will the test be to implement?)

This is effective: it ensures that you consider both the potential uplift from the experiment alongside the importance of the page. (A high impact experiment on a low value page should rightfully be deprioritised.)

But it can be challenging to score these factors objectively – especially when considering an experiment’s potential.

ConversionXL’s PXL framework looks to address this. Rather than asking you to rate an experiment out of 10, it asks a series of yes/no questions to objectively assess its value and ease.

Experiments that are above the fold and based on quantitative and qualitative research will rightly score higher than a subtle experiment based on gut instinct alone.

This approach works well: it rewards the right behaviour (and can even help drive the right behaviour in the future, as users submit concepts that are more likely to score well).

But while it improves the objectivity in scoring, it lacks two fundamental elements:

  1. It accounts for page traffic, but not page value. So an above-the-fold research-backed experiment on a zero-value page could be prioritised above experiments that could have a much higher impact. (We used to work with a university in the US whose highest-traffic page was a blog post on ramen noodle recipes. It generated zero leads – but the PXL framework wouldn’t account for that automatically.)
  2. While it values qualitative and quantitative research, it doesn’t appear to include data from the previous experiments in its prioritisation. We know that qualitative research can sometimes be misleading (customers may say one thing and do something completely different). That’s why we validate our research with experimentation. But in this model, its focus is purely on research – whereas a conclusive experiment is the best indicator of a future iteration’s success.

Moreover, most frameworks struggle to adapt as an experimentation programme develops. They tend to work in isolation at the start – prioritising a long backlog of concepts – but over time, real life gets in the way.

Competing business goals, fire-fighting and resource challenges mean that the prioritisation becomes out-of-date – and you’re left with a backlog of experiments that is more static than a dynamic experimentation programme demands.

Introducing SCORE – Conversion.com’s prioritisation process

Our approach to prioritisation is based on more than 10 years’ experience running experimentation programmes for clients big and small.

We wanted to create an approach that:

  • Prioritises the right experiments: So you can deliver impact (and insight) rapidly.
  • Adapts based on insight + results: The more experiments you run, the stronger your prioritisation becomes.
  • Removes subjectivity: As far as possible, data should be driving prioritisation – not opinion.
  • Allows for the practicalities of running an experimentation programme: It adapts to the reality of working in a business where the wider priorities, goals and resources change.

But the downside is that it’s not a simple checklist model. In our experience, there’s no easy answer to prioritisation – it takes work. But it’s better to spend a little more time on prioritisation than waste a lot more effort building the wrong experiments.

With that in mind, we’re presenting SCORE – Conversion.com’s prioritisation process:

  • Strategy
  • Concepts
  • Order
  • Roadmap
  • Experimentation

As you’ll see, the prioritisation of concepts against each other happens in the middle of the process (“Order”) and is contingent on the programme’s strategy.

Strategy: Prioritising your experimentation framework

At Conversion.com, our experimentation framework is fundamental to our approach. Before we start on concepts, we first define the goal, KPIs, audiences, areas and levers (the factors that we believe affect user behaviour).

You can read more about our framework here and you can create your own with the templates here.

When your framework is complete (or, at least, started – it’s never really complete), we can prioritise at the macro level – before we even think about experiments.

Assuming we’ve defined and narrowed down the goal and KPIs, we then need to prioritise the audiences, areas and levers:

Audiences

Prioritise your audiences on volume, value and potential:

  • Volume – the monthly unique visitors of this audience. (That’s why it’s helpful to define identifiable audiences like “prospects”, “users on a free trial”, “new customers”, and so on.)
  • Value – the revenue or profit per user. (Continuing the above example, new customers are of course worth more than prospects – but at a far lower volume.)
  • Potential – the likelihood that you’ll be able to modify their behaviour. On a retail website, for example, there may be less potential to impact returning customers than potential customers – it may be harder to increase their motivation and ability to convert relative to a user who is new to the website.

You can, of course, change the criteria here to adapt the framework to better suit your requirements. But as a starting point, we suggest combining the profit per user and the potential improvement.
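
To make that starting point concrete, here is a minimal sketch in Python – with entirely made-up numbers – that combines volume, profit per user and potential into a single score per audience. Treat it as a rough heuristic rather than a definitive formula.

audiences = [
    {"name": "prospects",        "monthly_visitors": 500_000, "profit_per_user": 2.0,  "potential": 0.8},
    {"name": "free-trial users", "monthly_visitors": 60_000,  "profit_per_user": 15.0, "potential": 0.6},
    {"name": "new customers",    "monthly_visitors": 20_000,  "profit_per_user": 40.0, "potential": 0.4},
]

for audience in audiences:
    # Total opportunity = volume x value, weighted by how movable the audience is.
    audience["score"] = (audience["monthly_visitors"]
                         * audience["profit_per_user"]
                         * audience["potential"])

for audience in sorted(audiences, key=lambda a: a["score"], reverse=True):
    print(f'{audience["name"]:<18} {audience["score"]:>12,.0f}')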

Don’t forget, we want to prioritise the biggest value audiences first – so that typically means targeting as many users as possible, rather than segmenting or personalising too soon.

Areas

In much the same way as audiences, we can prioritise the areas – the key content that the user interacts with.

For example, identify the key pages on the website (homepage, listings page, product page, etc) and score them on:

  • Volume – the monthly unique visitors for the area.
  • Value – the revenue or profit from the area.
  • Potential – the likelihood that you’ll be able to improve the area’s performance. (Now’s a good time to use your quantitative and qualitative research to inform this scoring.)

(It might sound like we’re falling into the trap of other prioritisation models: asking you to estimate potential, which can be subjective. But, in our experience, people are more likely to score an area objectively, rather than an experiment that they created and are passionate about.)

Also, this approach doesn’t need to be limited to your website. You can apply it to any other touchpoint in the user journey too – including offline. Your cart abandonment email, customer calls and Facebook ads can (and should) be used in this framework.

If your KPI is profit, you may want to include offline content like returns labels in your prioritisation model.

Levers

As above, levers are defined as the key factors or themes that you think affect an audience’s motivation or ability to convert on a specific area.

These might be themes like pricing, trust, delivery, returns, form usability, and so on. (Take another look at the experimentation framework to see why it’s important to separate the lever from the execution.)

When you’re starting to experiment, it’s hard to prioritise your levers – you won’t know what will work and what won’t.

That’s why you can prioritise them on either:

  • Confidence – a simple score to reflect the quantitative and qualitative research that supports the lever. If every research method shows trust as a major concern for your users, it should score higher than another lever that only appears occasionally.
  • Win rate – If you have run experiments on this lever in the past, what was their win rate? It’s normally a good indicator of future success.

Of course, if you’re starting experimentation, you won’t have a win rate to rely on (so estimating the confidence is a fantastic start).

But if you’ve got a good history of experimentation – and you’ve run the experiments correctly, and focused them on a single lever – then you should use this data to inform your prioritisation here.

Again, the more we experiment, the more accurate this gets – so don’t obsess over every detail. (After all, it’s possible that a valid lever may have a low win rate simply because of a couple of experiments with poor creative.)  
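
If you do have an experiment history, a simple win rate per lever is enough at this stage. A minimal sketch, assuming each past experiment has been tagged with the single lever it tested:

from collections import defaultdict

past_experiments = [  # hypothetical history: one lever per experiment
    {"lever": "trust",    "won": True},
    {"lever": "trust",    "won": False},
    {"lever": "delivery", "won": True},
    {"lever": "delivery", "won": True},
    {"lever": "pricing",  "won": False},
]

runs, wins = defaultdict(int), defaultdict(int)
for experiment in past_experiments:
    runs[experiment["lever"]] += 1
    wins[experiment["lever"]] += experiment["won"]

for lever in runs:
    print(f"{lever:<10} win rate: {wins[lever] / runs[lever]:.0%} "
          f"over {runs[lever]} experiments")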

Putting this all together, you can now start to prioritise the audiences, areas and levers that should be focused on:

As you can see, we haven’t even started to think about concepts and execution – but we have a strong foundation for our prioritisation.

Concepts: Getting the right ideas

After defining the strategy, you can now run structured ideation around the KPIs, audiences, areas and levers that you’ve defined.

This creates the ideal structure for ideation.

Rather than starting with, “What do we want to test?” or “How can we improve product pages?”, we’re instead focusing on the core hypotheses that we want to validate:

  • How can we improve the perception of pricing on product pages for new customers?
  • How can we overcome concerns around delivery in the basket for all users?
  • And so on.

This structured ideation around a single hypothesis generates far better ideas – and means you’re less susceptible to the tendency to throw everything into a single experiment (and not knowing which part caused the positive/negative result afterwards).

Order: Prioritising the concepts

When prioritising the concepts – especially when a lever hasn’t been validated by prior experiments – you should look to start with the minimum viable experiment (MVE).

Just like a minimum viable product, we want to define the simplest experiment that allows us to validate the hypothesis. (Can we test a hypothesis with 5 hours of development time rather than 50?)

This is a hugely important concept – and one that’s easily overlooked. It’s natural that we want to create the “best” iteration for the content we’re working on – but that can limit the success of our experimentation programme. It’s far better to run ten MVEs across multiple levers that take 5 hours each to build, rather than one monster experiment that takes 50 hours to build. We’ll learn 10x as much, and drive significantly higher value.

In one AB test for a real estate client, we created a fully functional “map view”. It was based on a significant volume of user research – but the minimum viable experiment would have been simply to test adding a “Map view” button without the underlying functionality.

So at the end of this phase, we should have defined the MVE for each of the high priority levers that we’re going to start with.

Roadmap: Creating an effective roadmap

There are many factors that can affect your experimentation roadmap – factors that stop you from starting at the top of your prioritised list and working your way down:

  • You may have limited resource, meaning that the bigger experiments have to wait till later.
  • There may be upcoming page changes or product promotions that will affect the experiment.
  • Other teams may be running experiments too, which you’ll need to plan around.

And there are dozens more: resource, product changes, marketing and seasonality can all block individual experiments – but they shouldn’t block experimentation altogether.

That’s why planning your roadmap is as important as prioritising the experiments. Planning delivers the largest impact (and insight) in spite of external factors.

To plan effectively:

  • Identify your swimlanes: These are the audiences and areas from your framework that you’ll be experimenting on. (Again, make sure you focus on the high priority audiences and areas – don’t be tempted to segment or personalise too early.)
  • Estimate experiment duration: Use an appropriate minimum detectable effect for the audience and area to calculate the duration (see the sketch after this list), then block out this time in the roadmap.
  • Experiment across multiple levers: Gather more insight (and spread your risk) by experimenting across multiple levers. If you focus heavily on a lever like “trust” with your first six experiments, you might have to start again if the first two or three experiments aren’t successful.
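
For the duration estimate, a standard two-proportion sample-size calculation usually suffices. A minimal sketch, assuming a two-sided test at 95% confidence and 80% power (the z-values below are hard-coded for those settings, and the example figures are illustrative only):

import math

def sample_size_per_variant(baseline_rate, relative_mde, z_alpha=1.96, z_beta=0.8416):
    # Visitors needed in each variant to detect a relative uplift of relative_mde.
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

def duration_weeks(weekly_visitors, baseline_rate, relative_mde, variants=2):
    n = sample_size_per_variant(baseline_rate, relative_mde)
    return variants * n / weekly_visitors

# Example: an area with 40,000 visitors a week, a 5% baseline conversion rate
# and a 10% relative MDE.
print(round(duration_weeks(40_000, 0.05, 0.10), 1), "weeks")

If you already use your testing tool’s sample-size calculator, that does the same job – the point is simply to block realistic durations into the roadmap.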

Experimentation: Running and analysing the experiments

With each experiment, you’ll learn more about your users: what changes their behaviour and what doesn’t.

You can scale successful concepts and challenge unsuccessful concepts.

For successful experiments, you can iterate by:

  • Moving incrementally from minimum viable experiments to more impactful creative. (With one Conversion.com client, we started with a simple experiment that promoted the speed of delivery. After multiple successful experiments around delivery, we eventually worked with the client to test the commercial viability of same-day delivery.)
  • Applying the same lever to other areas and potentially audiences. If amplifying trust messaging on the basket page works well, it’ll probably work well on listing and product pages too.

Meanwhile, an experiment may be unsuccessful because:

  • The lever was invalidated – Qualitative research may have said customers care about the lever, but in practice it makes no difference.
  • The execution was poor – It happens sometimes. Every audience/area/lever combination can have thousands of possible executions – you won’t get it right first time, every time, and you risk rejecting a valid lever because of a lousy experiment.
  • There was an external factor – It’s also possible that other factors affected the test: there was a bug, the underlying page code changed, or a promotion or stock availability affected performance. It doesn’t happen often, but it needs to be checked.

In experiment post-mortems, it’s crucial to investigate which of these is most likely, so we don’t reject a lever because of poor execution or external factors.

What’s good (and bad) about this approach

This approach works for Conversion.com – we’ve validated it on clients big and small for more than ten years, and have improved it significantly along the way.

It’s good because:

  • It’s a structured and effective prioritisation strategy.
  • It doesn’t just reward data and insight – it actively adapts and improves over time.
  • It works in the real-world, allowing for the practicalities of running an experimentation programme.

On the flip side, its weaknesses are that:

  • It takes time to do properly. (You should create and prioritise your framework first.)
  • You can’t feed in 100 concepts and expect it to spit out a nicely ordered list. (But in our experience, you probably don’t want to.)

So, what now?

  1. If you haven’t already, print out or copy this Google slide for Conversion.com’s experimentation framework.
  2. Email marketing@conversion.com to join our mailing list. We like sharing how we approach experimentation.
  3. Share your feedback below. What do you like? What do you do differently?

How to build an experimentation, CRO or AB testing framework

Everyone approaches experimentation differently. But there’s one thing companies that are successful at experimentation all have in common: a strategic framework that drives experimentation.

In the last ten years we’ve worked with start-ups through to global brands like Facebook, the Guardian and Domino’s Pizza, and the biggest factor we’ve seen impact success is having this strategic framework to inform every experiment.

In this post, you’ll learn:

  • Why a framework is crucial if you want your experimentation to succeed
  • How to set a meaningful goal for your experimentation programme
  • How to build a framework around your goal and create your strategy for achieving it

We’ll be sharing the experimentation framework that we use day in, day out with our clients to deliver successful experimentation projects. We’ll also share some blank templates of the framework at the end, so after reading this you’ll be able to have a go at completing your own straight away.

Why use a framework? Going from tactical to strategic experimentation

Using this framework will help you mature your own approach to experimentation, make a bigger impact, get more insight and have more success.

Having a framework:

  • Establishes a consistent approach to experimentation across an entire organisation, enabling more people to run more experiments and deliver value
  • Allows you to spend more time on the strategy behind your experiments and less time on the “housekeeping” of trying to manage your experimentation programme
  • Enables you to transition from testing tactically to testing strategically

Let’s explore that last point in detail.

In tactical experimentation every experiment is an island – separate and unconnected to any others. Ideas generally take the form of solutions (“we should change this to be like that”) and come from heuristics (aka guessing), best practice or copying a competitor. There is very little guiding what experiments run where, when and why.

Strategic experimentation, on the other hand, is focused on achieving a defined goal and has a clear strategy for achieving it. The goal is the starting point – a problem with potential solutions explored through the testing of defined hypotheses. All experiments are connected and experimentation is iterative. Every completed experiment generates more insight that prompts further experiments as you build towards achieving the goal.

If strategic experimentation doesn’t already sound better to you then we should also mention the typical benefits you’ll see as a result of maturing your approach in this way.  

  • You’ll increase your win rate – the % of experiments that are successful
  • You’ll increase the impact of each successful experiment – on top of any conversion rate uplifts, experiments will generate more actionable insight
  • You’ll never run out of ideas again – every conclusive experiment will spawn multiple new ideas

Introducing the Conversion.com experimentation framework

As we introduce our framework, you might be surprised by its simplicity. But all good frameworks are simple. There’s no secret sauce here. Just a logical, strategic approach to experimentation.

Just before we get into the detail of our framework, a quick note on the role of data. Everything we do should be backed by data. User-research and analytics are crucial sources of insight used to build the layers in our framework. But the experiments we run using the framework are often the best source of data and insight we have. An effective framework should therefore minimise the time it takes to start experimenting. We cannot wait for perfect data to appear before we start, or try and get things right first time. The audiences, areas and levers that we’ll define in our framework come from our best assessment of all the data we have at a given time. They are not static or fixed. Every experiment we run helps us improve and refine them, and our framework and strategy are updated continuously as more data becomes available.

Part 1 – Establishing the goal of your experimentation project

The first part of the framework is the most important by far. If you only have time to do one thing after reading this post it should be revisiting the goal of your experimentation.

Most teams don’t set a clear goal for experimentation. It’s as simple as that. Any strategy needs to start with a goal, otherwise how can you differentiate success from wasted effort?

A simple test of whether your experimentation has a clear goal is to ask everyone in your team to explain it. Can they all give exactly the same answer? If not, you probably need to work on this. 

Don’t be lazy and choose a goal like “increase sales” or “growth”. We’re all familiar with the importance of goals being “SMART” (specific, measurable, achievable, relevant, time-bound) when setting personal goals. Apply this when setting the goal for experimentation.

Add focus to your goal with targets, measures and deadlines, and wherever possible be specific rather than general. Does “growth” mean “increase profit” or “increase revenue”? By how much? By when? A stronger goal for experimentation would be something like “Add an additional £10m in profit within the next 12 months”. There will be no ambiguity as to whether you have achieved that or not in 12 months’ time.

Ensure your goal for experimentation is SMART

Some other examples of strong goals for experimentation

  • “Increase the rate of customers buying add-ons from 10% to 15% in 6 months.”
  • “Find a plans and pricing model that can deliver 5% more new customer revenue before Q3.”
  • “Determine the best price point for [new product] before it launches in June.”

A clear goal ensures everyone knows what they’re working towards, and what other teams are working towards. This means you can coordinate work across multiple teams and spot any conflicts early on.

Part 2 – Defining the KPIs that you’ll use to measure success

When you’ve defined the goal, the next step is to decide how you’re going to measure it. We like to use a KPI tree here – working backwards from the goal to identify all the metrics that affect it.

For example, if our goal is “Add an additional £10m in profit within the next 12 months” we construct the KPI tree of the metrics that combine to calculate profit. In this simple example let’s say profit is determined by our profit per order times how many orders we get, minus the cost of processing any returns.

Sketching out a KPI tree is an easy way to decide the KPIs you should focus on

These 3 metrics then break down into smaller metrics and so on. You can then decide which of the metrics in the tree you can most influence through experimentation. These then become your KPIs for experimentation. In our example we’ve chosen average order value, order conversion rate and returns rate as these can be directly impacted in experiments. Cost per return on the other hand might be more outside our control.
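
To make the structure concrete, here is a minimal sketch of that example tree as a nested Python dictionary. The “cost of goods sold” and “traffic” entries are hypothetical sub-metrics added purely for illustration; the flagged entries are the three KPIs chosen above.

kpi_tree = {
    "profit per order": {
        "average order value": {"kpi": True},
        "cost of goods sold": {},        # hypothetical sub-metric
    },
    "number of orders": {
        "traffic": {},                   # hypothetical sub-metric
        "order conversion rate": {"kpi": True},
    },
    "cost of returns": {
        "returns rate": {"kpi": True},
        "cost per return": {},           # harder to influence through experiments
    },
}

def experimentation_kpis(node, name):
    # Walk the tree and collect the metrics flagged as experimentation KPIs.
    found = [name] if node.get("kpi") else []
    for child, subtree in node.items():
        if isinstance(subtree, dict):
            found += experimentation_kpis(subtree, child)
    return found

print(experimentation_kpis(kpi_tree, "profit"))
# -> ['average order value', 'order conversion rate', 'returns rate']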

When you’re choosing KPIs, remember what the K stands for. These are key performance indicators – the ones that matter most. We’d recommend choosing at most 2 or 3. Remember, the more you choose, the more fragmented your experimentation will be. You can track more granular metrics in each experiment, but the overall impact of your experiments will need to be measured in these KPIs.

Putting that all together, you have the first parts of your new framework. This is our starting point – and it is worth the time to get this right as everything else hinges on this.

We present our framework as rows to highlight the importance of starting with the goal and working down from there.

Part 3 – Understanding how your audience impacts your KPIs and goal

Now we can start to develop our strategy for impacting the KPIs and achieving the goal. The first step is to explore how the make-up of our audience should influence our approach.

In any experiment, we are looking to influence behaviour. This is extremely difficult to do. It’s even more difficult if we don’t know who we’re trying to influence – our audience.

We need to understand the motivations and concerns of our users – and specifically how these impact the goal and KPIs we’re trying to move. If we understand this, then we can then focus our strategy on solving the right problems for the right users.

So how do we go about understanding our audience? For each of our KPIs the first question we should ask is “Which groups of users have the biggest influence on this KPI?” With this question in mind we can start to map out our audience.

Start by defining the most relevant dimensions – the attributes that identify certain groups of users. Device and Location are both dimensions, but these may not be the most insightful ways to split your audience for your specific goal and KPIs. If our goal is to “reduce returns by 10% in 6 months”, we might find that there isn’t much difference in returns rate for desktop users compared to mobile users. Instead we might find returns rate varies most dramatically when we split users by the Product Type that they buy.

For each dimension we can then define the smaller segments – the way users should be grouped under that dimension. For example, Desktop, Mobile and Tablet would be segments within the Device dimension.

You can have a good first attempt at this exercise in 5–10 minutes. At the start, accuracy isn’t your main concern. You want to generate an initial map that you can then start validating using data – refining your map as necessary. You might also find it useful to create 3 or 4 different audience maps, each splitting your audience in different ways, that are all potentially valid and insightful for your goal.

Map out your audiences by thinking about the relevant dimensions that could have the greatest influence on your KPIs and overall goal.

Once you have your potential audiences the next step would then be to use data to validate the size and value of these audiences. The aim here isn’t to limit our experiments to a specific audience – we’re not looking to do personalisation quite yet. But understanding our audiences means when we come to designing experiments we’ll know how to cater to the objections and concerns of as many users as possible.

We add the audience dimensions we feel are most relevant to our goal and KPIs to the framework. If it’s helpful you can also show the specific segments below.

Part 4 – Identifying the areas with the greatest opportunity to make an impact

Armed with a better understanding of our audience, we still need to choose when and where to act to be most effective. Areas is about understanding the user journey – and focusing our attention on where we can make the biggest impact.

For each audience, the best time and place to try and influence users will vary. And even within a single audience, the best way to influence user behaviour is going to depend on which stage of their purchase journey the users are at.

As with audiences, we need to map out the important areas. We start by mapping the onsite journeys and funnels. But we don’t limit ourselves to just onsite experience – we need to consider the whole user journey, especially if our goal is something influenced by behaviours that happen offsite. We then need to identify which steps directly impact each of our KPIs. This helps to limit our focus, but also highlights non-obvious areas where there could be value.

Sketch out your entire user journey, including what happens outside the website. Then highlight which areas impact each of your KPIs.

As with audiences, you can sketch out the initial map fairly quickly, then use analytics data to start adding more useful insights. Label conversion and drop-off rates to see where abandonment is high. Don’t just do this once for all traffic; do it repeatedly, once for each of the important audiences identified in the previous step. This will highlight where things are similar but, crucially, where things are different.

Once you have your area map you can start adding clickthrough and drop-off rates for different audiences to spot opportunities.

So with a good understanding of our audiences and areas we can add these to our framework. Completing these two parts of the framework is easier the more data you have. Start with your best guess at the key audiences and areas, then go out and do your user-research to inform your decisions here. Validate your audiences and areas with quant and qual data.

Add your audiences and areas to your framework. You may have more than 4 of each but that’s harder for us to fit in one image!

Part 5 – Identifying the potential levers that influence user behaviour

Levers are the factors we believe can influence user behaviour: the broad themes that we’ll explore in experimentation. At its simplest, they’re the reasons why people convert, and also the reasons why people don’t convert. For example, trust, pricing, urgency and understanding are all common levers.

To identify levers, first we look for any problems that are stopping users from converting on our KPI – we call these barriers to conversion. Some typical barriers are lack of trust, price, missing information and usability problems.

We then look for any factors that positively influence a user’s chances of converting – what we call conversion motivations. Some typical motivations are social proof (reviews), guarantees, USPs of the product/service and savings and discounts.

Together the barriers and motivations give us a set of potential levers that we can “pull” in an experiment to try and influence behaviour. Typically we’ll try to solve a barrier or make a motivation more prominent and compelling.

Your exact levers will be unique to your business. However there are some levers that come up very frequently across different industries that can make for good starting points.

Ecommerce – Price, social proof (reviews), size and fit, returns, delivery cost, delivery methods, product findability, payment methods, checkout usability

SaaS – Free trial, understanding product features, plan types, pricing, cancelling at the end of trial, monthly vs annual pricing, user onboarding

Gaming – Welcome bonuses, ongoing bonuses, payment methods, popular games, odds

Where do levers come from? Data. We conduct user-research and gather quantitative and qualitative data to look for evidence of levers. You can read more about how we do that here.

When first building our framework it’s important to remember that we’re looking for evidence of levers, not conclusive proof. We want to assemble a set of candidate levers that we believe are worth exploring. Our experiments will then validate the levers and give us the “proof” that a specific lever can effectively be used to influence user behaviour.

You might start initially with a large set of potential levers – 8 or 10 even. We need a way to validate levers quickly and reduce this set down to the 3–4 most effective. Luckily we have the perfect tool for that in experiments.

Add your set of potential levers to your framework and you’re ready to start planning your experiments.

Part 6 – Defining the experiments to test your hypotheses

The final step in our framework is where we define our experiments. This isn’t an exercise we do just once – we don’t define every experiment we could possibly run from the framework at the start – but using our framework we can start to build the hypotheses that our experiments will explore.

At this point, it’s important to make a distinction between a hypothesis for an experiment and the execution of an experiment. A hypothesis is a statement we are looking to prove true or false. A single hypothesis can then be tested through the execution of an experiment – normally a set of defined changes to certain areas for an audience.

We define our hypothesis first before thinking about the best execution of an experiment to test it, as there are many different executions that could test a single hypothesis. At the end of the experiment the first thing we do is use the results to evaluate whether our hypothesis has been proven or disproven. Depending on this, we then evaluate the execution separately to decide whether we can iterate on it – to get even stronger results – or whether we need to re-test the hypothesis using a different execution.  

The framework makes it easy to identify the hypothesis statements that we will look to prove or disprove in our experiments. We can build a hypothesis statement from the framework using this simple template:

“We believe lever [for audience] [on area] will impact KPI.”

The audience and area here are in square brackets to denote that it’s optional whether we specify a single audience and area in our hypothesis. Doing so will give us a much more specific hypothesis to explore, but in a lot of cases we may also be interested in testing the effectiveness of the lever across different audiences and different areas – so we may not want to specify the audience and area until we define the execution of the experiment.
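
As a small illustration of the template (not part of the framework itself), a helper like this treats the audience and area as genuinely optional:

def hypothesis(lever, kpi, audience=None, area=None):
    # Build a hypothesis statement; audience and area are optional qualifiers.
    parts = [f"We believe {lever}"]
    if audience:
        parts.append(f"for {audience}")
    if area:
        parts.append(f"on {area}")
    parts.append(f"will impact {kpi}.")
    return " ".join(parts)

print(hypothesis("clearer delivery messaging", "order conversion rate",
                 audience="new customers", area="the basket page"))
# "We believe clearer delivery messaging for new customers on the basket page
#  will impact order conversion rate."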

The framework allows you to quickly create hypotheses for how you’ll impact your KPIs and achieve your goal.

Using the framework

Your first draft of the completed framework will have a large number of audiences, areas and levers, and even multiple KPIs. You’re not going to be able to tackle everything at once. A good strategy should have focus. Therefore you need to do two things before you can define a strategy from the framework.

Prioritise KPIs, audiences and areas

We’re going to be publishing a detailed post on how this framework enables an alternative to typical experiment prioritisation.

The core idea is that you first prioritise the KPI from your framework that you most need to impact in order to achieve your goal. Then evaluate your audiences to identify the groups that are the highest priority to influence if you want to move that KPI. Then, for that audience, prioritise the areas of the user journey that offer the greatest opportunity to influence their behaviour.

This then gives you a narrower initial focus. You can return to the other KPIs at a later date and do the same prioritisation exercise for them.

Validate levers

You need to quickly refine your set of levers and identify the ones that have the greatest potential. If you have run experiments before you should look back through each experiment and identify the key lever (or levers) that were tested. You can then give each lever a “win rate” based on how often experiments using that lever have been successful. If you haven’t yet started experimenting, you likely already have an idea of the potential priority order of your levers based on the volume of evidence for each that you found during your user-research.

However, the best way to validate a lever is to run an experiment to test the impact it can have on our KPI. You need a way to do this quickly. You don’t want to invest significant time and effort testing hypotheses around a lever that turns out never to have been valid. Therefore, for each lever you should identify what we call the minimum viable experiment.

You’re probably familiar with the minimum viable product (MVP) concept. In a minimum viable experiment we look to design the simplest experiment we can that will give us a valid signal as to whether a lever works at influencing user behaviour.

If the results of the minimum viable experiment show a positive signal, we can then justify investing further resource on more experiments to validate hypotheses around this lever. If the minimum viable experiment doesn’t give a positive signal, we might then de-prioritise that lever, or remove it completely from our framework. We’ll also be sharing a post soon going into detail on designing minimum viable experiments.

Creating a strategy

How you create a strategy from the framework will depend on how much experimentation you have done before and therefore how confident you are in your levers. If you’re confident in your levers, then we’d recommend defining a strategy that lasts for around 3 months and focuses on exploring the impact of 2-3 of your levers on your highest priority KPI. If you’re not confident in your levers, perhaps having not tested them before, then we’d recommend an initial 3-6 month strategy that looks to run the minimum viable experiment on as many levers as possible. This will enable you to validate your levers quickly so that you can take a narrower strategy later.

Crucially at the end of each strategic period we can return to the overall framework, update and refine it from what we’ve learnt from our experiments, and then define our strategy for the next period.

For one quarter we might select a single KPI and a small set of prioritised audiences, areas and levers to focus on and validate.

Key takeaways

You can have a first go at creating your framework in about 30 minutes. Then you can spend as much or as little time as you like refining it before you start experimenting. Remember your framework is a living thing that will change and adapt over time as you learn more and get more insight.

  1. Establish the goal of your experimentation project
  2. Define the KPIs that you’ll use to measure success
  3. Understand how your audience impacts your KPIs and goal
  4. Identify the areas with the greatest opportunity to make an impact
  5. Identify the potential levers that influence user behaviour
  6. Define the experiments to test your hypotheses

The most valuable benefit of the framework is that it connects all your experimentation together into a single strategic approach. Experiments are no longer islands, run separately and with little impact on the bigger picture. Using the framework to define your strategy ensures that every experiment is playing a role, no matter how small, in helping you impact those KPIs and achieve your goal.

Alongside this, using a framework also brings a large number of other practical advantages:

  • It’s clear – your one diagram can explain any aspect of your experimentation strategy to anyone who asks, or whenever you need to report on what you’re doing
  • It acts as a sense check – any experiment idea that gets put forward can be assessed on how it fits within the framework. If it doesn’t fit, it’s an easy rejection with a clear reason why
  • It’s easy to come back to – things have a nasty habit of getting in the way of experimentation, but with the framework, even if you leave it for a couple of months, it’s easy to come back to it and pick up where you left off
  • It’s easier to show progress and insight – one of the biggest things teams struggle with is documenting the results of all their experiments and what was learnt. With the framework, the idea is that it updates and changes over time, so you know that your previous experiment results have all been factored in and you’re doing what you’re doing for a reason

As we said at the start of this post, there is no special sauce in this framework. It’s just taking a logical approach, breaking down the key parts of an experimentation strategy. The framework we use is the result of over 10 years of experience running experimentation and CRO projects and it looks how it does because it’s what works for us. There’s nothing stopping you from creating your own framework from scratch, or taking ours and adapting it to suit your business or how your teams work. The important thing is to have one, and to use it to go from tactical to strategic experimentation.

You can find a blank Google Slide of our framework here that you can use to create your own.

Alternatively you can download printable versions of the framework if you prefer to work on paper. These templates also allow for a lot more audiences, areas, levers and experiments than we can fit in a slide.

If you would like to learn more, get in touch today!

5 steps to kick-start your experimentation programme with actionable insights

Experimentation has to be data-driven.

So why are businesses still kicking off their experimentation programmes without good data? We all know running experiments on gut-feel and instinct is only going to get you so far.

One problem is the ever-growing number of research methods and user-research tools out there. Prioritising what research to conduct is difficult. Especially when you are trying to maximise success with your initial experiments and need to get those experiments out the door quickly to show ROI.

We are no stranger to this problem. And the solution, as ever, is to take a more strategic approach to how we generate our insight. We start every project with what we call the strategic insights phase. This is a structured, repeatable approach to planning user-research we’ve developed that consistently generates the most actionable insight whilst minimising effort.

This article will provide a step-by-step guide to how we plan our research strategy so that you can replicate something similar yourself, setting your future experiments up for greater success.

The start of an experimentation programme is crucial. The pressures of getting stakeholder buy-in and achieving quick ROI mean the initial experiments are often the most important. A solid foundation of actionable insight from user-research can make a big difference to how successful your early experiments are.

With hundreds of research tools enabling multiple different research methods, a challenge arises with how we choose which research method will generate the insight that’s most impactful and actionable. Formulating a research strategy for how you’re going to generate your insight is therefore crucial.

When onboarding new clients, we run an intense research phase for the first month. This allows us to get up to speed on the client’s business and customers. More importantly, it provides us with data that allows us to start building our experimentation framework – identifying where our experimentation can make the most impact and what our experimentation should focus on. We find dedicating this time to insights sets our future experiments up for the bigger wins and therefore, a rapid return on investment.

Our approach: Question-led insights

When conducting research to generate insight, we use what we call a question-led approach. Any piece of research we conduct must have the goal of answering a specific question. We identify the questions we need to answer about a client’s business and their website and then conduct only the research we need to answer them. Taking this approach allows us to be efficient, gaining impactful and actionable insights that can drive our experimentation programme.

Following a question-led approach also means we don’t fall into the common pitfalls of user-research:

  • Conducting research for the sake of it
  • Wasting time down rabbit holes within our data or analytics
  • Not getting the actionable insight you need to inform experimentation

There are 5 steps in our question-led approach.

1. Identify what questions you need, or want, to answer about your business, customers or website

The majority of businesses still have questions about their customers that they don’t have the answers to. Listing these questions provides a brain-dump of everything you don’t know but that, if you did, would help you design better experiments. Typically these questions will fall into three main categories: your business, your customers and your website.

Although one size does not fit all with the questions we need to answer, we have provided some of the typical questions that we need to answer for clients in e-commerce or SaaS.

SaaS questions:

  • What is the current trial-to-purchase conversion rate?
  • What motivates users on the trial to make a purchase? What prevents users on the trial from making a purchase?
  • What is the distribution between the different plans on offer?
  • What emails are sent to users during their trial? What is the life cycle of these emails?
  • What are the most common questions asked to customer services or via live chat?

We can quite typically end up with a list of 20-30 questions. So the next step is to prioritise what we need to answer first.

2. Prioritise what questions need answering first

We want our initial experiments to be as data-driven and successful as possible. Therefore, we need to tackle the questions that are likely to bring about the most impactful and actionable insights first.

For example, a question like “What elements in the navigation are users interacting with the most?” might be a ‘nice to know’. However, if we don’t expect to run a navigation experiment any time soon, this may not be a ‘need to know’ and therefore wouldn’t be high priority. On the other hand, a question like “What’s stopping users from adding products to the basket?” is almost certainly a ‘need to know’. Answering this is very likely to generate insight that can be directly turned into an experiment. The rule of thumb is to prioritise the ‘need to know’ questions ahead of the ‘nice to know’.

We also need to get the actionable insight quickly. Therefore, it is important to ensure that we prioritise questions that aren’t too difficult or time consuming to answer. So, a second ranking of ‘ease’ can also help to prioritise our list.

3. Decide the most efficient research techniques to answer these questions

There are many types of research you could use to answer your questions. Typically we find the majority of questions can be answered by one or more of web analytics, on-site or email surveys, usability testing or heatmaps/scrollmaps. There may be more than one way to find your answer.

However, one research method could also answer multiple questions. For example, one round of usability testing might be able to answer multiple questions focused on why a user could be dropping off at various stages of your website. This piece of research would therefore be more impactful, as you are answering multiple questions, and would be more time efficient compared to conducting multiple different types of research.

For each question in our now prioritised list we decide the research method most likely to answer it. If there are multiple options you could rank these by the most likely to get an answer in the shortest time. In some cases we may feel the question was not sufficiently answered by the first research method, so it can be helpful to consider what you would do next in these cases.

4. Plan the pieces of research you will carry out to cover the most questions

You should now have a list of prioritised questions you want to answer and the research method you would use to answer each. From this you can select the pieces of research you should carry out based on which would give you the best coverage of the most important questions. For example, you might see that 5 of your top 10 questions could be answered through usability testing. Therefore, you should prioritise usability testing in the time you have, and the questions you need to answer can help you design your set of tasks.
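
One way to picture that selection step – a minimal sketch with hypothetical questions and weights – is to greedily pick the research method that covers the most priority-weighted unanswered questions, then repeat:

questions = {  # question -> (priority weight, research methods that could answer it)
    "What's stopping users adding to basket?": (5, {"usability testing", "on-site survey"}),
    "Why do trial users not purchase?":        (5, {"email survey", "usability testing"}),
    "Which plan features matter most?":        (3, {"email survey"}),
    "Where do users drop out of checkout?":    (4, {"web analytics", "usability testing"}),
    "Do users notice the delivery promise?":   (2, {"heatmaps", "usability testing"}),
}

remaining = dict(questions)
research_plan = []
while remaining:
    methods = {m for _, options in remaining.values() for m in options}
    # Pick the method that answers the most priority-weighted remaining questions.
    best = max(methods, key=lambda m: sum(w for w, options in remaining.values() if m in options))
    research_plan.append(best)
    remaining = {q: v for q, v in remaining.items() if best not in v[1]}

print(research_plan)  # e.g. ['usability testing', 'email survey']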

After your first round of research, revisit your list of questions and for each question evaluate whether or not you feel it has been sufficiently answered. Your research may also have generated more questions that should be added to the list. Periodically you might also need to re-answer questions where user behaviour has changed due to your experimentation. For example, if initially users were abandoning on your basket page due to a lack of trust, but successful experiments have fixed this, then you may need to re-ask the question to discover new problems on the basket page.

On a regular basis you can then repeat this process again of prioritising the questions, deciding the best research methods and then planning your next set of research.

5. Feed these insights into your experimentation strategy

Once your initial research pieces have been conducted and analysed, it is important to compile the insight from them in one place. This has two benefits. The first is that it makes it easier to visualise and discover themes that may be emerging across multiple sources of insight. The second is that it gives you one source of information that can be shared with others within your business.

As your experimentation programme matures it is likely you will be continuously running research in parallel to your experiments. The insight from this research will answer new questions that will naturally arise and can help inform your experimentation.

Taking this question-led approach means you can be efficient with the time you spend on research, while still maximising your impact. Following our step-by-step guide will provide a solid foundation that you can work upon within your business:

  1. Identify what questions you need, or want, to answer about your business, customers or website
  2. Prioritise what questions need answering first
  3. Decide the most efficient research techniques to answer these questions
  4. Plan the pieces of research you will carry out to cover the most questions
  5. Feed these insights into your experimentation strategy

For more information on how to kick-start experimentation within your business, get in touch here.

Introducing: The 9 experimentation principles

At Conversion.com, our team and our clients know first-hand the impact experimentation can have. But we also see all too often the simple mistakes, misconceptions and misinterpretations organisations make that limit the impact, effectiveness and adoption of experimentation.

We wanted to put that right. But we didn’t just want to make another best-practice guide to getting started with CRO or top 10 tips for better experiments. Instead, inspired by the simple elegance of the UK government design principles, we set ourselves the challenge of defining a set of the core experimentation principles.

Our ambition was to create a set of principles that, if followed, should enable anyone to establish experimentation as a problem solving framework for tackling any and all problems their organisation faces. To distill over 10 years of experience in conversion optimisation and experimentation down to a handful of principles that address every common mistake, every common misconception and misinterpretation of what good experimentation looks like.

Many hours of discussion, debate and refinement later, we’re happy to be able to share the end product – the 9 principles of experimentation.

Here are the principles in their simplest form. You can also download a pdf of the experimentation principles that also includes quotes and stories we’ve gathered from experimentation experts at companies such as Just Eat, Booking.com, Microsoft and Facebook. A few snippets of those quotes are included below as a taster.

DOWNLOAD PRINCIPLES PDF

1 – Challenge assumptions, beliefs and doctrine

Experimentation should not be limited to optimising website landing pages, funnels and checkouts. Use experimentation as a tool to challenge the widely held assumptions, ingrained beliefs and doctrine of your organisation. It’s often by challenging these assumptions that you’ll see the biggest returns. Don’t accept “that’s the way it’s always been done” – to do so is to guarantee you’ll get the results you’ve always had. Experimentation provides a level playing field for evaluating competing ideas, scientifically, without the influence of authority or experience.

It was only when we were willing to question our core assumptions through interviews, data collection, and rigorous experimentation that we found answers to why growth had slowed...

-Rand Fishkin, CEO and Co-founder, SparkToro

2 – Always start with data

It sounds trite to say you should start with data. Yet most people still don’t. Gut-feel still dominates decision making and experiments based on gut-feel rarely lead to meaningful impact or insight. Good experimentation starts with using data to identify and understand the problem you’re trying to solve. Gather data as evidence and build a case for the likely causes of those problems. Once you have gathered enough evidence you can start to formulate hypotheses to be proven or disproven through experiments.

3 – Experiment early and often

In any project, look for the earliest opportunity to run an experiment. Don’t wait until you have already built the product/feature to run an experiment, or you’ll find yourself moulding the results to justify the investment or decisions you’ve already made. Experiment often to regularly sense-check your thinking, remove reliance on gut-feel and make better informed decisions.

4 – One, provable hypothesis per experiment

Every experiment needs a single hypothesis. That hypothesis statement should be clear, concise and provable – a cause-effect statement. A single hypothesis ensures the experiment results can be used to evaluate that hypothesis directly. Competing hypotheses introduce uncertainty. If you have multiple hypotheses, separate these into distinct experiments.

5 – Define the success metric and criteria in advance

Define the primary success metric and the success criteria for an experiment at the same time that you define the hypothesis. Doing so will focus your exploration of possible solutions around their ability to impact this metric. Failing to do so will also introduce errors and bias when analysing results—making the data fit your own preconceived ideas or hopes for the outcome.

Any targets drawn after the experiment is run should be called into question. The evidential value of an experiment comes from targets that were drawn before we started the test.

-Lukas Vermeer, Booking.com

6 – Start with the minimum viable experiment, then iterate

When tackling complex ideas the temptation can be to design a complex experiment. Instead, look for the simplest way to run an experiment that can validate just one part of the idea: the minimum viable experiment. Run this experiment to quickly get data or insight that either gives the green light to continue to more complex implementations, or flags problems early on. Then iterate and scale to larger experiments with confidence that you’re heading in the right direction.

7 – Evaluate the data, hypothesis, execution and externalities separately

When faced with a negative result, it can be tempting to declare an idea dead-in-the-water and abandon it completely. Instead, evaluate the four components of the experiment separately to understand the true cause:

  1. The data – was it correctly interpreted?
  2. The hypothesis – has it actually been proven or disproven?
  3. The execution – was our chosen solution the most effective?
  4. External factors – has something skewed the data?

An iteration with a slightly different hypothesis, or an alternative execution could end in very different results. Evaluating against these four areas separately, for both negative and positive results, gives four areas on which you can iterate and gain deeper insight.

8 – Measure the value of experimentation in impact and insight

The ultimate measures of the value of an experimentation programme are the impact it delivers and the insight it uncovers. Experimentation can only be judged a failure if it doesn’t give us any new insight that we didn’t have before. Negative results that give us new insight can often be more valuable than positive results that we don’t understand.

9 – Use statistical significance to minimise risk

Use measures of statistical significance when analysing experiments to manage the risk of making incorrect decisions. Achieving 95% statistical significance leaves a 1 in 20 chance of a false positive – seeing a signal where there is no signal. This might not be acceptable for a very high-risk experiment with something like product or pricing strategy, so increase your requirements to suit your appetite. Beware of experimenting without statistical significance – that’s not much better than guessing.
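
Your testing tool will usually do this calculation for you, but for anyone who wants to sanity-check a result by hand, here’s a minimal sketch of the underlying maths: a two-proportion z-test with an approximate two-sided p-value. The visitor and conversion numbers are purely illustrative:

```typescript
// Minimal sketch: two-proportion z-test for an A/B result.
// Raise the significance threshold for higher-risk experiments.
function zTest(convA: number, visitorsA: number, convB: number, visitorsB: number): number {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pPooled = (convA + convB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se; // z-score
}

// Two-sided p-value from the z-score, using a standard erf approximation.
function pValue(z: number): number {
  const erf = (x: number): number => {
    // Abramowitz & Stegun approximation 7.1.26
    const sign = x < 0 ? -1 : 1;
    x = Math.abs(x);
    const t = 1 / (1 + 0.3275911 * x);
    const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t;
    return sign * (1 - poly * Math.exp(-x * x));
  };
  return 2 * (1 - 0.5 * (1 + erf(Math.abs(z) / Math.SQRT2)));
}

// Illustrative numbers: 4.8% vs 5.6% conversion on 10,000 visitors each.
const z = zTest(480, 10000, 560, 10000);
console.log(pValue(z) < 0.05 ? "Significant at 95%" : "Not significant yet");
```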

The best data scientists are skeptics that double-check, triangulate results, and evaluate the positive and the negative results with the same scientific rigor.

-Ron Kohavi, Microsoft

***

These are the 9 principles we felt most strongly define experimentation, but no doubt we could have added others and made a longer list. If you have experimentation principles that you use at your organisation that we haven’t included here we’d be interested to hear about them and why you feel they’re important.

For more detail and even more insights from some of the world’s leading experts on experimentation, please be sure to download the full experimentation principles.

***

We’re also looking for stories and anecdotes of both good and bad examples of these principles in action from contributors outside Conversion, to include in future iterations. If you have something you feel epitomises one of the principles, please get in touch and you could feature in our future posts and content.

And finally, if you want to be notified when we publish more content about these experimentation principles, drop us an email with your contact details.

For any of the above get in touch at hello@conversion.com.

DOWNLOAD PRINCIPLES PDF

Introducing our hypothesis framework

Download printable versions of our hypothesis framework here.

Experiments are the building blocks of optimisation programmes. Each experiment will at minimum teach us more about the audience – what makes them more or less likely to convert – and will often drive a significant uplift on key metrics.

At the heart of each experiment is the hypothesis – the statement that the experiment is built around.

But hypotheses can range in quality. In fact, many wouldn’t even qualify as a hypothesis: eg “What if we removed the registration step from checkout”. That might be fine to get an idea across, but it’s going to underperform as a test hypothesis.

For us, an effective hypothesis is made up of eight key components. If it’s reduced to just one component showing what you’ll change (the “test concept”), you’ll not just weaken the potential impact of the test – you’ll undermine the entire testing programme.

That’s why we created our hypothesis framework. Based on almost 10 years’ experience in optimisation and testing, we’ve created a simple framework that’s applicable to any industry.

Conversion.com’s hypothesis framework


What makes this framework effective?

It’s a simple framework – but there are three factors that make it so effective.

  1. Putting data first. Quantitative and qualitative data is literally the first element in the framework. It focuses the optimiser on understanding why visitors aren’t converting, rather than brainstorming solutions and hoping there’ll be a problem to match.
  2. Separating lever and concept. This distinction is relatively rare – but for us, it’s crucial. A lever is the core theme for a test (eg “emphasising urgency”), whereas the concept is the application of that lever to a specific area (eg “showing the number of available rooms on the hotel page”). It’s important to make the distinction as it affects what happens after a test completes. If a test wins, you can apply the same lever to other areas, as well as testing bolder creative on the original area. If it loses, then it’s important to question whether the lever or the concept was at fault – ie did you run a lousy test, or were users just not affected by the lever after all?
  3. Validating success criteria upfront: The KPI and duration elements are crucial factors in any test, and are often the most overlooked. Many experiments fail by optimising for a KPI that’s not a priority – eg increasing add-to-baskets without increasing sales. Likewise the duration should not be an afterthought, but instead the result of statistical analysis on the current conversion rate, volume of traffic, and the minimum detectable uplift. All too often, a team will define, build and start an experiment, before realising that its likely duration will be several months.

Terminology

Quant and qual data

What’s the data and insight that supports the test? This can come from a huge number of sources, like web analytics, sales data, form analysis, session replay, heatmapping, onsite surveys, offsite surveys, focus groups and usability tests. Eg “We know that 96% of visitors to the property results page don’t contact an agent. In usability tests, all users wanted to see the results on a map, rather than just as a list.”

Lever

What’s the core theme of the test, if distilled down to a simple phrase? Each lever can have multiple implementations or test concepts, so it’s important to distinguish between the lever and the concept. Eg a lever might be “emphasising urgency” or “simplifying the form”.

Audience

What’s the audience or segment that will be included in the test? Like with the area, make sure the audience has sufficient potential and traffic to merit being tested. Eg an audience may be “all visitors” or “returning visitors” or “desktop visitors”.

Goal

What’s the goal for the test? It’s important to prioritise the goals, as this will affect the KPIs. Eg the goal may be “increase orders” or “increase profit” or “increase new accounts”.

Test concept

What’s the implementation of the lever? This shows how you’re applying the lever in this test. Eg “adding a map of the local area that integrates with the search filters”.

Area

What’s the flow, page or element that the test is focused on? You’ll need to make sure there’s sufficient potential in the area (ie that an increase will have a meaningful impact) as well as sufficient traffic too (ie that the test can be completed within a reasonable duration – see below). Eg the area may be “the header”, “the application form” or “the search results page”.

KPI

The KPI defines how we’ll measure the goal. Eg the KPI could be “the number of successful applications” or “the average profit per order”.

Duration

Finally, the duration is how long you expect the test to run. It’s important to calculate this in advance – and then stick to it. Eg the duration may be “2 weeks”.
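
To make the “calculate it in advance” point concrete, here’s a rough sketch of the eight components as a simple structure, together with an approximate duration estimate based on the standard two-proportion sample size formula. It assumes 95% significance and 80% power, and all figures and field names are illustrative rather than taken from a real experiment:

```typescript
// Minimal sketch: the eight hypothesis components as a typed structure, plus a
// rough duration estimate from baseline conversion rate, traffic and uplift.
interface Hypothesis {
  data: string;        // quant and qual evidence
  lever: string;       // core theme, e.g. "emphasising urgency"
  audience: string;
  goal: string;
  concept: string;     // how the lever is applied
  area: string;
  kpi: string;
  durationWeeks: number;
}

// Approximate sample size per variation for a two-proportion test
// at 95% significance and 80% power (z values 1.96 and 0.84).
function sampleSizePerVariation(baselineRate: number, relativeUplift: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeUplift);
  const z = 1.96 + 0.84;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil((z * z * variance) / ((p2 - p1) ** 2));
}

function estimateDurationWeeks(baselineRate: number, relativeUplift: number, weeklyVisitors: number, variations = 2): number {
  const required = sampleSizePerVariation(baselineRate, relativeUplift) * variations;
  return Math.ceil(required / weeklyVisitors);
}

// Illustrative example, loosely based on the property-map hypothesis mentioned earlier.
const exampleHypothesis: Hypothesis = {
  data: "96% of visitors to the results page don't contact an agent; usability tests showed demand for a map view",
  lever: "Helping users find the right property",
  audience: "All visitors",
  goal: "Increase enquiries",
  concept: "Add a map of the local area that integrates with the search filters",
  area: "Search results page",
  kpi: "Enquiries per visitor",
  durationWeeks: estimateDurationWeeks(0.04, 0.1, 30000), // 4% baseline, 10% relative uplift, 30k weekly visitors
};
console.log(exampleHypothesis.durationWeeks);
```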

Taking this further

This hypothesis framework isn’t limited to A/B tests on your website – it can apply anywhere: to your advertising creative and channels, even to your SEO, product and pricing strategy.
Any change and any experience can be optimised – and to do that effectively requires a data-driven and controlled framework like this.

Don’t forget – you can download printable versions of the hypothesis framework here.

5 Books every e-commerce manager should read (to increase conversions)

Keeping a healthy online sales engine is no walk in the park. From web usability and UX (yes, they’re different things) to copywriting and consumer psychology, there are many factors at play when it comes to your customers’ behaviour and their chances of converting (i.e. purchasing) on a given visit. With average e-commerce conversion rates at only 2%, an e-commerce manager needs every weapon at their disposal to keep their visitor-to-transaction ratio performing at its best. Knowing the impact of each conversion lever – and how to pull each one accordingly – is an art which, in the long run, is well worth the initial investment.

Whether you’re starting to think about testing, already running your own experiments or just looking to brush up on a few essential skills, this list has everything you need to get focused, and start thinking critically about your website conversion strategy and performance.

Usability

Don’t make me think – Steve Krug


A staple for the CRO community, Steve Krug’s Don’t Make Me Think is refreshingly real and to the point, bringing some much needed simplicity to the often over-complicated world of web design. Krug eats his own dog food by keeping right to the point – the book could easily be read over half a week’s commute – with relevant (and occasionally hilarious) examples which, paradoxically, will make you think a lot about your website design and user experience. The beautifully obvious takeaways make it both a satisfying and highly actionable read, guaranteed to trigger your first few site changes and make you think about your longer term optimisation strategy.

 

The Design of Everyday Things – Donald A. Norman


The Design of Everyday Things is one of those books which will enrich your thinking far beyond your professional remit. Besides the vast reassurance that you’re not alone in your daily struggle against doors, microwaves and all the minutiae of present-day life, the takeaways for e-commerce managers, UX designers (or anyone concerned with web performance for that matter) are nothing short of profound. By delving into the mechanics of human-environment interactions using concepts like perceptual psychology and embodied cognition, it delivers a reading experience that has legitimately been described as ‘life changing’. Everything will look different after reading this book, not least of all your website, which is sure to undergo some gestaltian re-analysis after you’re done.

 

User Experience

Hooked – Nir Eyal


Not your run-of-the-mill UX recommendation, true, but Hooked makes the list here for its priceless contribution to understanding the importance of UX in customer habit formation and retention. 30% of an e-commerce website’s customer base purchase only once per year – Hooked shows you how to create a sense of dependency in key moments and then keep users engaged enough to guarantee their return through association, just as soon as their needs arise again. As Nir Eyal puts it – “the result of engagement is monetisation”. The book centres around a clever model known as ‘the hook canvas’, and is handily split into succinct chapters for each phase.

 

Psychology/Behavioural Economics

Influence: The Psychology of Persuasion


The seminal guide to persuasion, Influence single-handedly opened the literary floodgates of consumer psychology and behavioural economics to the masses. Cialdini’s six principles of persuasion – reciprocity, commitment, social proof, liking, authority and scarcity – have become a major basis for every subsequent publication in the field, so if you’re only really going to give the time of day to one, let this be it. When you start running experiments based on cognitive-behavioural levers, you’ll get the testing bug, not just because you’re playing directly on your consumers’ motivations, but because it creates so much potential for agile optimisation and reactive campaign formation. Loaded with enchanting stories that make for a surprisingly fluid read, it’s guaranteed to stay with you, especially as you might read it two or three times.

 

The psychology of price – Leigh Caldwell


Applying well-documented behavioural insights, The Psychology of Price gives a solid structure to pricing effects you didn’t realise you already knew (most likely from experience), all set to the backdrop of the fictitious Chocolate Teapot Company. This works perfectly by helping you to absorb all the theory whilst providing tangible examples, with practical application guidance and case studies throughout. Add to that the list of 36 solid pricing techniques providing abundant price test ideas, and the book will likely pay for itself several hundred times over. Putting this in your bookcase is definitely a no-brainer.

While there are definitely many worthwhile books out there for this diverse (and demanding) profession, these would have to be our five ‘desert-island’ e-commerce necessities. Are there any you think we’ve missed? What did you make of our list? Let us know in the comments below!

How we increased revenue by 11% with one small change

Split testing has matured and more and more websites are testing changes. The “test everything” approach has become widespread and this has been a huge benefit for the industry. Companies now know the true impact of changes and can avoid costly mistakes. The beauty of testing is that the gains are permanent, and the losses are temporary.

Such widespread adoption of testing has brought the challenge that many tests have small or no impact on conversion rates. Ecommerce managers are pushing for higher conversion rates with the request:

“We need to test bigger, more radical things”

Hoping that these bigger tests bring the big wins that they want.

Unfortunately, big changes don’t always bring big wins, and this approach can result in bigger, more complex tests, which take more time to create and are more frustrating when they fail.

How a small change can beat a big change

To see how a well thought out, small change can deliver a huge increase in conversion rates, where a big change had delivered none, we can look at a simple example.

This site offers online driver training courses, allowing users to have minor traffic tickets dismissed. Part of the process gives users the option to obtain a copy of their “Driver Record”. The page offering this service to customers was extremely outdated:

Wireframe to demonstrate the original page layout for the driver record upsell

Conversion and usability experts will panic at this form with its outdated design, lack of inline validation and no value proposition to convince the user to buy.

The first attempt to improve this form was a complete redesign:

Wireframe to show the initial test designed to increase driver record upsells

Although aesthetically more pleasing, featuring a strong value proposition and using fear as a motivator, the impact of this change was far from that expected. Despite rebuilding the entire page, there was almost no impact from the test. The split test showed no statistically significant increase or decrease.

This test had taken many hours of design and development work, with no impact on conversion, so what had gone wrong?

To discover the underlying problem, the team from Conversion.com placed a small Qualaroo survey on the site. This popped up on the page, asking users “What’s stopping you from getting your driver record today?”


 

Small on-page surveys like this can be extremely valuable in delivering great insights about users, and this was no exception. Despite many complaints about the price (out of scope for this engagement), users repeatedly said that they were having trouble finding their “Audit Number”.

The audit number is a mandatory field on the form, and the user can find it on their driver’s license. Despite there being an image on the page already showing where to find this, clearly users weren’t seeing it.

The hypothesis for the next version of this test was simple.

“By presenting guidance about where to find the audit number in a standard, user friendly way at the time that this is a problem for the user, fewer users will find this to be an issue when completing the form.”

The test made an extremely small change to the page, adding a small question mark icon next to the audit number field on the form:

Wireframe to show the small addition of a tooltip to the test design

This standard usability method would be clear for users who were hesitating at this step. The lightbox, which opened when the icon was clicked, simply reiterated the same image that was on the page.
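
As a rough sketch only (this is not the code we ran for the client), the pattern looks something like this: a help icon next to the field that reveals the existing guidance on demand. The selectors and image path below are made up:

```typescript
// Minimal sketch of the tooltip pattern: a help icon next to the audit number
// field that reveals the existing guidance image on demand.
function addAuditNumberHelp(): void {
  const field = document.querySelector<HTMLInputElement>("#audit-number");
  if (!field) return;

  const icon = document.createElement("button");
  icon.type = "button";
  icon.textContent = "?";
  icon.setAttribute("aria-label", "Where do I find my audit number?");

  const help = document.createElement("div");
  help.hidden = true;
  help.innerHTML =
    '<img src="/images/audit-number-location.png" alt="Your audit number is printed on your driver licence" />';

  // Toggle the guidance when the icon is clicked.
  icon.addEventListener("click", () => {
    help.hidden = !help.hidden;
  });

  field.insertAdjacentElement("afterend", icon);
  icon.insertAdjacentElement("afterend", help);
}

document.addEventListener("DOMContentLoaded", addAuditNumberHelp);
```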


Despite this being a tiny change, the impact on users was enormous. The test delivered an 11% increase in conversions against the version without the icon. By presenting the right information, at the right time, we delivered a massive increase in conversions without making a big change to the page.

An approach to big wins

So was this a fluke? Were we lucky? Not at all. This test demonstrated the application of a simple but effective approach to testing which can give great results almost every time. There’s often no need to make big or complex changes to the page itself. You can still make radical, meaningful changes with little design or development work.

When looking to improve the conversion rate for a site or page, by following three simple steps you can create an effective and powerful test:

  1. Identify the barrier to conversion.
    A barrier is a reason why a user on the page may not convert. It could be usability-related, such as broken form validation or a confusing button. It could be a concern about your particular product or service, such as delivery methods or refunds. Equally, it could be a general concern for the user, such as not being sure whether your service or product is the right solution to their problem. By using qualitative and quantitative research methods, you can discover the main barriers for your users.
  2. Find or create a solution.
    Once you have identified a barrier, you can then work to create a solution. This could be a simple change to the layout of the site; a change to your business practices or policies; supporting evidence or information; or compelling persuasive content such as social proof or urgency messaging. The key is to find a solution which directly targets the barrier the user is facing.
  3. Deliver it at the right time.
    The key to a successful test is to deliver your solution to the user when it’s most relevant to them. For example, price promises and guarantees should be shown when pricing is displayed; delivery messaging on product pages and at the delivery step in the basket; social proof and trust messaging early in the process; and urgency messaging when the user may hesitate. For a message to be effective, it needs to be displayed on the right page, in the right area, at the right time for the user to see it and respond to it.

By combining these three simple steps, you can develop tests which are more effective and have more chance of delivering a big result.

Impact and Ease

Returning to the myth that big results need big tests, you should make sure that you consider the impact of a test and its size as almost completely different things. When you have a test proposal, you should think carefully about how much impact you believe it will have, and look independently at how difficult it will be to build.

At Conversion.com, we assess all tests for Impact and Ease and plot them on a graph:

Graph plotting test ideas by impact and ease

Clearly the tests in the top right corner are the ones you should be aiming to create first. These are the tests that will do the most for your bottom line, in the shortest amount of time.
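
If you want to keep this scoring lightweight, a simple structure is enough. The sketch below is illustrative only – the test ideas and scores are made up – but it shows the ranking logic of favouring high-impact, high-ease ideas:

```typescript
// Minimal sketch: scoring test ideas on impact and ease, then sorting so the
// "top right" ideas (high impact, high ease) come first.
interface TestIdea {
  name: string;
  impact: number; // 1 (low) to 5 (high)
  ease: number;   // 1 (hard to build) to 5 (quick to build)
}

const backlog: TestIdea[] = [
  { name: "Audit number tooltip", impact: 4, ease: 5 },
  { name: "Full page redesign", impact: 4, ease: 1 },
  { name: "Headline copy test", impact: 3, ease: 5 },
];

const prioritised = [...backlog].sort(
  (a, b) => b.impact + b.ease - (a.impact + a.ease)
);
console.log(prioritised.map((t) => t.name));
// ["Audit number tooltip", "Headline copy test", "Full page redesign"]
```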

More impact, more ease

So how do you make sure that you can deliver smaller tests with bigger impact?

Firstly, maximise the impact of your test. You can do this by targeting the biggest barriers for users. By taking a data-driven approach to identifying these, you are already giving your test a much higher chance of success. With a strong data-backed hypothesis, you know you are tackling a real problem for your users.

You can increase the impact by choosing the biggest barriers. If a barrier affects 30% of your users, that will have far more impact than one only mentioned by 5% of your users. Impact is mostly driven by the size of the issue as overcoming it will help the most users.

To get the biggest impact from smaller tests, you need to look at how you can make tests easier to create. By choosing solutions which are simple, you can iterate much more quickly and get winners. Effective ways of developing simple tests include:

  • Headline testing – headlines are a great way to have a huge impact on a user’s behaviour with very little effort. They are the first part of the page a user will read and allow you to set their mindset for the rest of the session
  • Tooltips and callouts – In forms these can be hugely effective. They are small changes but capture the user’s attention when they are thinking about a specific field. By matching security messaging to credit card fields, privacy messaging to email and phone number fields and giving guidance to users when they have to make difficult selections, it is easy to have an impact on their behaviour with a very small change.
  • Benefit bars – with a huge potential impact (being delivered on every page) but a small impact on page design and layout (usually slotting in below the navigation), benefit bars can be a very effective way of getting your core messaging across to a user.
  • Copy testing – by changing the copy at critical parts of the site you can impact the user’s feelings, thoughts and concerns without any complex design or development work

A simple approach for big wins with small tests

By following the simple three step process, you can greatly increase the impact and rate of your tests, without having to resort to big, radical, expensive changes:

  1. Identify the barrier to conversion.
  2. Find or create a solution.
  3. Deliver it at the right time.

The impact of your testing programme is driven more by the size of the issues you are trying to overcome and the quality of your hypotheses than by the complexity and radical approaches in your testing. Focusing time on discovering those barriers will pay off many times more than spending that time in design and development.

5 questions you should be asking your customers

On-site survey tools provide an easy way to gather targeted, contextual feedback from your customers. Analysis of user feedback is an essential part of understanding motivations and barriers in the decision making processes.

It can be difficult to know when and how to ask the right questions in order to get the best feedback without negatively affecting the user experience. Here are our top 5 questions and tips on how to get the most out of your on-site surveys.

On-site surveys are a great way to gather qualitative feedback from your customers. Available tools include Qualaroo and Hotjar.

1. What did you come to < this site > to do today?

Where: On your landing pages

When: After a 3-5 second delay

Why: First impressions are important and that is why your landing pages should have clear value propositions and effective calls to action. Identifying user intentions and motivations will help you make pages more relevant to your users and increase conversion rates at the top of the funnel.

2. Is there any other information you need to make your decision?

Where: Product / pricing pages

When: After scrolling 50% / when the visitor attempts to leave the page

Why: It is important to identify and prioritise the information your users require to make a decision. It can be tempting to hide extra costs or play down parts of your product or service that are missing but this can lead to frustration and abandonment. Asking this question will help you identify the information that your customers need to make a quick, informed decision.
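
Tools like Qualaroo and Hotjar handle this targeting for you, but as an illustration of the trigger logic described above (this is not how those tools are implemented), a vanilla sketch might look like this, with showSurvey standing in for whatever survey widget you embed:

```typescript
// Minimal sketch of the trigger logic: show a survey prompt after a short delay,
// at 50% scroll depth, or on exit intent. showSurvey() is a placeholder.
function showSurvey(question: string): void {
  console.log(`Survey shown: ${question}`);
}

let surveyShown = false;
function showOnce(question: string): void {
  if (!surveyShown) {
    surveyShown = true;
    showSurvey(question);
  }
}

// 1. Time delay (landing pages)
setTimeout(() => showOnce("What did you come to this site to do today?"), 4000);

// 2. Scroll depth (product / pricing pages)
window.addEventListener("scroll", () => {
  const scrolled = window.scrollY / (document.body.scrollHeight - window.innerHeight);
  if (scrolled >= 0.5) showOnce("Is there any other information you need to make your decision?");
});

// 3. Exit intent: cursor leaving through the top of the viewport
document.addEventListener("mouseout", (e) => {
  if (!e.relatedTarget && e.clientY <= 0) {
    showOnce("Is there any other information you need to make your decision?");
  }
});
```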

3. What is your biggest concern or fear about using us?

Where: Product / pricing pages

When: After a 3-5 second delay

Why: Studies have found that “…fear influences the cognitive process of decision-making by leading some subjects to focus excessively on catastrophic events.” Asking this question will help you identify and alleviate those fears, and reduce the negative effect they may be having on your conversion rates.

4. What persuaded you to purchase from us today?

Where: Thank you / confirmation page

When: Immediately after purchase. Ideally embedded in the page (try Wufoo forms)

Why: We find that some of our most useful insights come from users who have just completed a purchase. It’s a good time to ask what specifically motivated a user to purchase. Asking this question will help you identify and promote aspects of your service that are most appealing to your customers.

5. Was there anything that almost stopped you buying today?  

Where: Thank you / confirmation page

When: Immediately after purchase

Why: We find that users are clearer about what would have stopped them purchasing after they have made a purchase. Asking this question can help you identify the most important barriers that are preventing users from converting. Make sure to address these concerns early in the user journey to avoid surprises and reduce periods of uncertainty.

What questions have you asked your customers recently? Have you asked anything that generated valuable insights? Share in the comments below!

Mine your spam email – it’s full of tips on how to be more persuasive.

Spam email copywriters have to work hard. They are the illegal street traders of the email world, flogging fake meds and pushing casino offers down the alley that is your spam folder.

You can’t succeed in the cut-throat world of spam without using a few clever tricks and persuasion techniques, and the spam folder can be a veritable gold mine of inspiration and ideas for how to be more persuasive.

To demonstrate, here is a screenshot of my spam folder. This covers about a week.


Almost every email is using one or more persuasion techniques to persuade me to click. Here are my favourites:

Making the sender a person

Just under half of these emails claim to be sent from a person rather than a company. The sender column in each case shows the full name of a person. This is an effective persuasive technique for a number of reasons.

  • A person’s full name adds legitimacy, no matter what the content of the email.
  • A person’s name, rather than the company name, suggests this is a specific member of staff getting in touch with me directly.
  • Names have associated familiarity. For example, the second email is from Amber. Perhaps I met someone called Amber recently. This could be her getting in touch with me again. It’s worth a quick click just to be sure.
  • All the names have something in common – they’re women’s names. I’d be surprised if targeting a man with emails from what appear to be women was an accident.

In a sea of emails where the senders are companies, a person’s name immediately distinguishes that email as more worthy of my attention. In the spam email business, attention equals clicks.

Outside of spam emails, giving your business a human face (and name) can be equally effective. On-site customer service is an area where this can work well. Live-chat popups will frequently now show the name, and sometimes even a friendly picture, of the agent that you’ll be talking to. If you’re a lead generation business, a worthwhile test could be to make your contact form more personal, with names and photos of your service team. At Conversion.com we carry out a lot of email surveys and we’ll always ask for a customer service agent’s name to use as the sender of our emails. It looks less like an automated email, and this generates a higher response rate.

Addressing your customers by name

At some point I have given my first name to the people over at Gala Bingo and 888.com. It is good to see that it’s being put to good use. They have both used my name as the first word in their subject lines.


We are all primed to notice mentions of our own name, whether spoken or written. Most of us will at some point have found ourselves suddenly listening to someone’s conversation because we hear them mention our name. It doesn’t even have to be our name, often just a word that sounds similar can have the same effect.

When scanning this long list of emails, my first name is bound to stand out and grab my attention. Spammers know this is an effective strategy. They are so keen to use it that they will even take a gamble on the part before the @ in your email address being your name and address you by that. My full email address would still stand out – the digital equivalent of my name – and chances are that I will read the subject line. Quite an achievement when most of these emails will normally be deleted before they are even seen.

A customer’s name is a powerful persuasive weapon when used effectively. The customer experience immediately feels more personalised when names are used. If you can personalise the content at the same time then you’re in a very strong position.

It’s often stated as best practice when collecting customer information to remove as many fields as possible. Many sign-up forms have moved to being just an email address and password, with no name field. Whilst this may get you a few extra sign-ups at first, your effectiveness at converting those sign-ups to sales may be impacted by not knowing that customer’s name. The safest bet is always to split-test it and measure the conversion rate to sale of the name vs no-name cohorts.

Using a question to generate an answer

The third email down in my list (apparently from Eva Webster) is asking me a direct question.


The question stands out. This particular question is phrased like a challenge, and the promise of a challenge might actually be sufficient to get my attention. People often check their emails when bored, so it doesn’t take much to get their initial interest. Plus it’s human nature when challenged in some way to want to prove that you are up to the task.

Using questions in your copy is an effective technique in general because, when someone asks a question, you can’t help but instantly think of your answer. In the case of spam email this might just be enough to stop you in your tracks as you scan down your inbox. Using a question as a headline can be an effective way to capture your reader’s attention and establish their mindset as ready to engage with the rest of your content.

Questions work particularly well in certain industries. Take cosmetics for example. There’s a mould for cosmetic industry TV adverts where they start with a model asking you a direct question such as “Do you want longer, fuller lashes?”. Starting with a question is so effective in this industry as it plays on the insecurities of the audience. Even if you didn’t want longer, fuller lashes, chances are you’re now aware that maybe you should. Then, luckily for you, the rest of the advert tells you exactly how you can get those longer, fuller lashes that you didn’t know you needed. It’s a very effective way to capture the customer’s attention and get them thinking about your product.

Using fear of missing out to motivate

From the sheer volume of spam they are sending my way it does seem like 888.com are determined to try every trick in the book in the hope that one might work on me. Here is an example of them using the scarcity principle to try and provoke a response.


This is nicely phrased to give the impression that I am wasting a great opportunity. The “Hurry!” at the end is both commanding me to take action and emphasising that there is a limited timeframe involved. This email is much more likely to get my attention than one where there is no sense of urgency.

This fear of missing out is not a new concept, and examples of its use are everywhere. Low stock indicators on ecommerce sites, next-day delivery countdown timers and simple limited time offers are fairly commonplace. Some fashion retailers will even have a “last chance to see” section of the site that only contains items that you might miss out on if you don’t buy them now.

Nearly all of the emails in this list use one technique or another to try and persuade me  to click. Some of the best use multiple techniques combined.  Here are the four key techniques we’ve seen in just this small selection of emails.

  • Making the sender a person
  • Addressing your customers by name
  • Using a question to provoke an answer
  • Using fear of missing out to motivate

Why not take a look through your junk mail folder and see how many different persuasion techniques you can spot being used?

Where else can we see persuasion techniques in action?

We’ve used my spam folder here as an example, but persuasion techniques like these are in use everywhere you look. Next time you find yourself compelled to open a particular email, influenced by a certain advert, or persuaded to buy something online, ask yourself these quick questions and see which persuasion techniques you were influenced by.

  • What was the first thing about this that caught my attention?
  • What did I see next that made me engage further?
  • What about this eventually made me take action?

When you find persuasion techniques working on you, look for ways you can use them in your own marketing. After all, if they’ve worked on you they will probably work on other people too.

10 Quick wins to increase your web form conversion rate: part 2

This is the second post in a two-part series: 10 quick wins to increase your web form conversion rates. You can find part 1 here

6. Inline question mouse highlighting

Comparethemarket.com offers a great way of using the user’s mouse movement to highlight the question they are on.

Comparethemarket use a hover feature to highlight what question the user is on.

This helps to keep the user focused on what’s important – completing the quote. However, if the user does happen to become distracted, upon their return the field will continue to stand out. This will again reduce the effort placed on the user.

As you’ve got this far, I’ll give you another free tip (turning your forms up to 11)! When a user completes a field, why not offer them visual confirmation of their achievement? At the start of 2014 we user-tested a number of Axure interactive wireframes. We found that placing a tick next to a completed field not only offers the user visual confirmation and reassurance, it also acts as a visual progress bar throughout a question set.
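
As a rough sketch of both ideas – highlighting the active question and ticking off completed fields – something like the following would do the job. The selectors and styling are illustrative:

```typescript
// Minimal sketch: highlight the field the user is on, and show a tick once it
// has been completed.
document.querySelectorAll<HTMLInputElement>("form .question input").forEach((input) => {
  const row = input.closest<HTMLElement>(".question");
  if (!row) return;

  // Highlight the active question so a distracted user can find their place again.
  input.addEventListener("focus", () => {
    row.style.background = "#fffbe6";
  });
  input.addEventListener("blur", () => {
    row.style.background = "";
  });

  // Add a tick when the field has a value, acting as a lightweight progress indicator.
  const tick = document.createElement("span");
  tick.textContent = "✓";
  tick.style.visibility = "hidden";
  row.appendChild(tick);

  input.addEventListener("change", () => {
    tick.style.visibility = input.value.trim() ? "visible" : "hidden";
  });
});
```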

7. Placeholder text in each field

Rather than leaving fields blank, pre-populating them with common answers can really help.

Hopefully, you have a rough idea of your primary persona. Do you know what users normally select in your form? Could you pre-populate less business-critical questions with some answers? This will reduce interactions, increasing the completion rate for those types of users.

On questions where you can’t pre-populate the answer, be sure to add placeholder text. By leaving the field blank, you’re missing an opportunity to help your users. Being able to see an example of the answer will help to reduce the effort placed on the user. We also found that it helps to reduce the chance that users inadvertently skip a question.

8. Tooltips accessibility

During usability testing, we noticed that a particular field was causing users to question “why” the company actually needed to know this information (“what is your marital status”). The reason why the company wanted to know this was explained in the tooltip. However, the tooltip in question could only be accessed once the user clicked into the field.

Clicking into the field reveals the tooltip, but it also covers part of the header question – Moneysupermarket.com 02/04/2015.

Forms and tooltips go hand in hand. If you’re asking users to spend time filling out their information, make sure information within tooltips can be accessed before the user interacts with the field. The most standard way of displaying a tooltip icon is a small question mark.

Bonus tip: If you do implement tooltips, be sure to check them on phones and tablets – they’ll need a tap target of at least 36px to ensure they’re easily interacted with.

9. Progressive disclosure

Users are put off at first sight of a long and complicated question set. It’s no coincidence, then, that moneysupermarket.com, confused.com and comparethemarket.com ask users for their registration number before starting a quote. By presenting a basic question to start a quote, users are not immediately put off, whilst the initial investment (no matter how small) increases the likelihood that they’ll complete the rest of the form. This is because of the sunk cost bias – once a user has already committed to and spent time on a part of your form, they are motivated to see the task through to completion to avoid the initial effort going to waste.

Whilst gaining commitment from the user can help to reduce bounce rates, there are other techniques to hide a large question set.

A simple option would be to hide certain future questions until the user has filled in another. Google Compare does this dynamically with the protected no claims discount field. There are quite a few different ways of applying the technique to reduce the initial impact on users when they first see the form.
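
As a rough illustration of that pattern (the field names below are made up, and this is not Google Compare’s implementation), revealing a follow-up question only when it becomes relevant can be as simple as:

```typescript
// Minimal sketch of progressive disclosure: hide a follow-up question until the
// answer that makes it relevant is chosen.
const ncdQuestion = document.querySelector<HTMLElement>("#protected-ncd-question");
const hasNcd = document.querySelectorAll<HTMLInputElement>('input[name="has-no-claims-discount"]');

hasNcd.forEach((radio) => {
  radio.addEventListener("change", () => {
    if (ncdQuestion) {
      // Only reveal the "protect your no claims discount?" question when it applies.
      ncdQuestion.hidden = radio.value !== "yes";
    }
  });
});
```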

Google Compare hides the protected no claims discount question until the user tells them that they have no claims discount.

One last way of reducing the impact on users would be to split the questions over multiple pages – a progress indicator is something worth considering here too, to let users know where they are in the process. However, as with any changes you make, be sure to test it thoroughly, as having too many pages can just as easily put users off. The trick is to tweak and test iteratively until you find the perfect balance.

10. Ambiguous Answers

Finally, let’s look at the pre-set answers to your questions. When analysing them, you’ll need to ask yourself “what happens if the user doesn’t fit into that option you’ve provided?”

Users will likely do 1 of 4 things:

  • If you have a telephone number/email/live chat, (hopefully) they will contact you to find out what to do.
  • They may try to guess the answer (your analysts will hate that)!
  • They may search online to find out the answer, and if your search marketing doesn’t cover those terms, you may lose their business here.
  • They give up, and you’ve lost a potential customer as well as damaging your brand.

The answer to this is to offer an “other” or “not sure” option. This won’t please analysts who want clean data, but it’s better than losing the sale. Once implemented, it will significantly reduce the chance of users dropping out over confusing or complicated questions.

If you’re interested in capturing more information (and you don’t mind analysing it), it’s worth displaying a free text field to capture what those “other” responses are. This way you can work them back into the question set and clean up your data.
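
As a quick sketch of that idea (the field names are illustrative), revealing the free text field only when “Other” is selected might look like this:

```typescript
// Minimal sketch: reveal a free-text field when the user selects "Other",
// so ambiguous answers are still captured.
const select = document.querySelector<HTMLSelectElement>("#employment-status");
const otherField = document.querySelector<HTMLInputElement>("#employment-status-other");

if (select && otherField) {
  select.addEventListener("change", () => {
    otherField.hidden = select.value !== "other";
    if (otherField.hidden) otherField.value = ""; // clear stale text if they change their mind
  });
}
```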

Final thoughts

I’m conscious that we’ve gone through a lot of different points in this series. We’ve covered a lot of common mistakes in form usability that can affect the user’s journey and ultimately your conversion rates, many of which can be very quick and easy to change. However, whilst we’ve seen that these common mistakes can reduce your conversion rates, please do not make the number one mistake of making changes without testing them first. What works on one site may not work on another.

The primary reason behind this article is to help you to find ways of improving your form usability. It pains me to visit a website and come across, time and time again, the same easy-to-rectify mistakes. The user experience of your site matters – do not underestimate the value of a good experience.

10 Quick wins to increase your web form conversion rate: part 1

This post is the first of a two part series discussing the quick wins – and pitfalls – that could make a dramatic difference to your form completion rates and your customers’ experience.

Form usability can be a tricky area of web design. There are many examples of websites investing vast amounts of time and money – almost in an F1 fashion – just to make minor tweaks that fine-tune their forms for the best customer experience. On the other hand, there are still plenty of websites that could do with going back to basics and doing some good, old fashioned research. Insight methods such as usability testing, heat mapping, surveys and form analytics will highlight pain points in web forms. Getting started or knowing where to start can be a difficult task. In the next section, we’ll go through some basic but important areas of usability that can dramatically increase conversion rates. As ever, with any changes, be sure to test them!

1. The number of questions – do you absolutely need them?

Have you ever found yourself filling in an online form, only to be faced with a never-ending set of questions?  As a user, I’m always hoping that the end result will justify the hard work. As an optimizer, I’m always looking to gather the least information required to move the user through the form. Even a small reduction in questions can drive a dramatic increase in completion rates. Before you think about removing any questions you’ll need to go through a few simple, but important tasks:

  • Step 1: Go through your question set and note each one down. I find Google Sheets or Excel the easiest place to start.
  • Step 2: Note what type of form field is used to capture the answer, e.g. dropdown / free text / radio button.
  • Step 3: Add the possible answers for each question (or N/A if the question is open-ended).
  • Step 4: Mark each question to show if it is mandatory.

An example of how you could lay out your question set analysis. In a previous life I worked in the insurance industry – the first time I carried out this task, I noted 67 questions (poor customers)!

Next up, time to review the questions! At this point if you’re not in complete control of the question set, you may need to discuss the impact of removing them with the relevant members of staff.

Start by challenging each question with the following:

  • Is this question essential or just “nice to have”?
  • Is this question still relevant, or is it legacy?
  • For any question that you’ve not marked as “mandatory” – do you still need it?
  • Could you use an API to look up that information rather than asking it?

Like the mobile first approach, every question on your site should serve an essential purpose. If it’s not useful, remove it!

Comparison sites often offer great examples for how to shorten a question set. They use APIs such as the DVLA vehicle lookup to find information such as make/model/seats/transmission etc. They have even experimented with an API which reports back an average vehicle price, to reduce the deliberation which causes customer friction.

Screenshots showing how MSM use a vehicle lookup API to reduce the number of questions required.

2. User interactions

Removing questions will reduce unnecessary user interactions. Why is the number of user interactions important? The more times you ask your users to interact with or think about something, the more cognitive and motor effort is required. As Steve Krug quite eloquently puts it… “Don’t make me think”!

In the same Excel sheet it is worth noting how many interactions it takes to answer your question set. For example, a simple drop-down box might need up to 3 interactions:

  • Initial click on the drop down
  • A scroll of the mouse wheel
  • Click to choose an option

Whilst this sounds like a tedious task, it’s useful to know how many interactions are needed to complete your registration form, not just the number of questions you’ve asked.
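
If a spreadsheet feels unwieldy, the same audit can live in a small structure. The sketch below is purely illustrative – the questions, field types and interaction counts are made up – but it shows how you might total the effort and flag the optional questions to challenge:

```typescript
// Minimal sketch of the audit described above: each question's field type,
// whether it's mandatory, and roughly how many interactions it costs the user.
type FieldType = "dropdown" | "free text" | "radio" | "calendar";

interface QuestionAudit {
  question: string;
  fieldType: FieldType;
  mandatory: boolean;
  interactions: number;
}

const audit: QuestionAudit[] = [
  { question: "Vehicle registration", fieldType: "free text", mandatory: true, interactions: 1 },
  { question: "Transmission", fieldType: "dropdown", mandatory: true, interactions: 3 },
  { question: "Marketing opt-in", fieldType: "radio", mandatory: false, interactions: 1 },
];

const totalInteractions = audit.reduce((sum, q) => sum + q.interactions, 0);
const optional = audit.filter((q) => !q.mandatory).map((q) => q.question);
console.log({ totalInteractions, optionalQuestionsToChallenge: optional });
```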

Drop downs vs radio buttons

Are you using dropdown menus where you could be using radio buttons?

Replacing dropdowns which only have a few options (as a general rule, anything under 6 answers) with radio buttons provides a quick win. Not only do radio buttons require fewer interactions, they allow users to see the answers before they interact with the field, speeding up the question and answer process. This is also a big benefit on mobile, where users find dropdowns much harder to interact with. Yes, this will create longer pages, but thanks to social media and mobile adoption, users have learnt how to scroll proficiently.

Example showing how a drop down could be turned into a radio button to reduce user interactions.

Mobile

Is there a way you can help the user move through the journey? By automatically scrolling the browser/app window to the next section, fewer interactions from the user are needed. This is something Virgin America do to help users navigate smoothly through the process:

Virgin America’s use of an auto-scrolling responsive site makes using a mobile significantly easier.

Get creative, don’t make users think.
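
As a rough sketch of the auto-scroll pattern (this is not Virgin America’s code, and the selector is made up), you could bring the next question into view as soon as the current one is answered:

```typescript
// Minimal sketch: once a question is answered, scroll the next one into view
// so the user doesn't have to do it manually.
const steps = Array.from(document.querySelectorAll<HTMLElement>("form .question"));

steps.forEach((step, i) => {
  step.addEventListener("change", () => {
    const next = steps[i + 1];
    next?.scrollIntoView({ behavior: "smooth", block: "center" });
  });
});
```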

Example

During usability testing, we tasked our users with getting car insurance quotes from comparison sites and compared their experience on moneysupermarket.com with competitors. Whilst comparethemarket.com asks a similar number of questions to its competitors, users not only completed its forms faster, but also found it easier to use, simply because it takes fewer interactions to get a quote.

Comparethemarket.com uses big button-style radio buttons to remove the need for drop-down boxes.

3. Calendars

There’s a time and a place for calendars, i.e. when it’s important for the user to know what day of the week a particular date falls on, such as when booking a holiday. Anywhere else, it’s a needless calendar – and my personal pet hate!

Here are a number of common issues:

  • Generally, stock calendars offer very poor affordance
  • It’s not clear whether to choose the day or year first
  • It’s not clear that you need to choose a year. I’ve watched helplessly as users select a day but then ignore the year section. Thinking they’ve done what’s required, they click onto the main page to hide the calendar, only to find that the field is not completed. This completely confuses users, as they don’t understand what went wrong. At this point, they’re stuck in an endless loop trying to figure out how to complete your form.
  • After choosing a date, if you want to change the year (let’s say your user made a mistake), your user will need to not only change the year, but also then reselect the day again.

Two common poor examples of calendars used by websites to capture a user’s Date of Birth.

At this stage, there are two options:

  1. Either completely remove the calendar and keep a text field/drop down box (this makes more sense for a DOB field, for example).
  2. Spend time ensuring the calendar is both clear and user-friendly. I would highly recommend carrying out user testing to make this as friction-free as possible.

4. Readability

This is a nice simple one! How readable are the questions and answers on your site?

Are you using a big enough point size and line spacing for your target audience? What is your primary persona’s age – has this been taken into consideration? A great example is Saga (over 50s). Their customers may require a larger font size than users who visit www.onedirectionmusic.com! The recommended size will vary depending on your brand’s chosen font. Using Arial as a baseline, the question titles should be 16px with the question labels falling to 14px.

Standard font size for form question titles and their labels. Note how the titles are slightly bigger than the labels. This provides an easy way of identifying the hierarchy.

When users fill out forms their eyes will flicker between the question and the answer. This will be more pronounced on sites where it’s imperative that the user enters the correct information – such as a mortgage or job application. The further the question is from the answer, the more distance the eye needs to cover, which translates into more effort for the user. Conversely, the closer the question and answer, the less effort.

Placing the question title above the answer is the best way of reducing the distance the eye needs to travel, thus reducing effort.

A number of different examples of the ways websites lay out their forms. Placing the question title above the question label has been shown in countless eye-tracking tests to reduce the time it takes for the eye to move between title and label.

5. Inline error handling

I’m sure that we’ve all had this common experience. You encounter a long question set (with dread, I might add), complete it, only to find on submission that you’ve made a number of errors… Gah, time to scroll through and find what you’ve done wrong!

Why wait for the form submission to notify users of an error? Rather than wait for the user to reach the end of the form, highlight the error as soon as possible. This will keep the user focused on what they were doing and keep de-motivation to a minimum.

Lastly, make sure the error-handling wording is helpful, rather than ambiguous. “Please select a correct value” or “Please tell us your name”  is more helpful than “answer is invalid” or “field cannot be empty”, because it addresses the user, not the database validation.
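
As a final sketch (the field IDs and messages below are illustrative), inline validation with user-friendly wording can be as simple as checking the field when the user leaves it:

```typescript
// Minimal sketch of inline error handling: validate a field as soon as the user
// leaves it, with a message that addresses the user rather than the database.
const emailInput = document.querySelector<HTMLInputElement>("#email");
const emailError = document.querySelector<HTMLElement>("#email-error");

if (emailInput && emailError) {
  emailInput.addEventListener("blur", () => {
    const value = emailInput.value.trim();
    if (value === "") {
      emailError.textContent = "Please tell us your email address";
    } else if (!value.includes("@")) {
      emailError.textContent = "That email address doesn’t look right – please check it";
    } else {
      emailError.textContent = ""; // clear the message once the field is valid
    }
  });
}
```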

For part 2, click here.