How we increased revenue by 11% with one small change

Dave Gowans

Split testing has matured and more and more websites are testing changes. The “test everything” approach has become widespread and this has been a huge benefit for the industry. Companies now know the true impact of changes and can avoid costly mistakes. The beauty of testing is that the gains are permanent, and the losses are temporary.

Such widespread adoption of testing has brought a challenge: many tests have a small impact on conversion rates, or none at all. Ecommerce managers are pushing for higher conversion rates with the request:

“We need to test bigger, more radical things”

They hope that these bigger tests will bring the big wins they want.

Unfortunately, big changes don’t always bring big wins, and this approach can result in bigger, more complex tests, which take more time to create and are more frustrating when they fail.

How a small change can beat a big change

To see how a well-thought-out small change can deliver a huge increase in conversion rates where a big change delivered none, we can look at a simple example.

This site offers online driver training courses that allow users to have minor traffic tickets dismissed. Part of the process gives users the option to obtain a copy of their “Driver Record”. The page offering this service to customers was extremely outdated:

Wireframe to demonstrate the original page layout for the driver record upsell

Conversion and usability experts will panic at this form, with its outdated design, lack of inline validation, and no value proposition to convince the user to buy.

The first attempt to improve this form was a complete redesign:

Wireframe to show the initial test designed to increase driver record upsells

Although aesthetically more pleasing, featuring a strong value proposition and using fear as a motivator, the impact of this change was far from what was expected. Despite rebuilding the entire page, the split test showed no statistically significant increase or decrease.
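A verdict of “no statistically significant change” typically comes from a standard two-proportion test on the two variations. As a minimal sketch (with entirely hypothetical visitor and conversion counts, not the figures from this engagement), the check might look like:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF: phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 5,000 visitors per arm, near-identical conversion.
z, p = two_proportion_z_test(400, 5000, 410, 5000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05: no significant difference
```

A redesign this size would normally be called a winner only if the p-value fell below the chosen significance threshold (commonly 0.05); here it does not.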

This test had taken many hours of design and development work, with no impact on conversion, so what had gone wrong?

To discover the underlying problem, the team from Conversion.com placed a small Qualaroo survey on the site. This popped up on the page, asking users “What’s stopping you from getting your driver record today?”

Small on-page surveys like this are extremely valuable in delivering insights about users, and this was no exception. Despite many complaints about the price (out of scope for this engagement), users repeatedly said that they were having trouble finding their “Audit Number”.

The audit number is a mandatory field on the form, and the user can find it on their Driver’s License. Despite an image on the page already showing where to find it, users clearly weren’t seeing it.

The hypothesis for the next version of this test was simple.

“By presenting guidance about where to find the audit number in a standard, user-friendly way, at the time that this is a problem for the user, fewer users will find this to be an issue when completing the form.”

The test made an extremely small change to the page, adding a small question mark icon next to the audit number field on the form:

Wireframe to show the small addition of a tooltip to the test design

This standard usability method would be clear to users who were hesitating at this step. The lightbox, which opened when the icon was clicked, simply reiterated the same image that was already on the page.

Despite this being a tiny change, the impact on users was enormous. The test delivered an 11% increase in conversions against the version without the icon. By presenting the right information, at the right time, we delivered a massive increase in conversions without making a big change to the page.

An approach to big wins

So was this a fluke? Were we lucky? Not at all. This test demonstrated a simple but effective approach to testing that can deliver great results consistently. There’s often no need to make big or complex changes to the page itself: you can still make radical, meaningful changes with little design or development work.

When looking to improve the conversion rate for a site or page, you can create an effective and powerful test by following three simple steps:

  1. Identify the barrier to conversion.
    A barrier is a reason why a user on the page may not convert. It could be usability-related, such as broken form validation or a confusing button. It could be a concern about your particular product or service, such as delivery methods or refunds. Equally, it could be a general concern for the user, such as not being sure whether your service or product is the right solution to their problem. By using qualitative and quantitative research methods, you can discover the main barriers for user.
  2. Find or create a solution.
    Once you have identified a barrier, you can then work to create a solution. This could be a simple change to the layout of the site; a change to your business practices or policies; supporting evidence or information; or compelling persuasive content such as social proof or urgency messaging. The key is to find a solution which directly targets the barrier the user is facing.
  3. Deliver it at the right time.
    The key to a successful test is to deliver your solution to the user when it’s most relevant to them. For example, price promises and guarantees should be shown when pricing is displayed; delivery messaging on product pages and at the delivery step in the basket; social proof and trust messaging early in the process; and urgency messaging when the user may hesitate. For a message to be effective, it must appear on the right page and in the right area, so the user sees it and can respond to it at the right time.

By combining these three simple steps, you can develop tests which are more effective and have more chance of delivering a big result.

Impact and Ease

Returning to the myth that big results need big tests, make sure you treat the impact of a test and its size as almost completely separate things. When you have a test proposal, think carefully about how much impact you believe it will have, and look independently at how difficult it will be to build.

At Conversion.com, we assess all tests for Impact and Ease and plot them on a graph:

Clearly the tests in the top right corner are the ones you should be aiming to create first. These are the tests that will do the most for your bottom line, in the shortest amount of time.
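This prioritisation can be sketched as a simple score-and-sort. The test names and the 1–10 impact/ease ratings below are hypothetical, purely to illustrate how a “top right first” ordering falls out:

```python
# Hypothetical test ideas, each rated 1-10 for expected impact and ease of build.
test_ideas = [
    {"name": "Full page redesign",    "impact": 7, "ease": 2},
    {"name": "Audit number tooltip",  "impact": 7, "ease": 9},
    {"name": "New hero image",        "impact": 3, "ease": 8},
    {"name": "Checkout flow rebuild", "impact": 8, "ease": 1},
]

def priority(idea):
    # Top right of the graph = high impact AND high ease, so rank by the product
    # of the two scores, breaking ties in favour of higher impact.
    return (idea["impact"] * idea["ease"], idea["impact"])

for idea in sorted(test_ideas, key=priority, reverse=True):
    print(f'{idea["name"]}: impact={idea["impact"]}, ease={idea["ease"]}')
```

With these made-up ratings, the small tooltip test ranks first and the expensive checkout rebuild last, mirroring the case study above.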

More impact, more ease

So how do you make sure that you can deliver smaller tests with bigger impact?

Firstly, maximise the impact of your test by targeting the biggest barriers for users. Taking a data-driven approach to identifying these already gives your test a much higher chance of success: with a strong, data-backed hypothesis, you know you are solving a real problem for your users.

You can increase the impact further by choosing the biggest barriers. A barrier that affects 30% of your users will have far more impact than one mentioned by only 5% of them: impact is driven largely by the size of the issue, because overcoming it helps the most users.
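To make that arithmetic concrete, here is a back-of-the-envelope sketch with made-up numbers: the share of users a fix can recover is capped by the share who hit the barrier in the first place, so a 30% barrier offers a far higher ceiling than a 5% one.

```python
def recovered_share(barrier_share, recovery_rate):
    """Rough ceiling on the gain from removing a barrier.

    barrier_share: fraction of users hitting the barrier (from your research).
    recovery_rate: fraction of those users the fix actually converts.
    """
    return barrier_share * recovery_rate

# Hypothetical: assume the fix recovers a third of the affected users.
big = recovered_share(0.30, 1 / 3)    # barrier mentioned by 30% of users
small = recovered_share(0.05, 1 / 3)  # barrier mentioned by 5% of users
print(f"big barrier: up to {big:.0%} of users, small barrier: up to {small:.0%}")
```

Even with an identical fix quality, the larger barrier offers six times the potential gain, which is why barrier size dominates the impact estimate.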

To get the biggest impact from smaller tests, you also need to look at how you can make tests easier to create. By choosing solutions that are simple to build, you can iterate much more quickly and find winners sooner.

A simple approach for big wins with small tests

By following the simple three-step process, you can greatly increase the impact and success rate of your tests without resorting to big, radical, expensive changes:

  1. Identify the barrier to conversion.
  2. Find or create a solution.
  3. Deliver it at the right time.

The impact of your testing program is driven more by the size of the issues you are trying to overcome and the quality of your hypotheses than by the complexity or radicalness of your tests. Focusing time on discovering those barriers will pay off many times more than spending it on design and development.
