
Design for decision making: why it matters

Presenting information in a clear yet compelling way is one of the more challenging nuances of UX design. As users become increasingly reliant on technology to provide answers in a given situation, designers come under more pressure to play the role of choice architect. There’s a conflict between the product or service provider, who (assuming impartiality) wants to display all the relevant information as clearly as possible, and the user, who wants to filter out the extraneous possibilities and get straight to a smart selection. Given that the average person makes over 200 decisions a day about food alone in our choice-riddled society, it’s no wonder users want the burden eased when it comes to choosing the right product for them (on that note, feel free to skip straight to the end for the five key takeaways).

Map A – London Underground
Map B – Geographic tube map of the London Underground (source: Mark Noad)

They say the road to hell is paved with good intentions. Often, the intention to provide users with every option results in choice paralysis, which can both hinder customers in their journey and harm your conversion rate – potentially sending users back a step and opening them back up to your competition. Conversely, misrepresenting or failing to emphasise the factors that matter to the decision can cause users to overlook them, and lead them to a worse outcome. Let’s first look at this in the context of the transport industry, where the user literally just wants to get from A to B.

Metro maps and schematics have been the go-to solution for route planning since transport lines began to converge – the earliest map of London’s transport network was published in 1908. They are integral to the smooth running of big cities, particularly given growing populations and the suburban sprawl of city workers. The trouble is, public transport maps do not scale with geographic reality. A recent study found that this distortion affects travellers’ perceptions of relative location, their route choices and their associations with different routes – train journeys that look ‘long’ on the map, for example, are often quicker on foot. These seemingly small oversights can have significant consequences for efficiency when applied at such a large scale.

The study uses the example of a passenger travelling from Paddington to Bond Street, faced with two seemingly equidistant routes according to map A: travel to Baker Street and change to the Jubilee line (path 1), or change at Notting Hill Gate for the Central line (path 2). Path 2 is about 15% slower by time on-train, and on a geographical map it actually starts off in the opposite direction to the destination. Yet the experiment found that 30% of passengers chose path 2 – probably because on the schematic tube map, path 2 is about 10% shorter than path 1, and Notting Hill Gate is shown to the south (not west) of Paddington. Map B shows the map scaled to London’s geography.

The London tube map suggests that Marylebone and Baker Street are significantly farther apart than in reality

Another example: Baker Street is shown slightly south of Marylebone and significantly further away, when seasoned Londoners know the two are actually only five minutes apart on foot (and on the same road).

Some app designers have already begun tapping into this opportunity to guide users’ transit decisions more intelligently. Apps such as Citymapper and Tube Map provide additional insights to help users make contextually informed judgements: approximations of taxi fares, walking times, or weather-based alternatives such as ‘rain safe’ options.

This is something UX designers will increasingly be expected to consider as the discipline evolves and matures. User interfaces need to make it easy for users to choose, not just to use, and baking this practice into web and app designs can be the difference between businesses that grow and those that stagnate, especially in competitive markets.

The e-commerce, travel and SaaS sectors are among those starting to put serious weight behind their online choice architecture.

Littlewoods and Very.co.uk have confronted the barrier of an expansive clothing catalogue and indecisive shoppers with a ‘style adviser’ – a smart backend system courtesy of Dressipi, designed to narrow and intelligently guide women’s online fashion shopping, based half on the user’s own preferences and half on insider stylist tips and tricks.

Dressipi checks that it’s making the right predictions during onboarding, then learns continually once the user is actively engaged.

Et voilà: the user is given a personalised shopping experience with stylist-approved items tailored to their preferences, whittling down the selection and boosting their confidence in the eventual purchase.

For hip traveller types who know that location is everything, Airbnb allow (even encourage) you to search by map, turning the selection process on its head by homing in on their users’ priorities. Custom filters can then be added by region, amenities and user-generated keywords to refine the options further, continually driving users towards their end goal.

Users primarily search by map, determining the results they’ll see and filtering out the extraneous possibilities that typically clutter accommodation selection in travel.

In future, a nice touch might be to extend the crowd-sourcing with user-generated contextual cues within the map for different areas and districts – e.g. good for shopping, coffee shops, nightlife or museums – but I digress (occupational hazard!).

In the SaaS space, Rackspace know their visitors arrive with a diverse array of needs, and realise the importance of getting to the bottom of those needs quickly to avoid losing out on custom.

Guiding users through a smart flow of options removes the urge to overload visitors with a comprehensive range of services, and avoids confounding users who aren’t yet sure what they need. For more complex hosting problems, the flow diverts to live chat or a callback, whilst the outcome and final CTAs leave users assured that they’ve taken positive strides towards resolving their specific needs, which are now ready to be picked up at the other end.
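As a rough sketch (not Rackspace’s actual implementation – the questions, options and outcomes here are entirely invented), a guided flow like this is essentially a small decision tree, where each answer either narrows the recommendation or hands off to a human:

```python
# A minimal guided-flow sketch: each node either asks a question or gives an outcome.
# All questions, options and outcomes are invented for illustration.
flow = {
    "question": "What do you need hosting for?",
    "options": {
        "A simple website": {"outcome": "Recommendation: shared web hosting"},
        "A web application": {
            "question": "Do you expect large traffic spikes?",
            "options": {
                "Yes": {"outcome": "Recommendation: auto-scaling cloud servers"},
                "No": {"outcome": "Recommendation: a single cloud server"},
            },
        },
        "Something more complex": {"outcome": "Hand-off: live chat or a callback"},
    },
}

def run(node):
    # Walk the tree, asking one question at a time, until an outcome is reached.
    while "outcome" not in node:
        print(node["question"])
        options = list(node["options"])
        for i, option in enumerate(options, start=1):
            print(f"  {i}. {option}")
        node = node["options"][options[int(input("> ")) - 1]]
    print(node["outcome"])

run(flow)
```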

The key to solid choice architecture, whatever your business, is quite simple: know your customers. Anticipate their needs, and learn to see things through their eyes.

Don’t know who your customers are or what they need? Ask them. In the long run, gathering a little intel is better than leaving your customers alone in the wild. Package it up in some effortless UX and users will feel like it’s part of a bespoke service tailored to their needs.

Here are some golden rules to set you on the path to informed, but guided customer conversions:

  1. Ask the right questions. If a user can see why you’re asking, and what’s in it for them, they’ll already be bought into the process. Make sure the user benefits from, and can see the rationale behind, every question – if they can’t, lose it, because it’s not helping them.
  2. Don’t tell your customer what to do. Choice architecture is not a substitute for effective information architecture – make sure you’re only ever guiding your customers’ decisions, not shoehorning them into buying something they didn’t want. Let your users know you understand them and are there to give honest, impartial guidance towards the best outcome for them. The alternative could erode trust and hurt your future relationship.
  3. Set clear progress indicators. Answering a few intelligently structured questions is all well and good, but if your user can’t see the light at the end of the tunnel, they’re likely to lose hope and abandon the process. Make the method organised and transparent if you really want your customers to engage.
  4. Refine choices, but stow the rest away somewhere visible and organised. No-one likes to feel they might be missing out, and some users may prefer different ways of navigating your site. Sometimes it’s curiosity; sometimes it’s a need for confirmation that they made the right choice – users want to see what they didn’t go with. Keeping this transparent is key to a healthy customer lifecycle.
  5. Make alternatives omnipresent. Ultimately, the customer knows best, and if they lose faith in your site’s ability to meet their needs, make sure they have a jumping-off point so you don’t lose their custom altogether.

Spotting patterns – the difference between making and losing money in A/B testing

Wrongly interpreting the patterns in your A/B test results can lose you money. It can lead you to make changes to your site that actually harm your conversion rate.

Correctly interpreting the patterns in your A/B test results means you learn more from each test you run. It will give you confidence that you are only implementing changes that deliver real revenue impact, and it will help you turn losing tests into future winners.

At Conversion.com we’ve run and analysed hundreds of A/B and multivariate tests. In our experience, the result of a test generally falls into one of five distinct patterns. We’re going to share those five patterns here, and tell you what each one means for the steps you should take next. Learn to spot these patterns, follow our advice on how to interpret them, and you’ll make the right decision more often – making your testing efforts more successful.

To illustrate each pattern, we’ll imagine we’ve run an A/B test on an e-commerce site’s product page and are now looking at the results. We’ll compare the increase or decrease in conversion rate that the new version of the page delivered against the original, on a page-by-page basis, across the four steps of the checkout process a visitor completes to purchase (Basket, Checkout, Payment and finally Order Confirmation).

To see the pattern in our results in each case, we’ll plot a simple graph of the conversion rate increase/decrease to each page. We’ll then look at how this increase/decrease in conversion rate has changed as we move through our site’s checkout funnel.
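To make the arithmetic behind these graphs concrete, here’s a minimal sketch in Python of how the per-step lift could be calculated – the step names and visitor counts are invented for illustration, not taken from a real test:

```python
# Illustrative only: visitors reaching each funnel step, original vs new version.
steps = ["Product page", "Basket", "Checkout", "Payment", "Order Confirmation"]
control = [10000, 3000, 1500, 1200, 1000]
variation = [10000, 3300, 1650, 1320, 1100]

for step, c, v in zip(steps[1:], control[1:], variation[1:]):
    # Conversion rate to this step = visitors reaching it / visitors entering the test.
    cr_control = c / control[0]
    cr_variation = v / variation[0]
    lift = (cr_variation - cr_control) / cr_control * 100
    print(f"{step}: {lift:+.1f}% change in conversion rate")
```

With these made-up numbers, every step shows a +10.0% lift – which is exactly the first pattern below.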

1. The big winner

This is the type of test result we all love. The new version of the page sends x% more visitors to the next step than the original, and this x% increase continues uniformly all the way to Order Confirmation.

The graph of our first result pattern would look like this.

The big winner

We see 10% more visitors reaching each step of the funnel.

Interpretation

This pattern is telling us that the new version of the test page successfully encourages 10% more visitors to reach the next step, and from there onwards they convert just as well as existing visitors. The overall result would be a 10% increase in sales. It is clearly logical to implement this new version permanently.

2. The big loser

The negative version of this pattern, where each step shows a roughly equal decrease in conversion rate, is a sign that the change has had a clear negative impact. All is not lost, though: an unsuccessful test can often be more insightful than a straightforward winner, as the negative result forces you to re-evaluate your initial hypothesis and understand what went wrong. You may have stumbled upon a key conversion barrier for your audience, and addressing that barrier in the next test could lead to the positive result you’ve been looking for.

Graphically this pattern will look like this.

The big loser

We see 10% fewer visitors reaching each step of the funnel.

Interpretation

As the opposite of the big winner, this pattern is telling us that the new version of the test page causes 10% fewer visitors to reach the next step, and from there onwards they convert just as well as existing visitors. The overall result would be a 10% decrease in sales. You would not want to implement this version of the page.

3. The clickbait

“We increased clickthroughs by 307%!” You’ve probably seen sensational headlines like this thrown around in the optimisation industry. Hopefully, like us, you’ve developed a healthy sense of cynicism when you read them. The first question we always ask is: “But how much did sales increase by?” Chances are, if the reported result fails to mention the impact on final sales, what they actually saw was the pattern we’ve affectionately dubbed “the clickbait”.

Test results that follow this pattern show a large increase in conversion rate to the next step, but the improvement quickly fades at the later steps, with little or no improvement to Order Confirmation.

Graphically this pattern will look like this.

The clickbait

Interpretation

This pattern catches people out because the large improvement to the next step feels like a positive result. Often, though, it merely shows that the new version of the page is pushing a large number of visitors through to the next step who have no real intention of purchasing. This is revealed by the sudden large drop in the conversion rate improvement at the later steps, when all of the unqualified extra traffic abandons the funnel.

As with all tests, whether this result can be deemed a success depends on the specifics of the site you are testing on and what you are looking to achieve. If there are clear improvements to be made on the next step(s) of the funnel that could help to convert the extra traffic from this test, then it could make sense to address those issues first and then re-run this test. However, if these extra visitors are clicking through by mistake or because they are being misled in any way then you may find it difficult to convert them later no matter what changes you make. Instead, you could be alienating potential customers by delivering a poor customer experience. You’ll also be adding a lot of noise to the data of any tests you run on the later pages as there are a lot of extra visitors on those pages who are unlikely to ever purchase.

4. The qualifying change

This fourth pattern is almost the reverse of the third: here we actually see a drop in conversion to the next step, but an overall increase in conversion to Order Confirmation.

Graphically this pattern looks like this.

The qualifying change

Interpretation

Taking this pattern as a positive can seem counter-intuitive because of the initial drop in conversion to the next step. Arguably, though, this type of result is as good as – if not better than – the big winner of pattern 1. Here the new version of the test page is having what’s known as a qualifying effect: visitors who would otherwise have abandoned at a later step in the funnel are leaving at the first step instead. Those visitors who do continue past the test page are more qualified, and therefore convert at a much higher rate – which explains the positive result to Order Confirmation.
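As a rough numerical illustration (again with invented figures), running qualifying-change numbers through the same per-step lift calculation as earlier produces exactly this shape:

```python
# Invented qualifying-change figures: fewer visitors pass the test page,
# but those who do are better qualified, so Order Confirmation ends up ahead.
control = [10000, 3000, 1500, 1200, 1000]
variation = [10000, 2700, 1450, 1180, 1080]

for i, step in enumerate(["Basket", "Checkout", "Payment", "Order Confirmation"], start=1):
    cr_control = control[i] / control[0]
    cr_variation = variation[i] / variation[0]
    print(f"{step}: {(cr_variation - cr_control) / cr_control * 100:+.1f}%")
# Basket: -10.0%  Checkout: -3.3%  Payment: -1.7%  Order Confirmation: +8.0%
```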

Implementing a change that produces this pattern means the visitors remaining in the funnel have expressed a clearer desire to purchase. If visitors are still abandoning at a later stage, the likelihood now is that a specific weakness on one of those pages is the cause. Having removed a lot of the noise from our data, in the form of the unqualified visitors, we are left with a much more reliable measure of the effectiveness of the later steps in the funnel – which makes identifying weaknesses in the funnel itself far easier.

As with pattern 3, there are circumstances where a result like this may not be preferable. If you already have very low traffic in your funnel, reducing it further could make it even more difficult to get statistically significant results when testing the later pages of the funnel. You may want to run tests that drive more traffic to the start of your funnel before implementing a change like this.

5. The messy result

This final pattern is often the most difficult to extract insight from as it describes results that show very little pattern whatsoever. Here we often see both increases and decreases in conversion rate to the various steps in the funnel.

The messy result

Interpretation

First and foremost, a lack of any discernible pattern in your split-test results can be a tell-tale sign of insufficient data. In the early stages of an experiment, when data levels are low, it is not uncommon to see results fluctuate up and down, and reading too much into them at this stage is a common pitfall. Resist the temptation to check your experiment results too frequently – if at all – in the first few days. Even apparently strong patterns that emerge at these early stages can quickly disappear with a larger sample.
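If you want a quick sanity check on whether a difference is likely to be real or just noise, one option is a simple two-proportion z-test. The sketch below uses only Python’s standard library, and the conversion numbers are invented for illustration:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    # Two-sided z-test for the difference between two conversion rates.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
    return z, p_value

# A seemingly large lift (6.0% -> 8.4% conversion) on a small sample:
z, p = two_proportion_ztest(conv_a=30, n_a=500, conv_b=42, n_b=500)
print(f"z = {z:.2f}, p = {p:.3f}")  # p is about 0.14 - not conclusive at the 5% level
```

With these numbers, a 40% relative lift is still entirely consistent with chance at this sample size – which is exactly why early ‘strong’ patterns so often evaporate.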

If your test has a large volume of data and you are still seeing this type of result, the likelihood is that your new version of the page is delivering a combination of the clickbait and qualifying-change effects: qualifying some traffic while simultaneously pushing more unqualified traffic through the funnel. If your test involved making multiple changes to the page, try testing the changes separately to pinpoint which are causing the positive impact and which the negative.

Key takeaways

The key point to take from all of these patterns is the importance of tracking and analysing the results at every step of your funnel when you A/B test, rather than just the next step after your test page. It is easy to see how, if only the next step were tracked, many tests could be falsely declared winners or losers. In short, that is losing you money.

Detailed test tracking will allow you to pinpoint the exact step at which visitors abandon your funnel, and how that differs for each variation of the page you are testing. This can help to answer the more important question of why they are abandoning. If the answer is not obvious, running some user tests or watching recorded user sessions of your test variations can help you develop these insights and come up with a successful follow-up test.

There is a lot more to analysing A/B tests than just reading off a conversion rate increase to any single step in your funnel. Often, the pattern of the results can reveal greater insights than the individual numbers. Avoid jumping to conclusions based on a single increase or decrease in conversion to the next step and always track right the way through to the end of your funnel when running tests. Next time you go to analyse a test result, see which of these patterns it matches and consider the implications for your site.