
The Perception Gap: Can we ever really know what users want?

Have you ever heard of Mazagran, the coffee-flavoured bottled soda that Starbucks and Pepsi launched back in the mid-1990s? Chances are you haven’t – and there is a good reason for that!

Starbucks’ market research correctly identified that customers wanted a cold, sweet, bottled coffee beverage they could conveniently purchase in stores.

So surely Mazagran was the answer?

Evidently not! Mazagran was not what consumers actually wanted. The product failed because of the asymmetry between what customers wanted and what Starbucks believed they wanted.

Despite Starbucks conducting market research, this gap in communication – often known as the perception gap – still occurred. Luckily for Starbucks, Mazagran was a stepping stone to the huge success of bottled Frappuccinos: what consumers actually wanted.

What is the perception gap and why does it occur?

Perception is seen as the (active) process of assessing information in your surroundings. A perception gap occurs when you attempt to communicate this assessment of information but it is misunderstood by your audience.

Assessing information in your surroundings is strongly influenced by communication. Because humans communicate in many different ways, a perception gap can occur whenever someone’s communication style differs from your own. These gaps also vary in size, depending on the value that you, or your customers, attach to each factor. In addition, many natural cognitive biases can widen the perception gap, leading us to believe we know what other people are thinking more than we actually do.

Perception gaps in ecommerce businesses

Perception gaps mainly occur in social situations, but they can also heavily impact e-commerce businesses, from branding and product to marketing and online experience.

Perception gaps within ecommerce mainly appear because customers form opinions about your company and products based on their broader experiences and beliefs. One thing is for sure: perception gaps certainly occur between websites and their online users. Unfortunately, they are often the start of vicious cycles, where small misinterpretations of what the customer wants or needs are made worse when we try to fix them. Ultimately, this means we lose out on turning visitors into customers.

Starbucks and Pepsi launching Mazagran is an example of how perception gaps can lead to the failure of new products. McDonald’s launching their “Good to Know” campaign is an example of how understanding this perception gap can lead to branding success.

This myth-busting campaign was launched off the back of comprehensive market research using multiple techniques. McDonald’s understood the difference between what they thought of themselves (e.g. fast food made with high-quality ingredients) and what potential customers thought of McDonald’s (e.g. chicken nuggets made of chicken beaks and feet). Knowing that this perception gap existed allowed them to address these misconceptions in their campaign, which has successfully changed users’ perceptions of their brand.

For most digital practices, research plays an important part in allowing a company or brand to understand their customer base. However, conducting and analysing research is often where the perception gap begins to form.

For example, say you are optimising a checkout flow for a retailer. You decide to run an on-site survey to gather some insight into why users may not be completing the forms, and therefore are not purchasing. After analysing the results, it seems the top reason users are not converting is that they find the web form confusing. Now this is where the perception gap is likely to form. Do users want the form to be shortened? Do they want more clarity or explanation around form fields? Is it the delivery options that they may not understand?

Not being the user means we will never fully understand the situation the user is in. Making assumptions about it only widens the perception gap.

Therefore, reducing the perception gap is surely a no-brainer when it comes to optimising our websites. But is it as easy as it seems? 

In order to reduce the perception gap you need to truly understand your customer base. If you don’t, then there is always going to be an asymmetry between what you know about your customers and what you think you know about your customers.

How to reduce perception gaps

Sadly, perception gaps are always going to exist due to our interpretation of the insights we collect and the fact that we ourselves are not the actual user. However, the following tips may help to get the most out of your testing and optimisation by reducing the perception gap:

  1. Challenge assumptions – too often we assume we know our customers: how they interact with our site and what they are thinking. Unfortunately, these assumptions can become cemented over time into deeply held beliefs about how users think and behave. Challenging them leads to true innovation and ideas that may not have been thought of before. With this in mind, treat assumptions as questions to be answered by the research you conduct.
  2. Always optimise based on two pieces of supporting evidence – the perception gap is more likely to occur when research into a focus area is limited or based on a single source of insight. Taking a multiple-measure approach means insights are likely to be more valid and reliable.
  3. Read between the lines – research revolves around listening to your customers, but more importantly it is about reading between the lines: the difference between collecting their responses and actually understanding them. As Steve Jobs once said, “Customers don’t know what they want”; whether you believe that or not, understanding their preferences is still vital for closing the perception gap.
  4. Shift focus to being customer-led – being customer-led, as opposed to product-led, places a higher value on researching your customers. This emphasis on research should lead to greater knowledge and understanding of your customer base, which in turn should reduce the perception gap that has the potential to form.

Conclusion

The perception gap is something that is always going to exist and is something we have to accept. Conducting research, and a lot of it, is certainly a great way to reduce the perception gap that will naturally occur. However, experimentation is really the only way to confirm whether the research and insight you collected about your customer base are valid and significantly improve the user experience. One quote that has always made me think is from Flint McGlaughlin, who said: “we don’t optimise web pages, we optimise for the sequence of thought”. Taking this customer-led view of experimentation can only result in success.

5 steps to kick-start your experimentation programme with actionable insights

Experimentation has to be data-driven.

So why are businesses still kicking off their experimentation programmes without good data? We all know running experiments on gut-feel and instinct is only going to get you so far.

One problem is the ever-growing number of research methods and user-research tools out there. Prioritising what research to conduct is difficult. Especially when you are trying to maximise success with your initial experiments and need to get those experiments out the door quickly to show ROI.

We are no strangers to this problem. And the solution, as ever, is to take a more strategic approach to how we generate our insight. We start every project with what we call the strategic insights phase: a structured, repeatable approach to planning user research that we’ve developed, which consistently generates the most actionable insight whilst minimising effort.

This article provides a step-by-step guide to how we plan our research strategy, so that you can replicate something similar yourself and set up your future experiments for greater success.

The start of an experimentation programme is crucial. The pressure of securing stakeholder buy-in or achieving quick ROI means the initial experiments are often the most important. A solid foundation of actionable insight from user research can make a big difference to how successful your early experiments are.

With hundreds of research tools enabling many different research methods, the challenge becomes choosing which method will generate the insight that’s most impactful and actionable. Formulating a strategy for how you’re going to generate your insight is therefore crucial.

When onboarding new clients, we run an intensive research phase for the first month. This allows us to get up to speed on the client’s business and customers. More importantly, it provides the data we need to start building our experimentation framework – identifying where our experimentation can make the most impact and what it should focus on. We find dedicating this time to insights sets our future experiments up for bigger wins and, therefore, a rapid return on investment.

Our approach: Question-led insights

When conducting research to generate insight, we use what we call a question-led approach. Any piece of research we conduct must have the goal of answering a specific question. We identify the questions we need to answer about a client’s business and their website and then conduct only the research we need to answer them. Taking this approach allows us to be efficient, gaining impactful and actionable insights that can drive our experimentation programme.

Following a question-led approach also means we don’t fall into the common pitfalls of user-research:

  • Conducting research for the sake of it
  • Wasting time down rabbit holes within our data or analytics
  • Not getting the actionable insight you need to inform experimentation

There are 5 steps in our question-led approach.

1. Identify what questions you need, or want, to answer about your business, customers or website

The majority of businesses still have questions about their customers that they don’t have answers to. Listing these questions provides a brain-dump of everything you don’t know but that, if you did, would help you design better experiments. Typically these questions fall into three main categories: your business, your customers and your website.

Although one size does not fit all, below are some of the typical questions we need to answer for clients in ecommerce or SaaS.

SaaS questions:

  • What is the current trial-to-purchase conversion rate?
  • What motivates users on the trial to make a purchase? What prevents users on the trial from making a purchase?
  • What is the distribution between the different plans on offer?
  • What emails are sent to users during their trial? What is the lifecycle of these emails?
  • What are the most common questions asked to customer services or via live chat?

We typically end up with a list of 20-30 questions, so the next step is to prioritise which of them to answer first.

2. Prioritise what questions need answering first

We want our initial experiments to be as data-driven and successful as possible. Therefore, we need to tackle the questions that are likely to bring about the most impactful and actionable insights first.

For example, a question like “What elements in the navigation are users interacting with the most?” might be a ‘nice to know’. However, if we don’t expect to run a navigation experiment any time soon, this may not be a ‘need to know’ and therefore wouldn’t be high priority. On the other hand, a question like “What’s stopping users from adding products to the basket?” is almost certainly a ‘need to know’. Answering it is very likely to generate insight that can be directly turned into an experiment. A rule of thumb is to prioritise the ‘need to know’ questions ahead of the ‘nice to know’.

We also need to get the actionable insight quickly. Therefore, it is important to ensure that we prioritise questions that aren’t too difficult or time consuming to answer. So, a second ranking of ‘ease’ can also help to prioritise our list.
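
To make this two-factor prioritisation concrete, here is a minimal sketch in TypeScript. The 1-5 scales and the simple impact-times-ease score are illustrative assumptions, not a fixed formula; weight the factors however suits your own programme.

```typescript
// A minimal prioritisation sketch. The 1-5 scales and the simple
// impact * ease score are illustrative assumptions.

interface ResearchQuestion {
  question: string;
  impact: number; // 1 (nice to know) to 5 (need to know)
  ease: number;   // 1 (slow/hard to answer) to 5 (quick/easy)
}

const questions: ResearchQuestion[] = [
  { question: "What's stopping users from adding products to the basket?", impact: 5, ease: 4 },
  { question: "What elements in the navigation are users interacting with the most?", impact: 2, ease: 5 },
  { question: "What motivates trial users to purchase?", impact: 5, ease: 3 },
];

// Highest combined score first: 'need to know' questions that are also
// quick to answer rise to the top of the list.
const prioritised = [...questions].sort(
  (a, b) => b.impact * b.ease - a.impact * a.ease
);

prioritised.forEach((q, i) => console.log(`${i + 1}. ${q.question}`));
```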

3. Decide the most efficient research techniques to answer these questions

There are many types of research you could use to answer your questions. Typically we find the majority of questions can be answered by one or more of web analytics, on-site or email surveys, usability testing or heatmaps/scrollmaps. There may be more than one way to find your answer.

However, one research method could also answer multiple questions. For example, one round of usability testing might be able to answer multiple questions focused on why a user could be dropping off at various stages of your website. This piece of research would therefore be more impactful, as you are answering multiple questions, and would be more time efficient compared to conducting multiple different types of research.

For each question in our now-prioritised list, we decide the research method most likely to answer it. If there are multiple options, you could rank them by which is most likely to get an answer in the shortest time. In some cases the first research method may not sufficiently answer the question, so it can be helpful to consider what you would do next.

4. Plan the pieces of research you will carry out to cover the most questions

You should now have a prioritised list of questions you want to answer and the research method you would use to answer each. From this you can select the pieces of research to carry out based on which would give you the best coverage of the most important questions, as the sketch below illustrates. For example, you might see that 5 of your top 10 questions could be answered through usability testing. Therefore, you should prioritise usability testing in the time you have, and the questions you need to answer can help you design your set of tasks.
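
As a rough sketch of choosing for coverage, the snippet below tallies which research method answers the most of your prioritised questions. The question-to-method mappings are hypothetical examples.

```typescript
// A rough coverage sketch: given each prioritised question and the method
// most likely to answer it, tally which method covers the most questions.
// The mappings below are hypothetical examples.

const methodFor: Record<string, string> = {
  "What's stopping users from adding products to the basket?": "usability testing",
  "Why do users drop off at delivery options?": "usability testing",
  "What is the current trial-to-purchase conversion rate?": "web analytics",
  "What concerns do users have at checkout?": "on-site survey",
  "Do users understand the plan options?": "usability testing",
};

const coverage = new Map<string, number>();
for (const method of Object.values(methodFor)) {
  coverage.set(method, (coverage.get(method) ?? 0) + 1);
}

// Sort methods by the number of top questions they can answer.
[...coverage.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([method, n]) => console.log(`${method}: answers ${n} question(s)`));
// Here usability testing answers 3 of the 5 questions, so run it first.
```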

After your first round of research, revisit your list of questions and for each question evaluate whether or not you feel it has been sufficiently answered. Your research may also have generated more questions that should be added to the list. Periodically you might also need to re-answer questions where user behaviour has changed due to your experimentation. For example, if initially users were abandoning on your basket page due to a lack of trust, but successful experiments have fixed this, then you may need to re-ask the question to discover new problems on the basket page.

You can then repeat this process on a regular basis: prioritising the questions, deciding the best research methods and planning your next set of research.

5. Feed these insights into your experimentation strategy

Once your initial pieces of research have been conducted and analysed, it is important to compile the insight from them in one place. This has two benefits. First, it is easier to visualise and discover themes that may be emerging across multiple sources of insight. Second, you have a single source of information that can be shared with others in your business.

As your experimentation programme matures it is likely you will be continuously running research in parallel to your experiments. The insight from this research will answer new questions that will naturally arise and can help inform your experimentation.

Taking this question-led approach means you can be efficient with the time you spend on research while still maximising your impact. Following our step-by-step guide will give you a solid foundation to build on within your business:

  1. Identify what questions you need, or want, to answer about your business, customers or website
  2. Prioritise what questions need answering first
  3. Decide the most efficient research techniques to answer these questions
  4. Plan the pieces of research you will carry out to cover the most questions
  5. Feed these insights into your experimentation strategy

For more information on how to kick-start experimentation within your business, get in touch here.

Who, When and Where, but what about the Why? Understanding the value of Qualitative Insights: On-site Surveys

Within our data-driven industry, many gravitate towards relying heavily on quantitative data to guide and inform experimentation. With Google Analytics and the metrics it measures (e.g. conversion rate, bounce rate and exit rate) often dominating our focus, many undervalue or forget the other forms of research we can run.

Numbers, rates and statistics are great for finding out what’s happening on your site and where the opportunities for testing lie. However, what some people still don’t appreciate is that quantitative insights can only take us so far within conversion rate optimisation. Understanding where to target our tests for the best impact is necessary, but it does not tell us what our tests should entail. This is where qualitative research takes centre stage.

Qualitative research provides us with insight into the “why” behind quantitative research. It provides a deeper understanding into your visitors and customers, which is vital to understanding why they behave and engage with your site in a particular way. Conducting qualitative research is ideal for discovery and exploration and a great way of generating insights which can be used to guide future experimentation.

In this series, we will cover the qualitative insights that we run for our clients at Conversion.com including case studies of when qualitative research has helped with our tests and some of our best practices and tips!


On-site Surveys

By on-site surveys we are referring to targeted pop-up surveys that appear on websites, asking users either a single question or a series of questions to gather insights. Hotjar and Qualaroo are two of our favourite data collection tools for this type of insight.

On-site surveys let you target questions to any visitor, or subset of visitors, on any page of your website. These surveys can be triggered in a number of ways: time elapsed, specific user behaviour such as a clicked element or exit intent, or custom triggers using JavaScript. Understanding the behaviour and intent of website visitors allows us to more effectively interpret the motivations and barriers they may face on your site. These insights can then guide our data-driven tests, which aim to emphasise the motivations whilst eliminating the barriers.

On-site surveys have many benefits: they are unobtrusive, immediate in collecting data, and anonymous, which allows for higher ecological validity of responses. Most importantly, they gain real-time feedback from users ‘in the moment’, while they are engaging with your site.

Don’t underestimate the power of an exit survey. An exit survey is triggered when a user shows intent to leave a website, for example when they move their cursor towards the top of the page. Exit surveys are the best non-intrusive qualitative method for providing crucial insights into why visitors are not converting or why your website may have a high exit rate. They often outperform other website surveys in terms of response rates because they minimise the annoyance a survey causes, especially for users who were already planning to leave the site.
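
For illustration, here is a minimal sketch of the exit-intent mechanic in TypeScript. Tools like Hotjar and Qualaroo handle this for you out of the box; the showSurvey function and the 10-pixel threshold are illustrative assumptions.

```typescript
// A minimal exit-intent trigger sketch for the browser. Survey tools
// provide this out of the box; this only illustrates the mechanic.

let surveyShown = false;

function showSurvey(): void {
  // Placeholder: render your exit-survey widget here.
  console.log("Exit survey displayed");
}

document.addEventListener("mouseout", (event: MouseEvent) => {
  // Fire once, when the cursor leaves near the top of the viewport and is
  // not moving onto another element on the page - a common proxy for
  // intent to close the tab or type a new URL.
  if (!surveyShown && event.clientY < 10 && !event.relatedTarget) {
    surveyShown = true;
    showSurvey();
  }
});
```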

But what questions should you be asking in these on-site surveys? Well, that really depends on what you want to get out of this piece of insight. Below are a few examples of the types of questions you can ask:


  • Investigating user intent and bounce rate
    • Why did you come to this site today?
    • What were you hoping to find on this page?
    • Did this page contain the information you were looking for?
  • Understanding usability challenges
    • Were you able to complete your task today? (If yes, why? If no, why not?)
    • Is there anything on the site that doesn’t work the way you expected it to?
  • Uncover missing content
    • What additional information would you like to see on this page?
    • Did this article answer your question?
  • Identify potential motivations
    • What persuaded you to purchase from us?
    • What convinced you to use us rather than a competitor?
    • What was the one thing that influenced you to complete your task/purchase?
  • Identify potential barriers
    • What is preventing you from completing your task?
    • What is stopping you from completing your checkout?  
    • What concerns do you have about purchasing from us?


When launching a survey, it can be difficult to know how long to run it for or how many responses you actually need. Large sample sizes matter when collecting quantitative data; here, however, we are more concerned with gaining an in-depth understanding of your users while looking for ideas and inspiration. We therefore look for thematic saturation – the point at which the data provides no significant new information – rather than a large sample size. For more information about the sample size required when running an on-site survey and how many responses are necessary, check out our article about on-site surveys and thematic saturation.
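
As a simple illustration of the saturation check, the sketch below assumes responses have already been hand-coded into themes (the coding itself is the human part) and stops once whole batches of responses add no new themes. The two-quiet-batches stopping rule is an illustrative assumption.

```typescript
// A minimal thematic-saturation sketch. Each inner array is one batch of
// survey responses, already coded into themes. We treat the data as
// saturated once the last `quietBatches` batches added no new themes.

function isSaturated(batches: string[][], quietBatches = 2): boolean {
  const seen = new Set<string>();
  let quiet = 0;
  for (const batch of batches) {
    const before = seen.size;
    batch.forEach((theme) => seen.add(theme));
    quiet = seen.size === before ? quiet + 1 : 0;
  }
  return quiet >= quietBatches;
}

// Example: the third and fourth batches introduce no new themes,
// so we can stop collecting responses.
console.log(
  isSaturated([
    ["price", "trust"],
    ["trust", "delivery"],
    ["price", "delivery"],
    ["trust"],
  ])
); // true
```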

At Conversion.com we ensure we are continuously collecting both qualitative and quantitative insights for our clients. On-site surveys are just one of the methods that help to guide and back up our data-driven hypotheses.

One example of an on-site survey guiding a test that added revenue for a client comes from the online pharmacy industry. Our on-site survey asked users what stopped them from choosing a particular product at a specific stage of their user journey. The insights demonstrated that users found it difficult to choose products by themselves, with no assistance or help. This informed a test on these specific pages that implemented signposting to particular products through recommendations. We believed that including elements to aid users at the product selection stage would make it easier for them to select a product, eliminating the barrier we found via our on-site survey. Making this change produced an uplift of 3.48% in completed purchases for new users.


Look out for our next article in this series where we discuss the importance of competitor analysis.  

The Optimizely Customer Workshop: How do the UK’s biggest brands approach experimentation?

The Optimizely Customer Workshop, hosted by Phil Nayna (Enterprise Account Executive at Optimizely) and Stephen Pavlovich (Founder/CEO of Conversion.com), brought together representatives from some of the UK’s biggest brands to share their thoughts and insights on Conversion Rate Optimisation (CRO). The workshop took the form of a roundtable, where topics included “Building a lean testing programme”, “Applying testing to business challenges” and, the buzzword of the moment, “Personalisation”.


Building a lean testing programme

Experience in testing and experimentation amongst attendees ranged from businesses just starting out to those that had already built mature testing programmes. This range of experience provided the basis for a rich discussion. For brands just starting their testing, it was emphasised that obtaining buy-in from stakeholders is key to building a testing programme. For brands with more testing experience, the biggest challenge in building a lean programme came with shifting their culture. Key to adopting a testing culture is acknowledging – and leveraging – the focus on short-term testing and validation over long-term planning. That’s why the attendees all agreed that a short-term iterative roadmap is far better than a long-term rigid one.

Here at Conversion.com, experimentation is at the heart of everything we do and who we are. We believe that building a lean testing programme and cultivating a testing culture rely on two key factors: education and sharing. Educating your employees to understand your philosophy on experimentation and its benefits is key; it allows them to see experimentation as far more than just the value of winning tests. At Conversion.com we value education highly, running our own CRO training programme for new associate consultants that teaches them to think creatively and ambitiously about experimentation and CRO. When sharing experiment results with clients, it’s crucial to share not just what was tested and what the results were, but more importantly why we tested it and what it can teach us about their users. This means that with every experiment we learn more about their users, allowing us to refine and improve our testing strategy – while delivering measurable uplift.


Applying testing to business challenges (prioritisation of your testing roadmap)

Strategies for prioritising testing roadmaps varied extensively within the workshop, with every brand favouring a different approach or primary metric. One major UK supermarket brand stated that their approach was very data-driven, something we value highly at Conversion.com. They prioritised ease of implementation, lack of organisational friction in getting the test launched, the potential impact of the test, and the data or evidence supporting the hypothesis. Other primary metrics included cost impacts: one UK brand with limited development resource favoured ease of implementation, as it allowed them to keep testing despite this constraint.

At Conversion.com we believe the data driving a test is the most important factor when prioritising our tests, as it tells us the impact the test is likely to have. Secondary factors, such as the ease of building the test and getting sign-off – as well as the other tests and hypotheses we have running in parallel – let us see how and when the test fits into our roadmap. However, prioritisation has its limits: there are finite swimlanes and finite resources, so prioritisation and planning have to be coherent. Understanding that testing roadmaps have to be flexible and adaptive is key. This allows us to change our roadmap easily as previous tests report and our understanding of users improves.


Personalisation

Personalisation is the buzzword of the moment in CRO, and the topic divided our workshop audience. Some UK brands stated that they had banned the word completely, referring instead to creating more relevant customer experiences and concentrating on more targeted journeys. All representatives agreed that their personalisation journeys were at an early stage, believing it was important to keep personalisation simple and get tests live in order to gain momentum. However, we believe this could increase the risk of companies starting personalisation too early and, as a result, missing valuable opportunities to increase their conversion rate with all-audience A/B testing. With personalisation being such a hot topic, it is critical that companies take the time to integrate it into their wider digital strategies rather than implementing it without consideration for other key areas of CRO.

At Conversion.com, we view personalisation as optimising conversion by increasing the relevance of experiences for specific audiences. Although we see personalisation as a great and exciting new opportunity to test, we believe it is important to successfully assess when to start personalisation. By its nature, it forces you to focus on a subset of users, potentially diminishing the impact of experiments as well as complicating future all-audience experiments.


The Optimizely Customer Workshop was the perfect setting for valuable discussions and an insight into how the UK’s biggest brands approach experimentation. From the workshop the key takeaways were:

  • CRO education needs to be more highly regarded within businesses in order to promote a shift in testing culture.
  • Visibility of testing programmes, through shared results and learnings, allows employees to understand the value of testing beyond just the uplift from winning tests.
  • Roadmaps should be as flexible and adaptive as possible to allow for test and learn iterations to occur.
  • Personalisation should be undertaken when its potential overtakes all-audience testing and should integrate with – rather than replace – typical A/B testing for CRO.

Aesthetic-Usability Effect: Can beauty become the key to usability in online retail?

Retailers, we have a problem.

iPerceptions’ 2016 report shows that less than 25% of people who intended to make a purchase ended up actually buying something. That’s a huge leak in the buying funnel.

Why did 75% of people leave?

iPerceptions’ 2009 report shows that the #1 and #3 reasons are usability-related:

Lesson: Fix your site’s usability issues.

But the problem is even more severe when you realise that this statistic talks about those who stayed on the website.

The second biggest leak is people who didn’t even stay on the website. The average bounce rate for e-commerce is reported to be somewhere between 35 and 57%.

According to a study by Chao Liu and colleagues from Microsoft Research, there is a 10-20 second window where a user decides whether a website is worth giving a chance or not.

What happens during those 20 seconds that makes people leave?

Research led by B.J. Fogg shows that many factors affect people’s trust towards a website, but the most prominent issue is design.

Lesson: Make your website visually appealing.

But here’s something interesting.

It appears that a website’s aesthetics and usability aren’t independent.

Research shows that users find visually attractive websites easy to use, even when their actual ease of use is poor. In one usability study, users on average failed to complete more than half of the required tasks successfully, yet when working with visually appealing websites they still rated their user experience highly.

So maybe you can kill two birds with one stone: by improving aesthetics you might not only convince more visitors to stay, but also increase the chance that they will put up with your usability issues and ultimately buy.

This principle is known as the aesthetic-usability effect.

The aesthetic-usability effect.

In the context of e-commerce, the aesthetic-usability effect is when users perceive more aesthetically pleasing websites to be easier to use than less aesthetically pleasing websites.

This means that better aesthetics can result in:

  • More people choosing your website over other sites
  • People putting up with your usability issues, making them more likely to reach the end of the buying funnel and purchase

(With that said, there are studies that show that severe usability problems will force users to judge usability independently of aesthetics. Regardless of how great your visuals are, if a user can’t find a product, she won’t buy.)

We have seen the aesthetic-usability effect work not only in academic labs, but in real-life business too. For example, one experiment we ran was focused solely on improving the design of a checkout form. This test resulted in a +3.6% uplift, with the additional revenue from that uplift estimated at €450,000+ per year.

How to apply that principle yourself?

Some of the most important areas to apply that thinking are:

Top Landing Pages.

A landing page is the first page users see when they come to your website. It’s crucial because this is where users form their first impression of your website. How good those pages look thus determines whether users will stay or leave, and their level of tolerance for your usability issues.

Each website’s situation is different, but in retail most often visitors enter through your homepage, your category pages or your product pages.

As an example, let’s look at a traffic report for PC World, a British chain of computer superstores.

Based on the limited data that Semrush can provide, I can see that their homepage gets the most traffic, with their category pages close behind:

Checkout Flow.

The checkout flow is the stage where people start to open their wallets. Not everyone does, though: around 69% of users abandon the process. At least 2 of the top 10 reasons why people abandon could be mitigated by the aesthetic-usability effect:

As we already know, making this stage visually appealing can enhance users’ perception of the funnel being easy to complete and drive them to make a purchase.  

How to make my website visually attractive?

A website needs to be attractive, but how do we make it happen? After all, it’s subjective. You might like your website, but do your users think the same way?

There are 2 approaches to answering those questions:

  • Customer research (aka ask your customers)
  • Heuristic analysis (aka ask experienced UX designers)

I’m going to show you how you can apply both approaches, looking at examples of PC World’s homepage (customer research) and NET-A-PORTER’s mobile checkout flow (heuristic analysis).

PC World’s Homepage.

To understand if users perceive PC World’s homepage as visually attractive, we ran a small-scale survey, asking visitors to rate different aspects of that page.

For this research we used VisAWI, an 18-question survey specifically designed for evaluating website aesthetics. The results showed that overall the website is visually pleasing: it scored 4.47 points out of 7, against an ecommerce benchmark of 4.05. This shows that PC World’s homepage is better than average:

In particular, users highly rated PC World’s layout as being “easy-to-grasp” and “well-structured”, indicating that it met their expectations as an ecommerce store. Overall, the majority agreed that the site is designed with care and looks professional.

But findings also highlight a number of areas that could be improved.

  • 49% of users agreed that the layout is “too dense”
  • 40% of users agreed that the choice of colours looks “botched”
  • 47% of users agreed that the design of the site “lacks a concept”

49% of users agreed that the layout is “too dense”

Some of the reasons why users might think so are:

Lack of whitespace

Text-heavy sections

The “reasons to shop” section is very crowded which adds to the denseness of the page. This is further complicated by how text-heavy this section is, which could leave users perceiving it as overwhelming:

40% of users agreed that the choice of colours looks “botched”.

This might be because PC World’s colour harmony is disturbed. According to colour harmony theory, the bright blue and red they use for their promo campaign wouldn’t necessarily match their main brand colour, purple.

The theory of colour harmony tells us that PC World would be better off following the Hulk’s example…

In other words, using green as a complementary colour to purple (potentially alongside a more saturated red):

There are five types of colour harmony, and depending on the one you choose you’ll find different matches. Here’s another combination suggested by Paletton.com, a tool for creating complementary colour palettes:
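
For the curious, these harmonies come from simple hue rotations on the colour wheel: a complement sits 180° away, a split-complement at 150° and 210°. Here is a minimal sketch, where the purple value is an illustrative stand-in for PC World’s brand colour, not its official value:

```typescript
// Colour harmonies as hue rotations in HSL space. A complement sits 180
// degrees away on the wheel; a split-complement at 150 and 210 degrees.
// The purple below is an illustrative stand-in, not an official value.

interface Hsl {
  h: number; // hue, 0-360 degrees
  s: number; // saturation, 0-100
  l: number; // lightness, 0-100
}

function rotateHue(colour: Hsl, degrees: number): Hsl {
  return { ...colour, h: (colour.h + degrees + 360) % 360 };
}

const brandPurple: Hsl = { h: 270, s: 60, l: 40 };

console.log(rotateHue(brandPurple, 180).h); // 90 - a green, the "Hulk" pairing
console.log([150, 210].map((d) => rotateHue(brandPurple, d).h)); // [60, 120] - split-complement hues
```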

To me personally (feel free to disagree), each scheme works on its own. There is their branded colour scheme; when seen in isolation, its colours match well:

Then there is the promo-campaign colour scheme. Again, when seen in isolation, its colours match well:

It’s combining the two that creates a challenge, and might have resulted in some people feeling that the colours look “botched”.

47% of users agreed that the design of the site “lacks a concept”

When users say that the design of a site lacks a concept, this might mean the page doesn’t form a complete and pleasing whole. The Gestalt theory of psychology suggests that we look at an object as a whole first and zoom in on its individual parts next.

When we have difficulty making sense of objects, we tend to perceive them as less beautiful.

As we’ve already observed with the colour scheme, the page doesn’t form a unified whole because the two separate colour schemes do not match each other.

Similarly, the product catalog is chaotically organised. This makes it more difficult for people to group the objects together:

How about grouping those objects by a unified theme? For example, one row of laptops, one row of game-related products, one row of home office equipment. This would make it easier for users to make sense of the page.

Finally, the page doesn’t seem to have a unified purpose. Yes, the main theme is the promo-campaign, but what action should users take next?

With so many competing calls-to-actions this isn’t clear:

Every page should have a dominant call-to-action (CTA). (As you may know from our article on the paradox of choice, when there is no dominant option users are likely to become overwhelmed and quit.)

Their “summer mega deals” banner takes up the most space, so I assume this was meant to be the dominant CTA. The problem is that it looks like an image, not a clickable element, which is why people might be confused about what to do next on the page.

If this is the dominant purpose of the page, PC World would be better off designing their “Shop Now” CTA not as text, but as a button.

Look at what Maplin is doing:

While technically this banner is a single image, the call-to-action was designed to look like a button, so it’s clear to people that they can click it.

If PC World did the same and made that button as prominent as the rest of the banner, it would be easier for users to identify the main CTA.

Checkout Flow

To show you an example of how heuristic analysis can be applied to improve aesthetics, we’re going to look at NET-A-PORTER, a luxury fashion store.

Our designer, Josh Lenz, quickly identified a number of problems:

  1. Lack of whitespace. Margins are too narrow and the grey background overtakes the whole screen. Research shows that a perception of simplicity is critical not only for general aesthetics, but also for creating a perception of luxury.
  2. Forms look outdated. They don’t reflect the previous stages of the user experience. This inconsistency with the brand and design could diminish trust and credibility.
  3. The third step looks broken, or at least unprofessionally designed, as the “Select card type” field overlaps with the “Card number” field.

So the checkout not only looks hard to complete, but might also diminish users’ trust in the brand. To reverse that experience, he made a number of changes:

All Steps

  • Changed style of progress bar – makes it more legible
  • Changed text to icons in footer – simplifies the page visually
  • Increased the spacing – creates a cleaner look which is easier to digest visually

Step 1

  • Simplified text fields – creates a better balance on the page
  • Changed tickbox and button style to match brand

Step 2

  • Placed shipping options in cards – card placement improves vertical symmetry of the page, adds more white space and creates more of a visual flow down the page.
  • Packaging options placed side by side – creates better balance on the page.

Step 3

  • Simplified text fields – creates a better balance on the page
  • Changed the ‘+’ icon and edit button style to match brand

My hypothesis is that since the page now looks modern and cleaner, and matches NET-A-PORTER’s brand, more users will perceive it as easier to complete. Not only will more users continue through to purchase because the page looks clearer, but users are also less likely to be put off by an experience that substantially deviates from the brand.

The next step would be for NET-A-PORTER to test this design change against the original, and measure if it actually impacts their bottom-line.

Key Takeaways (and Limitations):

  1. You should recognise that I don’t have a full insight into these companies’ strategies and user behaviour, so some of my suggestions above might not be in line with their business goals.
  2. It’s more important to understand the process and reasoning I followed rather than copy and paste specific suggestions.

With that in mind your main takeaways should be:

  1. Improving your aesthetics can drive additional revenue for your business. (In our experience, it could drive as much as €450,000 of additional revenue per year).
  2. One of the reasons is that people perceive your product or platform as easier to use, making them more likely to stay and complete their purchase.
  3. To improve your aesthetics use research. Be it user feedback (with the help of such tools as VisAWI) or systematic analysis by an experienced designer (internally we use a UX Review checklist that consists of 52 criteria against which we evaluate every website we work with).
  4. Translate insights from the previous step into actionable test ideas. Tip: hire an experienced designer who can translate your user feedback into action. For example, users can tell you your colours look botched, but they can’t tell you how to fix your colour harmony.
  5. Focus on the high-value, high-friction points in your user funnel (e.g. your top landing pages and bottom-of-funnel stages such as the checkout flow).
  6. When applying the aesthetic-usability effect, don’t change functionality. As you can see, none of our suggestions changed the core function of the website. In 2014 Marks & Spencer redesigned their website, changing both its aesthetics and functionality; a vast array of technical issues led to an 8% drop in sales.
  7. Measure impact. This can be in the form of gathering user feedback on your new design, but bottom-line impact is what we’re after. Treat your new design as a test, not a blind change. This way you’ll know how much aesthetics matters for your audience.