Imagine you’re a business selling software that allows busy parents to share photos and videos with their loved ones.
When designing your pricing strategy, should you offer your users a single plan? Take it or leave it.
Or should you offer 2 different plans, so that everyone can find something that works for them?
Back in 2010, the right answer would have been: offer a single plan. When Ash Maurya ran these tests, he found that fewer people signed up when they were shown 2 plans, and as a result fewer people became paying customers. One explanation is that people could not decide which plan to choose. So, they left.
This brings us to a long-standing problem that marketers have to solve: how many options should we offer our customers?
Is a large range always better…
…than a small range?
On the one hand, choice plays an important role in our daily lives. More choice can mean customers are more likely to find what matches their taste. In fact, we humans love to make choices. Research shows that the act of choosing activates brain regions associated with motivation and reward.
Since then, multiple articles have proclaimed choice a conversion enemy. This even led to some exaggerated statements like the one below:
As always, my job is to separate the wheat from the chaff. As a data-driven marketer, you don’t want to follow best practices. You want actual research, so that you can create your own hypotheses and find out what works for your business.
I have done the research for you. A meta-analysis by Benjamin Scheibehenne and his colleagues gave me 50 experiments to work with.
(If all you want is actionable advice, skip straight to the last 2 sections).
Why is choice harmful to your bottom line?
Analysis paralysis is when a person overthinks a decision to such an extent that they never make it. In a marketing context, that means the person won’t buy.
Ryan Engley from Unbounce tested reducing the number of options on its webinar sign-up form. After cutting the choices down to 3, he saw a 16.93% increase in conversions. Perhaps this was because people did not get caught up in thinking about which day of the week would work best for them.
The Buyer’s Remorse.
Buyer’s remorse is when a person regrets making a purchase. As the number of choices increases, it becomes easier to imagine a different choice that might have been better than the one selected. That kind of post-purchase analysis decreases satisfaction with the purchase.
Iyengar and colleagues showed this effect in their 2000 series of experiments. They found that when people had to make a choice from a limited number of options (6 vs 30), they expressed greater satisfaction with their choice. As the researchers explain, one reason is that people regretted their decision less. They measured ‘satisfaction’ and ‘regret’ using questionnaires and found that the two were related (r = -.55, p < .0001).
Thinking more long-term, more regret could mean that the customer would not become an advocate of your brand. In the worst case scenario she might cancel or return the product.
Decision fatigue is when we start taking shortcuts as we exhaust our mental energy. Research suggests that humans have a limit on how many active, deliberate decisions they can make in a certain time period. The more we exhaust that reserve, the more likely we are to look for shortcuts.
In one study researchers analysed more than 1,100 decisions made by court judges. They found that as the day went by (and judges made more decisions), judges were becoming more likely to take shortcuts when deciding which prisoners to release before their official sentence was over.
Prisoners who appeared at the start of the day received parole about 65% of the time. Those who appeared late in the day received parole less than 10% of the time. Put simply, as judges got tired they used the least-risky option and allowed only a small number of prisoners out.
To translate this into a business context, consider the onboarding process of eHarmony, an online dating website. To register at eHarmony, you need to answer more than 130 questions about yourself. It’s not as simple as “When were you born?”. Questions range from estimating how warm, clever or dominant you are, down to how often you felt happy or depressed in the last month.
If you try to answer honestly, you will strain your mental muscles estimating where you should sit on each scale. After answering 130+ such questions, you’ll be exhausted.
By the time you get to the final stage, you might be so tired, there will be no willpower left to figure out why you can’t see anybody’s profiles. The path of least resistance is to abandon the process.
At the same time, you might feel so invested in the process that decision fatigue works in eHarmony’s favour. After all, you just made an active, effortful and uncoerced commitment to meeting a partner online. The alternative shortcut is to pay for a membership.
Decision fatigue will cause customers to take shortcuts in your conversion funnel. The shortcuts might be in your favour, but the opposite might also be true.
Why is choice beneficial to your bottom line?
The perception of better choice can be a competitive advantage for retailers.
In their review, Benjamin Scheibehenne and his colleagues write, “retailers in the marketplace who offer more choice seem to have a competitive advantage over those who offer less”. After all, that’s one of the reasons why we shop at Amazon.
Customers associate higher variety with higher quality.
A study done at Stanford shows that brands offering higher variety are perceived by customers as having higher quality.
Adding decoy options can increase average order value.
The Economist offered 3 subscription plans to its customers:
The middle option seems redundant, as it costs the same as the last option. So, from a “choice overload” perspective, we should remove it. But what we would miss is that the print & web subscription looks like a real bargain when compared to the print-only subscription.
That’s what Dan Ariely found in his experiment with 200 MIT students:
This extra option more than doubled the number of people who chose the more expensive option.
Choice means freedom. Restricting visitors’ freedom might backfire.
MECLABS Institute tried to reduce friction on a client’s checkout page by shrinking the choice set. Instead of letting users choose among different subscription lengths (1 month, 6 months, annual), they offered only a monthly plan.
Mike Xiao, research manager at MECLABS Institute, explains that the monthly plan was what the majority of visitors chose anyway. So, for them, eliminating this step should only have made the checkout process easier.
However, instead of seeing the conversions go up, they saw a 40% decrease.
Mike’s explanation is similar to the Economist experiment: people preferred the monthly plan when it was presented alongside the more expensive 6-month and annual plans.
I want to offer another potential explanation for this result. Visitors might have left the funnel because they did not like having a billing method forced on them. Choosing how you pay is an important decision, especially when 30% of consumers believe subscriptions are bad value for money.
As I explained in my article on the foot-in-the-door technique, when we feel that someone is trying to force us to do something, we often respond in a way that re-asserts that freedom. In this case, by leaving the checkout. This process is called psychological reactance, and giving customers choice is an effective way to counteract it.
10 factors that drive/mitigate ‘choice overload’
Research by Benjamin Scheibehenne and his colleagues gives us a good idea of which factors make “choice overload” more (or less) likely to happen. We can use that knowledge to design better experiences for users while reaping the benefits of offering more choice.
All 10 factors are based on sound theories, and some of them have already been validated by research. Remember, though, that you will only find out whether something works by testing it on your own audience.
Factor #1. Users’ preferences towards (and familiarity with) the items in the choice set
If people have a strong preference for a particular product, choice won’t overwhelm them. They will simply pick their favourite option.
In 2003, Alexander Chernev, a professor of marketing at the Kellogg School of Management, showed that people with clear prior preferences had no problem choosing from larger assortments. The larger the number of options they could choose from, the more likely they were to actually choose something, and the more satisfied they were with their choice (studies 1 and 2). This is the complete opposite of the research we covered above.
Another study showed that only those people who were unfamiliar with the product category were less likely to be satisfied with their choice.
For example, if I visit SkateHut and know nothing about longboards, I will most certainly be overwhelmed. I don’t have a preference for any brand or type of board. This makes it difficult for me to make a decision.
The solution would be to find out if a large proportion of visitors are actually novices (like me). You could do this by using surveys (see example below), examining live chat data and talking to your sales team.
If the data shows that yes, there is a significant proportion of customers who are novices, the solution would be to educate them. If users become more familiar with different types of boards and brands, it will be easier for them to navigate through a vast array of options.
For example, the following SkateHut page ranks first for a keyword “longboards” in Google:
Instead of sending novices straight to the catalog, SkateHut could offer them educational material first.
In Russia I saw ecommerce websites educating visitors with the use of quizzes. Once you are on a product catalog page, there is a quiz at the top that asks you a set of questions. Based on your answers you will be shown products that match your criteria.
The bot asks me, “Who are you choosing a self-balancing scooter for?”.
I need to say whether it’s for an adult or a child. As I answer these questions, it explains how I should choose my wheels, why some factors are more important than others, etc. Based on my answers, it shortlists the whole catalog down to 9 options, and I even understand why it shows me those 9.
At the end I just need to choose the colour I like:
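The quiz mechanic above is, at its core, just progressive filtering: each answer removes the options that don’t fit. Here is a minimal sketch, with a hypothetical catalog and attributes (the real site’s data model is unknown):

```python
# Hypothetical quiz-to-filter sketch: each answer narrows the catalog,
# like the self-balancing scooter quiz described above.
# Product names and attributes are made up for illustration.

catalog = [
    {"name": "Kids Mini",   "rider": "child", "wheel_size": 6.5},
    {"name": "City Lite",   "rider": "adult", "wheel_size": 6.5},
    {"name": "Offroad Pro", "rider": "adult", "wheel_size": 10.0},
    {"name": "Commuter X",  "rider": "adult", "wheel_size": 8.5},
]

def shortlist(catalog, rider, min_wheel_size):
    """Keep only the products that match the quiz answers."""
    return [
        p for p in catalog
        if p["rider"] == rider and p["wheel_size"] >= min_wheel_size
    ]

# Answers: the scooter is for an adult, and (as the quiz explains
# along the way) larger wheels handle rough pavement better.
result = shortlist(catalog, rider="adult", min_wheel_size=8.0)
print([p["name"] for p in result])  # ['Offroad Pro', 'Commuter X']
```

Because each question both filters the catalog and teaches the visitor why the attribute matters, the shortlist feels earned rather than arbitrary.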
Factor #2. Presence of an obviously dominant option in the choice set
If there is a clearly dominant option, then choice is unlikely to be complicated. When every other offer is clearly inferior to one outstanding offer, people don’t need to give the decision much thought.
At a glance, the AQF wrist wraps seem far superior to the other offers in the choice set. They have over 700 reviews, mostly positive, and price-wise they’re a real bargain.
The term “dominant” is open to interpretation, though. In my view, it means that a user can quickly identify which option matches their use case, and it is clear that the other options are inferior. In the example above, all the other options are inferior because they lose on both review count and price.
Perhaps a better way to think about dominance is through the lenses of elimination. Can a person easily eliminate all the options apart from the one that meets her needs?
In retail, review ratings and bestseller tags can act as such filters. We will cover them later.
In SaaS, creating plans that match different customer segments can help. Ruben Gamez, the founder of BidSketch, does a great job of simplifying his plans. Users don’t have to make substantial trade-offs when deciding between them.
Solopreneur? Then Solo is for you. Several members using the software? Then Team or Business. He makes his plans almost mutually exclusive. At their core, the plans vary only in the number of users.
Important note: The 2 factors above are necessary pre-conditions for “choice overload” to occur. Put simply, if you can ensure that visitors coming to your website have already developed strong preferences for a certain product, and/or the options are presented in such a way that each customer segment will identify one of them as dominant, you won’t have problems with choice overload. In practice, this is difficult to ensure. That’s why we cover 8 other factors that help us beat “choice overload”.
Factor #3. Categorization and Option Arrangement
One study found that an increase in the number of options decreased satisfaction only if the options were not prearranged into categories. Categories make it easier to navigate the choice set and decrease the cognitive burden of making a choice.
HSBC gives its customers only the names of its different credit cards. Unless a person is closely familiar with HSBC’s credit card offering, names alone are a poor guide to which card is best for you.
The FCA carried out research on the different motivations of credit card holders. They found that there are generally 4 types of customers:
When any of these people comes to the HSBC website, it won’t be clear which card will help them achieve their goal.
In contrast to HSBC, Barclays groups its credit cards into categories that fit the motivations above.
This both lowers the number of options that people have to choose from and ensures that people choose only from those options that are relevant to them.
Similarly, HostingAdvice.com categorises its web hosting providers based on use cases.
Ordered assortments can also ease the burden of making a decision. For example, WhoIsHostingThis.com presents its web hosting providers in ranked order. Ranking makes it easier for the user to make a choice.
Factor #4. Difficult Trade-offs
The more difficult the trade-offs that we need to make between different options, the more likely it is that “choice overload” will occur. Research suggests that the trade-offs are particularly tough when options have unique features that are not directly comparable.
To get our heads around it, let’s consider an example.
What credit card would you choose?
The cards are mostly identical, apart from a few differences:
To decide, you will need to make some trade-offs. Is a generous rewards scheme more important than balance transfer and purchase rate offers? Is a £195 annual fee justified by 80,000 (potential) reward points?
Let’s assume that I am looking for rewards, so purchase rate and balance transfer offers are not important. Unfortunately, HSBC provides no guidance for me to figure out if rewards justify the annual fee.
In contrast, Barclays tells me the exact reward value that I will get based on my monthly spending level:
This lets me compare the price of the card with the value I am likely to get out of it. No mental calculations required. Thank you, Barclays!
Factor #5. Ease of comparison
To get to the stage of making those trade-offs, I needed to compare the 2 cards. If the banks make it hard for me to do this, I might drop out of the funnel even before I get to “trading” one feature for another.
HSBC is a bank that certainly does not make it easy.
Continuing with the scenario where I was looking for rewards, I found 2 cards that have them:
Those cards have some common and some unique features. Even the common features are hard to compare. One gives specific information on how many rewards you will get; the other does not. One states its annual fee; the other does not. One has “enhanced” travel benefits, but it’s unclear how “enhanced” benefits differ from standard ones.
If this were the time of day when my mental energy had already been exhausted, I would have started looking for shortcuts. I could go to MoneySupermarket.com despite originally starting with my bank. Adiós, HSBC!
Compare this to what Royal Bank of Scotland is doing. They compare their credit cards in a clear table format:
Not only that, they provide guidance on which card would suit which customer best with their “Decide if it’s right for you” section.
Factor #6. Information Overload
Information overload theory points out that it is not necessarily the number of available products that causes ‘choice overload’, but the number of factors you need to weigh in your head when making a choice.
For example, in SaaS the industry standard is to have between 3 and 4 packages in your pricing grid. It seems like you could have that number, highlight one of the plans as “most popular” and be fine. But your conclusion will be different if you look at the issue through the lens of information overload theory. What becomes clear is that it’s not the number of plans that matters, but the amount of information a person has to consider.
Compare the two pricing pages below.
While both offer 4 plans, BidSketch purposefully presents its users with only 1 factor to consider: the number of users. In contrast, Surveygizmo used to give its users 15 factors. According to information overload theory, choosing a plan at Surveygizmo will be more overwhelming than choosing one at BidSketch.
So, what can we learn from that?
When I interviewed Ruben Gamez, founder of BidSketch, he explained that he purposefully omitted many of their features to simplify the plans. He identified the ‘value’ features, i.e. the features people considered most important, using both analytics data and Jobs-to-be-Done interviews. Not only that, he also did ‘sensitivity testing’, i.e. removing a feature to see if it actually affected conversions.
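This kind of sensitivity testing boils down to a standard A/B comparison of conversion rates. Here is a minimal sketch, with made-up visitor numbers, of a two-proportion z-test you could run on such an experiment (the source does not describe Ruben’s actual statistical method):

```python
import math

def conversion_z_test(conv_a, total_a, conv_b, total_b):
    """Two-proportion z-test: does removing a feature from the pricing
    grid (variant B) change the conversion rate versus the original
    page (variant A)?"""
    p_a = conv_a / total_a
    p_b = conv_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (conv_a + conv_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 120/2000 conversions with the feature listed,
# 118/2000 with it removed.
z, p = conversion_z_test(120, 2000, 118, 2000)
print(round(p, 3))  # a large p-value: no measurable difference
```

If the p-value is large, hiding the feature did not measurably hurt conversions, so the pricing grid can stay simpler.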
Factor #7. Time Pressure
Graeme Haynes, psychologist at University of Western Ontario, found evidence for choice overload only if he constrained the decision makers’ time to make a decision.
Thinking of how it applies to the real world, time pressure can be due to seasonal changes (eg. Christmas) or due to time constraints you put on the customer yourself (eg. limiting an offer by a certain number of days).
In these circumstances, you have an even greater need to simplify the purchase decision for the customer.
As an example, let’s assume I’m looking for a perfume for my girlfriend. It’s the 15th of December. As someone who hates queues, I search for “female fragrance” and land on theperfumeshop.com:
Looking at bestsellers is a not-so-bad shortcut, but obviously ‘bestseller’ does not mean my girlfriend will like the smell. Product descriptions don’t help either.
So, after grilling their customer support for half an hour, here’s how we made progress on what perfume to buy:
Support: Do you know of anything she has liked before?
Support: Do you have a budget in mind?
Support: Her age?
Support: Preferences: floral or oriental?
Me: No idea. I would describe her current perfume as having a rich smell, but at the same time light (in the sense that you can smell it, but it’s not overwhelming).
Support: Paco Rabanne Lady Million (“It is a warm scent that lingers nicely on the skin but not overpowering”) or Gucci Guilty (“feminine fragrance that is very distinct”).
Me: Not sure which one to choose, both have good reviews.
Support: Would you like to visit a local store and try them, or you could get a gift card instead?
Me: Perfect! A gift card is a risk-free option.
Now imagine analyzing hundreds or thousands of such dialogues. We could reverse engineer them into a feature that would help novice customers like myself make similar progress:
In this industry visitors are likely to feel ‘choice overload’ because the products are difficult (if not impossible) to compare. After all, you need to smell them. Time pressure during Christmas escalates that problem.
Instead, the feature above could simplify the choice down to answering a couple of questions. And even if the answers are not perfect, the visitor would still feel like he is making progress. In the end he might go to a local The Perfume Shop store or buy a gift certificate. All thanks to not being overwhelmed by choice and abandoning the website.
Factor #8. Maximizing behavior
Maximisers are those who search for the best option, not just one that is “good enough”. It is assumed that maximisers consider more factors, overcomplicate their decision-making and are thus at higher risk of choice overload.
I believe that maximising behaviour can be identified. For example, imagine a user of Match.com, an online dating website. Instead of contacting the matches presented to him first by default, he keeps scrolling down, loading all the profiles, analysing whom he should contact first.
We could identify users with such behaviour and guide them to take action. Otherwise, they might get caught up in analysis paralysis.
For example, we could show a pop-up box to everyone who loads more than 5 extra pages of profiles. The message will prompt them to start a conversation rather than to keep scrolling through.
(By the way, that research is somewhat true. Read this paper.)
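The pop-up trigger described above can be sketched as a simple rule over session events. The event names and the 5-page threshold are assumptions for illustration, not Match.com’s actual instrumentation:

```python
# A minimal sketch of flagging "maximiser" behaviour: users who keep
# loading extra pages of profiles without ever starting a conversation.
# Event names and the threshold are hypothetical.

MAX_EXTRA_PAGES = 5  # the article's example threshold

def should_prompt(events):
    """Return True once a session has loaded more than 5 extra pages
    of profiles without sending a message."""
    extra_pages = sum(1 for e in events if e == "load_more_profiles")
    contacted = any(e == "send_message" for e in events)
    return extra_pages > MAX_EXTRA_PAGES and not contacted

session = ["view_profile"] + ["load_more_profiles"] * 6
print(should_prompt(session))  # True: show the "start a conversation" prompt
```

The same rule could drive any nudge, from a pop-up to an email, whenever a visitor’s browsing pattern suggests analysis paralysis.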
Factor #9. Choice Justification
Scheibehenne found an effect of “too many options” when people knew that they would have to justify their choice later on. This effect is perhaps most likely to occur when a purchase involves several people. For example, you work in an industry where one person will be the user of the product, but above them is a person who approves the budget.
Or it’s a purchase that will be likely discussed with peers (eg. new clothes or a new car).
That means that your role as a marketer is to help customers justify their choices.
3D printing is an industry where multiple stakeholders are often involved in the purchase decision. iMakr, a 3D printing store, not only categorises its printers by the use case…
…it also compares 3D printers in each category, making it easier to justify the choice for a particular 3D printer. For example, B9 Core 530 seems to be the ultimate 3D printer in the jewellery category:
iMakr also gives guidance on who a particular 3D printer is right for, again helping to justify the choice:
It all goes back to helping the user find the ‘dominant’ option.
Factor #10. Simple Decision Heuristics
Many studies have shown that users cope with excessive choice by using mental shortcuts that often yield “good enough” decisions.
Some heuristics are useful to know if you’re a marketer:
the elimination-by-aspects strategy that quickly screens out unattractive options
the choice of a default option
The elimination-by-aspects strategy
Retailers give customers the ability to screen out some options by providing cues such as review ratings and bestseller tags:
Search filters play an equally important role where users can screen out all the options that don’t match their criteria:
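Elimination by aspects can be sketched in a few lines: screen the options one attribute at a time, in order of importance, discarding anything that fails the current cutoff. The product data and cutoffs here are made up for illustration:

```python
# Elimination by aspects: screen options one attribute at a time,
# discarding any option that fails the current cutoff.
# Product data and cutoffs are hypothetical.

products = [
    {"name": "Board A", "rating": 4.8, "price": 55,  "bestseller": True},
    {"name": "Board B", "rating": 3.9, "price": 40,  "bestseller": False},
    {"name": "Board C", "rating": 4.6, "price": 120, "bestseller": False},
    {"name": "Board D", "rating": 4.7, "price": 60,  "bestseller": True},
]

# Aspects in order of importance, each as (description, predicate)
aspects = [
    ("rating of at least 4.5", lambda p: p["rating"] >= 4.5),
    ("price under £100",       lambda p: p["price"] < 100),
    ("bestseller tag",         lambda p: p["bestseller"]),
]

remaining = products
for label, keep in aspects:
    remaining = [p for p in remaining if keep(p)]
    if len(remaining) <= 1:
        break  # stop as soon as the choice is (almost) made

print([p["name"] for p in remaining])  # ['Board A', 'Board D']
```

Review-rating filters, price sliders and bestseller tags are exactly these predicates, surfaced as UI controls.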
The choice of a default option
Charities use default options on their donation forms to both simplify the decision-making and increase revenue. A 2014 study found that donations closely corresponded to the default amount that was set.
In the same spirit, SaaS companies make certain options the default. Instead of explicitly asking users how often they would prefer to pay (monthly or yearly), they default users to one or the other:
(Think of the advice below as ideas to test.)
Educate users about your products, so that they can easily navigate a large set of options and develop preferences for particular products. This could be done with the use of chatbots, targeted tutorials, etc.
Help customers to identify the dominant option in the choice set:
This could mean making it easier for users to eliminate “inferior” options in the choice set (review ratings, bestseller tags and filters).
This could mean designing products for distinct segments, so that users would not need to make complicated trade-offs between products.
Categorize and order large choice sets in a meaningful way, so that users can quickly find what they are looking for.
Make it easy for users to compare different products (eg. add a feature where users can compare products in a table format).
Assist users with making the trade-offs between different types of products.
This can be done by educating users about what’s most important to “get right” in your product. For example, the self-balancing scooter chatbot explained to me that the wheels are the most crucial part of the decision. Incorrectly chosen wheels can result in a terrible riding experience.
This can be done by performing all the calculations for the user when quantitative factors need to be weighed up.
Provide guidance on which product would be right for the user (think of RBS example).
Think about not only reducing the number of options in the choice set, but also the number of factors that users need to consider when making a decision.
In conditions of scarce time, users are more likely to look for shortcuts in their decision-making. Give them an opportunity to take educated shortcuts (think of The Perfume Shop Christmas feature).
Identify perfectionists who try to make the best decision possible and guide them to take action.
Make certain options “default” instead of explicitly asking the user to make a choice.
There is one technique that does wonders for your conversion rates. As I am about to show you, you can use it to:
double your email signup rates (one firm used it to increase their CR by 113%)
grow average monthly revenue of a subscription-based business by 11.4%
get up to 2x as many leads who are happy to jump on a phone call to discuss your product
I am talking about the foot-in-the-door technique. It works because it utilises an effective persuasion mechanism called the commitment and consistency principle.
Read on if you want to find out how to achieve similar results for your business.
What is the commitment and consistency principle?
The commitment and consistency principle is based on the theory that we humans want to appear consistent to ourselves and to others. Sometimes we alter our attitudes to be in line with our actions (as explained by self-perception theory). This is where persuasion experts step in. By making people commit to something, we might change their attitudes, which makes it more likely that they will comply with other related requests in the future.
A classic example comes from a 1966 study by Jonathan Freedman and Scott Fraser. The researchers asked a group of residents to support a safe driving campaign by installing a large (6 feet by 3 feet) sign stating “DRIVE CAREFULLY” on their front lawn. Only 17% of residents agreed to do so, but this number jumped to 76% for another set of residents. What caused the difference? Two weeks earlier, the researchers had approached that group with another request: to install a small sign in their window that read “BE A SAFE DRIVER”. People easily said “yes”. What the residents did not realise is that this first commitment would make them see themselves as the type of people who support causes like safe driving. Two weeks later, to appear consistent with their past behaviour (and their newly acquired self-image), they agreed to the larger request too.
That’s the essence of the foot-in-the-door technique: compliance with a modest request leads to compliance with a larger request.
Now let’s think about how you can apply that principle to your online marketing!
What is so powerful about it?
The commitment and consistency principle is powerful because you can not only increase immediate conversions within your sales funnel, but also alter your customers’ attitudes towards your brand. As you will see in the studies below, this can go as far as turning dissatisfied customers into satisfied ones. That means a lower churn rate and higher customer lifetime value.
Will it work for you and how can you maximise your chances of success?
As with the norm of reciprocity, you are unlikely to be able to manipulate your customers. When applying this principle online, people think more deliberately about the requests they comply with. So, unless your request adds real value to their lives, people will likely reject it. Yet you can still use this principle to gradually guide them towards the target request. The one that will help them live a better life and make you money.
Commitment and consistency principle has the greatest effect when one’s self-image is affected by his/her previous actions. According to Cialdini, for the highest chance of a commitment affecting one’s self-image, it needs to be:
Active (a person consciously commits to something)
Public (there is a sense that commitment is known/observed by other people)
Freely chosen (uncoerced)
Not to bore you with dry theory, we will jump straight into some examples. Today I am considering 4 main use cases for this technique: email sign-up, user onboarding, checkout optimization and lead generation. Where relevant, I will cite research by Cialdini and his colleagues. Jerry Burger’s meta-analysis of the existing research on the foot-in-the-door technique deserves special attention.
Here’s what the standard email popup looks like:
While great, there is a psychological process that interferes with our goal of getting a visitor’s email address. It’s called psychological reactance. It occurs when people perceive a threat to their sense of personal freedom and choice. When we become aware of an effort to reduce our freedom (eg. we feel we are being forced to do something), we often respond in a way that re-asserts that freedom. In this case, we close the pop-up window.
It’s a completely different story when we offer our readers an opportunity to sign up for our email list. Let’s consider SnackNation’s example. They offer their readers an opportunity to sign up for a newsletter within their popular article, “121 Proven Employee Wellness Program Ideas For Your Office”.
When clicking “Download this entire list as a PDF”, users make an active commitment that does not feel coerced (ie. it was their free choice). This increases the chances that they will finish the sign-up process. When SnackNation replaced a forceful pop-up with an uncoerced opportunity to get a PDF checklist, their subscription rate increased by 195%. From 20 subscribers per week to 59.
Moreover, entering your first name and email address requires a certain amount of effort (tick this one, too). This gives SnackNation not only a chance to get subscribers, but also to make related requests in the future, such as a webinar sign-up (a larger commitment). “Remember you signed up for this PDF checklist? We know you are the type of person who cares about employees’ well-being. Why not sign up for our free webinar that will help you with exactly that?”
Apparently, that’s what they do:
Now, using research on the use of commitment and consistency principle, I will walk you through some questions. They will help us to understand if SnackNation is doing a good job.
Does it tie a visitor’s first commitment to her self-image?
First of all, we know that, to be effective, the first commitment needs to instil a desirable (from our point of view) self-image in the person. So, we might want to improve this page by tying a person’s identity to the action.
It could look like this:
Does your second request seem like a logical progression of your first request?
Burger found that when a second request seemed like a continuation of the first request, participants were twice as likely to comply relative to the control groups.
In SnackNation’s case a webinar on wellness seems to be a logical progression of the first request to download the wellness program report. Well done guys!
On the other hand, Decathlon, a global sports retailer, fails on this one. As you enter their website, you are presented with a standard email sign-up form. It promises that upon entering your email address, you will receive sports advice and offers.
From my point of view, the logical progression would have been to ask me to confirm my email address, so that I can start receiving those offers. Instead, their second request is to create an account at MyDecathlon (I am still not sure what that is):
In other words, to complete that enormous sign-up form with several tabs:
Decathlon’s signup process is likely to be ineffective because psychological reactance might kick in. A user might feel that Decathlon purposefully omitted any information about account creation. They made the pop-up look like a newsletter sign-up, so that users don’t realise that they will have to go through a lengthy sign-up process, all before receiving any of the promised perks. As we know from reactance theory, when we feel manipulated, we act in the opposite way to re-assert our freedom (i.e. we don’t sign up).
But there is another reason why Decathlon’s sign up process might be failing. It violates the rule of reciprocity…
Are you taking and giving back or just taking and taking more?
We know that when we apply the principle of commitment and consistency, compliance with a small request leads to compliance with a larger request. Well, this is not always the case. As one study showed, when two requests are made one straight after another (and the second one is not a logical progression of the first one), the norm of reciprocity might tear this whole persuasion effort to pieces.
In line with the norm of reciprocity, after people comply with the first request (putting in some effort on their end), they feel that the person who asked is indebted to them in some way. So, unless the “asker” first provides something of value in return, the second request will be taken unfavourably. People perceive it as an imprudent act that violates the norm of reciprocity and, as Cialdini says in the book, we tend to react negatively to such violators.
Note: The reciprocity norm is unlikely to backfire when the sign-up form is designed as a logical progression sequence (for example, upon signing up you are made another request: to confirm your email). This is because the second request would seem like part of the first request, not a separate one. According to Burger, the highest risk of the reciprocity rule backfiring is when two unrelated requests are made one straight after the other.
This is exactly the mistake that Decathlon is making.
First of all, upon receiving your email address, they make a second request that is unrelated to their first. I would have expected them to ask me to confirm my email address; instead, they asked me to create an account at some unknown MyDecathlon, which, as we saw, involves filling out several complicated forms. I have also put in some effort on my end: I provided an email address. So, if the second request does not logically progress from the first one, I would at least expect to get something of value in return for my contact details. Unfortunately, this is not the case. Now I feel that the reciprocity norm has been violated, causing a negative reaction on my end.
Compare this to what Reebok is doing.
You just comply with their first request (enter your email address) and boom!
A free coupon offer straight to your inbox.
Obligations on both ends have been fulfilled, now I am ready to consider any other requests from them.
Similarly, SnackNation provides you with value once you have complied with their first request. Upon entering your contact details you expect to receive a PDF checklist in return.
The copy on the next page informs you that the PDF was sent. So, SnackNation fulfilled their obligations and it’s ok to make a second request (webinar sign-up).
So, what should a firm like Decathlon do if ultimately they want you to create an account with them?
Well, they could have informed us that we would need to create an account before receiving any offers.
Then, copy in the email would seem like a logical progression of what has been offered in the pop-up.
Alternatively, they could take an approach where the first commitment would be tied to a person’s identity. They could offer a 60-second quiz, making users answer a number of questions about themselves.
The quiz would allow them to find out more about the person. This in turn would give them an opportunity to use that first commitment (quiz completion) as a segue to registering an account at MyDecathlon.
Here’s what it could look like:
However, there is also another factor that we need to consider when dealing with the norm of reciprocity. It’s timing.
Are you timing your requests or trying to feed the whole pie?
One study has shown that the reciprocity norm lasts only a few days, at least for the kinds of favors typically examined in social psychology experiments (for example, receiving a soft drink). That is, the obligation people feel to return a small favour appears to dissipate after a short period of time. So, if you are making two unrelated requests, you might overcome the negative effects of the norm of reciprocity by simply making these requests on separate days.
Research is inconclusive on how many days need to pass for the norm of reciprocity to lose its effect. In a recent study, a lapse of 2 days was sufficient to produce a more favourable response than no delay at all. This means that if Decathlon employed the quiz change I described above, it would be safer to send the follow-up email 2 days later.
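If you automate your email sequences, the 2-day cooldown is easy to encode. A minimal sketch (the function name and constant are my own, chosen to reflect the research above, not any particular email tool's API):

```python
from datetime import datetime, timedelta

# The research above suggests the reciprocity norm's negative effect
# fades after roughly 2 days, so we wait at least that long before
# making a second, unrelated request.
RECIPROCITY_COOLDOWN = timedelta(days=2)

def earliest_second_request(first_request_at):
    """Earliest safe time to send a second, unrelated request."""
    return first_request_at + RECIPROCITY_COOLDOWN

# A user completes the quiz on 1 March at 09:30...
quiz_completed = datetime(2017, 3, 1, 9, 30)
# ...so the account-creation email waits until 3 March at 09:30.
print(earliest_second_request(quiz_completed))  # 2017-03-03 09:30:00
```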
What are you communicating about people’s likelihood to comply with your request?
A fourth consideration is social proof. It has been found that if a person perceives the type of request you are making as one that only few people comply with, they will be less likely to commit to it. This is exactly what SnackNation might be sub-communicating with their “The number of seats are limited…” copy. We are not told how many seats are left (or have been reserved).
This could be interpreted as a “needy” attempt to get at least a few webinar attendees (unfavourable social proof). So, based on the research on commitment and consistency, it might be safer to reframe this last bit as “XX caring HR managers have already signed up” (I assume that a good number of people actually sign up).
Pro Tip: Customise your email pop-ups
In case you don’t have time to create customised lead magnets for every single one of your posts (or you’re afraid the bonus won’t be noticed), you could test customising your standard pop-up box.
Instead of making a sign-up request straightaway, you could re-frame it as a question. Here’s an example of how Thrive Themes did it:
They ask you a question instead of pushing the sign-up form on you. The sub-headline, “Let us know, maybe we can help!”, reinforces the idea of helpfulness vs. pushiness. These two elements reduce psychological reactance. Now the user can consciously pick the goal they are interested in (making an active, somewhat effortful commitment that does not feel coerced).
And only after that is she presented with a request.
With the first commitment having been made, the reader’s self-perception can now be framed as “I want to design a cool website; I chose it myself”.
We were able to get our conversion rate up from 5.5% (the control) to an impressive 11.7% by changing from a simple opt-in to a multiple choice opt-in form.
Marketing Team Lead at Thrive Themes
It doubled their conversion rate. Very impressive indeed.
Similarly, Duolingo uses the principle of consistency & commitment to convert visitors into active users. Everything starts with a declarative statement, “I want to learn…” where a visitor needs to choose a language (an active commitment).
Straight after that, Duolingo asks me to decide how much I want to commit to language learning (another active commitment).
This is very smart as then they can send me emails with “remember, you set this goal for yourself”. I checked it and that’s exactly what they do:
Again, using the knowledge from Burger’s meta-analysis, we can improve the way Duolingo gets a commitment from their users. Research has shown that labeling people can have pronounced effects on their behaviour. When participants’ behaviour was attributed to positive internal traits (eg. calling someone “cooperative” or “helpful”) compliance rates were higher.
Similarly, Duolingo could label users based on the commitment they made. If I chose “insane” as my commitment level, they could show me this message:
Supposedly, that label would make me more likely to keep using the app on a daily basis.
In the past, the commitment part was pushed further down the onboarding process (you can see the full overview by Samuel Hulick here). The idea was that they provided some “aha moments” first and then requested the user to make a commitment.
It’s interesting that they pushed it towards the beginning of the funnel. We will discuss why this might have been a bad choice in the 8th part of this series, where we discuss sequencing.
Another great example comes from Naked Wines. Before you can use their services, they screen you on whether you would be a good fit for them.
Let’s do it!
Now, I told them I consider myself to be a wine enthusiast who would prefer a wine with a story. I have not only made several declarative statements about who I am, I have fully laid out how they can tie up my identity to their marketing.
To reward or not to reward?
There is mixed evidence on how offering extrinsic rewards affects the effectiveness of the commitment and consistency principle. It’s important to note that neither Duolingo nor Naked Wines offer any incentives in return for the commitments we make (Naked Wines gives you a voucher, but it is not mentioned in the copy).
Generally, studies have shown that when we are rewarded for a commitment, we are less likely to comply with a subsequent request. This is because we attribute our initial commitment to the reward, not to our internal qualities (e.g. being “determined” or generally liking certain types of wine). The greater the reward, the less committed the person is to the act (according to a study by Kiesler and Sakumara).
With that said, all these studies used money as an extrinsic reward. This is rarely the case with our online marketing activities. Instead, we offer ebooks, PDF checklists, etc. Those types of incentives have not been explored in academic research. I personally believe that ebooks and PDF checklists are a sufficiently small reward for a person to still feel committed to their action.
So, ideally you would not use rewards at all. For example, there is no way I would attribute my Duolingo commitment to anything external. It’s 100% clear that I made this commitment because I want to learn a language. This gives them full power to nudge me to stay committed to that goal (and hence keep using the app).
However, the standard perks we use in online marketing should not hurt too much. (Plus, as we will explore later in this article, rewards can strengthen our commitment.) Testing is the only way to answer this question with certainty, though.
Optimisation of a check-out funnel
You have probably already read about Barack Obama’s 2012 presidential campaign. The team behind it re-designed the donation form using the commitment and consistency principle. This resulted in a 5% increase in the conversion rate, adding millions of incremental dollars.
They split the check-out process into 4 steps, thus splitting the target request to donate into a series of smaller commitments. It worked.
What was missed in other overviews of this campaign is the way copy was re-framed. Notice, the new form asks, “How much would you like to donate today?” vs “Donate now”. This again overcomes the psychological reactance, showing the visitor that there is no threat to their freedom. Moreover, “How much would you like to donate” transforms the first commitment into a declarative statement about a person’s intentions, prompting one to think “I(!) want to donate that much”.
Obviously, the first commitment is active, as you have to select an amount, and it is not effortless.
As with the campaigns above, the strength of this implementation is that it does not offer any rewards. In terms of improvements, we know that labelling could have been used to tie one’s self-image to their action. For example, upon clicking “Continue”, one might have been shown a message, saying “That’s so generous of you. We appreciate your support.”
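The funnel redesign above boils down to a simple pattern: never show the whole form at once; always ask only for the next small commitment. A minimal sketch of that sequencing logic (the step names are hypothetical, not the campaign's actual form fields):

```python
# A sketch of splitting one big "Donate now" request into a sequence
# of smaller commitments. Step names are hypothetical illustrations.
STEPS = ["choose_amount", "personal_info", "payment_details", "confirm"]

def next_step(completed):
    """Return the next small commitment to ask for, or None when all are done."""
    for step in STEPS:
        if step not in completed:
            return step
    return None

print(next_step([]))                 # choose_amount
print(next_step(["choose_amount"]))  # personal_info
print(next_step(STEPS))              # None: the donation is complete
```

The design point is that each call asks for exactly one commitment, so by the time the visitor reaches `confirm`, abandoning the form would be inconsistent with three commitments already made.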
Finally, the principle of commitment and consistency can be used for getting/closing sales leads. This is not a post on sales techniques, so I will only cover how you can generate leads through digital media.
Crazy Egg offers a great example. When reading their blog, here’s a Qualaroo survey that you get. (Notice how you can skillfully use Qualaroo not just for research, but also lead generation).
Fun fact: Hiten Shah told me that at the moment their priority with this form is customer research, but we both agreed that it could just as effectively be used for lead generation.
It starts with a non-intrusive question, “Would you like to know which problems on your site are driving people away?”. Notice that it also does not use any manipulative copy such as, “No, I want to keep losing customers”. Emphasis on the you again attempts to tie this commitment with our self-image.
After having made an active commitment, a visitor has to actually put some effort in and explain what her problems are.
The next question is the target request. Notice how it has been built on top of all the previous questions.
The copy remains non-intrusive: “Would you be open to…?”.
The final step is again a logical progression of previous commitments. Giving an email address is a sufficiently small but valuable commitment relative to the subsequent phone-call offer.
So, again Crazy Egg combines active, effortful and non-coercive elements of commitment making for maximum effects.
I asked Hiten Shah if they used direct sales approaches (eg. asking “Would you be open to a 20 minute call about…” straightaway), and if yes, how did they compare to gradual, commitment-based approaches?
Here’s what he told me:
We have found that the direct approach is much less effective. We have tested both approaches and this one works much better. It is up to 2 times as effective in getting us people who are willing to get on the call.
Founder of Crazy Egg and Kissmetrics
It’s difficult to think of how you could make all the commitments above public, but you could introduce publicity once a visitor has become familiar with what you are offering.
For example, Duolingo could prompt users to publicly share their goal of learning a new language once they completed some of the exercises.
We will cover public commitments in more depth in the next section.
Long-term impact of the commitment and consistency principle
Cialdini’s work on influence is not limited to immediate conversions. Persuasion is a different animal: according to Cialdini, it is focused on “the change in a private attitude or belief as a result of receiving a message.”
That leads us to the idea that we might use the principle of commitment and consistency to alter people’s attitudes towards your brand or product (or both). By getting people to take a favorable stance towards your brand, you can set them on a path where it would be inconsistent to switch to a competitor.
Researchers have explored how this effect could be induced through referral programs. The intended goal of referral programs is to attract new business. Yet, many don’t realise that, due to the commitment and consistency principle, recommending a company to a friend strengthens the bond between the firm and the recommending customers themselves. This makes existing business more stable. For example, churn rate might go down.
Indeed, this is what a 2013 study has found. A group of researchers examined the impact of participating in a referral program on customers of a global cellular communications provider. They operate on a subscription-based model, so SaaS folks might find these results particularly interesting:
Researchers found a significant churn-reducing effect from participation in a customer referral program. Twelve months after participating, the probability of being an active customer was 93% for participants (of the referral program), but only 81% for non-participants.
Moreover, the average monthly revenue for participating customers grew by 11.4% compared with a matched control group.
The principle at work is the same. After publicly recommending the company, the customers’ attitude towards the company became more favourable, to be consistent with their own behaviour.
Madlen Kuester and Martin Benkenstein show that this could go as far as turning dissatisfied customers into satisfied ones. Their study showed that recommending a firm enhanced attitude and loyalty towards the recommended provider despite users’ prior negative experience with that firm. In simple terms, users who were dissatisfied with the firm’s service in the past became more loyal after recommending it. They rationalised it backwards: “If I recommended the company, then they offer a good service” (a change of attitude despite prior negative experience).
(Note that this study was conducted on a group of 120 undergraduates in a non-real-life setting, so whether the same impact can be produced in real life remains an open question; but theoretically, yes, it can.)
To reward or not to reward? Re-considered.
Oddly enough, the 2013 churn study also found that the larger the reward offered for referring a customer, the stronger the effect on attitudinal loyalty. How can this be the case? Haven’t we previously said that rewards are bad for keeping a person committed to their previous action?
The devil is in the detail.
Based on research, it seems that rewards might have a negative impact when you are working with people who have no pre-existing attitude towards your brand or had a negative experience with your brand. In that case, they would be more likely to justify their actions in terms of extrinsic rewards they received.
However, according to positive reinforcement theory, when customers already have a positive attitude towards the brand (remember, people participating in referral programs are likely to be existing customers; why would they recommend something to a friend if they did not like it?), a larger reward can strengthen that attitude. So, whether to offer rewards or not would depend on the situation you are dealing with.
Let me reiterate. When you need to create a positive attitude or reverse an existing negative attitude, you should avoid using rewards (if using, the smaller the better). When you need to strengthen an existing positive attitude, rewards can serve you well (the larger the better). At least that’s what these findings suggest.
The how-to of building a commitment through a referral program
When it comes to execution, all the same principles apply. For greatest results, a referral needs to be an active, effortful, public commitment that does not feel coerced.
A referral program used by Typeform ticks all those boxes. It’s important to point out that in contrast to many other referral programs, it allows you to customise the message that will be sent out. Most referral programs just ask you to enter your friends’ email addresses. This is great in terms of simplicity, but you might be missing out on an opportunity to strengthen loyalty of your existing users.
By allowing them to change the message, you create a situation where
1) if they did not change it, that still means that they agree with what is being said (= more active commitment)
2) if they do change the text (telling about their positive experiences), you are building an even stronger commitment because then it also becomes more effortful.
Are you right on time?
Finally, timing is crucial. The 2013 study found that the effect was strongest for customers with low expertise in the service category and little experience with the provider. This means that engaging customers in the referral program at an early stage of the onboarding process would produce the maximum effect on their loyalty.
The same principle is at work when customers are voluntarily leaving testimonials. Each testimonial is a public, declarative statement about their experience with your brand. If it was positive, then your customers are likely to use it as a frame of reference when thinking about which brand they prefer.
Basecamp shows testimonials from 1,000 of its customers on its website.
Each is an active, public, effortful statement about their experience that was not coerced.
I explored the commitment & consistency principle with all of its peculiar subtleties. It appears that its application is not as simple as breaking a larger request into a series of smaller ones. For it to have a substantial, lasting effect, the first commitment needs to establish (or reinforce) a certain self-image within a person.
That self-image would act as a frame of reference for a person to decide whether to commit to subsequent requests or not.
For maximum chances of establishing a desirable self-image, the first commitment needs to be:
Active
Effortful
Public
Freely chosen (uncoerced)
To enhance the likelihood of inserting that self-image, you can:
Label a person to possess a certain internal characteristic after they took a certain action (eg. you’re “determined” or “you care about your employees’ well-being”)
Frame your copy in a way that makes the visitor think in declarative statements about herself (or better make her write them).
To make sure the process goes smoothly, think about how the principle of commitment & consistency interacts with other principles of influence:
Does your second request seem like a logical progression of the first one?
If not, the norm of reciprocity will likely backfire.
To counteract it, you should either provide something of value first or wait at least 2 days between making 2 requests (or both). Top tip: make sure to remind the person of their first commitment when making the second request.
Use social proof to show that it’s a norm to comply with a request you are making.
If planning to use rewards:
Offer small rewards when trying to get a commitment towards something that a user does not have a pre-existing attitude towards (eg. a new reader or a user on your website would not have a strong attitude towards your brand)
Offer large rewards when trying to strengthen a commitment of existing customers
As we have seen, this principle can be successfully applied to improve email signups, user onboarding, check-out funnels and lead generation. More importantly it can strengthen loyalty of your newly acquired customers (for example, through referral or testimonial programs).
(With the release of Cialdini’s new book, Pre-suasion, I will do an in-depth exploration of each of his 7 (not 6!) principles, and of how best to sequence them, over the course of an 8-part series.)
Ever wondered why helping influencers on Twitter doesn’t create a relationship?
Tired of giving well-researched content away without getting even a handful of shares, let alone email addresses?
Baffled that free trials aren’t converting into paid subscribers?
You might be overlooking some important reasons why reciprocity may not work as well in the online world as it does offline.
I have been reading posts about reciprocity that make bold statements about how it works online, but they oversimplify. There have been countless times when I did not reciprocate at all, even though people’s content was outstanding.
It can’t be just as simple as give and take, can it?
So, I decided to do some research. I am not an anthropologist. I cannot give you a full explanation about what role reciprocity plays in our cyber lives. But, I have spent more than 50 hours reviewing existing research and examining whether that rule would work online. Here’s what I found.
I scoured through 20+ research papers so that you don’t have to. Here are the 14 most valuable insights.
Understanding why reciprocity may not work online
Cialdini’s rule of reciprocity may not work for your online marketing. Here’s why:
People are less likely to respond in a click and whirr mode
People process information in two ways. One is when we rely on heuristics (mental shortcuts) to make our decisions (eg. a book has many positive reviews, so it must be a good one). The other is when we are more deliberate with our decisions (eg. reading through reviews, previewing table of contents). This concept is known as the elaboration likelihood model.
All 6 principles in Cialdini’s book “Influence” are based on the idea of a click and whirr response (the idea that we take shortcuts when making decisions).
This is an important point that many people miss when suggesting that you can apply Cialdini’s principles of persuasion just as effectively online.
Contrary to that belief, existing findings suggest that we are more likely to take the long (deliberate) route if:
The topic is of importance to us
We have the cognitive resources available to process the message
We know something about the topic
The arguments are written
Exactly right. We can tick all these boxes for our cyber lives.
Yes, we do sometimes get bombarded with emails and notifications, and get distracted by co-workers and relatives. That leaves us with less cognitive power. We also watch sales videos, where our focus shifts back and forth from the content of the message to its source.
Most of the time, though, we consume content in privacy. We are able to control our environment (various distractions), what we read, and what topical websites we get it from. Across all generations, blog articles were found to be the most consumed type of content (according to research by Fractl and Buzzstream).
This means that when we are online instead of taking shortcuts, we are more likely to make deliberate decisions about who we reciprocate to.
To make the point clear, let’s consider a couple of examples.
In the book “Influence”, Cialdini cites Dennis Regan’s study, in which a man called “Joe” managed to double his ticket sales by first doing a small favor. Before asking people to buy some tickets, he (unexpectedly for the recipients) offered them a bottle of Coke. Feeling obliged, they purchased. Well done, Joe!
Sam Parr, founder of Hustle Con, repeated the same process. He negotiated a motorbike’s sale price down to $1,800 (a $400 saving!) by gifting a $1.99 Coke before starting the negotiation. Here’s how the conversation went, in his own words:
But in the online world, even if you are genuinely trying to help someone, people’s thinking process might look like this:
Me: Wow, what a cool episode. I think that the host of this podcast might also like reading a blog post on a similar topic.
(8 hours go by since we are in different time zones)
Host: (opens the email) Who is he and why is he sending me this?
Host: (an hour goes by, gets some work done, opens the email again) So why did he send it? Is he trying to get something from me? Is it worth my time? Well, whatever, I have got some work to do.
Host: (closes the email, end of the conversation).
In our cyber lives we have more time to think about the authenticity of other people’s motives (and any reason to doubt it has been shown to reduce the power of requests).
We have more control over what we devote our attention to and your “gift” might not get that attention. Even if it does, it might not even be considered a gift.
And the recipient may feel fine about not saying “thank you” or suggesting something back in return. Here’s one of the reasons why:
People are less likely to reciprocate when they can stay anonymous
In the book Cialdini tells us one of the reasons why we reciprocate is, “because there is a general distaste for those who take and make no effort to give in return, we will often go to great lengths to avoid being considered a moocher, ingrate, or freeloader.”
Well, when you are just an anonymous user, this won’t happen. For example, you can use Kaspersky’s online scanner, close it, and no one will ever know whether you complied with the author’s request to install an offline version of their software.
There is no one to apply that social pressure.
But research shows that social pressure matters a great deal. A field experiment (sample size of 180,000+ people) examined the impact of social pressure on people’s likelihood to vote. They found that a mail letter that applied the most social pressure resulted in the highest compliance rate. Here are the triggers that this mail letter included:
An identity appeal emphasising that voting is generally a good thing to do (“DO YOUR CIVIC DUTY— VOTE! Remember your rights and responsibilities as a citizen.”)
A social pressure appeal informing them that their voting records would be disclosed to their neighbours (and their neighbours’ voting records would be disclosed to them).
Those two triggers resulted in an 8.1 percentage-point uplift (from 29.7% to 37.8%). (A mailer that included only the identity appeal resulted in just a 1.8 percentage-point uplift.)
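A quick aside on reading such numbers: an uplift in percentage points (the absolute difference between two rates) is not the same as a relative uplift (the change as a share of the control rate). A tiny sketch to keep the two apart, using the voting-experiment figures above:

```python
def uplift_in_points(control, treatment):
    """Absolute uplift: the difference in percentage points."""
    return round(treatment - control, 1)

def relative_uplift(control, treatment):
    """Relative uplift: the change as a percentage of the control rate."""
    return round((treatment - control) / control * 100, 1)

# Identity + social-pressure mailer (37.8%) vs control (29.7%)
print(uplift_in_points(29.7, 37.8))  # 8.1 (percentage points)
print(relative_uplift(29.7, 37.8))   # 27.3 (% relative increase)
```

When an A/B testing tool reports "a 27% lift", it usually means the relative figure; quoting it as "27 percentage points" would wildly overstate the result.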
The experiment above studied compliance, not reciprocity per se. Researchers did not give anything to people before making a request. So, unless you want to argue that citizens are by default in a reciprocal relationship with the government, we shall look for some further evidence.
Indeed, one study found that people who received a favor and were not required to put their name on the form were less likely to comply with a donation request.
In contrast to our offline lives, when we are online we can easily choose to ignore other people’s requests. The lack of social pressure means that our identities are not at risk. This is particularly true when…
The person you are making a request to is neither dependent on nor affected by you in any way
When we are interacting in the real world, we can see, in real time, how our actions affect others and how we are being affected by them. When we are online, these effects are not clear. In fact, many interactions we have online are one-off encounters where we are not trying to build long-term relationships. So, we might as well just ignore them.
That’s exactly what Neville (founder of Kopywriting Kourse) does. He clears his inbox of all these one-off shooters to leave space for those who have something important to say:
Neville is not the only one.
Research on social dilemmas shows that this is a general human tendency. Studies show that one-shot encounters encourage selfish rather than cooperative responses. People who expect only a single, anonymous exchange with another person will tend to favor themselves rather than select mutually beneficial options.
That’s why many people who get a lot of value from the free version of your product still won’t share it. Unless they see their actions as interdependent with their ability to use it in the future (e.g. many start-ups would cease to exist unless early adopters spread the word), they are more likely to act selfishly.
Similarly, influencers you are trying to reach receive hundreds of similar random emails per day. Most of them are perceived as one-off shots. Unless there is an indication that you might have some business in the future, it’s better for that person to act selfishly and ignore you.
You’re expected to give stuff away. It’s not a favour.
In the book Pre-suasion, Cialdini mentions that unexpected gifts are the ones most likely to result in return favors. But the things we give away online (e.g. a free trial, useful content, an ebook) are expected. Everyone is doing it, and you’re no different. That’s why people don’t even perceive them as favours, but rather as a given.
Perhaps, a free trial might have surprised me back in 1999.
Certainly not today.
There are people who won’t reciprocate anyway unless they see a benefit in doing so
Several papers, including this one, discuss different ‘personality types’, or social value orientations as they are called. Your social value orientation determines the nature and likelihood of you reciprocating to others. There are 3 of them:
Prosocials – those looking for win-win cooperation; they believe that it is better if everyone comes out even in a situation.
Individualists – those who are only concerned with their own outcomes. They make decisions based on what they will gain personally; no concern for others.
Competitors – those who tend to maximize their own benefit, but in addition they seek to minimize others’ benefit, e.g. they find satisfaction in imposing their own will when cooperating.
So, assuming that people consistently belong to one type or another, not everyone will reciprocate regardless of what you have done for them. Individualists consider only how potential cooperation affects their own outcomes, so unless you give them a good reason to cooperate, it won’t happen. Studies support the idea that individualists reciprocate less than prosocials.
Understanding why reciprocity will work online
But here’s why it will work (and what you can do to overcome the caveats above)
Many humans will reciprocate regardless
As Cialdini says in the book, “the rule for reciprocation is so deeply implanted in us by the process of socialization we all undergo [eg, we learn it from our parents.]”. In all of the studies above, researchers found that these factors reduced people’s tendency to reciprocate, but none found that it was eliminated completely.
It’s not just social pressure that forces us to reciprocate.
As researchers explain, some people reciprocate because they have internalised reciprocity as a personal norm. It becomes part of our moral values, and thus failing to reciprocate can create feelings of guilt, even when no one pressures us to repay.
For example, Mark Schaefer, author of Return on Influence, shares how he used to thank everyone on Twitter for sharing his posts. His explanation of why he did this closely resonates with the research described above. His remark that “Mom would be proud” clearly shows that this is something we learn from our parents. He carried that value into his online communications, too.
We also want to reward and encourage good behaviour of others.
Finally, as we have already discussed, there are different personality types. Prosocial people will want to maximise outcomes both for you and themselves, striving for equal benefit for both parties.
One of the very smart people in our industry who I had the privilege to learn from has this quote in his signature:
Likely to be one of those types.
Lastly, our tendency to think deliberately about whom we reciprocate to does not mean we stop reciprocating. It just means we are less likely to be manipulated and more likely to respond to those who provide real value.
How to maximise your chances of success?
With that in mind, let’s consider what you can do to overcome the caveats I have just described. We will go through each action tip by considering 3 main use cases for the rule of reciprocity: building relationships with people in our industry, content marketing and user onboarding.
You can provide meaningful gifts
In their books Pre-suasion and Yes! 50 Secrets from the Science of Persuasion, Cialdini and his colleagues mention meaningfulness (or significance) as a factor that increases your chances of making someone reciprocate. I want to argue that in the online world meaningfulness is a core element, not an enhancer. Since we are more likely to be deliberate about whom we reciprocate to, unless we see value in what has been given to us, we are very unlikely to exchange anything with that person.
I interpret ‘meaningful’ as something that helps us make progress in our lives (this will particularly make sense to those familiar with the Jobs-to-be-Done framework). Here’s how you could apply it.
When Sam Parr was starting out with Hustle Con, Neville Medhora, founder of Kopywriting Kourse, was on his target list of those he wanted to invite. From watching Bootstrap Live he knew that Neville loved Dave Matthews Band.
Guess what he did to build their relationship.
Yep, he sent Neville a DMB live DVD.
Here’s what Neville’s response was:
Going back to my definition of ‘meaningful’, Jobs-to-be-Done is the core research framework for understanding people’s struggles. Once we know what our customers’ struggles are, we can write the thorough, actionable content that helps them overcome those struggles.
Gregory Ciotti, content marketer at Help Scout, puts it this way:
If someone implements your advice, they can’t help but form a connection with you.
Content Marketer at Help Scout
To give you an example of how jobs have been solved and what effect they produce, let’s consider Brian Dean’s work.
Many people creating content struggle to make their work visible. They struggle despite implementing the good ol’ advice of creating “great content”.
That’s exactly the struggle that Brian addresses in his link building article.
And here are the types of responses that Brian occasionally gets:
This shows the emotional connection that meaningful gifts like this create.
Similarly, SurveyGizmo extends your trial subscription without you asking for it. I considered it a meaningful favour because it let me carry on with my research instead of having to ask them for an extension, or having to build a business case for my management to approve a subscription.
Similarly, Basecamp’s whole onboarding process (or ‘lack’ thereof) was a meaningful ‘gift’ for its users:
You can exceed expectations
As I have already mentioned, Cialdini’s research shows that unexpected gifts are more likely to make the other person repay you.
We have just seen Neville’s response to the DMB live DVD. The second reason it worked is that it was unexpected, even more so because it shifted their relationship from online to real life.
People expect you to produce content, but you can still go beyond their expectations by producing really great content.
For example, another type of comments Brian Dean gets look like these:
His content is so uncommonly actionable, detailed and well-designed that he continuously exceeds people’s expectations.
Similarly, the trial option offered by SurveyGizmo exceeded my expectations because it was not the standard for software industry.
Other SaaS companies like Slack design unusual user experiences, e.g. its slackbot setting you up, another addition to its slick onboarding experience.
You can break anonymity by showing your own face…
One study has shown that reversing anonymity increases compliance rates. A group of people was emailed and asked to fill in a questionnaire. Some received an anonymous email, while others received an email with the sender’s picture. The latter were more likely to comply with the request (83.8% vs. 57.5%).
This is exactly how Stephen Twomey increased his success rate with cold emails by more than 300%. He used that technique to get press coverage (social proof to boost your conversions!), backlinks from reputable publications as well as to connect with influencers in different industries.
He just added a picture at the bottom of the email. Here’s an example from one of the projects he worked on:
Result: for the 50 emails he sent without the picture, he received 3 guest post requests. For the 50 emails he sent WITH the picture, he received 13 (a 300%+ increase).
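As a quick sanity check on that figure, the uplift can be computed directly. A minimal sketch using the counts reported above:

```python
def relative_uplift(control_conversions, control_total,
                    variant_conversions, variant_total):
    """Percentage increase of the variant's conversion rate over the control's."""
    control_rate = control_conversions / control_total
    variant_rate = variant_conversions / variant_total
    return (variant_rate - control_rate) / control_rate * 100

# Stephen Twomey's cold-email test: 3 of 50 replies without
# the picture vs. 13 of 50 with it.
print(round(relative_uplift(3, 50, 13, 50)))  # → 333
```

A jump from a 6% to a 26% response rate is a 333% relative increase, consistent with the “300%+” claim (though with samples of 50, the exact figure is noisy).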
Many blogs still have faceless author contributions, even in the online marketing industry.
Obviously, being able to see the author’s face makes the connection more humane. Example from Conversion.com:
For user onboarding, you can use tools like Intercom, so that people know who they interact with.
…and making them show theirs
This is the most important part. As we have discussed, when our identities are not at risk we are less likely to comply.
The best strategy with cold email is being referred by someone. Not referred to sell something, but referred to help. In that case the person on the other end knows that you have their right email address and that you are part of their in-group, meaning their identity is on the table.
With content, you can use platforms such as Disqus. They keep a trail of all the previous communications a person has had. For example, when I click on Bryan Harris’s comment, I can see the history of his past communications. That means that when Bryan Harris leaves a comment on someone’s website, he does not just leave an opinion, he leaves his opinion.
For user onboarding, use Intercom. It integrates with FullContact, which uses a person’s email address to gather all the publicly available information on that user. So when chatting with someone you are not talking with user #15345, but with a real person. You have access to their job title, social profile, etc.
There is one caveat to Intercom. When I am chatting with someone from a support team, I don’t necessarily know if they know who I am.
But if you play it smart you can still build a very personal connection. For example, go above and beyond in your customer service by utilising the information that becomes available to you. Look at what they did here:
And help your customers when they need it. Here’s an example of me asking for extra access to Buzzsumo subscription. To do so I surely needed to disclose my personal circumstances, again putting my identity on the table.
More importantly (although it’s harder and won’t be perfect), you can still apply social pressure and establish social norms online
For this to work, you need to place yourself in a community, so that you are not reaching out to someone as if you exist in a vacuum. Instead, you are reaching out with both of you sensing that you co-exist in the same community, where rules are set and reinforced by the collective mind. Those rules govern who gets rewarded and who gets punished for their behaviour.
Let’s consider some examples.
When David Garland, founder of The Rise To The Top, wanted to connect with Tim Ferriss, he did not just send him a bunch of cold emails. Instead, he used a combination of different mediums including Twitter, his personal blog (and email), making his attempts to connect more public.
Did Tim Ferriss start feeling guilty for not replying to any of those comments (consequently creating a sense of obligation)?
I don’t know, but as we have already discussed, publicity increases compliance. It might have had some effect when they connected via email. Ultimately, the result is all that matters. In David’s case, Tim agreed to participate. Twice.
(we will cover other aspects of David’s strategy later).
For content marketing, social rewards can come in the form of likes and upvotes…
Social costs can come in the form of being ‘excluded’ from a community for not contributing. For example, Moz excludes you from its ranking unless you have logged in within the past 60 days.
It can also come in the form of not getting support from others when you need it. Here’s an example from GrowthHackers.com:
If building a community platform sounds too complicated, could you build an end of the year wall of fame for users who shared your content (and unexpectedly give them something as a gift)?
Surely you could. Just use Buzzsumo and you will quickly find everyone who shared your posts. It’s quick and easy.
Memrise, a language learning app, applies social pressures to user onboarding, too! They made me feel guilty for not catching up with my class.
They also push me forward by showing me how I compare to my peers:
Even if your SaaS product does not have a community around it, can you assign people to demo classes and tell them that they are not catching up? You should test that!
You can encourage cooperative behaviour by sending long-term signals
As we have already said, if people don’t see that achieving their goals depends on cooperating with you, the chances of them cooperating automatically go down.
To reverse that, you need to send signals that you are staying in the game for a long period of time.
With cold outreach, there are numerous things you can do.
#1. Be persistent.
David Garland from The Rise To The Top has not been helping Tim Ferriss for just a couple of months; he provided him with favours over a period of 2 (TWO!) years.
For the past two years I’d been building my platform, helping Tim out to the best of my ability in various small ways, including retweeting his content, writing about him on my blog, mentioning him on the show, reaching out occasionally with an idea, etc.
David Siteman Garland
Founder of The Rise to the Top
As you remember, persistence is what Neville Medhora (and many others) evaluate in you when you are sending them a message.
#2. Show your affiliation with people in that person’s in-group.
Sam Parr, who I mentioned previously, asked his friend Joey Mucha for an intro so he could invite Rick Marini, founder of BranchOut, to his conference. In this particular case Sam was not really doing Rick any favors; he was approaching him with an offer.
BUT! With his follow-up email, Sam did Rick a favor (of sorts) by not being sales-y (many people are), and instead reaching out to him in a very personalised, caring way:
This made a good impression:
And Rick came to the conference.
I used the following template to reach out to people who I thought could benefit from my research skills:
With content, you can create interdependence by making it likely that people will want to ask you a question.
We have seen it done at GrowthHackers, Moz…
On a standard blog, it seems that by answering people’s comments you can foster reciprocation.
Here’s an example from Brian Dean’s blog:
See this “Added to my buffer” spin by Sam Oh? A nice and subtle use of the reciprocity rule. Obviously, I don’t know the full story, but he might not have shared the post had he not needed to ask this question. Thinking of long-term effects, I can see that Sam shared Brian’s content at least 13 times to his 7000+ Twitter audience (no causation suggested here, just an observation).
So, provide valuable, detailed advice and make sure people know that you will help them when they need it.
I don’t think this one applies well to user onboarding. You can’t frame it as, “Hey, I gave you access to a free trial, so could you please complete your profile and set up the integrations, so that you can see how valuable our product is and ultimately sign up?”.
“If you don’t do it now, I won’t give an extension in the future” (creating long-term interdependence).
Despite that, I have a great example for you (at the end of this article) of how one company used incentives and interdependencies to triple their trial-to-paying conversion rate.
You can show the sacrifices you make
Another factor that I did not see Cialdini mention in his book is “the degree of sacrifice experienced by the donor”. Researchers suggest that the more a person thinks you sacrificed something substantial to provide a gift, the more obliged they will feel to give something back. In my own interpretation, it’s about communicating or showing how much effort you have put into something.
Here’s how I (unknowingly) used it to build relationships with people:
Not only does the fact that I read his book show my degree of involvement with that person’s work; the rest of the email is a 1028-word “essay” where I provide a new perspective on what he has been working on. It might not always work, but in this particular context I genuinely thought that this level of detail would add a lot of value. In the end, we had a very meaningful exchange around the topic. Moreover, using skills I learnt from copywriting books, I tried to make it as engaging as possible, to strike a balance.
When people publish content on their blogs, all they do is hit publish and hope others will appreciate it. But readers don’t know how many hours you spent and how many late nights you pulled before hitting that blue WordPress button. Maybe you should be telling them. At least that’s what the research suggests.
A couple of days ago I saw this post trending on GrowthHackers. From the title I sensed it was another collection of random tips, but when I opened it, the description below made all the difference.
This has also been successfully applied to user onboarding.
That person went to great lengths to explain what she did, what worked and what did not. She included screenshots. I could feel the “sacrifices” she made before getting back to me. This is when I thought that Mailchimp is not actually that bad. I still feel obliged to stay with them, although I already have all my campaigns working in GetResponse.
Finally, you don’t need to put all your hopes into reciprocity
Many articles out there seem to suggest that if you just publish good content and give your customers a free trial, people will start giving back.
That’s ok, but gets you very little in the way of results.
Reciprocity works for building relationships. Reciprocity works for creating preferences for your product, not your competitor’s (at least one study suggests that).
It sometimes works to get people to enter email addresses into your sign up form.
And it rarely works to get them to enter their credit card details and click “pay”.
This is the key action you want people to take: not just sharing your content or leaving comments on your blog, but paying for your product.
That’s where incentives are most powerful though. Give people a good reason why they should purchase your product or enter their email and they will be more likely to do so.
In fact, Optimizely generated more than £3 million in pipeline opportunities using incentives alone. They offered an Apple Watch to a targeted list of high end executives in exchange for a meeting and got an 8% agreement rate.
The rule of reciprocity is one of the many tools you can use and solely relying on it would be plain stupid.
Many big brands follow that strategy on their blogs though. Adidas is one of them:
In contrast, guys at SnackNation, a healthy snack delivery service, offer an incentive for you to join their subscriber list. Free, great content serves the role of establishing the relationship. A clear incentive serves the role of getting them the email addresses. Here’s an example from their wildly popular article, “121 Proven Employee Wellness Program Ideas For Your Office”.
Similarly, when David Garland invited Tim Ferriss to be interviewed on his show, he did not just assume that all their previous interactions would pay off by themselves. Instead, he tied the invitation to an incentive: Tim Ferriss had just launched The 4-Hour Body, making the show a perfect opportunity for him to spread the word.
This made it a no-brainer.
Finally, one SaaS company used interdependencies (as well as incentives and an element of gamification) to triple its trial-to-paying conversion rate.
The team at ProdPad, a product management software, cut its trial from 30 days to 14, then to 7 days.
Once they cut it down to 7, they decided to allow flexible extension for every important step you complete.
Set up an integration? 2 extra free days. Added a mockup? Another 2 free days.
The reasoning behind it is obvious. The quicker you can get a trial user from “first touch” to “first value”, the more likely she is to become your happy, paying customer.
Instead of leaving it as a 30-day trial and hoping that users will have enough time to make full use of the product, the team at ProdPad set up an incentive-based system where every desirable step is rewarded with a trial extension. They even gamified it (notice the progress chart and green ticks)!
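The mechanics behind such a system are simple to model. Here is a hypothetical sketch: the 7-day base and 2-day rewards come from the description above, while the step names and the 30-day cap are my assumptions, not ProdPad’s actual implementation:

```python
from datetime import date, timedelta

# Hypothetical rewards per completed onboarding step, in extra trial days.
# Step names are illustrative; ProdPad's real list will differ.
STEP_REWARDS = {
    "set_up_integration": 2,
    "added_mockup": 2,
    "invited_teammate": 2,
}

class Trial:
    BASE_DAYS = 7          # the shortened trial ProdPad settled on
    MAX_TOTAL_DAYS = 30    # assumed cap, roughly the original trial length

    def __init__(self, start: date):
        self.start = start
        self.extra_days = 0
        self.completed = set()

    def complete_step(self, step: str) -> None:
        """Reward each desirable step once with a trial extension."""
        if step in STEP_REWARDS and step not in self.completed:
            self.completed.add(step)
            self.extra_days = min(self.extra_days + STEP_REWARDS[step],
                                  self.MAX_TOTAL_DAYS - self.BASE_DAYS)

    @property
    def expires(self) -> date:
        return self.start + timedelta(days=self.BASE_DAYS + self.extra_days)

trial = Trial(start=date(2016, 5, 1))
trial.complete_step("set_up_integration")  # +2 days
trial.complete_step("added_mockup")        # +2 days
print(trial.expires)  # → 2016-05-12 (7 base days + 4 earned)
```

Capping the total length keeps the incentive from undermining the urgency that the shorter trial created in the first place.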
So now, as a user approaches the end of her trial, she doesn’t think, “I should start being nice to their team so that they extend my trial in case I am over the limit”. She knows right from the start how her actions and her ability to keep using the app depend on each other.
Again, this shows that adding incentives to what you give away for free can substantially increase the power of your persuasion efforts. Tripling your trial to paying conversion rate is not bad at all.
Reciprocity is a powerful persuasion tool, but it comes with its caveats when applied online. Beware of the reasons why it is less likely to work and overcome these challenges with the techniques I described above.
Here’s a full summary for your reference:
Provide meaningful, targeted gifts (tip: research the person, Sam Parr watched conference videos to find out what Neville Medhora was into)
Exceed their expectations (eg. send them a physical gift)
Break anonymity by showing your face (eg. include a selfie at the end of your email)…
…and look for a way to ensure they show theirs (eg. find a way to be referred by someone)
Use social pressure to your advantage (eg. connecting via an active community or social media adds publicity to your and the target person’s actions)
Make it clear that this is not the first time the two of you might engage with each other (eg. be persistent and show affiliation with that person’s in-group)
Explain the sacrifices you made to provide that gift, in a subtle way (eg. explain how difficult it was to find, but you did X, Y and Z, and finally got it, or just show the effort you went to)
Write meaningful material that helps your target prospects to make progress in their lives (tip: use JTBD framework)
Exceed people’s expectations by creating visually attractive, insanely detailed and super-actionable content
Show your face in the author section
Make users show their face when interacting with you through the blog (eg. integrate your comments section with Disqus)
Create a community in order to enforce the social pressure (this involves having a publicly observable rating system as well as appropriate rewards and punishments that reinforce desirable user behaviour)
Make use of interdependencies between you and your readers (eg. by giving meaningful help through the blog comments you might encourage readers to give something to you, in anticipation that they might receive something meaningful back from you)
Show the sacrifices you made to write that article (eg. number of hours spent, interviews conducted, core challenges you had to overcome)
Provide users with meaningful favours (tip: find out what their struggles might be at different points of the onboarding process, use JTBD and on-site surveys to get meaningful, contextual data)
Exceed users’ expectations by researching your industry and creating unexpected, meaningful experiences
Use tools like Intercom to create personal, non-anonymous connections with your users
If applicable, create social groups within your app in order to generate social dynamics
If applicable, use copy notifying the user how they perform relative to the rest of the in-app social group
Show the sacrifices you made (e.g. to build the app, build a certain feature, or resolve a bug issue when answering a support ticket)
Finally, remember that the rule of reciprocity is only one of the many techniques that you can use. For your visitors to progress through the final steps in your conversion funnel, it would be less risky for you to utilise incentives (eg. explain the clear benefits of giving an email address or subscribing for your product) than solely relying on reciprocity.
In 2009 Ruben started his own company, BidSketch. At its inception, BidSketch was proposal software aimed primarily at web designers and developers. Starting as a one-man show, BidSketch has grown vastly; to date it has helped its customers make more than $1 billion in sales.
But Ruben’s journey was not an easy one. On the way to his first $1000 there were multiple times when he wanted to give up on the whole idea completely. His initial research showed that no one had any interest in the product, he wasted a whole month building a free tool that nobody used, and he even missed his launch date due to unreliable contractors (more on that here).
In fact, he had to hit the $1000/month mark twice(!). One API call wiped out almost all the billing information he had about his customers. In a matter of seconds, his revenue dropped to zero. He had to email his customers, asking them to set up their paid accounts again. Naturally, a fair number of them did not return.
The upside is that throughout his journey Ruben learnt a lot, and that’s why I am so excited to have had the opportunity to interview him. In 2012 he wrote a blog post, “What I learned from increasing my prices”, where he explains how research and testing gave BidSketch one of the largest spikes in growth it has ever had. Since then his pricing page has evolved even further, and that’s what we are about to dig into.
We cover pricing, small tests that he ran that resulted in substantial increases in conversion rate (and revenue), how Ruben used Jobs-to-be-Done interviews to decrease his customers’ churn rate, research and testing tools that helped him on his journey, and so much more.
I recommend you read his original article first (although it’s not required). I’ve learnt a lot and I am sure you will too.
Part 1: How to communicate value of your product on the pricing page – while keeping things simple
A little bit of background history.
Here’s the first version of BidSketch pricing page (2010)
Here’s the version that resulted in one of the largest spikes in revenue (the one he talks about in his article). This is 2012.
This is the version that we see today (in 2016)
Egor: First of all, I would like to understand the context behind your pricing page and how it evolved over the years, and then get into the nitty-gritty: which research questions you found most useful, any actionable tips you can share, the things that delivered the most results.
The major difference that I can see is that you had freelancer, studio and agency plans. Then, today in 2016 it is split into Solo, Team and Business. How did that change happen? The first one seemed to be more tailored to customer personas (web designers in particular), and this one seems to be more generic – more applicable to everyone. Did you change it as you scaled or was there another reason for that?
Ruben: Initially, when we had the premium and basic plans (when BidSketch first launched), it was for designers. By the time I did this pricing change, it was no longer for designers, but it still was for… you know, creatives. I think at the time 80% or 90% were the categories of either web designers, marketing, freelancers, SEO, developers, people from companies in those categories. Persona-based pricing was a good fit for that.
Then, there was a point where we started getting more customers as we scaled, and that distribution started to change. We saw that it started to change through a few surveys, but beyond that, we also started to see it in cancellation feedback of people who were entering the trial period. More and more people were saying, ‘I don’t think this is for me. I don’t feel like it was made for my business. It seems as if it was made for designers or web developers’.
We started changing the product, for example, adding more templates for more businesses. That way we had a bunch of different signals in the app that spoke to those kinds of businesses. Then we started generalizing even more and adding more resources to appeal to them.
But we were still getting that feedback, and the last piece was the pricing page. We looked at the businesses that were cancelling, at their websites, and we talked to them. With some of them we did Jobs-to-be-Done interviews. It was like, ok, the pricing might be unclear when somebody goes to the pricing page and they see freelancer, studio, agency, and they are not that type of company.
For example, they could be from a SaaS company doing enterprise sales, and they would think, ‘hmm, this is not quite right’. So, we did a test to see if there would be an impact on conversions. In the previous test [the change that was carried out in 2012, see images above] where we changed general names [Basic and Premium] to freelancer, agency and studio plan names, we got more customers. This time around, when we tested Business, Team and Solo, we got fewer trials, which was interesting, but we got a few more customers at a bit of a higher price point.
Egor: That’s very interesting. First of all, you targeted specific segments or even identities, and achieved an uplift, and then you repositioned it… with an appeal to broader audiences. It seems like the opposite of the technique that worked for you in the first place led to more people being closed.
Ruben: Right, you know. Business changes, market changes, competition, traffic you get, there are a lot of variables. It’s a good idea to retest, I do that sometimes – retest things that did not work before.
Egor: Another change that I can see is that your plans are primarily limited by the number of users; previously the limitations included proposals, clients and users [and storage].
The limitations you set for your plans are important, aren’t they?
They can act as an incentive for a client to upgrade.
The extent to which your product and its different features are used also affects your cost base. For example, if the number of proposals that a customer can set closely correlates with your costs, and you make it unlimited, then your cost base could skyrocket [if customers start creating loads of proposals].
Probably, this is not the case given the fact that you removed it, but I am just trying to understand what was your thinking behind setting some of these limitations, for example, users, and removing other ones (proposals and clients)? Is it primarily customer-research driven? Was it somehow affected by your consideration of costs and profit? What was your thinking process?
Ruben: It was based off of a couple of things. One was we looked at the data when we had very simple plans, either a plan with one user or a plan with unlimited users [the very first plans Bidsketch had in 2010]. Looking at the data, we could see very clear groupings or break-points. They were not getting charged for those extra users, so we could see naturally how many people on one account used the product.
We’ve seen a bunch of companies with two or three users. Then, we’d see, I think the next point was 5, and the next one 8. Just based off of that data, it felt like a really good test. We also looked at different types of companies that had these different numbers of users. That was one of the things that we looked at, and the other thing was features.
Basically, since we were just leveraging users [as a limitation], we mainly looked at customising domains and team management [for different plans]. Team management does not really mean anything for people who are on the 1 user plan, but it’s there to make it feel a lot more different, like you’re getting a lot more value on a higher priced plan where you have more users. We could probably eliminate that row, and it would still be clear what the differences are [between these plans]. The reason why it’s there is to make it feel more different, it’s something to make it stand out more.
Overall, we used a combination of metrics and qualitative data. One limitation, users, was based off of our quant data. Ability to customise your own domain is something that was highly valued based off of our conversations with customers. We tried to do both; we looked at the data that we have and we tried to have conversations with customers to get clarity on that data, to make sure that what we think we are seeing is actually what we are seeing. That was the thinking behind it.
The proposals… I am trying to remember why [we had that limit in the first place]. I think the proposals limit was an attempt to have something else to push people towards the $29/month plan. That’s why we did it, and with that setup most people signed up for the $29/month plan. Most people did not sign up for the $19/month plan, although it was cheaper.
So, I don’t think I ever really tested that before [specifically testing impact of proposals]. At one point I wanted to simplify pricing. So, we ran surveys and asked people what confused them. There were a few things that would come up in [the surveys], but one thing that I just wondered about was, ‘Is the number of proposals actually doing anything?’.
Hiten Shah from KISSmetrics, CrazyEgg, recommends sometimes doing what he calls sensitivity testing, which is just: remove something from the page, see if it’s actually working. Instead of adding something or changing it, just take it off, see if it actually has any impact. So, we did that and you know it did not get any worse, it did not get any better. So, I dropped it, just because I like simple, simple is better.
Egor: Was it the same for the ‘clients’ limitation?
Ruben: Well, we dropped the freelancer plan (the $19 plan) out of the main grid to add another plan. So, clients is a metric that we still limit on, but not on any of the plans that are on the grid. It’s limited on the link below the plan. Since the other plans on the grid are more expensive and we don’t limit clients on any of them, there is no need to have that.
Egor: Ah, I saw that. There is a link below that takes you to another plan. I read this case study where Joanna Wiebe from CopyHackers optimised CrazyEgg’s website, I think they did a similar sensitivity test. They removed the Johnson’s box on the left which is a navigation box, and I think they removed it in order to… basically, to make more space, so that they can put more content above-the-fold on that landing page.
You said that you tried to simplify the plans. As far as I understand, BidSketch has many more features than what is currently listed on the pricing page.
How did you… I am asking this because usually when I look at enterprise SaaS at least, they have a huge list of different features. It just falls on you and sometimes I start feeling overwhelmed.
You can sense that the one with a larger list is meant to be more attractive for larger businesses, but to really find something for yourself… it’s hard, sometimes I can’t even make it through.
So, my question is: How did you come to that list of features that is currently listed on your pricing page? Your plans look very simplified and easy-to-digest.
Ruben: There was mainly… I think when we were working on the second version of these plans, we tested just having a bunch of features on the left hand-side, having them listed all out, more detailed [like the SurveyGizmo example above], and the simpler version won; it did better. So, that’s what moved us in that direction.
We also did some Qualaroo surveys. We found that yes, there can be value from showing what are the features on each of these plans, even if the feature does not communicate what are the differences between each of these plans, it is still valuable for users to know.
Even if you pushed everyone to see your tour page before they see the pricing page, not everyone will actively engage with your tour page. They might just skip to pricing, this is why I think it’s important to show important features that are available on all the plans. But it does not have to be done in a way that a lot of people do it, which is a column on a left hand side, and on the right there is a pricing grid.
We are doing it at the bottom before the sign-up button where we say, “All plans include templates, branding, and PDF export”.
Also, we don’t [show] all of the features that we have, we only have those features that are most important to people. The ones that we know from interviews and surveys, they are the most important things because they asked for them or whatever. So, it’s sort of still limited, but it’s shown there. And when you have them on the left hand side, it’s just more energy, it adds more visual noise, it makes it harder to parse through the pricing grid.
Part 2: How to apply Jobs-to-be-Done interviews to SaaS and finally ‘get’ your customers, build a better product and cut down churn by over 30%?
Egor: So, to simplify the plans you needed to limit the number of features you are showing. To do this, you did research and identified what customers found most valuable in your product. What research and what types of questions did you use? You said you used Qualaroo surveys and interviews – what exact questions did you find most useful when trying to understand what your customers value most?
Ruben: It’s two things. It’s seeing what they are using when they pay, and – never by directly asking them – finding out what they chose or why they chose it, for example, when they upgraded or decided to pay. This came up through Jobs-to-be-Done interviews where we did ‘switch interviews.’
In those you focus on what happened; the steps that they took when they stopped using whatever it is that they were using previously and started paying for our product. In that, there is a point where they are evaluating and they are deciding and it’s pretty clear…
You ask them, ‘What did you do next? What didn’t you do next? Why did you do that? Ok, what were you thinking at this point? Did you have any concerns?’ The thing that generally comes out is the decision that they were making, the trade-offs that they were making when they were buying, so then you get to see, ‘Aha!’
So, to them the branding part is not really that important because that did not stop their decision, that did not stop them from upgrading, but they were not sure about custom domains, so in their trial they did not upgrade or did not pay or did not start their plan early – even though they wanted to – until they set up DNS and set up their custom domain, etc.
So, there are a bunch of little stories like that, so that we can then see, ok, these were the themes that helped them to decide to pay and these are the ones that did not. So, again, we used a combination of that [qualitative data, specifically JTBD interviews] and quantitative data.
Egor: You mentioned switch interviews and Jobs-to-be-Done interviews. I have heard of Jobs-to-be-Done as a concept [when I read Clayton Christensen’s “How will you measure your life?”], but I have not heard of Jobs-to-be-Done interviews. Is it a standardised set of questions you use, do you prepare it yourself, is it some type of framework? Could you explain it to me?
Ruben: Yeah, sure. Generally we run switch interviews. It’s about capturing the story of the switching moment. So, instead of asking them, “Why did you sign up? How did you like it?”, or any things like that, you approach it in a different way.
Basically, people often don’t know on the surface why [they made a particular decision]. Or they would give you reasons that they think you want to hear, but instead with switch interviews you start by asking…
Well, you start in a lot of different ways, but the framework for asking these questions is to find out:
what they were using before
when did they start to have problems or doubts with the things they were using
why did they start looking for something else
why did they start to evaluate something else
why did they start to evaluate it or sign up for it at that moment, on that day instead of the day before, the day after, to really dig into it
You want to have them walk you through every step of what happened in order to understand their thinking, their process and ask, ‘What were you thinking here? Why did you do this? Why did you do that?’ instead of asking, ‘Why did you sign up?’. And going through that story where you are finding out what the moment when they decided to buy – through their actions – was, and what their thinking was.
[My note: Notice how the approach above is different from standard CRO questions such as, “What persuaded you to purchase from us today?”.
For those unfamiliar with JTBD framework, think about what Ruben said before: often customers do not know the deep reasons behind why they signed up. So, often if you just ask, ‘Why did you sign up?’, you will get a lot of surface answers. Eg. ‘I just needed to create proposals for my business.’ This is not very actionable.
Instead, with JTBD interviews you go through their story and ask them why they made certain decisions in the past that ultimately led to the final purchase decision. When people go back in time and start recalling situations and context in which these decisions were made, more detailed memories start coming out on the surface and the real motives behind one’s purchase are revealed.
It didn’t click with me until I read Alan Klement’s book “When Coffee and Kale compete” and tried conducting a JTBD interview myself, but the quickest way to get to your first “aha” moment with JTBD framework is to listen to the JTBD Mattress interview].
Egor: So, when does it happen? Does it happen straight after someone converted into purchase or can it happen at any time?
Ruben: Well, it’s a SaaS product, there are two things. There’s a ton of friction when we ask for a credit card upfront for someone to sign up for a trial. So, that’s one thing. There has to be enough… enough momentum and something pushing them towards entering their credit card information to do that right at that moment. That’s one point and the other more important point is when they actually decided to buy.
Since that’s a SaaS product where we just bill them automatically on day 14, it’s not on day 14 that they decided to buy. Maybe they forgot to cancel, so a month later they’re going to ask for a refund. Maybe they haven’t even set it up yet. It’s like, ‘yeah, in a few months we will’.
Usually, it’s at some point during the trial or some point after they started paying, after the trial. We often cover that with a question, ‘At what point did you know it was going to work for you?’. We walked through the whole story and ‘yeah, it’s working, we used it and it was really good’. It’s like, ok, good, at what point did you realise that it was ok, before that point you were trying it out, trying to see and then at some point something happened where you saw something and you thought, ‘Yes, this is gonna work’. That’s the buying moment.
Egor: So, you are trying to get them to narrate a story about themselves as opposed to trying to make them rationalise why they made that purchase. And then you try to understand why they made that purchase by listening to their story and analysing it yourself rather than making them rationalise it for you. That’s very interesting. Did someone create switch interviews? Where did it originate from?
Ruben: Yeah, two guys from the Re-Wired Group that work closely with Clayton Christensen on implementing Jobs-to-be-Done interviews. Bob Moesta and Chris Spiek. They do these interviews with really big clients. They put on these Switch Workshops where they teach this concept.
So, we have also done cancellation interviews where people are switching away from our product to something else. Our product is the thing that they were using and they had a problem with, and eventually people started using something else.
Egor: When you say interviews, do you mean calling and talking through their story? What is the set up like?
Ruben: Yeah, these are like 30-45 minute interviews.
Egor: Is it difficult to recruit people for these interviews?
Ruben: If it’s for people who are paying, we are trying to do the switch interviews for people who paid at least once or they just finished payment for the next month. We do it there because we still want it to be fresh in their mind. We also want to make sure that they are paying [ie. they did not just forget to cancel].
Egor: Is it difficult to get people to agree to these interviews? Do you use some type of incentive? What kind of email do you send?
Ruben: We have not had too much luck recruiting through email. So, generally we do not do that. We previously recruited through Qualaroo surveys inside the app or using Intercom inside the app, taking them through a survey and getting them an incentive.
Recruiting for people who cancelled is much harder than people who just paid for your product, especially when you want to get them on the phone for that long. So, for people that cancelled we did a cancel confirmation page, it came up with a message, saying that they have been cancelled, sorry to see them go, feedback is very important to us, please, help us improve, asking them if they would be willing to participate in a 30-45 minute interview.
To show our appreciation, we’ll toss in a $100 Amazon gift card. It can work without the gift card, we have done that, with the gift card it’s just so much faster. We have a really big incentive. You generally need around 10 to 15 of these interviews. As you don’t need a lot of interviews, it’s well worth for us to give $100 per person for the data that we would get.
Egor: That’s amazing! So, based on what I have heard so far, there are two types, switch interviews and cancellation interviews. What did you find most valuable? With switch interviews you are trying to understand what happened in someone’s life and led them to start paying for your product. With cancellation interviews, are you trying to understand why your value proposition suffers? What’s the main value of these interviews?
Ruben: We have exit surveys on the cancel form. It’s a required form where they tell us why they are cancelling. The vast majority of people are just saying, ‘Did not use it enough’, so there is a percentage of people who say, ‘Well, this did not work or that did not work’. People that cancel in the early months, first month after paying or the first 2 months after paying, it’s generally onboarding stuff. They just did not finish setting up their account, they did not fully implement it or they just did it once, things like that. It’s still a symptom, but the reason for that varies. And people that cancel that have been using it for a while, they tend to be in different categories.
So, the cancellation interviews were to get more insight into what we were seeing as far as the feedback that they were giving us. It felt kind of superficial. It was light, it was better than nothing in these cancel forms, but we wanted to see what the stories were behind that. In particular, the biggest category was ‘not using it enough’.
What do you mean by ‘I am not using it enough’? Why not? It’s not just not using it. There was a reason for it. In some cases, there was a big disconnect between what they expected and what they got.
Another thing that came up was about the term ‘proposal’. There was a disconnect between what they understood as proposals and what the app offered them. It’s a proposal app, and once they sign up, they have proposals in their mind to create and send. Then, they start using it and they think, ‘Ok, this thing is more thorough than what I currently use; it has a lot of features. The proposals that I send are very simple.’
Well, taking a look at their “proposal”, it is not really a proposal, they are sending an estimate or they are sending a contract, but for a lot of people these are their sales proposals. These people were less likely to buy. So, as a result of these cancellation interviews we set up examples and help documentation around those other types of documents.
There are several categories and things like that. Sometimes it’s a setup thing, it’s just onboarding, if it’s onboarding, then you can fix it. But it’s much easier to uncover what those reasons are after doing interviews that way.
Egor: I see. Did you make any other product changes or marketing changes that came as a result of these interviews? And did you see any tangible results from these changes?
Ruben: Some of the pricing grid changes that we have already talked about. Through those interviews, that’s where we got the insight. Knowing what features we want to show on that page, on the left hand side or just at the bottom, and which features should we not even bother showing. A lot of that insight came from that. As far as pricing…
Egor: It does not necessarily have to be about pricing. Anything related to product or marketing…
Ruben: The ‘pause’ feature for their account, where they pay $5 a month is actually used and people come back and un-pause their account and start paying again.
Egor: Is it for people who are not using it actively, but want to stay?
Ruben: Right, with people who were cancelling it was kind of streaky. Especially if they are smaller, they would send out some proposals, then they would get something. They’d be busy with that project for several months and would not be using BidSketch. Then, we would bill them and we would bill them again. They would think, ‘I need to cancel, I am not using this’.
Then, in 2 more months they would start using it again. So, they would sign up for another trial and create another account and would not have the past history or anything like that. So, they would like to have had all their past history and not have to set everything up again. Just implementing that was a pretty good thing that came from that. It worked.
It’s used in the way that it was meant to be used. We monitored it, and we worried that people would just leave it there and not come back, but a lot of people did come back. So, that’s working well.
The other thing was yearly plans. Being more aggressive with yearly because of that cycle. This is another thing that came from Jobs-to-be-Done interviews. Being able to change the evaluation period in their mind.
When someone is on a yearly plan, it’s about, ‘How much did I use it this year?’ It’s a very different question from the one you ask yourself every month, ‘Did I use it? Oh no, this month I did not.’ So, maybe I used it 20 times in a year, but it was all in 3 months or 4 months cycles throughout the year. The rest of the months were not used at all.
For somebody who is evaluating on the yearly basis that works. For somebody who is evaluating monthly, sometimes it’s worth it if they think about it in terms of their entire usage per year, but a lot of people don’t think that way. People literally think, ‘Oh, this is a second month I have not used it’.
Egor: So, what did you do? Did you just literally push more people on the yearly plans as opposed to offering monthly plans?
Ruben: Yeah, and pitching yearly plans through Intercom at day 45 or something like that, and giving a link for an upgrade with a big discount on invoices. Basically, pitching them everywhere.
Half of the traffic that we get to the pricing page gets defaulted to yearly plans, and the other half to monthly with the option to pay upfront yearly. It’s a little ghetto, but it gives us the right amount of yearly paid accounts without sacrificing too much of the monthly revenue.
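[My note: the 50/50 split Ruben describes – half of pricing-page visitors defaulting to yearly plans, half to monthly – can be done with deterministic bucketing, so a returning visitor always sees the same default. A minimal sketch; the hashing scheme is my assumption, not Bidsketch’s actual implementation:

```python
import hashlib

def default_billing_period(visitor_id: str) -> str:
    """Deterministically split visitors 50/50 between a yearly-default
    and a monthly-default pricing page. Hashing the visitor id (rather
    than randomising per page view) keeps the assignment stable across
    repeat visits -- an assumption about how such a split might be built."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return "yearly" if int(digest, 16) % 2 == 0 else "monthly"

# A given visitor always lands in the same bucket:
assert default_billing_period("visitor-42") == default_billing_period("visitor-42")
```

A stable assignment like this also makes it easier to see the split’s long-term effect in funnel analytics, since each visitor’s bucket never changes mid-experiment.]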
Egor: Is it easy to convince people? Do you convince them with just a discount or do you build a bigger business case around it?
Ruben: Well… The discount does most of the work. Just having a generous discount, then pitching it at the right time for the people that do not default to it or initially take it. Some people don’t even know if this is going to work. They don’t feel secure enough with going for something yearly. That’s why… I found that about a 45 day mark is a good time for us to do that.
Egor: How did you come to that 45 day mark? Was it through experimentation/trial-and-error?
Ruben: Through a lot of conversations that we had, we could tell that by then – not everyone, there are many people that are still unsure – but most people would know if it’s going to work for them or not.
Egor: How did you come to your current discount? If I am correct, it’s 40%.
Ruben: 40% is for the middle plan, 26% for the other plans.
Egor: How did you come to this?
Ruben: We tested discounts. We started maybe at 10% or so, I don’t remember exactly what they were.
Egor: So, you started with discounts and then you looked at how many people would get into a yearly plan? Was that the main KPI for that one?
Egor: Ok, and then you just went up and up with your discount and looked at what the effect would be?
Ruben: That’s right.
Egor: I want to come back to the pause feature. There has been a number of times when I would have certainly paid a small fee. I think it’s very smart…
Ruben: It’s something that I think a lot of SaaS products could do. I’ve seen it done with a free pause, I don’t remember the exact products. I wanted to do a ‘pay’ one because we don’t want to pause just a bunch of accounts where people had no intention of coming back. So, it was just that if they are willing to pay at least $5/month, they see that there is real value in this for them. [In that case], they would be more likely to then un-pause it at some point.
Egor: And do a lot of people come back?
Egor: So, it works.
Ruben: It seems to be working for us.
Egor: How does it work? When someone cancels Bidsketch, does their account get deleted straightaway, so they have to create a new one? What is the process? Is it not being saved anyway? What is the incentive for people to pause?
Ruben: If they were not to pause, if they were to cancel, then all their data gets deleted. If they were to come back, they would have to create a new account and recreate everything.
Egor: Are they being notified of that in advance? If I am cancelling, am I being told that all the data will be deleted?
Ruben: Yes, when people cancel, we explain to them that their data is going to be deleted. We make them tick an extra check-box. We explicitly prompt them during the cancellation flow, so they can choose to pause instead of cancelling.
Egor: I want to clarify the thing about these interviews. You seem to have mentioned 3 types of interviews. There are Jobs-to-be-Done interviews, switch interviews and cancellation interviews. Are these all separate types?
Ruben: It’s the same type. I just cycle through, but I would say there are 2 types: ‘switching to’ and ‘switching away from’ interviews. We have also done a lot of regular customer development interviews.
Egor: And what exactly do you mean by that?
Ruben: Just interviews that are generally shorter and are more direct. And we are not capturing their story about why they switched or not. It’s for people who have been already using the product, and when we are usually trying to get more insights around some data that we have collected somewhere or we are trying to get clarification around something.
We ask very specific questions about, for instance, the proposal thing, the term. What sort of documents, what are they sending through BidSketch, what are these documents, what do they contain, what do they have, are these documents to close a sale? Are they being sent through Bidsketch? Or are they being sent through email or other apps? This is an example of us trying to get more data through short custdev interviews, asking very direct questions.
Egor: So, with switch and cancellation interviews, you are trying to understand the Jobs-to-be-Done. With regular ones, you are just trying to clarify any questions you have about a certain aspect of your existing data.
Egor: And with Jobs-to-be-Done, what questions did you find the most useful?
Ruben: I actually have them in my blog post. There is a section in there, a cheat sheet with all the questions.
Part 3: What experiments did Ruben run on the pricing page? How did a quick copy change help him to increase the trial sign up rate? What tools does he use for tracking and testing?
Egor: Coming back to the original re-design of your pricing page, you said you looked at the data. How did you look it up? Did you use any tools or did you just have it in your back-end?
Ruben: Both our back-end and Kissmetrics.
Egor: And what did you use for experimentation, for A/B testing?
Ruben: It was a combination of Optimizely and Kissmetrics.
Ruben: Optimizely is good for re-directing traffic and seeing the results on that page that we are testing, and then we use Kissmetrics to see the impact throughout the funnel, on sign-ups, cancellations, etc.
Egor: So, tracking long-term effects.
Ruben: To make sure that ‘yes, it helped our conversions’, it also did not negatively impact cancellations or something else.
Egor: Now I want to dissect your current pricing page. As you can see, I numbered every element of your current pricing page.
A couple of things are going on here that I find interesting. First thing you do is communicate your value proposition in the headline. Then, you seem to communicate not just the value of your product, but the value of the free trial itself. I looked at SumoMe’s pricing page today and they did not have any of those elements. How did you come to that?
Ruben: Number one used to be number two based off of some other page, I think it was Basecamp or something similar. At that time, I did not do a lot of testing around ‘get started in less than a minute’ or ‘get started quickly’. That seemed like a good idea, I had that on there, and I wanted to test something different than that. Basically, just to test the value proposition. So, we tested that and it did a little bit better. So, we kept that and I did not have number two at all.
There were questions that we asked through Qualaroo surveys. Asking people what’s stopping them from signing up is what made me want to test number 2 underneath. It helped a little bit.
We did not see really huge jumps in trials, but most of them were just a little bit better, and so we left number 2. I was actually kind of surprised with number 2, I tested it, but I remember thinking, ‘yeah, it probably won’t do anything, but I just can’t think of anything better’, but in the test it actually worked. I thought, ‘Ha! They are reading that and it actually makes a difference to them!’
Egor: Was impact just on the free trial sign-ups or did the impact translate into actual sales?
Ruben: Yeah, it did! And the order of the plans, we had the order differently, from small to big, and we tested that, the sign-ups mostly stayed the same, but the distribution was a little different. Our revenue per customer was a little higher, it got more people paying on the higher-tier plans.
Egor: It also seems to me that you are trying to communicate value through tooltips for your features. For example, the explanation for Analytics is not tied to some metric, like the number of hits you have got or some other technical metric; it is more about how people would use it. If I were about to sign up for BidSketch, I would see immediate value in being able to track my clients. Was it a separate test or did you just think that this was a sensible thing to do?
Ruben: Yeah, I did not test that. It just made sense to do that in the way we write up our features on the features page, and tour page and anywhere else where we are explaining them, trying to make it clear where the value is. Those have changed and it’s been mostly about clarity because, through Qualaroo surveys on that page, I have seen from time to time questions that people have.
Also, in Crazy Egg I saw, ‘Yep, they are using them’, they are hovering over them, they are looking at them, but maybe I am not explaining it clearly enough or it does not make enough sense.
Egor: I can also see that further down you are using social proof and also have an FAQ in order to, in my understanding, close some of the main objections. Was it tested separately or was it just a sensible thing to add?
Ruben: You know, I have not tested the FAQ. FAQ was added based off the questions that we saw people asking. For example, when we asked them through Qualaroo, why didn’t you sign up or what is keeping you from taking on a plan… that’s what we used that area for.
And the social proof was based on the results that people who were signing up talked about – the ones that prospects want for themselves. Two of them are based on people talking about time saved. Any time we have tested what people want – like close more deals, save more time, or make more money – ‘save more time’ always wins when it comes to proposals. So, that’s why those specific testimonials are there. That said, there is also interest in closing more sales, so we have one focused on that.
Strategy will make or break your experimentation program.
With no strategy in place, you risk running the wrong tests, in the wrong order, on the wrong goals.
But get the strategy right, and you’ll have an impactful and scalable experimentation framework.
What’s more, this framework can help you apply testing not just to your website, but across your entire organisation. Testing then becomes the mindset for growth – not an occasional bolt-on to website marketing.
It enables you to test and optimise messaging, design, user experience – even your advertising, pricing and product.
There are key habits and indicators that suggest a testing program is more tactical than strategic. The table below compares the tactical vs. strategic approaches. Read through the description for each to understand where your current approach lies on the spectrum between the two.
How to shift your approach from tactical to strategic?
#1. From each test existing in a vacuum to strategic evolution of tests.
With a tactical approach to testing, tests do not inform a well-integrated testing strategy, but exist in isolation. This means that when a test is over, you simply move onto your next planned test. As a result, your testing strategy looks like this:
The diagram key:
Tests marked as red = losers
Tests marked as yellow = did not make any difference
Tests marked as green = winners
It’s a random set of tests where some win, some lose, and none of them inform your subsequent steps.
In contrast, when you employ a strategic approach your testing looks like this:
In essence, levers are factors which the data shows may impact user behaviour on the site. For example, the lever “card comparison” was based on research findings which showed that people find it difficult to compare credit cards on finance websites. As a result, they did not apply for any because they couldn’t decide which was best.
Levers inform branches of tests. Some tests win, some lose, but the tests are integrated, i.e. each test can inform subsequent tests based on its result.
For example, if you’re a pharmaceutical retailer and you found that delivery is an important factor when deciding whether to purchase oral contraceptives, then here’s what your first test could look like:
If your test won, then you could iterate on that test idea. Was it the free delivery that mattered or was it the speed? Next step – two variations: “Free” vs. “Next Day”. If it was the speed, maybe we should introduce in-store delivery as well as next day delivery, and see if the extra expense is justified by extra demand. Then, we might test making it more prominent. Instead of showing it in the header, we could include it as part of the page’s sub-headline.
This is how a strategic approach to testing forces you to amplify impact from your original test and uncover granular insights about your customers.
#2. From having single winning tests to scaling your impact
Once you know the intricate details about what motivates/prevents your customers from taking a desired action – you’re still not done!
The next stages you can (and should) go through are:
Scale your impact, i.e. test the same concept on other pages. In the context of Lloyds Pharmacy this could mean reinforcing the same concept on other product pages (eg. if we tested it only on major brands, we could roll out the same test concept on the smaller brands, too) or we could test the same concept further down the funnel. For example, Lloyds Pharmacy could reinforce the same benefits when the visitor continues their order.
Share your impact, i.e. apply the same concept to other areas of your business. If this concept resonated so well with your audience, let’s test including it in your PPC campaigns, meta description, and email marketing promo offers. If these work, there is sufficient evidence to then test these in your offline marketing, too!
Here’s the essence of it: You find a winning test idea and then you hammer it. To do so, follow this protocol.
If the test wins:
Amplify (same concept, same page)
Scale (same concept, other pages)
Share (from acquisition to product)
To decide which levers are most powerful in changing your customers’ behaviour, you need a broad view of your optimisation program. For one of the clients we worked with, we created the following (anonymised) table:
We tracked everything: the number of tests we ran around a certain lever, the win rate, the uplift that every test concept creates. We then segmented it based on different criteria: the step in the conversion funnel where it was executed, the acquisition channel, device type. This gives us a better idea of where we can scale the impact generated by our most successful tests. For example, we can see that “trust” had the highest win rate and a relatively large uplift, but we have not yet run many tests on our PPC traffic. Let’s scale it further!
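A lever-performance table like the one described can be tallied with a short script. The sketch below uses made-up test records – the field names, win criterion and figures are illustrative assumptions, not the actual client data:

```python
from collections import defaultdict

# Hypothetical test log: one record per experiment (illustrative data only)
tests = [
    {"lever": "trust",    "channel": "SEO", "won": True,  "uplift": 0.08},
    {"lever": "trust",    "channel": "SEO", "won": True,  "uplift": 0.05},
    {"lever": "delivery", "channel": "PPC", "won": False, "uplift": 0.00},
    {"lever": "delivery", "channel": "PPC", "won": True,  "uplift": 0.03},
]

def lever_summary(tests):
    """Aggregate the number of tests, win rate and average winning uplift per lever."""
    groups = defaultdict(list)
    for t in tests:
        groups[t["lever"]].append(t)
    summary = {}
    for lever, rows in groups.items():
        wins = [r for r in rows if r["won"]]
        summary[lever] = {
            "tests": len(rows),
            "win_rate": len(wins) / len(rows),
            "avg_uplift": sum(r["uplift"] for r in wins) / len(wins) if wins else 0.0,
        }
    return summary

print(lever_summary(tests))
# "trust" shows a 100% win rate here, flagging it as a lever worth scaling
```

The same grouping can be repeated by funnel step, acquisition channel or device type to spot where a winning lever has not yet been scaled.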
#3. From lack of priorities to effective allocation of team’s resources
It’s essential for a strategic testing program to maximise the value of its resources. The success of the program will be limited both by the volume of tests the website supports, as well as by internal resources like design and development time.
That’s why it’s essential to prioritise your tests effectively. It’s impossible to run every test we can think of – so we have to be selective and prioritise strategically.
That means planning the test roadmap by considering variables like:
The value of the area being tested (eg the flow, page or element)
The potential impact of the test
The ease (design and build time, as well as sign-off) required to launch the test
Ensuring that we’re learning about user behaviour (eg by testing across a range of levers, rather than focusing heavily on one or two)
Any risks associated with running the test
In short, we want to prioritise high-impact, high-ease tests on high-value areas:
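One way to operationalise this prioritisation is a simple weighted score per test idea. The factors, weights and scores below are illustrative assumptions, not a standard formula – tune them to your own program:

```python
def priority_score(value, impact, ease, risk):
    """Rank a test idea on 1-10 inputs; higher is better.
    Risk subtracts from the score, so riskier tests need more upside."""
    return value * impact * ease - risk * 10

# Hypothetical ideas scored on value, impact, ease and risk
ideas = {
    "eligibility checker (full build)": priority_score(value=9, impact=8, ease=2, risk=4),
    "eligibility checker (MVT button)": priority_score(value=9, impact=6, ease=8, risk=1),
    "footer copy tweak":                priority_score(value=2, impact=2, ease=9, risk=1),
}

ranked = sorted(ideas, key=ideas.get, reverse=True)
print(ranked)  # the minimum viable test ranks above the full build
```

Note how the scoring naturally favours the “minimum viable test” version of a complex idea: the same value and most of the impact, but far higher ease and lower risk.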
By prioritising tests based on impact and ease, you make sure that you don’t invest your time in complex, low-impact tests.
If a test is complex but has a high potential impact, you should (whenever you can) try to prove the concept first. That means simplifying the execution of the test to a point where it becomes feasible to run – the “minimum viable test” – before progressing to more complex (and potentially more impactful) iterations.
Let’s consider an example.
Minimum viable test: Credit card industry
The research we conducted when analysing the credit card industry showed that the fear of not being approved was the #1 reason preventing people from applying for credit cards.
Santander has a good example of a bad landing page. All the eligibility information is hidden under a huge block of text. Even if you find it, it’s generic, and there is no guidance on whether you, given your individual circumstances, would be approved.
To address this objection more effectively, Santander could build an eligibility checker similar to the one Barclays has:
However, it would require substantial time to build.
To understand if it is worth investing resources into this new tool, Santander could create a minimum viable test to first prove the concept. For example, they could add a new section at the top that would look similar to an eligibility checker, but upon clicking would still present the same generic information:
The visitors still would not find out the information specific to their needs, but the important point is that Santander would be able to measure the % of people who click on this button. If they do, it’s worth developing the concept further – if they don’t, their resources can be better deployed elsewhere.
#4. From retesting similar concepts and dismissing good ones to keeping a test log and continually learning
Every successful test should inform the overall testing strategy. But that can be a challenge if people on your team change and the knowledge of what worked might fade away. Without an effective knowledge base of tests, you’re facing two risks that can undermine your testing program:
Repeating previous tests: You might run similar tests again. At best, you may validate the previous result. At worst, you’ll waste resources by repeating a test – and potentially one that had a negative result.
Dismissing new concepts: A bigger risk is saying, “We already tested that”, without being able to show exactly what was tested and what the outcome was. As above, a test’s success is primarily down to the lever and the concept’s implementation. Dismissing the lever because of an unsuccessful earlier test is a huge risk.
To manage those risks more effectively, at minimum you must track:
Creative execution (screenshots)
Areas of focus
Results (raw data)
But ideally you should also include external factors such as seasonality, competitors’ activity and market conditions. External factors can have an impact on your test results. For example, during December many ecommerce sites do not see their tests achieving statistical significance. This is due to the nature of demand. During peak periods, people care less about persuasive copy, layout and design – they just need to make a purchase. As a result, a well-crafted landing page may not perform any better or worse than the original, but once the peak period is over, clear differences start to emerge.
Here’s an example from Chris Goward’s book You Should Test That! None of the variations achieved statistical significance in December, but Variation C became a decent winner in January, and the conversion rate difference jumped from 12.7% to 30.6%.
When you approach your testing strategically, there are no such questions. You just go to your knowledge base of tests and analyse whether the test result was a result of the lever, the concept implementation, or potentially external factors (eg seasonality, a change in market conditions, or a change in the traffic profile).
This brings us to an important point. If you’re a strategist, here’s how you should approach these losers.
If the test loses due to:
Lever (the core theme of the test didn’t affect user behaviour) = abandon
Execution (the implementation – design, copy, functionality – didn’t affect user behaviour) = retest (and reprioritise)
(For a more in-depth discussion on why execution might fail your tests, read this article by Erin Weigel, Senior Designer at Booking.com)
#5. From driving minor website changes to transforming your organisation
Finally, at the heart of strategic testing is an alignment with the goals of your organisation.
That means the KPI for your tests may not be the conversion rate from visitors to customers, but a broader goal like revenue, profit or lifetime value.
For example, if your goal is to increase revenue, you might break it down as:
Revenue = Traffic x Conversion Rate x Lifetime Customer Value (LCV)
It may be the case that simply putting up your prices will increase the LCV significantly, even if it decreases the conversion rate marginally. It can be a risk to test, but it’s often a simple test to run – there’s very little design and development work involved. This is especially true in some SaaS markets where customers are less likely to have an expectation around price, giving greater elasticity.
This is exactly what Jason Cohen, the CEO of WP Engine, recommended to one of the companies at Capital Factory (the largest start-up incubator and accelerator in Austin). According to him, they doubled their prices and the effect on signups was minimal. As a result, the profits almost doubled. There you are – price inelastic demand.
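The arithmetic behind that anecdote is easy to replay with the revenue formula above. The figures below are made up purely for illustration – the point is the shape of the trade-off, not the numbers:

```python
def revenue(traffic, conversion_rate, lifetime_value):
    # Revenue = Traffic x Conversion Rate x Lifetime Customer Value
    return traffic * conversion_rate * lifetime_value

# Before: 100,000 visitors, 2.0% conversion, 300 per customer lifetime
baseline = revenue(100_000, 0.020, 300)
# After doubling prices: conversion dips slightly, lifetime value doubles
doubled = revenue(100_000, 0.019, 600)

print(f"baseline: {baseline:,.0f}, after price change: {doubled:,.0f}")
# Revenue nearly doubles despite the small drop in conversion rate
```

With inelastic demand, even a meaningful drop in conversion rate is dwarfed by the jump in lifetime value – which is why a price test can be the highest-leverage, lowest-build experiment on your roadmap.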
So, should you also double your prices? This is exactly the kind of question strategic testing can answer.
Transforming your organisation means not only growing it, but also challenging its deep-seated assumptions.
For example, in SaaS this might mean re-thinking how you structure your pricing plans. Would customers be convinced to upgrade to higher-tier plans because they see more value in advanced features you offer (and should you thus structure your plans as in the image below)?
Alternatively, you could test giving all features to everyone, regardless of the plan they’re on – then limit the volume of their usage instead. That way, every customer is able to experience the full benefits of the platform, and is more likely to use and engage with it, increasing their usage and subscription level:
(Or could you try and strike a balance between the two, or abandon the whole idea completely and simply charge a flat $99 fee the same way Basecamp does?)
Ultimately you need to maintain a healthy risk profile that’s appropriate for your organisation and its testing maturity.
This means not only iterating on your existing test ideas (= safer tests), but also testing completely new concepts and experimenting with radical changes. If at least a small percentage of your experiments doesn’t make you nervous, you’re not being radical enough – and you risk not answering important strategic questions about your business and your customers.
Ultimately, in order to transform the organisation, the research/data science team needs to align everyone on making data-based decisions. This means no more sitting together as a closed group that simply sends reports to the C-suite once a month, but becoming the core link between the C-suite and the business’s customers. This comes back to the point raised above: the impact in the form of new knowledge needs to be shared with the organisation. Humans are hardwired for stories, not for processing long spreadsheets. This is why storytelling backed by data – what we call insight narratives – is the most effective way to keep the data pumping through the veins of your organisation and to align everyone on the same vision.
Avinash Kaushik put it brilliantly (when he was interviewed at SES conference):
We need to take some of the dryness and connect it to real life when we present data. So, when people ask me what the metric bounce rate is, I very rarely say that it’s the percent of sessions with single pageviews. That does not communicate what they are! What I say is, they represent – from a customer perspective – an experience that is, “I came, I puked, I left”. You are never gonna forget that! You are never gonna forget that definition because of the way it was articulated.
I found that after years of trying to convince people, I’ve tried to get data to connect to real life. When a newspaper company wrote an email campaign and I analysed it later, I basically said, “You had the 13 million one night stands with customers because you created no visitor loyalty”. Again, that was a way to make this data very REAL to them. Everyone knows what a one night stand is, and most of them were not great.
Digital Marketing Evangelist at Google
As you can see, there are clear differences between tactical and strategic optimisation programs.
It’s not to say that individual tactics won’t work – they can and do – but without a broader strategy to unite them, they’ll be limited in reach and impact. Sun Tzu, a Chinese military strategist, knew that the problem was not with the tactics themselves, but with the overall approach:
“Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat.”
An effective strategy won’t just provide a framework for testing – it’ll allow you to test deep-seated assumptions in your organisation.
And by doing that, you’ll be giving your organisation a significant competitive advantage. While most companies are stuck testing granular changes to their websites, you’ll be testing changes that can radically shift your ability to acquire, convert and monetise traffic.
Brian Balfour, ex VP of growth at Hubspot, said “Math and Metrics Don’t Lie, But They Don’t Tell Us Everything”. I couldn’t agree more. While analytics tells us what happens on our website, qualitative data is crucial for understanding the why behind visitors’ decision-making. By knowing your customers’ pain points and reasons why they love your product, you can stop guessing and hoping to win the odd hand. Instead, you can start addressing your visitors’ real problems, and we are yet to find a better way to sustainably grow your business.
To make good decisions though, you need to nail both the collection and the analysis of your user data. Your conclusions have to actually reflect your website audience’s problems. We’re used to looking at statistical significance with our test results, but when we’re gathering qualitative feedback, how do we know when we have enough data to draw a meaningful conclusion? The reality is that making sure your data brings powerful insights is both an art and a science. Today I will explain strategies conversion champions use when analysing qualitative, open-ended data.
What are on-site surveys anyway, and why should you use them?
In this article when I refer to on-site surveys, I mean small pop-ups that prompt a visitor to answer a certain question(s). Qualaroo and Hotjar are our favourite data collection tools.
In contrast to other methods of qualitative research, on-site surveys can be:
Non-intrusive (they don’t significantly distract visitors from engaging with the website).
Anonymous, allowing for higher “ecological” validity of responses. This means that customers tell you what they actually think without trying to conform to your expectations (which may happen in interviews).
Accessible. They don’t require extensive prior experience (unlike methods such as interviews).
Immediate. In comparison to panels & interviews, you can start collecting data instantly.
Contextual. They can provide insights about your customer’s state of mind at a particular stage in your conversion funnel. This allows you to optimize for relevance!
How many responses do I need?
Often when companies run surveys, they aren’t sure how long to run them for. They may ask themselves: “What is the required sample size? Am I better off running a survey for a little bit longer? What % of my website audience should respond for the survey to be representative of their needs?”
I was asking these questions, too. When I studied for Avinash Kaushik’s web analytics certification, he suggested 5% of your overall traffic. At the time, I was looking at running surveys for some smaller websites and Avinash’s rule was applicable to only very large websites, so I could not use it.
Then, Peep Laja suggested having at least 100-200 responses as a minimum. I was not sure if I could apply this to any context though. Are 100 responses going to be as useful for a website with 10,000 monthly visitors as for a website with 1,000,000 daily visitors?
Sample size. Does it even matter?
The reality is that it depends, but most importantly you might be looking at it the wrong way. The primary factor we use in determining the number of required responses is the goal of the survey. At Conversion.com, we primarily use them for the following 2 goals:
Understanding the diversity of factors affecting user behaviour (i.e. what factors motivate or stop visitors from taking a desired action)
Ranking and prioritising these factors (in order to prioritise testing ideas)
The first goal is crucial at the start of every conversion optimization program (and this is the goal we will dive into in this article; for the other goal keep an eye on our future articles).
When pursuing this goal, we are trying to understand the diversity of factors that affect user behavior, and our purpose is not necessarily to make estimations about our website’s audience as a whole.
For example, we are not trying to answer the question of how many people like your product because of reason A or reason B, but we are just curious to understand what are the potential reasons why people like it.
We are more interested in gaining an in-depth understanding of people’s diverse subjective experiences and making meaning out of their responses, even if we are not sure if we can generalize these findings to the website’s audience as a whole. As Stephen Pavlovich puts it: “At this early stage, we’re not looking for a statistically valid breakdown of responses – we’re essentially looking for ideas and inspiration.”
This means that with on-site surveys that pursue goal #1, standard criteria for evaluating quality of your findings such as validity and reliability (think of confidence intervals and margins of error) are not applicable. Instead, you should use thematic saturation.
What is thematic saturation?
When analysing raw data, we categorise responses into themes. Themes are patterns in the data that describe a particular reason for taking or not taking a certain action (or any other factors we are interested in understanding). In simple terms, thematic saturation is when new responses do not bring significant new information, i.e. you start seeing repetition in visitors’ responses and no new themes emerge.
In the context of conversion optimization, this means asking yourself 3 questions:
Have I accurately interpreted and grouped the raw data into themes? i.e. have I identified the customers’ real pain points and motivations for taking a certain action?
Do the responses that I assigned to each of the themes fully explain that theme? (or is there diversity that I have not fully accounted for, i.e. are there any important sub-themes?)
Do the new responses that I have gathered bring new, actionable insights to the table?
If you can answer “Yes”, “Yes” and “No” to the questions above, you are likely to have reached saturation and can stop the survey.
As you can see in this example, the newest responses did not bring any new surprises. They fell under existing themes. As there was no more diversity in the data, we stopped the survey.
NB: Note how one simple concept of convenience can have several dimensions in your customers’ minds. This is why question 2 is so important. By understanding the differences in the way customers perceive your product’s benefits, you can now design a more effective value proposition!
Indeed, the answers to these questions are subjective and require experience. This is not because the method is ‘bad’, but because we are trying to explain human behaviour, and there will always be a degree of subjectivity involved. Don’t be too pressured by your quantitative colleagues – some of the most important breakthroughs in history were based on studies with a sample size of 1. Did you know that Freud’s revolutionary theory of psychoanalysis originally started with the examination of fewer than 10 client cases?
Minimum number of responses
Does this then mean that you can get away with as few as 10 responses? In theory, yes – as long as you gain an in-depth understanding of your customers. Still, it is common practice in traditional research to set minimum requirements on the number of responses before you start examining whether your data is saturated.
As a general rule, the team at Conversion.com looks for a minimum number of 200 responses. So does Andre Morys from Web Arts. Peep Laja from ConversionXL responded that he currently uses 200-250 as a minimum. Other professionals, including Craig Sullivan and Brian Massey say that they don’t use a minimum at all. The truth is you can use a minimum number as a guide, but ultimately it’s not the number that matters, but whether you understood diverse problems that your customers have or not.
When using minimums: Don’t count responses in general – remove all the garbage first
In one survey we ran, 35% of responses turned out to be unusable, ranging from responses like “your mum” to random strings of characters mashed on a keyboard. When assessing whether you have passed the minimum threshold, don’t just look at the number of responses your survey tool has gathered – look at the number of usable, “non-garbage” responses.
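A first pass at stripping mechanical garbage can be automated before you start coding themes by hand. The heuristics below (minimum length, digit-only answers, keyboard mashing) are assumptions to tune against your own data – they won’t catch semantically useless answers like “your mum”, which still need a human eye:

```python
import re

def is_usable(response: str) -> bool:
    """Crude filter for survey responses; keep anything that looks like a real answer."""
    text = response.strip()
    if len(text) < 4:                      # too short to carry meaning
        return False
    if re.fullmatch(r"[\d\W_]+", text):    # only digits/punctuation, e.g. "12345" or "???"
        return False
    if len(set(text.lower())) <= 2:        # keyboard mashing like "aaaaa" or "asas"
        return False
    return True

responses = ["Free delivery matters to me", "8973459", "aaaaa", "ok", "Trust the brand"]
usable = [r for r in responses if is_usable(r)]
print(f"{len(usable)}/{len(responses)} usable")
```

Counting only the survivors of a filter like this against your minimum (e.g. 200) gives a much more honest picture than the raw response count from your survey tool.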
Don’t rely on magic numbers, but look for saturation
As I have already said, don’t rely solely on best practices – always look for saturation. You need to realise that each website is unique, and your ability to reach saturation depends on a number of criteria, including:
Your interpretative skills as a researcher (how quickly can you derive meaning from your visitors’ responses?), which in turn depends on your existing knowledge about customers and your familiarity with the industry. So, you are better off gathering more responses as long as they can help you to accurately interpret your audience’s responses.
Have you asked the right questions in the first place? It is difficult to derive meaningful insights unless you are asking meaningful questions (if you don’t know what questions to ask, check out this article).
Homogeneity/Heterogeneity of your audience. If your business is very niche and much of your audience shares similar characteristics, then you might be able to see strong patterns right from the start. This is less likely for a website with a very diverse audience.
How do I know if the 189th response won’t bring any new perspectives on the issues I am investigating?
The truth is you never know, in particular because every person is unique, but there are strategies we use to check our findings for saturation.
Strategy #1: Validate with a follow-up survey
This strategy has three steps:
Run an open-ended survey (survey 1, above)
Identify several themes
Re-run the survey in a multiple choice format to validate if the themes you identified were accurate (survey 2, above)
The first two steps are what you would normally do, and you might not get an especially high response rate, because writing proper feedback is time-consuming. The third step compensates for this: instead of running an open-ended survey, you re-run it in a multiple-choice format. The key here is to include an “Other” option and ask for an open-ended response whenever it is chosen. This way you can ‘fail-safe’ yourself by examining whether people tend to choose the “Other” option.
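Checking that fail-safe then comes down to the share of “Other” picks. The answer options, counts and the 10% threshold below are illustrative assumptions, not a published rule:

```python
from collections import Counter

# Hypothetical multiple-choice results from the follow-up survey
answers = (["Free delivery"] * 120 + ["Price"] * 90 +
           ["Trust in the brand"] * 60 + ["Other"] * 12)

def other_share(answers):
    """Fraction of respondents who picked 'Other' instead of an identified theme."""
    return Counter(answers)["Other"] / len(answers)

share = other_share(answers)
if share > 0.10:  # many "Other" picks suggest your themes missed something
    print(f"Themes incomplete: {share:.1%} chose Other - review the open-ended follow-ups")
else:
    print(f"Themes look exhaustive: only {share:.1%} chose Other")
```

A low “Other” share suggests your themes cover the real reasons; a high one sends you back to the open-ended responses attached to those “Other” picks.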
When is it best to use this approach? It’s particularly useful on smaller websites due to low response rates.
Brent Bannon, PhD, ex growth manager at Facebook and founder of LearnRig, suggests that there is another critical reason why you should use close-ended questions as a follow-up.
item non-response [i.e. where a user skips or doesn’t provide a meaningful answer to a question] is much higher for open-ended questions than for closed-ended ones, and people who respond to the open-ended question may differ systematically from your target population, so this [closed-ended] response set will likely be more representative.
open-ended questions tend to solicit what is top-of-mind even more so than closed-ended questions so you don’t always get the most reasoned responses – this is pretty heavily influenced by memory processes (e.g. frequency and recency of exposure). Using a list of plausible motivations may get you more reliable data if you’re confident you’re not missing important/widespread motivations.
Founder of LearnRig
So, be cautious if you are asking people about something that happened a long time in the past.
Strategy #2: Run another open-ended survey to examine a particular theme in more depth
This strategy has three steps:
Run an open-ended survey (survey 1, above)
Identify several themes
Run another open-ended survey to examine a particular theme in more depth (survey 2, above)
Sometimes the responses you get might show you that there is a recurring theme, for example there is a problem with trust. However, respondents provide very limited detail about the problem, so although you identified a theme, you have not fully understood what the problem really is (saturation was not reached!). In that case, we would develop another open-ended survey to examine that particular theme because we know that additional responses can yield extra insights and explain the problem in more depth.
The trick with this work is to accept that the questions you ask may not be right first time. When I first started out, my mentor made me run surveys where it was clear that I’d asked the wrong question or not found the real answer. He kept tearing them apart until I’d learned to build them better and to iterate them. Asking good questions is a great start but these will always uncover more questions or need clarification. Good exploratory research involves uncovering more questions or solidifying the evidence you have.
It’s like shining a light in a circle – the more the area is lit, the more darkness (ignorance) you are in touch with. What you end up with is a better quality of ignorance – because NOW you actually know more precisely what you DO and DON’T know about your customers. That’s why iteration of research and AB testing is so vital – because you rarely end at a complete place of total knowledge.
Founder of Optimal Visit
When is it best to use this approach? Whenever you have not fully explored a certain theme in sufficient depth and believe that it can lead to actionable insights.
Note: Be cautious if you’re thinking of doing this type of investigation on a theme of “price”. Self-interest bias can kick in and as Stephen Pavlovich puts it “It’s hard to rationalise your response to price. This is one instance where it’s preferable to test it rather than run a survey and then test it.”
Strategy #3: Triangulate
Triangulation is when you cross-check your findings from one method/source with findings from another method/source (full definition here).
For example, when working with a major London airport we cross-checked our findings from on-site surveys with real-life interviews of their customers (two different methods: surveys and interviews; two different sources: online and offline customers). This ensured a high level of understanding of what customers’ problems actually were. Interviews allowed flexibility to go in-depth, whilst surveys showed a broader picture.
Triangulation allows you to ensure you have correctly interpreted responses from your customers, and identified their real barriers and motivations – not some non-existent problems you thought your customers might have. Interviews can provide you with more detailed and fuller explanations; this in turn allows you to make a more accurate interpretation of your survey results. There is strong support in academic research for using triangulation to enhance understanding of the phenomenon under investigation.
When best to use it? Always. Cross-checking your survey findings with more in-depth data collection methods such as live chat conversations or interviews is always advisable as it provides you with more useful context to interpret your survey results.
Brian Massey from Conversion Sciences also emphasises the importance of cross-checking your data with analytics:
Onsite surveys have two roles in website optimization.
Answer a specific question, to support or eliminate a specific hypothesis.
Generate new hypotheses that don’t flow from the analytics.
In both cases, we want to corroborate the results with analytics. Self-reported survey data is skewed and often inaccurate. If our survey respondents report that search is important, yet we see that few visitors are searching, we may disregard these results. Behavioral data is more reliable than self-reported information.
Co-founder of Conversion Sciences
Be pragmatic, not perfect
Finally, we need to be realistic that it is not just the overall quality of our findings that matters, but time and opportunity cost required to get them.
That’s why it can be useful to decide on a stopping rule for yourself. A stopping rule could look like this: “After I get 10 more responses and no new themes emerge, I will stop the survey”, or “I will run the survey for 2 more days, and if no new themes emerge, I will stop it”.
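The first stopping rule can even be made mechanical: code each usable response into a theme as it arrives, and stop once a window of recent responses adds no theme you haven’t already seen. A minimal sketch, where the theme labels and window size are hypothetical:

```python
def reached_saturation(coded_responses, window=10):
    """True if the last `window` responses introduced no new theme.

    `coded_responses` is a chronological list of theme labels,
    one per usable survey response."""
    if len(coded_responses) <= window:
        return False
    earlier = set(coded_responses[:-window])
    recent = set(coded_responses[-window:])
    return recent <= earlier  # no theme appears for the first time in the window

stream = (["price"] * 6 + ["trust"] * 5 + ["delivery"] * 4 +
          ["price", "trust", "delivery", "price", "trust",
           "price", "delivery", "trust", "price", "price"])
print(reached_saturation(stream))  # the last 10 responses only repeat known themes
```

This is a guide, not a guarantee – it only tells you that recent responses repeat known themes, not that you have interpreted those themes correctly, which is what the three questions above are for.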
After you pass the minimum threshold and you are sure that you correctly interpreted at least some of the real issues, you might be better off testing rather than perfecting your data.
Remember, conversion optimization is a cyclical process: we use qualitative data to inform our testing, and then we use the results from our tests to inform our next survey.
Use on-site surveys to understand your users’ barriers and motivations for taking a certain action at a particular stage in your conversion funnel
Thematic saturation, not sample size, should be your main quality criterion when trying to understand the diversity of factors that affect your visitors’ decision-making. But if you’re not sure, or want an estimate beforehand, 200 responses is a good general rule (when applied to “non-garbage” responses).
You can examine if you managed to reach saturation:
By running a follow-up survey in a multiple-choice format and examining if people tend to choose “Other” as an option
By running a follow-up survey in an open-ended format to better understand a particular theme (if there is ambiguity in the original data)
By cross-checking your survey findings with other data sources/collection methods
Remember that results from tests backed up by data are the best source of learning about your customers. Treat your initial findings with caution, and learn from how your users behave, not from how they tell you they behave.