• What question keeps product leaders up at night?

    Crying babies, anxious dogs, general unease about the state of the world. Oh, and:

    “Is our product priced right?”

    And that’s for good reason – pricing can make or break your business.

    If you get it right, you unlock massive value – selling your product at the optimised price to balance profit and demand. (But get it wrong and you leave serious money on the table.)

    The problem is, there’s not always a clear answer to the question. It helps if you’re selling a product in a competitive market, as that at least gives you a starting point.

    But what if you’re introducing a disruptive product, without the customer having a clear idea of what the product should cost? When there’s no precedent, you can set whatever price you want…

  • Contents

  • How do you decide what to charge?

    That was the challenge for our client – a SaaS brand in an emerging industry with great traction.

    They weren’t strangers to experimentation. We’d run experiments for them previously on their landing pages and sign-up flow – and their approach was mature.

    Now they wanted to see – is our product priced right?

    Like many SaaS businesses, customers could sign up for a monthly subscription – but could save by committing to a quarterly or annual plan. The plans were otherwise identical – the only difference was the level of commitment.

    A rough-and-ready, anonymised version of what our client’s plan page looks like. Note: in a bid to maintain client confidentiality, all future screenshots are anonymised versions of the originals.

    Our goal was simple: find out the optimal amount for each tier.

    The only problem was… how do you do this?

  • Will customers tell you what they’d pay?

    Many people start with the Gabor-Granger method – a research study that, at its simplest, asks customers whether they’d buy a product at different price points:

    1. You recruit research participants for the study – matching your typical customer demographic.
    2. Each participant is presented with the product and asked whether they would purchase it at different price points (eg at $10, $15, $20, and so on).
    3. When you aggregate the answers from all participants, you can plot price vs demand and calculate which combination gives you the most revenue.

    So let’s say 100 people would buy it at $10, and 80 people would buy it at $15, but only 55 people would buy it at $20… What’s the right price point?

    The revenue for each price point would be $1,000, $1,200 and $1,100 respectively. In other words, it suggests the $15 price point would lead to the most revenue.
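    The arithmetic above can be sketched in a few lines of Python, using the same example numbers:

```python
# Toy Gabor-Granger analysis using the example numbers from the text.
# "demand" is how many study participants said they'd buy at each price.
prices = [10, 15, 20]
demand = [100, 80, 55]

# Projected revenue at each price point is simply price x demand.
revenue = [p * d for p, d in zip(prices, demand)]

# Pick the price point that maximises projected revenue.
best_price = prices[revenue.index(max(revenue))]

print(revenue)     # [1000, 1200, 1100]
print(best_price)  # 15
```

    In a real study you’d test many more price points and fit a demand curve, but the core calculation is exactly this simple.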

  • But… are people reliable?

    Good point – most people aren’t reliable. Especially when it comes to something as irrational as pricing. People will often make an emotional decision to buy a product, then justify it retrospectively – and the price is often far less important than we think.

    It becomes even harder when we ask people to vocalise their response to different price points.

    The Gabor-Granger method relies on participants telling you what they would buy at different price points (attitudinal) as opposed to actually observing whether they would truly buy at these different price points (behavioural).

    The chart below shows how research methods vary.

    All research methods have their strengths and weaknesses. The Gabor-Granger Method is no exception. Here it is plotted against a range of other research methods in terms of 1) behavioural vs. attitudinal slant and 2) quantitative vs. qualitative data type.

    No one method is perfect – and we’d never recommend that a client trusts a Gabor-Granger study on its own – but they are useful to get initial insight, especially highlighting discrepancies between current and optimal price.

  • Enter: Mixed-methods research

    Take another look at the chart of research methods above.

    The best approach by far is mixed-methods research. Instead of relying on one method (eg analytics), we triangulate insight across multiple methods (eg analytics x surveys x usability tests).

    So how does this apply to pricing for our SaaS client?

    First, we ran a Gabor-Granger study on all three subscription tiers (monthly, quarterly and annual). This allowed us to map demand for each tier against revenue generated at each price point:

    According to the Gabor-Granger study, the quarterly and annual plans were already at – or even above – their revenue-maximising prices. Our opportunity lay with the monthly plan, which was priced significantly below its revenue-maximising price.

    As you can see from the graphs for the quarterly and annual tiers, the Gabor-Granger study indicated that any increase in price would cut demand sharply enough to drag revenue down with it – so there was no headroom to raise prices.

    But – take another look at the monthly chart. It shows that there was significant room to increase the price. At the time, the price of the monthly subscription was 25USD per month – but the study indicated that they would drive the most revenue by increasing the price to more than double at 51USD per month!

    So… what did we do? Change the price to 51USD and A/B test it?

    Not quite…

  • Balancing risk and reward

    Pricing tests are generally considered to be one of the riskier forms of A/B test. (It’s how Amazon got in trouble in 2021.)

    And the more radical the price increase, the more risky price tests are… and we’re talking about a 2x increase here.

    To minimise risk while maximising insight, we therefore chose to run this as an A/B/C test with the following variations:

    • Control – price stays at 25USD/mo
    • Variation 1 (V1) – lower risk – 33USD/mo
    • Variation 2 (V2) – medium risk – 41USD/mo

    We ran two variations against the original 25USD monthly price. In the first, we changed the price on the monthly subscription to 33USD (or 1.06USD per day) and in the second we changed it to 41USD (or 1.32USD per day).

    Note: we could have increased the price all the way up to 51USD like the Gabor-Granger study suggested, but a price increase of this size was perceived by the client as being particularly high-risk. So we capped ourselves at 41USD, with an option to test more in the future depending on the first A/B test’s results.
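    As an aside, a three-arm price test like this is typically served by deterministically bucketing each visitor into an arm, so the same person always sees the same price. Here’s a minimal sketch assuming a hash-based bucketing scheme – our illustration, not the client’s actual implementation:

```python
import hashlib

# Hypothetical sketch of visitor bucketing for an A/B/C price test.
# The prices mirror the variations described above; the hashing scheme
# is an assumption for illustration, not the client's implementation.
ARMS = [("control", 25), ("v1", 33), ("v2", 41)]

def assign_arm(visitor_id: str, experiment: str = "monthly-price-test"):
    """Hash the visitor ID so each visitor always lands in the same arm."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(ARMS)  # roughly uniform split across arms
    return ARMS[bucket]
```

    Keying the hash on both the experiment name and the visitor ID means a given visitor can land in different arms across different experiments, while staying consistent within each one.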

  • A/B testing price

    So, with all of that said, what was the result?

    As expected, monthly subscriptions fell for both variants – by 17% for V1 and by 27% for V2. In both cases, this decrease in demand was steeper than the Gabor-Granger study had predicted.

    What’s more, total revenue generated by the monthly plan had also fallen in both variants – by 8% in V1 and by 7% in V2.

    So, in other words, the Gabor-Granger study had been slightly optimistic in its predictions…

    But – and this is quite a big but (I cannot lie) – here’s a twist I’ve been holding back.

  • But wait, there’s more – price anchoring and framing

    Let’s take a quick detour to explore pricing psychology.

    In previous experiments, this client’s customers had been particularly sensitive to price framing and price anchoring effects:

    • Price framing is when the price of a product or service is positioned in a certain way to make it less or more appealing to customers, e.g. maybe you frame the price of your subscription product in terms of daily vs. monthly cost (like we’ve done here)
    • Price anchoring is when a higher price is used first to give the user a baseline to compare against (think of it like a decoy).

    Based on some survey feedback, we’d previously tried framing the price in terms of monthly cost rather than daily cost and it had resulted in a 22% fall in total subs. Read: this client’s users were extremely sensitive to price framing.
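    Incidentally, the per-day prices quoted earlier (1.06USD for the 33USD plan, 1.32USD for the 41USD plan) are consistent with dividing the monthly price by 31 days and rounding – that divisor is our inference, but the figures line up:

```python
# Daily-framed price, assuming a 31-day divisor (our inference from the
# quoted figures, not a confirmed convention of the client's).
def daily_price(monthly: float, days: int = 31) -> float:
    return round(monthly / days, 2)

print(daily_price(33))  # 1.06
print(daily_price(41))  # 1.32
```

    The divisor matters for framing: a 30-day or 365/12-day convention produces a slightly higher daily figure, which is exactly the kind of detail price-sensitive users notice.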


    That means we can’t just look at the monthly pricing in isolation – we have to consider the overall impact on quarterly and annual sales as well.

    Given customers’ sensitivity to price framing and anchoring, there was a chance that raising the monthly plan price would make the discounts on the quarterly and annual plans more attractive.

    Did this bear out?

    Yes.

    In a big way.

    Demand for quarterly subscriptions increased by 33% in V1 and by 67% in V2.

    So by increasing the price of the monthly plan, we slightly reduced revenue from monthly subs but massively increased revenue from quarterly subs.

    All in all, this netted out at an increase in revenue per visitor (RPV) – our primary metric for this test – of 16% with 99% statistical significance.
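    For readers curious how a result like this can be checked, here’s a sketch that computes RPV for two arms and bootstraps a confidence interval for the relative uplift. The data is synthetic – not the client’s – and this is one common approach, not necessarily the method behind the 99% figure above:

```python
import random

# Illustrative only: synthetic per-visitor revenue, NOT the client's data.
# Most visitors buy nothing; converters pay the monthly price for their arm.
random.seed(42)
control = [25 if random.random() < 0.040 else 0 for _ in range(10_000)]
variant = [41 if random.random() < 0.030 else 0 for _ in range(10_000)]

def rpv(values):
    """Revenue per visitor: total revenue divided by number of visitors."""
    return sum(values) / len(values)

uplift = rpv(variant) / rpv(control) - 1  # relative RPV change

# Bootstrap a ~95% confidence interval for the uplift by resampling visitors.
diffs = []
for _ in range(400):
    c = random.choices(control, k=len(control))
    v = random.choices(variant, k=len(variant))
    diffs.append(rpv(v) / rpv(c) - 1)
diffs.sort()
ci_low, ci_high = diffs[9], diffs[-10]  # 2.5th and 97.5th percentiles
```

    If the whole interval sits above zero, the uplift is unlikely to be noise – which is the intuition behind quoting a statistical-significance level alongside the headline number.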

    As you can imagine, this result caused quite a stir within the business. On its face, we’d given the client the means of increasing revenue by 16% overnight.

    So did they pop the champagne and roll the new pricing strategy straight out?

    Again, not quite…

  • Holding our horses: evaluating long-term impact

    Increasing the price of the monthly subscription by 64% was a huge decision for the client – and not one that they, or we, were willing to leave to chance in any way.

    The initial data suggested that revenue would rise if we increased the price to 41USD/mo, but we still needed to monitor long-term metrics like churn rate and LTV. After all, if short-term revenue rose but churn spiked or LTV tanked, the change wouldn’t be worth making.

    As a result, we’ve been working with the client to track a range of long-term metrics across different cohorts. Once that data is in, we – and they – will be in a much stronger position to understand the full range of consequences and whether this price increase is likely to be a viable option for them.

    Looking beyond this specific experiment: with something as sensitive as price, we would never assume that what works on one market/region can be straightforwardly carried across to other markets/regions.

    We’ve been working with the client – using the lethal Gabor-Granger/experiment combination – to identify those markets where the price is already optimal and those where it is ripe for optimisation.

    And you know what?

    The client’s product leader has never slept better!

  • tl;dr

    • The Gabor-Granger method can be a great way of ascertaining demand and revenue for your product at different price points.
    • That said, Gabor-Granger studies are limited in many respects, so the best way to prove – or disprove – their results is with an A/B test.
    • Pricing experimentation isn’t for everyone. It can be risky and requires a high level of maturity and sophistication. But when done right, it can offer some of the strongest ROI of any type of experiment.