• In the fast-paced digital world, businesses constantly seek ways to understand their customers better and innovate without unnecessary risk. The problem? Developing new features or products based on assumptions can be costly and often leads to disappointing results.

    Enter painted door tests.

    Painted door tests are a clever, cost-effective type of experiment used to gauge user interest in potential new features before fully developing them.

    If you have ever clicked on an ad or a button on a website and been met by a ‘coming soon’ or ‘this service isn’t available yet’ message, you may have been part of a painted door test. This post will examine what precisely a painted door test is, how it works, why it is valuable, and how to implement it effectively.

  • Contents

  • What is a Painted Door Test?

    The name ‘painted door’ comes from an architectural technique in which a door is painted on a wall for aesthetics rather than serving any functional purpose. A painted door test is a similar concept. It is a form of A/B testing in which a new feature, product, or service is presented to users as if it exists, often through a button or a link, without being fully functional.

    When users click on this “painted door,” they are either informed that the feature is not yet available or redirected to a survey or a different page. The purpose is simple: to measure user interest in a potential new offering. These tests allow companies to make data-driven decisions, ensuring that resources are allocated to features with real user demand.

    The great thing about painted door tests is that they are versatile. They can be used in several ways, including:

    • New Features: Testing interest in new functionalities within an existing product.
    • Product Variations: Gauging demand for different versions of a product.
    • Service Offerings: Exploring user interest in additional services or support options.

    It is important to remember that the painted door should look and behave as much like the real feature as possible. To gather the most accurate data, place a clear call to action as close as possible to where the user would naturally find the feature if it truly existed. Later, we will take a closer look at how to implement painted door tests effectively, but first, we will examine why you should be using them.

  • Why Use Painted Door Tests?

    Understanding painted door tests is just the first step. This section explores how painted door tests enable data-driven decisions and enhance the overall user experience by prioritizing features that resonate.

    So: why should you consider using painted door tests?

    Gathering Information at Minimal Cost: When planning your next experiment, think of the smallest possible experiment, in terms of time and resources, that could successfully test your hypothesis. Painted door tests are a great example of a Minimum Viable Experiment (MVE). They are low-cost and quick to build, and when done correctly, they should give you everything you need to validate or invalidate your hypothesis. This means you can test multiple ideas quickly and affordably, and the reduced financial risk is particularly advantageous for startups and small businesses with limited budgets. We go into more detail about why MVEs are essential to an experimentation strategy here.

    A standout example of why you should consider a painted door test instead of going big comes from one of our own experiments. Early research conducted while working with a real estate company suggested that adding a map feature to the property search function would increase inquiries.

    Once we had created our hypothesis, we dedicated a lot of time and resources to developing this feature on-site. It was an extensive project, as the feature was more complicated than we initially thought. The result: the map had no impact on user behavior.

    What should we have done? We should have tested the hypothesis with a simple, low-cost painted door test. By replicating the feature with a button and call-to-action (CTA), we would have been able to see whether anyone would use the Google Maps functionality.

    When considering the execution options for our painted door test, we could have explored a couple of strategic approaches. One option could have been to incorporate a “View Map” CTA that, upon clicking, would display a message informing users that the feature isn’t currently available. This approach could help gauge interest without full functionality in place. Alternatively, we could have implemented the same “View Map” CTA but linked it directly to Google Maps. This would provide users with immediate map functionality, albeit external to our site, offering a seamless experience while still allowing us to measure engagement.

    If the painted door test showed that customers were interested, we could have used this data and implemented the feature. However, if the test showed that customers were not interested in this feature, we could have looked at alternative methods to increase inquiries.

    Build Better Products with Less Risk: Use actual user data to guide product development rather than assumptions or hunches. This approach leads to more informed and strategic decision-making. Companies can prioritize features that demonstrate user demand, fostering a deeper connection with their customers and enhancing their satisfaction and engagement. Painted door tests help ensure that only the most promising ideas move forward. By validating concepts early, companies can focus on high-potential projects and avoid costly missteps.

    Below is an excellent example of how a painted door test provided concrete data for the product team and saved nearly a year of product development time. It highlights the importance of gathering real-world user feedback on new products.

    Domino’s, a well-known pizza restaurant chain, sought to introduce a new “premium” cookie but faced indecision among four potential options. Traditionally, product development at Domino’s spans 12 months, involving extensive market research with uncertain real-world demand, risking significant time and financial investments.

    We proposed replacing the lengthy R&D cycle with a swift 1-week painted door test. Collaborating with Domino’s R&D team, we added four “fake” cookies to the menu and measured customer interest by tracking “Add to basket” clicks. We offered these cookies at two different price points across various stores to test pricing, creating eight test variations in all.

    Before the test, the Domino’s product team believed that Chocolate Orange would be the clear winner. However, the experiment revealed that customers were far more interested in the Salted Caramel flavor, which had a 32% higher conversion rate than the initially favored flavor.

    During the test, we also found that customers were willing to pay a higher price, providing valuable insights into price elasticity. This approach delivered concrete, actionable data from real-world purchasing behavior, bypassing the limitations of traditional market research.

    Our method proved to be a low-risk, cost-effective strategy that significantly accelerated Domino’s product development process while yielding reliable insights and high rewards.

    Enhanced User Experience: Businesses can create more compelling and user-friendly products by focusing on users’ desired features. This not only improves customer satisfaction but also fosters loyalty and long-term engagement. Look at Gousto, a food delivery company that wanted to test whether adding more payment options at checkout would increase completed orders. Rather than implementing these options without data, we helped Gousto conduct a painted door test.

    Gousto Experiment

    Gousto utilized painted door experiments to assess the viability of integrating PayPal and other payment methods into their platform. Instead of fully implementing these options right out of the gate, we strategically tested user interest and behavior.

    The core of the experiment involved adding PayPal and GPay call-to-action buttons at checkout, even though these payment options were not functional. Users who clicked these buttons were given a message explaining that the respective payment method was unavailable. This setup allowed Gousto to measure the click-through rates and gauge user interest in these alternative payment options.

    While the painted door experiments initially resulted in a marginal decline in signups, the valuable learnings and insights gained far outweighed this short-term effect. These tests allowed Gousto to identify segments of their user base eager to use PayPal or GPay, potentially increasing conversion rates once these options were fully implemented. More importantly, the experiments provided crucial data for forecasting the impact on revenue once additional payment methods were rolled out.

  • Potential Negatives of Painted Door Tests

    While painted door tests provide valuable insights, there are potential drawbacks. As with any test, it is important to evaluate the positives and negatives to mitigate unnecessary risks. Below are two main points to weigh when considering painted door tests for your next experiment.


    Misleading Metrics: Since painted door tests measure initial interest, they may not always accurately predict actual user behavior once a feature is fully developed and implemented. Users might click on a novel feature out of curiosity, but their engagement could differ when the feature is fully functional.

    a. It is important to note that painted door tests should be part of a broader strategy. If you have a feature that was successful through a painted door test, it is still best practice to test the fully developed feature before going live. This enables you to gain more data and further validate the customer’s interest in the feature.

    User Frustration: Users may feel disappointed or frustrated if they click on a painted door expecting a new feature only to find out it’s not yet available or functional. This can impact user trust and satisfaction, especially if the testing process is not transparent or well-explained, affecting the conversion rate.

    a. This frustration can and should be mitigated by making the painted door test as transparent as possible. There should always be clear messaging supporting these tests and you should be open with customers about what is happening.
    b. The good news is that, as with any A\B tests, painted door tests can be paused if they significantly affect the conversion rate. We can also serve the test to a small % of the traffic to reduce the risk.
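    As a sketch of how a limited rollout might work, the snippet below (Python, with all names and percentages hypothetical) hashes each user ID into a bucket so that only a fixed share of users ever see the painted door, and every user gets a consistent experience across sessions:

```python
import hashlib

def in_painted_door_test(user_id: str, rollout_percent: float,
                         salt: str = "painted-door-v1") -> bool:
    """Deterministically assign a user to the test based on a hash of their ID.

    The same user always gets the same assignment, so the experience stays
    consistent across sessions without storing any per-user state.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # map the hash to a bucket in 0..99
    return bucket < rollout_percent

# Example: expose the painted door to roughly 5% of users.
exposed = sum(in_painted_door_test(f"user-{i}", 5) for i in range(10_000))
```

    Changing the salt reshuffles the buckets, so successive experiments don't keep hitting the same group of users.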

  • How to Implement a Painted Door Test

    Hopefully, the information above helps you decide whether painted door tests would be a good fit for your experimentation program. Once you know this, it’s time to implement. This requires a structured approach to ensure meaningful results.

    This section provides a guide to executing these tests effectively. Each step is crucial in gathering actionable insights, from defining a clear hypothesis to creating a compelling design and effective user engagement tracking.

    Our hypothesis framework

    Identify the Hypothesis: Determine what you want to test and what the intended result is. For instance, “Will users be interested in a one-click checkout feature?” Having a well-defined hypothesis helps set clear goals and metrics for success. You can find a detailed post here if you are interested in how we build our hypotheses.

    It’s important to note that a key difference between a painted door test and a traditional A/B test lies in how success is measured. In a traditional A/B test, a statistically significant uplift in your primary metric signifies a winner. However, a painted door test operates differently: the control group sees zero clicks on the non-existent feature, meaning the variant will always show a significant uplift in clicks.

    This doesn’t automatically deem it a success. Instead, success in a painted door test requires upfront criteria, such as a minimum percentage of users engaging with the feature. For instance, would at least 20% of users need to click on the feature to justify further investment? Defining these benchmarks beforehand is crucial to making informed decisions about rolling out new products.
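    One way to encode such a benchmark, sketched below in Python (the numbers are illustrative, not from a real test), is to pre-register the threshold and only call the test a success when the lower bound of a confidence interval on the click rate clears it:

```python
import math

def meets_success_criterion(clicks: int, visitors: int,
                            threshold: float, z: float = 1.96) -> bool:
    """Return True if the painted door's click rate clears a pre-registered
    threshold. Uses the lower bound of a normal-approximation confidence
    interval so a lucky streak on a small sample doesn't count as a win."""
    rate = clicks / visitors
    stderr = math.sqrt(rate * (1 - rate) / visitors)
    return rate - z * stderr >= threshold

# Illustrative numbers: a 20% engagement bar pre-registered before the test.
strong = meets_success_criterion(clicks=260, visitors=1000, threshold=0.20)    # True
marginal = meets_success_criterion(clicks=210, visitors=1000, threshold=0.20)  # False
```

    Note that 21% observed engagement fails here: it beats the 20% bar on paper, but not by enough to rule out sampling noise at this sample size.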

    Create the Painted Door: Design a visual element that clearly suggests the new feature. This could be a button, banner, or link. Ensure it stands out and is placed in a relevant area of your website or app. The design should be enticing enough to attract clicks while fitting seamlessly within the user interface.

    Track Engagement: Use analytics tools to track clicks on the painted door and measure the number of users showing interest. Tools like Google Analytics, Hotjar, or custom tracking scripts can provide the necessary data. Proper tracking is essential to capture all relevant interactions and metrics.

    Don’t just look at click-through rates; analyze user behavior before and after the click, demographic data, and qualitative feedback. This holistic approach provides a deeper understanding of user intent and potential barriers.
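    For illustration only, assuming a minimal in-house event log rather than any particular analytics tool, that kind of segment-level breakdown can be as simple as:

```python
from collections import defaultdict

# Hypothetical event log: one record per page view, flagging painted-door clicks.
events = [
    {"variant": "A", "device": "mobile", "clicked": True},
    {"variant": "A", "device": "desktop", "clicked": False},
    {"variant": "B", "device": "mobile", "clicked": True},
    {"variant": "B", "device": "mobile", "clicked": True},
    {"variant": "A", "device": "mobile", "clicked": False},
    {"variant": "B", "device": "desktop", "clicked": False},
]

def click_rates(events, key):
    """Group events by the given attribute and compute a click rate per group."""
    views = defaultdict(int)
    clicks = defaultdict(int)
    for e in events:
        views[e[key]] += 1
        clicks[e[key]] += e["clicked"]
    return {k: clicks[k] / views[k] for k in views}

by_variant = click_rates(events, "variant")  # CTR per test variant
by_device = click_rates(events, "device")    # CTR per device segment
```

    Slicing the same events by different attributes is often enough to surface patterns, such as a feature that only resonates with mobile users, that a single overall click-through rate would hide.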

    Provide Feedback: Once a user clicks on your painted door, show them a popup or redirect them to a page explaining that the feature is not yet available but is under consideration. Optionally, you can collect feedback or email addresses for future updates. This step is crucial for maintaining user trust and gathering valuable insights.

    Analyze Results: Assess the data to understand the level of interest. High engagement indicates strong user interest, while low engagement may suggest the feature is not worth pursuing. It could also indicate that the idea should be iterated on in a different area/way on-site. Look at click-through rates, user feedback, and any patterns or trends in the data to gain a comprehensive understanding. Use this data to decide whether to develop the feature further.
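    When comparing engagement across variants, as in the Domino's cookie example, a quick sanity check is a pooled two-proportion z-test. The sketch below uses made-up counts and is a rough check rather than a substitute for your experimentation platform's statistics:

```python
import math

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Z statistic for the difference between two click rates (pooled test)."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: variant A at 26.4% vs variant B at 20.0%, 1,000 views each.
z = two_proportion_z(264, 1000, 200, 1000)
significant = abs(z) > 1.96  # roughly the 5% significance level
```

    A |z| above 1.96 suggests the gap between variants is unlikely to be sampling noise, which is the kind of evidence that separates a genuine winner from a lucky day.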

    Iterate and Test: The speed and adaptability of painted door tests make iteration easy. Once you have analyzed your data, you can iterate and then test the fully developed feature before going live. If the data isn't conclusive enough, you can run another test with a different design or in a different area of the site.

    Top tip: make sure not to run too many painted door tests simultaneously, as overlapping tests can muddy your data and compound user frustration.

  • A New Tool in Your CRO Toolkit

    Painted door tests are a strategic tool in the Conversion Rate Optimisation (CRO) toolkit. They offer a low-cost, high-value method of understanding user preferences. Businesses can use these to make informed decisions, reduce risk, and maximize the potential for successful product launches.

    However, it’s important to note that painted door tests are just one part of a broader CRO strategy. For optimal results, consider integrating painted door tests with other methods, such as more A/B testing, user surveys, and usability testing. A multi-faceted approach ensures a comprehensive understanding of user behavior and preferences.

    Here are two more valuable tips to keep in mind when running your painted door tests:

    • Actionable Insights: Refine your approach by using the data collected from the painted door test. Whether you tweak the feature based on user feedback or pivot to a new idea, let the insights guide your decisions.
    • Follow-up: Use the opportunity to gather user feedback or interest via surveys or sign-ups. This additional data can provide deeper insights into user preferences and expectations. Consider offering an incentive for providing feedback, such as a discount or early access to the feature.

    Ready to explore more CRO techniques? Check out our latest case studies showcasing how we’ve helped businesses like yours achieve remarkable growth.