• If you want to know what your users are doing on your website, A/B tests are your go-to tool. But if you’re looking to uncover why they’re doing it, user testing is the key to unlocking those insights.

    One common question is whether to conduct moderated or unmoderated user testing to gather insights. Both approaches can uncover the “why” behind user behavior, surfacing insights that pure analytics or A/B testing alone might miss. 

    In fact, at Conversion, a GAIN specialist, we regularly use both moderated and unmoderated testing methods as complementary tools in our experimentation strategy. This article breaks down the differences, pros, and cons of each approach, and how to decide which method (or combination) is right for your needs.


  • What is moderated user testing?

    Moderated user testing involves a researcher actively guiding and observing a participant through tasks in real time. The session can be in-person or remote (via video call), but in either case, a moderator is present to introduce tasks, ask follow-up questions, and probe the participant’s thoughts. 

    For example, a researcher might ask a user to complete a product purchase on a website, observe where they encounter friction, and ask, “What are you thinking at this step?”

    Moderated sessions are fundamentally qualitative, capturing rich observations and direct feedback from users. This makes them extremely powerful for uncovering why users behave a certain way.

    A skilled moderator can dig into users’ motivations, clarify any confusion on the spot, and explore unexpected behaviors in depth. At Conversion, we frequently run moderated interviews or usability tests to inform our design hypotheses. 

    For example, in our work with Whirlpool Corporation, we ran an A/B test that highlighted performance issues with an interstitial element. To better understand why users were reacting negatively, we conducted moderated user interviews, which revealed that the interstitial felt “unexpected” and disrupted the experience. These insights guided a redesign of the feature, which we then validated through further experimentation.

    Pros of Moderated Testing:

    • Deep qualitative insights: Moderated sessions allow direct observation of body language, tone, and facial expressions, and let you ask “why” in the moment. This yields a deeper understanding of user motivations and frustrations. You’re not just seeing what users do, but learning why they do it, which is crucial for discovering new test ideas and solutions. 
    • Real-time flexibility: The moderator can clarify task instructions or follow interesting tangents as they arise. If a user gets stuck or confused, the facilitator can ask them to elaborate (or can adjust the task in future sessions). This adaptability helps ensure you’re gathering meaningful feedback rather than useless data points. 
    • Identify subtle usability issues: Because you can probe users’ thought processes, moderated testing often surfaces UX problems that might be overlooked in clickstream data. Minor friction points, cognitive hesitations, or emotional reactions become evident when you’re watching and listening to a user live. 

    Cons of Moderated Testing:

    • Time- and resource-intensive: Each session requires a researcher’s active involvement and often involves one-on-one scheduling with participants. This typically means smaller sample sizes. Moderated tests are highly insightful but don’t scale easily; running 5–10 sessions can already be a significant time investment. 
    • Moderator bias and variability: The quality of insights depends on the moderator’s skill. Poorly worded questions or unconscious cues can bias participants’ responses. (For example, leading a user, “Did you find the checkout confusing?” can plant that idea.) Consistency is key; a structured discussion guide and training help mitigate this risk. 

    • Higher cost per participant: Due to the labor-intensive nature of moderated studies, they can be more expensive on a per-user basis. They may also require specific facilities or conferencing tools. However, the return on insight is often worth the cost when the goal is to gain a deep understanding of complex user journeys or critical conversion issues.

  • What is unmoderated user testing?

    In unmoderated user testing, participants complete assigned tasks independently without the presence of a live facilitator. They might be given a scenario (e.g. “Find and purchase a pair of running shoes on our site”) via a testing platform or survey, and their screen actions and comments are recorded for later analysis. Unmoderated tests are often conducted remotely, with users participating from their homes or offices at their convenience.

    Because there is no moderator involved in real time, unmoderated testing relies on a well-crafted test plan. Tasks and questions must be crystal clear, since participants can’t ask for clarification during the test. The upside is that unmoderated studies can gather data from more users in a shorter time, often at a lower cost. 

    For example, when a client requested eye-tracking research on a new landing page, we leveraged a remote tool (Sticky) to conduct an unmoderated test with a broad pool of participants. This approach allowed us to collect dozens of eye-tracking sessions simultaneously and follow them with a survey, rather than scheduling each participant individually.

    Pros of Unmoderated Testing:

    • Scalable and fast: Unmoderated tests can be deployed to many participants at once. You might get results from 20, 50, or more users within a day or two, which is impractical with fully moderated sessions. This makes unmoderated testing ideal when you need quick, directional feedback or a larger sample to increase confidence in findings.

    • Natural user behavior: Participants complete tasks in their environment, on their own devices, without a researcher potentially looking over their shoulder. This can lead to more natural behavior. Users are less likely to feel observed or pressured, so you may catch genuine stumbling blocks in the experience. (However, note that lack of guidance can also mean they wander off-track, a double-edged sword.)

    • Lower cost per participant: In many cases, unmoderated testing is a cost-effective option. You don’t need to pay a moderator for each session or rent a lab. Many online UX testing platforms offer panel participants and automated recording, which helps drive down costs. For straightforward usability checks or A/B test follow-ups, unmoderated studies can be a budget-friendly way to gather qualitative data at scale.

    Cons of Unmoderated Testing:

    • Limited depth of insight: Without a moderator, you can’t ask participants follow-up questions in the moment or clarify their responses. You might know what they did (e.g., 3 out of 5 users failed to find the wishlist), but you often have to infer the why from their screen recordings or written comments. In other words, unmoderated tests tend to surface the symptoms of UX issues; you may still need moderated research to diagnose the root causes.

    • Rigid test script: The tasks and questions must be carefully designed upfront. If participants misinterpret a task, there’s no way to correct the course during the session. Common pitfalls include users not understanding what they’re asked to do, or question wording inadvertently biasing their behavior. 

    For example, if your task prompt is ambiguous, participants might do entirely different things, yielding unusable data. Unmoderated testing leaves little room for error in research design.

    • No immediate observation of emotions: While many unmoderated tools capture video or audio of the user, it’s not the same as being in the room to notice subtle cues. You might miss non-verbal signals or the ability to probe an offhand remark. In unmoderated sessions, you get what you get: the recorded behavior or survey answers, and sometimes that can feel a bit hollow compared to a rich conversation with a user.
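    To make the sample-size trade-off concrete: task-level results from an unmoderated study (like the hypothetical “3 out of 5 users failed to find the wishlist” above) can be summarized with a confidence interval to see how much a small sample actually tells you. Below is a minimal sketch in plain Python using the Wilson score interval; the participant counts are made-up illustrations, not data from any real study.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a task success rate."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, center - half), min(1.0, center + half))

# 2 of 5 users succeed: the interval is so wide it says very little
lo, hi = wilson_interval(2, 5)
print(f"n=5:  success 40%, 95% CI {lo:.0%}\u2013{hi:.0%}")

# the same 40% rate with 50 users is far more informative
lo, hi = wilson_interval(20, 50)
print(f"n=50: success 40%, 95% CI {lo:.0%}\u2013{hi:.0%}")
```

    With 5 users the 95% interval spans roughly 12% to 77%, which is why small moderated-style samples are for diagnosis, not measurement; scaling to 50 unmoderated participants narrows it to roughly 28% to 54%.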

    Tip: Since unmoderated studies lack a live facilitator, it is crucial to pilot-test your setup. Run through the test internally or with a couple of trial users first to catch any confusing instructions or technical glitches. As our team emphasizes, proper experiment design is just as necessary in research as it is in A/B testing. 

    A quick pilot can save you from wasting dozens of participant sessions on flawed tasks. In one of our projects, we piloted an unmoderated test and discovered that the initial instructions weren’t providing enough context, causing confusion. 

    We adjusted the wording and timing, re-ran the pilot, and only then launched the complete study, avoiding what could have been a costly mistake. 

    When done right, this upfront effort can even yield unexpected insights. 

    “The seemingly ‘failed’ result of the pilot test actually gave us a huge A-ha moment on how users perceived these pages… and drastically shifted our strategic approach to the A/B variations themselves,” notes Nick So, our VP of Delivery.

    In other words, a misstep in testing design can itself reveal something fundamental about user expectations, as long as you’re paying attention.

  • Which testing method should you use?

    Both moderated and unmoderated testing have a place in a robust optimization and UX research program. The best choice depends on your goals, resources, and the stage of the project. Here are some guidelines to help you decide:

    • Use moderated testing for exploratory research and complex scenarios. When you need to gain a deep understanding of user motivations or when evaluating a complex flow or prototype, a moderated session is invaluable. 

    The ability to ask “why did you do that?” is key to uncovering insights that drive innovative hypotheses. 

    For instance, if you’re evaluating a complex multi-step flow, sitting down with users (via Zoom or in person) to watch them work through it will likely reveal pain points that numbers alone won’t show. 

    Moderated testing is also preferable if your target audience is particular or tasks are high-stakes. You wouldn’t want a dozen users floundering in an unmoderated test that deals with, say, sensitive financial data or intricate B2B workflows. Instead, a moderated approach allows you to prioritize quality over quantity in the feedback.

    • Use unmoderated testing for validation and fast feedback loops. If you have a relatively clear idea of what you want to test (for example, the usability of a new feature or a content comprehension check), unmoderated studies can quickly confirm whether users succeed or struggle. 

    They are great for getting broad input on straightforward questions. Maybe you want 50 people’s first impressions of a homepage hero image: an unmoderated test or on-site survey can gather that data within hours. 

    Unmoderated testing also shines when you need to benchmark an experience (e.g., how long does it take on average for users to find an item using your site search?) or when you want to test with users across many time zones without scheduling. Just remember to keep tasks specific and straightforward, and invest time in writing clear instructions (again, pilot testing is your friend here).

    • Consider a mixed approach for the best of both worlds. Moderated and unmoderated testing are not mutually exclusive; instead, they complement each other. Using them together can amplify their strengths. 

    At Conversion, our philosophy is to mix methods to get a 360° view of the user. Quantitative techniques like A/B tests or analytics tell us what is happening, while qualitative research tells us why.

    A moderated interview might reveal an unexpected user need, which you can then validate at scale with an unmoderated survey to see how widespread that sentiment is. 

    Alternatively, you might start with unmoderated usability sessions to identify the most common UX issues, and then follow up with moderated sessions to delve deeper into those specific problems. In practice, we often alternate between the two. 

    In our experience, a blended strategy drives the most significant impact. One of our ongoing partnerships is a great example: with Whirlpool Corporation, we established a regular cadence of both A/B testing and UX research. This mixed-methods program enables us to continually gather qualitative insights to inform new experiments and quantitative results to measure their impact. The Whirlpool team gets to see the whole picture: not just that a change improved revenue by X%, but why it resonated with customers (or didn’t). 

    Their Senior Optimisation Manager put it well: “Conversion has become a trusted partner… quite literally an extension of our in-house capability,” a nod to how seamlessly we integrate research with testing.
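    On the benchmarking use case mentioned earlier (e.g., how long it takes users to find an item with site search), the summary statistics matter: task times are usually right-skewed by a few very slow sessions, so the median and a high percentile describe the typical experience better than the mean alone. A minimal sketch using only the Python standard library, with hypothetical timings rather than real study data:

```python
from statistics import median, quantiles

# Hypothetical time-on-task results (seconds) from an unmoderated
# benchmark study; one slow outlier session drags the mean upward
times = [18, 22, 25, 27, 31, 34, 40, 48, 55, 120]

# Mean is inflated by the 120s outlier; median and the 90th
# percentile give a fairer picture of typical vs. worst-case users
print(f"mean:            {sum(times) / len(times):.1f}s")
print(f"median:          {median(times):.1f}s")
print(f"90th percentile: {quantiles(times, n=10)[-1]:.1f}s")
```

    Re-running the same benchmark after a redesign and comparing medians (rather than means) keeps one unlucky participant from masking a genuine improvement.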

  • Integrating user research into experimentation

    When it comes to moderated vs. unmoderated testing, the answer isn’t one or the other; it’s figuring out when to use each, and often using both. 

    Moderated sessions offer depth and discovery, while unmoderated sessions offer scale and speed. The true power of conversion optimization lies in uniting these methods within an overarching experimentation framework. By doing so, you ensure that every A/B test is not just a shot in the dark but a data-informed hypothesis grounded in real user behavior and feedback.

    Above all, remain user-centric. Any test or optimization should ultimately serve the needs of your users. Moderated and unmoderated research are tools to keep you connected to those needs, whether through the voice of a single user in an interview or the patterns of thousands of users clicking through your funnel. The companies that win in CRO are those that never lose sight of the customer experience behind the metrics. 

    In the end, the moderated vs. unmoderated question isn’t a competition at all. It’s a collaboration. When used thoughtfully together, they ensure your UX research is both broad and deep, and your optimization efforts are both data-driven and user-informed. That is the formula for creating digital experiences that not only convert but also delight.

  • The Author

    Christopher Barlow – User Experience Consultant