How pilot testing can dramatically improve your user research

Matt Wright and Nick So

Quality user research yields deep, meaningful user insights. It is a key information source within Conversion’s Explore phase and can be used to generate solid experiment hypotheses.

User research is a part of the Explore phase within Conversion’s Infinity Optimization Process™.

Unfortunately, conducting user research isn’t always as easy as it sounds…

Do any of the following sound familiar?

  • During research sessions, your participants don’t understand what they have been asked to do,
  • The phrasing of your questions has given away the answer or has caused bias in your results,
  • During your tests, your participants are unable to complete the assigned tasks in the time provided,
  • After conducting participant sessions, you spend more time analyzing the research design than the actual results.

If you have experienced any of these, don’t worry. You’re not alone.

Even the most seasoned researchers experience “oh-shoot” moments when they realize there are flaws in their research approach.

Fortunately, there is a way to significantly reduce these moments: It’s called pilot testing.

Pilot testing is a rehearsal of your research study, allowing you to test your research approach with a small number of test participants before you conduct your main study. Although this is an additional step, it may be the time best spent on any research project.

Just as proper experiment design is a necessity, it is important to take the time to critique, test, and iteratively improve your research design before the execution phase. By doing so, you can ensure that your user research runs smoothly and dramatically improve the output of your study.

And the best part? Pilot testing can be applied to all types of research approaches, from basic surveys to more complex diary studies.

Start with process.

At Conversion, our research approach is unique for every project, but always follows a defined process:

  1. Developing a defined research approach (methodology, tools, participant target profile)
  2. Pilot testing the research design
  3. Recruiting qualified research participants
  4. Executing the research
  5. Analyzing the outputs
  6. Reporting on the research findings
User Research Process at Conversion

Each part of this process can be discussed at length, but our focus today is on pilot testing.

You should always begin any research with the high-level question: “What are we aiming to learn through this research?” You can use this question to guide the development of your research methodology, select research tools, and determine the participant target profile. Pilot testing allows you to quickly test and improve this approach.

Conversion’s pilot testing process consists of two phases: 1) an internal research design review, and 2) participant pilot testing.

During the design review, members from our research and strategy teams sit down as a group and spend time critically thinking about the research approach. This involves reviewing:

  • Our high-level goals for what we are aiming to learn
  • The tools we are going to use
  • The tasks participants will be asked to perform
  • Participant questions
  • The research participant sample size, and
  • The participant target profile

Our team often spends a lot of time discussing the questions we plan to ask participants. It can be tempting to ask participants numerous questions over a broad range of topics. This inclination is often due to a fear of missing the discovery of an insight. Or, in some cases, it is the result of working with a large group of stakeholders across different departments, each trying to push their own agenda.

However, applying a broad, unfocused approach to participant questions can be dangerous. It can cause a research team to lose sight of its original goals and produce research data that is difficult to interpret, thus limiting the number of actionable insights generated.

To overcome this, we use the following approach when creating research questions:

Phase 1: To start, the research team creates a list of potential questions. These questions are then reviewed during the design review. The goal is to create a concise set of questions that are clearly written, do not bias the participant, and complement each other. Often, this involves removing a large number of the questions from our initial list and reworking those that remain.

Phase 2: The second phase of Conversion’s pilot testing process is participant pilot testing.

This follows a rapid, iterative approach: we pilot the defined research design on an initial 1 to 2 participants. Based on how these participants respond, the research approach is evaluated, improved, and then tested on 1 to 2 new participants.

Researchers repeat this process until all of the research design ‘bugs’ have been ironed out, much like QA-ing a new experiment (a rough sketch of this loop follows the list below). There are different criteria you can use to evaluate the research experience, but we focus on three main areas: clarity of instructions, participant tasks and questions, and research timing.

  • Clarity of instructions: This involves making sure that the instructions are not misleading or confusing to the participants
  • Testing of the tasks and questions: This involves testing the actual research workflow
  • Research timing: We evaluate the timing of each task and the overall experiment
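One way to picture this loop is a minimal code sketch. The class, criterion names, and needs_another_round helper below are purely illustrative assumptions, not part of any real research tool or of the actual workflow described here:

    from dataclasses import dataclass, field

    # The three areas checked during each pilot round (names are illustrative).
    CRITERIA = ("clarity_of_instructions", "tasks_and_questions", "research_timing")

    @dataclass
    class PilotRound:
        round_number: int
        participants: int
        # Issues observed per criterion, e.g. {"research_timing": ["5s too short"]}
        issues: dict = field(default_factory=dict)

        def is_clean(self):
            # A round is "clean" when no criterion surfaced new issues.
            return not any(self.issues.get(c) for c in CRITERIA)

    def needs_another_round(rounds):
        # Keep piloting with 1 to 2 fresh participants until the latest round is clean.
        return not rounds or not rounds[-1].is_clean()

    rounds = [
        PilotRound(1, participants=2, issues={
            "clarity_of_instructions": ["not enough context about the task"],
            "research_timing": ["5 seconds too short for the content page"],
        }),
        PilotRound(2, participants=2),
    ]
    print(needs_another_round(rounds))  # False: the second round came back clean

The point is simply that each round either surfaces new ‘bugs’ to fix or confirms the design is ready for the main study.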

Let’s look at an example.


Recently, a client approached us to research a new area of their website being developed for a new service offering. Specifically, the client wanted to conduct an eye tracking study on a new landing page and its supporting content page.

With the client, we co-created a design brief that outlined the key learning goals, target participants, the client’s project budget, and a research timeline. The main learning goals for the study included developing an understanding of customer engagement (eye tracking) on both the landing and content page and exploring customer understanding of the new service.

Using the defined learning goals and research budget, we developed a research approach for the project. Due to the client’s budget and request for eye tracking, we decided to use Sticky, a remote eye tracking tool, to conduct the research.

We chose Sticky because it allows you to conduct unmoderated remote eye tracking experiments, and follow them up with a survey if needed.

We were also able to use Sticky’s existing participant pool, Sticky Crowd, to define our target participants. In this case, the criteria for the target participants were determined based on past research that had been conducted by the client.

Leveraging the capabilities of Sticky, we were able to define our research methodology and develop an initial workflow for our research participants. We then created an initial list of potential survey questions to supplement the eye tracking test.

At this point, our research and strategy team conducted an internal research design review. We examined the research tasks and flow, reviewed the associated timing, and finalized the survey questions.

In this case, we used open-ended questions in order to not bias the participants, and limited the total number of questions to five. Questions were reworked from the proposed list to improve the wording, ensure that they complemented each other, and keep them focused on the learning goal: exploring customer understanding of the new service.

To help with question clarity, we used Grammarly to test the structure of each question.

Following the internal design review, we began participant pilot testing.

Unfortunately, piloting an eye tracking test on 1 to 2 users is not an affordable option when using the Sticky platform. So, we got creative and used some free tools to test the research design.

We used a Keynote presentation with timed transitions and the Keynote Live feature to remotely test the research workflow, and Google Forms to test the survey questions. GoToMeeting was used to observe participants via video chat during the pilot sessions. Using these tools, we were able to conduct a quick and affordable pilot test.

The initial pilot test was conducted with two individual participants, both of whom fit the target participant criteria. The pilot test immediately revealed flaws in the research design, including confusion about the test instructions and issues with the timing of each task.

In this case, the initial instructions did not provide our participants with enough context about what they were looking for, resulting in confusion about what they were actually supposed to do. We had also assumed that 5 seconds would be enough time for each participant to view and comprehend each page. However, the supporting content page was very content rich, and 5 seconds did not give participants enough time to view all of the content on the page.
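A rough back-of-the-envelope check can flag this kind of timing issue even earlier. As a sketch only: assuming an average silent reading speed of roughly 225 words per minute (a common rule of thumb, not something we measured), you can estimate the minimum exposure time a page needs from its word count:

    # Rough sanity check, not part of the Sticky workflow: estimate the minimum
    # viewing time for a page, assuming ~225 words per minute of silent reading.
    def minimum_exposure_seconds(word_count, words_per_minute=225):
        return word_count / words_per_minute * 60

    # A hypothetical content-rich supporting page with ~300 words of copy:
    print(round(minimum_exposure_seconds(300)))  # 80 seconds -- far more than 5

Even a crude estimate like this makes it obvious when a 5-second exposure cannot cover a content-heavy page.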

With these insights, we adjusted our research design to address these issues, and then conducted an additional pilot with two new individual participants. All of the adjustments seemed to resolve the previous ‘bugs’.

In this case, pilot testing not only gave us the confidence to move forward with the main study, it actually provided its own “A-ha” moment.

Through our initial pilot tests, we realized that participants expected a distinct function for each page. Participants expected the landing page to grab their attention and attract them to the service, whereas they expected the supporting content page to provide more details on the service and educate them on how it worked. Insights from these pilot tests reshaped our strategic approach to both pages.

“The seemingly ‘failed’ result of the pilot test actually gave us a huge A-ha moment on how users perceived these two pages, which not only changed the answers we wanted to get from the user research test, but also drastically shifted our strategic approach to the A/B variations themselves.”

– Nick So, VP of Delivery, Conversion

In some instances, pilot testing can actually provide its own unique insights. It is a nice bonus when this happens, but it is important to remember to always validate these insights through additional research and testing.

Final Thoughts

Still not convinced about the value of pilot testing? Here’s one final thought:

By conducting pilot testing, you not only improve the insights generated from a single project, but also the process your team uses to conduct research. The reflective and iterative nature of pilot testing will actually accelerate the development of your skills as a researcher.

Yes, pilot testing will require an investment of both time and effort. But trust me, that small investment will deliver significant returns on your next research project and beyond.
