
Introducing our hypothesis framework

Download printable versions of our hypothesis framework here.

Experiments are the building blocks of optimisation programmes. Each experiment will at minimum teach us more about the audience – what makes them more or less likely to convert – and will often drive a significant uplift in key metrics.

At the heart of each experiment is the hypothesis – the statement that the experiment is built around.

But hypotheses can range in quality. In fact, many wouldn’t even qualify as a hypothesis: eg “What if we removed the registration step from checkout?” That might be fine to get an idea across, but it’s going to underperform as a test hypothesis.

For us, an effective hypothesis is made up of eight key components. If it’s reduced to just one component showing what you’ll change (the “test concept”), you’ll not just weaken the potential impact of the test – you’ll undermine the entire testing programme.

That’s why we created our hypothesis framework: a simple framework, based on almost 10 years’ experience in optimisation and testing, that’s applicable to any industry.

Conversion.com’s hypothesis framework

[Image: the Conversion.com hypothesis framework]

What makes this framework effective?

It’s a simple framework – but there are three factors that make it so effective.

  1. Putting data first. Quantitative and qualitative data is literally the first element in the framework. It focuses the optimiser on understanding why visitors aren’t converting, rather than brainstorming solutions and hoping there’ll be a problem to match.
  2. Separating lever and concept. This distinction is relatively rare – but for us, it’s crucial. A lever is the core theme for a test (eg “emphasising urgency”), whereas the concept is the application of that lever to a specific area (eg “showing the number of available rooms on the hotel page”). It’s important to make the distinction as it affects what happens after a test completes. If a test wins, you can apply the same lever to other areas, as well as testing bolder creative on the original area. If it loses, then it’s important to question whether the lever or the concept was at fault – ie did you run a lousy test, or were users just not affected by the lever after all?
  3. Validating success criteria upfront. The KPI and duration elements are crucial factors in any test, and are often the most overlooked. Many experiments fail by optimising for a KPI that’s not a priority – eg increasing add-to-baskets without increasing sales. Likewise the duration should not be an afterthought, but the result of statistical analysis on the current conversion rate, volume of traffic, and the minimum detectable uplift (see the sketch below). All too often, a team will define, build and start an experiment, before realising that its likely duration will be several months.
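
To make the duration point concrete, here’s a minimal sketch of the kind of calculation involved, using the standard two-proportion sample-size formula at 95% significance and 80% power. The function names and the example numbers are ours for illustration – this is not a specific tool we use:

    // Rough A/B test duration estimate (illustrative sketch).
    // Assumes a two-sided test at 95% significance (z = 1.96)
    // and 80% power (z = 0.84).
    function requiredSampleSizePerVariant(baselineRate, minDetectableUplift) {
      var p1 = baselineRate;                    // eg 0.04 = 4% conversion rate
      var p2 = p1 * (1 + minDetectableUplift);  // rate if the uplift is real
      var zAlpha = 1.96;
      var zBeta = 0.84;
      var pooled = (p1 + p2) / 2;
      var numerator = Math.pow(
        zAlpha * Math.sqrt(2 * pooled * (1 - pooled)) +
        zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)),
        2
      );
      return Math.ceil(numerator / Math.pow(p2 - p1, 2));
    }

    function estimatedDurationDays(baselineRate, minDetectableUplift, dailyVisitors) {
      var variants = 2; // control plus one variation
      var perVariant = requiredSampleSizePerVariant(baselineRate, minDetectableUplift);
      return Math.ceil((perVariant * variants) / dailyVisitors);
    }

    // Eg a 4% conversion rate, a 10% minimum detectable uplift and
    // 5,000 eligible visitors a day gives a duration of roughly 16 days.
    console.log(estimatedDurationDays(0.04, 0.10, 5000));

Running a calculation like this before the test is built means that if the answer comes back as several months, you find out now rather than after launch.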

Terminology

Quant and qual data

What’s the data and insight that supports the test? This can come from a huge number of sources, like web analytics, sales data, form analysis, session replay, heatmapping, onsite surveys, offsite surveys, focus groups and usability tests. Eg “We know that 96% of visitors to the property results page don’t contact an agent. In usability tests, all users wanted to see the results on a map, rather than just as a list.”

Lever

What’s the core theme of the test, if distilled down to a simple phrase? Each lever can have multiple implementations or test concepts, so it’s important to distinguish between the lever and the concept. Eg a lever might be “emphasising urgency” or “simplifying the form”.

Audience

What’s the audience or segment that will be included in the test? As with the area (defined below), make sure the audience has sufficient potential and traffic to merit being tested. Eg an audience may be “all visitors” or “returning visitors” or “desktop visitors”.

Goal

What’s the goal for the test? It’s important to prioritise the goals, as this will affect the KPIs. Eg the goal may be “increase orders” or “increase profit” or “increase new accounts”.

Test concept

What’s the implementation of the lever? This shows how you’re applying the lever in this test. Eg “adding a map of the local area that integrates with the search filters”.

Area

What’s the flow, page or element that the test is focused on? You’ll need to make sure there’s sufficient potential in the area (ie that an increase will have a meaningful impact) as well as sufficient traffic (ie that the test can be completed within a reasonable duration – see below). Eg the area may be “the header”, “the application form” or “the search results page”.

KPI

The KPI defines how we’ll measure the goal. Eg the KPI could be “the number of successful applications” or “the average profit per order”.

Duration

Finally, the duration is how long you expect the test to run. It’s important to calculate this in advance – then stick to it. Eg the duration may be “2 weeks”.
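
Putting the eight components together, a complete hypothesis can be captured as a single structured record. Here’s an illustrative sketch based on the property-search example used above – the lever, goal and KPI values are our assumptions for illustration, not taken from a real test:

    // One hypothesis, with all eight components of the framework.
    // The values extend the property-search example above; the lever,
    // goal and KPI shown here are assumed for illustration.
    var hypothesis = {
      quantQualData: '96% of visitors to the property results page do not ' +
                     'contact an agent; in usability tests, all users wanted ' +
                     'to see the results on a map, rather than just as a list',
      lever: 'making results easier to explore',        // assumed lever
      audience: 'all visitors',
      goal: 'increase agent contacts',                  // assumed goal
      testConcept: 'adding a map of the local area that integrates with ' +
                   'the search filters',
      area: 'the property results page',
      kpi: 'number of agent contact requests',          // assumed KPI
      duration: '2 weeks'
    };

Writing every hypothesis in the same shape makes it easy to spot a missing component before the test is built.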

Taking this further

This hypothesis framework isn’t limited to A/B tests on your website – it can apply anywhere: to your advertising creative and channels, even to your SEO, product and pricing strategy.
Any change and any experience can be optimised – and to do that effectively requires a data-driven and controlled framework like this.

Don’t forget – you can download printable versions of the hypothesis framework here.

Specialist teams or x-functional pods? A developer’s view

Conversion.com is an agency made up of specialists who look for opportunities to improve clients’ ROI through methodical research, testing and learning. We analyse user behaviour and expectations on a website in order to increase engagement and, consequently, conversions.

Testing is at the heart of everything we do, so we’re always trying to improve and find better ways of doing things. Typically, our company is split into three major ‘specialist teams’ – consultants, designers and developers.

Consultants: Their role is to perform in-depth research on a client’s website and draw relevant insights about the business. From this research, test ideas are generated and wireframes created. They are also the main bridge between our clients and our internal teams.

Designers: They feed into the wireframe stage by collaborating on ideas for how to implement the test concept. Once this stage is approved, they produce the final design file, which is handed over to the developers.

Developers: These geeks transform the final design file into code that browsers can read. This is the final stage of the test creation flow.

After this internal process, the test is run with a live audience through an A/B testing platform; at the end, consultants analyse the results and make recommendations for the client’s site.

Here is how the teams typically interact within the company:

[Graph 1: how the specialist teams interact]

As can be seen, developers come in at the very end of the process. After designers have completed the final file, they assign it to whichever developer is available at that moment. This is great from a developer’s standpoint, because they get to work on many different clients and retain a good working knowledge across all of them. The downside, however, is work overload: different consultants have different deadlines for delivering tests, so at times many tests hit the development team simultaneously, and it becomes difficult to manage requests so that each test is delivered on time.

Because of these issues, we decided to take a member from each team and have them work more closely together: we created a cross-functional team, a.k.a. a pod.

What exactly is a pod?

A pod is like a small startup inside the company. Instead of organising your business into separate functional departments, you create teams that contain a member of each function. Let’s illustrate what we have done within our company:

[Graph 2: the pod structure within our company]

Clear goals and collaboration

With the team working collectively on the same clients, it’s much easier to sync up schedules. Since we always have a priority list for our tasks, the team works through those goals in order. For example, if a developer needs design approval for a certain test, the designer will stop whatever they are doing to review the developer’s work, because that is the current priority for the whole team.

Tidy schedule

Because there are clear goals, the project manager is able to build a clear schedule for everyone in the team. Developers know what work is coming into their queue, can manage their time across their other tasks, and are free to arrange their projects as they prefer, as long as the work is delivered by the expected deadlines.

Earlier technical evaluation

We have introduced a new format for the test idea/concept phase. Before the pod, the developer had little input at this stage. The developer is now an active member of the conceptual phase, bringing valuable know-how on potential implementation issues. Sometimes even a slightly different approach can save many hours of development and help the team deliver a test faster. For example, implementing native placeholders can cause cross-browser compatibility problems, so the developer might ask at this stage: ‘Is this really required for the test? Will this make a significant difference to conversions?’ Getting familiar with the test at the very beginning also gives the developer time to research the code practices it will require (e.g. getting to know a new framework).
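
To illustrate the placeholder example, here’s the kind of fallback that supporting native placeholders in older browsers can force on a developer – exactly the effort an early ‘is this really required?’ question might avoid. This is a hypothetical ES5-style sketch; the function names are ours, not from a real test:

    // Hypothetical fallback for browsers without native placeholder support.
    function supportsPlaceholder() {
      return 'placeholder' in document.createElement('input');
    }

    function applyPlaceholderFallback(input) {
      if (supportsPlaceholder()) { return; } // native support: nothing to do

      var hint = input.getAttribute('placeholder');
      if (!hint) { return; }

      // Show the hint as the field's value until the user focuses the field.
      if (input.value === '') {
        input.value = hint;
        input.className += ' placeholder-fallback';
      }
      input.onfocus = function () {
        if (input.value === hint) {
          input.value = '';
          input.className = input.className.replace(' placeholder-fallback', '');
        }
      };
      input.onblur = function () {
        if (input.value === '') {
          input.value = hint;
          input.className += ' placeholder-fallback';
        }
      };
    }

    // Apply the fallback to every input that declares a placeholder.
    var inputs = document.querySelectorAll('input[placeholder]');
    for (var i = 0; i < inputs.length; i++) {
      applyPlaceholderFallback(inputs[i]);
    }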

Faster test development

Since the developer has a clear pipeline, they can start to develop the test before they actually receive the final design file from the designer. How is this possible? Well, before the designers start work on the final Photoshop file, there is a wireframe stage. As soon as we get approval from the client on the wireframe, the developer can start work at the same time as the designer prepares the final file. This is possible because the wireframe gives a clear indication of what the test is all about. With this visual information, the developer is able to build a big chunk of the HTML, CSS and JavaScript. Remember that from the test idea phase the developer already knows what functionality and goals the test is supposed to deliver. This allows the developer to finish around 70-80% of the work even before the designer delivers the file. With the final file, the developer just needs to make some tweaks to the code (e.g. spacing, colours). So far, this new process has allowed us to deliver tests 35% faster than before.

Quick decision-making

Because the members are simply around each other, as opposed to working in silos, it is easier to take a minute to discuss something. Interrupting a teammate also feels less intrusive: if you need something to finish the pod’s priority task, they are more open to being interrupted in order to collectively help meet the team’s goals.

Flexibility

Because the pod is like a small startup within the company, it allows the team to change processes and try new ways of working. This can be very useful in finding more efficient ways of working which we can then share with the other pods.

Results

As optimisers, we see a testing culture as a vital part of how we work. This means we also need to measure everything and be able to critically evaluate how things are performing. Here are the results we have observed so far by moving from specialist teams to a cross-functional pod approach:

  • 35% faster test delivery time from start to finish. By developing test ideas in parallel, as opposed to serially, we have seen a significant reduction in the total time elapsed from the inception of a test to its final launch.
  • 28% reduction in actual developer build time. By integrating the developer more closely into the design and consulting phases, devs have a much better idea of how to build the test by the time they start working on it, meaning the build time is dramatically reduced.
  • 66% reduction in bugs reported during QA. Because developers feed into test development earlier, they can anticipate issues, build tests more intelligently and avoid prospective clashes.
  • Happier team members. Although there are a few downsides to working in a pod, such as less variety in the sites we get to work on, the individual members of the pod are generally much happier with this new approach, because they are working as a team throughout the whole process. This means fewer internal conflicts and more efficient workflows.
  • More time to work on other projects. Because we have increased efficiency across the board, pod members have more time to spend on other tasks, such as internal assignments and creative projects. The introduction of a project manager also means that consultants spend more time doing valuable conversion-related work and less admin, which is likely to be correlated with the uplift in team happiness!

While it is still early days for the pod, the initial results and general consensus are a positive indication. As a developer, there are far fewer conflicts and less back-and-forth between the design and consulting teams, and we have become much more connected to the conversion aspect of what we do. The developer becomes more of an expert on a smaller number of clients’ sites, as opposed to a generalist working across the whole spectrum. There are small downsides: if a pod developer is needed on a different client’s site, they may initially be less familiar with its technical setup, and the developer has to adapt to many more meetings than they are typically used to (!). But the surplus time that comes from working in the pod can be used for more internal sharing and learning, which may be more valuable in the long term, and the benefits of being more involved in the project overall make it worth our while.

Do you have anything to add? Questions or comments? Let us know in the comments below!

 

 

Introducing our fully Optimizely certified developer team

We are proud to announce that the entire Conversion.com development team are Optimizely certified developers!

Our savvy dev team have been working closely with Optimizely since October 2013, and every member is now Optimizely Developer Certified. We’ve really got to know Optimizely over the years, and we’re co-creating more innovative solutions all the time (so watch this space!).

Be sure to check out our 6 essential tips for working with Optimizely from our very own James Marchant.
Stay tuned for more updates!