How to hire the perfect CRO consultant for your programme

Hiring an expert CRO consultant is complicated – particularly in today’s job market – which can make it difficult to know for certain if you’re choosing the right person for your programme. 

As one of the world’s fastest growing CRO agencies, we here at Conversion are constantly adding skilled consultants to our team, so in the first half of this post, we’re going to give an overview of the kinds of attributes and traits that we look for when we hire our consultants.

In the second half, we'll explain why, under certain conditions, it might actually make more sense for you to enlist the help of a full-service CRO agency, which will be able to run your entire end-to-end experimentation programme for you.

So, throughout this post we're going to cover what a CRO consultant is and does, the key things to look for when hiring one, and the situations where a full-service agency might make more sense.

What is a CRO consultant?

A CRO consultant – sometimes also known as a conversion optimisation consultant, a CRO specialist, a CRO expert, an experimentation consultant, etc. – is the chief strategist behind your conversion rate optimisation programme. Their role is to develop and oversee your programme's strategy from start to finish, from research right on through to the analysis of experiment results and the application of learnings to new experiments.

CRO consultants often have some level of expertise in all of the following areas:

  • Multivariate and A/B testing
  • Analytics
  • Behavioural science
  • Statistics
  • UX design
  • Copywriting
  • User research
  • Wireframing

What, specifically, does a CRO consultant do?

A CRO consultant is responsible for stewarding your experimentation programme from start to finish. This will usually involve:

  • Developing a testing strategy – a CRO consultant will set the agenda for your CRO programme and manage a testing roadmap to help deliver long-term results
  • Identifying problems with the customer experience of your website through quantitative and qualitative research. Research will typically include things like user testing, heatmap studies, on-site surveys, data analytics, etc.
  • Ideating test concepts to address issues identified in quantitative and qualitative research
  • Analysing and evaluating experiments
  • Reporting on results and devising iterations on concepts to drive the testing strategy
  • Prioritising insights and testing strategy to align with your goals and deliver a strong ROI
[Infographic: a (somewhat) typical CRO consultant delivery process]

What doesn’t a CRO consultant do?

While CRO consultants are often extremely versatile, with expertise in a range of disciplines, rarely do they have the ‘full-stack’ of skills necessary to deliver an advanced CRO programme on their own. 

Specifically, CRO consultants don’t usually have expertise in: 

  • Design
  • Web Development
  • Quality Assurance

This means that you’ll often have to bring in the capabilities of other specialists if you want to create an experimentation programme that yields strong results. 

4 things to look for when hiring a CRO consultant

So, now that you understand exactly what a CRO consultant does (and also what they don’t), you’ll want to know how to select the right consultant for your programme. In this next section, we’ll run over some of the most important things to look for as you go about trying to hire your CRO consultant.

1. Data-driven outlook

Conversion rate optimisation, at its core, is about applying the scientific method to your website changes to improve the rate at which your users convert into customers. 

This means running experiments, collecting data, and using this data to guide your decisions. 
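To make that concrete, here's a minimal sketch of the kind of significance check that sits behind a data-driven decision. It's illustrative rather than prescriptive: the numbers, function names and the 95% threshold are ours for this example, and a real programme would also plan sample size and duration up front.

```typescript
// Two-proportion z-test sketch: is the variation's conversion rate
// meaningfully different from the control's, or just noise?

interface VariantResult {
  visitors: number;
  conversions: number;
}

function zScore(control: VariantResult, variation: VariantResult): number {
  const p1 = control.conversions / control.visitors;
  const p2 = variation.conversions / variation.visitors;
  // Pooled rate under the assumption of "no real difference".
  const pooled =
    (control.conversions + variation.conversions) /
    (control.visitors + variation.visitors);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.visitors + 1 / variation.visitors)
  );
  return (p2 - p1) / standardError;
}

const control = { visitors: 10000, conversions: 400 };   // 4.0% conversion
const variation = { visitors: 10000, conversions: 460 }; // 4.6% conversion

const z = zScore(control, variation);
// |z| > 1.96 is roughly 95% confidence on a two-tailed test.
console.log(`z = ${z.toFixed(2)}, significant at 95%: ${Math.abs(z) > 1.96}`);
```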

A CRO expert who’s not data-driven is a contradiction in terms, so make sure that whoever you hire is committed to letting the data – rather than bias or opinion or committee – guide the course of your experimentation programme. 

Tip: If you want to gauge how data-driven a candidate is, ask them to present their top 3 concepts for your website. If these concepts are chosen based on intuition and gut-feel, then you may want to consider someone else for the position. If they’re chosen based on research and data, you may be on to a winner. 

2. Proven track record

Running a CRO programme is hard. There are many places where an inexperienced CRO consultant might slip up, so it’s important to make sure that whoever you hire has a proven track record delivering high-level CRO programmes that produce quantifiable results. 

Case studies are an excellent source of information about a CRO consultant’s past work. If your prospective consultant has a set of strong case studies – possibly even involving a similar company to your own – then you can be fairly confident that they’ll be able to deliver your programme for you. 

[Screenshot: our client success page, where we post some of our latest case studies and show the world what we can do]

Of course, case studies aren’t a foolproof indication of a consultant’s quality – they’re designed to let the consultant put their best foot forward and present themselves in as favourable a light as possible. But at the very least, they’ll tell you about the kinds of projects your candidate has worked on and show you some of the best results they’ve been able to achieve in the past.

Reviews and testimonials are another great way of gauging the quality of a CRO consultant's past work. If they have lots of happy clients who were willing to leave them rave reviews and glowing testimonials, there's a good chance that the service they provided was of a high standard.

To give you an idea, here’s an example of a testimonial that our friends at Canon left us off the back of an experimentation programme we ran for them:

[Screenshot: a positive testimonial we received from Canon]

3. Thought leadership content

Many of the foremost experts in the CRO industry spend time producing content that outlines their methodology and that provides a good insight into the way they think and work. 

If you want to get a sense as to the quality of work you can expect from a consultant, their content is a good place to start.

Tip: One good way of judging the merit of a piece of content – aside from reading it yourself! – is to look at the reception it receives from other industry experts. If they’re raving about it, you can be fairly sure that the person who wrote it knows their stuff.

Our blog post about our experimentation framework, for example, written by our very own Stephen Pavlovich, is widely shared within the industry and has been used to inform the CRO methodologies of many brands across the globe – including Facebook and Microsoft.

4. Strong communication skills 

Conversion rate optimisation is still a relatively new discipline, which can sometimes mean that it’s difficult to gain the support of senior stakeholders when trying to roll out innovative or novel experiment concepts. 

If you have a CRO consultant who can effectively articulate their vision for the programme, as well as the rationale behind each of their experiments, then their chances of gaining the support of senior decision makers go way up. This is extremely important if you want your experimentation programme to have a real impact.

7 reasons why hiring a full-service CRO agency might make more sense

A good, in-house CRO consultant can be extremely useful when it comes to setting up and running your experimentation programme, but there may be situations when it makes more sense to bring in the services of a full-service CRO agency instead. 

In this last section, we’re going to go over a few of the reasons that you might have for hiring a CRO agency rather than an in-house specialist. 

1. Internal limitations

As discussed above, while a good CRO consultant will bring many valuable skills to the table, they’re unlikely to come with the ‘full-stack’ of CRO skills – skills like design, web development, and QA. 

If you have a strong internal team with plenty of free capacity, then it may well make sense for you to hire an in-house consultant, who can work alongside your internal team to deliver your experimentation programme. 

But if, as is more often the case, your internal team is already working at or near full capacity, then it may be more sensible for you to hire a full-service CRO agency, whose consultant, design, development, project management, and QA teams will be able to focus exclusively on experimentation without other priorities taking up capacity. 

2. Specialisation

While your organisation may be chock full of design and development talent, the skills required for CRO will often be subtly but significantly different from those of your internal specialists. For example, it’s extremely rare that an in-house development team will have all of the skills needed to work within experimentation tools. This may mean that, on top of hiring a CRO consultant, you’re also forced to hire a new, CRO-focussed developer as well. 

On the other hand, a good CRO agency will have deep expertise and specialisation in every aspect of CRO, which will often mean that they’re better equipped to support a CRO consultant in building out an experimentation programme than your internal team would be. 

3. Experiment velocity

With an entire team of dedicated experts working on your website, a CRO agency will be able to run a greater number of high-quality experiments than would a single, in-house CRO consultant. 

For organisations with a greater number of opportunities for optimisation and more revenue at stake, it may make more sense to bring in an agency, which will be able to deliver compounding results at a far faster pace.

4. Higher win rate

A report from 2019 looking at more than 28,000 experiments found that agencies were able to rack up almost 21% more wins than in-house teams. 

Of course, this win rate differential will vary from case to case – there’ll no doubt be in-house specialists who are able to outperform agencies – but if a high win rate is one of your top priorities, you may want to go down the agency route instead.  

5. Access to a huge database of past experiments

In our case, we’ve stored data on more or less every experiment we’ve ever run. This means that we now have a database with thousands of experiments that we’re able to draw upon to inform our experimentation programmes and deliver maximum ROI for our clients. 

As an example, let’s say that we start working with a new client who has an ecommerce website. One of the first steps in our process is to dig through our database and look for the kinds of experiments that were successful – and unsuccessful – for our other clients who operate in a similar market space. Of course, each client’s website is different, and there’s no guarantee that what worked for one website will work for another, but at the very least, this data gives us additional information that we can use to guide our strategy. 

Our database is one of the biggest reasons that we’re so often able to generate a positive return on investment for our clients within the first 12 weeks of working together – and it’s also why our agency-wide win rate is as high as it is. But in-house consultants won’t have access to this kind of database, which puts them at a big disadvantage in this respect.

6. More innovation

If you hire an in-house consultant, your CRO consultant team will consist of one person. If you hire an agency, you’ll be hiring the collective expertise of a whole team of consultants. 

At the time of writing, we have a team of 12 consultants and 6 associate consultants, with these numbers growing all the time. This means that when our consultants run into difficulty, they're able to draw upon the collective creativity, experience, and knowledge of our entire consulting team. This creates all kinds of synergies and allows us to be far more innovative than almost any single CRO consultant – no matter how brilliant.

7. Strategic experimentation

Most good CRO consultants will be able to deliver an effective experimentation programme that helps you optimise your website, but an experienced CRO agency will often have the capability to extend your experimentation programme far beyond the limits of your website. 

For example, we’ve worked with a number of our clients to experiment on their pricing and product strategies, allowing them to gain extremely valuable information about their customers while significantly reducing the risks associated with developing new products and applying untested pricing strategies. 

Read more about our approach to product experimentation. 

By hiring a CRO agency with advanced experimentation capabilities, you’ll have a chance to explore opportunities that go far beyond the usual remit of conversion rate optimisation. 


Final thoughts

Hiring a CRO consultant can be a complicated process. All CRO consultants – whether in-house or agency-side – will need to have certain skills and attributes if they’re going to drive meaningful results through experimentation. We hope that this blog post will give you everything you need to pick the right person – or persons – for your programme’s needs. 

Prepare for Launch: Lessons from 1,000 A/B Test Launches

In this article, we provide a guide for the A/B test launch process that will help you to keep your website safe and to keep your colleagues and/or clients happy. 

You’ve spent weeks, maybe months, preparing for this A/B test. You’ve seen it develop from a hypothesis, to a wireframe, through design, build and QA. Your team (or client, if you work agency-side) are excited for it to go live and all that’s left to push is the big red button. (Or the blue one, if you’re using Optimizely). Real users are about to interact with your variation and, hopefully, it’ll make them more likely to convert: to buy a product, to register for an account or simply to make that click.

But for all the hours you’ve put into preparing this test, the work is not over yet. At Conversion, we’ve launched thousands of A/B tests for our clients. The vast majority of those launches have gone smoothly, but launching a test can be intense and launching it properly is crucial. While we’re flexible and work with and around our clients, there are some fixed principles we adhere to when we launch an A/B test.

Get the basics right

Let’s start with the simplest step: always check that you’ve set the test up correctly in your testing platform. The vast majority of errors I have witnessed in the launching of tests have been minor errors in this part of the process. Make sure that you have:

  • Targeted the correct page or pages;
  • Allocated traffic to your Control and Variation/s;
  • Included the right audience in your test.

Enough said.
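If you want to make this check repeatable, the basics above are easy to encode. The sketch below is illustrative, assuming a simplified test configuration object; real testing platforms expose their own equivalents of these fields.

```typescript
// Pre-launch sanity check over a simplified, hypothetical test configuration.

interface TestConfig {
  targetedUrls: string[];      // pages the test should run on
  trafficAllocation: number[]; // e.g. [50, 50] for Control and one Variation
  audiences: string[];         // audiences included in the test
}

function preLaunchIssues(config: TestConfig): string[] {
  const issues: string[] = [];
  if (config.targetedUrls.length === 0) {
    issues.push("No pages targeted");
  }
  const totalAllocation = config.trafficAllocation.reduce((sum, pct) => sum + pct, 0);
  if (totalAllocation !== 100) {
    issues.push(`Traffic allocation sums to ${totalAllocation}%, not 100%`);
  }
  if (config.audiences.length === 0) {
    issues.push("No audience included in the test");
  }
  return issues;
}

const issues = preLaunchIssues({
  targetedUrls: ["https://www.example.com/basket"],
  trafficAllocation: [50, 50],
  audiences: ["All mobile visitors"],
});
console.log(issues.length === 0 ? "Basics look right" : issues.join("; "));
```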

Map out the user journey

You and your team might know your business and its website better than anyone, but being too close to a subject can sometimes leave you with blinkered vision. By the end of the development process, you’ll be so close to your build that you might not be able to view it objectively.

Remember that your website will have many different users and use cases. Sure, you’re hoping that your user will find their way from the product page, to the basket page, to the payment page more easily in your variation. But, have you considered how your change will impact users who want to apply a voucher? Do returning users do something new users don’t? Could your change alienate them in some way? How does your test affect users who are logged in as well as logged out? (Getting that last one wrong caused my team a sleepless night earlier this year!)

Make sure you have thought about the different use cases happening on your website. Ask yourself:

  • Have you considered all devices? If the test is for mobile users, have you considered landscape and portrait?
  • Does your test apply across all geographies? If not, have you excluded the right ones?
  • Have you considered how a returning user’s journey differs from that of a new user?

One of the best ways to catch small errors is to involve colleagues who haven't been as close to the test during the QA process. Ask them to try to identify use cases that you hadn't considered. And if they do manage to find new ones, add these to your QA checklist to make sure future tests are checked for their impact on these users.

Test your goals

No matter how positively your users receive the changes you’ve made in your variation, your A/B test will only be successful if you can report back to your team or client with confidence. It’s important that you add the right goals to your results page, and that they fire as intended.

At Conversion, shortly before we launch a test, we work our way through both the Control and Variation and deliberately trigger each goal we’ve included: pageviews, clicks and custom goals too. We then check that these goals have been captured in two ways:

  1. We use the goals feature in our Conversion.com Optimizely Chrome Extension to see the goal firing in real-time.
  2. A few minutes later, we check to see that the action has been captured against the goal in the testing platform.

This can take a little time (and let’s be honest, it’s not the most interesting task) but it’ll save you a lot of time down the line if you find a goal isn’t firing as intended.
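If you'd rather script the goal-triggering step than click around manually, something like the sketch below can help. It assumes Optimizely Web's push-based event API and uses made-up event names; check your own platform's documentation and use the goals you've actually configured.

```typescript
// QA helper: deliberately fire each custom goal once, then confirm it shows
// up in the testing platform. Event names here are placeholders.

const goalEvents = ["basket_cta_click", "voucher_applied", "order_confirmed"];

function fireGoal(eventName: string): void {
  // Optimizely Web queues events pushed onto window.optimizely.
  const optimizely = ((window as any).optimizely = (window as any).optimizely || []);
  optimizely.push({ type: "event", eventName });
  console.log(`Fired goal "${eventName}" - now check it appears on the results page`);
}

// Run this in both the Control and the Variation.
goalEvents.forEach(fireGoal);
```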

Know your baseline

From the work you've done in preparation, you should know how many people you expect to be included in your experiment, e.g. how many mobile users in Scotland you're likely to get in a two-week period. In the first few minutes and hours after you've launched a test, it's important to make sure that the numbers you're seeing in your testing platform are close to what you'd expect them to be.

(If you don’t have a clear notion of how many users you expect to receive into your test, use your analytics platform to define your audience and review the number of visits over a comparable period. Alternatively, you could use your testing platform to run an A/A test where you do not make any changes in the variation. That way, you can get an idea of the traffic levels for that page).

If you do find that the number of visits to your test is lower than you'd expect, make sure that you have set up the correct traffic allocation in your testing tool. It may also be worth checking that your testing tool snippet is implemented correctly on the page. If you find that the number of visits to your test is higher than you'd expect, make sure you're targeting the right audience and not including any groups you'd planned to exclude. (Handy hint: check you haven't accidentally used the OR option in the audience builder instead of the AND option. It can catch you out!) Also, make sure that you're measuring like-for-like, i.e. comparing unique visits in your analytics tool to unique visitors in your test.
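As a rough illustration of that sanity check, here's a sketch that compares the visitors your test has collected so far against what your analytics data would lead you to expect. The 20% tolerance is arbitrary; pick a margin that suits your traffic.

```typescript
// Baseline sanity check: is observed test traffic roughly in line with
// what analytics says we should expect by now?

interface BaselineCheck {
  expectedVisitorsPerDay: number; // from analytics over a comparable period
  hoursLive: number;              // how long the test has been running
  observedVisitors: number;       // from the testing platform's results page
}

function baselineLooksRight(check: BaselineCheck, tolerance = 0.2): boolean {
  const expectedSoFar = check.expectedVisitorsPerDay * (check.hoursLive / 24);
  const deviation =
    Math.abs(check.observedVisitors - expectedSoFar) / expectedSoFar;
  return deviation <= tolerance;
}

const ok = baselineLooksRight({
  expectedVisitorsPerDay: 2400, // e.g. mobile users in Scotland
  hoursLive: 6,
  observedVisitors: 310,        // we'd expect ~600 by now, so this is flagged
});
console.log(ok ? "Traffic looks as expected" : "Check targeting, allocation and the snippet");
```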

Keep your team informed

At Conversion, our Designers and Developers are involved in the QA process and so they know when a test is about to launch. (We’ve recently added a screen above our bank of desks showing the live test results. That way everyone can celebrate [or commiserate] the fruits of their labour!) When the test has been live for a few minutes, and we’re happy that goals are firing, we let our client know and ask them to keep an eye on it too.

Check the test regularly

So the test is live. Having a test live on a site (especially when you’re managing that for a client) is a big responsibility. Provided you’ve taken all the right steps earlier in the process, you should have nothing to worry about, but you should take precautions nonetheless.

Once you’ve pressed the Play button, go over to the live site and make sure you can see the test. Try and get bucketed into both the Control and Variation to sense check that the test is now visible to real users.

At Conversion, there’ll be someone monitoring the test results, refreshing every few minutes, for the first couple of hours the test is live. We’ll check in on the test every day that it runs. That person also checks that there’s at least one hit against each goal and that the traffic level is as expected.

A couple of hours into the running of a test, we’ll make sure that any segments we have set up (e.g. Android users, logged in users, users who interacted with our new element) are firing. You don’t want to run a test for a fortnight and then find that you can’t report back on key goals and segments.

(Tip: if you’re integrating analytics tools into your test make sure they’re switched on and check inside of those tools soon after the test launches to make sure you have heatmap, clickmap or session recording data coming through).

Make sure you have a way to pause the test if you spot anything amiss, and we’d recommend not launching on a Friday, unless someone can check the results over the weekend.

Finally, don’t be afraid to pause

After all the buildup and excitement of launching, it can feel pretty depressing having to press the pause button if you suspect something isn’t quite right. Maybe a goal isn’t firing or you’ve forgotten to add a segment that would come in very handy when it’s time to report on the results. Don’t be afraid to pause the test. In most cases, it will be worth a small amount of disruption at the start, to have trustworthy numbers at the other end. Hopefully, you’ll spot these issues early on. When this happens, we prefer to reset the results to ensure they’re as accurate as they can be.

Conclusion

Launching an A/B test can be a real thrill. You finally get to know whether that ear-worm of an idea for an improvement will actually work. In the few hours either side of that launch, make sure you’ve done what you need to do to preserve confidence in the results to come and to keep your team and client happy:

  • Get the basics right: it’s easy to make a small error in the Settings. Double check these.
  • Map out the user journey: know how users are likely to be impacted by your changes.
  • Test your goals: make sure you’ve seen some data against each goal from your QA work.
  • Know your baseline: check the initial results against traffic levels in your analytics tools.
  • Keep your team informed: don’t hog all the fun, and let others validate the results with you.
  • Check regularly: don’t go back to a lit firework; do go back to a live test…regularly.
  • Don’t be afraid to pause: pause your test if needed. It needs to be the best version it can be.

Introducing our hypothesis framework

Download printable versions of our hypothesis framework here.

Experiments are the building blocks of optimisation programmes. Each experiment will at minimum teach us more about the audience – what makes them more or less likely to convert – and will often drive a significant uplift on key metrics.

At the heart of each experiment is the hypothesis – the statement that the experiment is built around.

But hypotheses can range in quality. In fact, many wouldn’t even qualify as a hypothesis: eg “What if we removed the registration step from checkout”. That might be fine to get an idea across, but it’s going to underperform as a test hypothesis.

For us, an effective hypothesis is made up of eight key components. If it’s reduced to just one component showing what you’ll change (the “test concept”), you’ll not just weaken the potential impact of the test – you’ll undermine the entire testing programme.

That’s why we created our hypothesis framework. Based on almost 10 years’ experience in optimisation and testing, we’ve created a simple framework that’s applicable to any industry.

Conversion.com’s hypothesis framework

[Diagram: the Conversion.com hypothesis framework]

What makes this framework effective?

It’s a simple framework – but there are three factors that make it so effective.

  1. Putting data first. Quantitative and qualitative data is literally the first element in the framework. It focuses the optimiser on understanding why visitors aren’t converting, rather than brainstorming solutions and hoping there’ll be a problem to match.
  2. Separating lever and concept. This distinction is relatively rare – but for us, it’s crucial. A lever is the core theme for a test (eg “emphasising urgency”), whereas the concept is the application of that lever to a specific area (eg “showing the number of available rooms on the hotel page”). It’s important to make the distinction as it affects what happens after a test completes. If a test wins, you can apply the same lever to other areas, as well as testing bolder creative on the original area. If it loses, then it’s important to question whether the lever or the concept was at fault – ie did you run a lousy test, or were users just not affected by the lever after all?
  3. Validating success criteria upfront: The KPI and duration elements are crucial factors in any test, and are often the most overlooked. Many experiments fail by optimising for a KPI that’s not a priority – eg increasing add-to-baskets without increasing sales. Likewise the duration should not be an afterthought, but instead the result of statistical analysis on the current conversion rate, volume of traffic, and the minimum detectable uplift. All too often, a team will define, build and start an experiment, before realising that its likely duration will be several months.

Terminology

Quant and qual data

What’s the data and insight that supports the test? This can come from a huge number of sources, like web analytics, sales data, form analysis, session replay, heatmapping, onsite surveys, offsite surveys, focus groups and usability tests. Eg “We know that 96% of visitors to the property results page don’t contact an agent. In usability tests, all users wanted to see the results on a map, rather than just as a list.”

Lever

What’s the core theme of the test, if distilled down to a simple phrase? Each lever can have multiple implementations or test concepts, so it’s important to distinguish between the lever and the concept. Eg a lever might be “emphasising urgency” or “simplifying the form”.

Audience

What’s the audience or segment that will be included in the test? Like with the area, make sure the audience has sufficient potential and traffic to merit being tested. Eg an audience may be “all visitors” or “returning visitors” or “desktop visitors”.

Goal

What’s the goal for the test? It’s important to prioritise the goals, as this will affect the KPIs. Eg the goal may be “increase orders” or “increase profit” or “increase new accounts”.

Test concept

What’s the implementation of the lever? This shows how you’re applying the lever in this test. Eg “adding a map of the local area that integrates with the search filters”.

Area

What’s the flow, page or element that the test is focused on? You’ll need to make sure there’s sufficient potential in the area (ie that an increase will have a meaningful impact) as well as sufficient traffic too (ie that the test can be completed within a reasonable duration – see below). Eg the area may be “the header”, “the application form” or “the search results page”.

KPI

The KPI defines how we’ll measure the goal. Eg the KPI could be “the number of successful applications” or “the average profit per order”.

Duration

Finally, the duration is how long you expect the test to run. It’s important to calculate this in advance – then stick to it. Eg the duration may be “2 weeks”.
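To pull the eight components together, here's a minimal sketch of a hypothesis captured as a structured record, along with a back-of-the-envelope duration estimate. The field names, example values and the rule-of-thumb formula (roughly 80% power at 95% confidence) are ours for illustration; use a proper sample-size calculator when planning a real test.

```typescript
// A hypothesis captured as a structured record, plus a rough duration estimate.

interface Hypothesis {
  quantAndQualData: string;
  lever: string;
  audience: string;
  goal: string;
  testConcept: string;
  area: string;
  kpi: string;
  durationWeeks: number;
}

function estimateDurationWeeks(
  baselineConversionRate: number, // e.g. 0.04 for 4%
  relativeUplift: number,         // minimum detectable uplift, e.g. 0.1 for +10%
  visitorsPerWeek: number,        // traffic to the tested area and audience
  variants = 2                    // Control plus one Variation
): number {
  const p = baselineConversionRate;
  const delta = p * relativeUplift; // absolute difference we want to detect
  // Rule of thumb: ~16 * p(1-p) / delta^2 visitors per variant.
  const perVariant = (16 * p * (1 - p)) / (delta * delta);
  return Math.ceil((perVariant * variants) / visitorsPerWeek);
}

// Illustrative example, loosely based on the property-search example above.
const hypothesis: Hypothesis = {
  quantAndQualData:
    "96% of visitors to the property results page don't contact an agent; usability tests showed users wanted to see results on a map",
  lever: "Reducing the effort needed to assess results",
  audience: "All visitors",
  goal: "Increase agent contacts",
  testConcept: "Adding a map of the local area that integrates with the search filters",
  area: "The search results page",
  kpi: "Number of agent contact requests",
  durationWeeks: estimateDurationWeeks(0.04, 0.1, 30000),
};

console.log(`Planned duration: ${hypothesis.durationWeeks} week(s)`);
```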

Taking this further

This hypothesis framework isn’t limited to A/B tests on your website – it can apply anywhere: to your advertising creative and channels, even to your SEO, product and pricing strategy.
Any change and any experience can be optimised – and to do that effectively requires a data-driven and controlled framework like this.

Don’t forget – you can download printable versions of the hypothesis framework here.

CRO is like poker

Conversion rate optimisation (CRO) and poker have a lot of similarities, and it’s more than just the opportunity to either make or lose a lot of money.

 

Anyone can play

Anyone can take a seat at a poker table and play a few hands. The game is relatively easy to pick up and there really isn’t any prerequisite knowledge needed apart from knowing how a deck of cards works.

The same can be said of CRO. There are plenty of tools out there that will allow you to start doing the basics of CRO in a couple of hours. Your free Google Analytics account can give you a pretty good understanding of where people are abandoning your site. Sign up for an Optimizely account and you can start running your first A/B tests as soon as you add the code to your pages.

The problem is, because it's so easy to start doing something that feels like CRO, many companies think they're doing CRO already, so they don't seek help to do it better. After all, everyone starts playing with the assumption that they will win. But only the players willing to invest adequate time and even money into getting better will make consistent returns in the long run. That might mean reading up on the theory, looking at what others have done to be successful, or even getting professional help.

Anyone can win the odd hand

The reason people get addicted to poker is that from time to time they probably will win a big hand and make some money. The problem is that over the long run the relatively infrequent big wins will be cancelled out by the all-too-frequent losses.

The same is true of CRO. Anyone can run a test, and it's within the realms of possibility that you might just get a winner too, maybe even a big one at that. We know from experience that small changes to sites can have a big impact, so you certainly can stumble upon these impactful changes.

If you want to be making a sustained impact on your conversion rate over time though, you’ll need a CRO strategy in place that can deliver these big wins on a regular basis.

Over time, a data-driven strategy will deliver better results

In poker, beginner's luck will run out. It doesn't matter too much what happens hand to hand; it matters what happens over the long run – over hundreds of hands. A successful poker player adopts strategies that give them statistically better odds of winning. Over time, this statistical advantage is what means they are still there at the final table, with the biggest stack of chips. They may make a few big plays here and there, but the majority of play is about being smart and using the data available to make good decisions consistently.

In CRO each split-test we run is like a hand of poker for the poker player. Being successful at CRO is not necessarily about getting a big uplift in one test, nor is it about being successful with every test you run. Being successful at CRO is about using the data you have available to you to devise testing strategies that deliver continuous improvement over time. There may be the odd test along the road that does deliver a 20, 30, 40% uplift in conversion rate.

The mark of a good CRO professional, however, is not getting that 40% winner, it’s what they do after that 40% winner to iterate on it and go further. It’s how they learn and adapt when a test doesn’t deliver an uplift to turn the data from that losing hand into a winning hand next time.

Finally, you play your opponent, not the cards

This is a well-known mantra in poker, and it stems from the fact that you have little control over what cards you're dealt, so you can't rely on good cards to win hands. Instead, by gathering data on your opponent – such as how they play hands they win and how they play hands they lose – you can devise strategies to beat them no matter what hand you've been dealt.

This is true in CRO, although I wouldn’t suggest that you think of your potential customers as your opponents necessarily.

You might not have much control over the hand you’re dealt in terms of the product you’re selling or the service you’re offering. What you can control is how you use what you’ve been dealt, and it’s essential to understand how your visitors think so that you can decide how best to influence them using what you have. Likewise, there is only so much that web analytics data can tell you about why visitors are abandoning your checkout. You need to understand the motivations and thought processes of visitors at each stage of your funnel to know how to make them take the action you want.

CRO and poker have the same appeal. The simplicity of the objective – getting people to buy or getting people to fold. The potential for great returns if you're successful. The thrill of getting that big uplift in a test or winning that big hand. Neither is easy, though, and both need a lot of time and effort invested to do well.

There are a lot more unsuccessful poker players than successful ones as a result, and I think the same is probably true in CRO. Hopefully this post has given you a good idea of what can make the difference.

Specialist teams or x-functional pods? A developer’s view

Conversion.com is an agency made up of specialists who look for opportunities to improve clients' ROI through methodical research, testing and learning. We analyse user behaviour and expectations of a website in order to increase engagement levels and, consequently, conversions.

Testing is at the heart of everything we do, so we're always trying to improve and find better ways of doing things. Typically, our company is split into three major 'specialist teams' – consultants, designers and developers.

Consultants: Their role is to perform in-depth research on a client's website and gather relevant insights about the business. From there, test ideas are generated and wireframes created. They are also the main bridge between our clients and our internal teams.

Designers: They feed into the wireframe stage by collaborating on ideas for how to implement the test concept. Once this stage is approved, they produce the final design file that is handed over to the developers.

Developers: These geeks have the ability to transform the final design file into code readable by browsers. This is the final stage of the test creation flow.

After this internal process, the test is run with a live audience through an A/B testing platform; at the end, consultants analyse the final results and make recommendations for the client's site.

Here is how the teams typically interact within the company:

Developers come in at the very end of the process. After designers have completed the final file, they assign it to one of the developers available at that moment. This is great from a developer's standpoint, because they have the opportunity to work on many different clients and retain a good working knowledge across all of them. The downside, however, is that work overload can be an issue: different consultants have different deadlines to deliver tests, so at times congestion becomes unavoidable. Sometimes many tests reach the development team simultaneously, and it is difficult to manage requests so that each test is delivered at the desired time.

Because of these issues, we had the idea of taking a member of each team and having them work more closely together. We have created a cross-functional team, a.k.a. a pod.

What exactly is a pod?

A pod is like a small startup inside the company. Instead of organising your business into separate functional departments, you create teams that contain a member of each function. Here's how we've structured this within our company:

[Diagram: our pod structure]

Clear goals and collaboration

With the team working collectively on the same clients, it's much easier to sync up schedules. Since we always have a prioritised list of tasks, the team works towards those goals in order. For example, if a developer needs designer approval for a certain test, the designer will stop whatever they are doing to evaluate the developer's work, because that is the current priority for the whole team.

Tidy schedule

Because there are clear goals, the project manager is able to build a clear schedule for everyone in the team. This helps developers know what work is coming to their stack, so they can manage their time alongside their other tasks. It also gives them the freedom to arrange their projects however they prefer, as long as they deliver their work by the expected deadlines.

Earlier technical evaluation

We have introduced a new format for the test idea/concept phase. Before the pod, the developer had little input at this stage. The developer is now an active member of the conceptual phase, bringing valuable know-how on potential implementation issues. Sometimes even a slightly different approach can save many hours of development and help the team deliver a test faster (for example, implementing native placeholders can cause cross-browser compatibility problems, so the developer might ask at this stage: 'Is this really required for the test? Will it make a significant difference to conversions?'). Being familiar with the test from the very beginning also gives the developer time to research and prepare the coding practices that will be required to implement it (e.g. getting familiar with new frameworks).
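To make the placeholder example concrete, here's the sort of check a developer might raise at the concept stage: a small, illustrative feature-detection fallback for browsers without native placeholder support. Selectors and behaviour are hypothetical.

```typescript
// Illustrative fallback: if the browser doesn't support the native
// placeholder attribute, mimic it with the input's value.

const supportsPlaceholder = "placeholder" in document.createElement("input");

if (!supportsPlaceholder) {
  document.querySelectorAll<HTMLInputElement>("input[placeholder]").forEach((input) => {
    const hint = input.getAttribute("placeholder") || "";
    if (input.value === "") input.value = hint;   // show the hint as a value
    input.addEventListener("focus", () => {
      if (input.value === hint) input.value = ""; // clear it on focus
    });
    input.addEventListener("blur", () => {
      if (input.value === "") input.value = hint; // restore it when left empty
    });
  });
}
```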

Faster test development

Since the developer has a clear pipeline, they can start to develop the test before they actually receive the final design file from the designer. How is this possible? Well, before the designers start work on the final Photoshop file, there is a wireframe stage. As soon as we get approval from the client on the wireframe, the developer can start work at the same time as the designer prepares the final file. This is possible because the wireframe gives a clear indication of what the test is all about. With this visual information, the developer is able to build a big chunk of the HTML, CSS and JavaScript. Remember that from the test idea phase, the developer already knows what functionality and goals the test is supposed to deliver. This allows the developer to finish around 70-80% of their work even before the designer delivers the file. With the final file, the developer just needs to make some tweaks to the code (e.g. spacing, colours, etc.). So far, this new process has allowed us to deliver tests 35% faster than before.

Quick decision-making

Because the members are physically around each other, as opposed to working in silos, it is easier to take a minute to discuss something on the spot. Interrupting one of your team also doesn't feel so intrusive: if you need something to finish the pod's priority task, they are more open to being interrupted in order to collectively help meet the team's goals.

Flexibility

Because the pod is like a small startup within the company, it allows the team to change processes and try new ways of working. This can be very useful in finding more efficient ways of working which we can then share with the other pods.

Results

As optimisers, we see a testing culture as a vital part of how we work. This means we also need to measure everything and be able to critically evaluate how things are doing. Here are the results we have observed so far by moving from a specialist-teams to a cross-functional pod approach:

  • 35% faster test delivery time from start to finish – by developing test ideas in parallel, as opposed to serially, we have seen a significant reduction in the total time elapsed from the inception of a test to the final launch.
  • 28% reduction in actual developer build time – by integrating the developer more closely in the design and consulting phases, devs have a much better idea of how to go about building the test at the point they start working on it, meaning the build time is dramatically reduced.
  • 66% reduction in bugs reported during QA – developers are able to build tests more intelligently by anticipating issues and feeding into the test development earlier on to avoid prospective clashes.
  • Happier team members – although there are a few downsides to working in the pod, such as less variety in the sites we get to work on, the individual members of the pod are generally much happier with this new approach, because they are working as a team throughout the whole process. This means fewer internal conflicts and more efficient workflows.
  • More time to work on other projects – because we have increased efficiency across the board, pod members have more time to spend working on other tasks, such as internal assignments and creative projects. The introduction of a project manager also means that consultants spend more time doing valuable conversion-related work and less admin, which is likely to be correlated with the uplift in team happiness!

While it is still early days for the pod, the initial results and general consensus are a positive indication. As a developer, there are far fewer conflicts and less back-and-forth between the design and consulting teams, and we have become much more connected to the conversion aspect of what we do. The developer becomes more of an expert on a smaller number of clients' sites (as opposed to a generalist working across the whole spectrum). There are small downsides – for example, if a pod developer is needed to work on a different client's site, they may initially be less familiar with its technical setup – but the surplus time the developer gains from working in the pod can be used for more internal sharing and learning, which may be more valuable in the long term. The developer also has to adapt to many more meetings than they are typically used to (!), but the benefits of being more involved in the project overall make it worth our while.

Do you have anything to add? Questions or comments? Let us know in the comments below!