Managed Service Sucks



Software and Services Don’t Mix

Why you shouldn’t buy services from your testing platform.

Split-testing software vendors have traditionally relied on their managed service to win and retain clients.

From Maxymiser to Adobe, Monetate to Qubit, the managed service has been essential to their growth. Even today, most companies cite a lack of resource as the biggest barrier in their optimisation program – and a managed service can help overcome that.

Except most managed services suck.

For software vendors, a managed service can throttle their growth and limit their potential. And for their customers, a managed service can lead to substandard results in their conversion optimisation programme.

And as the optimisation and testing industry continues to expand exponentially, this is only going to get worse.

The core of the problem is simple:

Software and service don’t scale at the same rate.

Scale is crucial to the success of software vendors. After all, most testing platforms have taken significant investment: Qubit has taken $75M, Monetate $46M, and Maxymiser was acquired by Oracle in August 2015.

But it’s challenging when these companies offer essentially two products – software and service – that scale at very different rates.

With limited cost of sales, a fast-growth software vendor may expect to increase its sales 3–5x in a year.

Look at the rise of Optimizely. Their product’s ease-of-use and their partner program allowed them to focus on the software, not a managed service. And that meant they could grow their market share rapidly:

 

[Chart: growth of the testing platform market, 2012–2015]

Between 2012 and 2015, they grew 8x.

Now compare that growth to a marketing services agency. Even a fast-growth mid-size agency may only grow 50% a year – or to put it another way, 1.5x.

If you combine software and service in one company, you’re creating a business that grows at two very different rates. And this creates a challenge for testing platforms that offer a managed service.

They have three options:

  1. Move away from managed service to self-serve and partner-led growth.
  2. Attempt to scale managed service to keep up with software growth.
  3. Some combination of 1 and 2.

Most will choose option 2 or 3, rather than going all-out on 1. And this choice threatens the quality of their managed service and their ability to scale through partners.

The cost of scaling services

To enable scaling – and to minimise costs – software vendors have to exploit efficiencies at the expense of quality:

  1. They strip back the service to the absolute minimum. They typically cut out the quantitative and qualitative analysis that supports good testing.
  2. They rely on cookie-cutter testing. Instead of creating a bespoke testing strategy for each client, they replicate the same test across multiple websites, regardless of whether it’s the right test to run.
  3. They load account managers with 10–20 clients – meaning the service is focused on doing the minimum necessary to limit churn.

In short, to keep up with the growth of the platform, they inevitably have to sacrifice the quality of the managed service in the interest of making it scale.

Let’s look at each of these three points in turn.

#1 Stripped-back service

At its core, conversion optimisation is simple:

Find out why people aren’t converting, then fix it.

The problem is that the first part – finding out why they aren’t converting – is actually pretty hard.

Earlier this year, I shared our take on Maslow’s hierarchy of needs – our “hierarchy of testing”:

[Figure: Conversion.com hierarchy of testing]

The principle is the same as Maslow’s – the layers at the bottom of the pyramid are fundamental.

Starting at the top, there’s no point testing without a strategy. You can’t have a strategy without insight and data to support it. And you can’t get that without defining the goals and KPIs for the project.

In other words, you start at the bottom and work your way up. You don’t jump straight in with testing and hope to get good results.

In particular, the layers in the middle – data and insight – are essential for success. They link the testing program’s goals to the tests. Without them, you’re just guessing.

But all of this comes at a cost – and it’s typically the first cost that managed services cut. Instead of using a similar model to the pyramid above, they jump straight to the top and start testing, without the data and insight to show where and what they should be testing.

Ask them where they get their ideas from, and they’ll probably say heuristics – a nicer way of saying “best practice”.

#2 Cookie-cutter testing

Creating tests that aren’t based on data and insight is just the start.

To maximise efficiency (again, at the expense of quality), managed services will typically use similar tests across multiple clients. After all, why build a unique test for one client when you can roll it out across 10 websites with only minimal changes?

Break down the fees that managed services charge, and it’s easy to see why they have to do this.

Let’s assume Vendor X is charging £3k to deliver 2 tests per month. If we allow £1k/day as a standard managed service rate, that buys three days of work a month, or roughly 24 hours – which works out at 12 hours per test.
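
As a back-of-the-envelope check, here’s that arithmetic in a short Python sketch. The eight-hour working day is my assumption, not the vendor’s, but it’s how the 24-hour figure falls out:

```python
# Back-of-the-envelope: delivery hours per test under a typical managed service fee.
# Assumption (mine, not the vendor's): an 8-hour working day.
monthly_fee = 3000        # £3k per month
day_rate = 1000           # £1k standard managed-service day rate
tests_per_month = 2
hours_per_day = 8         # assumed working day

days_bought = monthly_fee / day_rate              # 3 days of service per month
hours_bought = days_bought * hours_per_day        # 24 hours
hours_per_test = hours_bought / tests_per_month   # 12 hours per test

print(f"{hours_per_test:.0f} hours per test")
```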

At Conversion.com, we know that just building an effective test can take longer than 12 hours – and that’s before you add in time for strategy, design, QA and project management.

The cookie-cutter approach is problematic for two core reasons:

  1. They start with the solution, and then find a problem for it to fix. It’s clear that this is going to deliver average results at best. (Imagine if a doctor or mechanic took a similar approach.)
  2. It limits the type of tests to those that can easily be applied across multiple websites. In other words, the concepts aren’t integrated into the website experience, but simply pasted onto the UI. That’s why these tests typically add popups, modify the calls-to-action and tweak page elements.

#3 Account manager loading

This focus on efficiency means that account managers each work across 10–20 clients. Even assuming they’re working at 80% utilisation, that means each client gets between 1.5 and 3 hours of their time per week.
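
A rough sketch of that maths, assuming a 40-hour working week (my assumption; the 80% utilisation figure is from the paragraph above):

```python
# Rough estimate of the weekly attention each client gets from an account manager.
# Assumption (mine): a 40-hour working week.
weekly_hours = 40
utilisation = 0.8                                  # 80% of time spent on client work
client_facing_hours = weekly_hours * utilisation   # 32 hours per week

for clients in (10, 20):
    hours_per_client = client_facing_hours / clients
    print(f"{clients} clients -> {hours_per_client:.1f} hours per client per week")
# 10 clients -> 3.2 hours; 20 clients -> 1.6 hours per client per week
```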

Is that a problem?

At Conversion.com, our consultants manage 3–5 clients in total. We feel that limit is essential to deliver an effective strategy for optimisation.

Ultimately, it reflects our belief that conversion optimisation can and should be integral to how a company operates and markets itself – and that takes time.

Conversion optimisation should let you answer questions about your commercial, product and marketing strategy:

  • How should we price our product to maximise lifetime value?
  • How do we identify different user segments that let us personalise the experience?
  • Which marketing messages are most impactful – both on our website and in our online and offline advertising?

Not “Which colour button might work best?”

Conversion optimisation isn’t a series of tactical cookie-cutter tests that can be churned out for your website, while 19 other clients compete for your AM’s attention.

The impact on test results

It’s not surprising that a managed service with a “one-size-fits-most” approach for its clients doesn’t perform as well as a testing strategy from a dedicated optimisation agency.

The difference in approach is reflected in results (and, of course, the cost of the service).

But some managed services are misleading their clients over the success of their testing program.

There are three warning signs that the value of a managed service is being overreported:

  1. Weak KPIs: A KPI should be as closely linked as possible to revenue. For example, you may want to see whether a new product page design increases sales. But many managed services will track – and claim credit for – other KPIs, like increasing “add to cart”. While it may be interesting to track, it doesn’t indicate the success of a test. No business made more money just by getting more visitors to add to cart.
  2. Too many KPIs: There’s a reason why managed services often track these weak KPIs alongside effective KPIs, like visit to purchase or qualified lead. The more KPIs you track – bounce rate, add to cart, step 1 of checkout – the more likely you are to see something significant in the results. At 95% significance, there’s a 1 in 20 chance of getting a false positive on any single comparison. So if you’re testing 4 variations against the control and measuring 5 KPIs for each, the chances are you’ll get a “positive” result on at least one KPI even when there’s no real effect (see the sketch after this list).
  3. Statistical significance: The industry’s approach to statistical significance has matured. People are less focused on just hitting a p value of 0.05 or less (ie 95% significance). Instead, strategists and platforms are also factoring in the volume of visitors, the number of conversions, and the overall test duration. And yet somehow we still hear about companies using a managed service for their testing, where the only result in the last 12 months is a modest uplift at 75% significance.
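
Here’s the maths behind point 2, as a minimal Python sketch. It treats each KPI on each variation as an independent comparison at 95% significance – a simplification, but it shows the scale of the problem:

```python
# Probability of at least one false positive when many comparisons are run,
# each at 95% significance (a 5% false-positive rate per comparison).
# Simplifying assumption: the comparisons are independent.
alpha = 0.05
variations = 4    # variations tested against the control
kpis = 5          # KPIs measured per variation

comparisons = variations * kpis                         # 20 comparisons in total
p_any_false_positive = 1 - (1 - alpha) ** comparisons

print(f"{comparisons} comparisons -> "
      f"{p_any_false_positive:.0%} chance of at least one false positive")
# 20 comparisons -> 64% chance of at least one false positive
```

In other words, with 20 comparisons you’re more likely than not to find a “winning” KPI somewhere, even if nothing has actually changed.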

The role of managed service

Managed service has a place. It can be essential for expanding into a new market – especially where the product’s learning curve is steep and may limit its appeal to a self-serve audience.

But the focus should always be on the quality of the service. Vendors can subsidise the cost of their service if needed – whether through funding or the higher profit margin in software – to deliver an effective optimisation program.

Then, their growth should come through self-service and partners. As above, service and software scale at different rates – and the faster a software vendor champions self-service and a partner program, the faster they’ll grow.

 

Disclaimer: I’m the CEO of Conversion.com, an agency that specialises in conversion optimisation. We partner with many of the software vendors above. While we have a vested interest in companies choosing us over managed service, we have an even greater interest in making sure they’re testing effectively.

May 31st, 2016

About the Author

 

Stephen Pavlovich

Stephen is the CEO of Conversion.com. He manages the agency’s strategy and growth, while regularly meddling in other people’s work.

Comments (2)

  •  

    By: Paul Rouke, 20 July 2016 at 1:30pm

    Wow, thank you for such an honest, frank and "disruptive" article, Stephen. I applaud your stance on this (and, for what it's worth, wholeheartedly agree).

    I'd like to add an extremely important result and impact of the "software & services" model - truth be told, the services side is so far removed from delivering truly intelligent, strategic, customer-insight-driven conversion optimisation that it is giving our industry a bad name. We have spoken to businesses that, a year into having the double whammy of tools & services, reach the conclusion that "A/B testing doesn't really deliver any value for them, so it's probably something they shouldn't continue investing in."

    What a crying shame. But the fact is, it's the reality.

    Have you and your team any plans of tackling this and breaking the monopoly?!


  •  

    By: Duncan Heath, 7 August 2017 at 3:35pm

    Couldn't agree more Stephen.

    I've lost count of the times I've spoken to companies unsatisfied with managed service solutions, and had to explain that those services often simply don't have the knowledge, skills, time or inclination to 'get under the skin' of your business and customers.

    Without really embedding yourself in an organisation to understand what users really want and need (and where this contrasts with what is currently offered), you're never going to achieve anything more than small gains. As Paul says, it's not going to be possible to deliver truly intelligent CRO.

    And don't get me started on false positives - a major reason why many of these managed services (and some agencies I should add) get away with crap testing for so long...until the client realises their actual sales haven't increased whatsoever since the "optimisation" work began!



 
 

