Tools & Platforms Archives |

Migrating to Optimizely X Experiments from Optimizely Classic

With the release of Optimizely X and the new Optimizely X Experiments, many Optimizely users are wondering how they can migrate to the new platform and what it means for their tests in the old platform (Optimizely Classic). As an Optimizely three-star partner, we get access to Optimizely’s newest products before release, including Optimizely X Experiments, which means we can prepare ourselves and our clients for big product updates like this. Armed with this knowledge, I’d like to guide you through the process of migration and answer a few common questions.

What’s changed and why should I migrate?

The new platform delivers some great improvements as well as some brand new features which will hopefully enhance your testing. Here are some key features and changes that Optimizely are bringing to the table in Optimizely X Experiments:

  • A new visual editor that loads faster and provides the ability to view websites in different responsive states;
  • An updated code editor (gone is the old code engine) that has a separate section for variation CSS, new utility functions and the ability to control when variation code is loaded onto a page;
  • Two new features: Pages (a templating system that combines URL targeting and Activation modes) and Events (an updated version of Goals);
  • A new Results page with an updated stats engine (each Metric now shows “confidence intervals” rather than “difference intervals”, and the ordering of the Metrics affects how quickly they will reach significance).

A burning question for some is: why should I migrate over to Optimizely X Experiments? While there isn’t one golden reason why you should, there are a few you might want to consider:

  • Code editor improvements. Visual editor changes no longer generate code, and deciding when code should execute is down to the developer rather than Optimizely, so code can be cleaner, easier to read, and potentially easier to implement within the new platform.

How do I migrate to Optimizely X Experiments?

Before you start the migration, you first need to evaluate the status of your tests within Optimizely Classic:

  • Do you have tests that are still running and haven’t finished?
  • Do you have any winners that are running which you haven’t implemented fully onto your site yet?

The answers to these questions are crucial, as there are a few features from Optimizely Classic that cannot be carried over automatically to Optimizely X Experiments:

  • Experiment code
  • Experiment results
  • Goals

Thankfully, Optimizely gives us the option to run both Optimizely Classic and Optimizely X at the same time. This means that if you answered yes to either of the above questions, you can combine them in one snippet (a soft transition) without worrying about experiments being paused or disabled.

Optimizely Snippet
This screenshot from the settings panel in Optimizely X shows Optimizely X being enabled, as well as both Optimizely X and Optimizely Classic set to load in the same snippet.

Enabling both at the same time, however, does come at a cost: an additional 50KB is added to your snippet when using this option. This could affect the speed at which Optimizely loads your tests – and impact performance. You will therefore need to weigh the advantages of this soft transition against the slight disadvantage of the additional snippet size.

If you don’t have any active experiments or winners, then you can enable Optimizely X and simply select the “Use only Optimizely X” option in the Snippet Configuration (a hard transition). This will mean that all of your Optimizely Classic experiments will be disabled (but still accessible).

What if I want to move my existing Optimizely Classic experiments to Optimizely X Experiments?

Unfortunately, there is no automatic way of moving your experiments from the old platform to the new one, so you will need to do this manually. First, however, you will need to make a note of the following for each of your Optimizely Classic experiments in order to move them over to Optimizely X Experiments:

  1. Experiment Name (if you want to keep the same name)
  2. URL Targeting
  3. Activation Mode
  4. Audiences used
  5. Goals

When creating a new experiment in the new platform, Optimizely requires the following information:

  1. Experiment Name
  2. Page(s)
  3. Audience(s)
  4. Metric(s)
  5. Variation names and distributions (optional)

The Experiment Name can simply be carried over from your old experiment.

Pages is one of the new features in Optimizely X Experiments (as stated above) which combines URL targeting and the Activation mode type. Both URL targeting and the Activation mode function in the same way as in Optimizely Classic, so it should be pretty simple for you to carry them over.

Page creation window
The Page creation window

Audiences are automatically available between Optimizely Classic and Optimizely X Experiments, so all you need to do is add them into the experiment.

Old Audiences
Old Audiences
New Audiences
New Audiences

Metrics are the events you want to use to measure your experiment. Events are an updated version of goals and creating them is similar to before. You may wish to read Optimizely’s Knowledge Base article on them to familiarise yourself with how they work.

Finally, Variation names and distributions can be added/changed depending on what your old test had.

If your test is a multi-page test, then you will need to create separate Pages for each of the Sections in the experiment and apply them to your new experiment. Within the visual editor, you will then have the ability to switch between the Pages that you’ve applied to the experiment and then add the appropriate changes/code in the editor.

Switch Pages in the Editor

At the time of writing, there is no support for multivariate tests or dimensions (so you will not be able to apply advanced segmentation to your tests). We expect this to be added relatively soon.

Finally, with the new platform comes a complete rethink of how you have to code your experiments, which of course presents a challenge in porting them over. (If you are not a technical user and don’t have knowledge of JavaScript and jQuery, then you will need to enlist your nearest friendly front-end developer to help you!)

Optimizely Classic required you to write your code in a specific format so that its code engine could execute it as fast as possible, as well as making use of force parameters and custom functions. In Optimizely X Experiments you no longer have to write your code in this way, but you do have to manage the timing of your own code, as all variation code is executed immediately, potentially before the page has loaded. Don’t be alarmed though – Optimizely have provided a few useful utility functions to help with executing code at the right time. Converting your code from the old format should be fairly simple. As an example, take this code from an experiment in Optimizely Classic:

// Update the CTA text
$('.cta').html('Click Me!');

And now we can see how that same functionality is done within Optimizely X Experiments:

// Import the utils library
var utils = window.optimizely.get('utils');

// Wait for the element to appear in the DOM
utils.waitForElement('.cta').then(function(element) {
  // Update the CTA text
  element.innerHTML = 'Click Me!';
});

As you can see, I am using the waitForElement utility function in place of the jQuery selector that I had before. This function will wait for the element to appear within the DOM and then resolve the promise, executing your callback. You can repeat this process for each line of code that sat outside of the force parameters.
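If you are curious how a waitForElement-style utility works under the hood, it is essentially a poll wrapped in a promise. Here is a rough, framework-free sketch (illustrative only, not Optimizely’s implementation; the waitFor name and its parameters are our own):

```javascript
// Poll until check() returns a truthy value, then resolve with that value.
// (Illustrative sketch only - not Optimizely's actual implementation.)
function waitFor(check, intervalMs) {
  return new Promise(function (resolve) {
    var timer = setInterval(function () {
      var result = check();
      if (result) {
        clearInterval(timer);
        resolve(result);
      }
    }, intervalMs);
  });
}

// In the browser, check might be:
// function () { return document.querySelector('.cta'); }
```

The real utility also handles elements that match more than once and cleans up after itself, but the poll-then-resolve shape is the core idea.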

If you want to access the jQuery that is bundled in your snippet, then you can import it in a similar way to the utility functions:

var $ = window.optimizely.get('jquery');

Lastly, if you had any CSS that was injected via JavaScript/jQuery within your experiment, you can now separate it into the new Variation CSS, which will make it much easier to create and edit your styling.

Variation CSS in Editor

And that’s it! You should now be ready to publish your experiment to the world.

If you are using our free Optimizely Chrome Extension, then you will be happy to hear that we have updated it so that it is fully compatible with Optimizely X Experiments.

Do you have any suggestions or tips/tricks for migrating from Optimizely Classic to Optimizely X Experiments? Please share by leaving a comment below.

Managed Service Sucks

Software and Services Don’t Mix

Why you shouldn’t buy services from your testing platform.

Split-testing software vendors have traditionally relied on their managed service to win and retain clients.

From Maxymiser to Adobe, Monetate to Qubit, the managed service has been essential to their growth. Even today, most companies cite a lack of resource as the biggest barrier in their optimisation program – and a managed service can help overcome that.

Except most managed services suck.

For software vendors, a managed service can throttle their growth and limit their potential. And for their customers, a managed service can lead to substandard results in their conversion optimisation programme.

And as the optimisation and testing industry continues to expand exponentially, this is only going to get worse.

The core of the problem is simple:

Software and service don’t scale at the same rate.

Scale is crucial to the success of software vendors. After all, most testing platforms have taken significant investment: Qubit has taken $75M, Monetate $46M, and Maxymiser was acquired by Oracle in August 2015.

But it’s challenging when these companies offer essentially two products – software and service – that scale at very different rates.

With limited cost of sales, a fast-growth software vendor may expect to increase its sales 3–5x in a year.

Look at the rise of Optimizely. Their product’s ease-of-use and their partner program allowed them to focus on the software, not a managed service. And that meant they could grow their market share rapidly:



Between 2012 and 2015, they’ve grown 8x.

Now compare that growth to a marketing services agency. Even a fast-growth mid-size agency may only grow 50% a year – or to put it another way, 1.5x.

If you combine software and service in one company, you’re creating a business that is growing at two very different rates. And this creates a challenge for testing platforms who offer a managed service.
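To see how quickly those two growth rates diverge, compound them over a few years (illustrative figures only):

```javascript
// Revenue multiple after compounding a yearly growth rate (illustrative)
function grow(ratePerYear, years) {
  return Math.pow(ratePerYear, years);
}

console.log(grow(3, 3));   // 27    - a 3x/year software business after 3 years
console.log(grow(1.5, 3)); // 3.375 - a 1.5x/year service business after 3 years
```

Within three years, the software side is roughly eight times the size of the service side, relative to where they started.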

They have three options:

  1. Move away from managed service to self-serve and partner-led growth.
  2. Attempt to scale managed service to keep up with software growth.
  3. Some combination of 1 and 2.

Most will choose option 2 or 3, rather than going all-out on 1. And this choice threatens the quality of their managed service and their ability to scale through partners.

The cost of scaling services

To enable scaling – and to minimise costs – software vendors have to exploit efficiencies at the expense of quality:

  1. They strip back the service to the absolute minimum. They typically cut out the quantitative and qualitative analysis that supports good testing.
  2. They rely on cookie-cutter testing. Instead of creating a bespoke testing strategy for each client, they replicate the same test across multiple websites, regardless of whether it’s the right test to run.
  3. They load account managers with 10–20 clients – meaning the service is focused on doing the minimum necessary to limit churn.

In short, to keep up with the growth of the platform, they inevitably have to sacrifice the quality of the managed service in the interest of making it scale.

Let’s look at each of these three points in turn.

#1 Stripped-back service

At its core, conversion optimisation is simple:

Find out why people aren’t converting, then fix it.

The problem is that the first part – finding out why they aren’t converting – is actually pretty hard.

Earlier this year, I shared our take on Maslow’s hierarchy of needs – our “hierarchy of testing”:

The principle is the same as Maslow’s – the layers at the bottom of the pyramid are fundamental.

Starting at the top, there’s no point testing without a strategy. You can’t have a strategy without insight and data to support it. And you can’t get that without defining the goals and KPIs for the project.

In other words, you start at the bottom and work your way up. You don’t jump straight in with testing and hope to get good results.

In particular, the layers in the middle – data and insight – are essential for success. They link the testing program’s goals to the tests. Without them, you’re just guessing.

But all of this comes at a cost – and it’s typically the first cost that managed services cut. Instead of using a similar model to the pyramid above, they jump straight to the top and start testing, without the data and insight to show where and what they should be testing.

Ask them where they get their ideas from, and they’ll probably say heuristics – a nicer way of saying “best practice”.

#2 Cookie-cutter testing

Creating tests that aren’t based on data and insight is just the start.

To maximise efficiency (again, at the expense of quality), managed services will typically use similar tests across multiple clients. After all, why build a unique test for one client when you can roll it out across 10 websites with only minimal changes?

Break down the fees that managed services charge, and it’s easy to see why they have to do this.

Let’s assume Vendor X is charging £3k to deliver 2 tests per month. If we allow £1k/day as a standard managed service rate, that buys 3 days – around 24 working hours, or 12 hours per test.

At, we know that building an effective test alone can take longer than 12 hours – and that’s before you add in time for strategy, design, QA and project management.
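That fee arithmetic can be sketched in a few lines (the £3k fee, £1k day rate and 2 tests per month are the figures from the text; the 8-hour day is our assumption):

```javascript
// Hours per test implied by a typical managed-service fee
var monthlyFee = 3000;   // £3k per month for 2 tests (from the text)
var testsPerMonth = 2;
var dayRate = 1000;      // £1k standard day rate (from the text)
var hoursPerDay = 8;     // assumed working day

var totalHours = (monthlyFee / dayRate) * hoursPerDay; // 3 days = 24 hours
var hoursPerTest = totalHours / testsPerMonth;

console.log(hoursPerTest); // 12
```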

The cookie-cutter approach is problematic for two core reasons:

  1. They start with the solution, and then find a problem for it to fix. It’s clear that this is going to deliver average results at best. (Imagine if a doctor or mechanic took a similar approach.)
  2. It limits the type of tests to those that can be easily applied across multiple websites. In other words, the concepts aren’t integrated into the website experience, but simply pasted onto the UI. That’s why these tests typically add popups, modify calls-to-action and tweak page elements.

#3 Account manager loading

This focus on efficiencies means that account managers have to work across 10–20 clients. Even assuming that account managers are working at 80% utilisation, that means clients are getting between 1.5 and 3 hours of their time each week.
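A quick sketch of that estimate, assuming a 40-hour working week (the 1.5–3 hours above is the same calculation, rounded):

```javascript
// Weekly hours each client gets from a loaded account manager
var weeklyHours = 40;              // assumed standard working week
var billable = weeklyHours * 0.8;  // 80% utilisation = 32 client-facing hours

console.log(billable / 20); // 1.6 hours/week each with 20 clients
console.log(billable / 10); // 3.2 hours/week each with 10 clients
```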

Is that a problem?

At, our consultants manage 3–5 clients in total. We feel that limit is essential to deliver an effective strategy for optimisation.

Ultimately, it reflects our belief that conversion optimisation can and should be integral to how a company operates and markets itself – and that takes time.

Conversion optimisation should let you answer questions about your commercial, product and marketing strategy:

  • How should we price our product to maximise lifetime value?
  • How do we identify different user segments that let us personalise the experience?
  • Which marketing messages are most impactful – both on our website and in our online and offline advertising?

Not “Which colour button might work best?”

Conversion optimisation isn’t a series of tactical cookie-cutter tests that can be churned out for your website, while 19 other clients compete for your AM’s attention.

The impact on test results

It’s not surprising that a managed service with a “one-size-fits-most” approach for its clients doesn’t perform as well as a testing strategy from a dedicated optimisation agency.

The difference in approach is reflected in results (and, of course, the cost of the service).

But some managed services are misleading their clients over the success of their testing program.

There are three warning signs that the value of a managed service is being overreported:

  1. Weak KPIs: A KPI should be as closely linked as possible to revenue. For example, you may want to see whether a new product page design increases sales. But many managed services will track – and claim credit for – other KPIs, like increasing “add to cart”. While it may be interesting to track, it doesn’t indicate the success of a test. No business made more money just by getting more visitors to add to cart.
  2. Too many KPIs: There’s a reason why managed services often track these weak KPIs alongside effective KPIs, like visit to purchase or qualified lead. That’s because the more KPIs you track – bounce rate, add to cart, step 1 of checkout – the more likely you are to see something significant in the results. At 95% significance, there’s a 1 in 20 chance of getting a false positive. So if you’re testing 4 variations against the control and measuring 5 KPIs for each, that’s 20 comparisons – chances are you’re going to get a positive result in one KPI, even when there isn’t one.
  3. Statistical significance: The industry’s approach to statistical significance has matured. People are less focused on just hitting a p value of 0.05 or less (ie 95% significance). Instead, strategists and platforms are also factoring in the volume of visitors, the number of conversions, and the overall test duration. And yet somehow we still hear about companies using a managed service for their testing, where the only result in the last 12 months is a modest uplift at 75% significance.
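The multiple-comparisons point above is easy to check for yourself (this assumes the comparisons are independent, which is a simplification; real KPIs are correlated):

```javascript
// Chance of at least one false positive across independent comparisons
// at 95% significance: 1 - 0.95^n
function falsePositiveChance(comparisons) {
  return 1 - Math.pow(0.95, comparisons);
}

// 4 variations x 5 KPIs = 20 comparisons
console.log(falsePositiveChance(20).toFixed(2)); // "0.64"
console.log(falsePositiveChance(1).toFixed(2));  // "0.05"
```

In other words, with 20 comparisons there is roughly a two-in-three chance of at least one spurious “win”.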

The role of managed service

Managed service has a place. It can be essential for expanding into a new market – especially where the product’s learning curve is steep and may limit its appeal to a self-serve audience.

But the focus should always be on the quality of the service. Vendors can subsidise the cost of their service if needed – whether through funding or the higher profit margin in software – to deliver an effective optimisation program.

Then, their growth should come through self-service and partners. As above, service and software scale at different rates – and the faster a software vendor champions self-service and a partner program, the faster they’ll grow.


Disclaimer: I’m the CEO of, an agency that specialises in conversion optimisation. We partner with many of the software vendors above. While we have a vested interest in companies choosing us over managed service, we have an even greater interest in making sure they’re testing effectively.

Introducing:’s Optimizely Chrome Extension

Today we are very excited to announce the public launch of our Optimizely Chrome Extension. We’ve been using the extension internally and improving its functionality over the past year.

We began rolling it out to our clients over the last few months, and after some great feedback we decided to put it into private beta last month. Today, we are thrilled to share it with everyone!

Solving problems you never knew you had.

We feel like’s Optimizely Chrome Extension is one of those wonderful tools that you never knew you needed – until you start using it. And now that we’ve started, we couldn’t live without it.


What does the extension do?

The feature list is impressive and always growing. At the time of launch, the core features we want to highlight are below. Here are 8 great reasons you should install the extension today.

  • Quickly see whether Optimizely is running on the page (if the circle turns blue, Optimizely has been detected on the page)

Icons x and 0

  • See how many experiments are running on the page (that’s the white number within the blue circle)

Icon 3

  • Toggle QA mode to see the experiments and variations that are not yet live, with the flick of a switch*

optimizely extension QA


*Be sure to set up the QA cookie first – only users with the QA cookie set-up will be able to see tests in QA mode.

  • Switch variations quickly within an experiment with the handy drop-down selector. This will reload the page and bucket you into whichever variation you have selected.


  • Jump straight into the Results and Editor pages of any experiment. Just make sure you’re logged into your Optimizely account!


  • Copy an experiment URL to the clipboard – this way you can be sure that you and your colleagues are looking at the same thing!

Copy Url

  • Thought the QR code was dead? Wrong! We finally found a great use for it. Scan the QR code to quickly see the same experiment and variation on your mobile device.

  • Finally, make sure you are tracking the right events with the events console. This will show you Optimizely tracking, segmentation info and manual activation info.


Check out the FAQ for the full feature list and more detail!

Once you’ve installed it, be sure to take it out for a spin.

We’d suggest the following activities as a great way to get started and set you up for success:

  • Read the FAQ
  • Set up a QA cookie, to make the best use of QA mode and to check out your experiments before they go live!
  • Visit a page you’re running a test on, and check out all the experiments and variations with one handy interface

Before you start using the extension please be sure to review Optimizely’s best practices, and be sure to mask descriptive names of your tests.

Send us your feedback!

We hope you’ll find this new tool as useful as we have. If you want to send us feature requests, report bugs, or tell our Optimizely Certified Development team just how much you appreciate them, please use the handy little “Get in touch” button on the extension.

What are you waiting for?

Click here to get your hands on the extension and start saving time!

5 questions you should be asking your customers

On-site survey tools provide an easy way to gather targeted, contextual feedback from your customers. Analysing user feedback is an essential part of understanding motivations and barriers in the decision-making process.

It can be difficult to know when and how to ask the right questions in order to get the best feedback without negatively affecting the user experience. Here are our top 5 questions and tips on how to get the most out of your on-site surveys.

On-site surveys are a great way to gather qualitative feedback from your customers. Available tools include Qualaroo and Hotjar.
On-site surveys are a great way to gather qualitative feedback from your customers. Available tools include Qualaroo and Hotjar.

1. What did you come to < this site > to do today?

Where: On your landing pages

When: After a 3-5 second delay

Why: First impressions are important and that is why your landing pages should have clear value propositions and effective calls to action. Identifying user intentions and motivations will help you make pages more relevant to your users and increase conversion rates at the top of the funnel.

2. Is there any other information you need to make your decision?

Where: Product / pricing pages

When: After scrolling 50% / when the visitor attempts to leave the page

Why: It is important to identify and prioritise the information your users require to make a decision. It can be tempting to hide extra costs or play down parts of your product or service that are missing but this can lead to frustration and abandonment. Asking this question will help you identify the information that your customers need to make a quick, informed decision.

3. What is your biggest concern or fear about using us?

Where: Product / pricing pages

When: After a 3-5 second delay

Why: Studies have found that “…fear influences the cognitive process of decision-making by leading some subjects to focus excessively on catastrophic events.” Asking this question will help you identify and alleviate those fears, and reduce the negative effect they may be having on your conversion rates.

4. What persuaded you to purchase from us today?

Where: Thank you / confirmation page

When: Immediately after purchase. Ideally embedded in the page (try Wufoo forms)

Why: We find that some of our most useful insights come from users who have just completed a purchase. It’s a good time to ask what specifically motivated a user to purchase. Asking this question will help you identify and promote aspects of your service that are most appealing to your customers.

5. Was there anything that almost stopped you buying today?  

Where: Thank you / confirmation page

When: Immediately after purchase

Why: We find that users are much clearer about what would have stopped them purchasing once they have completed a purchase. Asking this question can help you identify the most important barriers preventing users from converting. Make sure to address these concerns early in the user journey to avoid surprises and reduce periods of uncertainty.

What questions have you asked your customers recently? Have you asked anything that generated valuable insights? Share in the comments below!

6 Essential tips for any developer using Optimizely

Developing within Optimizely is a unique undertaking that has few parallels with conventional front end development. In this post I will outline six gems of knowledge that I have gained while building tests in Optimizely for a wide range of clients. Please note: this post assumes that you have knowledge of writing code within Optimizely.

1. The Optimizely log is your best friend

My first essential tip is the use of the invaluable Optimizely log. The log contains all the information on bucketing, segmentation, audiences and code execution on your site on page load while also displaying execution times of each part in the process (for further documentation on the log, see this Optimizely knowledge base article).

To access the log, you simply type the following into the console of your page:


This will then return something like the following:


This is extremely useful for locating any code that is preventing the rest of your test(s) from running, which in turn will increase flicker for the user. As all code must be written in the Identifier/Action format (see this useful knowledge base article from Optimizely on how code is executed within tests), it can be easy to accidentally include non-optimised code in your experiment. Here is an example of code being delayed because it doesn’t follow the correct format:


The code highlighted is a simple variable declaration which unfortunately does not follow the correct format. You can also see, on the preceding lines, that Optimizely is continually waiting for the document to be ready in order to execute the code (in our example it took 617ms from encountering the code to the document becoming ready; on slower sites this can take much longer).

You will inevitably run into this when writing tests, and it is useful to use this tool to check that all of your code is compliant in order to reduce any possible flicker for the user.

2. Use custom jQuery functions in order to add non-compliant code

There will be times when you need to add code that won’t be in the right format and there’s no way you can transform it. When this issue arises, you can simply create your very own custom jQuery functions in order to bypass it. To do so, you will need to first define your custom function within the Optimizely force parameters (see this knowledge base article on force parameters) and then reference the defined function in your code, e.g.:

/* _optimizely_evaluate=force */
$.fn.customFunction = function() {
    var number = 10;
    for (var i = 0; i < 10; i++) {
        number++;
    }
    return this.append(number);
};
/* _optimizely_evaluate=safe */
$('.main').customFunction();

In this example, we have defined a function originally named ‘customFunction’ which increments a number ten times before appending it to the subject when called. The ‘customFunction’ is then called on the element(s) with the class ‘main’.

The potential applications of this method are wide-ranging, from timeouts to loading external scripts. Another benefit is that the function must be called via a selector that runs through the Optimizely code engine, which can then be used to check when that specific element is available to modify.

3. Use ‘onmousedown’ events for AJAX loaded buttons & outbound links to track goals inside Optimizely

Most Optimizely tracking goals can be added via the “Create Goal” window within the editor, however there may be times when you need to use custom events in order to record clicks on buttons & links.

When a click goal is added within the editor, Optimizely adds an ‘onmousedown’ event (see Optimizely’s knowledge base article on click goals) to the specified element and this is attached after all variation code has been run. The reason for this is so that it can more accurately track elements that may send the user away from the page.

In cases where you want to track an element that may be pulled in via AJAX or you want to manually track outbound links, then you can use the same event listener on those elements. For example, if you wanted to track a link that goes to an external page, you could do something like this:

$('.link').mousedown(function(event) {
    window.optimizely.push(["trackEvent", "eventName"]);
});

If the element you want to track is loaded via AJAX (or it loads later, after page load), then this method poses a problem, as Optimizely will continually poll the page until document ready, at which point it will execute the code regardless of whether the element is there or not. To get around this, you can use the .live() jQuery function (deprecated in jQuery 1.7 and replaced by .on(); however, Optimizely bundles the older 1.6.4 version, where it is still available) within the Optimizely force parameters like so:

/* _optimizely_evaluate=force */
$('.link').live('mousedown', function(event) {
    window.optimizely.push(["trackEvent", "eventName"]);
});
/* _optimizely_evaluate=safe */

This code will then execute on all elements with class ‘link’, regardless of when they get loaded onto the page.

4. Use the force variation parameters to bypass audience and targeting settings

When developing a test, you will want to check your code in an environment as close as possible to the one a user will see. While the visual editor and the preview modes are convenient, they directly modify the page in a way that could potentially affect the test, and you are of course not seeing it in the same environment the user will. Fortunately, Optimizely allows you to force tests to appear regardless of their audience and targeting settings.

Firstly, you will need to make sure your Optimizely project allows you to use the force variation parameters. To check this, simply go to Settings->Privacy within the dashboard and make sure “Disable the force variation parameter” is unchecked. If it is checked, just uncheck it and hit “Save”.


Once that setting has been updated, you can then go to your site and add the following URL parameter:


Just replace “EXPERIMENT ID” with the ID of your experiment (you can find this in the editor URL for your experiment under the parameter “experiment_id”) and “VARIATION INDEX” with the variation you want to see (a zero-based index where 0=Control, 1=Variation #1, etc.).

If your test is a multivariate test, however, then you will need to use a slightly different syntax: the variation index for each section is separated with an underscore. A value of 1_0_1, for example, will show variation 1 of section 1, the control of section 2 and variation 1 of section 3 of your test.

Always remember that this method doesn’t check for whether your test will run under certain conditions, and should only be used to check if your code works. For more information on force variation parameters, check out Optimizely’s knowledge base article on them.
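If you QA tests often, you could wrap the parameter in a small helper. This is a sketch only: the forceVariationUrl function is our own, and it assumes the Classic force parameter format of ?optimizely_x<EXPERIMENT ID>=<VARIATION INDEX>, with multivariate section indexes joined by underscores:

```javascript
// Build a force-variation URL for manual QA.
// Assumes the Classic "optimizely_x<EXPERIMENT ID>" parameter format;
// the helper itself is not part of Optimizely.
function forceVariationUrl(baseUrl, experimentId, variationIndexes) {
  // Multivariate tests take one index per section, joined with underscores
  var value = [].concat(variationIndexes).join('_');
  return baseUrl + '?optimizely_x' + experimentId + '=' + value;
}

console.log(forceVariationUrl('http://www.example.com/', '1234567890', 1));
// http://www.example.com/?optimizely_x1234567890=1
console.log(forceVariationUrl('http://www.example.com/', '1234567890', [1, 0, 1]));
// http://www.example.com/?optimizely_x1234567890=1_0_1
```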

5. Check the revision of the Optimizely snippet to ensure you are seeing your latest code

To ensure speed and reliability, the Optimizely snippet is served via Akamai which, like other CDNs, picks the best server on its distributed network to serve the code from (generally the closest geographically). This, however, comes at the price of increased save times due to the nature of invalidating files on CDNs. I’m sure many of you have had to wait minutes for your code to update; while there is no way to speed this up, you can at least check whether the code you see on the site matches the latest save from the editor.

Whenever you save a test in Optimizely, the console displays the next revision number that it is saving to and repeats this until the new revision has fully propagated to the CDN.


The first highlighted line displays the previous revision (“12”), the new revision (“13”) and the number of attempts (in this case just one, but this has been known to exceed 100 when the Optimizely platform is under heavy load). The second line shows that the CDN has finished updating to the correct revision. Edit: As noted by Toby Urff from Optimizely, the editor also indicates this by showing “(Uploading to CDN)” next to the “Save” button while it saves, which is hidden once the revision number of the snippet matches the revision number of that save.



Once you have the test saved, you can go to your page and check the current revision by entering the following into the console:
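A sketch of that check, assuming the Classic client exposes the current revision on the global optimizely object (the fallback string here is just for illustration):

```javascript
// Paste into the browser console on a page running the Optimizely snippet.
// The Classic client exposes the current snippet revision as a string.
var opt = (typeof window !== 'undefined') ? window['optimizely'] : undefined;
var revision = opt ? opt.revision : 'snippet not loaded';
console.log('Current snippet revision: ' + revision);
```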


This will return the revision number that you are currently on:


This, combined with checking the log, is incredibly useful for debugging: you can confirm your code is running correctly and that you are seeing the latest version. It should also speed up your development time, as you know exactly when the code has been updated.

6. Use the page’s jQuery to access more functions

As I stated above, Optimizely provides a trimmed build of jQuery 1.6.4 that lacks some commonly used functions such as .hide(), .ajax(), .getScript(), etc. This can be a problem when building larger, more complex tests that require external APIs or scripts. There are a few solutions: either load the full version of jQuery via Optimizely, or load no jQuery at all and use the page’s version.

Loading the full version via Optimizely can be a quick solution, but it significantly increases the snippet size, and it becomes redundant if you already have jQuery on the page. On the other hand, while using the page’s version of jQuery may at first seem the better option in terms of overall file size, it can actually be slower if jQuery is loaded at the end of the page (as is generally recommended).

A middle ground to these two options is to include the reduced snippet with Optimizely, but use the website’s version to access functions such as .ajax(). To do this, you will need to write a function within Optimizely’s force parameters to check the existence of the page’s jQuery and then execute your script. This could look something like this:

/* _optimizely_evaluate=force */
function checkjQuery() {
    if (typeof window.$ !== "undefined") {
        window.$.ajax({
            url: '/path/to/file',
            type: 'POST',
            data: {param1: 'value1'}
        }).done(function(data) {
            // Handle the response here
        });
    } else {
        setTimeout(checkjQuery, 50);
    }
}
checkjQuery();
/* _optimizely_evaluate=safe */

Here you can see that we are accessing the website’s version of jQuery via window.$ and making sure it is not ‘undefined’. We then call the .ajax() function through the window.$ scope to request a fictional file at ‘/path/to/file’. This code will run as soon as jQuery is available on the page. One thing to be aware of when using this method: if you are adding or changing content in an element that you create outside of the force parameters, then you must check for the existence of that element as well as for the page’s jQuery. This is because the page’s jQuery may load before Optimizely reaches the line of code that creates the new element.
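A minimal sketch of that double check, using a generic polling helper (the selector ‘.my-new-element’ is made up for illustration):

```javascript
/* _optimizely_evaluate=force */
// Generic polling helper: re-check every 50ms until `check` passes,
// then run `callback` once.
function whenReady(check, callback) {
    if (check()) {
        callback();
    } else {
        setTimeout(function () { whenReady(check, callback); }, 50);
    }
}

// Wait for both the page's jQuery AND the element created elsewhere
// in the variation code before touching it:
// whenReady(
//     function () {
//         return typeof window.$ !== 'undefined' &&
//                window.$('.my-new-element').length > 0;
//     },
//     function () {
//         window.$('.my-new-element').text('Updated via the page\'s jQuery');
//     }
// );
/* _optimizely_evaluate=safe */
```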

In conclusion

These tips have been slowly accrued over the two years I have been working on the Optimizely platform as a developer. They have proven invaluable to me, and I hope they bring the same benefit to you as well.

Did you find anything useful in here? Do you have essential tips for coding in Optimizely that haven’t been mentioned? Get in touch with us in the comments and let us know your thoughts!