• In this special episode, Matt Wright talks with Brent Kostak, Product Marketing Lead for Optimization and Experimentation at Adobe, and David Arbour, Senior Research Scientist at Adobe Research. Together, they explore the launch of Adobe’s Experimentation Accelerator, a new AI-first platform built to automate and scale experimentation programs across enterprises. The conversation dives into how AI is transforming experimentation from manual testing into a continuous, insight-driven process powered by Adobe’s new Agent Orchestrator platform and specialized AI agents.

    The guests discuss key use cases and business challenges the Accelerator addresses, from automating experiment analysis and identifying high-impact opportunities to unifying data across teams. Arbour explains how the system grounds AI reasoning in statistical rigor and historical data, ensuring consistency, replicability, and reliability. Kostak highlights early beta results showing 200% increases in experiment variation velocity and major gains in ARR impact, while customers like AAA Northeast have discovered new strategic insights for campaign optimization.

    Looking ahead, both guests predict that experimentation will become tightly embedded into daily workflows, with AI agents proposing, interpreting, and even executing experiments automatically. Their advice for successful adoption: focus on organizational readiness, define success in terms that don't mention AI, and treat AI as a collaborative augmentation tool rather than a replacement. Adobe’s agentic AI framework, combining data, content, and journey orchestration, is positioned to make experimentation faster, smarter, and more connected than ever before.

    To view the complete podcast and transcript, click here.


  • Introductions

    Matt Wright:
    Hi everyone, and welcome to a very special episode. Today, we’re joined by the team behind Adobe’s upcoming Experimentation Accelerator, diving into the potential of AI-powered experimentation.

    We’ve got two special guests: Brent Kostak from Adobe and David Arbour.

    I probably can’t do justice introducing all that you both do, so I’ll let you take it from here. Brent, can you start with a quick intro, and then we’ll hand it to David?

    Brent Kostak:
    Thanks, Matt, and thanks to the Conversion team for having us.
    I’m Brent Kostak, and I lead product marketing for optimization and experimentation at Adobe.

    We’re here to discuss some of the AI-first application launches for Experimentation Accelerator and upcoming innovations.

    David Arbour:
    Thanks, Brent, and thanks again for having us. I’m David Arbour, a Senior Research Scientist at Adobe Research. I work on experimentation, causal inference, and AI in support of those two areas. A lot of my focus lately has been on this new initiative we’ll be discussing.

    Matt Wright:
    Fantastic. It’s great to have you both here.
    Before we dive in, I’d love to get Adobe’s perspective on this: what are the top use cases for experimentation and optimization?

  • Top use cases for experimentation and optimization

    Brent Kostak:
    Great question. Adobe’s been in this market for a long time with Adobe Target and Journey Optimizer.

    From a use-case standpoint, we’re seeing things expand beyond traditional UI/UX teams into channel marketers and lifecycle journey experts. The main use cases we’re seeing today include:

    • Driving higher-impact campaigns and customer journeys – focusing on higher conversion and revenue uplift across the end-to-end journey.

    • Automating experimentation analysis – so teams can prioritize faster and understand what they’re testing and why.

    • Growth-focused experimentation – helping both marketing and product teams improve subscription and service growth.

    These three use cases align with our core personas around optimization, experimentation, and growth.

    David Arbour:
    I completely agree with that. From my perspective, it’s never been easier to create experiment content—but it’s never been harder to extract meaningful insights from it.

    We all know we need to experiment a lot, and generating variants is easy now. The real challenge is making that analysis approachable and reusable so each experiment builds long-term learning, not just one-off results.

  • What is Experimentation Accelerator & key challenges

    Matt Wright:
    Let’s talk specifically about Experimentation Accelerator and AI-guided experimentation. What is it, and how would you describe it to someone new?

    Brent Kostak:
    Sure. The Adobe Journey Optimizer Experimentation Accelerator is a new AI-first application we’re launching on September 30th.

    It’s designed for Adobe Target and Journey Optimizer customers to:

    • Accelerate and automate experimentation analysis

    • Identify high-impact opportunities to scale growth

    • Scale experimentation programs across the enterprise

    It’s built on Adobe’s Agent Orchestrator platform, aligning with Adobe’s broader agentic AI innovation strategy.

    Essentially, it helps teams automate analysis, spot where to test next, and mature their experimentation programs across people, processes, and technology.

  • Challenges and pain points addressed

    Matt Wright:
    What are some of the problems or challenges this helps solve?

    David Arbour:
    It’s honestly never been a better time to work on experimentation. A few years ago, I wouldn’t have said that!

    Here’s what we’ve seen:

    • It’s easy to create lots of test variants.

    • But many teams then realize their experiment is underpowered, or they’re unsure how to interpret results.

    That leads to a few key needs:

    1. Making it clear why you’re running an experiment—what decision you’re trying to inform.

    2. Using AI to identify relationships between variants—what worked, what didn’t—and using those learnings to guide the next test.

    AI can help reveal patterns across experiments, surfacing which attributes tend to drive success.

    Brent Kostak:
    Exactly. There’s also an organizational shift happening.

    We’re seeing companies use AI agents and workflows to align business goals and experimentation efforts across different teams.

    By creating a centralized integration point, we’re helping teams share learnings across units, not just in PowerPoint decks or Jira tickets. Instead, they can collaborate in an automated, conversational way, breaking silos and scaling insights.

    David Arbour:
    Right—and that’s crucial. Too often, experiment results live in slide decks that no one revisits.

    If someone leaves the organization, that knowledge disappears.
    What we’re building reduces that friction—making experimentation insights persistent, searchable, and actionable long-term.

  • How it works: AI insights & reliability

    How AI Experiment Insights Work

    Matt Wright:
    Let’s dig into how this actually works.
    Sometimes when people use AI, they give it a lot of context but get shallow answers. How do you make sure that doesn’t happen here?

    David Arbour:
    Great question—and honestly, it’s what keeps me up at night.
    You could just dump experiment data into a large language model (LLM) and ask, “Why did this work?” Sometimes the answer looks plausible, but you run into two major problems:

    1. Omissions – Important details get skipped.

    2. Hallucinations – AI makes things up.

    So instead, we take a different approach:

    • We extract representations (attributes and features) of the content itself.

    • We learn from historical experiments how those attributes relate to performance.

    • We anchor results in fixed, auditable scores — meaning if you recheck in a month, you’ll get the same answer.

    This makes it grounded, consistent, and tied to real experimental data, not random AI text generation.

    We also use these patterns to recommend what to test next — for example:

    “We noticed empathetic tone performs better. Consider adding that to your next campaign.”

    Everything stays fact-based and replicable, not just “the model said so.”
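    To make that grounding concrete, here is a minimal sketch of the general pattern David describes: turn content into a fixed feature vector, fit those attributes against lifts from historical experiments, and score new variants deterministically so a recheck gives the same answer. The attribute names, the linear model, and the sample data are illustrative assumptions, not Adobe's implementation.

```python
# Minimal sketch of attribute-grounded insight scoring (illustrative only).
# Attribute names, the linear model, and the sample data are assumptions,
# not Adobe's actual implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Variant:
    text: str
    lift: float  # observed lift from a past experiment

def extract_attributes(text: str) -> np.ndarray:
    """Turn content into a fixed, auditable feature vector (hypothetical attributes)."""
    return np.array([
        1.0 if "you" in text.lower() else 0.0,   # second-person / empathetic tone
        1.0 if "free" in text.lower() else 0.0,  # offer framing
        min(len(text.split()) / 20.0, 1.0),      # normalized length
    ])

def fit_attribute_model(history: list[Variant]) -> np.ndarray:
    """Learn how attributes relate to performance across historical experiments."""
    X = np.stack([extract_attributes(v.text) for v in history])
    y = np.array([v.lift for v in history])
    # Least-squares fit; deterministic, so rechecking a month later gives the same answer.
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights

def score_variant(text: str, weights: np.ndarray) -> float:
    """Fixed, auditable score for a candidate variant."""
    return float(extract_attributes(text) @ weights)

history = [
    Variant("Get your free trial today", 0.04),
    Variant("We understand you want simpler reporting", 0.07),
    Variant("Upgrade now", 0.01),
]
weights = fit_attribute_model(history)
print(score_variant("You deserve a free month on us", weights))
```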

    Handling Conflicts and Scaling

    Matt Wright:
    What happens when results conflict or change over time? How does the model handle that?

    David Arbour:
    That’s where we blend AI reasoning with classical statistics.
    Classical stats gives guarantees; AI gives interpretability.
    We model time-based factors like:

    • Seasonality

    • Brand differences

    • Audience shifts

    The more experiments you run, the more accurate the model becomes.
    We start with a baseline, but over time, it becomes tailored to your data — your customers, your industry, your campaigns.
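    A minimal sketch of the "baseline that becomes tailored to your data" idea, using a simple shrinkage estimator plus a hypothetical seasonal adjustment. The prior values and the sinusoidal seasonality term are assumptions for illustration, not Adobe's model.

```python
# Illustrative sketch: start from a global baseline and let the customer's
# own experiments take over as they accumulate (empirical-Bayes-style shrinkage).
import numpy as np

def shrunk_lift_estimate(observed_lifts: np.ndarray,
                         baseline_mean: float = 0.02,
                         baseline_strength: float = 10.0) -> float:
    """Blend a global baseline with this customer's own experiments.

    With few experiments the estimate stays close to the baseline; as more
    experiments accumulate, the customer's own data dominates.
    """
    n = len(observed_lifts)
    if n == 0:
        return baseline_mean
    weight = n / (n + baseline_strength)
    return weight * observed_lifts.mean() + (1 - weight) * baseline_mean

def deseasonalize(lift: float, week_of_year: int, seasonal_amplitude: float = 0.005) -> float:
    """Remove a (hypothetical) sinusoidal seasonal component before pooling.
    Brand or audience shifts could enter as covariates in the same way."""
    return lift - seasonal_amplitude * np.sin(2 * np.pi * week_of_year / 52)

print(round(shrunk_lift_estimate(np.array([0.05, 0.06, 0.04])), 4))
print(round(deseasonalize(0.05, week_of_year=26), 4))
```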

    Brent Kostak:
    Exactly — and this connects to Adobe’s three pillars:
    Data, Content, and Journeys.

    • Data: What David just described—context, modeling, and insights.

    • Content: Making sure AI-generated content is on-brand and compliant.

    • Journeys: Ensuring cross-channel orchestration—no conflicting experiments across campaigns or channels.

    All of this runs on the Adobe Experience Platform, so you get enterprise-level scalability and visibility.

  • AI agents, assistants & Orchestrator platform

    Understanding Adobe’s AI Framework

    Matt Wright:
    Adobe often mentions “AI capabilities,” “AI features,” and “AI agents.”
    How are those different?

    Brent Kostak:
    Good question. Think of it in three layers:

    1. AI Assistant – The conversational interface within Adobe apps (for example, “Summarize my experiments”).

    2. AI Agents – Specialized reasoning models that take action or query data across systems.

    3. Agent Orchestrator Platform – The layer that manages and coordinates all those agents across Adobe Experience Cloud.

    The Experimentation Agent powers Experimentation Accelerator, but other related ones include:

    • Journey Agent – connects campaigns and touchpoints.

    • Data Insights Agent – drives analytics and interpretation.

    • Audience Agent – helps understand and segment users.

    These all interconnect through the Agent Orchestrator, enabling consistent reasoning and data flow.

    How Agents Work and Specialization

    Matt Wright:
    Are all Adobe agents built the same way?

    David Arbour:
    They share an orchestrator, but each is specialized for its use case.
    For example, the Experimentation Agent focuses deeply on testing logic, analysis, and insight delivery.

    Two big design priorities:

    1. Rich features – The agent must understand and access the right experiment data.

    2. Natural conversation – Translating user intent correctly (e.g., “show me underperforming variants” → actual statistical query).

    It’s surprisingly complex to get both right, so we spend a lot of time aligning human language to technical action.
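    As one illustration of that translation problem, the sketch below maps the utterance "show me underperforming variants" to one possible statistical formalization. The intent label, the threshold, and the data shape are hypothetical, meant only to show the human-language-to-technical-action step.

```python
# Hypothetical mapping from a conversational intent to a concrete statistical query.
from typing import TypedDict

class VariantResult(TypedDict):
    name: str
    lift: float
    ci_upper: float  # upper bound of the confidence interval on lift

def underperforming_variants(results: list[VariantResult]) -> list[str]:
    """One possible formalization of 'show me underperforming variants':
    variants whose entire confidence interval sits below zero lift."""
    return [r["name"] for r in results if r["ci_upper"] < 0]

# An agent routes the utterance to the query it has decided the user means.
INTENT_TO_QUERY = {
    "show me underperforming variants": underperforming_variants,
}

results: list[VariantResult] = [
    {"name": "hero_a", "lift": 0.03, "ci_upper": 0.05},
    {"name": "hero_b", "lift": -0.02, "ci_upper": -0.005},
]
print(INTENT_TO_QUERY["show me underperforming variants"](results))
```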

    Building a Custom Agent

    Matt Wright:
    What’s it like to build one of these agents? How do you evaluate or optimize it?

    David Arbour:
    It’s part technical, part sociological.
    In classical machine learning (pre-2020), you had labels and accuracy metrics.
    Now, success depends on human feedback loops — understanding what’s helpful and in-scope for real users.

    We annotate hundreds of examples:

    • Was the answer correct?

    • Was it frustrating?

    • Did it go out of scope?

    Then we refine prompts and training examples until the experience feels natural and reliable.
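    A rough sketch of what such an evaluation loop can look like in code, aggregating the rubric David mentions (correct, frustrating, in scope) into metrics that drive prompt refinement. The data structures and examples are illustrative only.

```python
# Illustrative annotation-aggregation step for a human-feedback evaluation loop.
from dataclasses import dataclass

@dataclass
class Annotation:
    question: str
    correct: bool
    frustrating: bool
    in_scope: bool

def summarize(annotations: list[Annotation]) -> dict[str, float]:
    """Roll annotated examples up into metrics that guide prompt refinement."""
    n = len(annotations)
    return {
        "accuracy": sum(a.correct for a in annotations) / n,
        "frustration_rate": sum(a.frustrating for a in annotations) / n,
        "out_of_scope_rate": sum(not a.in_scope for a in annotations) / n,
    }

batch = [
    Annotation("show underperforming variants", True, False, True),
    Annotation("write my quarterly report", False, True, False),
]
print(summarize(batch))  # refine prompts and examples, re-annotate, repeat
```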

    Future Evolution and Adaptability

    Matt Wright:
    How will this evolve as LLMs advance?

    David Arbour:
    We’re moving toward adaptive reasoning.
    Right now, each agent has a defined set of tools. In the future, a single agent could dynamically compose hundreds of specialized tools under the hood.

    For example:

    “Find the best-performing segment in my last experiment.”
    The agent could then identify segments, analyze attributes, and even propose new campaign paths — all in one flow.

    It’s like giving the system more Lego blocks to build richer workflows.

    Brent Kostak:
    And because this runs on Adobe Experience Platform, everything’s grounded in real-time customer profiles and behavioral data.

    Partners can use Agent Composer, SDK, and Registry to build their own custom agents or fine-tune ours for specific business use cases.

    That’s a major differentiator versus point solutions in the market.

  • Beta results & real-world impact

    Early Customer Impact

    Matt Wright:
    You’ve had a beta running for a while. What business impact have you seen so far?

    Brent Kostak:
    Yes, we’ve been in beta for several months.
    Two great examples:

    1. Adobe.com (Customer Zero)

    • Massive internal program managing Adobe’s website testing.

    • Using Experimentation Accelerator, the team saw:

      • 200% more experiment variations

      • Higher win rates

      • Over 200% increase in ARR impact per test

    It wasn’t just faster testing; it delivered smarter, more consistent outcomes.

    2. AAA Northeast (Beta Partner)

    • Used AI Experiment Insights to guide a new member benefits campaign.

    • AI highlighted which messaging and engagement pathways would perform best.

    • They learned not only which variants worked but also why — enabling broader marketing strategy insights.

    These weren’t just “conversion bumps” — they gained contextual understanding that shaped future campaigns.

  • What sets Adobe apart

    Differentiation in the Market

    Matt Wright:
    Other platforms are releasing agentic AI features too. What sets Adobe apart?

    David Arbour:
    Two main things:

    1. Grounding in statistical rigor

    2. Integration across the experience stack

    Many competitors emphasize “velocity”—run 100x more experiments. But:

    • Without 100x more users or better analysis, that’s just noise.

    • It leads to frustration or weakened stats.

    Adobe instead focuses on analyzing smarter, not just testing faster.

    We keep statistical integrity (confidence sequences, causal analysis) while using AI to scale insight generation.

    That means:

    • You can run more variants safely.

    • You can reuse learnings for future experiments.

    • You don’t lose trust in your data.
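    For readers unfamiliar with confidence sequences, here is a simplified sketch of the idea: an anytime-valid interval you can check continuously without inflating error rates. The half-width formula below is a rough iterated-logarithm-style approximation for illustration only, not Adobe's implementation or a production-grade bound.

```python
# Illustrative anytime-valid confidence sequence: peek as often as you like.
import math

def cs_half_width(t: int, sigma: float = 0.5, alpha: float = 0.05) -> float:
    """Approximate confidence-sequence half-width after t observations
    (simplified law-of-the-iterated-logarithm shape, illustration only)."""
    t = max(t, 2)
    return sigma * math.sqrt(2 * (math.log(1 + math.log(t)) + math.log(1 / alpha)) / t)

def still_inconclusive(mean_diff: float, t: int) -> bool:
    """The variant difference is called out only once zero leaves the sequence."""
    return abs(mean_diff) <= cs_half_width(t)

# The width shrinks roughly like sqrt(log log t / t) as traffic accumulates.
for t in (100, 1_000, 10_000):
    print(t, round(cs_half_width(t), 4))
print(still_inconclusive(0.01, 2_000))
```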

    Brent Kostak:
    Exactly. And our Experiment Insights combine:

    1. Content patterns – what’s working in messaging and design

    2. Audience data – who’s responding

    3. Test behavior – what’s driving causal impact

    Plus, customers can bring their own multi-armed bandit or modeling approaches to fine-tune their programs.

    And then there’s Adaptive Experiments — a new, human-in-the-loop method that lets teams:

    • Add or remove variants mid-experiment

    • Maintain statistical validity

    • Accelerate iteration

    That’s groundbreaking — it challenges the old “flush your data” rule and opens up new adaptive workflows.
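    One generic way to add or remove variants mid-experiment without invalidating the analysis is to log the assignment probability in effect for each visitor and weight outcomes by it (inverse-propensity weighting). The sketch below illustrates that idea under simple assumptions; it is a generic example, not Adobe's Adaptive Experiments implementation.

```python
# Illustrative adaptive allocation with inverse-propensity-weighted analysis.
import random
from collections import defaultdict

class AdaptiveAllocator:
    def __init__(self, variants: list[str]):
        self.variants = list(variants)

    def add_variant(self, name: str) -> None:
        self.variants.append(name)

    def remove_variant(self, name: str) -> None:
        self.variants.remove(name)

    def assign(self) -> tuple[str, float]:
        """Return (variant, probability under the allocation in effect right now)."""
        prob = 1.0 / len(self.variants)
        return random.choice(self.variants), prob

def ipw_estimates(logs: list[tuple[str, float, float]]) -> dict[str, float]:
    """Inverse-propensity-weighted mean outcome per variant.
    Each log entry is (variant, assignment_probability, outcome)."""
    totals, weights = defaultdict(float), defaultdict(float)
    for variant, prob, outcome in logs:
        totals[variant] += outcome / prob
        weights[variant] += 1.0 / prob
    return {v: totals[v] / weights[v] for v in totals}

allocator = AdaptiveAllocator(["control", "variant_a"])
logs = []
for i in range(1000):
    if i == 500:
        allocator.add_variant("variant_b")  # variant added mid-experiment
    variant, prob = allocator.assign()
    converted = random.random() < {"control": 0.10, "variant_a": 0.12, "variant_b": 0.15}[variant]
    logs.append((variant, prob, float(converted)))
print(ipw_estimates(logs))
```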

  • Future of experimentation & final advice

    The Future (Next Year and Beyond)

    Matt Wright:
    Looking ahead — how do you see experimentation programs evolving a year from now?

    David Arbour:
    Experimentation will become tightly integrated into everyday workflows.
    Instead of being a separate process, agents will:

    • Spot opportunities

    • Propose experiments

    • Even run them automatically if approved

    It’ll shift from isolated testing to continuous, intelligent learning.

    Brent Kostak:
    Totally agree.
    Experimentation is expanding beyond conversion metrics.
    Teams will start optimizing:

    • Prompt quality in conversational AI

    • Engagement experience in real-time interactions

    • Operational efficiency inside organizations

    Optimization will mean more than “higher revenue”—it’ll mean better experience orchestration.

    How to Adopt AI Successfully

    Matt Wright:
    Three-quarters of AI initiatives reportedly fail. Any advice on successful adoption?

    David Arbour:
    Yes — define success without using the word “AI.”
    Focus on the use case and measurable outcome.

    AI should support your goal, not be the goal.
    It’s easy to get 85% of the way fast—but that last 15% is where failure happens if the goal isn’t clear.

    Brent Kostak:
    Exactly.
    It’s about organizational readiness as much as technology.

    Teams that succeed think about:

    • How AI will augment human workflows

    • How automation fits with existing analytics and experimentation culture

    • How to operationalize insights across business units

    Without that, you may get short-term gains but miss transformational potential.

    Closing

    Matt Wright:
    This has been fascinating. There’s so much more coming in the next six months, I’m sure.
    Thank you both for sharing your insights.

    Brent Kostak:
    Thank you, Matt.
    Keep an eye out for:

    • Adobe Summit announcements

    • New podcasts and events

    • Ongoing thought leadership from our teams

    Matt Wright:
    Fantastic. Thanks again, Brent and David — and thanks to everyone for listening.

  • Links

    1. Adobe Experimentation Accelerator 
    2. Upcoming Adobe Events