Learn from the Testing Experts
11th March, 2026
DENVER
Keynote Speakers
Operationalizing AI – The AI-Enabled Team
In the current climate, it is more important than ever that teams can pivot quickly and deliver efficiently. Though sometimes overlooked, playbooks remain a proven way to foster collaboration, and as we all know, having the right people “in the room” strongly influences quality across the SDLC.
Introduction: Defining what it means to operationalize AI—not just building models, but integrating them successfully into business operations.
Roadmapping for AI: Strategies for aligning pilots, capacity, and team structure to business goals.
“Lab vs. Crowd” Methods: Balancing deep experimentation with broader organizational buy-in.
Building Support Systems: How to grow AI skills and capabilities company-wide, starting from a lean or solo team.
Deep Dives:
Successes and challenges from scaling AI at CampMinder.
Techniques for centralizing AI processes and moving from pilots to full operations.
Capacity building, resource allocation, and nurturing a culture of innovation.
Real-World Examples: Melissa’s lessons learned through hands-on operationalization with cross-functional teams.
Takeaways from this talk
- Operationalizing AI requires more than technical aptitude; it’s about centralizing, scaling, and integrating AI across the whole organization, starting with good roadmapping and support systems.
- The most successful teams blend focused R&D (“lab”) with broad enablement and engagement (“crowd”)—you need buy-in and participation beyond isolated pilots.
- Real momentum comes from intentionally building capacity, nurturing innovation, and sharing responsibility so that AI becomes part of everyday work, not just a side project—especially when starting with a small or single-person team.
- These practical insights equip leaders, engineers, and testers to systematically build and scale AI capabilities, avoiding common pitfalls and ensuring lasting organizational impact.
The Democratization of Automated Testing
The barrier to entry for test automation has collapsed. With the rise of AI agents and LLMs, writing automation code has become the easy part, allowing SDETs and QAs to ramp up on new languages instantly and shift from traditional BDD to inclusive, spec-driven development. But as coding becomes democratized, a new and steeper challenge emerges: the exploding complexity of distributed systems. With massive cloud migrations and microservices scaling continuously, the difficulty has shifted from how to script a test to what to test in a web of interconnected services. This talk explores why the future of test automation isn’t about mastering syntax, but about mastering the SDLC and your domain. We will discuss how to navigate this paradigm shift, where technical versatility is the baseline but system knowledge is the true differentiator.
Takeaways from this talk
- Coding is no longer the barrier: AI agents and LLMs have turned code generation into the “easy part,” enabling instant adaptation to new languages.
- Shift from Author to Editor: Your role evolves from writing lines of code to critically auditing AI-generated output, requiring a sharp eye to validate logic and security.
- Methodologies are modernizing: The industry is moving away from traditional BDD toward more inclusive, spec-driven development models.
- Complexity is the new bottleneck: The primary challenge has moved from scripting tests to managing the exploding complexity of distributed systems and microservices.
- Strategy over Syntax: With web-scale interconnected services, the difficulty lies in knowing what to test rather than how to script it.
- Domain knowledge is the differentiator: Success now depends on mastering the SDLC and deeply understanding system architecture, as technical versatility becomes the new baseline.