Learn from the Testing Experts
13th May, 2026
SALT LAKE
Featured Speakers
Testing the Untestable: Cracking the Code of AI Systems
AI is rewriting how software is built — and it’s completely transforming how we test it. Traditional QA methods were designed for deterministic systems, but today’s AI-enhanced applications behave in probabilistic, emergent, and sometimes unpredictable ways. How do testers ensure safety, reliability, and trust when the software itself is learning, evolving, and making decisions?
In this presentation, Kevin shares a practical, entertaining, and deeply insightful guide to testing AI systems drawn from real work at FamilySearch, where AI is used to label, extract, and publish data from billions of historical documents. You’ll learn why inputs → outputs is the new golden rule of AI testing, how evals and dashboards replace traditional pass/fail thinking, and how to design guardrails for AI agents before they act unexpectedly.
Packed with vivid examples — from ML-assisted data extraction to agents accidentally trying to “hack a Swiss bank” — this talk reframes QA for the age of AI and gives testers a clear blueprint to stay indispensable in an AI-partnered future.
Takeaways from this talk
- Why AI features require a different mindset than traditional regression testing
- How to measure coverage when AI can generate thousands of test cases per feature
- The role of LLMs, agentic systems, and AI managers in modern test workflows
- How to safely test semi-autonomous and fully autonomous AI agents
- Practical patterns for building evals, categorizing risks, and sharing dashboards
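The "inputs → outputs" eval mindset mentioned in the abstract can be sketched in a few lines. Everything below is a hypothetical illustration, not the presenter's actual tooling: `ai_label` is a stub standing in for the AI feature under test, and the point is the shape of an eval — graded examples in, per-category pass rates out, rather than a single pass/fail verdict.

```python
# Minimal eval harness: graded examples in, pass rates out.
# `ai_label` is a stand-in for the AI feature under test.

def ai_label(text: str) -> str:
    """Hypothetical labeler; in practice this would call a model."""
    return "birth_record" if "born" in text else "other"

# Each eval case: an input, the expected output, and a risk category.
EVAL_CASES = [
    {"input": "born 12 Jan 1847", "expected": "birth_record", "category": "extraction"},
    {"input": "married 3 May 1901", "expected": "other", "category": "extraction"},
    {"input": "died 1899", "expected": "other", "category": "safety"},
]

def run_evals(cases):
    """Score every case and aggregate pass rates by category."""
    totals, passes = {}, {}
    for case in cases:
        cat = case["category"]
        totals[cat] = totals.get(cat, 0) + 1
        if ai_label(case["input"]) == case["expected"]:
            passes[cat] = passes.get(cat, 0) + 1
    # A dashboard would chart these rates over time instead of gating a build.
    return {cat: passes.get(cat, 0) / totals[cat] for cat in totals}

print(run_evals(EVAL_CASES))  # → {'extraction': 1.0, 'safety': 1.0}
```

A rate below 1.0 here is a signal to investigate a category, not an automatic failure — which is the mindset shift from traditional pass/fail assertions.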
No Budget, No Problem: Scalable API Load Testing with PewPew Using HAR Files
Modern performance testing doesn’t have to be expensive or complex. In this session, you’ll learn how to build scalable API load tests using nothing but free tools and browser-generated .HAR files. We’ll walk through a practical workflow that captures real user interactions, isolates performance-critical endpoints, and transforms them into structured load scenarios.
You’ll see how to:
- Generate and extract .HAR files using browser-based tools
- Parse and organize API calls to identify key performance targets
- Convert .HAR data into reusable scripts with a free coordination tool
- Simulate realistic traffic patterns and stress conditions, without paid software
Whether you’re part of a lean QA team or just starting out with performance testing, this session offers a replicable blueprint for building robust test suites with zero budget.
Takeaways from this talk
- How to use .HAR files to capture real-world API behavior
- A free, repeatable workflow for building load tests from scratch
- Techniques for identifying and prioritizing performance-critical endpoints
- How to simulate realistic traffic without commercial tools
- A practical framework you can implement immediately, no budget required
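As a rough sketch of the first two workflow steps (an assumption of this editor, not the presenter's actual scripts): a .HAR export is plain JSON, so isolating and ranking the performance-critical endpoints needs only the standard library. The sample HAR below is invented for illustration.

```python
import json
from collections import Counter
from urllib.parse import urlparse

# A trimmed .HAR export; real files are saved from browser dev tools.
HAR = json.loads("""{
  "log": {"entries": [
    {"request": {"method": "GET",  "url": "https://api.example.com/users?page=1"}},
    {"request": {"method": "GET",  "url": "https://api.example.com/users?page=2"}},
    {"request": {"method": "POST", "url": "https://api.example.com/orders"}},
    {"request": {"method": "GET",  "url": "https://cdn.example.com/logo.png"}}
  ]}
}""")

def api_targets(har: dict) -> Counter:
    """Count method+path pairs, skipping static assets, to find hot endpoints."""
    counts = Counter()
    for entry in har["log"]["entries"]:
        req = entry["request"]
        path = urlparse(req["url"]).path
        if not path.endswith((".png", ".css", ".js")):  # drop static assets
            counts[(req["method"], path)] += 1
    return counts

for (method, path), n in api_targets(HAR).most_common():
    print(f"{n:3d}  {method:5s} {path}")
```

The ranked output (here, `/users` twice, `/orders` once) is the target list you would then convert into load scenarios for a free load generator such as PewPew.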
Tutorial Speaker
From Automation to Autonomy: Building Intelligent Test Agents for Modern Quality Engineering
Traditional test automation has reached its limits: scripts break with every UI change, test suites grow unmanageable, and QA teams spend more time maintaining tests than finding bugs. The future of quality assurance lies not in better automation, but in autonomous agents that can think, learn, and adapt.
In this session, we’ll explore the complete lifecycle of autonomous test agents, from understanding requirements to self-healing execution and intelligent analysis. Through a live demonstration, you’ll see how AI-powered agents can independently plan test strategies, generate comprehensive test scenarios, execute and adapt tests in real time, and provide root cause analysis, all without human intervention. We’ll move beyond the hype to examine when autonomous agents deliver real value versus when traditional approaches remain superior.
You’ll leave with practical frameworks for implementing agentic automation in your organization, strategies for building trust in AI-driven testing, and a clear understanding of how to balance human expertise with machine intelligence. Whether you’re struggling with test maintenance overhead, looking to accelerate your testing cycles, or preparing for the next evolution in QA, this session will provide actionable insights into the autonomous testing revolution.
Takeaways from this talk
- Understand the complete autonomous agent lifecycle and its capabilities
- See a live demonstration of an agent handling the full testing workflow
- Learn decision frameworks for when to adopt agentic vs. traditional automation
- Discover strategies for validating and trusting autonomous test agents
- Get practical first steps for implementing autonomous testing in your team
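The plan → execute → analyze lifecycle described above can be caricatured as a simple control loop. This is a sketch only; the three stub functions are placeholders where a real agent would call an LLM or drive the application under test.

```python
# Skeleton of an autonomous test-agent cycle: plan, execute, analyze.
# plan_tests / run_test / diagnose are stubs standing in for model calls.

def plan_tests(requirement: str) -> list:
    """Stub planner: a real agent would generate scenarios from the requirement."""
    return [f"{requirement}: happy path", f"{requirement}: invalid input"]

def run_test(scenario: str) -> bool:
    """Stub executor: a real agent would drive the app and self-heal locators."""
    return "invalid" not in scenario  # pretend the invalid-input case fails

def diagnose(scenario: str) -> str:
    """Stub analyzer: a real agent would propose a root-cause hypothesis."""
    return f"needs triage: {scenario}"

def agent_cycle(requirement: str) -> dict:
    """One full pass: plan scenarios, run each, diagnose any failures."""
    results = {}
    for scenario in plan_tests(requirement):
        results[scenario] = "pass" if run_test(scenario) else diagnose(scenario)
    return results

print(agent_cycle("login form"))
```

Even in this toy form, the loop shows where human oversight fits: reviewing the planner's scenarios and the analyzer's hypotheses, rather than hand-maintaining each script.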


