Learn from the Testing Experts
13th March, 2026
SALT LAKE
Featured Speakers
Testing the Untestable: Cracking the Code of AI Systems
AI is rewriting how software is built — and it’s completely transforming how we test it. Traditional QA methods were designed for deterministic systems, but today’s AI-enhanced applications behave in probabilistic, emergent, and sometimes unpredictable ways. How do testers ensure safety, reliability, and trust when the software itself is learning, evolving, and making decisions?
In this presentation, Kevin shares a practical, entertaining, and deeply insightful guide to testing AI systems drawn from real work at FamilySearch, where AI is used to label, extract, and publish data from billions of historical documents. You’ll learn why inputs → outputs is the new golden rule of AI testing, how evals and dashboards replace traditional pass/fail thinking, and how to design guardrails for AI agents before they act unexpectedly.
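To make the “evals” idea concrete, here is a minimal, hypothetical Python sketch (invented names, not code from the talk) that scores a batch of input/output pairs and judges the aggregate pass rate against a threshold, rather than declaring a single pass or fail:

```python
# Minimal "eval" sketch: score many input/output pairs and report an
# aggregate pass rate instead of a single pass/fail verdict.
# All names here are illustrative, not the speaker's actual tooling.
from dataclasses import dataclass

@dataclass
class EvalCase:
    input_text: str       # e.g. a line from a historical document
    expected_label: str   # the label a human reviewer would assign

def run_model(input_text: str) -> str:
    """Stand-in for the real model call (e.g. an LLM labeling a record)."""
    return "birth_record"  # placeholder output

def run_eval(cases: list[EvalCase], threshold: float = 0.95) -> bool:
    """Score every case, then judge the aggregate against a threshold."""
    passed = sum(run_model(c.input_text) == c.expected_label for c in cases)
    rate = passed / len(cases)
    print(f"pass rate: {rate:.1%} over {len(cases)} cases")
    return rate >= threshold  # the ship/no-ship signal a dashboard would track

if __name__ == "__main__":
    run_eval([EvalCase("Born 12 May 1881, Oslo", "birth_record")])
```

In practice a pass rate like this would be tracked on a dashboard over time rather than evaluated once.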
Packed with vivid examples — from ML-assisted data extraction to agents accidentally trying to “hack a Swiss bank” — this talk reframes QA for the age of AI and gives testers a clear blueprint to stay indispensable in an AI-partnered future.
Takeaways from this talk
- Why AI features require a different mindset than traditional regression testing
- How to measure coverage when AI can generate thousands of test cases per feature
- The role of LLMs, agentic systems, and AI managers in modern test workflows
- How to safely test semi-autonomous and fully autonomous AI agents
- Practical patterns for building evals, categorizing risks, and sharing dashboards
No Budget, No Problem: Scalable API Load Testing with PewPew Using HAR Files
Modern performance testing doesn’t have to be expensive or complex. In this session, you’ll learn how to build scalable API load tests using nothing but free tools and browser-generated .HAR files. We’ll walk through a practical workflow that captures real user interactions, isolates performance-critical endpoints, and transforms them into structured load scenarios.
You’ll see how to:
- Generate and extract .HAR files using browser-based tools
- Parse and organize API calls to identify key performance targets
- Convert .HAR data into reusable scripts with a free coordination tool
- Simulate realistic traffic patterns and stress conditions, without paid software
Whether you’re part of a lean QA team or just starting out with performance testing, this session offers a replicable blueprint for building robust test suites with zero budget.
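To illustrate the parsing step, here is a minimal Python sketch (not the session’s actual tooling; the file name and filtering heuristic are assumptions) that reads a standard .HAR export and tallies the most frequently hit API endpoints:

```python
# Sketch: pull likely API endpoints out of a browser-exported .HAR file
# so they can be turned into load-test scenarios. The log.entries layout
# is part of the standard HAR format; the filter below is an assumption.
import json
from collections import Counter
from urllib.parse import urlparse

def extract_api_calls(har_path: str) -> Counter:
    """Count (method, path) pairs for API-looking requests in a HAR file."""
    with open(har_path, encoding="utf-8") as f:
        har = json.load(f)
    calls = Counter()
    for entry in har["log"]["entries"]:
        req = entry["request"]
        path = urlparse(req["url"]).path
        # Crude heuristic: keep probable API endpoints, skip static assets.
        if "/api/" in path or path.endswith(".json"):
            calls[(req["method"], path)] += 1
    return calls

if __name__ == "__main__":
    # "session.har" is a placeholder for a file exported from DevTools.
    for (method, path), count in extract_api_calls("session.har").most_common(10):
        print(f"{count:4d}  {method:6s} {path}")
```

The most frequent endpoints from a tally like this are natural candidates for the load scenarios built later in the workflow.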
Takeaways from this talk
- How to use .HAR files to capture real-world API behavior
- A free, repeatable workflow for building load tests from scratch
- Techniques for identifying and prioritizing performance-critical endpoints
- How to simulate realistic traffic without commercial tools
- A practical framework you can implement immediately, no budget required

