Learn from the Testing Experts
22nd May, 2026
SYDNEY
Featured Speaker
Agentic Testing – What, Why, How?
UiPath has recently been named a leader in the Agentic Testing space by analysts including Gartner, Forrester and IDC. But what is Agentic Testing? Why do we believe it will change the way we test, how does it work today, and where is it heading? In this session, I will share UiPath’s Agentic Testing capabilities and how we see agents becoming an integral part of every tester’s day-to-day work.
Takeaways from this talk
- Learn what Agentic Testing is.
- See a demonstration of how Agentic Testing works in practice.
- Take away ideas for how it can enhance the way you work.
UX-First Testing: How Human-Centric Design Improves Automation Outcomes
Using the real-world transformation of a 15-year-old legacy monolith (CTS2), this session demonstrates why fixing the user journey is a prerequisite for reliable automation.
For over a decade, our business units operated under the constraints of a complex, “intertwined” system where manual workarounds were the only way to get things done. Improvements were high-risk, and the user experience was buried under legacy code. We knew we needed automation, but we realized a fundamental truth: Automation is a multiplier—if you automate a bad process, you just create more problems, faster.
By “de-lumping” the Work-Based Assessment (WBA) process from the monolith, we stopped looking at checkboxes and started looking at humans. We observed how users handled documents and interacted with providers to build a flow that mirrors real-world logic. This shift didn’t just reduce process time from weeks to days—it created a clean, predictable environment where automation could finally thrive.
Takeaways from this talk
- Don’t Pave the Cow Path: Automation is useless if it simply accelerates an inefficient or broken process. Fix the flow first, then automate for speed.
- Test the Flow, Not Just the Field: A tester’s true value lies in validating the information flow (the E2E user journey), not just confirming that an input field accepts text.
- Efficiency is a Quality Metric: Quality isn’t just the absence of bugs; it’s the presence of efficiency. When a system is designed for the human, automation becomes a natural byproduct, not a struggle.
- Think Beyond the Title: Don’t let “Tester” or “Analyst” limit your impact. Observe the user, understand the business, and deliver the solution that is best for the person at the other end of the screen.
Zero Trust for AI Agents: LLM Observability in Practice
When a chatbot starts talking about its mother, something has gone seriously wrong.
We’ve already seen rogue AI let customers order 200 nuggets, buy a car for $1, and now hold full-blown emotional conversations with unsuspecting users. These aren’t edge cases — they’re a preview of what happens when we deploy AI agents without a trust framework.
So how do you trust AI agents? You don’t. And that’s perfectly fine.
Cybersecurity solved this problem years ago with Zero Trust: never assume, always verify. It’s time we applied the same philosophy to our AI agents. In this talk, we’ll explore what a robust LLM observability framework looks like in practice — one that gives you real-time visibility into agent behaviour, kill switches when things go sideways, and fallback mechanisms before your agent goes rogue and starts ordering fast food in bulk.
Whether you’re building agentic systems or deploying them for customers, this session will show you that trusting AI isn’t about blind faith — it’s about designing for failure from day one.
Takeaways from this talk
- Zero Trust applies to AI — never assume an agent will behave correctly; always verify its outputs and actions in real time.
- Observability is your safety net — if you can’t see what your agent is doing, you can’t control it; LLM observability is non-negotiable.
- Kill switches save reputations — every agentic deployment needs a circuit breaker before it orders 200 nuggets on your customer’s behalf.
- Trust is earned through guardrails, not vibes — the goal isn’t to distrust AI forever, it’s to build the verification layer that earns confidence over time.
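The verify-then-act loop the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor’s implementation: the `ZeroTrustGuard` class, its policy thresholds, and the `handler`/`fallback` names are all assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class ZeroTrustGuard:
    """Never assume an agent action is safe: verify every action,
    and trip a kill switch after repeated policy violations."""
    max_violations: int = 3
    violations: int = 0
    killed: bool = False

    def allowed(self, action: dict) -> bool:
        # Hypothetical policy: cap order size and spend per action.
        return (action.get("quantity", 0) <= 10
                and action.get("amount_usd", 0) <= 500)

    def execute(self, action: dict, handler, fallback):
        if self.killed:                   # circuit breaker already tripped
            return fallback(action)
        if self.allowed(action):
            return handler(action)        # verified: let the action through
        self.violations += 1
        if self.violations >= self.max_violations:
            self.killed = True            # kill switch: stop trusting the agent
        return fallback(action)           # blocked: degrade gracefully


guard = ZeroTrustGuard(max_violations=2)
handler = lambda a: f"ordered {a['quantity']}"
fallback = lambda a: "escalated to a human"

print(guard.execute({"quantity": 2}, handler, fallback))    # verified, handled
print(guard.execute({"quantity": 200}, handler, fallback))  # blocked, violation 1
print(guard.execute({"quantity": 200}, handler, fallback))  # blocked, breaker trips
print(guard.killed)                                         # True
```

The point of the sketch is the shape, not the policy: every action passes through a verification gate, violations are observable counters rather than silent failures, and the fallback path exists before the agent ever misbehaves.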
Panel Discussion Speaker
Paul Maxwell-Walters
Paul is a senior QA Engineer and SDET, and a former QA Lead, with around 15 years’ experience. He has worked in and led QA teams in Australia and the UK across industries such as digital media, banking, education, energy and defence – most recently specialising in the startup space. He has also led QA efforts on Gen AI projects in Ed-Tech. He speaks and writes publicly, including an article on testing Gen AI systems published by Ministry of Testing.




