Learn from the Testing Experts
21st November 2025
MANILA
Keynote Speakers
From Quality Assured to People-Assured: The Human Core of AI Testing
As Artificial Intelligence reshapes how we build and test software, one truth becomes clearer than ever: quality is still a human story.
In this keynote, we will explore the evolution of testing from Quality Assured to People-Assured, a movement that redefines what assurance means in the age of intelligent automation.
Through vivid stories and real use cases, this keynote reveals how AI-driven tools are transforming QA pipelines with accelerated defect detection, self-healing test scripts, and predictive analytics, while reminding us that empathy, ethics, and leadership remain the cornerstones of innovation.
At the center of this talk lies the idea of People-Assured Quality, a framework built on three pillars: Ethical Intent, Empathic Design, and Evolving Leadership. It’s a call to design systems that don’t just work well, but mean well.
The session also introduces the Human-in-the-Loop Testing Framework, a real-world model where AI and QA professionals collaborate in continuous learning cycles: AI accelerates insight, while humans assure meaning, trust, and accountability.
Blending thought leadership, technical depth, and human reflection, this keynote challenges QA professionals and leaders alike to embrace AI not as competition, but as an ally that amplifies our most human qualities: curiosity, conscience, and care.
Because the future of testing isn’t machine-led or human-led. It’s purpose-led.
Takeaways from this talk
- Reimagine QA in the era of AI by understanding the evolution from automation to assurance guided by human purpose.
- Apply the three pillars of People-Assured Quality – Ethical Intent, Empathic Design, and Evolving Leadership – to create AI systems users can trust.
- Learn from the Human-in-the-Loop Framework, a model for blending AI-driven intelligence with human-driven insight to achieve faster, fairer, and more meaningful assurance.
Built on Quality: The Rise of Full Stack Engineers with Testing in Their DNA
In the early days of software testing, QA was often seen as the last gate before release. That mindset no longer works in today’s fast-paced digital world.
This session shares how the Quality Engineer (QE) role is transforming software delivery — blending development, automation, and testing into a unified discipline.
Drawing from real experiences leading teams and building AI-powered testing platforms, Michelle Lagare explores how QEs are evolving into full stack engineers who code, automate, and design with quality in their DNA.
She’ll discuss how organizations can create capacity for deeper testing, integrate automation within sprints, and use AI to accelerate feedback and decision-making.
Participants will leave with a clear roadmap for scaling quality across their teams — not by adding testers, but by empowering engineers to build software built on quality.
Takeaways from this talk
- Quality Engineering is the new standard. Understand why QA is no longer a phase, but a mindset embedded in every part of the development process.
- Automation belongs inside the sprint. Learn how integrating automation with CI/CD shortens feedback loops and increases delivery confidence.
- AI as an enabler, not a replacement. See how tools like AIQE help teams move from repetitive scripting to intelligent, insight-driven testing.
- Career paths are changing. Discover how testers are becoming full stack engineers — bridging code, testing, and DevOps.
- Quality culture scales organizations. Learn practical steps to shift from blockers to builders, and from manual effort to measurable impact.
Featured Speakers
Agentic AI for Testing: From Static Scripts to Autonomous Quality Engineers
As software systems grow in complexity and release cycles accelerate, traditional testing approaches struggle to keep up.
Enter Agentic AI — a new frontier where autonomous, goal-driven agents perform testing tasks with minimal human intervention.
In this talk, we’ll explore how Agentic AI is reshaping the software testing landscape, allowing for dynamic, intelligent, and context-aware testing strategies that are adaptive, continuous, and strategically aligned with business outcomes.
Takeaways from this talk
- What is Agentic AI (in the context of testing)?
We’ll define Agentic AI and differentiate it from conventional AI/ML testing tools, and you’ll learn what makes a system agentic: goal-driven planning, autonomous action, and adaptation with minimal human intervention.
- Types of Testing Agents in Practice Today
Explore real-world agentic testing agents (a minimal agent-loop sketch follows this list), including:
Exploratory Testing Agents – autonomously explore UI/UX paths using LLMs
Test Case Generation Agents – generate and prioritize test cases from specs, user stories, or code
Bug Reproduction & Triage Agents – read logs, understand failure patterns, and recreate issues
Security Agents – mimic adversarial behavior for penetration testing and vulnerability exploration
Self-healing Test Agents – maintain flaky tests or repair broken test scripts automatically
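To make "agentic" concrete, here is a minimal, hypothetical exploratory-agent loop in Python with Playwright: observe the page, pick an action, act, and check simple invariants. The target URL, the random policy (standing in for an LLM planner), and the naive oracle are all illustrative assumptions, not a production design.

```python
# A minimal, illustrative exploratory-testing agent loop.
# Assumptions: Playwright is installed (pip install playwright), TARGET_URL is
# a placeholder, and random choice stands in for an LLM-driven planner.
import random
from playwright.sync_api import sync_playwright

TARGET_URL = "https://example.com"  # placeholder application under test

def observe(page):
    """Perception step: collect the elements the agent could act on."""
    return page.locator("a, button").all()

def check_invariants(page, findings):
    """Oracle step: flag states that look like failures."""
    if "error" in page.title().lower():
        findings.append(f"suspicious page title at {page.url}")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("pageerror", lambda exc: print("JS error:", exc))
    page.goto(TARGET_URL)
    findings = []
    for _ in range(20):                      # bounded exploration budget
        actions = observe(page)
        if not actions:
            break
        target = random.choice(actions)      # an LLM planner would choose here
        try:
            target.click(timeout=2000)       # act; dead ends are expected
        except Exception:
            continue
        check_invariants(page, findings)
    browser.close()
    print("Findings:", findings or "none")
```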
- State of the Art: Tools & Platforms
We’ll review key tools and platforms driving agentic testing forward, such as:
Diffblue (for code-level test generation with AI)
AutonomIQ (now Sauce Labs) – for autonomous test generation and maintenance
TestGPT / ChatGPT Code Interpreters – leveraged for LLM-assisted bug diagnosis
ReTest, Cerberus Testing, and OpenAgents – open-source projects advancing multi-agent testing environments
Google’s internal agentic prototypes that assist SREs with automated incident triage and rollback decisions
- How Are Organizations Using Agentic AI for Testing?
We’ll walk through real use cases from early adopters.
- When and When Not to Use Agentic AI in Testing
We’ll weigh the trade-offs, including:
Data sensitivity and hallucination risks with LLM agents
Scalability and observability challenges for long-running agents
Situations where deterministic automation still outperforms agents (e.g., compliance audit trails)
Best practices to pair agents with human testers in hybrid testing strategies
- The Future: From Testers to Test Orchestrators
We’ll close by discussing where this is headed — including agent orchestration platforms, continuous learning from production, and integration with DevOps observability tools — and how testers will evolve into supervisors and strategists in an agent-powered ecosystem.
Performance Testing Trends in 2025: What’s Next and Why It Matters
In this talk, you will discover that performance testing is no longer just about load and response times; it is evolving into a smarter, faster, and more user-centric discipline. In 2025, we’re witnessing an exciting shift: AI-driven insights and real-time analytics are transforming how we identify bottlenecks, cloud-based test environments are enabling effortless scalability, and low-code/no-code tools are accelerating test creation like never before. At the same time, practices such as shift-left and shift-right testing are helping teams embed performance checks across the entire lifecycle. Add to that a growing emphasis on cybersecurity and accessibility, and performance testing is quickly becoming a cornerstone for delivering seamless digital experiences.
Takeaways from this talk
- A clear view of the latest advancements redefining performance testing.
- Insights into how these changes affect your role and skills as a performance tester.
- Why prioritizing user experience is no longer optional—it’s the new benchmark for success.
Testing the Untestable
Have you come across difficult, complex scenarios that require multiple communication channels and external interactions? Some cost real money to test against, some are not available until after a certain time, and some are still being built! If your system is as complex as Alaska Airlines’, take a page from how they decoupled their test environments and let their QA engineers and developers test independently, yet against the same set of data, to ensure stability and quality across their myriad applications.
Takeaways from this talk
- Service Virtualization saves time, money, and hair! (A minimal stub sketch follows this list.)
- Test Data Management doesn’t need to replicate the entire database.
- Environment Management allows easy configuration and management of environments to build synthetic and hybrid test scenarios.
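As a taste of the service-virtualization idea in the first takeaway, here is a minimal sketch: a local HTTP stub that impersonates an expensive or not-yet-built dependency so QA and developers can test against the same canned data. The endpoint, port, and payload are hypothetical, and real service-virtualization tools add recording, latency simulation, and stateful behaviour.

```python
# A minimal service-virtualization stub: impersonates a third-party API so
# tests can run without the real (costly or unfinished) dependency.
# Assumptions: the /fares endpoint, port, and payload are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"/fares?route=SEA-ANC": {"fare": 129.00, "currency": "USD"}}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)          # path includes the query string
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "no stub"}).encode())

if __name__ == "__main__":
    # Point the app under test at http://localhost:8089 instead of the real API.
    HTTPServer(("localhost", 8089), VirtualService).serve_forever()
```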
Leveraging GenAI for Test Automation
Generative AI (GenAI) is fundamentally changing how we approach software quality assurance. It’s moving test automation away from a static, rule-based process and toward a more dynamic and intelligent system. By using models like large language models (LLMs), GenAI can grasp context and create new content, which is a total game-changer for the field. This technology helps us tackle common problems like the slow pace of creating tests, a lack of comprehensive test coverage, and the high cost of maintaining them. The result is faster and more efficient software delivery.
How GenAI Can Help with Automation Testing
GenAI brings several powerful benefits that significantly improve the test automation process; short illustrative sketches of each follow the list:
- Faster Test Script Creation. Instead of manually writing the same old code, testers can now use natural language to tell GenAI what to do. For example, you could simply say, “test the user login process,” and GenAI will analyze your requirements and existing code to generate an executable test script. This works with popular frameworks like Selenium and Playwright, dramatically cutting the time and effort needed to write tests so testers can focus on more important, complex tasks.
- Smarter Test Data Generation. Generating diverse and realistic test data can be a real headache. GenAI can solve this by creating synthetic data that closely mirrors real-world situations, including the tricky edge cases that are often missed. This ensures much better test coverage and finds bugs that might otherwise slip through.
- Tests That Fix Themselves. Keeping tests updated is a major pain point. When a user interface changes, test scripts often break because their element locators are no longer valid. GenAI can be used to create self-healing tests that automatically detect these changes and update the script. This saves a ton of time on maintenance and makes your tests much more reliable.
- Better Test Coverage and Accuracy. By analyzing your codebase, looking at past bug reports, and even observing user behavior, GenAI can intelligently spot high-risk areas and create targeted tests for them. It goes beyond basic, predefined tests and generates a wider variety of scenarios, ensuring that crucial features and less-used parts of the code are thoroughly checked and helping to catch bugs before they ever reach a live product.
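First, a sketch of natural-language test generation, assuming the OpenAI Python SDK; the model name, prompt, and output handling are illustrative, and generated code should always be reviewed before it is trusted:

```python
# Sketch: turn a plain-English requirement into a Playwright test via an LLM.
# Assumptions: OPENAI_API_KEY is set; "gpt-4o" is one example of a capable model.
from openai import OpenAI

client = OpenAI()
requirement = "Test the user login process: valid credentials reach the dashboard."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You write Python Playwright tests. Output only runnable code."},
        {"role": "user", "content": requirement},
    ],
)
generated = response.choices[0].message.content
with open("test_login_generated.py", "w") as f:
    f.write(generated)  # review, then run under pytest before committing
```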
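Second, synthetic test data. A generator can mirror production shape while deliberately adding the edge cases that get missed; this sketch uses the Faker library, with hypothetical field names, and adds two hand-written extremes of the kind a GenAI data generator would surface automatically:

```python
# Sketch: synthetic users that mirror production shape, plus deliberate edge cases.
# Assumes the Faker library (pip install faker); field names are hypothetical.
from faker import Faker

fake = Faker()

def make_user():
    return {"name": fake.name(), "email": fake.email(),
            "signup": fake.date_time_this_year().isoformat()}

users = [make_user() for _ in range(100)]
# Edge cases a GenAI-style generator is good at surfacing, added by hand here:
users += [
    {"name": "", "email": "no-at-sign.example.com", "signup": None},           # invalid
    {"name": "Ünïcödé Näme " * 20, "email": "x@y.z", "signup": "1970-01-01"},  # extremes
]
```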
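Third, the self-healing idea reduced to its core: try the primary locator, fall back to alternates recorded when the test last passed, and log the repair. In a real tool an LLM or DOM-similarity model would propose the fallback candidates; the selectors here are hypothetical:

```python
# Sketch of a self-healing click: fall through candidate locators and report
# which one worked so the script can be updated.
from playwright.sync_api import Page

def healing_click(page: Page, candidates: list[str]) -> str:
    """Click the first candidate locator that still matches; return it."""
    for selector in candidates:
        if page.locator(selector).count() > 0:
            page.locator(selector).first.click()
            return selector
    raise AssertionError(f"No candidate matched: {candidates}")

# Usage (hypothetical selectors): the old id plus resilient fallbacks.
# used = healing_click(page, ["#login-btn", "button[type=submit]", "text=Log in"])
# if used != "#login-btn":
#     print(f"Locator healed to {used!r}; update the script.")
```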
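Finally, risk-targeted coverage in miniature: a crude score combining change frequency with defect history, the kind of signal a GenAI coverage assistant would weigh automatically. The file names and counts are illustrative stand-ins for real repository history:

```python
# Sketch: rank files by a naive risk score so the riskiest get tests first.
# The churn and bug counts are hypothetical stand-ins for repo/bug-tracker data.
churn = {"checkout.py": 42, "search.py": 7, "profile.py": 3}      # commits this quarter
past_bugs = {"checkout.py": 9, "search.py": 1, "profile.py": 0}   # defects filed

risk = {f: churn[f] * (1 + past_bugs.get(f, 0)) for f in churn}
for filename, score in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{filename}: risk={score}")  # highest-risk files get targeted tests first
```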
The Unspoken Challenge: How Your Teams can Test And Evaluate Gen AI and the Software that Uses It
There has been much talk in QA about using Gen AI tools in our work to generate test cases and automation, yet almost nothing on how testers can test the non-deterministic outputs of AI and evaluate their quality. Typically, direct testing of Gen AI is ignored and left to AI engineers and data scientists.
This is a grave mistake. As products using Gen AI like ChatGPT become mainstream and used by all, testers need to learn how to write and automate tests for them, and fast. My talk covers techniques, tools, and workflows used to validate the outputs of AI models and agents, including correct prompting, standard outputs, evals, and LLM-as-Judge. I will draw on my recent experience as QA lead at a Gen AI ed-tech startup, along with other approaches and use cases, to give information, tips, and one or more demos to help you and your teams meet the challenge of testing Gen AI-powered applications.
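As a flavour of the eval techniques the talk covers, here is a minimal LLM-as-Judge sketch, assuming the OpenAI Python SDK; the judge model, rubric, policy text, and pass threshold are all illustrative, and production eval suites add calibration, multiple judges, and aggregate scoring:

```python
# Sketch: a judge model scores a non-deterministic answer against a rubric.
# Assumptions: OPENAI_API_KEY is set; policy, question, and threshold are made up.
from openai import OpenAI

client = OpenAI()
answer_under_test = "Refunds are issued within 30 days of purchase."  # app output

judge_prompt = f"""Rate the ANSWER for factual consistency with the POLICY
on a scale of 1 to 5. Reply with only the number.
POLICY: Refunds are available for 30 days with a receipt.
QUESTION: Summarise the refund policy in one sentence.
ANSWER: {answer_under_test}"""

verdict = client.chat.completions.create(
    model="gpt-4o",  # illustrative judge model
    messages=[{"role": "user", "content": judge_prompt}],
)
score = int(verdict.choices[0].message.content.strip())  # naive parse; harden in practice
assert score >= 4, f"LLM judge scored {score}/5 - investigate this output"
```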
Panel Discussion Speakers
Paul Maxwell-Walters
Paul is a senior QA Engineer and SDET, and a previous QA Lead, with around 15 years’ experience. He has worked in and led QA teams in Australia and the UK in industries such as digital media, banking, education, energy, and defence, most recently specialising in the startup space. He has worked in and led QA efforts in Gen AI projects in ed-tech. He speaks and writes publicly, including an article on testing Gen AI systems published by the Ministry of Testing.
Allen Borromeo
With 12 years of experience in the technology sector, I have developed and implemented effective strategies, plans, and procedures that drive organizational success. I specialize in setting comprehensive goals focused on performance and growth, while establishing policies that align with the company’s culture and vision. As a leader, I motivate teams to achieve maximum performance and dedication. I consistently evaluate performance through data analysis and metrics interpretation, and regularly report key insights and updates to the CEO. My experience also includes active participation in business expansion initiatives and managing strong relationships with partners and vendors.
Ebenezer Uy
I’m a technology leader and innovator with 20+ years of driving digital transformation, automation, and AI adoption across global enterprises. In my role as VP of Technology Delivery at Yondu, I help organizations—especially in banking, insurance, and other regulated industries—bridge the gap between vision and execution, turning complex strategies into measurable results.
I partner closely with senior executives to scale digital capabilities, modernize delivery organizations, and unlock new revenue streams through enterprise automation and AI-driven solutions. Along the way, I’ve built and led high-performing teams that thrive in dynamic, fast-changing environments.
Lately, my work and research have been focused on Automation and AI in banking and finance—exploring smarter ways to manage risk, enhance efficiency, and create customer value—and on AI safety, fairness, and explainability, because innovation should always be paired with responsible governance.
Recognized as an innovation awardee, speaker, and trusted advisor, I’m passionate about shaping the future of AI-driven enterprises and mentoring the next generation of leaders who will take transformation even further.
Jose Ferdinand Espero
Software Testing / Test Engineering / Quality Engineering Director with a demonstrated history of working in the information technology and services industry, leading, coaching, and managing test teams and deliveries for Production, Implementation, and Escalations (L3); Patch and Release testing, inclusive of Regression and Defect Retesting; Integration Testing of different products; and Test Automation, frameworks, and tooling encompassing Performance Testing, Automated Security Testing, automation frameworks for functional testing, and automated solutions (Build Verification Tests, Daily Health Checks, Scheduled/On-Demand Regression) for QA environment build and deployment through CI.
Currently managing a team of 150+ resources across different regional locations (both permanent and contingent) with a thrust of ensuring continuous testing throughout the SDLC, providing an environment of collaboration and accountability that empowers every team member to be part of a high-performing team.
A Certified Scrum Master and Certified Scrum Product Owner, also skilled in HP UFT, Agile methodologies, Test Automation, VBScript, and Test Management. A strong program and project management professional with a Master of Business Administration degree from Philippine Christian University and a Bachelor of Library and Information Science, IT Track, from the University of the Philippines – Diliman.
Mark Anthony S. Macaranas
Mark Anthony S. Macaranas is a Test Manager at Security Bank Corporation and a seasoned software quality professional with more than twenty years of experience in Test Management, Automation, and Quality Governance. He has worked across multiple sectors — including Information Technology, Banking, and Oil and Gas — and has undertaken international assignments that have informed his approach to delivering large-scale, regulated systems.
His technical specialisms include Test Strategy, Automation, and Performance Testing. He also has experience with AI-related testing methods, including validation of natural-language-processing and computer-vision projects, and has led efforts to develop quality-governance standards for enterprise systems.
Throughout his career Mr. Macaranas has authored Software QA Playbooks and designed and delivered capability-building programmes. He has presented at international conferences, holds ISTQB and Scrum Master certifications, and remains committed to advancing testing practices through the practical adoption of emerging technologies.