Learn from the Testing Experts

15th October, 2025

HOUSTON

>> Home

>> Register

>> Programme Schedule

Keynotes

Don Jackson

Technical Evangelist
Perforce Software

Sick of Framework Frustrations? Agentic AI Can Remove Them All

Even with modern frameworks like Selenium, Playwright, or Cypress, most testers still deal with fragile locators, flaky scripts, and too much time spent on test upkeep. This session introduces a smarter way forward: Agentic AI that works more like a human — no scripts, no frameworks, and no maintenance overhead.

Takeaways from this talk

  • Understanding runtime agentic AI test automation
  • Being able to articulate the benefits and differences between a co-pilot approach to test automation and a runtime agentic AI approach

Nagmani LNU

Director Of Quality Engineering
Swivel

From Structured Prompt Engineering to Autonomous QA Agents – The Next Frontier in Intelligent Software Testing

As Artificial Intelligence (AI) adoption accelerates, establishing best practices for leveraging AI tools in software quality engineering has become increasingly critical. Among these, structured prompt engineering has emerged as a key enabler for scaling AI implementations across the enterprise. Its influence extends beyond building AI-enabled applications—it is transforming everyday engineering activities, particularly test case generation, one of the most resource-intensive stages of the Software Development Life Cycle (SDLC).

In a recent study, we applied structured prompts to evaluate the effectiveness of an AI-based test management tool in creating end-to-end test scenarios for a payments application. The results were striking: with well-engineered prompts, the acceptance rate of generated API and UI test cases rose to 94%–100%, compared to just 11%–23% without them. This demonstrates that prompt engineering is not merely a support technique but a foundational practice for enterprise-wide AI adoption in software testing.
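The talk does not reproduce the prompts used in the study, but a structured prompt for test-case generation typically pins down role, context, output schema, and constraints rather than relying on a one-line request. A minimal illustrative sketch in Python (the section names and field names are hypothetical, not the ones from the study):

```python
# Illustrative only: a structured prompt template for AI-assisted
# test-case generation. The ROLE/CONTEXT/TASK/OUTPUT/CONSTRAINTS
# sections are a common pattern, not the study's actual prompts.

STRUCTURED_PROMPT = """\
ROLE: You are a senior QA engineer for a payments application.
CONTEXT: Feature under test: {feature}. Acceptance criteria: {criteria}
TASK: Generate end-to-end {test_type} test cases.
OUTPUT FORMAT: JSON list of objects with keys: id, title, steps, expected_result.
CONSTRAINTS: Cover the happy path, boundary values, and negative cases.
"""

def build_prompt(feature: str, criteria: str, test_type: str = "API") -> str:
    """Fill the template with feature-specific details."""
    return STRUCTURED_PROMPT.format(
        feature=feature, criteria=criteria, test_type=test_type
    )
```

The point of the structure is repeatability: the same skeleton can be reused across features, which is what makes acceptance rates comparable with and without engineered prompts.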

Building on this foundation, the next leap forward is the development of autonomous AI QA agents. These agents represent the future of intelligent software testing, working hand-in-hand with Quality Engineers to streamline and enhance day-to-day tasks. I am currently developing such an agent, capable of reading acceptance criteria, generating comprehensive test cases, and uploading them to a test management tool—leveraging AWS Lambda and Bedrock with LLMs such as Claude.
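The speaker's implementation is not public; as a rough sketch only, the flow described above (read acceptance criteria, generate test cases via Bedrock-hosted Claude, then upload) could look like the following in an AWS Lambda handler. The model ID, prompt wording, and upload step are assumptions, not the actual architecture:

```python
# A minimal sketch of the agent flow described above, assuming AWS
# Lambda and Amazon Bedrock with Claude. Model ID and prompt wording
# are hypothetical.
import json

MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"  # hypothetical choice

def build_bedrock_body(acceptance_criteria: str) -> str:
    """Build the Anthropic Messages API body that Bedrock expects for Claude."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 2048,
        "messages": [{
            "role": "user",
            "content": f"Generate test cases for:\n{acceptance_criteria}",
        }],
    })

def lambda_handler(event, context):
    """AWS Lambda entry point: acceptance criteria in, test cases out."""
    import boto3  # imported lazily so the module loads without AWS deps
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId=MODEL_ID, body=build_bedrock_body(event["criteria"])
    )
    payload = json.loads(response["body"].read())
    test_cases = payload["content"][0]["text"]
    # Final step in the described flow: push test_cases to a test
    # management tool via its API (the tool is not named in the talk).
    return {"statusCode": 200, "body": test_cases}
```

Keeping the prompt-building step separate from the Bedrock call makes the prompt itself unit-testable without any AWS access.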

Takeaways from this talk

I will guide the audience through this evolutionary journey of AI in testing:

  • Past (2010s): Early applications of machine learning in self-healing tests and low-code/no-code automation.
  • Present: Generative AI, powered by structured prompt engineering, delivering near-perfect accuracy in test case generation, supported by a detailed case study.
  • Future: Fully autonomous QA agents collaborating seamlessly with engineers to deliver next-generation QA automation. If the proof of concept is complete by the time of presentation, I will showcase a live demo; otherwise, I will present the detailed architecture of this solution as a preview of what’s to come.

By connecting today’s proven practices in prompt engineering with tomorrow’s vision of autonomous QA agents, this keynote will highlight both the immediate value and the transformative potential of AI in reshaping software quality engineering.

Featured Speakers

Bhaumik Shroff

Senior QA Engineer
HCSS

Evolve or Obsolete: Why QA Must Embrace QualityOps Now

In today’s fast-moving world of DevOps, cloud-native platforms, and AI-driven delivery, traditional Quality Assurance stands at a pivotal crossroads. Manual testing, script-heavy automation, and siloed QA practices can no longer match the speed, complexity, and scale of modern software development. The message is clear: evolve—or risk irrelevance.

This session explores the strategic leap from conventional QA to QualityOps—an engineering-centric paradigm that embeds quality throughout the software lifecycle. Drawing on over two decades of global quality engineering leadership, this talk will reveal how QualityOps enables QA professionals to shift left, shift right, and shift smarter. You’ll learn how to apply real-time quality signals, intelligent automation, and deep cross-functional collaboration to elevate QA from a tactical function to a strategic enabler.

Takeaways from this talk

Attendees will leave with actionable insights to modernize their automation strategy, redefine QA roles, and align quality efforts with business agility—ultimately future-proofing both their teams and their careers in the age of continuous delivery.

Monika Malik

Lead Data/AI Engineer
AT&T

Gen-AI for Smart Document Processing: OCR, Summarization & Translation

Advanced OCR and Text Extraction: Investigate how generative models can improve the accuracy and efficiency of OCR systems, especially for complex or non-standard documents.

Automated Summarization and Translation: Use Gen-AI to automatically summarize lengthy documents and translate text into different languages while preserving context and meaning.

Takeaways from this talk

How Gen-AI can reduce manual work, such as invoice processing, through effective OCR techniques

David Turan

Senior Software Engineer in Test
Crown Castle

Next-Gen Test Automation: Playwright, JavaScript and AI

Playwright with JavaScript already enables fast, resilient, cross-browser automation—but when combined with Artificial Intelligence (AI), it unlocks smarter testing capabilities. In this session, we’ll explore how AI can help with self-healing scripts, intelligent test case generation, defect prediction, and visual validations with Playwright and JavaScript.
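"Self-healing" in this context usually means falling back through a ranked list of alternative locators when the primary one no longer matches, instead of failing the whole test. A minimal, framework-agnostic sketch of that idea (shown in Python; it works with any object exposing Playwright-style `locator(...).count()`, and the selectors are hypothetical):

```python
# A minimal sketch of one "self-healing" technique: try a ranked list
# of locators and use the first that still matches an element, so a
# changed data-testid or CSS class does not break the test outright.

def find_with_healing(page, selectors):
    """Return the first locator in `selectors` that matches an element."""
    for sel in selectors:
        locator = page.locator(sel)
        if locator.count() > 0:
            return locator
    raise LookupError(f"No selector matched: {selectors}")

# Typical use inside a Playwright test (selectors are illustrative):
# button = find_with_healing(page, [
#     "[data-testid=submit]",   # preferred: stable test attribute
#     "button#submit",          # fallback: element id
#     "text=Submit",            # last resort: visible text
# ])
# button.click()
```

AI-assisted variants of this pattern generate or re-rank the fallback selectors automatically instead of hard-coding them.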

Kathleen Conway

Senior Engineering Manager
SimSpace

Past, Present and Future: The Value of Engineers in Test

The evolving landscape of software development has fundamentally transformed the role of test engineers. This presentation explores how emerging trends—including test automation, shift-left practices, and AI integration—are reshaping quality engineering roles and redefining how teams deliver value. We’ll examine strategies for leveraging established testing expertise while adapting to new paradigms that demand broader technical and collaborative skills from quality professionals.

Takeaways from this talk

  • Identify critical legacy testing skills to preserve – Attendees will learn which foundational QE competencies (like risk assessment, test design thinking, and quality advocacy) remain essential even as tools and processes evolve.
  • Map emerging technical and collaborative skills – Participants will understand the new skill sets required for modern quality roles, including API testing, CI/CD integration, cross-functional collaboration, and working with AI-assisted testing tools.
  • Create a personal transition roadmap – Attendees will leave with a practical framework for assessing their current skills, identifying gaps, and planning their professional development to thrive in the evolving quality landscape.

Taisiia Bahbouche

QA Engineer II
Thryv

Build a Robust Quality Roadmap

This topic explores the process of creating a clear, actionable roadmap to guide quality assurance and improvement initiatives within an organization. It focuses on aligning QA activities with business goals, product strategies, and customer expectations to deliver measurable outcomes. Attendees will learn how to define quality objectives, prioritize efforts, allocate resources effectively, and anticipate risks to ensure consistent quality over time.

Takeaways from this talk

  • Resource & Tool Mapping: Identifying the people, processes, and technologies needed to support quality initiatives.
  • Risk Assessment: Anticipating challenges such as shifting requirements, evolving standards, or limited budgets.
  • Continuous Improvement: Building feedback loops, tracking performance metrics, and iterating on the roadmap to stay relevant.

Panel Discussion Speakers

Vinayak Sen

Product Manager
Slalom

It’s been over 70 years since Alan Turing defined what many still consider to be the ultimate test for a computer system — can a machine exhibit intelligent behavior that is indistinguishable from that of a human? Originally called the imitation game, the Turing test involves having someone evaluate text conversations between a human and a machine designed to respond like a human. The machine passes the test if the evaluator cannot reliably tell human-generated text from machine-generated text. Although the Turing test generally serves as a starting point for discussing AI advances, some question its validity as a test of intelligence. After all, the results do not require the machine to be correct, only for its answers to resemble those of a human.

Whether it’s due to artificial “intelligence” or imitation, we live in an age where machines are capable of generating convincingly realistic content. Generative AI does more than answer questions: it writes articles and poetry, synthesizes human faces and voices, creates music and artwork, and even develops and tests software. But what are the implications of these machine-based imitation games? Are they a glimpse into a future where AI reaches general or super intelligence? Or is it simply a matter of revisiting or redefining the Turing test? Join Tariq King as he leverages a live audience of software testing professionals to probe everything from generative adversarial networks (GANs) to generative pre-trained transformers (GPT). Let’s critically examine the Turing test and more because it’s judgment day — and this time, we are the judges!

Parth Saxena

Software Engineering Lead
JPMorganChase

Parth Saxena is an industry leader in software engineering with extensive experience driving innovation across diverse domains and emerging technologies. Parth is passionate about the practical, real-world use of Generative AI to solve complex enterprise challenges. His recent work explores intelligent agents, RAG architectures, and LLM automation.

Vinayak Sen

Jorge Hernandez

IT Team Manager, Salesforce nCino
BOK Financial

Software Quality Assurance executive leader with more than 18 years of experience creating and delivering automation, performance, end-to-end, and web test strategies in complex industries and organizations. He holds three technology patents and PMP, ISTQB, and AWS certifications, and is passionate about technology and about building high-performance Agile teams using trust as the foundation of long-lasting professional partnerships.

Jessica Mosley

Director of Quality Engineering
TrustCloud

Meet Jessica Mosley, a dynamic and experienced leader with over 10 years of experience leading teams and driving organizational growth. With more than 20 years in software development, IT, customer experience, and DevOps, Jessica has developed a keen eye for identifying opportunities to improve quality and efficiency in software development.

Jessica is passionate about creating a better workplace and fostering human connection through the adoption of a quality mindset. She firmly believes that quality is not just a metric to be measured, but a philosophy that should be ingrained in every aspect of an organization’s culture.
