Learn from the Testing Experts
1st October, 2025
ATLANTA
Keynote Speakers
AI-Assisted Security Testing (AIAST)
As software teams race to deliver faster and more frequently, traditional security testing often gets left behind — too slow, too noisy, and too disconnected from the development process. That’s where AI-Assisted Security Testing (AIAST) comes in. It’s a smarter, faster way to test security by bringing artificial intelligence into the DevSecOps pipeline.
In this talk, we’ll explore how AI is helping security teams detect vulnerabilities earlier, reduce false positives, and even suggest fixes — all in real time. Whether it’s scanning code, watching runtime behavior, or understanding patterns in past incidents, AIAST brings automation and intelligence together to make security testing both effective and developer-friendly.
We’ll walk through real examples of how AI is being used in the field today — from AI-powered code reviews to predictive models that anticipate security issues before they happen. You’ll leave with a clear picture of how AIAST is changing the way we think about application security, and how it can be a game-changer for your organization too.
Takeaways from this talk
- What is AIAST really about? It’s about using AI to make security testing smarter — from spotting issues in code to predicting future risks — and doing it all without slowing down your development cycle.
- Why do we need AI in security testing? Manual reviews and traditional tools can’t keep up. AI helps reduce false alarms, speed up analysis, and prioritize the real threats — saving teams time and headaches.
- How does it actually work? AI is now helping in all major types of testing — static (code), dynamic (runtime), and interactive — by learning patterns, analyzing behaviors, and flagging issues that matter.
- What are the real benefits?
• Developers get faster, more accurate feedback.
• Security teams get better visibility and fewer distractions.
• Organizations ship secure software without delays.
- Real-life examples you’ll see:
• AI tools that review code and point out security flaws like a senior engineer.
• Machine learning models that spot unusual behavior in apps before anyone else does.
• Systems that learn from past bugs to catch new ones proactively.
- What’s next for AIAST? We’re heading toward even more automation — think AI writing security patches or acting as your real-time security advisor in the IDE. But we’ll also need to stay mindful of explainability, bias, and trust in AI-driven decisions.
Smarter Testing for AI-Generated Code
AI is quickly becoming a key part of how we create software, but the ways we test software haven’t kept up. Testing approaches designed for human-written code don’t transfer well to AI-generated code, which can be unpredictable, sometimes lacks full context, and changes quickly. In this talk, we’ll look at the new methods and system architectures that leading companies are starting to adopt, which make it possible to reliably validate AI-generated code despite its unpredictability.
We’ll walk through the limitations of current approaches like static code analysis and manual code reviews when applied to AI-generated code. Then, we’ll introduce new paradigms such as traffic replay, temporary test setups, and Model Context Protocol (MCP) servers that provide AI with real-world production behavior to reason against. These methods help us test how the code actually behaves, catching problems and wrong assumptions before they reach users, even if an AI assistant wrote the code.
This session will include real-world case studies from large enterprises, examples of how to incorporate deterministic feedback into your CI/CD pipelines, and frameworks for evaluating AI readiness in your dev stack. By the end of the session, you’ll be equipped with a mental model for evolving your software delivery lifecycle to handle AI as a QA collaborator—not just a tool—and to ship safer, smarter systems.
Takeaways from this talk
- AI is generating an unprecedented wave of code that is full of faulty assumptions.
- Improving software quality requires testing to become part of the AI’s feedback loop.
- Creating an AI testing feedback loop requires new techniques and processes like traffic replay, just-in-time environments, and direct MCP integration.
- Adopting AI testing isn’t just a tooling decision. Success hinges on embracing new workflows, revising CI/CD practices, and aligning teams around deterministic validation.
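To make the traffic-replay idea concrete, here is a minimal sketch of deterministic replay as a validation gate. The capture format and handler names are illustrative assumptions, not the speaker’s actual tooling: recorded production request/response pairs are replayed against a candidate implementation, and any divergence is reported deterministically.

```python
# Hypothetical sketch of traffic replay as a deterministic quality gate.
# The fixture format and handler signature are assumptions for illustration.

def replay(captured, handler):
    """Replay captured production requests against a candidate handler
    and collect every response that diverges from the recorded one."""
    failures = []
    for record in captured:
        actual = handler(record["request"])
        if actual != record["response"]:
            failures.append({
                "request": record["request"],
                "expected": record["response"],
                "actual": actual,
            })
    return failures

# Recorded traffic (in practice, sampled from production and stored as fixtures)
captured = [
    {"request": {"path": "/price", "qty": 3}, "response": {"total": 30}},
    {"request": {"path": "/price", "qty": 0}, "response": {"total": 0}},
]

# Candidate (possibly AI-generated) implementation under test
def candidate_handler(req):
    return {"total": req["qty"] * 10}

failures = replay(captured, candidate_handler)
print(f"{len(failures)} divergences")  # prints "0 divergences"
```

In a real pipeline, the replay step would run in CI so that any divergence from recorded production behavior blocks the merge, giving the AI-assisted workflow the deterministic feedback loop the talk describes.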
Agentic Success by Design: A Framework for Feasibility and ROI
The rapid rise of AI and agentic systems has sparked an explosion of ideas—but most of these initiatives fail to deliver measurable business outcomes. In fact, recent research from MIT shows that 95% of AI pilots generate little to no impact on P&L. The question leaders are asking is: how do we separate high-value, feasible use cases from innovation noise?
In this session, Costa introduces the Agentic Feasibility Framework (AAF)—a practical, data-driven methodology designed to help organizations evaluate, prioritize, and execute AI opportunities with confidence. The AAF provides business leaders, product owners, and executives with the tools to rigorously qualify ideas, score feasibility, and systematically prioritize the agent development backlog—dramatically improving the likelihood of business success.
Takeaways from this talk
Participants will learn how to:
- Rigorously qualify use cases for AI feasibility and business value
- Apply the AAF scoring model to evaluate opportunities objectively
- Apply best practices for prioritizing an AI/Agent backlog with confidence
- Monitor, measure, and optimize ROI throughout agent development
This session offers a pragmatic roadmap to move beyond hype and systematically increase the impact of your AI initiatives.
Featured Speakers
Hyperautomation: The Future of Scalable and Intelligent Test Automation
Hyperautomation is no longer just a buzzword—it’s a strategic shift that blends RPA, AI/ML, low-code platforms, and intelligent testing frameworks to achieve end-to-end automation. In the evolving QA landscape, hyperautomation enables teams to extend test coverage, reduce manual effort, and build more adaptive test suites that can evolve with the pace of agile and DevOps practices.
This session explores how test automation fits within a broader hyperautomation strategy, the role of orchestration tools, and how quality assurance can act as a catalyst for business-wide digital transformation. We’ll cover the intersection of test automation with business rules, decisioning, and process intelligence.
Takeaways from this talk
- Understand Hyperautomation and its impact on test automation maturity models.
- Learn how to integrate AI/ML, RPA, and orchestration into your testing stack.
- Explore real-world use cases where hyperautomation improved testing scalability and efficiency.
- Identify tools and strategies to build resilient and self-healing test automation suites.
- See how QA can drive intelligent automation beyond IT and into business operations.
Breaking the Coding Barrier: Rethinking Software Test Automation Through the End User’s Eyes
Today’s testers are overwhelmed by a flood of automation tools—each promising efficiency yet often requiring coding expertise. But not every tester has an engineering background, nor should they need one. When did the assumption take hold that building automation requires programming knowledge? And why must testers juggle multiple testing frameworks just to cover different technologies?
In this session, we’ll explore how the Keysight Eggplant testing platform challenges these conventions. Unlike traditional tools that are built for programmers, Eggplant takes a fundamentally different approach—focusing on automation from the end user’s perspective, not the programmer’s.
If you’re ready to rethink what test automation can and should be—and who it’s for—join me for a fresh perspective that could transform the way you approach testing.
Takeaways from this talk
- End User Experience (EUX)
- Machine Learning
- Automation Intelligence
- Exploratory Testing
- Technology Agnostic
- Device Agnostic
- Non-Invasive
- End-to-End Testing
Shifting from Quality Engineering to Quality Intelligence: Why and How?
As software systems grow more complex, traditional Quality Engineering practices — focused on test coverage, automation, and defect detection — are no longer enough to meet business agility and customer expectations. The future belongs to Quality Intelligence (QI): the strategic use of data, AI, and predictive insights to continuously measure, learn, and optimize quality across the entire product lifecycle.
Takeaways from this talk
- Quality Intelligence is the next evolution of Quality Engineering — shifting from reactive testing to proactive, data-driven quality practices.
- Data is the new backbone of quality — use telemetry, production data, and pipeline metrics to drive smarter decisions.
- AI/ML will be critical to predict risks, detect anomalies early, and enable continuous learning in your quality processes.
- Transitioning to QI is a journey — start by improving data literacy, observability, and building a culture that embraces insights, not just test cases.
- Testers and quality professionals will transform from executing tests to becoming quality strategists and data-informed advisors.
From Local to Scalable: Automating API Load Testing with Docker, K6, and GitHub Actions
This talk is a technical walkthrough of building a lightweight, containerized API load testing framework using K6, Docker, and GitHub Actions. You’ll learn how to set up a test harness that runs in CI, validates runtime behavior through smoke tests, and benchmarks API performance across environments. The focus is on practical, reproducible methods. No theory or fluff. Just real lessons and tactics you can apply immediately.
Takeaways from this talk
- How to build a Dockerized K6 test harness for API performance testing
- Using smoke tests to validate your test stack before load execution
- How to benchmark results across dev and staging environments
- Lessons learned integrating this into a GitHub CI workflow
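A minimal sketch of the kind of CI workflow the talk describes might look like the fragment below. The job names, file paths, and Docker Compose service are assumptions for illustration, not the speaker’s actual setup; it runs a quick smoke test first so the full load test only executes against a known-good stack.

```yaml
# Illustrative sketch only: names and paths are assumed, not from the talk.
name: api-load-test
on: [pull_request]

jobs:
  load-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Start the API under test (assumed to be defined in docker-compose.yml)
      - run: docker compose up -d api

      # Smoke test first: a short, low-load run that fails fast if the stack is broken
      - run: docker run --rm -i --network host grafana/k6 run - < tests/smoke.js

      # Full load test, gated on the smoke test passing
      - run: docker run --rm -i --network host grafana/k6 run - < tests/load.js
```

Because each step fails the job on a non-zero exit code, the smoke test acts as the gate the abstract mentions: the expensive load run never starts against a misconfigured environment.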
Changing Software Quality with AI
Technology drives change while simultaneously being the thing that is changed, and the emergence of AI-driven development is no exception. AI-driven development demands AI-powered testing. Otherwise, you risk being swept up in the chaos, pushed aside by competitors, or left behind and forgotten. This session shows how leveraging new AI approaches alongside test management best practices can save you time, reduce manual effort, and optimize your resources, all while keeping up with today’s accelerating testing pace.
Takeaways from this talk
- Understand the critical role of AI in modern software development and why AI-powered testing is essential to keep pace.
- Learn how to integrate AI technologies with established test management best practices.
- Gain actionable insights into reducing manual effort and optimizing testing resources using AI.
Panel Discussion Speakers
Valerie Terrell
Experienced Quality Assurance Director & Enterprise Quality Management advocate with a demonstrated history of working in the software & product development industry. Skilled in Databases, Management, Large Scale System Integrations, and Agile Methodologies, and a leader in Enterprise Quality Assurance program practice & Salesforce. Strong quality assurance professional with a B.S. in Mathematics from Tift College of Mercer University. Currently working toward a Master’s degree.
Prasad Banala
Enterprise QA leader with deep expertise in test architecture, automation, performance engineering, and DevOps. Drives CI/CD quality checks, cloud-native testing on GCP, and ML integration. Leads test strategy and governance across domains. Frequent tech speaker and blogger with hands-on experience across diverse tools, platforms, and business verticals.
Bhiku Swami
I am a technology leader with a proven track record of aligning IT and business to achieve strategic objectives. With experience across Healthcare, Banking & Financial Services, and Telecom & Media, I specialize in building technology roadmaps, implementing enterprise-level engineering and DevOps frameworks, and delivering cloud-native platforms.
My expertise includes cloud migration, multi-cloud infrastructure management, and process automation driven by ROI. I have successfully led AI/ML initiatives across the SDLC and ensure delivery excellence through KPIs, SLAs, and performance metrics. I also manage budgets, P&L, and vendor relationships to support scalable, results-oriented technology delivery.
Sanjay Sunkara
Accomplished professional offering rich experience in directing full cycles of complex IT projects in industries such as Banking & Financial Services, Gaming, and e-Learning. Experienced in both traditional SDLC and Agile methodologies.
Skilled communicator with excellent interpersonal skills, a keen eye for detail, a strong business sense and proven leadership abilities. Excelled in building, managing, training and motivating high performance teams and interacting with people at all levels of an organization. Resourceful & analytical individual with the ability to work well within aggressive timelines.
Stephen Burlingame
Experienced Director Of Quality Assurance with a demonstrated history of delivering high-quality releases and user-friendly software.
If you want to build a Testing Center of Excellence from the ground up while fostering a forward-thinking QA culture, we should talk. If you want to turn testers into Quality Assurance Engineers, we should also talk about that. If you want to create a high-performance team that takes pride in improving quality quarter over quarter, we should definitely talk.
Skilled in building automated software frameworks, exploratory testing methodologies, test plan creation best practices, team mentoring, capturing meaningful metrics, improving software quality & driving out bugs that are hiding in the dark corners of the code.
Passionate about building teams and relationships with other departments (like development, product, and client services). I believe a strong QA team creates a positive reputation for your company in the marketplace and helps keep customer attrition rates low. QA is a long-term investment strategy that pays dividends for your business and its bottom line.
Lots of experience with Python, Perl, PHP, C#, Java, JavaScript, HTML5, CSS, Jenkins, Azure DevOps, Selenium library, Atlassian product suite, Postman, Newman, JMeter (v5), RESTful APIs, Shell scripting, Batch scripting, Git, EDI ANSI X12, and lots more!