Learn from the Testing Experts

1st October, 2025

ATLANTA


Keynote Speaker

Avdhesh Kumar Bhardwaj

VP, DevSecOps Engineer
Truist

AI-Assisted Security Testing (AIAST)

As software teams race to deliver faster and more frequently, traditional security testing often gets left behind — too slow, too noisy, and too disconnected from the development process. That’s where AI-Assisted Security Testing (AIAST) comes in. It’s a smarter, faster way to test security by bringing artificial intelligence into the DevSecOps pipeline.

In this talk, we’ll explore how AI is helping security teams detect vulnerabilities earlier, reduce false positives, and even suggest fixes — all in real time. Whether it’s scanning code, watching runtime behavior, or understanding patterns in past incidents, AIAST brings automation and intelligence together to make security testing both effective and developer-friendly.

We’ll walk through real examples of how AI is being used in the field today — from AI-powered code reviews to predictive models that anticipate security issues before they happen. You’ll leave with a clear picture of how AIAST is changing the way we think about application security, and how it can be a game-changer for your organization too.

Takeaways from this talk

  • What is AIAST really about?

    It’s about using AI to make security testing smarter — from spotting issues in code to predicting future risks — and doing it all without slowing down your development cycle.

  • Why do we need AI in security testing?

    Manual reviews and traditional tools can’t keep up. AI helps reduce false alarms, speed up analysis, and prioritize the real threats — saving teams time and headaches.

  • How does it actually work?

    AI is now helping in all major types of testing — static (code), dynamic (runtime), and interactive — by learning patterns, analyzing behaviors, and flagging issues that matter.

  • What are the real benefits?

    • Developers get faster, more accurate feedback.

    • Security teams get better visibility and fewer distractions.

    • Organizations ship secure software without delays.

  • Real-life examples you’ll see:

    • AI tools that review code and point out security flaws like a senior engineer.

    • Machine learning models that spot unusual behavior in apps before anyone else does.

    • Systems that learn from past bugs to catch new ones proactively.

  • What’s next for AIAST?

    We’re heading toward even more automation — think AI writing security patches or acting as your real-time security advisor in the IDE. But we’ll also need to stay mindful of explainability, bias, and trust in AI-driven decisions.

Featured Speakers

Sai Kiran Nandipati

Solution Architect
EY

Hyperautomation: The Future of Scalable and Intelligent Test Automation

Hyperautomation is no longer just a buzzword—it’s a strategic shift that blends RPA, AI/ML, low-code platforms, and intelligent testing frameworks to achieve end-to-end automation. In the evolving QA landscape, hyperautomation enables teams to extend test coverage, reduce manual effort, and build more adaptive test suites that can evolve with the pace of agile and DevOps practices.

This session explores how test automation fits within a broader hyperautomation strategy, the role of orchestration tools, and how quality assurance can act as a catalyst for business-wide digital transformation. We’ll cover the intersection of test automation with business rules, decisioning, and process intelligence.

Takeaways from this talk

  • Understand Hyperautomation and its impact on test automation maturity models.
  • Learn how to integrate AI/ML, RPA, and orchestration into your testing stack.
  • Explore real-world use cases where hyperautomation improved testing scalability and efficiency.
  • Identify tools and strategies to build resilient and self-healing test automation suites.
  • See how QA can drive intelligent automation beyond IT and into business operations.

Nicholas Armand

Automation Supervisor
Accident Fund

AI Test Case Development

This talk discusses the use of Artificial Intelligence (AI) for test case development to automate the creation, optimization, and maintenance of test cases for software testing. AI-driven tools analyze requirements, code, and historical data to generate relevant test scenarios, identify edge cases, and prioritize tests based on risk or coverage. AI in test case development enhances efficiency, coverage, and prioritization by automating tasks and learning from data. However, it requires complex setup and high-quality data, can be costly, and may sometimes misinterpret requirements.

Takeaways from this talk

In “Reimagining QA: Test Case Generation with AI,” I’ll present insights on how AI is reshaping the test case creation process. I’ll walk through the benefits of using AI to accelerate test design, increase coverage, and adapt to changing requirements.

I’ll also share lessons learned from implementation, key limitations to be aware of, and how teams can strike a balance between automation and human expertise. The session will highlight use cases such as regression, negative, and edge-case testing.

I’ll cover future enhancements on the horizon and offer strategic recommendations for organizations looking to integrate AI into their QA practices. My goal is to give the audience a realistic view of how AI can elevate quality assurance efforts today and in the future.

Steve M Barreto

Senior Solutions Architect
Keysight Technologies

Breaking the Coding Barrier: Rethinking Software Test Automation Through the End User’s Eyes

Today’s testers are overwhelmed by a flood of automation tools—each promising efficiency yet often requiring coding expertise. But not every tester has an engineering background, nor should they need one. When did the assumption take hold that building automation requires programming knowledge? And why must testers juggle multiple testing frameworks just to cover different technologies?

In this session, we’ll explore how the Keysight Eggplant testing platform challenges these conventions. Unlike traditional tools that are built for programmers, Eggplant takes a fundamentally different approach—focusing on automation from the end user’s perspective, not the programmer’s.

If you’re ready to rethink what test automation can and should be—and who it’s for—join me for a fresh perspective that could transform the way you approach testing.

Takeaways from this talk

  • End User Experience (EUX)
  • Machine Learning
  • Automation Intelligence
  • Exploratory Testing
  • Technology Agnostic
  • Device Agnostic
  • Non-Invasive
  • End-to-End Testing

Mhahesh Muraleedhara

Head of QE, North America
Zensar Technologies

Shifting from Quality Engineering to Quality Intelligence: Why and How?

As software systems grow more complex, traditional Quality Engineering practices — focused on test coverage, automation, and defect detection — are no longer enough to meet business agility and customer expectations. The future belongs to Quality Intelligence (QI): the strategic use of data, AI, and predictive insights to continuously measure, learn, and optimize quality across the entire product lifecycle.

Takeaways from this talk

  • Quality Intelligence is the next evolution of Quality Engineering — shifting from reactive testing to proactive, data-driven quality practices.
  • Data is the new backbone of quality — use telemetry, production data, and pipeline metrics to drive smarter decisions.
  • AI/ML will be critical to predict risks, detect anomalies early, and enable continuous learning in your quality processes.
  • Transitioning to QI is a journey — start by improving data literacy, observability, and building a culture that embraces insights, not just test cases.
  • Testers and quality professionals will transform from executing tests to becoming quality strategists and data-informed advisors.

Steve Leggett

Staff Performance and Tools Engineer
Henry Schein One

From Local to Scalable: Automating API Load Testing with Docker, K6, and GitHub Actions

This talk is a technical walkthrough of building a lightweight, containerized API load testing framework using K6, Docker, and GitHub Actions. You’ll learn how to set up a test harness that runs in CI, validates runtime behavior through smoke tests, and benchmarks API performance across environments. The focus is on practical, reproducible methods. No theory or fluff. Just real lessons and tactics you can apply immediately.

Takeaways from this talk

  • How to build a Dockerized K6 test harness for API performance testing
  • Using smoke tests to validate your test stack before load execution
  • How to benchmark results across dev and staging environments
  • Lessons learned integrating this into a GitHub CI workflow
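
To make the ingredients above concrete, here is a minimal sketch of a GitHub Actions job that runs a Dockerized K6 script in CI (the workflow name, file paths, image tag, and load parameters are illustrative assumptions, not the speaker's actual setup):

```yaml
# Hypothetical sketch: paths, image tag, and parameters are assumptions.
name: api-load-test
on: [push]

jobs:
  smoke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Smoke run first: 1 virtual user for a short duration,
      # just to validate that the test stack works before real load.
      - name: K6 smoke test
        run: >
          docker run --rm -v "$PWD/tests:/tests" grafana/k6 run
          --vus 1 --duration 30s /tests/smoke.js
```

Running the container with `--rm` and a bind-mounted script directory keeps the harness stateless and reproducible across dev and staging environments, which is the portability point the talk emphasizes.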

Panel Discussion Speakers

Valerie Terrell

Director of Quality Engineering - Traveler’s Indemnity
Travelers

Experienced Quality Assurance Director & Enterprise Quality Management advocate with a demonstrated history of working in the software & product development industry. Skilled in databases, management, large-scale system integrations, and Agile methodologies, and a leader in enterprise Quality Assurance program practice & Salesforce. Strong quality assurance professional with a B.S. focused in Mathematics from Tift College of Mercer University, currently pursuing a Master’s degree.

Prasad Banala

Director of Software Engineering
Dollar General

Enterprise QA leader with deep expertise in test architecture, automation, performance engineering, and DevOps. Drives CI/CD quality checks, cloud-native testing on GCP, and ML integration. Leads test strategy and governance across domains. Frequent tech speaker and blogger with hands-on experience across diverse tools, platforms, and business verticals.

Bhiku Swami

Senior Director of Technology
Elevance Health

I am a technology leader with a proven track record of aligning IT and business to achieve strategic objectives. With experience across Healthcare, Banking & Financial Services, and Telecom & Media, I specialize in building technology roadmaps, implementing enterprise-level engineering and DevOps frameworks, and delivering cloud-native platforms.

My expertise includes cloud migration, multi-cloud infrastructure management, and process automation driven by ROI. I have successfully led AI/ML initiatives across the SDLC and ensure delivery excellence through KPIs, SLAs, and performance metrics. I also manage budgets, P&L, and vendor relationships to support scalable, results-oriented technology delivery.

Sanjay Sunkara

Senior Manager
Capgemini Government Solutions

Accomplished professional offering rich experience in directing full cycles of complex IT projects in various industries like Banking & Financial Services, Gaming, e-Learning, etc. Experienced in both SDLC and Agile methodologies.

Skilled communicator with excellent interpersonal skills, a keen eye for detail, a strong business sense and proven leadership abilities. Excelled in building, managing, training and motivating high performance teams and interacting with people at all levels of an organization. Resourceful & analytical individual with the ability to work well within aggressive timelines.
