Learn from the Testing Experts
19th February, 2026
BANGALORE
Keynote Speaker
AI-Enabled Testing Practice – Reality Check and Lessons Learnt
AI-enabled testing promises faster releases, autonomous test generation, and intelligent defect prediction, but real-world adoption reveals a more grounded picture. AI is not a silver bullet—it augments testers rather than replaces them. Most organizations realize early that AI models depend heavily on clean, labelled, and stable datasets. Without strong data foundations, predictions lack accuracy and auto-generated tests create noise instead of value.
Tool maturity is another challenge. Many AI testing platforms look powerful in demos but struggle with dynamic interfaces, enterprise integrations, and domain-specific complexities. As a result, pilot projects often expose gaps in scalability, explainability, and maintainability.
The biggest hurdle, however, is change management. Testers need new skills in prompt engineering, validating AI outputs, and understanding ML-driven recommendations. Without proper upskilling, AI adoption remains superficial. Governance becomes essential to avoid model drift, hallucinated test cases, or redundant scripts.
Takeaways from this talk
- Start with small, well-defined use cases like regression analysis or log-based anomaly detection (an illustrative sketch follows this abstract).
- Invest early in data cleanliness and structured test repositories.
- Treat AI as an augmentation layer, not a replacement.
- Establish clear guardrails for approving AI-generated changes.
AI-enabled testing succeeds when approached with realism, disciplined adoption, and continuous feedback loops.
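For readers who want something concrete, here is an illustrative sketch of the kind of small, log-based anomaly check the first takeaway points to. It is a generic z-score filter, not the speaker's method, and the log counts are invented.

```python
# Illustrative sketch of a small log-based anomaly check: flag minutes whose
# error count sits far from the recent mean. Counts are invented sample data.
from statistics import mean, stdev

error_counts = [3, 4, 2, 5, 3, 4, 31, 3]  # errors per minute, parsed from logs

mu, sigma = mean(error_counts), stdev(error_counts)
for minute, count in enumerate(error_counts):
    z = (count - mu) / sigma
    if z > 2:                               # simple z-score threshold
        print(f"minute {minute}: {count} errors (z={z:.1f}) - investigate")
```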
Featured Speakers
Trends in Test Automation: Framework, Cloud and Scripting
This talk focuses on how test automation should deliver measurable business value, not just coverage. Using three pillars—frameworks, cloud execution, and modern scripting—it introduces ROI metrics like Reusability Index, Execution Cost per Test, and Maintenance Load Reduction. With real BFSI case studies, the session shows how teams can scale automation strategically and justify investments with financial outcomes.
Takeaways from this talk
- Understand how modern frameworks drive reuse and how to calculate a Reusability Index (RI) that reduces build effort and improves onboarding (see the sketch after this list).
- Discover how cloud execution can be justified with ROI, using Execution Cost per Test (ECT) to optimize parallel runs and infrastructure choices.
- See how AI and scripting trends reduce maintenance costs through Maintenance Load Reduction (MLR) with examples from BFSI programs.
- Gain insights from real BFSI case studies showing how governance, scalability, and intelligent investment transformed automation programs.
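The session's exact formulas are the speaker's own; as a rough illustration, the sketch below assumes plausible definitions for the three metrics (RI as the share of reused components, ECT as spend per executed test, MLR as the fractional drop in maintenance effort).

```python
# Rough sketch of the three ROI metrics named above. The definitions are
# plausible assumptions, not the talk's actual calculations.

def reusability_index(reused_components: int, total_components: int) -> float:
    """Share of framework components reused across suites (assumed definition)."""
    return reused_components / total_components

def execution_cost_per_test(total_run_cost: float, tests_executed: int) -> float:
    """Cloud spend per executed test, e.g. in USD (assumed definition)."""
    return total_run_cost / tests_executed

def maintenance_load_reduction(hours_before: float, hours_after: float) -> float:
    """Fractional drop in monthly script-maintenance effort (assumed definition)."""
    return (hours_before - hours_after) / hours_before

# Example: 120 of 150 components reused, a $90 cloud run covering 1,800 tests,
# and maintenance effort falling from 80 to 52 hours a month.
print(f"RI  = {reusability_index(120, 150):.0%}")           # 80%
print(f"ECT = ${execution_cost_per_test(90.0, 1800):.3f}")  # $0.050
print(f"MLR = {maintenance_load_reduction(80, 52):.0%}")    # 35%
```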
From Scripts to Swarms: Testing Enterprise AI with AI
Imagine an assistant that sits on top of thousands of data sources and billions of documents.
Now imagine every question a user asks can fan out across LLMs, agents, actions, and third-party apps. In that world, one missed edge case isn’t just “a bug” – it’s a trust problem. That’s the reality for the QA team at Glean. We learned very quickly that we couldn’t keep up with traditional test automation alone. So we started doing something different: we asked AI to help us test AI.
In this talk, I’ll share how we’ve shifted from big, brittle test suites to a more agentic, AI-first QA stack:
- How we use an in-house “AI browser agent” to drive end-to-end flows across surfaces – especially in places where classic UI automation struggles with flaky locators, MFA, and ever-changing third-party UIs.
- How we make LLM behavior testable: keeping a living set of conversations and scenarios that we replay across models, validating not just “did it respond?” but “did it respond with the right content, citations, and billing metadata?”, and feeding real production escalations back into those tests so the suite reflects what customers actually break.
- And how we are slowly agentifying the QA workflow itself – using internal agents to summarize escalations, suggest missing test cases, keep TestRail and metrics up to date, and reduce the manual glue work between “we saw an issue in production” and “we’ll never miss this again.”
You don’t need to be an AI company to use these ideas. If you’re shipping complex systems and feel like your automation is always playing catch‐up, this is a concrete, battle‐tested story of how to let AI share the load without giving up control.
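To make the “living set of conversations” idea above concrete, here is a minimal sketch of a scenario-replay check. The `ask_assistant` stub and the scenario fields are hypothetical stand-ins, not Glean's actual harness; the point is asserting on content and citations rather than exact wording.

```python
# Minimal sketch of replaying a curated conversation set across models and
# checking behavior rather than exact wording. `ask_assistant` is a
# hypothetical stand-in for your own LLM or agent entry point.
from dataclasses import dataclass

@dataclass
class Scenario:
    prompt: str
    must_mention: list[str]       # content the answer must cover
    required_citations: int = 1   # at least this many sources cited

def ask_assistant(prompt: str, model: str) -> dict:
    # Hypothetical stub; a real harness would call the assistant here.
    return {"text": "Our PTO policy grants 20 days of paid time off.",
            "citations": ["hr-handbook"]}

def check(scenario: Scenario, model: str) -> list[str]:
    reply = ask_assistant(scenario.prompt, model)
    failures = []
    text = reply["text"].lower()
    # Assert on content, not phrasing: wording may differ run to run.
    for term in scenario.must_mention:
        if term.lower() not in text:
            failures.append(f"missing expected content: {term!r}")
    if len(reply["citations"]) < scenario.required_citations:
        failures.append("too few citations")
    return failures

scenarios = [Scenario("What is our PTO policy?", must_mention=["PTO"])]
for model in ("model-a", "model-b"):   # replay the same set across models
    for s in scenarios:
        print(model, "->", check(s, model) or "ok")
```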
Takeaways from this talk
- What an “AI-first QA stack” actually looks like in practice
Not just a new framework or a fancy agent, but a layered setup where browser automation, API tests, and AI helpers work together instead of living in silos.
- Simple ways to make LLM behavior testable and repeatable
How to turn messy chat experiences into checkable scenarios: a stable conversation set, clear expected behaviors, and guardrails for things like permissions, citations, and billing.
- How to use agents to clean up the “boring” parts of QA
Examples of where agents already help us today – summarizing Slack escalations, proposing new test cases, enriching TestRail, and keeping quality metrics from going stale.
- Patterns for staying sane in a non-deterministic world
Practical tricks we use so tests stay useful even when AI doesn’t always return the exact same wording – and how we still catch real regressions.
- Where humans stay in charge
Clear lines we draw: AI can help generate, execute, and organize tests, but humans still own judgment on UX, accessibility, ambiguous failures, and “is this good enough to ship?”
Engineering Quality and Security at the Source: The Strategic Shift-Left Mindset
Imagine a world where QA stands like a shield around Zion – protecting releases from endless waves of sentinels in the form of bugs and regressions. For years, that was our story: strong automation, exhaustive test cases, and sophisticated frameworks … yet quality still felt fragile.
As we moved from a monolith to microservices, our architecture evolved, but our quality model stayed monolithic – owned solely by QA. Each sprint became survival mode. The team was overworked, spending more time reacting than preventing, and “testing at the end” could no longer keep up with distributed systems, parallel squads, and faster release cycles.
The breakthrough came when we realized that shift-left isn’t a tool, a dashboard, or a framework – it’s a mindset. Quality isn’t a phase; it’s a shared responsibility. We began democratizing quality and security, embedding them directly into CI/CD pipelines, and moving validation much earlier in the lifecycle. Testing became part of development, not something after it. Security scans shifted from annual rituals to continuous checks. Developers got real-time feedback through automated quality gates and early warning signals.
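As a rough picture of what such an automated quality gate can look like, the sketch below fails a pipeline stage when aggregated signals breach thresholds. The signal names and limits are illustrative assumptions, not the team's actual gate.

```python
# Sketch of a CI quality gate: fail the build when aggregated signals breach
# thresholds. The signals and limits here are illustrative assumptions.
import sys

def quality_gate(signals: dict) -> list[str]:
    violations = []
    if signals["test_pass_rate"] < 0.98:
        violations.append(f"pass rate {signals['test_pass_rate']:.1%} < 98%")
    if signals["critical_security_findings"] > 0:
        violations.append(f"{signals['critical_security_findings']} critical security findings")
    if signals["new_flaky_tests"] > 3:
        violations.append(f"{signals['new_flaky_tests']} newly flaky tests")
    return violations

if __name__ == "__main__":
    # In CI these numbers would come from test reports and scanner output.
    signals = {"test_pass_rate": 0.995,
               "critical_security_findings": 0,
               "new_flaky_tests": 1}
    problems = quality_gate(signals)
    for p in problems:
        print("GATE VIOLATION:", p)
    sys.exit(1 if problems else 0)  # non-zero exit blocks the pipeline stage
```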
To support this shift-left mindset at scale, we also started leveraging AI and LLM-driven capabilities. AI agents now assist in building automation in parallel with development, allowing test coverage to evolve as features evolve. LLMs generate test plans and cases directly from PRDs, RFCs, and Figma designs, accelerating early test design and enabling a test-first approach. We even use LLMs to debug automation failures, classify root causes, and power self-healing in backend automation, reducing maintenance overhead and keeping pipelines healthy.
Over the past year – across teams with different maturity levels – the impact has been clear: reduced bug leakage, improved build quality, faster feedback cycles, optimized QA bandwidth, and significantly earlier detection of security flaws. We also adopted modern quality metrics such as staging-to-prod defect ratio, test-case effectiveness, DOA (dead-on-arrival) defects, reopened rates, and pipeline stability to pinpoint our true pain points.
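For illustration, two of these metrics might be computed as follows; the formulas are assumed, not the team's exact definitions.

```python
# Rough sketch of two of the metrics named above; formulas are assumptions.

def staging_to_prod_defect_ratio(staging_defects: int, prod_defects: int) -> float:
    # Higher is better: more defects caught before production.
    return staging_defects / max(prod_defects, 1)

def reopened_rate(reopened: int, total_closed: int) -> float:
    # Share of "fixed" defects that bounced back.
    return reopened / total_closed

print(staging_to_prod_defect_ratio(48, 6))  # 8.0 -> 8 caught in staging per prod leak
print(f"{reopened_rate(5, 140):.1%}")       # 3.6%
```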
In this talk, I’ll share how we engineered quality and security at the source:
- how we broke the old “QA owns quality” mindset,
- how we recalibrated the test pyramid for microservices,
- how CI/CD became the backbone of quality governance,
- how AI and LLMs accelerated shift-left practices,
- and how culture — not tools — became our strongest defense against “bad builds.”
Because when everyone owns quality, Zion doesn’t need a shield – it thrives.
Takeaways from this talk
- Shift-Left Is a Mindset, Not a Tool
Understand why true shift-left requires cultural change.
- How to Engineer Quality and Security Directly Into CI/CD
Learn the practical steps we took to integrate testing, automation, and security scanning into the pipeline, enabling faster feedback and stronger release confidence.
- How AI and LLMs Accelerate Shift-Left Quality
How AI agents and LLM-driven test design help build automation in parallel with development, generate test cases directly from PRDs/RFCs/Figma, and self-heal automation failures — enabling faster, earlier, and more reliable quality feedback in CI/CD.
- Transforming QA from Monolithic Gatekeeping to Distributed Ownership
See how we dismantled the “QA is the shield” model and drove ownership into squads, developers, and infosec, especially in a microservices environment.
- Recalibrating the Test Pyramid for Modern Architectures
Discover how we aligned automation, exploratory testing, API-level validation, and contract testing to support distributed systems and reduce late-cycle failures.
- Which Metrics Actually Reveal Quality (Beyond Pass/Fail)
Explore the data we used — staging defects, QA leaks, DOA bugs, test-case effectiveness, leak ratios, pipeline reliability — and how these metrics guided real change.
- Real Impact: Better Build Quality, Reduced Bug Leakage, and Optimized QA Bandwidth
Walk away with clear insights into the measurable outcomes of this transformation and what worked (and didn’t) in shifting quality left at scale.
AI-Driven Autonomy: The Next Frontier in Elevating Quality Across the SDLC
AI is transforming Quality Engineering across the SDLC by moving beyond scripted test automation to intelligent, self-directed agents. These agents continuously learn, reason, and act to predict defects, adapt test strategies, assure quality in real time, and enable end-to-end quality at scale. This next frontier elevates quality from a phase-based activity to a continuous, autonomous capability embedded across the software lifecycle.
Takeaways from this talk
- Agentic architectures unlock end-to-end value across SDLC and business assurance.
- Human roles elevate, not disappear—teams move from task execution to oversight, strategy, and creativity.
- Success requires intentional design—strong governance, observability, guardrails, and responsible AI practices.
Fireside Chat Speakers
Bharat Baser
Bharat Baser, PMP, Certified Scrum Master, Certified Product Owner, and ISTQB Advanced Test Manager, is a QA Manager with 18+ years in IT who leads AI-driven quality transformation. He embeds AI-powered test design, intelligent test selection, autonomous defect classification, and predictive quality analytics to accelerate delivery and raise reliability. He champions measurable value, secure test data, and responsible AI adoption across teams.
Robin Gupta
Robin is the human at the helm of TestZeus, the company behind the world’s first open-source testing agent. He is a versatile engineering leader with more than 15 years of experience in software delivery across startups, scale-ups, and enterprises. He is a System 1 thinker and has led products contributing to $10M ARR. With a metrics-driven approach, he has elevated the engineering maturity of product teams across diverse domains such as BFSI, EdTech, Retail, and Developer Experience. Beyond work, he mentors at ADPList and Plato, contributes to open-source projects like Selenium, and has authored books and international courses on software testing. He is also a recognized speaker at international events such as Dreamforce (by Salesforce) and Selenium Conference.
Tutorial Speakers
Session Intro
Modern engineering teams run thousands of API tests, yet production incidents, regressions, and release anxiety still persist. The problem is rarely the absence of tests — it is the absence of intelligent test selection and evidence-driven release decisions.
This session demonstrates how to move beyond traditional API testing into a TestOps-driven quality system, where tests are not only executed but curated, enforced, and monitored to support confident releases. Rather than treating testing as a single phase, we will show how functional, contract, performance, and reliability checks can work together as continuous quality signals, integrated directly into developer workflows and delivery pipelines.
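To ground the idea of intelligent, risk-based test selection, here is a minimal sketch in the spirit of a TIA agent: rank tests by a risk score derived from the files a change touches, and run only the relevant slice. The mappings and weights below are invented for illustration.

```python
# Sketch: rank tests by a risk score derived from the files a change touches,
# then run only the relevant slice. Weights and mappings are illustrative.
CHANGED_FILES = {"billing/invoice.py", "auth/session.py"}

# Which tests exercise which source files (in practice, mined from coverage data).
TEST_COVERAGE = {
    "test_invoice_totals": {"billing/invoice.py"},
    "test_login_flow":     {"auth/session.py", "auth/mfa.py"},
    "test_report_export":  {"reports/export.py"},
}

# Per-area risk weights (e.g. derived from past defect density).
AREA_RISK = {"billing": 0.9, "auth": 0.8, "reports": 0.3}

def risk_score(test: str) -> float:
    touched = TEST_COVERAGE[test] & CHANGED_FILES
    if not touched:
        return 0.0
    return max(AREA_RISK.get(path.split("/")[0], 0.5) for path in touched)

ranked = sorted(TEST_COVERAGE, key=risk_score, reverse=True)
selected = [t for t in ranked if risk_score(t) > 0]
print("run:", selected)  # ['test_invoice_totals', 'test_login_flow']
```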
Observability-Driven API Testing Workshop: Build Functional, Contract & Reliability Checks in 60 Minutes, Based on Risk Scores from a TIA (Test Impact Analysis) Agent
Kavitha Rajamani is a Technical Director with over two decades of experience in Enterprise Software Engineering. SaaS platform modernization and quality engineering transformation are her forte. She has led large-scale product transitions from monolithic on-premise systems to cloud-native microservices, with a strong focus on embedding quality and automation into the development lifecycle.
She has deep expertise in test automation and has built practical, scalable solutions including a UI automation framework for SAP Fiori applications, a Regression Scope Analyzer to optimize test coverage and semi-automated exploratory testing tools adopted across global engineering teams. Her current focus is on advancing shift-left testing and quality practices that improve reliability, speed of delivery and engineering confidence.
Kavitha is passionate about applying technology to real-world testing challenges and building quality practices that combine automation at scale with strong team ownership and user-focused thinking.
Takeaways from this talk
- Create API functional checks + schema validations (see the sketch after this list)
- Add contract-style assertions to prevent breaking changes
- Add latency/error-rate thresholds as pipeline gates
- Package results into actionable reporting
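As a rough preview of the first three takeaways, the sketch below combines a functional check, a schema-based contract assertion, and a latency gate using the `requests` and `jsonschema` libraries. The endpoint, schema, and thresholds are hypothetical.

```python
# Sketch of three check types from the takeaways: a functional check with
# schema validation, a contract-style assertion, and a latency gate.
# The endpoint and thresholds are hypothetical; adapt to your service.
import requests
from jsonschema import validate

BASE_URL = "https://api.example.com"  # hypothetical service

USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email"],      # contract: removing these is a breaking change
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}

def test_get_user():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    # Functional check
    assert resp.status_code == 200
    # Schema/contract check: fails if required fields disappear or change type
    validate(instance=resp.json(), schema=USER_SCHEMA)
    # Reliability gate: treat slow responses as failures in the pipeline
    assert resp.elapsed.total_seconds() < 0.5, "latency budget exceeded"
```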
Observability-Driven API Testing Workshop: Build Functional, Contract & Reliability Checks in 60 Minutes, Based on Risk Scores from a TIA (Test Impact Analysis) Agent
A Senior Manager in Quality Engineering at Manhattan Associates with over 18 years of experience in modern testing practices for API-first and cloud-native platforms, she specializes in transforming traditional QA approaches into TestOps-driven quality frameworks that enable evidence-based release decisions through CI/CD, contract testing, performance validation, and reliability monitoring.
Over the years, she has worked with organizations such as Mercedes-Benz, SAP Labs, and Birlasoft. Prior to Manhattan Associates, she contributed to engineering excellence at these organizations, focusing on evolving traditional testing into modern quality practices that support true release confidence. She has led initiatives aimed at reducing production incidents, accelerating release cycles, and improving customer trust through scalable quality strategies.
She is particularly passionate about mentoring teams and helping organizations make quality visible, measurable, and actionable for architects, managers, and engineering leaders.
Takeaways from this talk
- Create API functional checks + schema validations
- Add contract-style assertions to prevent breaking changes
- Add latency/error-rate thresholds as pipeline gates
- Package results into actionable reporting
Panel Discussion Speakers
Surekha Jawahar
Surekha is a visionary QA leader with over 20 years of experience in Software Quality Engineering, AI-based Testing, and Digital Quality Assurance. As the Director & Head of QA at Condé Nast, she leads global teams in delivering high-impact digital experiences across platforms, leveraging cutting-edge technologies such as AI/ML, LLMs, Agentic AI, Cloud, APIs, and Chatbots.
Surekha is an engineering graduate with an MBA from Washington University in St. Louis, USA and holds certifications including Google AI Leader, Google Cloud Digital Leader, PRINCE2, and Scrum. She brings deep expertise in Test Strategy, Automation, AI-driven Testing, Agentic AI, DevOps, and Digital Transformation.
Prabhu Kalaiselvan
As an AI expert specializing in digital transformation and test automation, I bring a strategic lens to how AI accelerates enterprise agility, scalability, and innovation. My session explores how intelligent automation is reshaping software testing, reducing cycle times, and enhancing quality at scale. We’ll delve into real-world case studies showcasing the power of AI in predictive analytics, autonomous testing, and continuous delivery pipelines. Emphasizing ROI and business impact, I will highlight frameworks that blend AI with DevOps to drive resilient and adaptive transformation. Attendees will gain insights into leveraging AI responsibly for scalable, secure, and future-ready test architectures. Whether you’re leading QA, product, or transformation efforts, this session equips you with actionable intelligence to thrive in an AI-augmented testing ecosystem.
“I’m excited by the enthusiasm around responsible AI integration and look forward to continued innovation and collaboration in this space.”
Subhash Kotra
Subhash brings with him nearly two decades of experience in Software Testing & Engineering. Throughout his career, his primary focus has been on delivering high-quality products to market on time. He is committed not only to releasing products on schedule but also to building them meticulously and automating repetitive tasks.
Beyond his professional achievements, Subhash has been an engaged participant in diverse community events. His support for open source projects underscores his commitment to collaborative and innovative initiatives. Additionally, he has distinguished himself as an enthusiastic speaker and blogger in various forums.
Subhash holds a degree in computer science and a CTO certification from the prestigious Cambridge University London, and his affinity for technology has been a driving force. His proactive exploration of emerging technologies reflects his curiosity and adaptability in grasping new environments.
Embracing the ethos of “Technology makes the difference,” Subhash remains steadfast in his dedication to refining his skill sets and staying abreast of dynamic market trends.
Vishnu Mani
Vishnu is a Quality Engineering Leader driving enterprise-wide quality transformation across global technology organizations. He specializes in building AI-powered automation, Testing Centers of Excellence, and scalable governance models that accelerate delivery while strengthening product reliability. He leads distributed teams through complex modernization initiatives, implementing forward-looking quality strategies that measurably improve release velocity, engineering efficiency, and overall customer experience.
Sai Sasidhar R.
Sai Sasidhar Rangineni is a Software Quality Engineering leader with over 18 years of experience driving large-scale quality transformation across enterprise SaaS, healthcare technology, and global product organizations. At symplr, he leads the Global Quality Center of Excellence, advancing AI-driven Quality Engineering, scalable automation, and innovation initiatives. His work focuses on elevating quality maturity across complex, multi-product ecosystems. Previously, Sai served as Head of Quality Engineering and co-founder of a startup, along with holding other senior technical leadership roles. He is passionate about engineering excellence, mentoring teams, and building high-performing engineering cultures. Sai actively shapes modern Quality Engineering practices in cloud-native, microservices, and AI-enabled environments. He brings strong exposure to product and program management, aligning quality strategies with business outcomes. He has led innovation programs and automation solutions adopted across global R&D teams. His experience includes supporting major regulatory audits, including ISO and FDA compliance. As a speaker, Sai shares practical insights on the future of AI-enabled Quality Engineering and leadership.