Learn from the Testing Experts

19th February, 2026

BANGALORE


Keynote Speaker

KrishnaKumar

Principal / Software Engineering – Commerce + AI
Microsoft

AI-Enabled Testing Practice – Reality Check and Lessons Learnt

AI-enabled testing promises faster releases, autonomous test generation, and intelligent defect prediction, but real-world adoption reveals a more grounded picture. AI is not a silver bullet—it augments testers rather than replaces them. Most organizations realize early that AI models depend heavily on clean, labelled, and stable datasets. Without strong data foundations, predictions lack accuracy and auto-generated tests create noise instead of value.

Tool maturity is another challenge. Many AI testing platforms look powerful in demos but struggle with dynamic interfaces, enterprise integrations, and domain-specific complexities. As a result, pilot projects often expose gaps in scalability, explainability, and maintainability.

The biggest hurdle, however, is change management. Testers need new skills in prompt engineering, validating AI outputs, and understanding ML-driven recommendations. Without proper upskilling, AI adoption remains superficial. Governance becomes essential to avoid model drift, hallucinated test cases, or redundant scripts.

Takeaways from this talk

  • Start with small, well-defined use cases like regression analysis or log-based anomaly detection.
  • Invest early in data cleanliness and structured test repositories.
  • Treat AI as an augmentation layer, not a replacement.
  • Establish clear guardrails for approving AI-generated changes.

AI-enabled testing succeeds when approached with realism, disciplined adoption, and continuous feedback loops.

Featured Speakers

Shweta Ayala

Senior Test Automation Advisor & Architect
TATA Consultancy Services

Trends in Test Automation: Framework, Cloud and Scripting

This talk focuses on how test automation should deliver measurable business value, not just coverage. Using three pillars—frameworks, cloud execution, and modern scripting—it introduces ROI metrics like Reusability Index, Execution Cost per Test, and Maintenance Load Reduction. With real BFSI case studies, the session shows how teams can scale automation strategically and justify investments with financial outcomes.
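
For a rough sense of what these metrics measure, here is one way they could be computed (a minimal sketch; the definitions below are illustrative assumptions, not the speaker's exact formulas):

    # Illustrative sketch: assumed definitions of the ROI metrics named above;
    # the talk's exact formulas may differ.

    def reusability_index(reused_components: int, total_components: int) -> float:
        # Share of framework components reused across suites or projects.
        return reused_components / total_components

    def execution_cost_per_test(infra_cost: float, tests_executed: int) -> float:
        # Cloud/infrastructure spend divided by the number of tests actually run.
        return infra_cost / tests_executed

    def maintenance_load_reduction(hours_before: float, hours_after: float) -> float:
        # Relative drop in script-maintenance effort after a framework change.
        return (hours_before - hours_after) / hours_before

    print(reusability_index(340, 500))            # 0.68
    print(execution_cost_per_test(1200.0, 8000))  # 0.15 (currency units per test)
    print(maintenance_load_reduction(120, 84))    # 0.3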

Takeaways from this talk

  • Understand how modern frameworks drive reuse and how to calculate a Reusability Index (RI) that reduces build effort and improves onboarding.
  • Discover how cloud execution can be justified with ROI, using Execution Cost per Test (ECT) to optimize parallel runs and infrastructure choices.
  • See how AI and scripting trends reduce maintenance costs through Maintenance Load Reduction (MLR) with examples from BFSI programs.
  • Gain insights from real BFSI case studies showing how governance, scalability, and intelligent investment transformed automation programs.

Navaneethakrishnan R

Head of QA
Glean

From Scripts to Swarms: Testing Enterprise AI with AI

Imagine an assistant that sits on top of thousands of data sources and billions of documents.
Now imagine every question a user asks can fan out across LLMs, agents, actions, and third‐party apps. In that world, one missed edge case isn’t just “a bug” – it’s a trust problem. That’s the reality of the QA team at Glean. We learned very quickly that we couldn’t keep up with traditional test automation alone. So we started doing something different: we asked AI to help us test AI.

In this talk, I’ll share how we’ve shifted from big, brittle test suites to a more agentic, AI‐first QA
stack:

  • How we use an in‐house “AI browser agent” to drive end‐to‐end flows across surfaces – especially in places where classic UI automation struggles with flaky locators, MFA, and ever‐changing third‐party UIs.
  • How we make LLM behavior testable: keeping a living set of conversations and scenarios that we replay across models, validating not just “did it respond?” but “did it respond with the right content, citations, and billing metadata?”, and feeding real production escalations back into those tests so the suite reflects what customers actually break (a minimal sketch of this replay idea follows this list).
  • And how we are slowly agentifying the QA workflow itself – using internal agents to summarize escalations, suggest missing test cases, keep TestRail and metrics up to date, and reduce the manual glue work between “we saw an issue in production” and “we’ll never miss this again.”
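
For readers who like concrete detail, here is a minimal sketch of what such a replayed conversation check could look like. The scenario format, response fields, and stubbed client are illustrative assumptions, not Glean's internal tooling:

    # Minimal sketch of a replayable conversation check. All names below are
    # illustrative assumptions, not Glean's internal tooling.

    scenarios = [
        {
            "prompt": "Summarise the Q3 security review",
            "must_mention": ["q3", "security"],
            "requires_citations": True,
        },
    ]

    def ask_assistant(prompt: str) -> dict:
        # Stub standing in for the real LLM/agent client under test.
        return {
            "text": "The Q3 security review found two high-severity issues ...",
            "citations": ["doc://security-review-q3"],
            "metadata": {"billing": {"tokens": 412}},
        }

    def validate(response: dict, scenario: dict) -> list:
        failures = []
        text = response.get("text", "").lower()
        for phrase in scenario["must_mention"]:
            if phrase not in text:
                failures.append(f"missing expected content: {phrase}")
        if scenario["requires_citations"] and not response.get("citations"):
            failures.append("no citations returned")
        if "billing" not in response.get("metadata", {}):
            failures.append("billing metadata absent")
        return failures

    for scenario in scenarios:
        print(scenario["prompt"], validate(ask_assistant(scenario["prompt"]), scenario))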

You don’t need to be an AI company to use these ideas. If you’re shipping complex systems and feel like your automation is always playing catch‐up, this is a concrete, battle‐tested story of how to let AI share the load without giving up control.

Takeaways from this talk

  • What an “AI-first QA stack” actually looks like in practice
    Not just a new framework or a fancy agent, but a layered setup where browser automation, API tests, and AI helpers work together instead of living in silos.
  • Simple ways to make LLM behavior testable and repeatable
    How to turn messy chat experiences into checkable scenarios: a stable conversation set, clear expected behaviors, and guardrails for things like permissions, citations, and billing.
  • How to use agents to clean up the “boring” parts of QA
    Examples of where agents already help us today – summarizing Slack escalations, proposing new test cases, enriching TestRail, and keeping quality metrics from going stale.
  • Patterns for staying sane in a non‐deterministic world
    Practical tricks we use so tests stay useful even when AI doesn’t always return the exact same wording – and how we still catch real regressions.
  • Where humans stay in charge
    Clear lines we draw: AI can help generate, execute, and organize tests, but humans still own judgment on UX, accessibility, ambiguous failures, and “is this good enough to ship?”

Swapn Sharma

Head of QA Engineering
Kredivo Group

Engineering Quality and Security at the Source: The Strategic Shift-Left Mindset

Imagine a world where QA stands like a shield around Zion – protecting releases from endless waves of sentinels in the form of bugs and regressions. For years, that was our story: strong automation, exhaustive test cases, and sophisticated frameworks … yet quality still felt fragile.
As we moved from a monolith to microservices, our architecture evolved, but our quality model stayed monolithic – owned solely by QA. Each sprint became survival mode. The team was overworked, spending more time reacting than preventing, and “testing at the end” could no longer keep up with distributed systems, parallel squads, and faster release cycles.

The breakthrough came when we realized that shift-left isn’t a tool, a dashboard, or a framework – it’s a mindset. Quality isn’t a phase; it’s a shared responsibility. We began democratizing quality and security, embedding them directly into CI/CD pipelines, and moving validation much earlier in the lifecycle. Testing became part of development, not something after it. Security scans shifted from annual rituals to continuous checks. Developers got real-time feedback through automated quality gates and early warning signals.

To support this shift-left mindset at scale, we also started leveraging AI and LLM-driven capabilities. AI agents now assist in building automation in parallel with development, allowing test coverage to evolve as features evolve. LLMs generate test plans and cases directly from PRDs, RFCs, and Figma designs, accelerating early test design and enabling a test-first approach. We even use LLMs to debug automation failures, classify root causes, and power self-healing in backend automation, reducing maintenance overhead and keeping pipelines healthy.
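
As a flavour of what LLM-assisted triage can look like, the sketch below classifies an automation failure against a small root-cause taxonomy; the helper, prompt, and categories are hypothetical and do not reflect Kredivo's internal tooling:

    # Hypothetical sketch of LLM-assisted failure triage. `ask_llm` stands in
    # for whatever model client a team uses; the categories are illustrative.

    FAILURE_CATEGORIES = ["product bug", "test data issue", "environment flake", "locator drift"]

    def classify_failure(ask_llm, test_name: str, error_log: str) -> str:
        prompt = (
            "Classify the root cause of this automation failure as exactly one of "
            f"{FAILURE_CATEGORIES}.\nTest: {test_name}\nLog:\n{error_log[:2000]}\n"
            "Answer with the category only."
        )
        answer = ask_llm(prompt).strip().lower()
        # Anything outside the taxonomy goes back to a human rather than being trusted.
        return answer if answer in FAILURE_CATEGORIES else "needs human triage"

    # Usage with a stubbed client:
    print(classify_failure(lambda _: "environment flake", "test_checkout", "TimeoutError: ..."))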

Over the past year – across teams with different maturity levels – the impact has been clear: reduced bug leakage, improved build quality, faster feedback cycles, optimized QA bandwidth, and significantly earlier detection of security flaws. We also adopted modern quality metrics such as staging-to-prod defect ratio, test-case effectiveness, DOA (dead-on-arrival) defects, reopened rates, and pipeline stability to pinpoint our true pain points.
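
Two of these metrics reduce to simple ratios; the definitions below are illustrative rather than the exact formulas used by the team:

    # Illustrative definitions of two of the quality metrics mentioned above.

    def staging_to_prod_defect_ratio(staging_defects: int, prod_defects: int) -> float:
        # Higher values mean more defects are caught before release.
        return staging_defects / max(prod_defects, 1)

    def reopened_rate(reopened: int, total_closed: int) -> float:
        # Share of "fixed" defects that had to be reopened.
        return reopened / max(total_closed, 1)

    print(staging_to_prod_defect_ratio(45, 5))  # 9.0
    print(reopened_rate(6, 120))                # 0.05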

In this talk, I’ll share how we engineered quality and security at the source:

  • how we broke the old “QA owns quality” mindset,
  • how we recalibrated the test pyramid for microservices,
  • how CI/CD became the backbone of quality governance,
  • how AI and LLMs accelerated shift-left practices,
  • and how culture — not tools — became our strongest defense against “bad builds.”

Because when everyone owns quality, Zion doesn’t need a shield – it thrives.

Takeaways from this talk

  • Shift-Left Is a Mindset, Not a Tool
    Understand why true shift-left requires cultural change, not just new tooling.
  • How to Engineer Quality and Security Directly Into CI/CD
    Learn the practical steps we took to integrate testing, automation, and security scanning into the pipeline, enabling faster feedback and stronger release confidence.
  • How AI and LLMs Accelerate Shift-Left Quality
    See how AI agents and LLM-driven test design help build automation in parallel with development, generate test cases directly from PRDs/RFCs/Figma, and self-heal automation failures — enabling faster, earlier, and more reliable quality feedback in CI/CD.
  • Transforming QA from Monolithic Gatekeeping to Distributed Ownership
    See how we dismantled the “QA is the shield” model and drove ownership into squads, developers, and infosec, especially in a microservices environment.
  • Recalibrating the Test Pyramid for Modern Architectures
    Discover how we aligned automation, exploratory testing, API-level validation, and contract testing to support distributed systems and reduce late-cycle failures.
  • Which Metrics Actually Reveal Quality (Beyond Pass/Fail)
    Explore the data we used — staging defects, QA leaks, DOA bugs, test-case effectiveness, leak ratios, pipeline reliability — and how these metrics guided real change.
  • Real Impact: Better Build Quality, Reduced Bug Leakage, and Optimized QA Bandwidth
    Walk away with clear insights into the measurable outcomes of this transformation and what worked (and didn’t) in shifting quality left at scale.

Mallika Fernandes

Managing Director
Accenture

AI-Driven Autonomy: The Next Frontier in Digital Transformation

AI-Driven Autonomy is redefining how enterprises scale, operate, and innovate. By moving beyond traditional automation to intelligent, self-directed AI agents, organizations can unlock continuous decision-making, adaptive workflows, and end-to-end autonomous operations. This next frontier of digital transformation empowers businesses to achieve velocity, resilience, and transformative outcomes at enterprise scale.

Takeaways from this talk

  • Agentic architectures unlock end-to-end value across SDLC and business assurance.
  • Human roles elevate, not disappear—teams move from task execution to oversight, strategy, and creativity.
  • Success requires intentional design—strong governance, observability, guardrails, and responsible AI practices.

Panel Discussion Speaker

Surekha Jawahar

Director & Head of QA
Condé Nast

Surekha is a visionary QA leader with more than 20 years of experience in Software Quality Engineering, AI-based Testing, and Digital Quality Assurance. As the Director & Head of QA at Condé Nast, she leads global teams in delivering high-impact digital experiences across platforms, leveraging cutting-edge technologies such as AI/ML, LLMs, Agentic AI, Cloud, APIs, and Chatbots.

Surekha is an engineering graduate with an MBA from Washington University in St. Louis, USA, and holds certifications including Google AI Leader, Google Cloud Digital Leader, PRINCE2, and SCRUM. She brings deep expertise in Test Strategy, Automation, AI-driven Testing, Agentic AI, DevOps, and Digital Transformation.
