Learn from the Testing Experts

26th February, 2026

HYDERABAD

>> Home

>> Register

>> Programme Schedule

Featured Speakers

Ratnakumar Gorthi

Executive Director
Deloitte South Asia LLP

Sustainable and Responsible AI in Testing – Balancing Innovation with ESG Goals

This session explores how Quality Engineering teams can leverage Generative and Agentic AI to accelerate testing while minimizing environmental impact and aligning with ESG principles. It covers how to identify the right testing opportunities for GenAI, practice responsible QE in line with sustainability goals, design an energy-efficient and ethically governed Agentic AI framework, and build optimized workflows that reduce computational waste. Attendees will learn practical strategies to innovate intelligently, achieving higher quality and efficiency without compromising environmental responsibility.

Takeaways from this talk

  • Identify the most impactful and sustainable use cases for GenAI in testing.
  • Apply ESG principles to conduct responsible and environmentally conscious QE.
  • Design energy-efficient, well-governed Agentic AI frameworks.
  • Build optimized testing workflows that reduce computational and resource waste.
  • Accelerate testing with AI while maintaining sustainability and quality goals.

Kishore Kandula

Competency Head – Automation
Tech Mahindra

Agentic AI in Testing: Unlocking Autonomous Quality Engineering

Agentic AI is revolutionizing quality engineering by introducing autonomous systems that can learn, reason, and make decisions independently. In software testing, these intelligent agents analyze complex patterns, optimize test coverage, and adapt in real time to evolving requirements.
By automating repetitive tasks and continuously improving through feedback, Agentic AI enhances accuracy, speeds up release cycles, and drives proactive defect prevention—marking a new era of dynamic, self-directed test automation and intelligent quality management.

Takeaways from this talk

  • Understanding Agentic AI in Testing
  • Architecting Autonomous Quality Engineering
  • Real-World Applications & Strategic Impact
  • Exploring top agentic AI frameworks and differences between them
  • Key use cases / agents in QE
  • Integrating with CI/CD and enterprise workflows
  • Ensuring security, governance, and compliance with advanced agentic platforms

Chandrakanth Chavva

Technical Delivery Manager
Cotiviti

Enhance Test Automation Efficiency using AI

Below is an overview of the key stages and the innovations I plan to cover:

1. Test Case Authoring (AI-Driven Test Generation Using RAG)

We developed a RAG-based test case authoring tool that reads user stories from test management tools on a schedule and ingests documents, PDFs, images, and Confluence pages into a vector database.
Using company-specific prompts and contextual knowledge, it generates high-quality test cases aligned with best practices.

Outcome:

  • Significant reduction in test authoring time
  • Faster onboarding
  • Consistent, context-aware test cases
  • Higher productivity across teams
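The retrieve-then-prompt flow described above can be sketched end to end. This is a minimal illustration, not the speaker's implementation: the bag-of-words "embedding", the `VectorStore` class, and the sample guideline documents are all hypothetical stand-ins for a real embedding model and vector database, and the final LLM call is omitted (only the prompt is assembled).

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would use an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []  # (text, vector) pairs

    def ingest(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list:
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_prompt(user_story: str, store: VectorStore) -> str:
    # Company-specific context is retrieved and prepended to the generation prompt.
    context = "\n".join(store.retrieve(user_story))
    return (f"Company testing guidelines:\n{context}\n\n"
            f"User story:\n{user_story}\n\n"
            "Generate test cases covering positive, negative and edge scenarios.")

store = VectorStore()
store.ingest("All login flows must be tested with locked and expired accounts.")
store.ingest("Payment tests require masked card data in every environment.")
prompt = build_prompt("As a user I can log in with my email and password", store)
print("login" in prompt)  # True
```

The key design point is that the prompt carries retrieved, company-specific context rather than relying on the model's general knowledge, which is what keeps the generated test cases "context-aware".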

2. Test Automation (AI-Generated Automation Scripts)

The platform intelligently scans existing automation frameworks, understands page objects, coding patterns, and architecture, and generates new automation scripts directly from manual test cases.

Outcome:

  • Acceleration of automation coverage
  • Reduced dependency on senior automation engineers
  • Maintainable, framework-aligned scripts

3. Automatic XPath Generator Plugin

We built a plugin that automatically generates robust XPaths, reducing manual locator creation.

Outcome:

  • Drastically reduced automation development time
  • Better locator accuracy and reusability
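One common heuristic for "robust" XPaths is to prefer stable attributes over brittle positional paths. A minimal sketch of that idea (the attribute priority order below is an assumption for illustration, not the plugin's actual logic):

```python
def robust_xpath(tag: str, attrs: dict, text: str = None) -> str:
    # Prefer stable attributes first: id, then test hooks, then name.
    for attr in ("id", "data-testid", "name"):
        if attr in attrs:
            return f"//{tag}[@{attr}='{attrs[attr]}']"
    # Visible text is often more stable than DOM position.
    if text:
        return f"//{tag}[normalize-space()='{text}']"
    # Class-based locator as a last resort (first class token only).
    if "class" in attrs:
        return f"//{tag}[contains(@class, '{attrs['class'].split()[0]}')]"
    return f"//{tag}"

print(robust_xpath("button", {"id": "submit"}))  # //button[@id='submit']
print(robust_xpath("a", {"class": "nav-link active"}, "Register"))
```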

4. DOM Healer – AI-Based Self-Healing Automation

This tool stores alternate selectors for each element and self-heals broken locators. If all selectors fail, it sends the latest DOM to the LLM to generate new selectors on the fly—improving reliability during execution.

Outcome:

  • Increased stability of test execution
  • Minimal failures due to UI changes
  • Lower maintenance overhead
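The fallback chain described above can be sketched as follows. The `llm_fallback` callable stands in for the real "send the latest DOM to the LLM" step, and the substring check stands in for an actual DOM query; both are simplifications for illustration.

```python
class DomHealer:
    """Try stored selectors in order; if all fail, ask an LLM for a new one."""

    def __init__(self, llm_fallback):
        self.selectors = {}            # element name -> list of known selectors
        self.llm_fallback = llm_fallback

    def register(self, name: str, *selectors: str) -> None:
        self.selectors[name] = list(selectors)

    def find(self, name: str, dom: str) -> str:
        for sel in self.selectors[name]:
            if sel in dom:             # stand-in for a real DOM lookup
                return sel
        # All stored selectors failed: send the latest DOM to the LLM.
        healed = self.llm_fallback(name, dom)
        self.selectors[name].insert(0, healed)   # remember the healed selector
        return healed

# Stubbed LLM: in practice this call would generate a selector from the DOM.
healer = DomHealer(llm_fallback=lambda name, dom: "#login-btn-v2")
healer.register("login", "#login-btn", "button[name=login]")
print(healer.find("login", "<button id='login-btn-v2'>Sign in</button>"))
```

Storing the healed selector back at the front of the list is what turns a one-off fix into lasting stability across runs.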

5. AI Test Data Generator

This tool interprets English statements from test cases, interacts with databases, and generates both test data and DB queries. A manual validation checkpoint ensures accuracy before execution.

Outcome:

  • Faster and more reliable test data creation
  • Reduced dependency on DB experts
  • Consistent data across test cycles
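A minimal sketch of the statement-to-query idea with a validation checkpoint. The tiny regex grammar and the `approve` callback are illustrative stand-ins for the LLM-based parser and the manual review step.

```python
import re

def statement_to_query(statement: str) -> str:
    # Toy grammar: "create a <entity> with <field> <value>".
    m = re.match(r"create an? (\w+) with (\w+) (\w+)", statement.lower())
    if not m:
        raise ValueError(f"Cannot parse: {statement}")
    entity, field, value = m.groups()
    return f"INSERT INTO {entity}s ({field}) VALUES ('{value}');"

def generate_with_checkpoint(statement: str, approve) -> str:
    query = statement_to_query(statement)
    # Manual validation gate: nothing runs against the DB without approval.
    if not approve(query):
        raise RuntimeError("Query rejected at validation checkpoint")
    return query

q = generate_with_checkpoint("Create a user with role admin",
                             approve=lambda query: True)
print(q)  # INSERT INTO users (role) VALUES ('admin');
```

The checkpoint is the important part: generated SQL is reviewed before execution, which is how accuracy is guaranteed despite the probabilistic generation step.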

6. Report Analyzer – AI-Driven Failure Analysis

We developed a report analyzer that reads execution reports, clusters failures, identifies patterns, and suggests potential causes and fixes. It continuously improves using user feedback.

Outcome:

  • 30–50% reduction in root cause analysis time
  • Faster feedback loops
  • Higher pass rates across cycles
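Clustering failures usually starts by normalizing away volatile details (timestamps, IDs, counts) so that identical root causes group together. A small sketch of that first step; the normalization rule here is an assumption for illustration, not the tool's actual logic.

```python
import re
from collections import defaultdict

def normalize(message: str) -> str:
    # Collapse volatile numbers so similar failures share one signature.
    return re.sub(r"\d+", "<n>", message).strip()

def cluster_failures(failures):
    clusters = defaultdict(list)
    for test, message in failures:
        clusters[normalize(message)].append(test)
    # Largest clusters first: the most common pattern is the best RCA starting point.
    return sorted(clusters.items(), key=lambda kv: len(kv[1]), reverse=True)

report = [
    ("test_login",    "TimeoutError: element not found after 30s"),
    ("test_checkout", "TimeoutError: element not found after 45s"),
    ("test_search",   "AssertionError: expected 10 results, got 9"),
]
for pattern, tests in cluster_failures(report):
    print(f"{len(tests)}x {pattern}: {tests}")
```

Two timeouts with different durations collapse into one cluster, so an engineer triages one pattern instead of reading two stack traces, which is where the RCA time savings come from.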

7. AI-Enabled Performance Testing Portal

Our performance testing tool integrates with JMeter and Jenkins. It provides a UI for selecting parameters, executing tests on demand, and comparing results across past runs and across teams. It also analyzes logs, server metrics, and client-side behavior to give engineering recommendations.

Outcome:

  • Democratized performance testing (non-experts can run tests)
  • Faster bottleneck identification
  • Organization-wide performance insights

8. AI Plugin for Performance Scripting

We built an AI plugin that accelerates the creation and optimization of performance test scripts.

Outcome:

  • Faster scripting
  • More accurate correlation and parameterization
  • Standardized, scalable performance scripts

Takeaways from this talk

Audience members will learn:

  1. How AI can be integrated at each QA lifecycle stage
  2. Practical examples of AI tools that accelerate testing and boost quality
  3. How to design scalable AI-driven QA tools using RAG, LLMs, and automation frameworks
  4. Real-world efficiency gains including reduced test creation time, faster automation coverage, self-healing execution, and quicker performance analysis
  5. A roadmap for transforming a traditional QA practice into an AI-augmented QA organization

Ankit Jain

Founder
Beeceptor

API Virtualization in the Age of AI: Building Smarter Test Doubles

As systems become increasingly distributed, external dependencies can stall development and testing. This talk demonstrates how service (API) virtualization, built on modern mock servers, contract-first design, and contract activation, removes friction and speeds up software delivery. We’ll break down traditional stubbing versus stateful, production-like simulations and highlight how AI is accelerating test doubles, test-data creation, and early validation.

Takeaways from this talk

  • Learn best practices for adopting service virtualization, actionable shift-left techniques, ways to increase test coverage, and cost-efficient patterns that help teams ship software with confidence and speed.
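To make the stubbing-versus-stateful distinction concrete, here is a minimal in-memory sketch. A static stub would always return the same canned response; this mock keeps state, so a GET reflects an earlier POST, behaving more like the real service. The routes and payloads are illustrative only, not any particular product's API.

```python
class StatefulMock:
    """A production-like test double: responses depend on prior interactions."""

    def __init__(self):
        self.orders = {}
        self.next_id = 1

    def handle(self, method: str, path: str, body: dict = None):
        if method == "POST" and path == "/orders":
            oid = self.next_id
            self.next_id += 1
            self.orders[oid] = {"id": oid, **(body or {}), "status": "created"}
            return 201, self.orders[oid]
        if method == "GET" and path.startswith("/orders/"):
            oid = int(path.rsplit("/", 1)[1])
            # Unlike a static stub, unknown ids correctly return 404.
            return (200, self.orders[oid]) if oid in self.orders else (404, {})
        return 405, {}

mock = StatefulMock()
status, order = mock.handle("POST", "/orders", {"item": "book"})
print(status, order["status"])                            # 201 created
print(mock.handle("GET", f"/orders/{order['id']}")[0])    # 200
```

Because the double enforces create-before-read semantics, tests against it catch ordering and lifecycle bugs that a canned stub would silently pass.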

Panel Discussion Speakers

Niranjan Keshavan

Program Director – Quality Engineering
Coforge

I am a quality engineering professional with 18+ years of experience in product assurance, validation, and digital transformation. I’ve led QA teams, scaled quality functions, and now focus on driving AI-led quality strategies that elevate how organizations build and test software.

AI-Led Quality Focus
I design intelligent QA ecosystems using RAG pipelines, vector databases, agentic layers, and model context protocols. With strong skills in prompt engineering and AI pipeline architecture, I help enterprises shift from traditional QA to scalable, autonomous, AI-driven assurance.

Core Strengths

  • Scaled QA business from $50K to $5M
  • Delivered large, complex programs with customer-centric execution
  • Built and led teams from 10 to 100+
  • Fast, clear, narrative-driven decision-making

AI & Tech Expertise

  • AI orchestration and agentic QA layers
  • Prompt engineering and vector search optimization
  • Turning AI innovation into practical QA strategy

Philosophy
Lifelong learner, focused on high-impact decisions, team success, and meaningful connections across tech, quality, and music.

Maneesh Agarwal

Director of QA
Forsys Inc

With 20 years of experience in Project, Program, and Delivery Management, I have led large-scale QA, Test Automation, and Digital Transformation initiatives across BFSI and Telecom. I’ve partnered with global organizations including BNY Mellon, BMO, Citibank, DTCC, Regions Bank, and Telenet to deliver high-impact solutions that improve quality, reduce costs, and drive business outcomes. I’ve also played a key role in winning major testing engagements through proposals, demos, and stakeholder leadership.

An IIM Calcutta alumnus with an MBA from SMU, I hold PMP, SAFe Agilist, Scrum Master, ISTQB Test Manager, GenAI, and Data Analytics certifications. My expertise spans TestOps and Automation Strategy, managing 25K+ test cases, 100+ automation pipelines, and containerized test environments, while leading and mentoring teams of 50+ QA professionals.

I remain focused on continuous innovation, actively building expertise in AI/ML, Generative AI, Cloud (AWS & GCP), and RPA to drive future-ready quality engineering and delivery excellence.

Archana Agarwal

Associate Director
Verizon

Archana Agarwal is a Quality Engineering and Delivery leader with over 20 years of experience driving large-scale test automation, CI/CD modernization, and enterprise quality transformation. She specializes in automation strategy, SDET transformation, cloud and microservices quality, and AI/GenAI-driven defect analytics. Archana is passionate about evolving test automation into a strategic engineering capability that accelerates delivery, improves production stability, and aligns technology with business outcomes. She is a certified Agile practitioner and has helped organizations successfully evolve and scale their Agile journeys.

Geetha Pavani

Associate Director
Innova Solutions

A seasoned testing professional with in-depth experience in software test automation using UFT/QTP, Selenium, UiPath, and JMeter.
  • A Certified Software Quality Analyst (CSQA), HP Accredited Integration Specialist, and ISTQB professional with deep working experience in test automation, including Agile (Scrum) projects.
  • Implemented software quality processes, methodologies, and automation frameworks for clients.
  • Experienced in creating a testing practice: setting up the team, initial management, and best-practice adoption.
  • Drives the QA CoE: evaluates QA tools and processes; provides suggestions and training for changes to existing products, processes, or services.
  • Monitors and reviews metrics, e.g. defect detection rate, test case effectiveness, total test coverage, and defect trend analysis.
  • Developed, managed, and executed test processes and plans for small, medium, and large projects.
  • Independently managed test automation programs for large customers, including client communication.
  • Significant contributions to software testing; capable of defining, refining, and tailoring the testing process to best suit the application under test.
  • Managed and led all types of software and systems testing: functional, non-functional, integration, acceptance, regression, localization, installation, and performance.
  • Published articles at international testing conferences.
  • Strong knowledge of CMMi, ISO 9000, 7QC tools, PDCA, and BS7799.
  • Worked on Agile projects with an onshore-offsite model.
  • Plans and publishes training plans.
