Learn from the Testing Experts
26th February, 2026
PUNE
Keynote Speaker
Agentic AI Ready Test Automation Solutions (AI-TAS) – A Paradigm Shift!
Agentic AI Test Automation Solutions (AI-TAS) is engineered to revolutionize Quality Engineering by embedding agentic AI at the heart of assurance workflows. It automates critical QA activities to deliver end-to-end automation of user story testing, and its agentic workflow ensures reusability across teams, enabling consistent, scalable, and intelligent test coverage.
AI-TAS empowers QA teams to focus on innovation, risk remediation, and quality excellence, ensuring faster and more reliable releases across the enterprise.
Takeaways from this talk
- End-to-End Automation: Agentic workflows automate user story testing from scripting to reporting.
- Reusable Across Teams: Standardized test assets ensure consistency and scalability across QA groups.
- Seamless Integration: Connects easily with existing CI/CD pipelines and enterprise datasets.
- Self-Healing Capabilities: Automated recovery from test failures reduces maintenance costs (a minimal illustrative sketch follows this list).
- Scalability & Continuous Improvement: Evolves with enterprise needs, supporting complex datasets and advanced integrations.
- Accelerated Release Cycles: Streamlined QA processes shorten time-to-market while maintaining reliability.
- Enterprise-Wide Impact: Ensures faster, more reliable releases, strengthening customer trust and satisfaction.
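AI-TAS itself is a proprietary solution and its internals are not described here; purely to illustrate what self-healing recovery can look like in practice, the Playwright TypeScript sketch below falls back to alternate locators when a primary one stops matching. The selectors and URL are invented for the example.

```typescript
// Illustrative only: a simplified self-healing lookup, not the AI-TAS implementation.
// Assumes Playwright (@playwright/test); selectors and URL are hypothetical examples.
import { test, expect, Page, Locator } from '@playwright/test';

// Try the primary selector first, then fall back to alternates when it no longer resolves.
async function healLocator(page: Page, candidates: string[]): Promise<Locator> {
  for (const selector of candidates) {
    const locator = page.locator(selector);
    if ((await locator.count()) > 0) {
      if (selector !== candidates[0]) {
        console.warn(`Self-healed: "${candidates[0]}" replaced by "${selector}"`);
      }
      return locator;
    }
  }
  throw new Error(`No candidate selector matched: ${candidates.join(', ')}`);
}

test('login button still clickable after a UI refactor', async ({ page }) => {
  await page.goto('https://example.com/login'); // hypothetical URL
  const loginButton = await healLocator(page, [
    '#login-btn',                   // original locator
    'button[data-testid="login"]',  // fallback: test id
    'button:has-text("Log in")',    // fallback: visible text
  ]);
  await expect(loginButton).toBeVisible();
});
```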
Featured Speakers
Container-First Testing: Bringing Production to JUnit for Microservices at Scale
Modern microservice architectures demand testing against real infrastructure, yet most teams still rely on brittle mock-based setups that hide failures until late stages. Testcontainers brings production-like infrastructure inside JUnit, spinning up real databases, messaging systems, and cloud services on demand for every test run. This container-first approach delivers deterministic, CI-friendly, and highly scalable microservice testing without shared-environment dependencies. The session will highlight proven patterns, pitfalls to avoid, and a practical migration roadmap for teams moving from mocks to containerized testing. Attendees will walk away with techniques to boost reliability and release velocity without compromising innovation.
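The session itself is JUnit/Java-focused; solely to illustrate the container-first idea in this page's other stack, here is a minimal TypeScript sketch using the Node.js testcontainers packages (assuming a Jest-style runner, the @testcontainers/postgresql module, and the pg client). It is a sketch of the pattern, not the speaker's code.

```typescript
// Minimal sketch of container-first testing with the Node.js testcontainers package.
// Assumes a Jest-style runner; the table and data are invented for the example.
import { PostgreSqlContainer, StartedPostgreSqlContainer } from '@testcontainers/postgresql';
import { Client } from 'pg';

let container: StartedPostgreSqlContainer;
let client: Client;

beforeAll(async () => {
  // Start a real, throwaway PostgreSQL instance for this test run.
  container = await new PostgreSqlContainer('postgres:16').start();
  client = new Client({ connectionString: container.getConnectionUri() });
  await client.connect();
}, 60_000);

afterAll(async () => {
  await client.end();
  await container.stop(); // ephemeral environment: nothing shared, nothing left behind
});

test('queries run against a real database, not a mock', async () => {
  await client.query('CREATE TABLE flights (id SERIAL PRIMARY KEY, code TEXT)');
  await client.query("INSERT INTO flights (code) VALUES ('AI-101')");
  const result = await client.query('SELECT count(*) FROM flights');
  expect(Number(result.rows[0].count)).toBe(1);
});
```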
Takeaways from this talk
- Realistic API Testing: Utilizing real databases and cloud services for testing, rather than mocks or in-memory alternatives.
- Ephemeral Environments: Creating clean, throwaway, and containerized test environments, ensuring test isolation and repeatability.
- Integration: Seamlessly integrating Testcontainers with popular testing frameworks and CI/CD tools like JUnit, Spring Boot Test, RestAssured, GitHub Actions, and Jenkins.
- Service Communication Testing: Addressing testing for components like Kafka and RabbitMQ within the microservice landscape.
Quality Assurance for AI Agents: Context-Driven Evaluation at Scale
Introduction
AI agents are increasingly autonomous and entrusted to function effectively within complex environments. This growing reliance on AI systems necessitates a transition from traditional output-focused evaluation methods to adaptive testing approaches. Adaptive testing is designed to capture how AI agents behave and respond under dynamic, real-world conditions, thereby providing a more holistic understanding of their operational reliability.
Context-Aware Evaluation Approach
To address the evolving challenges of AI agent assessment, we advocate for a context-aware evaluation methodology grounded in Responsible AI principles. This approach emphasises rigorous evaluation standards, contextual validation, and robust safety assurance measures. It is not limited to assessing the final outputs of AI agents; instead, it also scrutinises their reasoning processes, the effectiveness of prompting techniques, and cost-related parameters such as token-efficient structured outputs. The evaluation further considers scenarios in which agents must make principled refusals—particularly when faced with uncertainty or policy constraints.
Unified Evaluation Paradigms
Our proposed approach integrates human-in-the-loop reviews, LLM-as-a-judge scoring, and coded evaluation techniques into scalable testing paradigms. This unified approach enables testing teams to systematically mitigate risks commonly encountered in AI systems, including hallucinations, conflicting instructions, and context loss. Through these combined strategies, teams can deliver measurable outcomes such as accuracy, robustness, appropriate refusal behaviour, and regulatory compliance.
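As a rough illustration of the LLM-as-a-judge element of such a paradigm (not the speakers' tooling), the sketch below scores an agent's answer against a rubric through a judge model; the judge callback, prompt wording, and JSON schema are assumptions made for the example.

```typescript
// Illustrative sketch of LLM-as-a-judge scoring. The judge callback, rubric wording,
// and JSON schema are assumptions for the example, not the speakers' tooling.
interface JudgeVerdict {
  score: number;            // 0..1: how well the answer satisfies the rubric
  refusalExpected: boolean; // should the agent have refused this request?
  rationale: string;
}

type JudgeModel = (prompt: string) => Promise<string>; // wraps whatever LLM API a team uses

async function judgeAgentOutput(
  judge: JudgeModel,
  userStory: string,
  agentAnswer: string,
  rubric: string,
): Promise<JudgeVerdict> {
  const prompt = [
    'You are an evaluation judge. Return strict JSON with keys',
    '"score" (0..1), "refusalExpected" (boolean), "rationale" (string).',
    `Rubric: ${rubric}`,
    `Task: ${userStory}`,
    `Agent answer: ${agentAnswer}`,
  ].join('\n');
  return JSON.parse(await judge(prompt)) as JudgeVerdict;
}

// A batch run can then gate releases on aggregate accuracy and refusal behaviour.
async function evaluateSuite(
  judge: JudgeModel,
  cases: { story: string; answer: string }[],
  rubric: string,
) {
  const verdicts = await Promise.all(
    cases.map((c) => judgeAgentOutput(judge, c.story, c.answer, rubric)),
  );
  const meanScore = verdicts.reduce((sum, v) => sum + v.score, 0) / verdicts.length;
  return { meanScore, verdicts };
}
```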
Empowering Global Testing Organisations
This strategic reorientation empowers global testing organisations to certify the autonomy of AI agents with confidence. By implementing these context-driven evaluation practices, organisations can ensure that agent behaviour remains predictable, governable, and ready for production—meeting the evolving needs of enterprises worldwide.
Takeaways from this talk
- Transition from output-focused testing to comprehensive context-driven evaluation of agent reasoning, prompt efficacy, and principled refusal mechanisms.
- Identify model limitations early, including reasoning deficiencies and token inefficiency in structured outputs.
- Leverage hybrid evaluation paradigms—human-in-the-loop, LLM-as-a-judge, and coded benchmarks—for scalable, quantifiable quality assurance.
- Empower testing teams with proven methodologies to deliver reliable, cost-optimized, production-ready AI agents.
Testing Enough in GenAI: Balancing Automation Velocity with Human Judgment
With GenAI reshaping software testing, automation coverage is accelerating at unprecedented speed. Yet, as AI takes on more responsibility for generating test data, writing scripts, creating scenarios, and even deciding what to test, a fundamental question becomes more important than ever: what does “enough testing” mean when AI is helping decide? This session challenges the traditional coverage-centric mindset and introduces a human-centric framework for determining sufficiency in AI-powered automation. Instead of measuring success only by execution volume, it emphasizes risk, user intent, ethical considerations, business value, security exposure, model-drift sensitivity, and explainability. Through real examples and decision heuristics, participants will learn how to strike the right balance between AI-driven automation and purposeful exploratory and experiential testing performed by humans. The talk highlights how testers can evolve from “test executors” to “quality strategists” in the GenAI era, ensuring not just fast releases but safe, responsible, and meaningful ones.
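The talk's own decision matrix is not reproduced here; as a loose sketch of what a risk-weighted sufficiency heuristic could look like, the TypeScript below combines a few of the dimensions mentioned above into a single score that flags where human exploratory testing should supplement automation. The dimensions, weights, and threshold are assumptions for illustration.

```typescript
// Illustrative only: a toy sufficiency heuristic, not the speaker's decision matrix.
// Dimensions, weights, and the 0.5 threshold are assumptions chosen for the example.
interface FeatureRisk {
  userImpact: number;            // 0..1
  securityExposure: number;      // 0..1
  modelDriftSensitivity: number; // 0..1: how much behaviour shifts as models change
  ethicalSensitivity: number;    // 0..1
}

// Higher score => add human exploratory/experiential testing on top of automation.
function humanTestingWeight(risk: FeatureRisk): number {
  return (
    risk.userImpact * 0.3 +
    risk.securityExposure * 0.3 +
    risk.modelDriftSensitivity * 0.2 +
    risk.ethicalSensitivity * 0.2
  );
}

// Example: a payments flow with high security exposure gets flagged for human review.
const paymentsFlow: FeatureRisk = {
  userImpact: 0.9,
  securityExposure: 0.8,
  modelDriftSensitivity: 0.4,
  ethicalSensitivity: 0.3,
};
console.log(
  humanTestingWeight(paymentsFlow) > 0.5 ? 'add exploratory testing' : 'automation is enough',
);
```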
Takeaways from this talk
- Re-defining sufficiency in AI-assisted testing
- Decision matrix for balancing automated vs human testing
- Metrics for value-oriented rather than volume-oriented coverage
- How testers can evolve into human-centric quality strategists
TRUST: A New Model for “Human + AI Partnership in Testing”
As GenAI becomes integral to modern products and workflows, traditional testing approaches are no longer sufficient to ensure reliability and trust. This session introduces TRUST — Threat Ready, Risk-Driven, User-Centric, Self-Improving, and Traceable — a practical framework for testing in the age of GenAI.
- Threat Ready focuses on the need to protect systems and products from internal as well as external threats.
- Risk-Driven ensures testing focuses on areas of highest business and user impact.
- User-Centric focuses testing on usability, perception, and confidence, not just accuracy.
- Self-Improving (and self-healing) emphasizes how AI can enhance the testing process through intelligent test generation, pattern recognition, and continuous learning.
- Traceable reinforces explainability and auditability so teams can clearly connect model outputs to data and decisions.
This framework blends human judgment with AI-assisted capabilities, helping teams validate not just performance, but responsibility, reliability, and long-term trust in AI-generated systems.
Takeaways from this talk
- A Framework for Testing GenAI Systems: Participants will learn how the TRUST model provides a structured, practical approach to validating GenAI systems, covering data quality, model behavior, risks, and continuous improvement.
- Shifting from Traditional Testing to Intelligence-Aware Quality: Attendees will understand why conventional test practices fall short for AI models and how to evolve toward training-aware, risk-focused, and human-centered testing strategies.
- Leveraging GenAI to Strengthen the Testing Lifecycle: The session will demonstrate how GenAI can augment testers through autonomous test generation, pattern detection, defect prediction, and self-improving validation loops.
Tutorial Speaker
Step-by-step guide to laying out a good design for automation with Playwright TypeScript
1. Introduction to the sample app (https://flights.sedinqa.com/)
2. What we are going to automate (defining scope)
3. Let’s set up a basic skeleton for the technical solution
4. Let’s add our first test with a naive approach
5. Concept of hierarchical design
6. Let’s reuse locators
7. Reusability across the UI layer, and a simplified Page Object Model built from TypeScript types without classes (see the first sketch after this list)
8. Composition over inheritance as a pattern
9. More refactoring for reusability across flows
10. Templatized, parameterized tests (see the second sketch after this list)
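As a rough sketch of steps 5 through 8 (the selectors, labels, and app structure are assumptions, not the tutorial's actual code), a page object can be a plain typed factory function composed from smaller locator fragments rather than a class:

```typescript
// Sketch for steps 5-8: a class-less Page Object built from TypeScript types and
// composition. Selectors and field names are assumptions, not the tutorial's code.
import { Page, Locator } from '@playwright/test';

// A reusable locator fragment shared across pages (composition, not inheritance).
const searchWidget = (page: Page) => ({
  origin: page.getByLabel('From'),
  destination: page.getByLabel('To'),
  searchButton: page.getByRole('button', { name: 'Search' }),
});

// The page object is just a typed object returned by a factory function.
export const flightSearchPage = (page: Page) => ({
  ...searchWidget(page),
  async searchFlights(from: string, to: string): Promise<void> {
    const widget = searchWidget(page);
    await widget.origin.fill(from);
    await widget.destination.fill(to);
    await widget.searchButton.click();
  },
  results: (): Locator => page.locator('[data-testid="flight-card"]'),
});

export type FlightSearchPage = ReturnType<typeof flightSearchPage>;
```

Because the page object is just `ReturnType<typeof flightSearchPage>`, new fragments can be mixed in by spreading, which keeps reuse a matter of composition rather than inheritance.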
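And for step 10, tests can be templatized by looping over a data table; the routes below are invented for illustration and assume the page-object sketch above lives in ./flightSearchPage:

```typescript
// Sketch for step 10: templatized, parameterized tests driven by a data table.
// Route data is invented; the page object comes from the previous sketch.
import { test, expect } from '@playwright/test';
import { flightSearchPage } from './flightSearchPage';

const routes = [
  { from: 'Pune', to: 'Delhi' },
  { from: 'Mumbai', to: 'Bengaluru' },
];

for (const { from, to } of routes) {
  test(`search shows results for ${from} -> ${to}`, async ({ page }) => {
    await page.goto('https://flights.sedinqa.com/');
    const search = flightSearchPage(page);
    await search.searchFlights(from, to);
    await expect(search.results().first()).toBeVisible();
  });
}
```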
Takeaways from this talk
- Learn some design patterns
- Learn how to build page object models using composition
- Learn how to maximize reusability and maintainability
- Learn how to progressively evolve design
- Learn how to do it practically in a live coding session
Panel Discussion Speakers
Sandeep Sudame
An accomplished IT professional with over 20 years of experience in Quality Engineering, working across IT products and services organizations in domains such as Finance, Insurance, Telecom, Healthcare, Tax, and Audits.
His extensive experience spans diverse leadership and specialist roles, including Digital Transformation & Innovation Leader, NFT Manager, and Test Architect, supporting organizations and global clients in their end-to-end transformation journeys across multiple geographies.
His core expertise lies in Quality Engineering, with strong hands-on experience in AI in QA, Intelligent Automation, Performance Engineering, Data Testing, and API Testing. Known for combining strategic vision with practical execution, he has consistently helped organizations modernize testing practices and deliver high-quality, scalable solutions.
Alpesh Vala
A seasoned Quality Assurance and Test Automation professional with over 19 years of experience in both automated and manual testing across complex enterprise systems. Core technical expertise includes Selenium, QTP, SQL, and non-UI automation using BASH scripting, with a strong focus on building scalable and maintainable automation solutions.
Highly experienced in automation framework development, test planning and strategy, effort estimation, delivery scheduling, and team leadership, including team allocation, coordination, and mentoring. Adept at stakeholder communication through clear status reporting and delivery governance.
Demonstrates deep proficiency in a wide range of testing methodologies, including Black Box Testing, System Integration Testing (SIT), Compatibility Testing, Regression Testing, Parallel Testing, Database Testing, and Alpha/UAT validation, ensuring robust quality across the software lifecycle.
Possesses strong product and domain knowledge within capital markets, trading, and banking platforms.
Daniel Mascarenhas
A skilled Test Automation Architect and QA Leader with 20+ years of expertise in delivering high-quality software faster through strategic automation. He has a proven track record in designing and implementing test automation solutions, building scalable test frameworks, and developing end-to-end CI/CD pipelines. He is experienced in defining QA processes that enhance quality, efficiency, and reliability across the software development lifecycle.
He is also adept at mentoring teams, aligning QA practices with business objectives, and fostering a culture of innovation, accountability, and continuous improvement.
Suganthi Kanagavel
Seasoned QA Architect with over 19 years of expertise in lifecycle automation and service virtualization for banking and insurance sectors.
Heads the QET Gen AI COE, responsible for the creation and implementation of advanced generative AI solutions to fulfill business requirements.
Solution Architect for various internal IP tools.
Proficient in developing scalable test solutions and setting up the TCoE model for various clients.









