Learn from the Testing Experts
24th July, 2025
CHENNAI
Keynotes
Redefining Testing Efficiency Through Generative Intelligence
Generative AI (GenAI) is rapidly transforming the software testing landscape by offering intelligent, scalable solutions to longstanding challenges such as data complexity, limited coverage, and time-consuming manual effort. This session explores the practical and strategic role of GenAI in reshaping the Software Testing Life Cycle (STLC), highlights key challenges in adopting GenAI-driven solutions, and introduces a structured framework for integrating GenAI across test generation, execution, maintenance, and defect triage. Real-world examples will demonstrate how GenAI improves efficiency, enhances test quality, and accelerates delivery. Attendees will leave with actionable insights into how GenAI can be effectively leveraged to enable smarter, more adaptive, and future-ready testing practices.
Reimagining Quality: The Crucial Role of Artificial Intelligence and Machine Learning in Enhancing Modern Testing Tools
As software systems grow in complexity and speed, traditional testing approaches struggle to keep pace with the demands of modern development cycles. In this keynote, we explore how Artificial Intelligence (AI) and Machine Learning (ML) are transforming the software testing landscape—ushering in an era of intelligent, adaptive, and highly efficient testing solutions.
The session will begin by highlighting the importance of AI in modern testing tools, setting the stage for an in-depth look at the key benefits these technologies bring—from accelerating release cycles to improving defect detection and enhancing test coverage. We will examine the competitive landscape, providing an overview of leading AI-driven testing platforms and successful real-world implementations that illustrate measurable impact.
Attendees will gain insights into AI-powered test automation techniques and their role in overcoming traditional bottlenecks. We will explore how AI augments test strategy, facilitates deeper coverage with fewer resources, and adapts to changing codebases in real time. Additionally, we’ll address the integration of AI into existing workflows, and how collaboration between human testers and AI can create more resilient and intelligent testing ecosystems.
The keynote will also consider challenges such as algorithmic transparency, data bias, and the security of AI testing environments. Best practices for safe, ethical, and effective adoption will be discussed, along with tools for monitoring and analyzing AI-driven testing processes.
Finally, we will look ahead to future trends and innovations, including self-healing test suites, continuous learning systems, and the evolving role of ethical AI in quality assurance.
Whether you’re a QA leader, developer, or tech strategist, this session will equip you with the vision and practical knowledge to harness AI’s full potential in your testing practices.
Takeaways from this talk
- AI Transforms Testing – Enables smarter, faster, and more adaptive testing processes
- Boosts Coverage & Accuracy – Improves defect detection and reduces false positives
- Speeds Up Releases – Supports rapid testing for continuous integration and delivery
- Proven Impact – Real-world success stories show strong ROI from AI testing adoption
- Seamless Integration – AI can be embedded into existing workflows and tools
- Human + AI = Better Testing – Collaboration outperforms automation alone
- Security & Ethics Matter – Responsible AI usage is critical in testing environments
- Challenges Remain – Data quality, scalability, and transparency must be addressed
- Future is Autonomous – Trends point to self-healing and continuously learning systems
- Follow Best Practices – Strategy, governance, and skilled teams ensure success
Featured Speakers
Smoke to Avoid Fire
Knowing that your test environments are stable is key to testers being able to start testing and feel confident that their valuable time is used well. When those environments are constantly changing due to a high frequency of deployments, this can make the applications unstable and difficult to test. You never know if they will change, or go down, in the middle of your work. Having mechanisms in place to continuously monitor the stability, with alerts that notify you when any system goes down, is essential in order to avoid wasting precious testing effort and time.
In agile teams, this is even more critical as changes are more frequent and the probability of systems being unavailable is also higher. There is a need for a nimble and efficient framework for smoke testing applications, enabling fast automated feedback on their stability. Testers are already struggling with insufficient time for testing within sprints. Having to juggle systems going down or not behaving as intended due to breaking changes will further push those timelines and cause teams to miss their sprint goals.
This was most definitely true for us. We needed an effective smoke testing strategy for our entire test environment ecosystem. We needed these checks to run fast and be atomic to give us rapid feedback. Looking at where we are currently, we see increased confidence in the ability of our teams to release changes to environments. We are able to identify issues with the stability of the applications much earlier. This has helped our teams to get more stable environments, do better testing, and meet sprint and release goals consistently.
In this session, I will present the work we did around building a robust smoke-testing architecture by reusing existing automation frameworks for various applications. I will also talk about the risk of bloated smoke test suites, the problems we ran into with smoke testing infrastructure, and the need for speed when executing smoke tests. You will hear about our learning, practical solutions that have worked well for us, and where we are planning to move forward with this in the future.
Takeaways from this talk
- Challenges with environment stability
- Effective smoke testing strategies
- Benefits of automated smoke testing
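The fast, atomic environment checks described above can be sketched in a few lines of Python. This is a minimal illustration, not the suite from the talk: the endpoint URLs, the two-second timeout budget, and the helper names are all hypothetical.

```python
import urllib.request
from urllib.error import URLError

# Hypothetical health endpoints for the applications under test.
ENDPOINTS = {
    "orders": "https://orders.example.test/health",
    "payments": "https://payments.example.test/health",
}

def check_app(url, fetch=urllib.request.urlopen, timeout=2.0):
    """One atomic check: a single request with a tight timeout, so the
    whole suite gives rapid feedback. Returns True on a 200 response."""
    try:
        with fetch(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

def smoke_report(fetch=urllib.request.urlopen):
    """Run every check and report which apps are down, for alerting."""
    return {app: check_app(url, fetch=fetch) for app, url in ENDPOINTS.items()}
```

Run on a schedule (or before each sprint's test session), a report with any `False` value would trigger the kind of alert the talk describes, before testers sink time into an unstable environment.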
AI-Driven API Test Automation: From Prompt to Script
I have built a custom AI agent that converts a user story into API tests using the selected framework, pytest.
Takeaways from this talk
- How the AI agent generates API test scripts from plain English.
- A demo video of converting a user story into an automated API test with the custom AI agent app.
The Zero-Documentation Circus: Taming Agile Testing in Salesforce
Working in a pure agile setup with zero documentation and only exploratory testing can feel like running a circus – fast, unpredictable, and a little chaotic. In this session, I’ll share how we built a focused regression suite for a Salesforce application by narrowing our exploratory testing and automation efforts specifically to customizations, since standard platform behavior is already well covered.
I’ll walk through how we used the Copado Robotic Testing Tool to automate the right areas, and how we defined a standard operating procedure (SOP) to run the suite at meaningful intervals – even with ad-hoc deployments happening throughout the day.
It’s about finding the right balance between agility, control, and platform awareness.
Takeaways from this talk
- Exploratory testing has its limits on a platform like Salesforce – regression efforts should focus on custom-built features, not standard platform functionality, to avoid an ever-expanding test suite
- Documentation isn’t always necessary to build a solid regression suite – but clarity on scope and ownership is critical
- An overview of Copado Robotic Testing (CRT): An Intelligent RPA Tool
- Defining an SOP to run the tests: Schedule test runs smartly, not just frequently
Modernizing Salesforce QA: Selenium and the Rise of AI-Powered Test Platforms
As the landscape of software testing evolves, modernizing Quality Assurance (QA) practices for Salesforce is essential. This session will delve into the role of Selenium in traditional testing approaches and highlight the emergence of AI-powered test platforms that are revolutionizing the QA process for Salesforce.
Takeaways from this talk
- The advantages of AI in automating complex test scenarios, reducing manual intervention, and enhancing test coverage in Salesforce applications
Test Automation Trends in 2025
Takeaways from this talk
- Latest tools and technologies in test automation
Panel Discussion Speaker
Dr. Sakthi Kumar Subramanian
Data and AI leader with 25+ years’ experience in engineering, cloud, and analytics across healthcare, telecom, automotive, and aerospace. Expert in GCP, AI/ML, data governance, and large-scale program management. Proven track record in driving innovation, automation, and strategic data initiatives across global enterprises.
Hemanth Kumar Reddy Dandu
With 13+ years in project management, business analysis, and test management, I drive software quality through strategic testing, team leadership, and process optimization. Passionate about delivering robust solutions on time and within budget, I thrive on collaboration and continuous innovation in the evolving world of software development.
Bhaskar Karuppiah
Bhaskar Karuppiah, VP of Engineering at Worldpay, leads India-based teams to innovate payment tech, streamline merchant onboarding, enhance checkout, and strengthen POS systems—advancing global commerce with agile, compliant, and cutting-edge solutions.
Ashok Annadurai
Ashok Annadurai is a results-driven AVP at Citibank with 15+ years in data warehousing and banking. He specializes in AML monitoring, ETL (Ab Initio, Pentaho), PL/SQL, Unix scripting, and Tableau, with award-winning leadership in optimizing processes and driving cross-functional success.
Rajarajeswari Rangasamy
- Roles performed – Test Architect, Test Solutioning & Consultancy, Test Program Management, Client Relationship, Capability Building, People Management, Professional Mentoring
- Hands-on experience in Test Strategy & Planning, Data & Test Environment Management, Functional & Non-Functional Testing
- Experienced in working with Agile and Waterfall methods
- IBM Architectural Thinking Practitioner