SUMIT KUMAR – CTO, WTAnow
TESTING TODAY’S APPLICATION – EXPLORING TEST AUTOMATION TOOLS FOR EFFICIENT SOFTWARE TESTING
In the ever-evolving landscape of software development, the need for robust and efficient testing methodologies is paramount. As applications become more complex and feature-rich, manual testing alone is no longer sufficient to ensure the delivery of high-quality software within tight timelines. This abstract delves into the world of test automation tools, providing an overview of their significance in today’s application testing landscape.
The abstract begins by highlighting the challenges faced by modern software development teams, emphasizing the demand for rapid and reliable testing processes. It then introduces various test automation tools currently available on the market, showcasing their unique features, capabilities, and suitability for different testing scenarios.
It further explores key considerations for selecting the appropriate automation tool, including compatibility with diverse application architectures, ease of integration into existing workflows, and scalability to accommodate evolving project requirements. Real-world case studies and success stories illustrate the tangible benefits and efficiencies gained through the adoption of test automation.
Finally, it addresses common misconceptions and challenges associated with test automation, offering insights into best practices and strategies for overcoming these hurdles, and touches upon the importance of a balanced approach in which manual and automated testing complement each other to achieve comprehensive test coverage.
ADAM SANDMAN – CEO, Inflectra
MANAGING RISK-BASED TESTING IN THE AGE OF AI
Artificial Intelligence (AI) is revolutionizing how software testing is performed, allowing for faster and more accurate detection of defects. However, with this new technology comes new risks that must be managed to ensure the quality and reliability of the software being developed. In this talk, Adam Sandman will explore the concept of Risk-Based Testing (RBT) and how it can be applied in the age of AI. He will discuss the challenges that arise when implementing RBT, including the need to balance the benefits of AI with the risks of false positives and false negatives. Adam will also examine the various techniques and tools that can be used to manage these risks, such as model-based testing, exploratory testing, and risk analysis.
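As a rough, hypothetical illustration of the risk analysis behind RBT (not material from the talk), the Python sketch below ranks test cases by the classic score risk = likelihood x impact, discounting an AI-predicted failure likelihood by its confidence so that possible false positives or negatives do not dominate the ranking. All names, fields, and weights are assumptions.

# Illustrative sketch only: risk-based prioritization with an AI-derived signal.
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_likelihood: float   # 0.0-1.0, e.g. from history or an AI defect predictor
    business_impact: float      # 0.0-1.0, severity if the covered feature breaks
    ai_confidence: float = 1.0  # how much we trust the AI-derived likelihood

def risk_score(tc: TestCase) -> float:
    # Classic RBT: risk = likelihood x impact. Discount the AI-predicted
    # likelihood by its confidence so low-confidence predictions (possible
    # false positives/negatives) carry less weight in the ranking.
    adjusted_likelihood = tc.failure_likelihood * tc.ai_confidence
    return adjusted_likelihood * tc.business_impact

if __name__ == "__main__":
    suite = [
        TestCase("checkout_payment", 0.6, 0.9, ai_confidence=0.8),
        TestCase("profile_avatar_upload", 0.7, 0.2, ai_confidence=0.5),
        TestCase("login", 0.3, 1.0, ai_confidence=0.9),
    ]
    for tc in sorted(suite, key=risk_score, reverse=True):
        print(f"{tc.name}: risk={risk_score(tc):.2f}")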
DAVID ISAAC – Managing Partner, Business Performance Systems
AUTONOMOUS TESTING: PROMISES AND PITFALLS
Autonomous testing leveraging AI promises to make automated testing easier, reducing the need to write and maintain test scripts. But can it deliver on this promise? In this session, we will explore various approaches and tools for autonomous testing to understand their current capabilities, limitations, promises, and pitfalls. We will cut through marketing hype, so you have a practical understanding of what autonomous testing offers you as a tester or QA manager.
MARTIJN GOOSSENS – Agile Quality Consultant, Xebia
BECOME A PERFORMANCE TEST MASTER – THE NON-TECHNICAL SIDE OF THINGS
Stress tests, load tests, spike tests, soak tests. Martijn has done them all, from setting up the test scripts to reporting the results and findings. While there is plenty of documentation around the technical side, he wants to share his takeaways on running a successful test and analyzing the results. There wouldn’t be good results without a good test plan, so in this talk we will start by looking at how to set up a proper performance test for a web application: a recipe that, when followed, will give you pretty results to dive into. As somewhat of a performance test expert, he finds there are a few recurring signals he looks out for in performance test results. He will share these key indicators and give you the tools to spot a server in trouble and to judge when it might be OK to end a test prematurely. After we’ve discussed the indicators, we will look at three real-life scenarios and check whether you can spot the details as we analyze them together.
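To make the idea of “key indicators” concrete, here is a small, hypothetical Python sketch (not the speaker’s recipe) that computes a few common signals from load-test samples, p95 latency, error rate, and a rising-latency trend, and flags when the server looks to be in trouble. The thresholds and field names are assumptions.

# Illustrative sketch only: basic health signals from per-request load-test samples.
import statistics

def percentile(values, pct):
    ordered = sorted(values)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

def analyze(samples, p95_budget_ms=800, max_error_rate=0.02):
    """samples: list of dicts like {"latency_ms": 412, "status": 200}."""
    latencies = [s["latency_ms"] for s in samples]
    errors = [s for s in samples if s["status"] >= 500]

    p95 = percentile(latencies, 95)
    error_rate = len(errors) / len(samples)

    # Latency rising in the second half of the run is a classic sign of a
    # saturated or leaking server, even if the averages still look acceptable.
    half = len(latencies) // 2
    trend_up = statistics.mean(latencies[half:]) > 1.5 * statistics.mean(latencies[:half])

    in_trouble = p95 > p95_budget_ms or error_rate > max_error_rate or trend_up
    return {"p95_ms": p95, "error_rate": error_rate,
            "latency_trending_up": trend_up, "server_in_trouble": in_trouble}

if __name__ == "__main__":
    fake_run = [{"latency_ms": 300 + i * 5, "status": 200 if i % 40 else 503}
                for i in range(200)]
    print(analyze(fake_run))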
HAL DERANEK – Principal of Quality Engineering, Slalom
USING TEST CODE SCAFFOLDING TO JUMPSTART YOUR QUALITY
The work-life of an automated tester often seems to be an endless game of catch-up. A sprint starts and developers begin their work, submitting their code somewhere between the middle and the end of the sprint. The tester then has to scramble to write their automation ASAP.
This situation is less than ideal. Deadlines are missed, stories roll into the next sprint, or, even worse, automation is relegated to a “nice to have”. So what is one to do? How can test automation keep up with development while maintaining a high standard of quality? The answer is simple:
Test code scaffolding
In this presentation, Hal will describe what test code scaffolding is and how to best implement it. He will walk through a hypothetical example of it in practice using a user story and code examples.
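Hal’s example is his own, but as a minimal sketch of what test code scaffolding can look like in practice, assuming a pytest-style project: the test skeleton for a user story is written at the start of the sprint, with the page object and assertions stubbed out and the tests skipped until the feature lands. Every name below is hypothetical.

# Illustrative scaffold only. Story: "As a shopper, I can apply a discount code at checkout."
import pytest

class CheckoutPage:
    # Placeholder page object agreed with developers before implementation exists.
    def apply_discount_code(self, code: str) -> None:
        raise NotImplementedError("wire up to the real UI/API once delivered")

    def total(self) -> float:
        raise NotImplementedError

@pytest.fixture
def checkout_page():
    # TODO: replace with real setup (browser session, seeded cart, etc.)
    return CheckoutPage()

@pytest.mark.skip(reason="scaffold: enable when the discount feature is merged")
def test_valid_discount_reduces_total(checkout_page):
    # Assumes a $100 cart and a 10% discount code.
    checkout_page.apply_discount_code("SAVE10")
    assert checkout_page.total() == pytest.approx(90.00)

@pytest.mark.skip(reason="scaffold: enable when the discount feature is merged")
def test_invalid_discount_is_rejected(checkout_page):
    with pytest.raises(ValueError):
        checkout_page.apply_discount_code("NOT-A-CODE")

With the skeleton in place before developers finish, the tester only fills in locators and expected values when the code arrives, instead of writing the suite from scratch at the end of the sprint.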
BILL GOLEMAN – STSM and Manager of Test Automation, IBM
TESTING THE IBM CLOUD – BUILDING THE ULTIMATE TEST MACHINE
A description of the journey of the next-generation cloud team at IBM, which started as a new test team and built what we originally envisioned as The Ultimate Automated Test Machine. We will cover the phases we went through, from deploying new cloud infrastructure software from scratch and running tests by hand, to where we are today: executing over 140,000 tests a day, post-processing failed test results, performing initial triage, automatically identifying an existing bug to mark a failure as a duplicate of or opening a new bug, and dynamically displaying results in real time, all completely automated. How we started, how we made steady progress, where we are now, and where we are going next.
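The abstract does not describe IBM’s implementation, but a minimal sketch of the automatic triage step it mentions might look like the following: hash a normalized failure signature, attach the failure to a known bug when the signature matches, and open a new bug otherwise. The bug-tracker calls and names are hypothetical.

# Illustrative sketch only: deduplicating test failures against known bugs.
import hashlib

KNOWN_BUGS = {}  # signature -> bug id; would normally be loaded from the bug tracker

def failure_signature(test_name: str, error_log: str) -> str:
    # A real system would also strip timestamps, hostnames, and other noise
    # before hashing, so the same root cause always yields the same signature.
    normalized = " ".join(error_log.split())
    return hashlib.sha256(f"{test_name}:{normalized}".encode()).hexdigest()[:12]

def open_new_bug(test_name: str, error_log: str) -> str:
    # Placeholder for a real bug-tracker API call.
    return f"CLOUD-{9000 + len(KNOWN_BUGS)}"

def triage(test_name: str, error_log: str) -> str:
    sig = failure_signature(test_name, error_log)
    if sig in KNOWN_BUGS:
        bug_id = KNOWN_BUGS[sig]
        print(f"{test_name}: duplicate of {bug_id}")
    else:
        bug_id = open_new_bug(test_name, error_log)
        KNOWN_BUGS[sig] = bug_id
        print(f"{test_name}: opened new bug {bug_id}")
    return bug_id

if __name__ == "__main__":
    triage("provision_vpc_test", "Timeout waiting for subnet to become ACTIVE")
    triage("provision_vpc_test", "Timeout waiting for subnet to become ACTIVE")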
ARTEM GOLUBEV – CEO & Co-Founder, testRigor
REVOLUTIONIZING TEST AUTOMATION: HARNESSING GENERATIVE AI TO EMPOWER PRODUCT MANAGEMENT AND ACCELERATE SDLC
In this session, you’ll learn how to save 95% of your time on test automation by leveraging Generative AI breakthroughs and moving 90%-100% of the work to the product management organization. We’ll cover how to improve the SDLC to let you move up to 30% faster with up to 90% fewer bugs in production, and how Generative AI can help eliminate 98% of the effort of maintaining end-to-end tests.
PETER KIM – Director of Quality Engineering, Kinetica DB
CASE STUDY OF AUTOMATION
Driving quality throughout the SDLC by managing a near self-driving CI/CD pipeline, while leveraging the hottest new test automation technologies, strategies, and tools, with the goal of maximized ROI, including nice visual test reports to boot... ahh... this is the goal of many engineering teams. Automation has become (again) a hot topic on many fronts. For example: what are the real gains of test automation vs. manual testing? How can AI help us? Are SDETs automating just for the sake of ‘automating’? “We have thousands of automated tests! Our automation is great! We can conduct release acceptance tests in 10 minutes”... only to find out that a customer crashed the system within 30 seconds of the release.
Drawing on so many dedicated resources, cross-collaboration across multiple respected companies, and numerous conferences attended, this presentation shares the many poor decisions and unexpected outcomes along the way, and finally how implementing a simple approach to designing and building a high-functioning automation strategy really works.