TEST AUTOMATION SUMMIT | WASHINGTON DC – May 10, 2024

SPEAKERS

SUMIT KUMAR – CTO, WTAnow

TESTING TODAY’S APPLICATIONS – EXPLORING TEST AUTOMATION TOOLS FOR EFFICIENT SOFTWARE TESTING

In the ever-evolving landscape of software development, the need for robust and efficient testing methodologies is paramount. As applications become more complex and feature-rich, manual testing alone is no longer sufficient to ensure the delivery of high-quality software within tight timelines. This talk delves into the world of test automation tools, providing an overview of their significance in testing today’s applications.

The talk begins by highlighting the challenges faced by modern software development teams, emphasizing the demand for rapid and reliable testing processes. It then introduces various test automation tools currently available on the market, showcasing their unique features, capabilities, and suitability for different testing scenarios.

The talk goes on to explore key considerations for selecting the appropriate automation tool, including compatibility with diverse application architectures, ease of integration into existing workflows, and scalability to accommodate evolving project requirements. Real-world case studies and success stories are presented to illustrate the tangible benefits and efficiencies gained through the adoption of test automation.

Finally, the talk addresses common misconceptions and challenges associated with test automation, offering insights into best practices and strategies for overcoming these hurdles. It also touches on the importance of a balanced approach, where manual and automated testing complement each other to achieve comprehensive test coverage.

ADAM SANDMAN – CEO, Inflectra

MANAGING RISK-BASED TESTING IN THE AGE OF AI

Artificial Intelligence (AI) is revolutionizing how software testing is performed, allowing for faster and more accurate detection of defects. However, with this new technology come new risks that must be managed to ensure the quality and reliability of the software being developed. In this talk, Adam Sandman will explore the concept of Risk-Based Testing (RBT) and how it can be applied in the age of AI. He will discuss the challenges that arise when implementing RBT, including the need to balance the benefits of AI with the risks of false positives and false negatives. Adam will also examine the various techniques and tools that can be used to manage these risks, such as model-based testing, exploratory testing, and risk analysis.
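
As a generic illustration of the risk-scoring idea at the heart of RBT (not taken from Adam’s talk), the sketch below ranks areas under test by likelihood of failure times business impact; the targets, names, and weights are invented.

```python
# A minimal, generic sketch of risk-based test prioritization: rank each area
# under test by risk score = failure likelihood x business impact, and spend
# the deepest (and earliest) test effort on the highest-risk areas.
# The targets and scores below are invented for illustration only.
from dataclasses import dataclass


@dataclass
class TestTarget:
    name: str
    failure_likelihood: int  # 1 (rare) to 5 (frequent), e.g. from defect history
    business_impact: int     # 1 (cosmetic) to 5 (critical), e.g. revenue or compliance

    @property
    def risk_score(self) -> int:
        return self.failure_likelihood * self.business_impact


targets = [
    TestTarget("checkout flow", failure_likelihood=4, business_impact=5),
    TestTarget("profile avatar upload", failure_likelihood=2, business_impact=1),
    TestTarget("AI-assisted defect triage", failure_likelihood=3, business_impact=4),
]

# Highest risk first: these areas get the most thorough automated coverage.
for target in sorted(targets, key=lambda t: t.risk_score, reverse=True):
    print(f"{target.risk_score:>2}  {target.name}")
```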

JAY WALTERS – Founder, Lead Consultant, Constantly Testing

DEVTESTSECOPS: TESTING AND SECURITY INTEGRATED WITHIN DEVOPS

In today’s rapidly evolving software development landscape, it is more important than ever to streamline and enhance collaboration among Development (Dev), Testing (Test), Security (Sec), and Operations (Ops) professionals. Producing software that users love and rely on can provide businesses with significant competitive advantages, driving substantial business growth. Enabling high-quality, secure software delivery capabilities in teams is increasingly vital, even as doing so grows in complexity.

While the term “DevOps” explicitly acknowledges the roles of Developers (Dev) and Operations (Ops) in building and delivering software, it is primarily designed to support the Software Development Life Cycle (SDLC). However, it may not fully champion the Software Testing Life Cycle (STLC), which is needed to address the growing industry demands for rapid and comprehensive feature testing.

This presentation on DevTestSecOps explores the evolving paradigm where diverse software manufacturing skills intersect with unified approaches to collaboratively crafting outstanding software. DevTestSecOps recognizes that confining quality and security practices to specific development stages is inefficient. Instead, it champions the integration of entire teams in the continuous process of verification and validation (V&V), incorporating holistic testing disciplines that begin prior to development and extend seamlessly throughout the entire lifecycle.

Delving into the core concepts of DevTestSecOps, we look at the importance of moving away from outdated assembly-line strategies for software delivery. We examine how antiquated approaches often marginalize functional and non-functional testing experts, and the long-term consequences of the ‘throw-it-over-the-wall’ mentality. We will then explore progressive new ways to foster genuine collaboration and teamwork, rallying entire teams around the reliable and trustworthy software they consistently produce.

DAVID ISAAC – Managing Partner, Business Performance Systems

AUTONOMOUS TESTING: PROMISES AND PITFALLS

Autonomous testing leveraging AI promises to make automated testing easier, reducing the need to write and maintain test scripts. But can it deliver on this promise? In this session, we will explore various approaches and tools for autonomous testing to understand their current capabilities, limitations, promises, and pitfalls. We will cut through the marketing hype so that you come away with a practical understanding of what autonomous testing offers you as a tester or QA manager.

MARTIJN GOOSSENS – Agile Quality Consultant, Xebia

BECOME A PERFORMANCE TEST MASTER – THE NON-TECHNICAL SIDE OF THINGS

Stress tests, load tests, spike tests, soak tests: Martijn has done them all, from setting up the test scripts to reporting the results and findings. While there is plenty of documentation on the technical side, he wants to share his takeaways on running a successful test and analyzing the results. Good results start with a good test plan, so this talk begins with how to set up a proper performance test for a web application: a recipe that, when followed, will give you meaningful results to dive into.

As a seasoned performance tester, Martijn finds there are a few recurring signals he looks out for in performance test results. He will share these key indicators and give you the tools to spot a server in trouble, and to judge when it might be OK to end a test prematurely. After discussing the indicators, we will look at three real-life scenarios and see whether you can spot the details as we analyze them together.
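
To make the “recipe” concrete, here is one minimal way a load test for a web application could be scripted. The choice of Locust, the host, and the endpoints are assumptions for illustration, not tools or systems from the talk.

```python
# locustfile.py -- a minimal load-test sketch using Locust (an assumption, not
# necessarily the tooling discussed in the talk). The host and endpoints are
# hypothetical. Run with:
#   locust -f locustfile.py --host https://staging.example.com
from locust import HttpUser, task, between


class WebShopUser(HttpUser):
    # Simulated users pause 1-5 seconds between actions to approximate real traffic.
    wait_time = between(1, 5)

    @task(3)
    def browse_catalog(self):
        # Weighted 3x: most virtual users spend their time browsing.
        self.client.get("/products")

    @task(1)
    def view_product(self):
        # Group all product-detail requests under one name in the report.
        self.client.get("/products/42", name="/products/[id]")
```

From here, the ramp-up profile, user count, and duration are what distinguish a load test from a stress, spike, or soak test.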

ANURAG SHARMA – Vice President of Engineering, LambdaTest

HOW WE SYNCHRONIZED AND UPDATED 3000+ ENVIRONMENTS IN A MULTI-DATA CENTER OPERATION

At LambdaTest, Anurag and his team are responsible for maintaining and scaling more than 3000 different types of environments used for testing by customers. In this talk, he will highlight the challenges LambdaTest faces in keeping environments consistent and up to date across multiple data centers housing hardware such as Mac Minis and Linux machines. He will introduce ‘Reconciler’, an internal project developed to address desynchronization and outdated configurations in its production machines.

This talk will cover how Reconciler uses a ‘spec’ or ‘master sheet’ as a source of truth to maintain an updated inventory, along with an observability tool for real-time health snapshots.
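
The general reconciliation pattern described here can be sketched as comparing the desired spec against each machine’s actual state and reporting the drift. The spec format, field names, and inventory shape below are hypothetical, not LambdaTest’s actual Reconciler implementation.

```python
# A speculative sketch of spec-driven reconciliation: the 'spec' (master sheet)
# is the source of truth, and any machine whose installed versions differ from
# it is flagged for an update. All names and versions here are hypothetical.

def reconcile(spec: dict, inventory: list[dict]) -> list[dict]:
    """Return the machines whose actual state has drifted from the spec."""
    drifted = []
    for machine in inventory:
        desired = spec.get(machine["type"], {})
        actual = machine["installed"]
        delta = {k: v for k, v in desired.items() if actual.get(k) != v}
        if delta:
            drifted.append({"host": machine["host"], "needs": delta})
    return drifted


# Desired state per machine type (the 'master sheet').
spec = {"mac-mini": {"safari": "17.4", "xcode": "15.3"}}

# A (hypothetical) live inventory snapshot from the observability tooling.
inventory = [
    {"host": "dc1-mac-007", "type": "mac-mini",
     "installed": {"safari": "17.2", "xcode": "15.3"}},
]

print(reconcile(spec, inventory))
# -> [{'host': 'dc1-mac-007', 'needs': {'safari': '17.4'}}]
```

In a real system the drift report would feed an update scheduler (and the downtime strategy mentioned below) rather than a print statement.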

The talk will also delve into how customizations in the spec helped decide downtime strategies for updates and ultimately enhanced the customer experience by resolving intermittent issues.

HAL DERANEK – Principal of Quality Engineering, Slalom

USING TEST CODE SCAFFOLDING TO JUMPSTART YOUR QUALITY

The working life of an automation tester often seems to be an endless game of catch-up. A sprint starts and developers begin their work, submitting their code somewhere between the middle and the end of the sprint. The tester then has to scramble to write their automation ASAP.

This situation is less than ideal. Deadlines are missed, stories roll into the next sprint, or, even worse, automation is relegated to “nice to have” status. So what is one to do? How can test automation keep up with development while maintaining a high standard of quality? The answer is simple:

Test code scaffolding

In this presentation, Hal will describe what test code scaffolding is and how to best implement it. He will walk through a hypothetical example of it in practice using a user story and code examples.
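
To give a flavour of the idea, here is one possible shape such scaffolding could take, assuming pytest; the user story and test names are invented and are not Hal’s actual example.

```python
# A hypothetical illustration of test code scaffolding: the test skeletons are
# derived from a user story's acceptance criteria at sprint start, before the
# feature code exists, then filled in as the implementation lands.
# Hypothetical story: "As a shopper, I can apply a discount code at checkout
# so that my order total is reduced."
import pytest


class TestApplyDiscountCode:
    @pytest.mark.skip(reason="scaffold: waiting on checkout API from dev")
    def test_valid_code_reduces_order_total(self):
        ...

    @pytest.mark.skip(reason="scaffold: waiting on checkout API from dev")
    def test_expired_code_is_rejected_with_message(self):
        ...

    @pytest.mark.skip(reason="scaffold: waiting on checkout API from dev")
    def test_code_cannot_be_applied_twice_to_same_order(self):
        ...
```

Because the skeletons exist from day one, they can be reviewed against the story before any code is written and converted into real assertions as soon as the feature is delivered.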