JEFF SING – Director of Engineering, Iterable
CONNECTING THE DOTS: HOW SERVICE DELIVERY REVIEWS LEAD TO EFFECTIVE QUALITY ROAD MAPPING
Do you ever feel in your role as a QA leader or test engineer that you get assigned tasks that seem more like stopgaps than work that will drive lasting quality improvement? Is it challenging to get other engineering leaders to align on which projects will be most impactful, rather than reacting to whatever is currently blocking them? Do you find it hard to express to your senior leadership team what quality really looks like?
In the last half-decade of running quality engineering programs, Jeff has often faced these challenges when deciding what work his teams should be delivering. One of the tools he has used to navigate them and establish direction is the Quality Service Delivery Review, which helps establish:
- What does healthy quality look like in your engineering organization (code quality, engineering process, deliverables)?
- Is the Quality Organization successful in delivering this (which KPIs, and how are they consumed)?
- Is our overall engineering output actually delivering with quality (and what happens if it's not)?
- Are our customers satisfied with our product, and how can we determine this?
Being able to answer the questions above will allow you to align with the engineering leadership team on what quality initiatives should be worked on quarter to quarter.
DILRUBA MALIK – Senior SQA Manager, Palo Alto Networks
DIGITAL TRANSFORMATION IN QUALITY ASSURANCE – FOLLOWING THE CHANGE IN QA
Future Trends in Quality Assurance.
RAVI PULLE – Principal Member of Technical Staff, Salesforce
PERFORMANCE TESTING IN PRODUCTION AND CONTINUOUS PROFILING
Performance testing is an essential aspect of software development, ensuring that applications can handle the demands of real-world scenarios. Traditionally, performance testing has been conducted in isolated environments, detached from the production environment, leading to potential discrepancies between test results and actual performance. However, with the evolution of technology and methodologies, the paradigm of performance testing has shifted towards conducting tests directly in production environments. This paradigm shift, combined with the practice of continuous profiling, has revolutionized the way organizations approach performance optimization and stability.
In this conference talk, he will delve into the fascinating world of performance testing in production and low-overhead continuous contextual profiling, exploring the benefits, challenges, and best practices associated with this innovative approach. He will discuss the fundamental concepts, tools, and techniques that enable developers and performance engineers to evaluate and enhance the performance of their applications in real-time production scenarios.
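To make the idea of low-overhead continuous profiling concrete before the outline below, here is a minimal, stdlib-only sketch of the sampling approach such profilers are built on. It is an illustration of the principle, not any specific vendor tool or the exact tooling covered in the talk.

```python
# Minimal sketch of sampling-based profiling: a background thread periodically
# records which function each application thread is executing, so hot code
# paths emerge from sample counts with far less overhead than tracing every
# call. Production-grade continuous profilers refine this same idea.
import collections
import sys
import threading
import time

def start_sampler(interval=0.01):
    samples = collections.Counter()

    def sample_loop():
        sampler_id = threading.get_ident()
        while True:
            for thread_id, frame in sys._current_frames().items():
                if thread_id == sampler_id:
                    continue  # don't profile the sampler thread itself
                samples[f"{frame.f_code.co_filename}:{frame.f_code.co_name}"] += 1
            time.sleep(interval)

    threading.Thread(target=sample_loop, daemon=True).start()
    return samples

def busy_work():
    # Stand-in for real application work.
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

if __name__ == "__main__":
    samples = start_sampler()
    busy_work()
    for location, count in samples.most_common(5):
        print(f"{count:6d}  {location}")  # hottest code paths by sample count
```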
Key Points to Be Covered:
Understanding the Importance of Performance Testing in Production:
- The limitations of traditional isolated performance testing environments.
- The significance of assessing performance in real-world conditions.
- The impact of performance issues on user experience, business reputation, and revenue.
The Role of Continuous Profiling in Performance Optimization:
- Introduction to continuous profiling and its relationship with performance testing.
- Leveraging profiling tools and techniques to capture real-time performance data.
- Analyzing profiling data to identify performance bottlenecks and areas for improvement.
Overcoming Challenges in Performance Testing in Production:
- Addressing concerns related to security, data privacy, and user experience.
- Minimizing the impact on production environments during performance testing.
- Strategies for mitigating risks and ensuring seamless performance testing in live systems.
Implementing Best Practices for Performance Testing in Production:
- Defining relevant performance metrics and goals for production testing.
- Leveraging A/B testing and canary deployments for controlled performance testing.
- Integrating performance testing into the CI/CD pipeline for continuous improvement (a minimal pipeline-gate sketch follows this outline).
Real-World Case Studies and Success Stories:
- Showcasing examples of organizations that have successfully adopted performance testing in production and continuous profiling.
- Highlighting the performance gains, stability improvements, and cost savings achieved through this approach.
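As referenced in the CI/CD bullet above, below is a minimal, hedged sketch of what a pipeline-level performance gate against a canary endpoint could look like. The endpoint, request count, and latency budget are illustrative assumptions, not figures from the talk.

```python
# Minimal sketch of a CI/CD performance gate: hit a canary endpoint a fixed
# number of times and fail the build if p95 latency exceeds the budget.
# URL, request count, and thresholds are hypothetical placeholders.
import statistics
import sys
import time

import requests  # third-party: pip install requests

TARGET = "https://canary.example.com/api/health"  # hypothetical canary endpoint
NUM_REQUESTS = 200
P95_BUDGET_MS = 250

latencies_ms = []
for _ in range(NUM_REQUESTS):
    start = time.perf_counter()
    response = requests.get(TARGET, timeout=5)
    response.raise_for_status()
    latencies_ms.append((time.perf_counter() - start) * 1000)

p95 = statistics.quantiles(latencies_ms, n=100)[94]  # 95th percentile
print(f"p95 latency: {p95:.1f} ms (budget {P95_BUDGET_MS} ms)")
sys.exit(0 if p95 <= P95_BUDGET_MS else 1)  # non-zero exit fails the pipeline step
```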
HEATH HOWE – Manager, QA Engineering, Sureify
A PICTURE’S WORTH A THOUSAND LINES OF CODE: SNAPSHOTS FOR FUNCTIONAL TESTING
Heath has always found writing UI automated tests to be fun, at least relative to writing other kinds of tests. However, he always struggled with just how much to test beyond the data itself…the font? The color? The size? The position? How many browsers? How many screen resolutions?
Snapshot testing (aka visual testing) doesn't address all of these, but it addresses a great many. At Sureify, Heath is working on a greenfield project with no existing UI tests and a couple of manual testers being brought into test automation, which made it a great opportunity to implement snapshot testing.
Heath's team compared several of the paid and open-source tools for snapshot testing, eventually adopting Percy from BrowserStack for its balance of features and pricing options. Once the team figured out how to manage test data, snapshot tests using Percy proved to be a very effective way of testing thousands of data points for each screen captured.
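As a rough illustration of what such a test can look like, here is a minimal sketch assuming Percy's Selenium Python SDK and a hypothetical page under test; a real suite would run it through `percy exec` with a `PERCY_TOKEN` so snapshots are uploaded and diffed against an approved baseline.

```python
# Minimal sketch: a Selenium test that captures a Percy snapshot instead of
# asserting on fonts, colors, sizes, and positions one by one. The URL and
# element locator are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from percy import percy_snapshot  # provided by the percy-selenium package

def test_dashboard_snapshot():
    driver = webdriver.Chrome()
    try:
        driver.get("https://staging.example.com/dashboard")  # hypothetical URL
        # Pin the data behind the screen so diffs reflect UI changes rather
        # than changing content (the data-management problem mentioned above).
        driver.find_element(By.ID, "load-fixture-data").click()  # hypothetical element
        percy_snapshot(driver, "Dashboard - default view")
    finally:
        driver.quit()
```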
NICHOLAS WESTRUM – QA Automation Engineer, Clockwise
HOW TO MAKE YOUR USERS AND DEVELOPERS HAPPY BY LEARNING TO LOVE AUTOMATION
A software deployment is exercised by many different types of tests on its journey from a developer opening a pull request to merging the code into production. These tests ensure that the quality of the code is up to standard and that the core product does not fail when deployed to your end users. From unit tests to integration tests, many different frameworks can be used to test the quality of the code. The results of these tests can end up in different areas of your pipeline, and it is very helpful to be able to view metrics from those results over time, both to demonstrate that your product is of high quality and to track performance trends.
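As one hedged illustration of pulling those scattered results into a single trend, here is a minimal sketch that aggregates JUnit-style XML reports (a format most test frameworks and CI systems can emit) into one row of a pass/fail history file; the report locations and output file name are assumptions, not part of any particular pipeline.

```python
# Minimal sketch: roll up JUnit-style XML reports from a pipeline run into a
# CSV row so pass/fail counts can be charted over time. Paths are hypothetical.
import csv
import glob
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def summarize(report_glob="reports/**/junit*.xml"):
    totals = {"tests": 0, "failures": 0, "errors": 0, "skipped": 0}
    for path in glob.glob(report_glob, recursive=True):
        root = ET.parse(path).getroot()
        # Reports may use a <testsuites> wrapper or a single <testsuite> root.
        for suite in root.iter("testsuite"):
            for key in totals:
                totals[key] += int(suite.get(key, 0))
    return totals

if __name__ == "__main__":
    totals = summarize()
    passed = totals["tests"] - totals["failures"] - totals["errors"] - totals["skipped"]
    with open("test_trend.csv", "a", newline="") as fh:
        csv.writer(fh).writerow([
            datetime.now(timezone.utc).isoformat(),
            totals["tests"], passed, totals["failures"],
            totals["errors"], totals["skipped"],
        ])
```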
ANJALI SHINDE – Software Manager II, QA, Samsung Research America
MOBILE APPLICATION AUTOMATION TESTING
In this exploration of mobile application automated testing, we delve into unique challenges and opportunities that arise when ensuring the quality of mobile apps through automation.
From selecting the right testing tools and frameworks to adapting strategies for diverse platforms, OS versions, devices, and levels of test coverage, we'll uncover the intricacies of a successful mobile automated testing approach.
Join us as we discuss best practices, expanding automation coverage, and finding solutions for hard-to-automate scenarios.
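For flavor, here is a minimal sketch assuming the Appium Python client driving a local Appium server; it is one common tooling choice rather than the talk's prescribed stack, and the device, app path, and locators are hypothetical placeholders.

```python
# Minimal sketch: an Appium session against an Android emulator. Switching to
# iOS is largely a matter of swapping the options object (e.g. XCUITestOptions)
# while the test logic stays the same. All identifiers below are hypothetical.
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

options = UiAutomator2Options()
options.set_capability("deviceName", "emulator-5554")          # hypothetical device
options.set_capability("app", "/path/to/app-under-test.apk")   # hypothetical build

# Appium 2 servers listen at http://127.0.0.1:4723 by default.
driver = webdriver.Remote("http://127.0.0.1:4723", options=options)
try:
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "home_screen").is_displayed()
finally:
    driver.quit()
```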
ROBIN GUPTA – VP, Engineering, Provar
DO YOU USE SELENIUM ONLY FOR TEST AUTOMATION?
Selenium’s official site states, “Selenium automates browsers. That’s it!” But there’s more: Selenium and the WebDriver API can handle any repetitive task that involves human-system interaction. We’ll explore common use cases for Selenium, followed by its integration into performance testing. Beyond testing, can Selenium automate tasks like admin updates, maintenance, reporting, or even shopping? Drawing on personal experience, he will outline automating desktop applications and the challenges involved.
We’ll go further by showcasing how we used Selenium to create a DIY RPA tool, automating the Oracle ADFDI plugin for Excel.
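To give a flavor of those “beyond testing” use cases, here is a minimal sketch, with a hypothetical internal web app and placeholder locators and credentials, of Selenium performing a routine reporting chore rather than a test.

```python
# Minimal sketch: Selenium as task automation rather than test automation,
# logging in to an internal app and triggering a recurring report export.
# The URL, credentials, and element IDs are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://admin.example.com/login")  # hypothetical internal app
    driver.find_element(By.ID, "username").send_keys("report-bot")
    driver.find_element(By.ID, "password").send_keys("********")
    driver.find_element(By.ID, "submit").click()
    # Wait for the reporting page, then trigger the weekly export.
    WebDriverWait(driver, 30).until(
        EC.element_to_be_clickable((By.ID, "export-weekly-report"))
    ).click()
finally:
    driver.quit()
```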
SCOTT GLASER – Senior Director, Software Engineering, Salesforce
IMPACTFUL DEFECT METRICS FOR PRODUCTIVITY
Defects are a key outcome of testing, and tracking and measuring them can be done in many different ways to guide your testing effort. Used effectively, defect metrics can improve the quality and productivity of your engineering team, and they keep you on target for your team’s goals and, ultimately, customer satisfaction.
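As a concrete flavor of what such metrics can look like, here is a minimal sketch of two widely used ones, defect density and defect escape rate; the function names and example numbers are hypothetical placeholders, not data from any real project.

```python
# Minimal sketch: two common defect metrics. Inputs would normally come from
# your defect tracker's export; the sample numbers below are made up.

def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_escape_rate(found_in_production: int, found_total: int) -> float:
    """Share of all defects that escaped testing and reached customers."""
    return found_in_production / found_total

if __name__ == "__main__":
    print(f"Defect density: {defect_density(42, 120.0):.2f} defects/KLOC")
    print(f"Escape rate: {defect_escape_rate(6, 42):.1%}")
```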