SPEAKERS
CHRISTINE BIRD – VP of Quality Management, Brown Brothers Harriman
GUIDING AUTOMATION STRATEGY: KNOWING WHERE AND WHEN TO AUTOMATE OR NOT
Digital transformation is the use of new, fast-moving, and frequently changing digital technology to solve problems. It often utilizes cloud computing, reducing reliance on user-owned hardware but increasing reliance on subscription-based cloud services. By its very definition, digital transformation invokes frequent change, so how do we guide automation when the platforms, software, and technology we use to automate are themselves changing? Add to this the push to 'automate everything', with business leaders jumping on that bandwagon and eager engineers who will automate anything that moves. Where and when do we say yes to automation? More importantly, where and when do we say 'no'? Some QA guiding principles: let's consider Risk, Reliability, Reputation, and Reusability.
NITSAN BLEIBERG – App Automation Lead, Wayfair
DELIVERING A SCALABLE AUTOMATION FRAMEWORK IN A HIGHLY DISTRIBUTED ORGANIZATION
As a QA architect, you are expected to deliver a test automation framework that meets the requirements of many engineering teams, for both their QA and their developers. To reap its benefits, those teams will need to dedicate precious resources to training and automating tests. A good architect is responsible not only for building the framework but also for planning its rollout and helping teams strategize how to adopt it, thus demonstrating a return on investment for the organization.
FRANCIS ERDMAN – Software Development Engineer in Test, Insight Global
API TESTING IN THE TEST PYRAMID
Testing of modern N-tiered applications needs to follow the test pyramid model of test priority: unit tests form the base, followed by API tests, followed by integration tests (which include UI tests). API tests are faster to develop, maintain, and execute than more complex integration tests; therefore, they can find bugs more "upstream" (earlier in development cycles) than integration tests can, forming a middle ground between unit tests written by developers or SDETs and integration tests run by various testing stakeholders and business analysts.
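To illustrate the middle layer of the pyramid, here is a minimal, self-contained sketch of an API-level test. The `/users/1` endpoint and its payload are invented for the example; the stub server stands in for a deployed service so the snippet runs on its own.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Stand-in for the service under test (hypothetical /users/1 resource).
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Ada"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

def api_test(base_url):
    """An API-layer check: status code, content type, and payload shape."""
    with urlopen(f"{base_url}/users/1") as resp:
        assert resp.status == 200
        assert resp.headers["Content-Type"] == "application/json"
        payload = json.load(resp)
    assert payload["id"] == 1 and "name" in payload
    return payload

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = api_test(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print(result["name"])  # Ada
```

Notice that the test exercises the service contract (status, headers, payload) without any browser, which is why this layer runs so much faster than UI-driven integration tests.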
GEORGE HAMBLEN – Sr QA Architect, UST Global
SYNTHETIC TEST DATA
Test data has become the most critical part of the testing process. Surveys have shown that up to 40% of a project's time is spent on test-data-related issues. Early on, data was created for testing purposes. As systems became more complex, the data needed to maintain referential integrity for end-to-end purposes. The industry shifted from creating data to taking production data and masking it for testing, and synthetic data was put on the shelf. Now breakthroughs in synthetic data have solved the problem of keeping referential integrity. Synthetic data also has the advantage of never touching production data, which helps secure your customers' privacy. It's time to give synthetic data another look. In this talk, I'll cover the history of test data, highlight the advantages of a strong test data management process, and show how synthetic data is changing the testing game.
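A toy sketch of the core idea, with all schemas and values invented: fabricate related tables from scratch so that foreign keys always resolve, which is the referential-integrity property the abstract describes, without ever touching production data.

```python
import random
import string

random.seed(7)  # deterministic, reproducible synthetic data

def synth_customers(n):
    """Fabricate customer rows from scratch: no production data involved."""
    return [{"customer_id": i,
             "name": "".join(random.choices(string.ascii_uppercase, k=6))}
            for i in range(1, n + 1)]

def synth_orders(customers, n):
    """Every order points at a generated customer, so joins in
    downstream end-to-end tests never dangle (referential integrity)."""
    ids = [c["customer_id"] for c in customers]
    return [{"order_id": i,
             "customer_id": random.choice(ids),
             "amount": round(random.uniform(5, 500), 2)}
            for i in range(1, n + 1)]

customers = synth_customers(10)
orders = synth_orders(customers, 50)
valid_ids = {c["customer_id"] for c in customers}
assert all(o["customer_id"] in valid_ids for o in orders)
```

Real synthetic-data tooling does far more (distributions, cross-table constraints, volume), but the invariant checked by the final assertion is the one that masking-based approaches struggle to preserve at scale.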
DARIN KALASHIAN – Principal, DSK Consulting
DRIVING TEST AUTOMATION STRATEGY WITHIN A SCRUM MODEL
Developing within a Scrum framework brings extra challenges, in that all development and testing should be completed within the iteration, delivering a potentially shippable product. Deferring the development of test automation to a later iteration only opens the door to technical debt and quality creep. For teams to be successful, they need to develop a good test strategy, clearly identifying where to focus automation development and execution to safeguard the delivery of a high-quality product. Developing incrementally, teams often lose sight of positioning automation to drive customer-critical areas, including use-case scenarios, end-to-end integration, performance, scale, stress, security, installation, and upgrade. Through readily available technology, such as containerization, virtualization, and data lakes, automation can drive a multi-faceted qualification strategy. Lastly, as organizations face fiercer competition and a real need to shrink time-to-market, they need to find the delicate balance between having enough automation to drive customer quality and having too much, which could slow down delivery times or steal valuable resources to maintain existing automation.
MANIVANNAN GAJENDRAN – Salesforce QA Lead, Canada Drives
RAPID API AUTOMATION USING POSTMAN
Join us to find out how one of Canada's fastest-growing companies has overcome the hurdle of continuous testing in a rapid development model and successfully implemented API automation using Postman. API automation is not only quicker to build but also runs faster and much earlier than GUI automation.
SHAUVIK ROY CHOUDHARY – CEO & Co-Founder, MoQuality
MOBILE TEST AUTOMATION MADE EASY WITH AI
MoQuality was founded with the idea of using AI for mobile app testing. We started with an AI that was able to play mobile games on real devices. But soon, after talking with real quality engineers, we understood the real challenges in app testing that most teams face today. Mobile app testing can be daunting, with so many devices and OS versions on the market. Test automation using frameworks such as Appium and Espresso can be challenging for testers who are new to this game. Can AI help? In this talk, we will introduce new approaches to automated test creation and show how you can run these tests on any real device available on public clouds. You will learn about the intricacies of test automation frameworks and tools, and see a new approach to app testing.
HEEMENG FOO – Sr Engineering Manager, Climate Corp
TESTING IN DIGITAL AGRICULTURE
Building mobile applications for farmers/growers presents a unique set of challenges in today's world of highly automated farming. It is a nascent field where software technology is just getting its feet wet. The problem is further magnified by the lack of mobile networks on farms, slow technology adoption, and the fact that this is a completely new area where software is making inroads. At Climate Corporation, we are pioneers in this field, and delivering our software products with high quality takes on new meaning because it significantly impacts farmers' livelihoods. This talk focuses on a number of key challenges we are addressing in test engineering for digital agriculture.
JAY NEWLIN – Quality Assurance Manager, PromptWorks
TESTING TODAY’S APPLICATION – TEST AUTOMATION TOOLS YOU CAN USE
I will report on first-hand research into various tools available for test automation across multiple categories (UI layer, visual diffing, scripted/unscripted, PaaS/SaaS and open-source, etc.). The information will include the "class" of tool, potential use cases, publicly available pricing information, and the strengths and weaknesses (pros and cons) of each tool or class.
LAGAN KHARE – Manager Quality Engineering, Elsevier
INTEGRATING AUTOMATED TESTING INTO DEVOPS AND AGILE
Elsevier builds healthcare education and clinical solutions-based products. Our products are used daily by medical students, nurses, clinicians, practitioners, and healthcare professionals who rely on the latest, up-to-the-moment information. To deliver new features and content to our customers faster, we wanted to steer away from traditional monthly releases and be able to release on demand. The bottleneck was running more than 3,000 automated regression scripts in a short time. We condensed a 5-hour regression run into 30 minutes by breaking the big release-on-demand challenge into smaller problems and addressing each one separately: moving to current DevOps best practices, such as running automation infrastructure on Docker and Kubernetes, treating infrastructure and configuration as code, and managing environments, so that we can scale our tests and get fast feedback to developers to produce fixes faster (faster fixes mean quicker delivery of value to customers); integrating automation test results into JIRA using the ZAPI APIs; improving the pass percentage to avoid tedious manual verification of script failures; and implementing risk-based, separate UI and API test coverage for releases by categorizing and prioritizing tests.
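Compressing a 5-hour run into 30 minutes ultimately rests on splitting the suite across parallel workers. A minimal sketch of duration-balanced sharding (suite names and timings invented for illustration), of the kind that scaling tests across Docker/Kubernetes workers relies on:

```python
import heapq

def shard_tests(durations, workers):
    """Greedy longest-first scheduling: give each test to the currently
    least-loaded worker, keeping wall-clock time near total/workers."""
    shards = [(0, i, []) for i in range(workers)]
    heapq.heapify(shards)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, i, tests = heapq.heappop(shards)
        tests.append(name)
        heapq.heappush(shards, (load + secs, i, tests))
    return shards

# Illustrative timings (made up): 5 suites spread over 3 workers.
durations = {"suite_a": 300, "suite_b": 120, "suite_c": 180,
             "suite_d": 60, "suite_e": 240}
shards = shard_tests(durations, 3)
wall_clock = max(load for load, _, _ in shards)
print(wall_clock)  # 300, versus 900 seconds run serially
```

In practice the durations come from the previous run's timing report, so the balance improves as the data does.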
SRIKANTH RAMACHANDRAN – Sr. Test Manager, CSL Behring
TESTING IN A REGULATED ENVIRONMENT IN THE AGE OF DIGITAL DISRUPTION – CHALLENGES, SOLUTIONS AND BEST PRACTICES
In a regulated industry such as life sciences, adopting the latest digital trends is a huge challenge. I want to share some testing-specific challenges, their solutions, and the best practices that eventually helped us adopt the latest technologies with faster turnaround times. We are bound by FDA regulations and compliance requirements that prevent us from adopting new trends and technologies without going through due process. Such requirements pose a multitude of challenges within an application's SDLC, especially in the testing phase. Some of these challenges included inefficiencies in testing due to the use of multiple tools for end-to-end validation and the complex approval workflows therein. We addressed these inefficiencies by implementing an integrated Application Lifecycle Management (ALM) toolset in combination with a flexible organizational QMS process. Another challenge arose when moving from waterfall to agile: we noticed a significant increase in testing effort and delays in projects. One step to resolve this was customizing our agile process and breaking the validation cycle into separate sprints within the release cycle. I will highlight more challenges and how we resolved them with the help of technology and process. In summary, attendees will take away multiple best practices and solutions that can enable them to implement new trends and technologies with faster turnaround times in a regulated environment.
DHRUV MALIK – Sr Mgr Qlty Engineering, United Health Care
INTEGRATING YOUR PERFORMANCE TESTING INTO JENKINS RELEASE PIPELINE
Learn how to shift performance testing left in your delivery cycle with JMeter, gaining scalability, distributed testing, and real-time, comprehensive test results. Embedding performance tests within continuous delivery pipelines drives fast feedback to developers and the line of business.
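To make performance results actionable inside a pipeline, the run has to produce a pass/fail signal. A small sketch of such a gate, parsing JMeter's CSV `.jtl` output (assuming the default columns, with `elapsed` in milliseconds and `success` as true/false; thresholds and sample values invented):

```python
import csv
import io

def gate(jtl_text, max_avg_ms=500, max_error_rate=0.01):
    """Fail the pipeline stage when average latency or error rate
    exceeds the agreed thresholds."""
    rows = list(csv.DictReader(io.StringIO(jtl_text)))
    avg = sum(int(r["elapsed"]) for r in rows) / len(rows)
    errors = sum(r["success"] == "false" for r in rows) / len(rows)
    return avg <= max_avg_ms and errors <= max_error_rate

# A tiny sample in the default JTL shape (values invented):
sample = """timeStamp,elapsed,label,responseCode,success
1000,120,home,200,true
1001,340,search,200,true
1002,90,home,200,true
"""
print(gate(sample))  # True: average ~183 ms, 0% errors
```

In a Jenkins stage this would typically follow a non-GUI run such as `jmeter -n -t plan.jmx -l results.jtl`, with the build marked unstable or failed when the gate returns False.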
ADAM SANDMAN – Director, Inflectra
TESTING THE MOZ500 TOP WEBSITES
As a research project to see why test automation of web applications is so hard, and why Selenium scripts seem to break so frequently, we ran an experiment to analyze the top 500 web sites (ranked by Moz) to see what patterns we would find that we could use to help automation engineers succeed more easily. In this talk we will present some background on the problem, then detail our findings from the Moz 500 research experiment, showing how we crunched the data regarding which attributes (CSS classes, ARIA attributes, IDs) sites are using, to see how best practices developed in theory work in practice. The talk provides suggestions and ideas derived from the data about how we can create more reliable tests. It discusses tools and techniques that can be employed to make automation scripts easier to create and maintain. Finally, it suggests ways in which we can use a combination of Artificial Intelligence and Machine Learning to automate some of these solutions and make tests self-healing.
ERIC MARTIN – Lead QA Engineer, WizeHive Inc.
QUALITY ASSURANCE VS. QUALITY ENGINEERING – THE NEW AGE OF SOFTWARE QA
How do we test software in the new age? The era of groups of manual testers executing pre-defined test cases is long gone. Today’s world of frequent releases and faster development pipelines necessitates an entirely different mindset where software testing is concerned. Testers need to adopt a multi-disciplinary approach that builds quality into the product via a blend of process management, risk mitigation, and skillful use of automation tools. Quality analysts are now quality engineers. In the age of QA 2.0, those who don’t adapt will be left behind.
JOHN SAVATH – Quality Engineer, Elsevier
TEST AUTOMATION IN QUALITY ASSURANCE – LEVERAGING ANALYTICS AND INTELLIGENCE TESTING
Analytics is complex, and it is vital for any company seeking a competitive edge to learn what makes its product most valuable from the customer's perspective as a user. It is important that we validate the quality of our analytics data through sophisticated technologies that can automate the testing of a product's analytics across all layers of testing. We want to ensure that our data is telling the most accurate story possible.
RICHARD KNASTER – SAFe® Fellow and Principal Consultant, Scaled Agile, Inc.
ROLE OF QA IN SCALED AGILE FRAMEWORK
PHILIP DAYE – Software Test Lead, Ultimate Software
OPTIMIZING TEST CASE DESIGN WITH DOMAIN ANALYSIS
In this era of Agile and DevOps, code is delivered at an ever-increasing pace. Are our applications less complex than before? Is less testing required? No: the environments our code is deployed in are more heterogeneous than ever, and applications are built in ever more complex ways in order to scale to meet the needs of the business. Within the ever-tightening time constraints we work under, testing, and especially automated testing, has never been more important. We need to become craftsmen, like the master swordsmith who folded and layered steel to create swords that were hard, held their edge, and were not brittle. For his clients, quality was literally life and death! In this session, we'll discuss combining fault models and risk-based testing with domain analysis to create a solid core, and around it we'll layer various techniques to forge test cases that are more likely to expose faults in our application.
DUSHYANT ACHARYA – Sr. Manager – Test Automation, Ripple
TEST AUTOMATION FOR AI/ML APPLICATION
Machine Learning (ML) is already changing applications as we know them. In just a few years, we have seen it used in many of our daily products. Plenty of research and articles claim, with good reason, that in the next few years most applications will have at least one AI/ML module. The technologist in me is excited about this future; at the same time, the test engineer in me was curious and asked a lot of questions when I was introduced to AI/ML. What exactly is an ML application or module? How is it different from traditional applications? Is it true that an ML application is always changing and improving? What does this mean for quality? How do I automate something with unpredictable behavior? What does this mean for overall product quality for the end user? Thankfully, we have a few answers now. At a high level, ML applications can be seen as multiple building blocks or components: training data, the algorithm, test data, pipeline infrastructure, a feedback system, and retraining. The algorithm is the core ML part of the application. Training data is the data we feed into the algorithm to generate intents or output that will be consumed further down the pipeline. Test data is the actual data against which the algorithm will be used. The pipeline is the data/flow infrastructure, including scripts, data storage or data lakes, and the actual applications. The feedback system is where we get qualitative feedback on the algorithm's output. And last is retraining the core algorithm based on that feedback. As you can see, each tier or component has a specific role, and we can now talk about quality checks and automation effectively.
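A minimal, self-contained sketch of how those components translate into an automated quality check: a toy model (a hand-rolled nearest-centroid classifier, so no ML library is assumed) is trained on training data, scored on held-out test data, and only passes the gate when its measured quality clears a threshold. All data and thresholds here are invented.

```python
def centroid(points):
    return [sum(xs) / len(xs) for xs in zip(*points)]

def train(samples):
    """'Algorithm' block: a nearest-centroid classifier built from training data."""
    by_label = {}
    for x, label in samples:
        by_label.setdefault(label, []).append(x)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, x):
    return min(model, key=lambda lbl: sum((a - b) ** 2
               for a, b in zip(model[lbl], x)))

def accuracy(model, test_set):
    """'Feedback' block: measured quality on data the model never saw."""
    return sum(predict(model, x) == y for x, y in test_set) / len(test_set)

train_data = [([0, 0], "low"), ([1, 1], "low"),
              ([9, 9], "high"), ([8, 10], "high")]
test_data = [([0.5, 0.5], "low"), ([9, 10], "high")]

model = train(train_data)
assert accuracy(model, test_data) >= 0.9  # release gate: block regressions
```

The point is not the classifier but the shape of the test: because ML behavior is statistical, the automated check asserts an aggregate metric against a threshold rather than an exact output, and the same gate can compare a retrained model against the previous one before promotion.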
MONIKA BUDHIRAJA – Sr. Quality Lead Engineer, Grubhub
7 STRATEGIES TO FASTER AND PEACEFUL PRODUCT RELEASES
In an Agile environment, release engineers, test engineers, and other responsible IT professionals are challenged to release new features, products, and enhancements at the speed of light. Businesses expect them to compress test timelines from months to weeks, weeks to days, days to hours, hours to minutes, and minutes to seconds. As an engineer, how would you validate a build in these ever-decreasing timelines? How would you feel confident in what is going out the door with little to no time? What tools and techniques would you choose, and why? What policies and procedures would you implement, and why? Is there a magic bullet that can speed up application quality in a given space and time? Join us as we uncover our journey of evaluating and choosing tools and techniques and inventing new processes, policies, and methods to lead a peaceful, faster, and more reliable quality release process. But wait, were we perfect? What did we learn along the way? How did we fill the communication gap when there were breakdowns in application quality? How did we win approval for expensive tools and resources? How did leaders showcase return on investment?
DINESH RAISINGHANI – Principal Consultant, Capco
IS YOUR AUTOMATION INTELLIGENT ENOUGH TO ADDRESS UAT REQUIREMENTS?
Even though test automation has been around for many years, its adoption rate is still on the lower end of the spectrum. That's primarily because whenever firms discuss test automation, the focus is on test execution and its well-known pitfalls: high up-front cost, reduced ROI, maintenance issues, and so on. Hence the business doesn't see the value in investing in automation. But with technology advancements, test automation needs to be approached with a different mindset. I will discuss ways to successfully automate UAT with the help of model-based testing and how the business can derive maximum value from it. I will also talk about how Robotic Process Automation (RPA) can be adopted to automate UAT, especially for RTB (Run The Bank) applications. Finally, we will highlight risks that firms should be aware of and ways to mitigate them.
MIKHAIL DAVYDOV – Senior Software Development Engineer Test, InRhythm
EVOLUTION OF QUALITY ASSURANCE IN THE AGE OF DAILY PRODUCTION RELEASES: THE RISE OF THE SDET ROLE
From Testing to QA Engineering: a retrospective outlook. Waterfall, RUP, and the independent testing department: the golden age of testing. TDD, BDD, and Agile: QA at the crossroads. SAFe, DevOps, and Shift Left: the rise of the SDET role. SDET versus TE: different names for the same role, or completely different roles?
ADAM SANDMAN – Director, Inflectra
HOW TO HELP TESTERS AND DEVELOPERS WORK TOGETHER IN HARMONY
One of the biggest challenges for test managers and project leadership is how to avoid the typical miscommunications between testers and developers. Despite best efforts, we often find that assumptions, team processes, and even choice of language can create friction and frustration between developers and testers. This session will discuss the roots of the communication disconnects and provide practical strategies for enabling harmonious, productive teams, with examples taken from actual client situations. The session will include a discussion of what 'done' and 'happy path' really mean.
LEE BARNES – CTO, Utopia Solutions
EFFECTIVE TEST AUTOMATION IN DEVOPS
Effective test automation is a key factor in achieving continuous integration and, ultimately, success with a DevOps development approach. However, many organizations continue to point to testing as a bottleneck in their pipeline. The culprit is often the inability to incorporate stable automation into their testing practices. They fall victim to common traps, including misunderstanding what test automation is, automating the wrong tests, focusing on UI-level automation, and failing to provide a stable execution environment. In this talk, Lee will discuss how organizations can overcome these obstacles and move toward continuous testing within their DevOps practices. Specifically, the discussion will touch on key practices and methods for implementing effective test automation in a DevOps pipeline, including test suite and test scope, automation approaches and methods, and test environment and data management.
ADAM SANDMAN – Director, Inflectra
API TESTING
This session will outline the different types of Application Programming Interface (API) in use today (GraphQL, REST, SOAP, RSS) as well as a brief historical perspective on legacy API technologies (ActiveX, CORBA, MSMQ). It will explain why it is important to have a sound API testing strategy, and how it relates to the critical operation of today's connected businesses. The session will cover API design patterns such as endpoint versioning, self-describing data formats, authentication, authorization, and mocking. It will provide practical techniques for ensuring sufficient test coverage of your API endpoints, leveraging realistic test data, and integrating API tests into your DevOps toolchain and overall test reporting environment. The audience will learn how to use a variety of different tools and frameworks to simplify API testing, with practical samples testing real applications, plus some horror stories from when 'good APIs' turn bad.
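Among the patterns listed above, mocking is the easiest to show in a few lines. A sketch of mocking an API dependency so a client's contract handling, including the unhappy path, can be tested without a live service; the client class, endpoint URL, and payload shape are all hypothetical.

```python
from unittest import mock

# Hypothetical client for a versioned REST endpoint; names are illustrative.
class InventoryClient:
    BASE = "https://api.example.com/v2"

    def __init__(self, http_get):
        self._get = http_get  # injected transport, trivial to mock

    def stock_level(self, sku):
        resp = self._get(f"{self.BASE}/items/{sku}")
        if resp["status"] != 200:
            raise RuntimeError(f"API error {resp['status']}")
        return resp["body"]["stock"]

# Happy path: the mock plays the server, and we verify the exact URL called.
ok = mock.Mock(return_value={"status": 200, "body": {"stock": 7}})
client = InventoryClient(ok)
assert client.stock_level("ABC-1") == 7
ok.assert_called_once_with("https://api.example.com/v2/items/ABC-1")

# Unhappy path: a 503 from the mock must surface as a client error.
boom = mock.Mock(return_value={"status": 503, "body": None})
try:
    InventoryClient(boom).stock_level("ABC-1")
except RuntimeError as e:
    assert "503" in str(e)
```

Injecting the transport rather than hard-coding it is what makes the endpoint-versioning and error-handling logic testable in isolation.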
GREG SYPOLT – VP/Quality Assurance, EverFi
BUILD A CULTURE OF QUALITY EXCELLENCE
Building a culture of quality excellence in your organization is absolutely necessary to compete in 2020. Every member must be accountable for the quality of the application and understand his or her role in delivering it. We need our teams to emphasize quality over quantity, work smarter, and set the right expectations. In a world of agile development, and even more so for those of us moving to CI/CD, testing everything in a fast-paced development environment is challenging if not impossible. What to do about it? One foundational aspect of success in 2020 is acknowledging where you are on the quality maturity curve. By understanding your maturity curve, you will know how best to set quality goals, diagnose needs, and identify interests for 2020 and beyond. My talk will share how I measure the quality maturity curve to set the right execution path and expectations on the quality journey for huge enterprises with dynamic, mission-critical applications. We will explore my experiences building a quality platform that helped the company achieve elite quality status and drive more business value, quality accountability, and shared responsibility with clear ownership. You're in the right place to get executive advice and set a path for quality in 2020 and beyond.
JON SZYMANSKI – VP of Software Quality Assurance, Beacon Street Services
BEYOND TEST CASES: USING DATA QUALITY REPORTING TO SUPPLEMENT YOUR TESTING STRATEGY
Traditional testing focuses on writing test cases with known data sets, steps, and expected results, in a controlled test environment. When executions pass, we promote the code to production. But what if the test data and environment are out of sync with production? How do we know we are testing the right use cases before deployment? What if we have highly complex business rules that preclude managing clear and easily executable test cases? How do we know the code will behave the same in the production environment, and how can we identify issues before our customers do? As testers, our goal is to assess software quality and thus give our customers confidence that the systems will perform to expectations. In our testing strategy, we must perform a risk analysis that includes identifying the highest-value scenarios to test, our test data, and maintainability. We want to ensure our efforts in the test environment translate into the production environment. But we don't usually have the time and the means to test everything, and sometimes we just don't know what we don't know. With the rise of big data, data science, and the C-suite's obsessive attention to using analytics for decision making, we as testers can also leverage data to assess system behaviors without having to replicate end-to-end business scenarios in production. Are we just testers, or are we truly engaged in quality assurance soup-to-nuts?
PETER KIM – Director of Quality Engineering, Kinetica
ADVANCING GUI TESTING WITH APPLICATION MODELING
UI test automation with Selenium has become the core QA strategy for functional testing. Most QA/QE teams continue leveraging automation designs and patterns following "best practices" that were adopted 10+ years ago. Yes, there are new technologies, some that have come and gone, as well as all the promises of continuous integration with continuous testing; however, the main contributing factor to a failed UI automation strategy is its core design. Despite all the cloud solution providers, advanced image-detection solutions, and abundance of automation products, the truth is that design matters. In this case, let's discuss Application Modeling. First, understanding why those 10-year-old designs and patterns, which most teams continue to adopt (at least as their starting point), contribute to a losing automation strategy is crucial to helping QA leadership avoid the mistakes that so many QA teams keep hitting. Second, let's discuss how to implement an Application Model-based automation framework. It is a simple yet powerful strategy that ensures a scalable, reliable, and 'inclusive' approach. Application Modeling leverages metadata programming and dynamic code generators to "model" the application with rulesets. This means more concise test coverage while spending fewer resources writing mundane, copy/paste, otherwise throwaway code.
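A toy sketch of the metadata-driven idea: the application is described as data (rulesets per page), and page-object classes are generated from that model instead of hand-written. The page name, locators, and logging stand-in for real browser actions are all invented, and not tied to any particular tool.

```python
# Pages described as data (rulesets), not as hand-written classes.
APP_MODEL = {
    "LoginPage": {
        "username": {"locator": "#user", "type": "textbox"},
        "password": {"locator": "#pass", "type": "textbox"},
        "submit":   {"locator": "button[type=submit]", "type": "button"},
    },
}

def make_page_class(name, elements):
    """Generate a page-object class from the model at load time."""
    def action(spec):
        def interact(self, value=None):
            # A real framework would drive the browser here; we log instead.
            verb = "click" if spec["type"] == "button" else "type"
            self.log.append((verb, spec["locator"], value))
        return interact
    attrs = {el: action(spec) for el, spec in elements.items()}
    attrs["__init__"] = lambda self: setattr(self, "log", [])
    return type(name, (), attrs)

pages = {n: make_page_class(n, els) for n, els in APP_MODEL.items()}
login = pages["LoginPage"]()
login.username("qa_user")
login.submit()
assert login.log == [("type", "#user", "qa_user"),
                     ("click", "button[type=submit]", None)]
```

When a locator changes, only the model entry is edited; every generated method picks up the new ruleset, which is the maintenance win the abstract argues for.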
MAX SAPERSTONE – Director of Test and Automation, TestPros
TEST LIKE A SCIENTIST
Testing should be fun, but it needs to be rigorous, with clear results. In the field, it's typical to find testers only going through the motions of testing, without putting much thought into their tests. Testing requires critical thinking, both in creating and in exercising the test. Without starting from good requirements, determining the best way to verify each requirement, examining the outcome, and understanding its implications, testing loses most of its value. To this end, consider using the scientific method to drive the creation of a testing strategy and plan, along with the design of your test cases. The scientific method is a method of research in which a problem is identified, data is gathered, a hypothesis is formulated, and then the hypothesis is tested. These actions align well with the activities testers should be performing. Max will walk through the stages of the scientific method, expanding on each one and relating it to testing and the activities that can be drawn from it. He'll discuss how these patterns can be used to create a good overarching test strategy, with test plans resulting from it as well. Max will then talk about treating requirements as hypotheses, how good test cases can be created from them, and how identifying good test data is imperative to creating effective tests. Finally, he'll discuss the benefits of following a more rigorous process like the scientific method: improved quality, and confidence that what was tested was tested properly.
ANASTASIOS DASKALOPOULOS – Quality Assurance Specialist, Unleashed Technologies
THE RIGHT TIME AND PLACE FOR EFFECTIVE TEST AUTOMATION
Everyone here will agree that test automation has an important place in software testing, but how carefully do we really think about when test automation should take place and where it fits in the quality assurance process? Test automation is very important, of course, but executing it can be a force multiplier when it is done in the right place and at the right time. Timing is a crucial factor for effective test automation: when done too early, test automation simply automates bugs. Too often, I have seen the test automation process begin too early, before effective manual test case creation and execution with exploratory testing has eliminated easy-to-find bugs. The inevitable result is that bugs get automated and routinely get passed as a regular, functional aspect of the system. Test automation itself cannot find the bugs; a good tester must find bugs first and then write detailed tests that will report bugs that contravene both design and business requirements. However, when done too late, test automation does not provide enough benefit, because there is not enough time to write and execute many good tests.
NIKO MANGAHAS – Director Enterprise Quality, RCG Global Services
WHEN IS QA ENOUGH? UNDERSTANDING THE TOTAL COST OF QUALITY
It is a tremendous challenge to find a good answer to how much testing we need. Are we testing too little, or too much? How do we justify the time and effort that goes into testing? The answer lies in understanding the "Total Cost of Quality": using metrics and data to understand how much is being spent on 'quality activities', and comparing that to how much value those activities are generating, or could generate, across various aspects and areas.
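As a back-of-the-envelope sketch of the comparison the abstract describes, here is the classic prevention / appraisal / failure breakdown of cost of quality, with all dollar figures invented for illustration:

```python
def total_cost_of_quality(prevention, appraisal,
                          internal_failure, external_failure):
    """Conformance cost = what we spend avoiding and finding defects;
    non-conformance cost = what defects cost us anyway."""
    conformance = prevention + appraisal
    non_conformance = internal_failure + external_failure
    return {"conformance": conformance,
            "non_conformance": non_conformance,
            "total": conformance + non_conformance}

# Hypothetical annual figures for one product line:
coq = total_cost_of_quality(prevention=40_000, appraisal=90_000,
                            internal_failure=25_000, external_failure=60_000)
print(coq["total"])  # 215000
```

Read against value delivered, a rising non-conformance share signals under-investment in testing, while conformance cost dwarfing the value generated signals over-testing: the "how much is enough" question made measurable.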
BRIAN SAYLOR – Automation Architect, Discovery
TRIALS AND TRIBULATIONS OF TEST TESTING
A reliable Selenium automated test suite absolutely requires that the tests themselves be validated. That seems obvious, and it seems simple. But in practice, this is one of the things that engineers and teams skip or perform poorly. Why is that? Perhaps testing tests seems redundant, or maybe it is not clear what validating an automated test really means. Perhaps engineers just lack guidance and practical tips on how best to validate automated tests. Could your own automated testing endeavor benefit from guidance on validating your tests? Or maybe some tips to help other members of your team? Then join Brian for a dive into automated test testing, and learn how much testing a test tester can test when a test tester tests tests.
JOSE ADOLFO LOZANO MARIN – ETL Test Lead, Accenture
INDUSTRIALIZATION AND AUTOMATION – HOW TO BRING THEM TOGETHER TO SUCCESS IN THE BI WORLD
Industrialization is a word that doesn't always match the typical profile of an automation resource; sometimes it is driven by the industry, sometimes by the resource. Today I want to talk about the importance of industrialization in the automation of BI applications. In case you have not had the opportunity to work on a BI project, let me give you an overview of the phases of this kind of project: ingestion of the data into a data mart or big data environment, transformation, and reporting or analytics. The automation flows can contain the multiple subject areas needed to certify that the transformation is accurate and to identify potential opportunity areas during the transformation process, before even testing the reporting layer.
JOHN ARROWWOOD – SDET, FINRA
ASK THE RIGHT QUESTIONS
The point of Testing is always one thing: to answer a question. But not all questions are created equal. The questions we ask ourselves (or are asked of us) can literally create the box we want so desperately to think outside of.
ROBERTO CARDENAS ISLA – Tech Architecture Delivery Specialist, Accenture
WRITING BDD/TDD WITH PESTER FOR POWERSHELL SCRIPTS
Configuration as code is a huge deal today, and everything needs to be automated in order to roll out new releases quickly, or roll back in case of issues. On Windows, PowerShell is widely used, and the community creates new scripts daily to automate processes like tool installation and configuration, monitoring service status, sending notifications, and generating files in specific paths, as well as populating content dynamically. With all these features out there, it is mandatory to produce components of high quality. So how do we generate high-quality PowerShell code, help DevOps design solutions focused on the expected behavior before coding starts, and generate good reports like code coverage? We can use Pester. Pester is an open-source Behavior-Driven Development (BDD) framework with powerful features: a unit test framework to execute and verify PowerShell commands; an easy and clear file-naming convention so tests are discovered within your projects; the same concepts used by other TDD/BDD frameworks, such as Describe, Context, and It; a DSL with mocking functions that can mimic third-party tools or any command inside PowerShell code; test result artifacts that can feed test code coverage metrics and reports; and integration with CI/CD tools such as TeamCity, Azure DevOps, and others. In this presentation, I will provide real examples of these features and show how to integrate them with Azure DevOps Pipelines.
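Pester itself is PowerShell-specific, but its Describe / Context / It structure and Mock feature mirror other xUnit/BDD frameworks. As a rough, language-neutral analogue (all function names invented), here is the same behavior-first shape, including a mocked external download, expressed with Python's unittest:

```python
import unittest
from unittest import mock

def install_tool(name, downloader):
    """Script under test: fetch an installer, report success or failure."""
    payload = downloader(name)
    return payload is not None

class DescribeInstallTool(unittest.TestCase):           # ~ Describe
    def test_it_installs_when_download_succeeds(self):  # ~ It
        fake = mock.Mock(return_value=b"installer-bytes")  # ~ Mock
        self.assertTrue(install_tool("git", fake))
        fake.assert_called_once_with("git")

    def test_it_fails_when_download_is_empty(self):
        self.assertFalse(install_tool("git", mock.Mock(return_value=None)))

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DescribeInstallTool)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

In Pester the same ideas appear as `Describe`/`It` blocks and the `Mock` command, with the added ability to mock any PowerShell cmdlet in place, something this sketch approximates by injecting the dependency.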
ANNA COUCH – Director Of Quality Assurance, Aaron’s Tech
CAN I BUILD AUTOMATION INTEGRITY INTO MY LEGACY PRODUCT CI/CD PIPELINE WHILE I’M TRANSITIONING TO A DIGITAL SOLUTION WITHOUT INCREASING RESOURCES?
As one of our product teams started modernizing a monolithic application into a microservices architecture, we realized that the automation test suite needed improvement. What we had was too lengthy and unstable to allow multiple deployments a day. Our goal was to run the automation suite fast enough to support multiple daily deliveries and require no manual intervention unless a defect was found. We needed to do this without throwing more resources at it and creating waste.
COSTA AVRADOPOULOS – Principal – QA Practice Leader, Improving
TESTING THE NEXT GENERATION OF TECHNOLOGIES: IOT