Traditional testing focuses on writing test cases with known data sets, steps, and expected results in a controlled test environment. When the tests pass, we promote the code to production. But what if the test data and environment are out of sync with production? How do we know we are testing the right use cases before deployment? What if highly complex business rules preclude clear, easily executable test cases? How do we know the code will behave the same way in production, and how can we identify issues before our customers do?

As testers, our goal is to assess software quality and thereby give our customers confidence that the systems will perform to expectations. Our testing strategy must include a risk analysis that identifies the highest-value scenarios to test, the test data we need, and how maintainable our tests will be. We want to ensure our efforts in the test environment translate to the production environment. But we don’t usually have the time or the means to test everything, and sometimes we just don’t know what we don’t know.

With the rise of big data, data science, and the C-suite’s obsessive attention to using analytics for decision making, we as testers can also leverage data to assess system behaviors without having to replicate end-to-end business scenarios in production. Are we just testers, or are we truly engaged in quality assurance, soup to nuts?
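As one illustration of what “leveraging data” can look like in practice, here is a minimal sketch that scans a production data export for business-rule violations rather than re-executing end-to-end scenarios. The CSV file name, the column names (`status`, `ship_date`, `total`), and the two rules are hypothetical examples, not taken from the talk.

```python
# Minimal sketch of data-driven behavior assessment. The CSV export and
# its column names are hypothetical, invented for illustration.
import csv
from collections import Counter

def check_invariants(path):
    """Scan production records and count rows that violate business rules
    we would otherwise cover with end-to-end test cases."""
    violations = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Rule 1 (example): a shipped order must have a ship date.
            if row["status"] == "SHIPPED" and not row["ship_date"]:
                violations["shipped_without_ship_date"] += 1
            # Rule 2 (example): order totals must be non-negative.
            if float(row["total"]) < 0:
                violations["negative_total"] += 1
    return violations

if __name__ == "__main__":
    for rule, count in check_invariants("orders_export.csv").items():
        print(f"{rule}: {count} production records")
```

Run against a nightly export, a report like this can surface defects in production data before a customer files a ticket, without any test-environment setup at all.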

April 30, 09:55–10:35 (40′)

Jon Szymanski