Traditional testing focuses on writing test cases with known data sets, steps, and expected results, executed in a controlled test environment. But how do we know we are testing the right use cases before deployment? What if the business rules are so complex that we cannot define clear, easily executable test cases? As testers, our goal is to assess software quality and thereby give our customers confidence that their systems will perform as expected. But we don’t usually have the time or the means to test everything, and sometimes we just don’t know what we don’t know. With the rise of big data and the push to use analytics for decision-making, we as testers can also leverage data to assess system behavior without having to replicate end-to-end business scenarios.
Takeaways from the talk:
- In this talk, you will learn specific tactics for mitigating risk, not by executing more test cases, but by supplementing your testing strategy: intelligently leveraging your organization’s data and reporting tools to assess system behavior. We call it Data Quality Reporting (see the illustrative sketch below).
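
To make the idea concrete, here is a minimal sketch of what a data quality check might look like in practice, assuming a relational data store and a small set of rules expressed as SQL queries. The table names, rules, and Python implementation are illustrative assumptions, not material from the talk.

```python
import sqlite3

# Hypothetical data quality rules: each maps a rule name to a SQL query
# that returns rows violating the rule. Table and column names are
# illustrative, not taken from the talk.
RULES = {
    "orders_with_negative_total": "SELECT id FROM orders WHERE total < 0",
    "orders_missing_customer": (
        "SELECT o.id FROM orders o "
        "LEFT JOIN customers c ON o.customer_id = c.id "
        "WHERE c.id IS NULL"
    ),
}

def run_data_quality_report(conn: sqlite3.Connection) -> dict:
    """Run each rule and return the count of violating rows per rule."""
    return {name: len(conn.execute(query).fetchall())
            for name, query in RULES.items()}

if __name__ == "__main__":
    # In-memory sample data so the sketch runs end to end.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
        INSERT INTO customers VALUES (1, 'a@example.com');
        INSERT INTO orders VALUES (10, 1, 25.00), (11, 1, -5.00), (12, 99, 40.00);
    """)
    for rule, count in run_data_quality_report(conn).items():
        status = "OK" if count == 0 else f"{count} violation(s)"
        print(f"{rule}: {status}")
```

The point of a report like this is that each rule encodes a business expectation once, and the queries can then be run against production-scale data on a schedule, surfacing behaviors no scripted test case anticipated.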
October 20 @ 09:00
09:00 — 09:45 (45′)
Jon Szymanski