It’s a fact of life that we often have to write automated tests for features that have defects, that interact with third-party APIs that aren’t returning the right responses, or that we otherwise know aren’t working right. When the team has decided that the behavior isn’t going to be fixed, what’s an automation engineer to do? Let the tests fail? Not write them at all? Champion harder for the defects to be fixed?
Jenny will suggest writing your tests to pass and setting them up to fail.
By creating tests that pass on the current expected behavior (the defect), we are in a perfect position to tell when the defect is resolved, the API starts returning the correct information, or any of the other error conditions we’ve been encountering change. This prevents failure fatigue (from seeing a test ‘always fail’), while still providing meaningful, actionable information from our test suite.
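To make the idea concrete, here is a minimal pytest-style sketch, assuming a hypothetical third-party order-status call and a made-up JIRA-1234 ticket (neither comes from the talk): the test deliberately asserts today’s defective response, so it passes quietly now and fails the moment the defect is fixed, prompting us to revisit it.

```python
def get_order_status(order_id: int) -> str:
    # Hypothetical stand-in for a call to a third-party orders API.
    # Today the vendor returns "COMPLETE" even for cancelled orders,
    # a known defect the team has decided not to pursue.
    return "COMPLETE"


def test_cancelled_order_status_matches_known_defect():
    # TODO(JIRA-1234): this asserts the *current*, defective behavior on
    # purpose. The test passes today, so it adds no noise to the suite.
    # When the vendor corrects the response, this assertion fails and
    # gives us an actionable signal to update our handling and this test.
    assert get_order_status(order_id=42) == "COMPLETE"
```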
Jenny will discuss several cases from her experience where this method has worked, as well as how to keep the rest of the team informed through TODOs, Jira stories, and documentation. And, of course, what to do when your test finally fails.
Takeaways from the talk:
- When we release software with defects we decline to fix, we still need to verify that behavior
- Failing tests contribute to failure fatigue
- Automation is designed to verify the states of systems, not to verify objectively correct behavior
Jenny Bramble