In a typical QA cycle, hundreds or thousands of issues can be opened. Most are new, valid issues that need to be debugged and fixed. Others, however, drain test and development time and resources only to turn out to be duplicates of other issues, known bugs with existing workarounds, or problems already fixed in the latest version. Using a combination of Python, machine learning, natural language processing, and analytics, my team developed a tool that helps validate new issues by comparing human-readable input against the entire data set of historical issues. The solution is data-source agnostic and is accessed via cloud microservices, making it easy to reuse and redeploy across a variety of teams and organizations.
September 25 · 11:40 — 12:20 (40′)
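The talk does not specify the team's model, but the core idea of matching a human-readable report against historical issues can be sketched with a simple bag-of-words cosine similarity. Everything here is illustrative: the `find_candidates` helper, the sample issue IDs, and the `0.3` threshold are all hypothetical, and a production system would likely use TF-IDF weighting or learned embeddings instead of raw term counts.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into alphanumeric tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def find_candidates(new_issue, historical, threshold=0.3):
    """Return (issue_id, score) pairs for historical issues whose
    text resembles the new report, highest score first.
    The threshold is an illustrative tuning parameter."""
    new_vec = Counter(tokenize(new_issue))
    scored = [
        (issue_id, round(cosine_similarity(new_vec, Counter(tokenize(text))), 2))
        for issue_id, text in historical.items()
    ]
    return sorted(
        [(i, s) for i, s in scored if s >= threshold],
        key=lambda pair: pair[1],
        reverse=True,
    )

# Hypothetical historical data set.
historical = {
    "BUG-101": "app crashes on login when password contains unicode",
    "BUG-102": "export to csv drops the last row of the report",
}
print(find_candidates("login crash with unicode password", historical))
```

A match above the threshold flags the new report as a possible duplicate for a human to confirm, rather than closing it automatically; the data-source-agnostic design mentioned in the abstract would correspond to swapping `historical` for any issue-tracker backend.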