The automation delivered by RPA bots is routinely deployed in live business and production environments. From performing reconciliations to preparing reports, sending statements, and even interacting with customers, bots are becoming increasingly sophisticated. In many cases a bot is akin to a human employee, or, more realistically, several human employees combined into one. For a piece of code, or a bot, to run in production, perform important business operations, and interact with customers requires a very high degree of validation rigor and a risk containment strategy.

However, is such a strategy commonplace? Do organizations have a rigorous mechanism to validate and certify bots? Are there robust checks and controls to identify and contain the risks of bots deployed in a live business environment? While building a bot is somewhat simpler than developing software from scratch, is the bot code held to the same high standard as software code deployed in live business operations? Is it subjected to the same battery of verification and validation to contain risk and ensure quality? Moreover, humans get tests, training, background checks, ongoing appraisals, and coaching; bots receive none of this. So, are bots a risk in waiting? And can we contain this risk with a quality and testing framework that provides the highest levels of confidence in RPA bots, our new high-performing digital employees?

April 3, 13:30–14:15 (45′)

Darshan Dave