AARTI SURESH - Senior QA Engineer, eyos


Implementing integration test automation that runs after the development pipelines ensures defects are found earlier in the lifecycle and that primary functionality remains free of regressions as code is delivered incrementally. This case study sets up the infrastructure needed to run integration test suites as part of the pipeline using the following tech stack: Bitbucket for code hosting and collaboration, AWS as the cloud platform, and SmartBear TestComplete as the functional test automation tool.

Once certain standards are laid out, for example, which test environment to perform integration tests on, continuous execution of a regression/sanity suite in the pre-production integration environment greatly increases confidence in the quality of the outcome. Since the run reports can be made visible to all stakeholders, it also promotes transparency in the team.

Bulkier test suites can be run as nightly executions, while smoke test suites can run as part of the pipeline itself. The infrastructure can also be set up so that utilization costs are easily tracked in the cloud platform and optimized for regular usage. Execution can be triggered from Slack, on a schedule, or by a pipeline deployment.
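As a rough sketch of the pipeline step described above (the install path, suite name, project name, and log location are hypothetical placeholders), a CI agent could invoke TestComplete's command-line runner and export the results log for stakeholders:

```python
import subprocess
from pathlib import Path

# Hypothetical locations; adjust for the actual AWS-hosted agent image.
TESTCOMPLETE_EXE = Path(r"C:\Program Files (x86)\SmartBear\TestComplete 15\Bin\TestComplete.exe")
SUITE = Path(r"C:\tests\RegressionSuite.pjs")

def build_run_command(project: str, log_path: str) -> list[str]:
    """Build a TestComplete command line that runs one project from the
    suite, exports the results log, and exits when the run finishes."""
    return [
        str(TESTCOMPLETE_EXE),
        str(SUITE),
        f"/project:{project}",
        "/run",                    # start the run automatically
        f"/ExportLog:{log_path}",  # export results for stakeholders
        "/exit",                   # close TestComplete afterwards
    ]

if __name__ == "__main__":
    cmd = build_run_command("SmokeTests", r"C:\logs\smoke.mht")
    print(cmd)
    # subprocess.run(cmd, check=True)  # enable on the Windows CI agent
```

A scheduled (nightly) trigger would call the same function with the bulkier project name, keeping one code path for all trigger types.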

In summary, spending time upfront to set up the infrastructure once can reap long-term rewards, shortening the release cycle without compromising on quality.

DANNY TOURGEMAN - VP | Quality Guild, AppsFlyer


Testing at scale - challenges. Why | How | Collect | Alert

  • Why do we run testing in production?
  • How do we execute millions of tests across production and test environments?
  • How do we collect all the data from production?
  • How do we alert R&D about findings from production testing?

BRYNLEY SCULLY - CTO, Tescom Singapore


Many automation test teams start their work organically, automating a few test cases and then slowly building up to a suite of automated tests. Few teams start with an automation test strategy; some never define one at all. As a result, such teams leave a lot of untapped value in their automation testing.

This talk focuses on these teams, who have already started their automation test journey without a defined test strategy. It invites them to do a self-examination (hence the topic “My Automated Tests…”) of where they are today and to look for areas where they can expand their scope to bring added value to their projects and business.

With the maturing of automation testing approaches, tools, and expectations, there is much to be gained from exploring and expanding your automation test strategy.

ANUCAMPA SINGH - Automation Test Engineer, Visa


  • Need for Mobile app testing
  • Framework selection and platform considerations
  • How mobile app testing is implemented
  • Challenges faced

KEN S'NG WONG - Senior Software QA Engineer, Autodesk Asia


It is well-known that there are many go-to test automation frameworks for automating test cases for web applications (web apps). The latest entry is Playwright. Like any typical web-app test framework, Playwright takes advantage of the HTML/DOM UI layout, performs validation on the UI elements, then compares the responses and behaviors against the provided passing criteria. However, it cannot do the same with the HTML Canvas element. For a web app that renders its UI only in a Canvas, testing requires the user to click, drag, and scroll, all while observing the rendered content. Typical visual-regression and pixel-to-pixel matching techniques incur higher maintenance costs for the team because a series of golden images is required for each test case. Furthermore, this test methodology does not scale well across many test configurations.

Thus, the team decided to complement Playwright in order to automate most of the test cases. All test cases use Playwright as the entry point to launch the web app per the required steps. Once the web app is launched through Playwright, the Canvas buffer is extracted with JavaScript injection while various actionable events such as mouse clicks, drags, and scrolls are emitted. The extracted buffer is used for actionability checks at the pixel level and for validating the rendered scenes on the Canvas with re-purposed OpenCV algorithms at the end of each test execution. The actionability checks are crucial before events are emitted to the Canvas, because the scene renders as pixel data streams into the web browser. If an action is emitted before the target pixels are available, or rendered, the web app behaves in unexpected ways and distorts the results of the remaining test steps, producing false negatives, or false failures.
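A minimal sketch of such a pixel-level actionability check (function names, the assumed blank-canvas color, and thresholds are hypothetical; in the real setup the RGBA buffer would come from the Canvas via Playwright's JavaScript injection):

```python
import numpy as np

# Assumption: an unrendered canvas region is solid white RGBA.
BACKGROUND = np.array([255, 255, 255, 255], dtype=np.uint8)

def region_is_rendered(buffer: np.ndarray, x: int, y: int,
                       size: int = 8, threshold: float = 0.5) -> bool:
    """Return True when enough pixels around (x, y) differ from the
    background color, i.e. the scene has actually been drawn there and
    it is safe to emit a click/drag/scroll event at that point."""
    h, w, _ = buffer.shape
    x0, x1 = max(0, x - size), min(w, x + size)
    y0, y1 = max(0, y - size), min(h, y + size)
    patch = buffer[y0:y1, x0:x1]
    drawn = np.any(patch != BACKGROUND, axis=-1)  # per-pixel "not background"
    return bool(drawn.mean() >= threshold)

# Example: a blank 100x100 canvas with a 20x20 rendered square at (40, 40).
canvas = np.full((100, 100, 4), 255, dtype=np.uint8)
canvas[40:60, 40:60] = [30, 30, 30, 255]
print(region_is_rendered(canvas, 50, 50))  # → True, square is drawn here
print(region_is_rendered(canvas, 5, 5))    # → False, still blank
```

In practice the check would be polled with a timeout while the pixel data streams in, and only then would the event be emitted through Playwright.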

With the combination of Playwright, OpenCV, and other libraries, the team is able to put focus on adding more meaningful test cases to the test automation, while fixing any regression bugs reported by it, thereby delivering a high-quality Minimum Viable Product (MVP) from sprint to sprint.

GANESH NEELAKANTA IYER - Lecturer, National University of Singapore


Artificial Intelligence has touched all parts of human life, including medical applications, the business world, and the government sector, to name a few. One part of this session will focus on testing AI-based solutions and AI models: the challenges and opportunities of testing AI solutions. The other part will focus on how AI is used to test general software solutions.