Learn from the Testing Experts

31st March, 2026

ISTANBUL

>> Home

>> Register

>> Programme Schedule

Featured Speakers

Alper Donmez

VP of Quality
Jotform

How to create an environment to test your own AI

This talk explains how we built a controlled and reliable environment for testing our AI system. Drawing on our experience, it covers environment isolation, data-focused testing, and continuous validation to ensure consistent model behavior, reliable results, and safe deployment.

Takeaways from this talk

  • AI testing requires isolated and production-like environments
  • Data pipelines and datasets must be tested and versioned, not just models
  • Model behavior should be evaluated with clear metrics and logging
  • Reproducibility is critical due to non-deterministic AI outputs
  • Continuous testing integrated into CI/CD reduces production risk
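The takeaways on versioned datasets and reproducibility can be sketched in code. This is a minimal, hypothetical illustration (the dataset, `fake_model`, and `evaluate` are invented for the example, not Jotform's actual setup): the test pins a dataset hash to catch silent drift and a random seed so two evaluation runs always agree.

```python
import hashlib
import json
import random

# Hypothetical toy dataset; in practice this would be a versioned file.
DATASET = [
    {"input": "2+2", "expected": "4"},
    {"input": "3+3", "expected": "6"},
]

def dataset_hash(data) -> str:
    # Canonical JSON so the hash is stable across runs.
    return hashlib.sha256(json.dumps(data, sort_keys=True).encode()).hexdigest()

# Pinned at dataset-versioning time; a mismatch later means the data drifted.
EXPECTED_DATASET_HASH = dataset_hash(DATASET)

def fake_model(prompt: str, rng: random.Random) -> str:
    # Stand-in for a real model call; the seeded rng would make any
    # sampling reproducible. eval() is only acceptable for this toy data.
    return str(eval(prompt))

def evaluate(seed: int = 42) -> float:
    # Gate 1: fail fast if the dataset changed without a version bump.
    assert dataset_hash(DATASET) == EXPECTED_DATASET_HASH, "dataset drifted"
    rng = random.Random(seed)
    correct = sum(
        fake_model(case["input"], rng) == case["expected"] for case in DATASET
    )
    return correct / len(DATASET)

if __name__ == "__main__":
    accuracy = evaluate()
    # Gate 2: same seed, same result — the reproducibility check.
    assert accuracy == evaluate()
    print(f"accuracy={accuracy:.2f}")
```

Wired into CI/CD, checks like these turn "the model seems fine" into a pass/fail signal on every build.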

Cemile Elif Top

Software Test Engineering Manager
BosphorusISS

Testing in the Wild: QA when the Embedded Team Grows Faster Than Testers

Embedded product teams often scale quickly — more firmware variants, more features, more field deployments — while the QA team stays the same size. The result is a familiar chaos: unclear testing priorities, unpredictable release confidence, and field bugs that only show up under real-world conditions.

This presentation is a practical survival guide for QA engineers, whatever their field or technology stack, who face situations like these. We’ll cover how to decide what not to test, how to build a reliable acceptance test core, and how to set up a lightweight triage flow for field issues.

The focus is not theory or tooling hype: just repeatable patterns that let small QA teams maintain quality in fast-moving environments, whatever the domain or project.

Takeaways from this talk

  • How to choose what not to test: Attendees will learn a practical way to set testing priorities when time and people are limited. The focus is on identifying the critical paths in the system and letting those guide the testing scope, instead of trying to test everything and ending up testing nothing well.
  • How to handle field bugs quickly, even with almost no data: I’ll share a simple approach for breaking down field issues. The method focuses on reproducibility steps, collecting the minimum data, and avoiding guesswork.
  • How to create an acceptance test core that actually protects releases: Instead of chasing full coverage, we’ll look at building a lean set of high-value tests that run on every build. This core is small enough to maintain, yet strong enough to give the team real confidence before shipping.
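The acceptance-test-core idea above can be sketched as follows. This is a hypothetical illustration, not the speaker's actual suite: `boot_device` is an invented stand-in, and the point is the shape, a deliberately short registry of critical-path checks that gate every build.

```python
# Hypothetical sketch of a lean acceptance test core: a handful of
# critical-path checks that run on every build, instead of full coverage.

def boot_device() -> dict:
    # Invented stand-in for bringing up real hardware or a simulator.
    return {"status": "ready", "firmware": "1.2.0", "sensors": ["temp", "gps"]}

ACCEPTANCE_CORE = []

def acceptance(test):
    # Register a check in the core suite; keep this list deliberately small
    # so it stays cheap to run and easy to maintain.
    ACCEPTANCE_CORE.append(test)
    return test

@acceptance
def test_device_boots():
    assert boot_device()["status"] == "ready"

@acceptance
def test_critical_sensors_present():
    sensors = boot_device()["sensors"]
    assert "temp" in sensors and "gps" in sensors

def run_core() -> bool:
    # Release gate: every core check must pass before shipping.
    results = []
    for test in ACCEPTANCE_CORE:
        try:
            test()
            results.append((test.__name__, True))
        except AssertionError:
            results.append((test.__name__, False))
    for name, ok in results:
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
    return all(ok for _, ok in results)

if __name__ == "__main__":
    assert run_core()
```

Because the core is small, a failure points straight at a broken critical path rather than drowning the team in flaky long-tail results.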
