Learn from the Testing Experts

31st March, 2026

ISTANBUL

>> Home

>> Register

>> Programme Schedule

Featured Speakers

Alper Donmez

VP of Quality
Jotform

How to create an environment to test your own AI

This talk explains how we created a controlled and reliable environment for testing our AI system. Drawing on our experience, it covers environment isolation, data-focused testing, and continuous validation to ensure consistent model behavior, reliable results, and safe deployment.

Takeaways from this talk

  • AI testing requires isolated and production-like environments
  • Data pipelines and datasets must be tested and versioned, not just models
  • Model behavior should be evaluated with clear metrics and logging
  • Reproducibility is critical due to non-deterministic AI outputs
  • Continuous testing integrated into CI/CD reduces production risk
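The last two takeaways, reproducibility and continuous testing in CI/CD, are often combined in practice as a golden-dataset check. As a minimal sketch (the dataset and `model_predict` stand-in below are illustrative, not from the talk), a CI job can score the model against a versioned dataset and fail the build if accuracy regresses:

```python
# Minimal continuous-validation sketch: score model outputs against a
# versioned golden dataset and fail the build below a release threshold.
# GOLDEN_DATASET and model_predict are illustrative placeholders.

GOLDEN_DATASET = [  # in practice, loaded from a versioned data file
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "5 * 3", "expected": "15"},
]

def model_predict(text: str) -> str:
    """Stand-in for the real model call; one answer is deliberately wrong."""
    answers = {"2 + 2": "4", "capital of France": "Paris", "5 * 3": "14"}
    return answers.get(text, "")

def accuracy(dataset) -> float:
    """Fraction of golden cases the model answers exactly."""
    hits = sum(1 for case in dataset
               if model_predict(case["input"]) == case["expected"])
    return hits / len(dataset)

if __name__ == "__main__":
    score = accuracy(GOLDEN_DATASET)
    print(f"accuracy={score:.2f}")  # logged so every CI run is comparable
    assert score >= 0.6, "model regression: accuracy below release threshold"
```

Pinning the dataset version and logging the metric on every run is what makes non-deterministic model behavior comparable across builds.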
Cemile Elif Top

Software Test Engineering Manager
BosphorusISS

Testing in the Wild: QA when the Embedded Team Grows Faster Than Testers

Embedded product teams often scale quickly — more firmware variants, more features, more field deployments — while the QA team stays the same size. The result is a familiar chaos: unclear testing priorities, unpredictable release confidence, and field bugs that only show up under real-world conditions.

This presentation is a practical survival guide for QA engineers, whatever their field or technology, designed to help them navigate situations like these. We’ll cover how to decide what not to test, how to build a reliable acceptance test core, and how to set up a lightweight triage flow for field issues.

The focus is not theory or tooling hype, just repeatable patterns that let small QA teams maintain quality in fast-moving environments.

Takeaways from this talk

  • How to choose what not to test: Attendees will learn a practical way to set testing priorities when time and people are limited. The focus is on identifying the critical paths in the system and letting those guide the testing scope, instead of trying to test everything and ending up testing nothing well.
  • How to handle field bugs quickly, even with almost no data: I’ll share a simple approach for breaking down field issues. The method focuses on reproducibility steps, collecting the minimum necessary data, and avoiding guesswork.
  • How to create an acceptance test core that actually protects releases: Instead of chasing full coverage, we’ll look at building a lean set of high-value tests that run on every build. This core is small enough to maintain, yet strong enough to give the team real confidence before shipping.

Tutorial Speaker

Cagin Uludamar

Founder
Assertify

The AI-Augmented QE: "Human-in-the-Loop" Contract Testing

The Concept: Contract testing is a powerful but often misunderstood practice in the industry. This session serves two purposes:

  1. Education: Introducing the fundamentals of contract testing and why it's critical for microservices.
  2. AI-Human Collaboration: I will demonstrate a workflow where AI generates the initial scenarios, and the audience acts as the "Human Layer" to review and correct the logic.

Why it fits: It shows that while AI can handle the "heavy lifting" of a complex topic like contract testing, human judgment remains the non-negotiable final gatekeeper for quality.
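The core idea the session teaches, a consumer publishing the response shape it depends on and the provider being verified against it, can be sketched without any tooling. The field names and `verify_contract` helper below are illustrative assumptions, not material from the talk:

```python
# Library-free sketch of consumer-driven contract testing: the consumer
# declares the fields and types it relies on, and the provider's response
# is checked against that contract. All names here are illustrative.

CONSUMER_CONTRACT = {  # what the consumer promises to depend on
    "id": int,
    "email": str,
    "active": bool,
}

def verify_contract(response: dict, contract: dict) -> list:
    """Return a list of contract violations (empty means the provider passes)."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}")
    return violations

# A provider response that drifted: 'active' became a string.
provider_response = {"id": 7, "email": "a@example.com", "active": "yes"}
print(verify_contract(provider_response, CONSUMER_CONTRACT))
```

In real systems this verification is automated by tools such as Pact, and the "human layer" the session describes is the review of which fields and scenarios belong in the contract at all.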

Panel Discussion Speakers

Mehmet Gok

Principal Test Engineer
Türksat

Mehmet Gök is a Principal Test Engineer with 16+ years of experience in software quality, test automation, and verification across aerospace, satellite communications, and large-scale consumer platforms.

He works on In-Flight Connectivity (IFC) and satellite-based aviation systems, focusing on reliability, performance, and regulatory compliance, including DO-178C-style verification, API-driven automation, and system and integration testing.

Previously, he held senior and leadership roles at Turkish Aerospace, Udemy, and Türksat, combining experience from highly regulated avionics programs with high-traffic consumer systems. His current interests include AI-assisted testing, large-scale API and interface test automation, service virtualization, and AI-driven MC/DC test case generation for safety-critical software.

Pinar Tatlisu Morali

Test Automation Team Leader
Airties

QA Test Automation Team Leader with an embedded software development background and 17 years of experience in the information technology and consumer electronics industries. Self-driven, energetic, and customer-oriented; passionate about continuous learning and professional development.
