Learn from the Testing Experts

18th June, 2026

MELBOURNE

>> Home

>> Register

>> Programme Schedule

Featured Speaker

Renard Vardy

Principal Consultant
KJR

The Rise of Test Data Management

Test data management can make or break performance and automation testing. Drawing on over 25 years of experience across HR, finance, government and large enterprises, Renard shares real-world lessons from the evolution of test data: stories from the early days when it was common practice to use production copies in testing, through large-scale data obfuscation programs, to generating 100% synthetic data and exposing test data to the enterprise via a self-service portal.

Takeaways from this talk

  • This session takes us on a practical journey through the evolution of test data management: what worked, what failed and why. Attendees will gain fresh insights into a proven approach to building faster, safer and more agile test data strategies that support modern quality engineering.
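The end point of the evolution described above, fully synthetic data, can be illustrated with a minimal sketch. The field names and value ranges here are invented for illustration; a real programme would use a dedicated data-synthesis tool rather than hand-rolled generators:

```python
import random
import string

# Fully synthetic HR records: no production data is involved, so no
# obfuscation step is needed and records are safe to share broadly.
FIRST_NAMES = ["Alex", "Sam", "Jordan", "Priya", "Wei"]
LAST_NAMES = ["Nguyen", "Smith", "Patel", "Garcia", "Jones"]

def synthetic_employee(rng: random.Random) -> dict:
    """Return one synthetic employee record (all fields invented)."""
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    emp_id = "".join(rng.choices(string.digits, k=6))
    return {
        "employee_id": f"E{emp_id}",
        "name": f"{first} {last}",
        "email": f"{first}.{last}@example.test".lower(),
        "salary": rng.randrange(50_000, 150_000, 1_000),
    }

# Seeding the generator makes the data set reproducible, which matters
# for repeatable performance and automation test runs.
rng = random.Random(42)
batch = [synthetic_employee(rng) for _ in range(100)]
```

A self-service portal of the kind the talk mentions would sit in front of a generator like this, letting teams request seeded, reproducible batches on demand.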

Aarti Suresh

Solutions Engineer
Postman

MCP and AI Agents in QA: A Strategic Implementation Framework

As AI-driven code generation accelerates software development velocity, the quality assurance field faces an imperative to strategically evaluate AI agent implementation. Critical decisions include determining optimal use cases for AI-powered testing, identifying appropriate points for human intervention, and establishing governance frameworks for agent deployment. Success in this domain requires a comprehensive understanding of the rapidly evolving agentic AI landscape and its implications for QA practices.

Organizations are establishing AI maturity models that span the spectrum from basic code completion assistance to fully autonomous multi-agent systems. Concurrently, they are developing sophisticated approaches to building task-specific AI agents. The Model Context Protocol (MCP), introduced in November 2024, has experienced remarkable adoption rates and emerged as a critical integration layer, enabling enterprises to connect proprietary systems directly to large language models and deliver measurable business value.

This presentation equips QA professionals with a strategic decision framework for AI agent adoption, structured around three fundamental pillars: effectiveness, security, and sustainability. Any discussion of agent effectiveness must acknowledge MCP as a key enabling technology. Understanding MCP’s capabilities and limitations is essential for informed deployment decisions.

MCPs deliver distinct value across the QA organizational hierarchy—from heads of quality to QA leads and individual contributors. I will provide role-specific agent implementations leveraging MCP, including Test Planning Agents for strategic roles and QA Health Monitoring Agents for operational teams. Given the security implications inherent in AI agent architectures (remote vs. local), the presentation will provide a comparative analysis of MCP server types, evaluating their respective security profiles.

Equally important is understanding when MCP implementations are contraindicated. Scenarios warranting caution include heightened security requirements, latency-sensitive applications, and situations where context provision creates unnecessary overhead or risk.

Security considerations merit dedicated attention. The convergence of three risk factors—access to sensitive data, connectivity to external systems, and exposure to untrusted content—creates substantial vulnerabilities in AI agent deployments. Analyzing how MCP integration affects this risk triad is essential for sound architectural decisions.
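The risk triad above can be made concrete with a small scoring sketch. The factor names and the count-based severity mapping are illustrative assumptions, not a published framework:

```python
from dataclasses import dataclass

@dataclass
class AgentDeployment:
    """The three risk factors for an AI agent deployment."""
    accesses_sensitive_data: bool
    connects_to_external_systems: bool
    exposed_to_untrusted_content: bool

def risk_level(agent: AgentDeployment) -> str:
    """Score by convergence: each factor alone is manageable,
    but combinations compound, so severity tracks the count."""
    factors = sum([
        agent.accesses_sensitive_data,
        agent.connects_to_external_systems,
        agent.exposed_to_untrusted_content,
    ])
    return {0: "low", 1: "moderate", 2: "high", 3: "critical"}[factors]

# Example: a test-planning agent that reads internal tickets
# (sensitive data) and public bug reports (untrusted content).
planner = AgentDeployment(
    accesses_sensitive_data=True,
    connects_to_external_systems=False,
    exposed_to_untrusted_content=True,
)
```

Adding an MCP server to a deployment typically flips at least one of these flags, which is why analyzing MCP integration against the triad matters for architectural decisions.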

Finally, sustainability must guide deployment strategies. Decision-makers should exercise judicious discretion in AI agent utilization. GPU-intensive runtime agents incur significant computational and energy costs, whereas generated code offers efficiency advantages. While MCPs provide cost-effective context mechanisms, organizations must establish longitudinal monitoring of token consumption, accuracy metrics, and overall sustainability to ensure responsible and effective implementation.


Takeaways from this talk

  • Contextual understanding of why it is important to design AI agents that are effective, secure and sustainable.
  • A clear understanding of what MCP is, why the buzz, its architecture and the types of servers.
  • Examples of how MCPs can be used across roles: heads of QA, QA leads and engineers.
  • A framework to rate AI agents from a security-risk angle.
  • Decision-makers will understand when to choose a local MCP setup versus a remote one, when to put the 'human in the loop', and how to make sustainable and secure decisions at the ground level of AI design.

Vinay Madan

Technical Lead
MVSI

DevTestSecOps – Secure Shift-Left Testing (Embed Security Testing in CI/CD Pipelines)

DevTestSecOps integrates security testing into CI/CD pipelines, embedding SAST scans, container vulnerability checks and E2E functional tests before code reaches production. This talk demonstrates how to shift security left using real tools (SonarQube, Trivy, Playwright, OWASP ZAP) and provides working code examples that reduce vulnerability detection time by 10 weeks and remediation costs by 90%.
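As a sketch of the fail-fast gate pattern the talk describes, the snippet below parses a Trivy JSON report and returns a non-zero exit code on HIGH/CRITICAL findings. The report layout follows Trivy's JSON output (a top-level `Results` list whose entries carry a `Vulnerabilities` list); the severity threshold is an assumption a team would tune:

```python
import json
import sys

BLOCKING = {"HIGH", "CRITICAL"}

def count_blocking(report: dict) -> int:
    """Count HIGH/CRITICAL findings in a Trivy JSON report."""
    total = 0
    for result in report.get("Results", []):
        # "Vulnerabilities" may be absent or null for clean targets
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") in BLOCKING:
                total += 1
    return total

def gate(report: dict) -> int:
    """Fail-fast CI gate: a non-zero exit code blocks the pipeline."""
    blocking = count_blocking(report)
    if blocking:
        print(f"FAIL: {blocking} blocking vulnerabilities", file=sys.stderr)
        return 1
    print("PASS: no HIGH/CRITICAL vulnerabilities")
    return 0

# In CI: trivy image --format json -o report.json my-image:latest
# then:  sys.exit(gate(json.load(open("report.json"))))
```

Wired into a GitHub Actions step, the non-zero exit fails the job immediately, surfacing vulnerabilities in the same pipeline run that introduced them.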

Takeaways from this talk

  • Automate security gates (SAST, container, E2E, DAST) in CI/CD for fail-fast feedback
  • Use industry tools: Trivy, SonarQube, Playwright, GitHub Actions—tested & proven
  • Shift vulnerabilities LEFT: catch in weeks 1-2, not weeks 7-8
