Learn from the Testing Experts

7th – 8th NOVEMBER, 2024



Tariq King

Test IO

Tariq King is a recognized thought-leader in software testing, engineering, DevOps, and AI/ML. He is currently the CEO and Head of Test IO, an EPAM company. Tariq has over fifteen years’ professional experience in the software industry. He has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has written book chapters and technical reports for Springer, O’Reilly, Capgemini, Sogeti, IGI Global, and more. Tariq has been an international keynote speaker and trainer at leading software conferences in industry and academia, and serves on multiple conference boards and program committees.

Talk: The Rise of Generative AI: Judgment Day

It’s been over 70 years since Alan Turing defined what many still consider to be the ultimate test for a computer system — can a machine exhibit intelligent behavior that is indistinguishable from that of a human? Originally dubbed the imitation game, the Turing test involves having someone evaluate text conversations between a human and a machine designed to respond like a human. The machine passes the test if the evaluator cannot reliably distinguish human-generated from machine-generated text. Although the Turing test generally serves as a starting point for discussing AI advances, some question its validity as a test of intelligence. After all, the results do not require the machine to be correct, only for its answers to resemble those of a human.

Whether it’s due to artificial “intelligence” or imitation, we live in an age where machines are capable of generating convincingly realistic content. Generative AI does more than answer questions: it writes articles and poetry, synthesizes human faces and voices, creates music and artwork, and even develops and tests software. But what are the implications of these machine-based imitation games? Are they a glimpse into a future where AI reaches general or super intelligence? Or is it simply a matter of revisiting or redefining the Turing test? Join Tariq King as he leverages a live audience of software testing professionals to probe everything from generative adversarial networks (GANs) to generative pre-trained transformers (GPTs). Let’s critically examine the Turing test and more, because it’s judgment day — and this time, we are the judges!

Tutorial: An Introduction to AI-Driven Test Automation

Conventional test automation approaches are time-consuming and can produce scripts that are fragile and overly sensitive to change. The rise of AI-driven test automation tools promises more robust and resilient test scripts that are able to self-heal as the application evolves. But what exactly is this technology all about, and how do you get started? Does it require learning new skills and technologies? What tools are immediately available for beginners? Join Tariq King as he introduces you to the world of AI-driven test automation. Learn the fundamentals of AI and ML and how you can apply them to software testing problems. Discover where you can find freely available, open-source tools to support AI-driven test automation. Using a step-by-step approach, Tariq guides you through the basics of what is needed to help you get started with AI-driven testing. No prior programming or AI/ML experience needed!
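To make the idea of self-healing scripts concrete, here is a minimal, tool-agnostic sketch of one common strategy: a locator keeps an ordered list of fallback selectors and promotes whichever one still matches after the UI changes. All names here (`Page`, `SelfHealingLocator`) are illustrative inventions for this sketch, not the API of any real tool, and a stand-in dictionary replaces a real browser.

```python
class Page:
    """Stands in for a rendered page: maps selectors to element labels."""
    def __init__(self, elements):
        self.elements = elements

    def find(self, selector):
        return self.elements.get(selector)


class SelfHealingLocator:
    """Tries selectors in order of preference and 'heals' by promoting
    the first fallback that still matches the evolved application."""
    def __init__(self, selectors):
        self.selectors = list(selectors)  # preferred selector first

    def locate(self, page):
        for i, sel in enumerate(self.selectors):
            element = page.find(sel)
            if element is not None:
                if i > 0:
                    # Heal: move the working selector to the front so the
                    # script adapts instead of failing on the stale one.
                    self.selectors.insert(0, self.selectors.pop(i))
                return element
        raise LookupError("No selector matched; locator cannot heal.")


# The button's id changed between releases, but a fallback still matches.
v1 = Page({"#submit-btn": "Submit", "button.primary": "Submit"})
v2 = Page({"#send-btn": "Submit", "button.primary": "Submit"})

locator = SelfHealingLocator(["#submit-btn", "button.primary"])
locator.locate(v1)           # found via the preferred "#submit-btn"
locator.locate(v2)           # id is gone, so the locator heals
print(locator.selectors[0])  # "button.primary" is now preferred
```

Real AI-driven tools go further, using ML over element attributes, position, and history to rank candidates, but the fallback-and-promote loop above captures the core self-healing behavior a brittle hard-coded selector lacks.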

Janet Gregory

Testing and Process Consultant, Author
DragonFire Inc.

Janet is an author, international keynote speaker, workshop facilitator, and an agile testing coach and process consultant with DragonFire Inc. She is the co-author with Lisa Crispin of Agile Testing Condensed: A Brief Introduction, More Agile Testing: Learning Journeys for the Whole Team, and Agile Testing: A Practical Guide for Testers and Agile Teams. She has more than 23 years of experience in software development and specializes in showing agile teams how testing activities are necessary for the whole team to develop good quality products. She works with teams to transition to agile development and teaches agile testing courses worldwide.

Talk: Elevating the Testing Function to Deliver Business Outcomes

Tutorial: Quality Practices Assessment Model for Agile Teams

It is easy to develop tunnel vision on day-to-day work when working on an agile team. Teams can lose sight of where they are and where they want to grow (their ideal state). They may sense what they are doing well but be unsure how to quantify it; or they might be struggling but unable to clearly articulate the issues or what to do to change them.

There is an objective way to look at team and organizational practices and contrast them with behavioural patterns. In this workshop, Janet introduces a quality practices assessment model that can be used in agile teams. This model provides a practical means to reflect on, assess, and adapt both behaviours and practices within a team and an organization. It includes:

  • Questions to reflect on specific behavioural patterns and capabilities
  • A tool to assess and map the current state
  • Methods to focus and shift behaviours and practices to a more supportive state

In this workshop, you will apply the model to a single context in your work environment, then practice using the entire model with a real case study. You will leave the workshop with the insight and experience to assess, model, and recommend specific improvements for an agile team.

Key Takeaways:

  • Discover a set of quality practices and proficiencies for different behavioural patterns
  • Identify where you are now and how to transition to another stage
  • Practice assessing, modelling, and recommending specific improvements for an agile team
