Learn from the Testing Experts
OCTOBER 17, 2024
VANCOUVER
Keynotes
Tariq King is a recognized thought leader in software testing, engineering, DevOps, and AI/ML. He is currently the CEO and Head of Test IO, an EPAM company. Tariq has over fifteen years of professional experience in the software industry. He has published over 40 research articles in peer-reviewed IEEE and ACM journals, conferences, and workshops, and has written book chapters and technical reports for Springer, O’Reilly, Capgemini, Sogeti, IGI Global, and more. Tariq has been an international keynote speaker and trainer at leading software conferences in industry and academia, and serves on multiple conference boards and program committees.
Talk: The Rise of Generative AI: Judgment Day
It’s been over 70 years since Alan Turing defined what many still consider to be the ultimate test for a computer system — can a machine exhibit intelligent behavior that is indistinguishable from that of a human? Originally called the imitation game, the Turing test involves having someone evaluate text conversations between a human and a machine designed to respond like a human. The machine passes the test if the evaluator cannot reliably tell the human-generated text from the machine-generated text. Although the Turing test generally serves as a starting point for discussing AI advances, some question its validity as a test of intelligence. After all, the results do not require the machine to be correct, only for its answers to resemble those of a human.
Whether it’s due to artificial “intelligence” or imitation, we live in an age where machines are capable of generating convincingly realistic content. Generative AI does more than answer questions: it writes articles and poetry, synthesizes human faces and voices, creates music and artwork, and even develops and tests software. But what are the implications of these machine-based imitation games? Are they a glimpse into a future where AI reaches general or super intelligence? Or is it simply a matter of revisiting or redefining the Turing test? Join Tariq King as he leverages a live audience of software testing professionals to probe everything from generative adversarial networks (GANs) to generative pre-trained transformers (GPTs). Let’s critically examine the Turing test and more because it’s judgment day — and this time, we are the judges!
Tutorial: An Introduction to AI-Driven Test Automation
Conventional test automation approaches are time-consuming and can produce scripts that are fragile and overly sensitive to change. The rise of AI-driven test automation tools promises more robust and resilient test scripts that are able to self-heal as the application evolves. But what exactly is this technology all about, and how do you get started? Does it require learning new skills and technologies? What tools are immediately available for beginners? Join Tariq King as he introduces you to the world of AI-driven test automation. Learn the fundamentals of AI and ML and how you can apply them to software testing problems. Discover where you can find freely available, open-source tools to support AI-driven test automation. Using a step-by-step approach, Tariq guides you through the basics of what is needed to help you get started with AI-driven testing. No prior programming or AI/ML experience needed!
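To make the self-healing idea concrete ahead of the tutorial, here is a minimal sketch of the underlying fallback concept, assuming Selenium; the page URL and selectors are hypothetical, and real AI-driven tools go further, ranking candidate locators with learned models rather than walking a fixed list.

```python
# Minimal sketch of "self-healing" element lookup: if the primary locator
# breaks after a UI change, try alternative locators instead of failing
# the test outright. Selectors and URL below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_with_healing(driver, locators):
    """Try each (strategy, value) locator in order; return the first match."""
    for strategy, value in locators:
        try:
            return driver.find_element(strategy, value)
        except NoSuchElementException:
            continue  # this locator broke; fall back to the next candidate
    raise NoSuchElementException(f"No locator matched: {locators}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page

# Primary locator first, then progressively looser fallbacks.
submit = find_with_healing(driver, [
    (By.ID, "submit-btn"),
    (By.NAME, "submit"),
    (By.XPATH, "//button[contains(text(), 'Sign in')]"),
])
submit.click()
driver.quit()
```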
Don has over 20 years of experience in application delivery management, lifecycle management (Agile, iterative, and waterfall), functional automation, and performance engineering. He is a seasoned speaker at various conferences and webinars, where he shares his insights and expertise.
Talk: Not All AI is Created Equal
Join Don in this engaging session as he delves into the intricate world of artificial intelligence (AI). Don, our expert speaker, will explore the multiple areas of AI research. We’ll discuss how various AI paradigms differ, and why treating them equally can be problematic. We’ll also discuss AI’s impact on testing and beyond. Prepare to be inspired by specific examples showcasing AI’s potential.
Featured Speakers
Talk: Have You Shifted Only Halfway Left?
While many companies are adopting the concept of shifting QA left, only a few are truly embracing the full extent of this approach. Some organizations believe they have successfully “shifted left” but fail to recognize the potential for even greater speed and efficiency in delivering value to customers.
Takeaways from this talk
- How to implement Specification by Example/BDD as a development methodology, not as a tool.
- If you are writing test scripts, you have already failed.
- How to write – and automate – requirements (see the sketch after this list).
- Zero test cases, zero test scripts, and lots of automation is the right way to go.
- How to reuse your automation across interfaces – sharing at least half of the automation code across the web, the API, and the mobile apps.
- Cut the cost of maintaining your automation by 50% to 80%.
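To illustrate the "automate the requirements, not test scripts" idea, here is a minimal, self-contained sketch of a requirement written in Given/When/Then form and executed directly, so the specification itself is the automation. This is an illustration only, not the speaker's toolchain; real teams would typically use a BDD framework such as behave or pytest-bdd, and the shopping-cart domain here is hypothetical.

```python
# Hypothetical sketch: a requirement written once in Given/When/Then form
# and executed directly, so the specification doubles as the automation.
import re

STEPS = []  # registry of (pattern, step function) pairs

def step(pattern):
    """Decorator binding a natural-language step to an implementation."""
    def register(func):
        STEPS.append((re.compile(pattern), func))
        return func
    return register

REQUIREMENT = """
Given a cart containing 2 items priced at 10.00
When the customer checks out
Then the total charged is 20.00
"""

class Context:
    """Shared state passed between steps."""
    def __init__(self):
        self.prices, self.total = [], None

@step(r"a cart containing (\d+) items priced at ([\d.]+)")
def given_cart(ctx, count, price):
    ctx.prices = [float(price)] * int(count)

@step(r"the customer checks out")
def when_checkout(ctx):
    ctx.total = sum(ctx.prices)

@step(r"the total charged is ([\d.]+)")
def then_total(ctx, expected):
    assert ctx.total == float(expected), f"expected {expected}, got {ctx.total}"

def run(spec):
    ctx = Context()
    for line in filter(None, (l.strip() for l in spec.splitlines())):
        text = re.sub(r"^(Given|When|Then|And)\s+", "", line)
        for pattern, func in STEPS:
            match = pattern.fullmatch(text)
            if match:
                func(ctx, *match.groups())
                break
        else:
            raise RuntimeError(f"No step matches: {line}")
    print("Requirement passed.")

run(REQUIREMENT)
```

Because the steps are phrased in domain language rather than UI actions, the same requirement can drive a web, API, or mobile implementation behind the step functions, which is what makes the cross-interface reuse described in the takeaways possible.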
Talk: The Lazy Tester: Musings on Being Efficient
There are many misconceptions and perceptions around automation as it pertains to software testing, so in this talk we will discuss those and set the record straight on what automation is and, more importantly, what it is NOT.
Takeaways from this talk
The term automation as it pertains to software testing has been a driving force in defining the software testing industry. For many years, we’ve used it as a catch-all to determine whether a tester, testing team, or IT organization is successful. In this talk, we will discuss the five misconceptions that are pervasive within companies – including using a percentage or number of test cases to define success, and specifying a title/role for those who automate and matrixing that role across teams versus embedding it within the project they are supporting. Melissa will discuss rebooting our current thinking on automation and show tactics to address the five misconceptions that have contributed to what many of us would consider one of the wedges that divide the testing industry. In addition, she will share practical and proven approaches you can take back with you to show immediate and valuable results.
Talk: How AI Can Increase Test Efficiency
IT leaders are under pressure to incorporate artificial intelligence and generative AI into various business operations such as software delivery, application modernization, and cloud migration. Forty percent of executives believe that failing to adopt AI could jeopardize their organizations’ economic viability within the next decade.
In today’s complex IT environment, where even minor changes can impact quality, utilizing AI for quality assurance can provide a competitive advantage. Implementing AI-driven quality initiatives intelligently can accelerate software delivery by improving efficiency, productivity, and accuracy in testing processes.
AI serves to enhance human productivity rather than replace it. We need to integrate AI into our products to empower users to perform their roles more intelligently, thereby enabling organizations to improve quality and reduce operational costs.
AI will foster a more interactive, iterative, and collaborative relationship between machines and testers. It’s essential to recognize that humans will always be indispensable for tasks requiring decision-making, ethical judgment, and understanding of nuanced contexts. Quality engineering teams must anticipate AI transforming their methods but should continue to rely on their creativity, strategic thinking, and problem-solving abilities, as these are indispensable for success alongside AI.
Takeaways from this talk
- AI can empower QA teams to streamline the process of test case creation and test planning (see the sketch after this list)
- AI enables improvement of test case resilience
- How AI can help QA teams gain intelligent user experience insights
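As a concrete example of the first takeaway, here is a minimal sketch of LLM-assisted test case drafting. It assumes the openai Python package (v1+) with an OPENAI_API_KEY set in the environment; the requirement text, prompt, and model name are illustrative only, and the output is a draft for a human tester to review, prune, and extend, not a finished test plan.

```python
# Hedged illustration of AI-assisted test design: ask an LLM to draft test
# cases from a requirement, then hand the draft to a human for review.
# Assumes the `openai` package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical requirement used purely for illustration.
requirement = (
    "Users can reset their password via an emailed link that expires "
    "after 30 minutes and may be used only once."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a QA engineer. Draft concise test cases as a "
                    "numbered list covering happy paths, negative paths, "
                    "and boundary conditions."},
        {"role": "user", "content": f"Requirement: {requirement}"},
    ],
)

draft_cases = response.choices[0].message.content
print(draft_cases)  # a human reviews and refines these drafts
```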
Talk: Implementing Testing Centers of Excellence – Demystifying the Impossible
One of the core goals of testing and quality assurance is producing the best possible product, every time. Repeatable, reproducible success within teams is the best path to making this happen. There is a belief amongst executives that you can just make this happen by producing a simple blueprint map of how to do everything perfectly. We all know that it’s not that simple. You can’t just look up how to test for success in a textbook and cut and paste the instructions for your organization. Mike has a phrase for this type of process approach: “One size fits none.” A testing center of excellence needs to be custom-built for your organization, taking into account the uniqueness of your situation and needs. Which good practices are right for you, and how do they need to be tailored to be effective? What is needed is a solution that considers the Big Three – people, process, and technology – as they relate to your organization. In this presentation, Mike Hrycyk will lead you through an exercise of determining a path to implementing an effective, usable Testing Center of Excellence that will work for you and your teams.
Takeaways from this talk
- What questions do you ask when you’re figuring out what to implement?
- Tips for deciding which ‘good’ practices will work for your organization
- Which assets and artifacts will work best for you
- How do you help the rest of the org engage with and adopt your Test Center of Excellence?
Talk: M.A.C.H. Speed
Today, businesses and organizations are facing mounting pressure to compete in an ecosystem of accelerated innovation and competition. To meet this challenge, new architectures and technologies have emerged for building services that are composable and scalable in order to support digital transformations. The four key components of these composable, modular, and scalable services have been boiled down to the acronym MACH (Microservices, API-first, Cloud-native SaaS, and Headless).
This talk will cover the why, what, and how of MACH as it fits into the quality engineering paradigm, and how to effectively implement and grow a QA practice in a MACH-based environment.
Takeaways from this talk
- The WHAT and WHY of MACH-based architecture
- How to effectively implement a QA practice in a MACH environment (see the sketch after this list)
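As one concrete angle on that second takeaway, QA in a MACH environment leans heavily on exercising each headless service through its API contract rather than through a monolithic UI. Here is a minimal sketch, assuming pytest and requests; the endpoint, fields, and service are hypothetical.

```python
# Hedged sketch of API-first testing in a MACH-style stack: the services
# are headless, so the contract is exercised directly over HTTP.
# The base URL, payload shape, and fields below are hypothetical.
# Run with: pytest test_product_api.py
import requests

BASE_URL = "https://api.example-shop.com/v1"  # hypothetical service

def test_product_service_returns_price_and_stock():
    response = requests.get(f"{BASE_URL}/products/sku-123", timeout=5)
    assert response.status_code == 200

    body = response.json()
    # Contract checks: fields the headless frontends downstream rely on.
    assert isinstance(body["price"], (int, float))
    assert body["currency"] in {"USD", "CAD"}
    assert body["stock"] >= 0
```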
Talk: Mechanics of Data Testing
The road to AI-driven innovation starts with data readiness. In today’s world, data is at the heart of everything we do. But how can we ensure that this data is centralized, relevant, and of high quality? That’s where data testing comes in. Unlike traditional web testing, data testing requires unique approaches to verify accuracy and context. Join us for a discussion on effective data testing strategies and practical techniques to help you become more data aware.
Takeaways from this talk
- Master essential data testing terminology
- Understand effective data testing strategies
- Explore key verification points in a holistic data pipeline (see the sketch after this list)
- Discover the tools and technologies for data testing
- Identify the skills you need to develop
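To ground the verification-points takeaway, here is a minimal sketch of common data checks at a single point in a pipeline, assuming pandas; the dataset and column names are hypothetical, and real pipelines might express the same checks in a dedicated framework such as Great Expectations.

```python
# Hedged illustration of basic data testing: completeness, uniqueness, and
# range checks at one verification point. Data and columns are hypothetical.
import pandas as pd

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer_id": [10, 11, 11, 12],
    "amount": [25.0, 0.0, 99.9, 12.5],
})

# Completeness: key columns must not contain nulls.
assert orders["order_id"].notna().all(), "order_id contains nulls"
assert orders["customer_id"].notna().all(), "customer_id contains nulls"

# Uniqueness: order_id is the primary key.
assert orders["order_id"].is_unique, "duplicate order_id values"

# Range/validity: amounts must be non-negative.
bad = orders[orders["amount"] < 0]
assert bad.empty, f"negative amounts found:\n{bad}"

print("All data checks passed.")
```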
Talk: Beyond the Code: Mastering the Non-Technical Side of Performance Testing
Performance testing is often viewed as a purely technical activity, but its success hinges on more than just running tests. Many teams struggle with aligning testing efforts with business goals, collaborating effectively, and communicating results in a way that drives action. This disconnect can lead to missed opportunities for improvement and inefficient use of resources.
In this session, we’ll explore the non-technical side of performance testing, focusing on strategy, collaboration, and communication. You’ll learn how to align performance testing with business objectives, foster better teamwork, and report results effectively to make a real impact. This approach will help you turn performance data into actionable insights and drive meaningful improvements in your software.
Panel Discussion Speakers
Peter Budden
Peter is a leader in the Deloitte Quality Engineering practice with experience delivering projects ranging from complex systems integrations to startup tech. In current projects spanning advisory and delivery leadership, Peter brings lessons learned and realistic perspectives on which strategies and tactics are achieving impact for clients in 2024.
James Coles-Nash
I am a proven tech leader with a solid track record, built over nearly 20 years, of developing large infrastructure, releasing high-impact features, and building high-performing teams. I enjoy the thrill of new product development and the unique challenges inherent in improving large-scale existing efforts. I am equally comfortable getting my hands dirty as an individual contributor and building and leading teams.
Eugene Jiang
Resourceful and meticulous senior leader in software development and quality engineering, experienced across diverse environments and products, with an outstanding record of building, mentoring, and retaining strong teams, creating an open culture, cultivating innovation, improving performance, and increasing productivity.
Don Jackson
20+ years in application delivery management. Experienced in lifecycle management (Agile, iterative, and waterfall), functional automation, and performance engineering.