SPEAKERS

BHAUMIK SHROFF - Sr QA - Test Automation Architect, OTC Global Holdings

AI INNOVATE TEST AUTOMATION FOR THE FUTURE

In today’s era, more and more organizations are making the shift from manual to automated testing and incorporating testing earlier in the software development lifecycle. Automation testing plays a key role in continuous delivery. The role of testing within the SDLC has undergone an evolution over time. In the shift from Test-Driven Development (TDD) to the Continuous Testing (CT) paradigm, test teams face substantial challenges in maintaining a stable test environment, developing test automation, test execution and orchestration, and test report accuracy. The presentation will emphasize how to use Artificial Intelligence (AI) to overcome these challenges. Machine Learning (ML) is the core of AI; ML algorithms identify patterns in large volumes of complex information to predict future trends. AI is going to take testing to the next level by stretching the limits of testing in many ways, such as visual validation, API testing, risk analysis of test coverage, and automatically creating smart self-healing scripts using spidering. With this shift, testing teams need to know not only how to automate, but also how to analyze and understand complex data structures, statistics, and machine learning algorithms.

DR. DILHAN MANAWADU - Director, Head of Agile Delivery, Technology Quality Engineering, and BT Studio, Sysco Corporation

INTEGRATING AUTOMATED TESTING INTO DEVOPS AND AGILE

In the present environment, to succeed in the age of software and digital, enterprises need to mature their software delivery capability, enable business agility, and become continuous learning organizations. To enable these capabilities, enterprises need to start by becoming Agile and adopting DevOps practices to trigger the shift. Automated testing is a technique that enables a DevOps culture in an enterprise and helps software developers create a safety net for the experiments through which they learn about their products. Adopting such techniques makes teams more agile and obtains feedback on produced software sooner rather than later. Further, automated testing enables us to generate better-quality software products and provides engineering productivity for the whole team. In this presentation, the speaker will define the various types of automated testing approaches that an enterprise could adopt. He will then demonstrate how automated testing integrates into a DevOps mindset, culture, and CI/CD pipeline and benefits a software development team in becoming more agile. Finally, he will provide pragmatic recommendations an enterprise could consider in adopting and championing these concepts in daily execution, covering tools, processes, people, and governance.

LEE BARNES - CTO, Utopia Solutions

CONTINUOUS PERFORMANCE TESTING IN DEVOPS

Performance testing isn’t the first thing organizations think about when moving to DevOps – in fact, it’s often ignored because traditional approaches don’t jibe well with the fast and nimble world of DevOps. However, performance is still a critical part of the user experience, and poor performance and outages will quickly negate the value of the features you’re delivering. Lee believes that organizations don’t have to gamble with application performance. He will discuss techniques for implementing continuous performance testing in your pipeline, so you don’t have to roll the dice on your user experience. Topics will include performance testing activities at each stage of the pipeline – from the unit level through testing in production. You will also learn how to rethink your approach to performance testing and work with your DevOps practices instead of against them in areas like test environment and data management. You’ll walk away with a new outlook on performance testing in DevOps and ideas you can begin to implement in your pipeline.

LAGAN KHARE - Manager Quality Engineering, Elsevier

INTEGRATING AUTOMATED TESTING INTO DEVOPS AND AGILE

Elsevier builds healthcare education and clinical solutions-based products. Our products are used daily by medical students, nurses, clinicians, practitioners, and healthcare professionals who rely on the latest, up-to-the-moment information. To deliver new features and content to our customers faster, we wanted to steer away from traditional monthly releases and be able to release on demand. The bottleneck was running more than 3,000 automated regression scripts in a short time. We condensed a 5-hour regression run into 30 minutes by breaking the big release-on-demand challenge into smaller problems and addressing each one separately: moving to current DevOps best practices such as running our automation infrastructure on Docker and Kubernetes, treating infrastructure and configuration as code, and managing environments so that we can scale our tests and get fast feedback to developers to produce fixes faster (faster fixes mean quicker delivery of value to customers); integrating automation test results into JIRA using the ZAPI APIs; improving the pass percentage to avoid tedious manual verification of script failures; and implementing risk-based, separate UI and API test coverage for releases by categorizing and prioritizing tests.

TREVOR CHANDLER - Private Researcher, Artificially Intelligent

USING ARTIFICIAL INTELLIGENCE WITH SELENIUM TO GENERATE AUTOMATION SCRIPTS

We will be looking at Artificial Intelligence in the context of how we can use it in QA to achieve our next set of advances in the global world of technology. With the capabilities currently achieved in Artificial Intelligence, its use in a software quality setting is above and beyond anything we've been able to do in the past. We are going to look at a real strategy for using AI as one of the tools in QA, and talk about other evolutions we stand on the edge of. In fact, the future for QA, once it adopts the newer capabilities of AI and other related technologies, will be a driving force that no longer lives just on the side of quality. Instead, we will become world leaders in our roles, which will start to include solving problems, not just dealing with quality and efficiency. Learn why and how this will happen, and how you can help advance us to our destiny as a force charging forward into problem-solving and efficiency for all aspects of software and hardware in the global world of technology.

DINESH RAISINGHANI - Principal Consultant, Capco

IS YOUR AUTOMATION INTELLIGENT ENOUGH TO ADDRESS UAT REQUIREMENTS?

Even though test automation has been around for many years now, the adoption rate is still on the lower end of the spectrum. That’s primarily because whenever firms discuss test automation, the focus is mainly on test execution and its well-known pitfalls: high up-front cost, reduced ROI, maintenance issues, and so on. Hence the business doesn’t see the value in investing in automation. But with technology advancements, test automation needs to be approached with a different mindset. I will discuss ways to successfully automate UAT with the help of model-based testing and how businesses can derive maximum value from it. I will also talk about how Robotic Process Automation (RPA) can be used to automate UAT, especially for RTB (Run The Bank) applications. Finally, we will highlight risks that firms should be aware of and ways to mitigate them.

DUSHYANT ACHARYA - Sr. Manager, QA, Directly Inc

TEST AUTOMATION FOR AI/ML APPLICATION

Machine Learning (ML) is already changing applications as we know them. In just a few years, we have already seen it used in many of the products we use daily. We also have plenty of research and articles claiming, with good reason, that in the next few years ‘most applications will have at least one AI/ML module’. The technologist in me is excited about this future; at the same time, the test engineer in me was curious and asking a lot of questions when I was introduced to AI/ML. What exactly is an ML application or module? How is it different from traditional applications? Is it true that an ML application is always changing and improving? What does this mean for quality? How do I automate something with unpredictable behavior? What does this mean for overall product quality for the end user? Thankfully, we have a few answers now. At a high level, ML applications can be seen as multiple building blocks or components: training data, the algorithm, test data, pipeline infrastructure, a feedback system, and retraining. The algorithm is the core ML part of the application. Training data is the data we feed into the algorithm to generate intents or output that will be consumed further down the pipeline. Test data is the actual data against which the algorithm will be used. The pipeline is the data/flow infrastructure, including scripts, data storage/data lake, and the actual applications. The feedback system is where we get qualitative feedback on the algorithm’s output. And finally, retraining means updating the core algorithm based on this feedback. As you can see, each tier or component has a specific role, and we can now talk effectively about quality checks and automation for each of them.
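To make the component view concrete, here is a minimal sketch of how quality gates might be attached to two of those building blocks: the training data and the algorithm. It assumes a scikit-learn-style model; the data, metric, and threshold are illustrative placeholders, not anything from the speaker's pipeline.

```python
# Minimal sketch of component-level quality gates for an ML pipeline, assuming a
# scikit-learn-style model. Data, metric, and threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def check_training_data(X, y):
    """Gate for the training-data component: no missing values, one label per row."""
    assert not np.isnan(X).any(), "training features contain NaNs"
    assert len(X) == len(y), "feature/label row counts differ"

def check_model_quality(model, X_eval, y_eval, threshold=0.85):
    """Gate for the algorithm component: accuracy on evaluation data must not fall
    below an agreed threshold (the metric itself is a per-project decision)."""
    score = accuracy_score(y_eval, model.predict(X_eval))
    assert score >= threshold, f"accuracy {score:.2f} below threshold {threshold}"
    return score

# Toy usage: generate data with a known rule, train once, then run both gates as
# automated checks before retraining or promoting the model.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
check_training_data(X, y)
model = LogisticRegression().fit(X, y)
print("accuracy:", check_model_quality(model, X, y))
```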

ANASTASIOS DASKALOPOULOS - Quality Assurance Specialist, Unleashed Technologies

THE RIGHT TIME AND PLACE FOR EFFECTIVE TEST AUTOMATION

Everyone here will agree that test automation has an important place in software testing, but how carefully do we really think about when test automation should take place and where it fits in the Quality Assurance process? Test automation is very important, of course, but it becomes a force multiplier when it is done in the right place and at the right time. Timing is a crucial factor for effective test automation: when done too early, test automation simply automates bugs. Too often, I have seen the test automation process begin too early, before effective manual test case creation and exploratory testing have eliminated the easy-to-find bugs. The inevitable result is that bugs get automated and routinely get passed off as regular, functional aspects of the system. Test automation itself cannot find the bugs; a good tester must find bugs first and then write detailed tests that will report bugs that contravene both design and business requirements. However, when done too late, test automation does not provide enough benefit because there is not enough time to write and execute many good tests.

ZAINAB UMAR - Test Engineering Lead, ExxonMobil

EFFECTIVE TEST AUTOMATION STRATEGIES AND THE ROLE OF "TOOLS" IN IT

Automation is a key player in delivering quality software in today’s fast-paced software development environment. A decision to kick off automation in a software development team usually starts with the choice of a tool, and eventually the business process starts to fit the tool instead of vice versa. Join me to learn my perspective on a successful automation journey that starts with robust automation frameworks, automation considerations, and maintainable design, where tools are only one element in the overall arena. Effective automation development strategies incorporate risk-mitigation principles from the onset. The design starts by focusing on what truly constitutes the regression set and what is highly critical for a business function. Manual testing costs are compared with those of automation development and maintenance and, at the same time, teams need to prioritize what and where manual testing should focus on: could there be some easy wins and low-hanging fruit? The development stage should take comparatively the least time, since this is the “tool” part. That does not negate the need for effective and maintainable development principles such as reusable coding elements, effective and realistic wait times, and an easy test data update approach. The stage of automation I am usually most concerned with and focused on is maintenance and support. If the automation is reused iteratively and at the lowest cost, then the goals of automation are achieved; otherwise, automation just becomes a burden and eventually loses all its ROI.

NIKO MANGAHAS - Director Enterprise Quality, RCG Global Services

ROAD TO HYPER AUTOMATION

It is a tremendous challenge to find a good answer to how much testing we need. Are we not doing enough testing, or are we doing too much? How do we justify the time and effort that goes into testing? The answer lies in understanding the "Total Cost of Quality": using metrics and data to understand how much is being spent on quality activities and comparing that to how much value those activities are presently, or could potentially be, generating across various aspects and areas.

PAUL HERZOG - Principal Consultant, West Monroe Partners

SIMPLICITY: THE PATH TO ACHIEVING AGILE TESTING EFFICIENCY

The pace of an Agile project demands efficiency in all testing techniques and processes. Traditional approaches run counter to this efficiency and must be modified, or outright dismissed, by a tester on an Agile team. Join Paul as he applies Agile Manifesto Principle #10, “Simplicity – the art of maximizing the amount of work not done – is essential”, to distinguish which past testing practices work and which don’t, and to clean out the clutter of the non-working. He will identify common process traps when adopting Agile methodologies, suggest the priorities that matter most to Agile testing, and present ways to remove testing inefficiencies. Determine how to include process evaluation in regular Sprint Retrospectives or other checkpoints and implement continuous improvement focused on simplicity.

GLEN ACCARDO - HPTC Test Automation Lead, Schlumberger

IT'S NOT RANDOM. HOW TO DEAL WITH INTERMITTENT AUTOMATED TEST FAILURES

We've all been there: tests fail for no obvious reason. The team says that it is random or that it cannot be reproduced and the related bugs get closed with no fix. Over time, the failures continue, sometimes mutating and sometimes multiplying. While not prioritized by developers, these intermittent failures are often felt acutely while testing and will ultimately be painful for customers. These bugs are fixable if you take the correct approach.
These are the techniques I will demonstrate to overcome this problem:
- How to change the language used to report and discuss bugs.
- How to refocus the testing effort to tease out details of the failures.
- How to apply visualization techniques to clearly demonstrate the impact of the issue and possible root causes.

AAN CHIEN TAN - Lead Quality Engineer, Vivid Seats LLC

THE FINAL FRONTIER: HUMANIZING SELENIUM USING MACHINE LEARNING

Machine Learning is revolutionizing all aspects of engineering, including quality. By coupling Selenium with machine learning, we are able to open many new frontiers within UI automation. Imagine an application that contains legacy code and many third-party integrations. Also, imagine an application that undergoes heavy UI A/B testing. These are all barriers to stable and reliable tests.
This presentation will show how to use image processing and detection to improve test automation stability for complex applications with legacy and third-party integrations. The presentation will also touch on the basics of machine learning (data collection, model training, and model evaluation) and how we integrate a machine learning model into a Selenium UI automation framework. The goal is to humanize the Selenium framework: “able to distinguish between a QR code and a barcode” or “able to continue clicking on a submit button with different names and designs”.
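As a rough illustration of the image-detection idea (not the speaker's framework), the following sketch locates a button by template matching against a stored screenshot and clicks whatever element sits at the matched coordinates. It assumes opencv-python and selenium are installed; the URL, reference image, and match threshold are placeholders.

```python
# Sketch: find a UI element by image (OpenCV template matching) and click it via
# Selenium when no stable DOM locator exists. Assumes a device pixel ratio of 1 so
# screenshot pixels map to viewport coordinates; URL and template are placeholders.
import cv2
import numpy as np
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/checkout")  # placeholder URL

# Decode the viewport screenshot and search it for the stored button image.
screenshot = cv2.imdecode(np.frombuffer(driver.get_screenshot_as_png(), np.uint8),
                          cv2.IMREAD_COLOR)
template = cv2.imread("submit_button.png")  # placeholder reference image
result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)
_, confidence, _, top_left = cv2.minMaxLoc(result)

if confidence > 0.8:  # match threshold is a tuning decision
    h, w = template.shape[:2]
    x, y = top_left[0] + w // 2, top_left[1] + h // 2
    # Click whichever element sits at the centre of the matched region.
    driver.execute_script(
        "document.elementFromPoint(arguments[0], arguments[1]).click();", x, y)
else:
    raise AssertionError(f"button image not found (best match {confidence:.2f})")
driver.quit()
```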

WES MALLETT - Software Engineering Manager, VMWare

QA...WHAT ARE YOU WAITING FOR?

I have often been asked by QA engineers, what is the best thing for QA to do on the first day of a new sprint? Common answers include finishing up test automation from the last sprint or reviewing requirements. Unfortunately, those answers don’t address the root concern: “what should I do while I wait for development to finish writing code?” I build QA teams that don’t wait, because there is no reason to wait. Testing in parallel as development writes code creates shorter feedback loops and a better understanding of the feature, builds greater trust, creates an opportunity to make the code more testable, and strengthens the team. I will discuss this process and how QA engineers can work better with developers to test early and test often and achieve these results, so that when a developer finishes writing code, you already know it works.

RICHARD KNASTER - SAFe® Fellow and Principal Consultant, Scaled Agile, Inc.

ROLE OF QA IN SCALED AGILE FRAMEWORK

As one of our product teams started modernizing a monolithic application into a microservices architecture, we realized that the automation test suite needed improvement. What we had was too lengthy and unstable to allow multiple deployments a day. Our goal was to be able to run the automation suite fast enough to support multiple daily deliveries and to require no manual intervention unless a defect was found. We needed to do this without throwing more resources at it and creating waste.

SAI NAVEEN LINGAM - Program Manager, Dish Network

WORLD CLASS ENGINEERING FUTURE OF TESTING IN CROSS FUNCTIONAL TEAMS/ CUSTOMER JOURNEY TEAMS

With changes in the way we deliver value to customers, faster and more frequently, the scope of testing has changed dramatically. The old ways of testing are no longer an option. In this digital era, frequent deliveries require more automation to ensure faster release cycles without sacrificing application quality.

PAUL GROSSMAN - Tech Lead SDET, Independent Consultant

SECRETS OF TEST AUTOMATION: ON THE FLY LOCATORS WITH THE MAGIC OBJECT MODEL

Maintenance is the biggest challenge every automation architect must learn to manage on a daily basis. Small updates to a few XPath or CSS locator properties are just part of the job. Now imagine this scenario: the latest application release has a new underlying architecture, causing 50 known Link elements to change to a Button class. Each element had at least four code references distributed across more than 100 test cases. Only the login test is working!
The client is accustomed to getting regression test results in two hours. Historical maintenance has averaged just three element modifications per release. How could we tell them that resolving 400 element references this round would take about two days?
Taking a chance, we implemented Dynamic Class Switching by extending the .Click method for every Link element: if the Link does not exist, we create a parameterized Button element description on the fly in code. Within 90 minutes, only three test cases were failing, and those because of a newly detected defect.
Then we asked ourselves: how many of our repository elements could we identify programmatically? And could we also increase our execution speed? Thus starts the journey into the new Magic Object Model design pattern. If we could do that in VBScript, what other programming languages and tools could benefit? Selenium and Java? Pylenium? Cypress? See how any functional testing framework can lower maintenance time and increase execution speed with the Magic Object Model.
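The speaker's implementation is in VBScript, but the core idea translates to any binding. Below is a minimal Python/Selenium analogue of dynamic class switching: a click helper that tries the registered Link locator first and, if it is gone, builds a parameterized Button locator on the fly. The locators and element names are illustrative.

```python
# Minimal Python/Selenium analogue of dynamic class switching; locators are illustrative.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def click(driver, name):
    """Try the known Link locator first; if the element class changed, fall back to a
    parameterized Button locator built on the fly instead of editing every test."""
    candidates = [
        (By.XPATH, f"//a[normalize-space()='{name}']"),      # original Link element
        (By.XPATH, f"//button[normalize-space()='{name}']"),  # on-the-fly Button fallback
    ]
    for by, locator in candidates:
        try:
            driver.find_element(by, locator).click()
            return
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no Link or Button found for '{name}'")

# Usage: tests keep calling click(driver, "Submit Order") unchanged, even after the
# application switches that element from a Link to a Button class.
```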

TOM SPIRITO - Director of Quality Assurance, Finastra, Financial Software Solutions

CASE STUDY: TECHNIQUES FOR DELIVERING HIGH QUALITY FINANCIAL SOFTWARE IN THE CLOUD

In this session, we will examine how a team of testers has delivered financial software with no critical or high-severity customer-visible defects over several releases. We will review their keys to success, the implementation of a formalized quality checkpoint process, and the outstanding test results that followed.
This session is geared towards both Quality Assurance managers and testers, who can benefit from applying the techniques described to enhance their own testing process. The focus of the session will be reducing the cost associated with fixing bugs by using a "shift-left" mentality.

GEORGE HAMBLEN - Sr QA Architect, Quality Consultant

SYNTHETIC TEST DATA

Test data has become the most critical part of the testing process. Surveys have shown that up to 40% of a project’s time is spent on test data-related issues. Early on, data was created specifically for testing purposes. As systems became more complex, the data needed to maintain referential integrity for end-to-end purposes, so the industry shifted from creating data to taking production data and masking it for testing; synthetic data was put on the shelf. Now, breakthroughs in synthetic data have solved the problem of keeping referential integrity. Synthetic data also has the advantage of never touching production data, which helps protect your customers’ privacy. It’s time to give synthetic data another look. In this talk, I’ll cover the history of test data, highlight the advantages of a strong test data management process, and show how synthetic data is changing the testing game.
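As a toy illustration of the referential-integrity point (the schema and field names are invented, standard library only), the sketch below generates synthetic customers and orders whose foreign keys always resolve, without touching any production data.

```python
# Toy illustration of synthetic test data that preserves referential integrity:
# every order references a customer_id that actually exists, and no production
# data is involved. Names and schema are invented for the example.
import random
import uuid

first = ["Ada", "Grace", "Alan", "Edsger", "Barbara"]
last = ["Lovelace", "Hopper", "Turing", "Dijkstra", "Liskov"]

customers = [
    {"customer_id": str(uuid.uuid4()),
     "name": f"{random.choice(first)} {random.choice(last)}",
     "email": f"user{i}@example.test"}
    for i in range(5)
]

orders = [
    {"order_id": str(uuid.uuid4()),
     # foreign key drawn only from the generated customers, so joins always resolve
     "customer_id": random.choice(customers)["customer_id"],
     "amount": round(random.uniform(5, 500), 2)}
    for _ in range(20)
]

# Referential-integrity check: every order points at a real synthetic customer.
assert {o["customer_id"] for o in orders} <= {c["customer_id"] for c in customers}
```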

ANAND SAHU - VP, QA Business Development and Customer Success, Cigniti Technologies, Inc

SYNTHETIC TEST DATA

Test data has become the most critical part of the testing process. Surveys have shown that up to 40% of a project’s time is spent on test data-related issues. Early on, data was created specifically for testing purposes. As systems became more complex, the data needed to maintain referential integrity for end-to-end purposes, so the industry shifted from creating data to taking production data and masking it for testing; synthetic data was put on the shelf. Now, breakthroughs in synthetic data have solved the problem of keeping referential integrity. Synthetic data also has the advantage of never touching production data, which helps protect your customers’ privacy. It’s time to give synthetic data another look. In this talk, I’ll cover the history of test data, highlight the advantages of a strong test data management process, and show how synthetic data is changing the testing game.

GANESH KUMAR ESWARAN - Sr. QA Analyst, CDW

GETTING STARTED WITH JEST/PUPPETEER UI AUTOMATION FRAMEWORK

An introduction to Jest/Puppeteer (by Google and Facebook), free open-source tools for UI automation. We will show how to create a simple script and then discuss advanced topics such as screenshot testing and visual comparisons.
Attendees will take away a basic knowledge of Jest/Puppeteer for someone who has not used them before, how to create a simple script and get started, an understanding of what other capabilities the tools offer, and the pros and cons compared to professional tools such as UFT.

ARINDAM KARMAKAR - Leader @ DevOps, Discover Financial Services

TEST DATA AND CI/CD PIPELINE

AJAY CHANKRAMATH - Principal Technologist, ThoughtWorks

OBSERVABILITY DRIVEN DESIGN PARADIGMS

The journey from reactive monitoring to proactive observability has been one of the key aspects of the DevOps transformations organizations have been going through. This talk will cover the relevance of automation in this context:
- Understand how ODD works in conjunction with TDD
- The role of Product Owners in driving observability tests
- How to instrument your tests, not just your code, with events and traces (a minimal sketch follows this list)
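As a small, assumed example of instrumenting a test itself, the sketch below wraps a test in an OpenTelemetry span and records an event. It requires the opentelemetry-sdk package; the test name, attributes, and console exporter are illustrative choices rather than prescriptions from the talk.

```python
# Minimal sketch of instrumenting a test (not just application code) with
# OpenTelemetry spans and events. Assumes the opentelemetry-sdk package; names
# and attributes are illustrative.
import time
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console so the example is self-contained; a real pipeline
# would point this at its observability backend instead.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
tracer = trace.get_tracer("regression-suite")

def test_checkout_happy_path():
    # The test emits its own span, so slow steps and failures show up in the same
    # traces that the observability checks look at.
    with tracer.start_as_current_span("test_checkout_happy_path") as span:
        span.set_attribute("test.suite", "checkout-regression")
        start = time.monotonic()
        # ... drive the application under test here ...
        span.add_event("order submitted", {"elapsed_s": time.monotonic() - start})

test_checkout_happy_path()
```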

SHREED PANT - Finance Program Test and Release Manager, HP Inc

TESTING ENABLES BI TO DELIVER HIGH-QUALITY ENTERPRISE SOLUTIONS

Test data: setting up the right data
Test Automation and seamless Integration with enterprise tools
Regression Testing
Functional Testing
Security testing
Performance testing
Reporting, Analytics, and AI

MASA MAEDA - Agile Coach, UST-Global

QUALITY-DRIVEN TESTING

In the realm of software testing, quality and testing are considered by many professionals to be synonymous. Proof of that is the fact that what most “QA” organizations do, exclusively or primarily, is testing. That is wrong: driving a car is less about pressing pedals and steering than it is about awareness of the surroundings and decision-making. This talk is about an ecosystem perspective on quality and how applying that perspective significantly increases your test automation ROI.

TONY LAU - Staff Engineer, USAF

TRANSITION TO TEST AUTOMATION IN SECURED ENVIRONMENT

The release of the National Defense Authorization Act (NDAA) each year pushes programs in the DoD toward adopting agile software development methods. Test automation plays a key role in completing the DevOps pipeline, and the nature of our work requires that security be a major consideration. How do all these moving pieces fit together?

SENYO AFFRAM - Founder & Consultant, Fofx Solutions Inc.

NOCODE TEST AUTOMATION

Test automation has a lot of potential benefits. However, some of the technical challenges of designing and implementing a test automation solution could render the effort totally futile. The ease of test creation, the effectiveness of tests and the cost of maintenance should be taken into consideration when designing a test automation framework.
Most organizations write too many lines of code to automate anything. The test automation scripts in most cases take significant time to develop and are usually difficult to maintain. Automated test cases are not always easy to read and understand, and some automation solutions are change-intolerant. Developers write code that needs to be tested, and QAs write more code to test the developers’ code. This coding-affinity approach to automation yields few dividends, since the test framework developed is often not very stable.
The “no-code test automation” concept encourages the design of a test automation framework that requires little or no coding effort, using Robot Framework. A loosely coupled design with various levels of abstraction is explored. Effective automated tests are created in no time. Tests are classified and grouped meaningfully. The results are easy to understand. Test cases are more expressive and easier to maintain. CI/CD integration is straightforward. The test framework is robust and adapts to changes quickly.

RANDY WOODING - Senior Project Manager, University of Pennsylvania

SERVICE VIRTUALIZATION TESTING: THE CHALLENGES YOU NEED TO OVERCOME.

Service virtualization is becoming an attractive option for IT directors, project managers, and QA managers to reduce cost and increase efficiency in their development efforts. In this rapid and constantly changing environment, this session will provide strategies for overcoming the challenges of service virtualization, show how to establish a strong business case for adopting it in your testing paradigm, and help you avoid pitfalls in using this incredible option in your organization!

TOMMY ADAMS - Systems Assurance Tools and Automation Architect, IBM

SMARTER DEFECT CREATION THROUGH MACHINE LEARNING

In a typical QA cycle, hundreds or thousands of issues can be opened. Most of these are new and valid issues that need to be debugged, worked, and coded into a solution. Others, however, drain test and development time and resources only to be found as a duplicate of another issue, a known bug with an existing workaround, or as already fixed in the latest version. Using a combination of Python, Machine Learning, Natural Language Processing, and analytics, my team developed a tool that helps validate new issues by comparing human readable input against the entire data set of historical issues. The solution is data source agnostic and accessed via cloud micro-services, making it easy to re-use and re-deploy across a variety of teams and organizations.
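The team's tool itself is not shown here, but the underlying comparison can be sketched with a generic technique: TF-IDF vectors plus cosine similarity over historical issue text. The sample issues, threshold, and choice of scikit-learn below are illustrative assumptions, not the speaker's implementation.

```python
# Sketch of the general technique: flag a newly reported issue as a likely duplicate
# by comparing its text against historical issues using TF-IDF and cosine similarity.
# Data and threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

historical_issues = [
    "Login page throws 500 error when password contains unicode characters",
    "Report export to CSV truncates rows beyond 10,000",
    "Dashboard widgets fail to load behind corporate proxy",
]
new_issue = "500 error on login when the password has unicode symbols"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(historical_issues + [new_issue])

# Similarity of the new issue (last row) against every historical issue.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best = scores.argmax()
if scores[best] > 0.3:  # threshold is a tuning decision
    print(f"Possible duplicate of: '{historical_issues[best]}' (score {scores[best]:.2f})")
else:
    print("Looks like a new issue")
```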

YINKA ADESANYA - IT Senior Performance Architect/Consultant, IBM

BRINGING VALUE TO AN ORGANIZATION THROUGH PERFORMANCE TESTING

As testing and QA touch every aspect of IT, performance is vital and needs to be addressed early, at the beginning of the project.

EVAN NIEDOJADLO - Site Reliability Engineer at Peddle, Co-organizer Austin Automation Professionals meetup

TESTING IN SITE RELIABILITY ENGINEERING: APPLYING A TESTING MINDSET TO OPERATIONS PROBLEMS

Have you ever accidentally deleted a production instance or set up a cluster of servers only to find that it wasn't highly available? Have you created a production Kubernetes cluster only to find that it wasn't secure by default?
Let's spend some time together and dive into these scenarios where a test-first mindset can help you avoid these horrors in your operations journey. You'll get to hear some spooky stories from my personal experience in dealing with chaos and what I learned from it.

PRIYANKA SHARMA - Director, QA, Davita

COVERAGE: VOLUME TO VALUE BASED SHIFT

Our belief is that test coverage, manual or automated, helps measure the quality of test cycles by providing a defined, quantitative assurance.
That assurance, being quantitative, sometimes gives us the false hope that high volume means a better quality promise. Another metric is Percent of Automated Coverage of Manual Tests, which determines how much test coverage automated testing is actually achieving. The common focus of this metric is volume, not how much of the product’s functionality and processes is covered.
Being in this fast-paced agile, or ‘customized’ agile, world, we need value-based coverage, which gives us a cutting edge in these construction cycles.
Value-based coverage puts the emphasis of automation on coverage of the actual product > module > processes/functional flows rather than just the quantity of manual test cases.

JON ROBINSON - Sr Director, Customer Experience / Head of North America, Provar Testing

AUTOMATING END-TO-END TESTING ACROSS MULTIPLE CLOUD APPLICATIONS

Building a good automation suite for your application can be hard in and of itself, but what if that application is just one of many that sit in the cloud? Whether you already have experience with building test cases across multiple platforms or are looking for some insight on how to get started, this presentation will look at some of the key aspects you need to consider, the common pitfalls you will likely encounter, and some best practices to ensure that you are getting the best possible value out of your time and resources.

SUMATHI CHINNASAMY - QA Automation Manager, MST Solutions

TESTING TODAY’S APPLICATION – TEST AUTOMATION TOOLS YOU CAN USE

Automation tools such as Selenium, Java, Protractor, TypeScript, Cucumber, Appium, Android Studio, Karate ...
Participants will get to know about automation testing for Salesforce, AngularJS and other web-based applications, API testing, and mobile testing.

JARED MEREDITH - Executive Director, Architecture, Strategy & Governance, TeamHealth

THE POWER OF CROSS-PLATFORM AUTOMATED WEB-BASED TESTING IN CICD PIPELINES

With 30+ public websites that needed testing across multiple platforms and devices, we had to evolve our QA practices beyond manually run scripts on local computers and personally owned devices. Through this innovation, our IT teams were able to deliver a runbook that automates QA testing across multiple devices and browsers while invoking it in an automated CI/CD pipeline. Learn more about the power of this automation and the value it can bring to your IT organization.

PRIYA ZAVERI - Automation Tech Resource Lead, Kelly

IS YOUR ORGANIZATION READY FOR DIGITAL TRANSFORMATION?

Digital disruption is inevitable in today's IT landscape. However, is your organization ready to adapt to change quickly and flexibly? This session will cover some important aspects of leading the transformation effectively: What are the key digital technologies (automation, advanced analytics, artificial intelligence, machine learning)? What is the scope and maturity of digital transformation efforts? How can organizations navigate the challenges and complexities? What is required to deliver the transformation? How can organizations deliver value at scale?

MELISSA MONTANEZ - Software Development Engineer in Test, Docusign

6 REASONS TO START TEST AUTOMATION WITH ROBOTFRAMEWORK

In this session, I will present Robot Framework and several of its libraries, and explain why it's a great tool for getting started with test automation for both new and experienced QA engineers.

PRASANTH MALLA - Principal Consultant, Atos Syntel

AUTOMATION ENGINEERING

How automation engineering differs from automation scripting, drawing parallels between application development and automation.
This session helps automation teams build automation suites with high resilience, addressing the maintenance overhead that comes with incremental development of the application under test, ongoing changes, and more.

RAMESH BOMMARAJU - Test Automation Architect, Qentelli Inc

TEST AUTOMATION AFTER THE HONEYMOON: LESSONS LEARNED FROM THE TRENCHES

Test automation has moved beyond hype and differentiation – it’s now hygiene, especially in the age of DevOps and HyperAgile. However, ask any seasoned QA professional and they can tell you many stories of automation project failures that we can relate and sympathize with: from not defining the right objectives and setting expectations too high, to selecting the wrong tools or frameworks, the points of failure are many.

Ramesh Bommaraju, a Test Architect with Qentelli, has extensive experience in delivering “eventually successful” engagements at multiple enterprises ranging from airlines to fast food restaurant chains.
In this presentation, he will share some valuable lessons learned from his battles in the trenches which can help prevent some of those failures. History should not repeat itself!

ANGELICA BETSY - Lead Technical Consultant, Maximus

HOW EFFECTIVE IS YOUR TEST AUTOMATION SCRIPTING?

Presenting solutions to aid the evaluation of your automation suite's effectiveness.
Insights into the following pointers can help you gauge the effectiveness of your automation test suite:
>Foundations of establishing an automation framework
>Selecting which architecture best suits your framework
>Building a comprehensive page object model
>Scripting your tests - best practices
>Report Generation