Culture of QA

One of the key insights the survey reveals is the undeniable impact of organizational culture on the quality assurance (QA) process.

Resource Allocation

Quality of digital experience is increasingly important in today’s digital-first economy. This is evident in the significant resources more and more organizations are devoting to quality assurance of their digital experience.

Percentage of Development Budget Allocated to Testing

  • 40% of large-scale companies spend more than 25% of their development budget on testing, and nearly 10% of enterprises spend more than 50%. This showcases how important quality is for all organizations.
  • The interesting finding, however, is that about 26.2% of mid-scale organizations do not know how much budget is allocated to their testing needs. This may reflect a lack of interest in budget allocation, a lack of transparency within the organization, or a process issue. All three are important to address.


“AI-powered code generation is a powerful ally for testers, enhancing efficiency and allowing us to focus on strategic aspects of testing rather than mundane tasks.”

Senior Test Engineer

Ratio of Testers and Developers in a Project

  • The majority of organizations, especially medium and large ones (58.30% and 55.20%, respectively), report having 1-3 QA Engineers per 10 developers. This indicates a standard industry practice of maintaining a moderate number of QAs in proportion to developers.
  • Smaller organizations tend to have a higher ratio of developers to QAs, with 25.60% reporting less than 1 QA per 10 developers. This is consistent with the resource constraints or differing operational scales of smaller organizations, where roles tend to be generalist rather than specialist.

DevOps Team Allocation for Testing Infrastructure 

  • 79% of respondents say they have a team of up to 5 DevOps/Infrastructure members to set up and maintain test infrastructure. This reflects the effort organizations of all sizes invest in maintaining a stable testing infrastructure.
  • 11% of large organizations have dedicated 10+ DevOps/Infrastructure team members to set up and maintain testing infrastructure due to complex and multi-environment set-up for their testing needs. The consideration of cost versus resources plays a pivotal role in this strategic decision-making.

How many DevOps/infrastructure team members are allocated to set up and maintain testing infrastructure?

Time Spent by Testers 

Time Spent on Test Activities 

  • The survey shows that teams spend an inordinate amount of time on test execution monitoring, even more than on test authoring. In addition, more than 10% of their time goes to test infrastructure management and maintenance. Both of these challenges can be mitigated through the right tooling, such as LambdaTest, which helps cut down test execution times (thereby reducing test monitoring requirements) and eliminates time spent on maintaining and scaling test infrastructure.
  • Teams are struggling with flaky tests even at the enterprise level, where they spend more than 8% of their time fixing flaky tests. This is where AI-based tooling like LambdaTest’s flaky test detection can help out and save valuable time.

Culture of Testing

Test Engineers involved in Sprint Planning

  • 70.5% of the organizations actively involve testers in every sprint planning. In large enterprises, this is even higher with 74.4% of organizations involving testers in every sprint. This high percentage is a good sign indicating the importance and need for quality and quality assurance processes, especially among enterprises.
  • However, there are still around 7% of organizations where testers are never involved in sprint planning. This number is higher in small enterprises, with 10.4% of organizations not involving QA teams in sprint planning.

How often are testers in your organization involved in sprint planning?

Contribution to Automation Tests

  • For as long as automation testing has been around, there have been debates on who should write automation tests: developers or dedicated automation testers. Some time back there were even calls to make developers the sole authors of automation tests. However, the data tells a different story. In most companies, small and large alike, tests are written either by dedicated SDETs (39.3% of organizations) or through collaboration between developers and testers (38.6% of organizations).
  • In 13% of smaller organizations, developers are solely responsible for writing automation tests. We also saw that smaller organizations tend to have fewer testers per 10 developers. Combining these insights, it becomes apparent that in smaller organizations developers wear multiple hats, favoring generalist roles over specialist ones.

Release Cycles

Release Frequency

  • Our previous data shows that over 88% of organizations have adopted CI/CD tools. This has enabled them to release fast, with more than 20% of organizations releasing every day and 40% releasing weekly. This is especially true for small and medium companies, which naturally have more agile teams.
  • Large-scale enterprises, on the other hand, are still slow despite higher CI/CD adoption, with nearly half of them releasing monthly or quarterly. While release cycles vary greatly from product to product, long release cycles are indicative of a large turnaround time for fixing bugs.

Test Case Execution

  • Even though we see a lot of claims about cloud adoption in testing, about 48% of organizations still prefer local machines or self-hosted in-house grids to execute test automation. This leads to challenges such as high flakiness, scalability issues, and considerable time spent on test infrastructure maintenance.

State of Testing

State of Test Infrastructure

Multiple Framework Strategy
  • There have been many debates and dissertations on which framework to choose for test automation, or why some frameworks are better than others. However, the data shows that most organizations do not restrict themselves to a single framework. Choosing the right framework depends on many factors, and it is not necessary to stick to one tool. Our survey bears this out: more than 74.6% of organizations use 2 or more frameworks for their automation, and 38.6% use more than 3. These organizations likely recognize the complexity and diversity of their testing requirements, warranting a varied toolkit to effectively address each testing challenge.
  • Another interesting finding was that 23.5% of organizations use Selenium, Cypress, and Playwright at the same time.

* We asked respondents to pick from multiple frameworks, including Selenium, Cypress, TestNG, Cucumber, JUnit, Appium, WebDriverIO, Playwright, Mocha, Jest, XCUITest, Espresso, Puppeteer, Selendroid, and Robotium, with an option to add others.

App Testing 
Legacy Browser Testing 
  • 19.30% adopt a testing strategy covering the five most recent browser versions and legacy ones. This comprehensive approach aims for a consistent user experience, recognizing the diversity of the user base.
Mobile Device Testing 
  • 33% of organizations said they use both emulators/simulators and real devices when testing for handheld devices.
  • 25% of companies still use browser mobile viewports to test their mobile apps. Relying solely on browser viewports may result in a higher risk of missed issues, especially device-specific bugs.


“In the era of AI, testers become orchestrators, guiding the machine to generate code, optimize tests, and enhance quality. It’s a collaboration that propels us into the future.”

Test Architect


Handheld Device Diversity

  • About 81.1% of respondents state they use fewer than 10 devices to test their applications. There is potential for organizations to utilize cloud-based device platforms to test their applications across a wider range of handheld devices.

State of Continuous Testing 

Testing in Concurrency 
  • Around 44.1% of organizations run at least 5 parallel test sessions to meet their test execution needs. This serves as a benchmark for organizations sizing their parallel testing setup.
  • What is surprising is that 32% of organizations are not running tests in parallel at all. They could improve their test execution time simply by running more tests in parallel.
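As a rough illustration of why parallelization cuts execution time, here is a minimal Python sketch. The `run_test` stub is hypothetical, standing in for a real test invocation; it simply sleeps to simulate test work:

```python
import concurrent.futures
import time

def run_test(name):
    """Hypothetical stand-in for invoking a real test."""
    time.sleep(0.1)  # simulate test work
    return (name, "passed")

tests = [f"test_{i}" for i in range(10)]

# Serial execution: total time is roughly the sum of all test times.
start = time.perf_counter()
serial_results = [run_test(t) for t in tests]
serial_time = time.perf_counter() - start

# Parallel execution with 5 workers: total time is roughly the longest batch.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    parallel_results = list(pool.map(run_test, tests))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With 10 tests and 5 workers, the parallel run finishes in roughly a fifth of the serial time; real frameworks (e.g. pytest-xdist, TestNG parallel suites) apply the same principle across processes or machines.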
Test Execution Time 
  • 28% of large organizations and 26% of mid-sized organizations spend more than 60 minutes executing their test builds. This can be mitigated through more parallelization or by adopting smarter tooling with features like smart waits and test orchestration to minimize test execution time.
  • This also serves as a benchmark: if you are wondering how fast your test execution should be, aim for under 60 minutes, since most organizations run tests within that limit. If you are taking longer, you may need to rethink your test execution strategy and cut down the time through smarter test execution or by scaling infrastructure.


“While the possibilities of AI in code generation are speculative, its continued role in technological advancements and ethical considerations ensures its significant presence in the future of QA.”

Director, QA

Adoption of CI/CD Tools 
  • 88.9% of organizations use CI/CD tools to test or deploy their apps. Adoption is even higher in large-scale organizations, reaching up to 92.6%.
Automation Trigger Mechanisms 
  • Although around 88% of organizations say they use CI/CD tools, about 45% still trigger tests manually. Continuous testing is not just writing automation tests; it is automating the complete process, which means less human intervention. Eliminating manual test triggering would make these organizations more efficient.
Responsibility for CI/CD Testing Integration 
  • 46.4% of SDETs and QA engineers are involved in integrating automation tests into the CI/CD pipeline, which means they should be upskilled to handle these integrations and provided with the right toolset for the job.
Test Orchestration 
  • Running tests brute force on a first-come-first-served basis is not the ideal way to run tests. Smart test orchestration is required to run tests efficiently, for faster execution times and better developer feedback. Yet 36.5% of organizations are not orchestrating tests in any way.
  • In addition, 44% of organizations orchestrate tests via CI/CD tools or the frameworks themselves, which can be improved further through dedicated test orchestration and execution platforms like HyperExecute.
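One common orchestration technique is balancing tests across parallel shards using their historical durations, so no single shard becomes the bottleneck. The sketch below is illustrative only: the test names and durations are hypothetical, and dedicated platforms use much richer signals.

```python
import heapq

# Hypothetical per-test durations (seconds) observed in previous runs.
durations = {
    "test_checkout": 42, "test_cart": 30, "test_payment": 25,
    "test_search": 18, "test_profile": 9, "test_login": 5,
}

def balance_shards(durations, num_shards):
    """Longest-processing-time-first heuristic: assign each test
    (slowest first) to the currently least-loaded shard."""
    shards = [(0, i, []) for i in range(num_shards)]  # (total seconds, shard id, tests)
    heapq.heapify(shards)
    for name, secs in sorted(durations.items(), key=lambda kv: -kv[1]):
        load, i, tests = heapq.heappop(shards)  # least-loaded shard so far
        tests.append(name)
        heapq.heappush(shards, (load + secs, i, tests))
    return sorted(shards, key=lambda s: s[1])

for load, i, tests in balance_shards(durations, 3):
    print(f"shard {i}: {load}s -> {tests}")
```

Here 129 seconds of serial test time is spread across three shards of at most 44 seconds each, so the wall-clock time of the run is bounded by the heaviest shard rather than the whole suite.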
Test Case Prioritization 
  • 52.5% of organizations prioritize their testing based on the criticality of the feature or functionality, while hardly 5.5% prioritize test cases based on past test runs and customer feedback. In other words, most organizations prioritize by perceived criticality, and only a small percentage actively incorporates insights from past testing experience and direct customer feedback.
  • 21.5% of organizations run tests without any prioritization, which means there is scope for optimizing test execution for faster results and faster developer feedback.
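Prioritizing by past test runs can be as simple as ordering tests by historical failure rate, so likely failures surface first. A minimal sketch, assuming hypothetical per-test statistics:

```python
# Hypothetical per-test history: recent failure rate and average duration (seconds).
history = {
    "test_checkout": {"fail_rate": 0.20, "duration": 42.0},
    "test_search":   {"fail_rate": 0.15, "duration": 18.0},
    "test_login":    {"fail_rate": 0.05, "duration": 5.0},
    "test_profile":  {"fail_rate": 0.00, "duration": 9.0},
}

def prioritize(tests, history):
    """Order tests so historically failure-prone ones run first;
    among equally flaky tests, shorter ones go first for quicker feedback."""
    return sorted(
        tests,
        key=lambda t: (-history[t]["fail_rate"], history[t]["duration"]),
    )

order = prioritize(list(history), history)
print(order)  # ['test_checkout', 'test_search', 'test_login', 'test_profile']
```

In practice the failure rates would come from test analytics rather than a hard-coded dictionary, and signals like customer-reported defects per feature can be folded into the same sort key.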

State of Test Analytics 

Test Failure Resolution Timeframe

Flaky Test Detection 

  • Flaky tests continue to be a challenge for organizations, with 58% of teams seeing more than 1% of test runs flake, and more than 24% of large organizations seeing more than 5% of their tests flake. Better tooling for identifying flaky tests is required.
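A common heuristic for detecting flakiness is flagging any test that both passes and fails against the same code revision, since the code did not change between runs. A minimal sketch with a hypothetical run history:

```python
from collections import defaultdict

# Hypothetical run history: (test name, commit SHA, outcome).
runs = [
    ("test_login",  "abc123", "pass"),
    ("test_login",  "abc123", "fail"),  # same commit, different outcome -> flaky
    ("test_search", "abc123", "fail"),
    ("test_search", "abc123", "fail"),  # consistently failing -> likely a real bug
    ("test_cart",   "def456", "pass"),
]

def find_flaky(runs):
    """Flag tests whose outcomes differ across runs of the same commit."""
    outcomes = defaultdict(set)
    for name, sha, result in runs:
        outcomes[(name, sha)].add(result)
    return sorted({name for (name, _), res in outcomes.items() if len(res) > 1})

print(find_flaky(runs))  # ['test_login']
```

Commercial tools layer retries, quarantining, and root-cause hints on top of this basic signal, but the pass-and-fail-on-same-revision check is the core idea.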

State of Test Intelligence Toolset 

Test Intelligence Tools 
  • 71.4% of the organizations said they have either in-house tools, open-source tools, or commercially licensed tools they use for test intelligence and analytics.
  • 28.60% of organizations lack a setup for test intelligence and analytics.

AI/ML in Testing

State of AI/ML in Testing 

Gen-AI Tools 
  • 80.2% of organizations use text generation tools like ChatGPT and BingChat, indicating widespread adoption of GenAI platforms.
  • After text generation, code generation tools are the most favored GenAI tools among respondents, with 44% having used some form of code-gen tool like GitHub Copilot, OpenAI Codex, or AlphaCode. This reflects a growing preference for leveraging AI in coding tasks, potentially enabling faster development and creating demand for faster testing.

Adoption of AI 

Future of AI in Testing
  • 60.60% of organizations believe that AI will improve the productivity of teams, and humans will continue to play a major role in testing. This suggests a widespread view that AI will be an enhancer rather than a full replacement in the testing process.
Written by LambdaTest