Performance testing is a vital part of the software development life cycle (SDLC). It is performed to ascertain how the components of a system perform under a given workload. Badly performing applications generally do not deliver their intended benefit to an organization; they create a net cost of time and money and a loss of goodwill from the application's users, and therefore cannot be considered reliable assets. The terms performance testing, load testing, and stress testing are often used interchangeably, but measuring the speed of a service is not the same as measuring how much load the service can handle, and confirming that a service can handle normal expected activity is different from seeing how it responds to a very high load.

Why performance testing is important: a 5-minute outage of Google.com (19 August 2013) was estimated to cost the search giant as much as $545,000, and companies are estimated to have lost sales worth $1,100 per second during a recent Amazon Web Services outage.

Performance testing requires knowledge not only of the application under test, its usage patterns, and the execution infrastructure; it also requires an understanding of the performance test automation tools employed - scripting, monitoring, and configuration details.

Performance testing scope: performance testing differs from functional testing. Only the most exposed user scenarios should be identified and performance tested. Creating a useful performance script requires a lot of research and discussion with your team and stakeholders; done well, it yields a test set that ultimately results in a stronger, more reliable service in production.

How do we measure performance? We must take into account certain Key Performance Indicators (KPIs), which fall into two groups: 1. Service-oriented 2. Efficiency-oriented.
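The difference between a load test (confirming behaviour at expected traffic) and a stress test (pushing well beyond it) can be sketched with a few lines of Python. This is a minimal illustration, not a real tool: `call_service` is a hypothetical stand-in for a request to the system under test, and the concurrency figures are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service():
    """Hypothetical stand-in for a real request to the service under test."""
    time.sleep(0.01)  # simulate ~10 ms of service work
    return True       # in a real test, return whether the request succeeded

def run_load(concurrency, requests_per_worker):
    """Drive the service with a fixed number of concurrent workers;
    return (elapsed_seconds, successful_requests)."""
    total = concurrency * requests_per_worker
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lambda _: call_service(), range(total)))
    return time.perf_counter() - start, sum(results)

# Load test: expected concurrency. Stress test: 10x that, to find the breaking point.
normal_elapsed, normal_ok = run_load(concurrency=5, requests_per_worker=4)
stress_elapsed, stress_ok = run_load(concurrency=50, requests_per_worker=4)
print(f"load: {normal_ok} ok in {normal_elapsed:.2f}s; "
      f"stress: {stress_ok} ok in {stress_elapsed:.2f}s")
```

In practice a dedicated tool (JMeter, LoadRunner, etc.) replaces this loop, but the shape is the same: fix a concurrency level, drive traffic, and record elapsed time and success counts.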
Service-oriented indicators are availability and response time; they measure how well (or not) an application provides a service to its end users. Efficiency-oriented indicators are throughput and utilization; they measure how well (or not) an application makes use of the hosting infrastructure. In more detail:

Availability: the amount of time an application is available to the end user. Lack of availability is significant because many applications incur a substantial business cost for even a small outage.

Response time: the amount of time it takes for the application to respond to a user request. In performance testing terms, a response can be synchronous (blocking) or, increasingly, asynchronous, where end users need not wait for a reply before resuming interaction with the application.

Throughput: the rate at which application-oriented events occur. A good example is the number of hits on a web page within a given period of time.

Utilization: the percentage of the theoretical capacity of a resource that is being used - for example, how much network bandwidth is consumed by application traffic, or the amount of memory used on a web server farm when 1,000 visitors are active.

Taken together, these KPIs provide an accurate picture of an application's performance and its impact on the hosting infrastructure.

Approach and performance testing tool selection: this differs from solution to solution. The approach can be GUI-based or API-based, and the tool - LoadRunner, JMeter, LoadUI, etc. - should be selected accordingly. The performance tester should form hypotheses, draw tentative conclusions, determine what information is needed to confirm or disprove those conclusions, and prepare key visualizations that give insight into system performance and bottlenecks and support the narrative of the report.
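The four KPIs above can be computed from raw test measurements. The sketch below uses invented sample data; the figures (`samples`, `window_seconds`, the memory numbers) are assumptions for illustration, and availability is simplified to the fraction of successful requests in the observation window rather than true uptime.

```python
# Hypothetical per-request measurements: (response_time_seconds, succeeded)
samples = [(0.12, True), (0.30, True), (0.08, True), (2.50, False), (0.15, True)]
window_seconds = 10.0        # observation window for throughput
server_capacity_mb = 4096.0  # theoretical memory capacity (assumed)
server_used_mb = 1024.0      # memory in use during the test (assumed)

# Service-oriented KPIs
availability = sum(1 for _, ok in samples if ok) / len(samples)  # fraction of successful requests
avg_response = sum(t for t, _ in samples) / len(samples)         # mean response time, seconds

# Efficiency-oriented KPIs
throughput = len(samples) / window_seconds                       # requests per second
utilization = server_used_mb / server_capacity_mb                # fraction of capacity in use

print(f"availability={availability:.0%}, avg response={avg_response:.2f}s, "
      f"throughput={throughput:.1f} req/s, utilization={utilization:.0%}")
# → availability=80%, avg response=0.63s, throughput=0.5 req/s, utilization=25%
```

Note how one slow failing request (2.50 s) drags the mean response time up; real reports usually add percentiles (e.g. 90th/95th) alongside the mean for exactly this reason.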

November 5, 15:00–15:45 (45′)

Asha Singh