What is Performance Test Execution?
Performance test execution refers to running the tests specific to performance testing, such as load tests, soak tests, stress tests, spike tests, etc., using a performance testing tool. The Performance Test Plan contains detailed information about all of these tests, which need to be executed in the performance testing window. Basically, this phase has two sub-phases:
- Test Execution: To run the planned performance tests
- Result Analysis: To analyse the test result and prepare an interim test report
The performance test execution phase has the following activities:
- Execute the documented/agreed performance test
- Analyse the performance test result
- Verify the result against defined NFRs
- Prepare an Interim Performance Test Report
- Take the decision to complete or repeat the test cycle based on the interim test result
A performance test analyst or engineer executes the tests as per the testing timelines. A performance test lead or manager is responsible for analysing the test results and preparing a way-forward plan based on them. If a performance test analyst or engineer has enough experience to understand the performance testing aspects, then he can also analyse the results. In such a case, the performance test lead or manager is responsible for verifying the report before it is sent to the project team.
Approach for Test Execution:
Before starting test execution, there are a few prerequisites that a performance tester should follow. He must pay attention to the points below:
- Verify all the performance test scripts locally
- Validate all the scenarios
- Check all the external file paths in the test scripts. Each file path should match the file location available on the load generator
- Check whether load generator and controller have sufficient disk space
- Reboot the controllers and all the load generators (if feasible)
- All the performance tests should run on the latest code of the application
- The performance test environment should have only the QA (functional test) passed version of the code, so that the application is free from functional bugs
- Verify the script on load generator by running a smoke test before starting the actual load test
- Verify all the test data (if feasible) so that there is no failure in the test due to the test data issue
- Restart Web/Application/Database servers before starting a test
- Perform server logs clean-up activity
- Conduct a quick Healthcheck to verify the stability of the environment
- Verify whether all the required monitors are up and running
- Validate the run-time setting and parameter files
- If the test is scheduled, check that the system time is synced with the testing tool's time so that the test starts at the correct time.
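Several of the checks above (disk space, external data files, clock sync) can be automated before pressing 'Run'. The sketch below is a minimal, tool-independent illustration; the file names and thresholds are hypothetical examples, not values from this post, and should be adjusted to your own controller and load generator setup.

```python
import os
import shutil
import time

# Hypothetical parameter files and threshold for illustration only;
# replace with the files your scripts actually reference.
DATA_FILES = ["users.csv", "accounts.csv"]
MIN_FREE_GB = 10  # minimum free disk space required for test results

def check_disk_space(path=".", min_free_gb=MIN_FREE_GB):
    """True if the disk hosting `path` has enough free space for results."""
    free_gb = shutil.disk_usage(path).free / (1024 ** 3)
    return free_gb >= min_free_gb

def missing_data_files(files=DATA_FILES, base_dir="."):
    """Return the parameter files that are absent on this load generator."""
    return [f for f in files if not os.path.isfile(os.path.join(base_dir, f))]

def clock_in_sync(reference_epoch, tolerance_s=5):
    """Compare the local clock with a reference (e.g. controller) timestamp."""
    return abs(time.time() - reference_epoch) <= tolerance_s
```

Running these checks in a pre-test script and aborting on any failure is cheaper than discovering a missing data file halfway through a soak test.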
After verifying all these checkpoints, a performance tester can press the 'Run' button to start the test.
Once the test starts, check the graphs and stats in the live monitors of the testing tool. A performance tester needs to pay attention to some basic metrics like active users, transactions per second, hits per second, throughput, error count, and error types. He also needs to check the behaviour of the users against the defined workload. Finally, the test should stop properly and the results should be collated correctly at the given location.
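The basic live metrics mentioned above can be derived from raw request records. Here is a minimal sketch, assuming the tool can stream or export records as (timestamp, passed) pairs; the record format is an illustrative assumption, not a specific tool's API.

```python
from collections import Counter

def live_rollup(samples):
    """
    samples: iterable of (epoch_seconds, passed_bool) request records
    (an assumed format for illustration).
    Returns per-second hit counts and the overall error count.
    """
    hits = Counter()
    errors = 0
    for ts, passed in samples:
        hits[int(ts)] += 1  # bucket hits into one-second intervals
        if not passed:
            errors += 1
    return hits, errors
```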
Once a performance test completes, the performance tester collects the results and starts the result analysis, which is a post-execution activity. A performance tester should follow the approach below to conduct performance test result analysis.
Approach for Test Result Analysis:
As mentioned at the beginning of this post, the second sub-phase of the performance test execution stage is Result Analysis. Performance test result analysis is an important and more technical part of performance testing. It requires expertise to determine the bottleneck and remediation options at the appropriate level of the software system – business, middleware, application, infrastructure, network, etc.
Pre-result analysis activities:
Before starting performance test result analysis, a performance tester should check these important points:
- The test should run for the defined duration
- Filter-out the ramp-up and ramp-down duration
- Eliminate ‘Think Time’ from the graphs/stats
- Eliminate ‘Pacing’ (if the tool counts it) from the graphs/stats
- No tool-specific error should have occurred, such as a load generator failure, memory issue, etc.
- No network-related issue should have occurred during the test, such as a network failure or load generators becoming disconnected from the network
- The testing tool should collect the results from all the load generators and prepare a combined test report
- CPU and memory utilization percentages should be noted down pre-test (at least 1 hour), post-test (at least 1 hour) and during the test.
- Use proper granularity to identify correct peaks and lows
- Use the filter option to eliminate unwanted transactions (if any)
- Start the analysis with some basic metrics, like:
- Number of Users: The actual load during steady state should meet the user load NFR
- Response Time: The actual response time during steady state should meet the response time NFR. Response time should be measured at two levels – individual transaction response time and end-to-end response time. If NFRs are available for both levels, then both should be met
- Transactions per second / Iterations per hour: If either of these metrics is defined, then the actual result should match the defined figure
- Throughput: Throughput should be comparable (not an exact apples-to-apples match) across the same set of tests
- Error: The error count should be less than the defined error tolerance limit
- Passed transaction counts: Ideally, the passed count of the first transaction should match the passed count of the last transaction. If it does not, identify the failed transactions
- Analyse the graphs:
- Set the proper granularity for all the graphs
- Read the graph carefully and note down the points
- Check the spikes and lows in the graph
- Merge the different graphs to identify the root cause of the issue
- If the performance testing tool and the monitoring tool are not integrated, then note the time when the error occurred and sync the graphs generated by both tools
- Do not extrapolate the results on the basis of incomplete statistics
- Analyse the other reports:
- Generate a heap dump and analyse the Java heap during the test
- Perform thread dump analysis to check for deadlocks or stuck threads
- Analyse the Garbage Collector logs
- Analyse the AWR report to find long-running DB queries
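The ramp-up/ramp-down filtering and NFR checks described above can be sketched in a tool-independent way. This is a minimal illustration, assuming results are exported as (elapsed-seconds, response-time) pairs; the nearest-rank percentile and the 90th-percentile response-time NFR are illustrative choices, not something this post prescribes.

```python
def steady_state(samples, ramp_up_s, ramp_down_s):
    """Drop ramp-up/ramp-down records; samples = [(elapsed_s, resp_time_s), ...]."""
    end = max(t for t, _ in samples)
    return [(t, r) for t, r in samples if ramp_up_s <= t <= end - ramp_down_s]

def percentile(values, pct):
    """Nearest-rank percentile - a simple, tool-independent definition."""
    ordered = sorted(values)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[idx]

def meets_nfr(samples, ramp_up_s, ramp_down_s, nfr_90th_s):
    """Check the steady-state 90th-percentile response time against an NFR."""
    steady = steady_state(samples, ramp_up_s, ramp_down_s)
    return percentile([r for _, r in steady], 90) <= nfr_90th_s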
Post-result analysis activities:
A performance tester gathers all the results, i.e. client-side and server-side stats, and starts analysing them. He verifies the results against the defined NFRs. After each test, the performance tester prepares an interim test report, which is analysed by a performance test lead or manager.
Some key points for reporting:
- It is good practice to generate an individual test report for each test
- Define a template for test report and use the same template to generate the report
- Highlight the observations and defects in the test report
- If the performance testing tool does not have a reporting feature, then prepare an interim test report manually (a template link is available in the deliverables section of this post)
- Attach all the relevant reports like heap dump analysis report, AWR report etc. with the interim test report
- Provide defect description along with defect ID
- Conclude the result with a Pass or Fail status
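A consistent report template can be as simple as a metrics-versus-NFRs table with an overall verdict. The sketch below is a minimal illustration of that idea; the metric names and the "actual must not exceed target" rule are assumptions for the example, not fields from the post's actual template.

```python
def interim_summary(test_id, metrics, nfrs):
    """
    Build a minimal pass/fail summary for an interim test report.
    metrics / nfrs: dicts like {"avg_response_s": 1.8} (illustrative names);
    here an NFR is met when the measured value is <= the target.
    """
    lines = [f"Interim Test Report: {test_id}"]
    overall = "Pass"
    for name, target in nfrs.items():
        actual = metrics.get(name)
        status = "Pass" if actual is not None and actual <= target else "Fail"
        if status == "Fail":
            overall = "Fail"
        lines.append(f"  {name}: actual={actual}, target={target} -> {status}")
    lines.append(f"Overall Result: {overall}")
    return "\n".join(lines)
```

Using one template like this for every test keeps reports comparable across rounds and cycles.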
Along with the results, if a performance tester detects any performance bottleneck, then he raises a defect and assigns it to the respective team for further investigation of the root cause. Root cause analysis is basically a team effort in which the performance tester, system administrators, technical experts and DBAs all play a vital role. The test execution and bottleneck analysis activities are cyclic in nature.
After the application is tuned, the same test is repeated to verify the performance of the application. If the issue still persists, the application is pushed back again for tuning until it meets the NFRs.
The Interim Performance Test Report is the only deliverable of this phase. Download the template of the Interim Performance Test Report.
PerfMate created the test scripts and scenarios in the last two phases and is now ready to start the test execution cycle. He checks all the required prerequisites and starts the test. As per the Performance Test Plan, he needs to conduct various tests on the application: a load test, stress test, soak test and spike test.
| Cycle# | Round# | Test Type | Test ID |
|---|---|---|---|
| Cycle 01 | Round 01 | Load Test | C1R1Load |
| Cycle 01 | Round 01 | Stress Test | C1R1Stress |
| Cycle 01 | Round 01 | Soak Test | C1R1Soak |
| Cycle 01 | Round 01 | Spike Test | C1R1Spike |
| Cycle 01 | Round 02 | Load Test | C1R2Load |
| Cycle 01 | Round 02 | Stress Test | C1R2Stress |
| Cycle 01 | Round 02 | Soak Test | C1R2Soak |
| Cycle 01 | Round 02 | Spike Test | C1R2Spike |
| Cycle 02 | Round 01 | Load Test | C2R1Load |
| Cycle 02 | Round 01 | Stress Test | C2R1Stress |
| Cycle 02 | Round 01 | Soak Test | C2R1Soak |
| Cycle 02 | Round 01 | Spike Test | C2R1Spike |
| Cycle 02 | Round 02 | Load Test | C2R2Load |
| Cycle 02 | Round 02 | Stress Test | C2R2Stress |
| Cycle 02 | Round 02 | Soak Test | C2R2Soak |
| Cycle 02 | Round 02 | Spike Test | C2R2Spike |
After finishing each test, he analyses the application's performance against the defined NFRs. He also checks for bottlenecks and raises all identified defects in the defect management tool.
Finally, he prepares an interim test report and shares it with the project stakeholders.