A User Graph shows the load pattern throughout the test. It helps answer questions such as:
- When did the user load start?
- What were the user ramp-up and ramp-down patterns?
- When did the steady-state start?
- How many users were active at a particular time?
- When did users exit the test?
In the runtime viewer, the User Graph shows the number of users currently accessing the application. When you run a step-up test or spike test, the User Graph displays the live load pattern, which can be compared against the defined workload model of the test; this makes it one of the best ways to validate the live workload model. It is a simple but important graph: simple because anyone with basic graph-reading skills can interpret it, and important because it can be merged with other graphs (response time, errors per second, throughput, latency, etc.) to identify bottlenecks.
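The user graph described above can also be reconstructed from raw test results. Below is a minimal sketch (not tied to any specific tool's export format) that assumes you have each virtual user's start and end timestamps, and counts how many users were active at each elapsed second:

```python
from collections import Counter

def active_users_per_second(sessions):
    """Count concurrently active virtual users for each elapsed second.

    sessions: list of (start_sec, end_sec) tuples, one per virtual user.
    Returns a dict mapping each second to the number of active users.
    """
    counts = Counter()
    for start, end in sessions:
        # A user is considered active from its start second through its end second.
        for second in range(start, end + 1):
            counts[second] += 1
    return dict(counts)

# Hypothetical data: three users with overlapping lifetimes (a small step-up pattern).
sessions = [(0, 10), (2, 10), (4, 8)]
graph = active_users_per_second(sessions)
print(graph[0])  # 1 user active at t=0
print(graph[5])  # 3 users active at t=5
print(graph[9])  # 2 users active at t=9 (the third user exited at t=8)
```

Plotting `graph` (second on the X-axis, count on the Y-axis) reproduces the ramp-up, steady-state, and ramp-down shape discussed in this article.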
Every performance testing tool has its own name for the user graph; some examples:
- LoadRunner: Running Vuser Graph
- JMeter: Active Threads Over Time Graph
- NeoLoad: User Load Graph
User Graph axes represent:
- X-axis: It shows the elapsed time, displayed as either relative or actual (clock) time depending on the graph's settings. The X-axis also spans the complete duration of the test.
- Y-axis: It represents the number of users (load).
How to read: Trace the number of users (Y-axis) over the elapsed time (X-axis); the shape of the curve reveals the ramp-up, steady-state, and ramp-down phases of the workload.
Merging the User Graph with other graphs
- With Response Time Graph: Merge the User Graph with the Response Time graph to see how the user load impacts the application's response time. For example, verify that an increase in response time does not cause users to exit the test prematurely; if it does, investigate the cause.
- With Throughput Graph: Merge the User Graph with the Throughput graph to identify the pattern of data coming from the server. There should not be a sudden spike or drop in throughput. During steady state under constant load, the throughput graph should follow a regular, roughly constant pattern.
- With Error Graph: Overlaying the User Graph with the errors-per-second graph makes it easy to identify exactly when the first error occurred and how long the errors lasted. Generally, an increase in error percentage during ramp-up indicates errors caused by load. An error appearing in the middle of steady state indicates a queue pile-up at the server (a large number of unprocessed requests at the server end) or some other server-side issue.
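The error-graph overlay above can be approximated numerically. This is a minimal sketch using hypothetical per-second user and error series (real tools can export these as CSV) that locates the first error and the load level at that moment:

```python
# Hypothetical per-second series, as a load-test tool might export them.
users_per_sec  = {0: 10, 1: 20, 2: 30, 3: 40, 4: 50, 5: 50, 6: 50}
errors_per_sec = {0: 0,  1: 0,  2: 0,  3: 2,  4: 5,  5: 6,  6: 6}

def first_error_second(errors):
    """Return the earliest second with a non-zero error count, or None."""
    for second in sorted(errors):
        if errors[second] > 0:
            return second
    return None

t = first_error_second(errors_per_sec)
print(t)                 # 3 -> errors began at second 3
print(users_per_sec[t])  # 40 -> users active when the first error appeared
```

Here errors begin while the load is still ramping up, which (per the guideline above) points to a load-induced error rather than a mid-steady-state queue pile-up.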
Remember: Before drawing any conclusion, properly investigate the root cause of the performance bug by referring to all the related analysis graphs.