Performance Testing Interview Questions #7

Q. 31 How can you identify performance test use cases for any application?

Ans: The core business functionality and the expected user volume are the two major factors for prioritising performance test cases. The core business functionality gets the highest priority, and user load comes second. After finalising the scope of the core business flows, check the database operations, concurrency limits, and any other functionality that currently has a moderate load but may grow in the future.
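As a minimal sketch of this prioritisation idea (the scenario names, criticality scores and weights below are hypothetical placeholders, not from any real project), you could rank candidate use cases by business criticality first and expected load second:

```python
# Hypothetical candidates: (use case, business criticality 1-5, expected concurrent users)
candidates = [
    ("Login", 5, 5000),
    ("Checkout", 5, 2000),
    ("Search catalogue", 4, 3000),
    ("Update profile", 2, 200),
]

def score(criticality, users, max_users=5000):
    # Criticality dominates; expected load breaks ties, mirroring the answer above.
    return criticality * 10 + (users / max_users) * 5

for name, crit, users in sorted(candidates,
                                key=lambda c: score(c[1], c[2]),
                                reverse=True):
    print(f"{name:20s} score={score(crit, users):5.1f}")
```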


Q. 32 How do you identify memory leaks in the system?

Ans: You need to run a soak or endurance test to identify memory leakage. If you run a test for a longer duration and observe that the server memory usage increases gradually and does not come back down even after the test completes, there may be a memory leak.

To confirm the suspicion, analyse the garbage collection (GC) logs.
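A minimal sketch of this kind of monitoring, assuming the target process runs on the same machine and the psutil package is installed (the PID, duration and interval are placeholders). A steady upward slope that never recovers after the soak test is only a hint of a leak; GC analysis is still needed to confirm it:

```python
import time
import psutil

def sample_rss(pid, duration_s=3600, interval_s=60):
    # Sample the process's resident memory (RSS) at a fixed interval.
    proc = psutil.Process(pid)
    samples = []
    for _ in range(duration_s // interval_s):
        samples.append(proc.memory_info().rss)  # bytes
        time.sleep(interval_s)
    return samples

def growth_mb_per_hour(samples, interval_s=60):
    # Crude first-to-last slope; real analysis would use GC logs / heap dumps.
    if len(samples) < 2:
        return 0.0
    delta_bytes = samples[-1] - samples[0]
    hours = (len(samples) - 1) * interval_s / 3600
    return delta_bytes / (1024 * 1024) / hours

# Example usage (hypothetical PID):
# samples = sample_rss(pid=12345, duration_s=4 * 3600, interval_s=60)
# print(f"Memory growth: {growth_mb_per_hour(samples):.1f} MB/hour")
```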


Q. 33 What is throughput?

Ans: In LoadRunner, throughput is the amount of data sent by the server in response to client requests in a given time period. LoadRunner measures throughput in bytes or megabytes. In JMeter, you measure throughput in terms of requests per second or per minute.
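A small worked example (the figures are illustrative, not taken from any tool) showing both views of throughput computed from a run's summary numbers:

```python
# Illustrative run summary
bytes_received = 450 * 1024 * 1024   # 450 MB returned by the server
requests_completed = 90_000          # total request/transaction count
test_duration_s = 600                # 10-minute run

data_throughput_mb_s = bytes_received / (1024 * 1024) / test_duration_s
request_throughput_rps = requests_completed / test_duration_s

print(f"Data throughput:    {data_throughput_mb_s:.2f} MB/s   (LoadRunner-style view)")
print(f"Request throughput: {request_throughput_rps:.1f} req/s (JMeter-style view)")
```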


Q. 34 How are benchmark and baseline tests different?

Ans: Baseline testing is the process of running a set of tests to capture performance information that can be used as a reference point when changes are made to the application in the future. Benchmark testing, on the other hand, compares your system's performance against industry standards published by other organisations. For example, you can run a baseline test on an application, analyse the collected results, modify several indexes in a SQL Server database, and then run the identical test again, comparing the new results against the earlier ones to find out whether they are the same, better, or worse.
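As a minimal sketch of that comparison step (the metric names, numbers and 5% tolerance are hypothetical), a new run can be checked against a stored baseline like this:

```python
# Hypothetical baseline and post-change results; lower is better for all three metrics.
baseline = {"avg_response_ms": 820, "p90_response_ms": 1400, "errors_pct": 0.4}
new_run  = {"avg_response_ms": 610, "p90_response_ms": 1150, "errors_pct": 0.3}

TOLERANCE = 0.05  # treat changes within 5% as "the same"

for metric, old in baseline.items():
    new = new_run[metric]
    change = (new - old) / old
    if abs(change) <= TOLERANCE:
        verdict = "same"
    elif change < 0:
        verdict = "better"
    else:
        verdict = "worse"
    print(f"{metric:18s} {old:>8} -> {new:>8}  ({change:+.1%}, {verdict})")
```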


Q. 35 What are the common mistakes a performance tester makes?

Ans: Some common mistakes are:

  1. Jumping directly to multi-user tests without first running a smoke test with a few users
  2. Starting another test without validating the previous test result
  3. Bombarding the server without pacing or think time (illustrated in the sketch after this list)
  4. Running the test with unknown workload details
  5. Too short a run duration
  6. No user ramp-up period
  7. No long-duration sustainability (soak) test
  8. Confusion about user concurrency
  9. A significant difference between the test and production environments
  10. No network bandwidth simulation
  11. Underestimating performance testing schedules
  12. Shortening the performance testing schedule and treating it as just a formality
  13. Incorrect extrapolation of pilot results
  14. Inappropriate baselining of configurations
  15. Performing bottleneck analysis without considering all the important graphs
  16. Running the test without validating the test data
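A minimal sketch of the think time and ramp-up ideas from items 3 and 6 (the URL, user count and timings are placeholders; a real test would be scripted in JMeter or LoadRunner rather than plain Python):

```python
import threading
import time
import urllib.request

TARGET_URL = "http://localhost:8080/health"  # placeholder endpoint
VIRTUAL_USERS = 10
RAMP_UP_S = 30          # start users evenly over 30 seconds instead of all at once
THINK_TIME_S = 2.0      # pause between a user's requests to keep the load realistic
ITERATIONS = 5

def virtual_user(user_id):
    for i in range(ITERATIONS):
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                resp.read()
            print(f"user {user_id} iter {i}: {time.time() - start:.3f}s")
        except Exception as exc:
            print(f"user {user_id} iter {i}: error {exc}")
        time.sleep(THINK_TIME_S)  # think time between actions

threads = []
for u in range(VIRTUAL_USERS):
    t = threading.Thread(target=virtual_user, args=(u,))
    t.start()
    threads.append(t)
    time.sleep(RAMP_UP_S / VIRTUAL_USERS)  # ramp-up: stagger user starts

for t in threads:
    t.join()
```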