Performance Testing Interview Questions and Answers


Last updated on 05th Oct 2020, Blog, Interview Question

About author

Yogesh (Sr. Project Manager)

Highly experienced in his industry domain, with 7+ years of experience. He has also been a technical blog writer for the past 4 years, sharing informative knowledge for job seekers.


Performance testing is a non-functional testing technique performed to determine system parameters in terms of responsiveness and stability under various workloads. Performance testing measures quality attributes of the system such as scalability, reliability, and resource usage.

1. What is the difference between Performance Testing and Performance engineering?


In Performance Testing, the testing cycle includes requirement gathering, scripting, execution, result sharing, and report generation. Performance Engineering goes a step beyond Performance Testing: after execution, the results are analyzed with the aim of finding performance bottlenecks, and a solution is provided to resolve the identified issues.

2. Explain Performance Testing Life Cycle.


Step 1: System Analysis (identification of critical transactions)

Virtual User Generator:

Step 2: Creating Virtual User Scripts (recording)

Step 3: Defining User Behavior (run-time settings)

LoadRunner Controller:

Step 4: Creating Load Test Scenarios

Step 5: Running the Load Test Scenarios and Monitoring the Performance

LoadRunner Analysis:

Step 6: Analyzing the Results

3. What is Performance Testing?


Performance Testing is done to evaluate the application’s performance under load and stress conditions. It is generally measured in terms of the response time of the user’s action on an application.

4. What is Load Testing?


Load Testing is to determine if an application can work well with the heavy usage resulting from a large number of users using it simultaneously. The load is increased to simulate the peak load that the servers are going to take during maximum usage periods.

5. What are the different components of LoadRunner?


The major components of LoadRunner are:

VUGen: Records Vuser scripts that emulate the actions of real users.

Controller: Administrative center for creating, maintaining and executing load test scenarios. Assigns scenarios to Vusers and load generators, starts and stops loading tests.

Load Generator: An agent through which load is generated.

Analysis: Provides graphs and reports that summarize the system performance.

6. What is the Rendezvous point?


A Rendezvous point helps emulate heavy user load (requests) on the server by instructing Vusers to act simultaneously. When a Vuser reaches the Rendezvous point, it waits until all other Vusers with the same Rendezvous point arrive. Once the designated number of Vusers reaches it, they are all released together. The function lr_rendezvous is used to create a Rendezvous point. It can be inserted by:

  • Clicking the Rendezvous button on the floating Recording toolbar while recording.
  • After recording, through Insert > Rendezvous.
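
The rendezvous concept can be sketched outside LoadRunner with a thread barrier. This is an illustrative Python analogy, not LoadRunner code; the Vuser count and actions are assumptions for the sketch:

```python
import threading

NUM_VUSERS = 5
rendezvous = threading.Barrier(NUM_VUSERS)   # releases only when all 5 arrive
results = []
lock = threading.Lock()

def vuser(vuser_id):
    # ...per-user setup (login, navigation) would happen here...
    rendezvous.wait()                 # block until every Vuser reaches this point
    with lock:
        results.append(vuser_id)      # all threads fire at (almost) the same moment

threads = [threading.Thread(target=vuser, args=(i,)) for i in range(NUM_VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results))  # prints 5: every Vuser was released together
```

LoadRunner's rendezvous point plays the same role as the barrier: no Vuser proceeds until the designated number have arrived.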

7. What are the different sections of the script? In what sequence do these sections run?


A LoadRunner script has three sections: vuser_init, Action, and vuser_end.

  • vuser_init contains requests/actions to log in to the application/server.
  • Action contains the actual code to test the functionality of the application. It can be played many times in iterations.
  • vuser_end contains requests/actions to log out of the application/server.

The sections execute in this sequence: vuser_init at the very beginning, vuser_end at the very end, and Action in between, repeated for each iteration.
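
The execution order above can be sketched in Python terms; the section names mirror the LoadRunner script, the rest is illustrative:

```python
# Execution-order sketch of a LoadRunner script: init once, Action per iteration, end once
def run_vuser(iterations):
    log = []
    log.append("vuser_init")      # login/setup, runs once at the start
    for _ in range(iterations):
        log.append("Action")      # business flow, repeated on every iteration
    log.append("vuser_end")       # logout/teardown, runs once at the end
    return log

print(run_vuser(3))  # ['vuser_init', 'Action', 'Action', 'Action', 'vuser_end']
```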

8. How do you identify which protocol to use for any application?


Previously, performance testers had to depend heavily on the development team to learn which protocol the application uses to communicate with the server; sometimes it was even speculative.

However, from version 9.5 onwards, LoadRunner provides great help in the form of the Protocol Advisor. The Protocol Advisor detects the protocols the application uses and suggests the possible protocols in which the script can be created to simulate a real user.

9. What is a Correlation? Explain the difference between Automatic Correlation and Manual Correlation?


Correlation is used to handle the dynamic values in a script. The dynamic value could change for each user action (value changes when action is replayed by the same user) or for different users (value changes when action is replayed with a different user). In both cases, correlation takes care of these values and prevents them from failing during execution.

Manual Correlation involves identifying the dynamic value, finding its first occurrence, identifying unique boundaries for capturing it, and writing the correlation function web_reg_save_param before the request whose response contains the first occurrence of the dynamic value.

Automated correlation works on predefined correlation rules. The script is played back and, on failure, scanned for correlation candidates; VuGen identifies the places where the correlation rules apply and correlates the values upon approval.
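
What a boundary-based correlation function does can be sketched in Python. The response string and boundaries below are hypothetical, and web_reg_save_param itself does considerably more:

```python
import re

def save_param(response, left_boundary, right_boundary):
    # Sketch of boundary-based correlation (web_reg_save_param-style):
    # capture the text between a left and a right boundary in the server response.
    pattern = re.escape(left_boundary) + r"(.*?)" + re.escape(right_boundary)
    match = re.search(pattern, response)
    return match.group(1) if match else None

# Hypothetical response containing a dynamic session token
response = '<input name="csrf_token" value="a1b2c3"/>'
print(save_param(response, 'value="', '"'))  # a1b2c3
```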

10. How to identify what to correlate and what to parameterize?


Any value in the script that changes on each iteration or with the different users while replaying needs correlation. Any user input while recording should be parameterized.

11. What is Parameterization & why is Parameterization necessary in the script?


Replacing hard-coded values within the script with a parameter is called Parameterization. This lets a single virtual user (Vuser) use different data on each run, which simulates real-life usage of the application and prevents the server from simply returning cached results.
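
A minimal sketch of parameterization, assuming a CSV-style data file and sequential value selection (the usernames and endpoint are made up):

```python
import csv
import io

# Hypothetical parameter file: each iteration picks the next row ("sequential" selection)
param_file = io.StringIO("username,password\nuser1,pass1\nuser2,pass2\nuser3,pass3\n")
rows = list(csv.DictReader(param_file))

def login_request(iteration):
    creds = rows[iteration % len(rows)]       # wrap around when the data runs out
    return f"POST /login username={creds['username']}"

print(login_request(0))  # POST /login username=user1
print(login_request(3))  # POST /login username=user1 (wrapped around)
```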

12. How do you identify Performance test use cases of any application?


Test cases/use cases for performance tests are almost the same as manual/functional test cases, where every step performed by the user is written down. The only difference is that not all manual test cases qualify as performance testing use cases; there are a few selection criteria:

  • The user activity should relate to the critical and most important functionality of the application.
  • The user activity should involve a good amount of database activity, such as search, delete, or insert.
  • The user activity should have good user volume. Functionality with little user activity, such as admin account activity, is generally omitted from the performance testing point of view.

Any manual test case that fulfills the above criteria can be used as a performance testing use case/test case. If manual test cases are not written step by step, the performance team should create dedicated documents for them.


13. While scripting you created correlation rules for Automatic Correlation. If you want to share the correlation rules with your team members working on the same application so that he/she can use the same on his workstation, how will you do that?


Correlation rules can be exported through the .cor file and the same file can be imported through VuGen.

14. What are the different types of Vuser logs that can be used during scripting and execution? What is the difference between these logs? When do you disable logging?


There are two types of Vuser logs available: the Standard log and the Extended log. Logs are key for debugging the script.

The Standard log records the functions and messages sent to the server during script execution, whereas the Extended log additionally contains warnings and other messages. Logging is used during debugging and disabled during execution; once a script is up and running, logging can be enabled for errors only.

15. Name a few types of performance testing


There are primarily six types of performance testing. They are-

  • Load testing
  • Endurance testing
  • Volume testing
  • Stress testing
  • Scalability testing
  • Spike testing

16. Differentiate between Stress Testing and Load Testing?


Stress Testing is also known as negative testing, as the tester pushes the system beyond its specified boundaries to discover its breakpoint threshold. Load testing, by contrast, is the simplest form of performance testing, done by increasing the load step by step until a defined limit or goal is reached.

17. What is concurrent user load in performance testing?


Concurrent user load in performance testing refers to many users hitting the same functionality or operation at the same time.

18. What is a protocol and name a few protocols?


A protocol is a set of rules for communicating information between two or more systems. Examples include HTTP/HTTPS, FTP, Web Services, and Citrix.

19. Name a few common performance testing problems.


A few recurring performance testing problems are-

  • Longer loading time
  • Poor Scalability
  • Bottlenecking
  • Poor response time

20. Name a few common performance bottlenecks?


Some popular common performance bottlenecks are-

  • CPU utilization
  • Memory utilization
  • Network utilization
  • OS limitations
  • Disk usage

21. Name some popular performance testing tools?


Some common performance testing tools are-

  • HP LoadRunner
  • HTTP Load
  • Proxy Sniffer
  • Rational Performance Tester
  • JMeter
  • Borland Silk Performer

22. What are the parameters considered for performance testing?


The parameters considered are-

  • Memory usage
  • Processor usage
  • CPU interrupts per second
  • Committed memory
  • Thread counts
  • Network output queue length
  • Response time
  • Bandwidth
  • Memory pages
  • Top waits, etc.

23. Elucidate the steps that are required in JMeter to create a performance test plan?


To create a performance test plan in JMeter you need the following steps-

  • Add thread group
  • Add JMeter elements
  • Add Graph result
  • Run test & get the result

24. List down the phases of automated performance testing?


Here is a list of phases for automated performance testing

  • Design or Planning
  • Build
  • Execution
  • Analyzing & Tuning

25. Differentiate between benchmark testing and baseline testing?


Benchmark Testing compares your system's performance against an industry standard already laid out by another organization. Baseline Testing, on the other hand, is a technique in which a tester runs a series of tests to capture performance information; when any future change is made to the application, this data is used as a reference point.

26. Name and elucidate the types of performance tuning.


In order to improve the performance of the system, primarily there are two types of tuning performed-

Hardware tuning: Enhancing, adding or supplanting the hardware components of the system under test and changes in the framework level to augment the system’s performance is called hardware tuning.

Software tuning: Identifying the software level bottlenecks by profiling the code, database etc. Fine-tuning or modifying the software to fix the bottlenecks is called software tuning.

27. Highlight the need for opting for Performance testing?


Performance testing is generally required to validate the below-given things:

  • The response time of the application for the intended number of users.
  • The maximum load-resisting capacity of the application.
  • The capability of the application under test to handle a particular number of transactions.
  • The stability of the application under expected and unexpected user load.
  • Making sure that users get an acceptable response time in production.

28. What is the reason behind the discontinuation of manual load testing?


The following drawbacks of manual load testing led to the adoption of automated load testing:

  • Complicated procedure to measure the performance of the application precisely.
  • Complex synchronization procedures between the two or more users.
  • Difficult to assess and recognize the outcomes & bottlenecks.
  • Increased overall infrastructure cost.

29. How would you identify performance bottleneck situations?


Performance bottlenecks can be recognized by monitoring the application under load and stress conditions. To find bottleneck situations, testers often use LoadRunner because it supports many different types of monitors, such as the run-time monitor, network delay monitor, web resource monitor, database server monitor, firewall monitor, ERP server resources monitor, and Java performance monitor. These monitors help the tester establish which condition causes the increase in the application's response time. The application's performance is assessed based on response time, throughput, hits per second, network delay graphs, etc.

30. Is it possible to perform spike testing in JMeter? If yes, how?


Spike Testing is conducted to understand what happens to the application when the number of users is abruptly increased or decreased at a certain point, and to monitor the application's behavior thereafter. In JMeter, spike testing can be achieved using the Synchronizing Timer: threads are held by the timer until a specific number of threads have been blocked, and are then released at once, creating a large instantaneous load.

31. What is the throughput in Performance Testing?


Throughput in performance testing is either the quantity of data sent by the server in response to client requests in a given period of time, or the number of units of work that can be handled per unit of time. Throughput is expressed in terms of requests per second, calls per day, reports per year, hits per second, etc. In the majority of cases, throughput is measured in bits per second.
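
As a worked example, throughput in bits per second can be derived from the bytes transferred over a measurement window (the numbers are illustrative):

```python
# Worked example: throughput from bytes transferred in a measurement window
bytes_received = [20_480, 18_944, 22_016, 19_456]   # per response (illustrative)
duration_s = 2.0                                    # length of the window

throughput_bps = sum(bytes_received) * 8 / duration_s   # bits per second
requests_per_s = len(bytes_received) / duration_s

print(f"{throughput_bps:.0f} bits/s, {requests_per_s:.1f} req/s")
# 323584 bits/s, 2.0 req/s
```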

32. What is profiling in performance testing?


Profiling is a procedure for pinpointing performance bottlenecks at a fine-grained level. It is typically performed by developers or performance testers. You can profile any application layer under test; profiling an application usually requires specialized performance-profiling tools for the application servers.
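
As one concrete example of code-level profiling (Python's built-in cProfile here, not a LoadRunner facility), a hot function can be profiled like this:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately unoptimized hot loop to give the profiler something to find
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(3)
report = stream.getvalue()
print("function calls" in report)  # True: the report lists call counts and times
```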

33. What is the Modular approach of scripting?


In the Modular approach, a function is created for each request (For Example, login, logout, save, delete, etc.) and these functions are called wherever required. This approach gives more freedom to reuse the request and saves time. With this approach, it is recommended to work with web custom requests.
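
The modular idea can be sketched as plain functions, one per request; the request names here are hypothetical:

```python
# Modular scripting sketch: one function per business request, reused across flows
def login(session): session.append("login")
def save(session): session.append("save")
def logout(session): session.append("logout")

def save_record_flow():
    session = []
    login(session)
    save(session)      # the same building blocks can be called from any flow
    logout(session)
    return session

print(save_record_flow())  # ['login', 'save', 'logout']
```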

34. What are the different types of goals in Goal-Oriented Scenario?


LoadRunner has five different types of goals in Goal-Oriented Scenario. These are:

  • The number of concurrent Vusers
  • The number of hits per second
  • The number of transactions per second
  • The number of pages per minute
  • The transaction response time

35. How is each step validated in the script?


Each step in the script is validated against the content of the returned page. A content check verifies whether specific content is present on the web page or not. There are two types of content checks that can be used in LoadRunner:

Text Check: This checks for a text/string on the web page.

Image Check: This checks for an image on a web page.


36. How is the VuGen script modified after recording?


Once the script is recorded, it can be modified with the following elements:

  • Transaction
  • Parameterization
  • Correlation
  • Variable declarations
  • Rendezvous Point
  • Validations/Checkpoint

37. What are Ramp-up and Ramp Down?


Ramp-up: Rate at which virtual users add to the load test.

Ramp Down: Rate at which virtual users exit from the load test.
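
A ramp-up schedule can be sketched as follows (the step size and target are illustrative); a ramp-down is simply the mirror image:

```python
# Ramp-up sketch: a schedule that adds `step` virtual users until the target is reached
def ramp_schedule(target_vusers, step):
    active, schedule = 0, []
    while active < target_vusers:
        active = min(active + step, target_vusers)   # never overshoot the target
        schedule.append(active)
    return schedule

print(ramp_schedule(10, 2))  # [2, 4, 6, 8, 10]
```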

38. What is the advantage of running Vusers as threads?


Running Vusers as threads helps generate more virtual users from a machine, due to the small memory footprint of a Vuser running as a thread.

39. What is wasted time in the VuGen Replay log?


Wasted time is time that a real browser user would never experience; it is time spent on activities that support the test analysis, such as logging, record keeping, and custom analysis code.

40. How do you enable text and image checks in VuGen?


This can be done by using the functions web_find (for text checks) and web_image_check (for image checks), and by enabling image and text checks in the run-time settings:

Run-Time Settings > Preferences > Enable the image and text check checkbox.

41. What is the difference between web_reg_find and web_find?


The web_reg_find function is processed before the request is sent and is placed before the request in the VuGen script, whereas the web_find function is processed after the response to the request arrives and is placed after the request in the VuGen script.

42. What are the challenges that you will face to script the step “Select All” and then “Delete” for any mail account?


In this case, the POST data for "Select All" and "Delete" will change every time depending on the number of mails available. The recorded requests for the two should therefore be replaced with custom requests, and string building is required to construct the POST data.

43. What is the difference between pacing and think time?


Pacing is the wait time between action iterations, whereas think time is the wait time between transactions.
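
The difference can be shown with a simulated clock (durations are illustrative). Here pacing is applied as a wait after each iteration ends, one of the pacing modes LoadRunner offers:

```python
# Think time pauses *within* an iteration; pacing spaces out iteration starts.
# A simulated clock (values in seconds) keeps the sketch deterministic.
THINK_TIME = 2.0   # wait between transactions inside one iteration
PACING = 10.0      # wait applied after an iteration before the next one starts

clock = 0.0
iteration_starts = []
for _ in range(3):
    iteration_starts.append(clock)
    clock += 1.0          # transaction 1 executes
    clock += THINK_TIME   # user "thinks" before the next transaction
    clock += 1.5          # transaction 2 executes
    clock += PACING       # pacing delay before the next iteration begins

print(iteration_starts)  # [0.0, 14.5, 29.0]
```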

44. What is the difference between front-end and back-end performance testing? Which one is more important?


Both front-end and back-end performance testing measure how fast an application responds, but they measure different components of that overall user response time.

Front-end performance is concerned with how quickly text, images, and other page elements are displayed on a user’s browser page. Back-end performance is concerned with how quickly these elements are processed by the site’s servers and sent to the user’s machine upon request. Front-end performance is the part of the iceberg above the water line, and back-end performance is everything underneath that you can’t see.

Both are important because they can both determine whether a user continues to use your application. Front-end performance tends to be easier to test and can provide some quick wins due to the large amount of optimisation tweaks that can be done without writing code. Back-end performance tends to be more difficult to test because it often uncovers problems with the underlying infrastructure and hardware that are of a more technical nature.

45. Why does performance testing matter?


Performance testing matters because application performance has a significant impact on user experience. A site that is unreachable or slow to load due to an inability to cope with unexpected load will cause users to browse to competitor’s sites and tarnish the brand’s reputation.

46. How do you know when a load test has passed?


Ideally, you would have discussed your nonfunctional requirements with key stakeholders before load testing begins. This means that you set your own pass criteria before you even run the tests. You would ideally have a list of specific transactions (selected based on criticality or complexity according to the business) whose response time needs to fall under a threshold you've predetermined. "Fast" is not specific enough; a number is better. Depending on what kind of tests you're running (soak, stress, volume, etc.) you may have other nonfunctional requirements about duration, resource utilisation on the server side, or specific outcomes to scenarios you'd like to test.

As a general rule, don’t rely on a load test tool to determine whether your load test has passed. Rely on it to report your results, but always compare the results to the requirements to determine successes or failures.

47. What would you advise to clients who say they can’t afford to take a performance test because they don’t have the resources to maintain several load generators on site?


This is the main reason that performance testing has for so long been considered a luxury that only big companies can afford. Luckily technology moves on, and in 2018 we’re at a point where everyone can load test. The big innovation here has been the cloud and the ability to spin up thousands of virtual machines with a few mouse clicks. Services like Amazon AWS, Microsoft Azure and Google Cloud make it so that every budding entrepreneur can “borrow” the computing hardware necessary to do cloud load testing with thousands of users and then give them back after the test, without the hassle and cost of maintaining them. I would advise the clients to look for a cloud load testing solution that utilizes virtual machines on the cloud to run their tests affordably.

48. You run a load test against a server with 4GB RAM and the results show an average response time of 30 seconds for a particular request. The production server has been allocated 8GB RAM. What would you expect the average response time of the same request to be in production given the same load?


While you might expect the response time to be halved to 15 seconds, reality is rarely that convenient. Response times are a factor of much more than memory: CPU utilisation, network throughput, latency, load balancing configuration, and application logic all influence load tests. You can't assume a linear improvement in response time just because you've upgraded one part of the hardware. This is why it's important to load test against an environment that is as production-like as possible.

49. What is a percentile and why would you look at percentile response times when you already have average response times?


A percentile is a statistical measure that describes a value that a certain percentage of the sample either meets or falls under. For example, a 90th percentile response time of 5 seconds means that 90% of the responses took 5 seconds or less to be returned. Percentiles are an important measure because they soften the impact that outliers have on more inclusive measures such as averages. A transaction with an average response time of 2.5 seconds may seem perfectly acceptable to the business, but when the 90th percentile response time is 20 seconds, that is a good reason to investigate further.
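
A short worked example, using the nearest-rank percentile method on made-up response times:

```python
# 90th percentile response time (nearest-rank method) vs the average
response_times = [1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.1, 2.3, 2.5, 20.0]  # seconds

average = sum(response_times) / len(response_times)

def percentile(data, pct):
    ordered = sorted(data)
    rank = max(1, round(pct / 100 * len(ordered)))  # nearest-rank position
    return ordered[rank - 1]

print(round(average, 2))               # 3.64, skewed upward by the 20 s outlier
print(percentile(response_times, 90))  # 2.5, what 90% of users actually saw
```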

50. How many graphs can you monitor in the Controller at a time? What is the maximum?


One, two, four, or eight graphs can be displayed at a time. The maximum number of graphs that can be monitored at a time is eight.

51. You have an application that shows the exam results of the student. Corresponding to the name of each student its mentioned whether he passed or failed the exam with the label of “Pass” and “Fail”. How will you identify the number of passed and failed students in the VuGen script?


For this, a text check is used on the web page for the texts "Pass" and "Fail". Through the function web_reg_find, we can capture the number of occurrences found on the web page with the help of "SaveCount". SaveCount stores the number of matches found. For example:

  • web_reg_find(“Text=Pass”, “SaveCount=Pass_Student”, LAST);
  • web_reg_find(“Text=Fail”, “SaveCount=Fail_Student”, LAST);

52. During the load test, what is the optimum setting for Logs?


For a load test, the log level is set to minimal. This can be achieved by setting the log level to Standard log and selecting the radio button "Send messages only when an error occurs".

53. How will you handle the situation in scripting where for your mailbox you have to select any one mail randomly to read?


For this, we record the script reading the first mail and inspect what is posted in the request to read it, such as mail IDs or row numbers. From the response where the list of emails appears, we capture all the email IDs/row numbers with a correlation function, setting the Ordinal to All (ORD=All). We then replace the email ID in the read request with one randomly selected from the list of captured email IDs.

54. What is the Think Time? What is the Threshold level for think time and how can this change?


Think time is the wait time inserted intentionally between actions in the script to emulate a real user's wait time while performing an activity on the application. The threshold level for think time is the level below which recorded think time will be ignored. It can be changed from Recording Options > Script > Generate think time greater than threshold.

55. What are the common performance problems users face?


  • Longer loading time
  • Poor response time
  • Poor Scalability
  • Bottlenecking (coding errors or hardware issues)

56. List out some common performance bottlenecks?


Some common performance bottlenecks include

  • CPU utilization
  • Memory utilization
  • Network utilization
  • OS limitations
  • Disk usage

57. List out some of the performance testing tools?


  • HP LoadRunner
  • HTTP Load
  • Proxy Sniffer
  • Rational Performance Tester
  • JMeter
  • Borland Silk Performer

58. Why does JMeter become a natural choice of tester when it comes to performance testing?


JMeter tool has benefits like

  • It can be used for testing both static resources such as HTML and JavaScript, and dynamic resources such as Servlets, Ajax, and JSP.
  • JMeter can determine the maximum number of concurrent users that your website can handle.
  • It provides a variety of graphical analyses of performance reports.

59. What does the Performance Testing process involve?


  • Right testing environment: Figure out the physical test environment before carrying out performance testing, including hardware, software and network configuration.
  • Identify the performance acceptance criteria: This contains constraints and goals for throughput, response times and resource allocation.
  • Plan and design performance tests: Define how usage is likely to vary among end users, and find key scenarios to test for all possible use cases.
  • Test environment configuration: Before execution, prepare the testing environment and arrange tools, other resources, etc.
  • Test design implementation: Create a performance test according to your test design.
  • Run the tests: Execute and monitor the tests.
  • Analyze, tune and retest: Analyze, consolidate and share test results. After that, fine-tune and test again to see whether there is any enhancement in performance. Stop the test if the CPU is causing bottlenecking.

60. List out some of the parameters considered for performance testing?


  • Memory usage
  • Processor usage
  • Bandwidth
  • Memory pages
  • Network output queue length
  • Response time
  • CPU interruption per second
  • Committed memory
  • Thread counts
  • Top waits, etc.

61. List out the factors you must consider before selecting performance tools?


  • Customer preference tool
  • Availability of license within customer machine
  • Availability of test environment
  • Additional protocol support
  • License cost
  • Efficiency of tool
  • User options for testing
  • Vendor support

62. Mention what is the difference between JMeter and SOAPUI?


JMeter:

  • Used for load and performance testing of HTTP, JDBC, JMS, Web Services (SOAP), etc.
  • Supports distributed load testing.

SoapUI:

  • Specific to web services and has a more user-friendly IDE.
  • Does not support distributed load testing.
  • Has plugin support for most IDEs.

63. Mention what is the difference between performance testing and functional testing?


Functional Testing:

  • Done to verify the accuracy of the software with definite inputs against expected output.
  • Can be done manually or automated.
  • One user performs all the operations.
  • Customer, Tester and Developer involvement is required.
  • A production-sized test environment is not necessary, and H/W requirements are minimal.

Performance Testing:

  • Done to validate the behavior of the system at various load conditions.
  • Gives the best results if automated.
  • Several users perform the desired operations.
  • Customer, Tester, Developer, DBA and N/W management team involvement is required.
  • Requires a close-to-production test environment and several H/W facilities to populate the load.

64. What are the benefits of LoadRunner over other testing tools?


The benefits of the LoadRunner testing tool include:

  • Versatility
  • Test Results
  • Easy Integrations
  • Robust reports
  • Enterprise Package

65. Explain what is Endurance Testing and Spike Testing?


Endurance Testing: A type of performance testing conducted to evaluate the behavior of the system when a significant workload is applied continuously.

Spike Testing: A type of performance testing performed to analyze the behavior of the system when the load is increased substantially and suddenly.

66. Explain what are the common mistakes done in Performance Testing?


The common mistakes done in Performance Testing are

  • Direct jump to multi-user tests
  • Test results not validated
  • Unknown workload details
  • Too small run durations
  • Lacking long duration sustainability test
  • Confusion on definition of concurrent users
  • Data not populated sufficiently
  • Significant difference between test and production environment
  • Network bandwidth not simulated
  • Underestimating performance testing schedules
  • Incorrect extrapolation of pilots
  • Inappropriate base-lining of configurations

67. Mention the steps required in JMeter to create a performance test plan?


To create a performance test plan in JMeter

  • Add thread group
  • Add JMeter elements
  • Add Graph result
  • Run test & get the result

68. Explain how you can execute spike testing in JMeter?


In JMeter, spike testing can be done using the Synchronizing Timer. Threads are held by the timer until a specific number of threads have been blocked; they are then released at once, creating a large instantaneous load.

69. Mention what is the difference between the benchmark testing and baseline testing?


Benchmark Testing: It is the method of comparing your system's performance against an industry standard set by another organization.

Baseline Testing: It is the procedure of running a set of tests to capture performance information. When a future change is made in the application, this information is used as a reference.

70. What is a concurrent user hit in load testing?


In load testing, when multiple users hit the same event of the application under test at the same time, without any time difference, it is called a concurrent user hit.

71. Explain the basic requirements of the Performance test plan.


Any Software Performance Test Plan should have the minimum contents as mentioned below:

  • Performance Test Strategy and scope definitions.
  • Test process and methodologies.
  • Test tool details.
  • Test cases details including scripting and script maintenance mechanisms.
  • Resource allocations and responsibilities for Testers.
  • Risk management definitions.
  • Test Start /Stop criteria along with Pass/Fail criteria definitions.
  • Test environment setup requirements.
  • Virtual Users, Load, Volume Load Definitions for Different Performance Test Phases.
  • Results Analysis and Reporting format definitions.

72. How is the Automated Correlation configured?


Any setting related to Automated Correlation can be configured via General Options->Correlation. Correlation rules are set from Recording Options->Correlations.

73. How do you decide the number of load generator machines required to run a test?


The number of load generators required depends entirely on the protocol used to create the script and the configuration of the load generator machine. Each protocol has a different memory footprint, and this decides how many virtual users can be generated from a machine (load generator) of a given configuration.
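
The sizing arithmetic can be sketched as follows. The per-Vuser footprint and the generator's usable RAM below are invented assumptions; in practice they come from a pilot run with the chosen protocol.

```python
import math

# Assumed figures for illustration only (not published LoadRunner numbers)
MB_PER_VUSER = 10          # memory footprint of one Vuser for this protocol
USABLE_RAM_MB = 8000       # usable memory on one load generator machine
TARGET_VUSERS = 2000       # total virtual users the test requires

# How many Vusers one generator can host, and how many generators are needed
vusers_per_generator = USABLE_RAM_MB // MB_PER_VUSER
generators_needed = math.ceil(TARGET_VUSERS / vusers_per_generator)

print(vusers_per_generator, generators_needed)  # 800 Vusers/machine, 3 machines
```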

74. What capabilities exactly do you look for while selecting a performance testing tool?


A performance testing tool should be capable of:

  • Testing an application built using multiple technologies and hardware platforms.
  • Determining the suitability of a server for testing the application.
  • Testing an application with a load of hundreds, thousands, and even tens of thousands of virtual users.

75. How do concurrent users differ from simultaneous users?


All simultaneous users are concurrent users, but the reverse is not true.

All the Vusers in a running scenario are concurrent users, as they are using the same application at the same time but may or may not be doing the same tasks. Simultaneous users perform the same task at the same time. Concurrent users are made simultaneous through rendezvous points. Rendezvous points instruct the system to wait until a certain number of Vusers arrive so that they can all perform a particular task simultaneously.


76. How do you identify which values need to be correlated in the script? Give an example.


This can be done in two ways:

  • Record the two scripts with similar steps and compare them using WDiff utility. (See tutorial Correlation).
  • Replay the recorded script and scan for correlation. This gives a list of values that can be correlated.

Session-Id is a good example of this. When two scripts are recorded and compared using the WDiff utility, the session ids in the two scripts will be different, and WDiff highlights these values.
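
The idea can be sketched in Python with a regular expression playing the role of the correlation boundaries (the responses and cookie values below are made up):

```python
import re

# Two recorded responses that differ only in the session id -- exactly the
# kind of value WDiff would highlight as a correlation candidate.
response_run1 = "Set-Cookie: JSESSIONID=A1B2C3; Path=/"
response_run2 = "Set-Cookie: JSESSIONID=X9Y8Z7; Path=/"

# Left/right boundaries play the role of a correlation rule
pattern = re.compile(r"JSESSIONID=([^;]+)")
sid1 = pattern.search(response_run1).group(1)
sid2 = pattern.search(response_run2).group(1)

# Different values across runs -> this value must be correlated, not hardcoded
print(sid1, sid2, sid1 != sid2)
```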

77. How does caching affect Performance Testing results?


When data is cached in the server’s memory, the server does not need to fetch the result, so no server activity is triggered. The test result then does not reflect the performance real users would see when using the application with different data.

78. How will you stop the execution of a script on error?


This can be achieved through the lr_abort function. The function instructs the Vuser to stop executing the Action section and to end the execution by executing the vuser_end section. This function is helpful in handling a specific error.

It can also be used to handle a situation, rather than an error, where execution is not possible. The function assigns the “Stopped” status to the Vuser that stopped due to lr_abort. In the Run-Time settings, “Continue on error” should be unchecked.

79. How would you decide which tool to use except for budget issues? Think about you having tons of money.


The skills your team has are the most important issue. Some tools require JavaScript, others Scala or Python. We need to consider the test team’s skills.

Protocol support: Some tools can simulate only a limited range of protocols, so we need to understand which protocols we have to test. Combining protocols in a test is also crucial, as a scenario might start with a TCP/IP request and continue with HTTPS. Combining and maintaining them must be easy.

Reporting: Some tools generate poor reports, and you have to deal with all those numbers yourself to come up with a conclusion. We need detailed reports that show how many users an application can handle and which pages or modules load slowly. The most important report is the Response Time Graph.

Installation: Some tools install in a minute, while some commercial tools require many components to be installed before you can start using them. The OS versions they support also matter. For example, JMeter supports Windows, Linux, and other environments, but HP LoadRunner requires Windows for its core modules.

Cloud integration: Creating huge loads requires many resources, so cloud integration is a must. You can run JMeter and Gatling on SaaS platforms such as Loadium and BlazeMeter.

80. What’s the difference between a record-and-play test and API testing?


In API testing, we only make requests to an endpoint. In a record-and-play performance test, we make requests not only to the endpoint but also to HTML, JS, and CSS files, or to a CDN server to retrieve static images. Record-and-play testing therefore increases the test coverage.

81. Why would you need a CSS extractor in a performance test?


In web application testing, we need to extract data from a page, such as the price of a product or a username. To perform those operations, we use a CSS extractor.

82. What kind of data extraction strategies can you use besides CSS?


You can use regular expressions to extract data. JsonPath is a good way to extract data from a JSON file. You can also use XPath for SOAP web services.
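
A minimal sketch of these three strategies in Python. The payloads and patterns are invented for illustration; JMeter's own extractors (Regular Expression Extractor, JSON Extractor, XPath Extractor) are configured in the GUI, and plain key access stands in for a JsonPath expression such as $.user.name.

```python
import re
import json
import xml.etree.ElementTree as ET

# 1. Regular expression over raw text
html = '<span class="price">19.99</span>'
price = re.search(r'class="price">([\d.]+)<', html).group(1)

# 2. JSON extraction (JsonPath-style navigation via plain key access)
body = json.loads('{"user": {"name": "alice"}}')
username = body["user"]["name"]

# 3. XPath over a SOAP-like XML response
xml_doc = ET.fromstring("<Envelope><Body><Result>OK</Result></Body></Envelope>")
result = xml_doc.find("./Body/Result").text

print(price, username, result)
```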

83. How would you perform a performance test on a mobile application?


Mobile applications are no different from any web or desktop application. They use similar protocols, but recording requests from a mobile device is tricky. You need to set up a proxy and install SSL certificates on the target device to be able to capture all requests. mitmproxy and Charles Proxy are very powerful tools for that purpose. After capturing those requests, designing and executing the test is no different.

84. What is mean, mode and median? Why are those crucial to analyze a performance test result?


The mean is the average: the sum of all the numbers divided by the count of numbers.

The median is the middle value in an ordered (smallest to largest) list of numbers.

The mode is the value that occurs most often. We can use these measures to analyze response time and distribution graphs and validate whether the response times are stable. Additionally, it is useful to check whether the response times follow a normal distribution.
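
These measures can be computed with Python's statistics module. The response times below are invented; note how a single slow outlier pulls the mean far above the median, which is exactly the kind of instability the comparison reveals.

```python
import statistics

# Response times in milliseconds from a hypothetical test run
response_times = [120, 130, 130, 150, 900]

mean_ms = statistics.mean(response_times)      # skewed upward by the 900 ms outlier
median_ms = statistics.median(response_times)  # robust middle value
mode_ms = statistics.mode(response_times)      # most frequent value

# A large gap between mean and median signals unstable response times
print(mean_ms, median_ms, mode_ms)  # 286 130 130
```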

85. Why are parameterization and data correlation a must for performance testing?


Nowadays, many applications use caching between the server and the user. If we don’t use random or parameterized data, we may get responses directly from the cache server, and our actual application server doesn’t receive the load. The more parameterization, the more genuine load reaches the server.
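
The point can be sketched as follows. The URL and parameter names are invented for illustration; the idea is that every generated request is distinct, so a cache sitting in front of the application server cannot short-circuit them all.

```python
import uuid

def build_request(user_id):
    """Build a request URL with parameterized user and random search data."""
    return f"https://example.com/search?user={user_id}&q=item-{uuid.uuid4().hex[:8]}"

# 100 virtual users, each with its own user id and random query term
requests_sent = [build_request(i) for i in range(100)]

# Every request is unique -> cache misses -> real load on the application server
print("unique requests:", len(set(requests_sent)))
```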

86. Explain the basic workflow of JMeter?


JMeter acts like a group of users sending requests to a target server. It collects responses from target servers and other statistics which depict the performance of the application or server via graphs or tables.

87. Name the protocols supported by JMeter?


Following are some of the protocols supported by JMeter.

Web Protocol: To test the web applications, it supports both HTTP and HTTPS protocols.

Web Services: To test web services applications, it supports both SOAP and REST.

FTP: File Transfer Protocol provides the support for testing the FTP servers and applications.

Database via JDBC: used for testing the database applications.

LDAP: Lightweight Directory Access Protocol

Message-oriented middleware (MOM) via JMS

Mail: used for testing of mail servers such as SMTP(S), POP3(S) and IMAP(S)

MongoDB (NoSQL): it is a recently supported protocol by JMeter.

Native commands or shell scripts


88. List the important features that JMeter supports?


Following are some of the key features of JMeter.

  • It’s open-source software and is freely available.
  • It has a very simple and intuitive GUI.
  • JMeter can run load and performance tests against many different server types such as Web – HTTP, HTTPS, SOAP, Database via JDBC, LDAP, JMS, and Mail via POP3.
  • It is a platform independent tool. On Linux or Unix, the user can open the JMeter tool by clicking on the JMeter shell script. However, on Windows, it can be invoked by starting the jmeter.bat file.
  • It has full Swing and lightweight component support (the precompiled JAR uses packages javax.swing.*).
  • JMeter prepares test plans in XML format.
  • Its full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.
  • It is highly extensible.
  • Can also be used to perform automated and functional testing of your application.

89. What is seen on the screen when you open a JMeter?


By default, JMeter screen displays the Test Plan and Workbench tabs.

90. What is a Test Plan in JMeter? List some of the test plan elements available in JMeter.


A Test Plan defines and provides a layout of how and what to test. JMeter can be used to prepare a Test Plan for the web application as well as the client-server application. It behaves like a container for running tests.

A complete Test Plan comprises one or more of the following elements.

  • ThreadGroup
  • Controllers
  • Listeners
  • Timers
  • Assertions
  • Configuration Elements
  • Pre-Processor Elements
  • Post-Processor Elements

A Test Plan should have at least one thread group.

91. Explain the role of Workbench?


It is simply an area to store test elements while you are in the process of constructing a test. Once you’ve finished designing the test items in the Workbench, you can copy or move them into the Test Plan.

It also contains non-test elements like:

1. Http mirror server

2. HttpProxy server

These items aren’t available in the thread group and Test plan.

92. What is a Thread Group? List down its main parts?


Thread group elements are the beginning points of any Test Plan. It is mandatory to have at least one thread group in the Test Plan.

One should know the following about the Thread Group.

  •  All controllers and samplers must be under a thread group.
  • Listeners may be placed directly under the test plan, in which case they will apply to all the thread groups.
  • The controls for a thread group allow you to:

i. Set the number of threads.

ii. Define the ramp-up period.

iii. Set the number of times to execute the test.

Following are the parts of a thread group.

Sampler: It sends various types of requests to the server.

Listeners: It saves the results of the Run. It can be opened for viewing also.

Timer: It makes the run more realistic by inserting delays between the requests.

Controller: It is responsible for controlling the flow of the thread group. An example scenario is where the request definition includes if-then-else or loop structure.

Config Element: information about the requests to be added to work with samplers.

Assertion: To check if the response is generated within the given time and contain the expected data.
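
The arithmetic behind the three Thread Group controls above can be sketched as follows (the values are illustrative):

```python
# Thread Group settings (illustrative values)
threads = 10      # number of threads (virtual users)
ramp_up_s = 100   # ramp-up period in seconds
loops = 5         # number of times each thread executes the test

# JMeter starts one new thread every ramp_up / threads seconds
start_interval_s = ramp_up_s / threads   # a new thread every 10 s

# Total samples produced by one sampler in this group
total_samples = threads * loops          # 50 samples

print(start_interval_s, total_samples)
```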

93. What are JMeter controllers? Explain their types?


JMeter provides two types of Controllers.

Samplers Controllers: It enables JMeter to post specific types of requests to a server. It simulates a user’s request for a page from the target server.

For example, you can add an HTTP Request sampler if you need to perform a POST, GET, or DELETE operation on an HTTP service.

Logical Controllers: It lets you control the order of processing of Samplers in a Thread. Logic Controllers can change the order of request coming from any of their child elements.

Some examples are ForEach Controller, While Controller, Loop Controller, IF Controller, Run Time Controller, Interleave Controller, Throughput Controller, and Run Once Controller.

94.What is a Configuration element? List down its elements.


Configuration Element allows you to create defaults and variables to be used by Samplers. It can be used to add or modify requests made by the Samplers. It is processed at the start of its scope, before any Samplers in the same scope. Thus, we can say that a configuration element is accessible only from inside the branch where it is placed.

Following are the key features of Configuration Element.

CSV Data Set Config: It supports reading line by line from a file and splitting the line into variables.

HTTP Authorization Manager: You can specify one or more user logins for web pages that are restricted using server authentication.

Java Request Defaults: Using this you can set default values for Java testing.

HTTP Cookie Manager: The Cookie Manager element has two functions

  • It stores and sends cookies just like a web browser.
  • Second, you can manually add a cookie to the Cookie Manager. However, if you do this, the cookie will be shared by all JMeter threads.

HTTP Request Defaults: It lets you set default values to be used by your HTTP Request controllers.

HTTP Header Manager: It enables you to add or override the HTTP request headers.

95. What are Listeners? List out a few JMeter Listeners.


It enables you to view the results of Samplers in the form of tables, graphs, trees or simple text in some log files. It provides visual access to the data gathered by JMeter for the test cases executed for the Sampler component of JMeter.

JMeter supports the addition of Listeners anywhere in the tests that are included directly in the Test Plan. They will collect data only from elements at the same or lower level.

Some of the important JMeter Listeners are as follows.

  • Spline Visualizer
  • Aggregate Report
  • View Result Tree
  • View Result in Table
  • Monitor Results
  • Distribution Graph(alpha)
  • Bean Shell Listener
  • Summary Report
  • Aggregate Graph
  • Assertion Results
  • Backend Listener
  • Comparison Assertion Visualizer
  • Generate summary results
  • Graph Results
  • JSR223 Listener
  • Mailer Visualizer
  • Response Time Graph
  • Save responses to a file
  • Simple data writer

96. Explain what a Pre-processor Element? Name a few of them.


It is used to configure a sample request before it executes, or to update variables that are not extracted from the response text.

Some of the main pre-processor elements are as follows.

1. A modifier for HTTP URL.

2. HTTP user parameter modifier.

3. HTML link parser.

4. BeanShell preprocessor.

97. Explain what a Post-processor is?


Post-processors are used to call an action after a request is made.

For example, suppose JMeter sends an HTTP request to the web server, and the user wants JMeter to stop sending further requests if the web server returns an error. In this case, the user can use a post-processor to perform that action.

98. What is the execution order of Test Elements in the Test Plan of JMeter?


Following is the order of execution of the Test Plan elements.

  • Configuration elements
  • Pre-Processors
  • Timers
  • Sampler
  • Post-Processors (unless SampleResult is null)
  • Assertions (unless SampleResult is null)
  • Listeners (unless SampleResult is null)

99. Is it required to prepare a separate Test Plan using JMeter for the testing of the same application on a different Operating System?


Following facts support that a JMeter Test Plan can run on any OS.

1. JMeter is itself a pure Java-based application which makes it platform independent.

2. JMeter uses XML format while saving a Test Plan. Thus, they have nothing to do with any particular OS. You can run those Test Plans on any OS where JMeter can run.

100. How do you ensure re-usability in your JMeter scripts?


Taking the following points into consideration we can encourage re-usability in the test scripts:

1. Using config elements like “CSV Data Set Config” and “User Defined Variables” for supporting greater data reuse.

2. Modularizing the shared tasks and invoking them via a “Module Controller”.

3. Creating own Bean Shell functions and reusing them.

101. How can you reduce resource requirements in JMeter?


Following are the tricks that help in reducing resource usage.

  • Use a non-GUI mode.

jmeter -n -t test.jmx -l test.jtl

  • It is better to use as few Listeners as possible. With the “-l” flag shown above, results are written to a file, so Listeners can be deleted or disabled.
  • Disable the “View Result Tree” listener, as it consumes a lot of memory and may cause JMeter to run out of memory. It will also freeze the console. It is, however, safe to use the “View Result Tree” listener with only “Errors” checked.
  • Instead of using a similar Sampler a large number of times, use the same Sampler in a loop and use variables (CSV Data Set) to vary the sample data. Or perhaps use the Access Log Sampler.
  • Avoid using functional mode.
  • Use CSV output rather than XML. Also, you may like to read some of the common points.
  • Try to save the data that you need.
  • Use as few Assertions as possible.
  • Disable all JMeter graphs as they consume a lot of memory. All the real-time graphs can be viewed using the JTL tab in the web interface.
  • Do not forget to erase the local path from CSV Data Set Config when used.
  • Cleaning of the Files tab before every test run.


102. Explain what is Assertion in JMeter? List its types.


Assertion helps to verify that the server under test returns the expected results.

Some commonly used Assertions in JMeter are as follows.

1. Response Assertion: It lets the user compare the server response against a string pattern to check that the result is as expected. For example, while waiting for a response from the server, the Response Assertion verifies whether the server response contains the expected pattern string, e.g. “OK”.

2. Duration Assertion: You may need to verify that the response from the server arrives within a user-defined time. If it takes longer than the defined time, the assertion fails.

3. Size Assertion: It tests that each response coming from the server holds the expected number of bytes. It lets the user specify whether the size should be equal to, greater than, less than, or not equal to a given number of bytes. For example, if the assertion expects a response smaller than 5000 bytes and the actual response is 4500 bytes, the test case passes; otherwise it fails.

4. XML Assertion: It verifies that the response coming from the server holds the data in a correct XML format.

5. HTML Assertion: It is helpful for checking the syntax of the response data.
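
What the Duration and Size Assertions check can be sketched as plain Python functions. These are illustrative analogies with made-up thresholds, not JMeter code; in JMeter the same checks are configured in the GUI.

```python
def duration_assertion(elapsed_ms, max_ms):
    """Fails if the response took longer than the user-defined time."""
    return elapsed_ms <= max_ms

def size_assertion(body, expected_bytes, op):
    """Compares the response size against an expected byte count."""
    actual = len(body)
    return {"eq": actual == expected_bytes,
            "lt": actual < expected_bytes,
            "gt": actual > expected_bytes}[op]

response_body = b"x" * 4500  # a 4500-byte response

print(duration_assertion(180, 200))               # within a 200 ms budget
print(size_assertion(response_body, 5000, "lt"))  # under the 5000-byte limit
```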

103. What is Spike testing and how can we perform it in JMeter?


Spike testing means suddenly increasing the number of users at a certain point in the application and then monitoring its behavior during that interval.

In JMeter, spike testing can be performed using the Synchronizing Timer. This timer keeps blocking threads until a particular number of threads have been held back, then releases them all at once, creating a large instantaneous load.
