The subject of this post is ‘Is JMeter a good alternative to LoadRunner?’.
The short answer is yes, absolutely. The longer answer is, of course, more complex, more interesting and worthy of discussion.
We will not discuss the more technical aspects of the tools, as there are many posts that cover this already and it is not worth repeating the same thing again.
Let’s look at it from a real-world usability perspective.
What I did not want to do was just generate a list of protocols supported by both tools.
The easiest way to demonstrate the differences is to say that JMeter supports:
- Web: HTTP and HTTPS sites, both ‘Web 1.0’ and Web 2.0 (Ajax, Flex and Flex-WS-AMF),
- Web Services: SOAP / XML-RPC,
- Database via JDBC driver,
- Directory: LDAP,
- Message-oriented services via JMS,
- Mail services: POP3, IMAP, SMTP,
- FTP Service,
And I think you would be hard pressed to find a modern protocol that LoadRunner does not support, as the list is significant.
That was easy: if JMeter supports the protocol you are using then it is a good alternative; if it does not then it is not.
Simple, let’s move on.
Costs and Licenses
Not so simple this one.
JMeter has an open source license; these are licenses that comply with the Open Source Definition and, in brief, allow software to be freely used, modified and shared.
So, free to use basically.
Micro Focus LoadRunner licensing and costs are more complicated: you can use the Community edition, which allows you to use 50 virtual users free of charge with access to almost all of the protocols, with a few exceptions.
If you want to use more than 50 concurrent virtual users then it is not easy to price as there are many factors to consider.
Suffice to say it’s not free.
So if JMeter supports the protocol you are using, and you need a higher level of load than 50 concurrent virtual users can generate, then from a cost perspective JMeter is a good alternative to LoadRunner.
Opportunities to Learn and Educate
Both tools offer chances to learn and educate yourself. This is extremely important, as the continual development of yourself and your skill set is the best investment you can ever make.
It is fair to say that using JMeter will give you more opportunities to express yourself programmatically than LoadRunner, as LoadRunner gives you:
- Complex scenario building,
- Data extraction and interpretation,
- Intelligent script parameterisation,
- Speed Simulation and Pacing,
- Functions through Menu Options.
This is not an exhaustive list, but it highlights that with JMeter everything above requires you to think and devise your own approach through the use of the many
- Logic Controllers,
- Pre and Post Processors,
- Assertions etc…
that JMeter has to offer.
So if you want to improve your scripting skills, albeit primarily using Java or Groovy, or like to find ways to build complex and reusable performance testing scenarios through code then JMeter will give you plenty of opportunities.
You can write your own LoadRunner function libraries, but the need to do so is greatly reduced by its rich built-in functionality.
Obviously the common theme through this blog post of:
if your application is not supported then it’s not an alternative
is relevant, but the opportunities for self-development exist with both tools. The major difference is that self-development with LoadRunner centres on the use of the tool and becoming a product specialist,
whilst with JMeter you get to solve, sometimes difficult, scripting problems whilst learning about the scripting languages JMeter supports.
You are going to need talented Quality Assurance professionals to build and deliver performance testing in a sustainable and appropriate fashion and that takes the right kind of skills.
The choice of tool can easily be governed by the talented individuals you have at your disposal.
The answer to the question ‘Is JMeter a good alternative to LoadRunner?’ is yes, as long as it supports your protocols and you have resources at your disposal, or can find resources on the market, with the necessary skills.
The same is true of LoadRunner, if you don’t have the talent then it’s not an option.
The important point here is your Quality Assurance team must have the skills to use any performance testing technology and an investment in them is an investment in robust well thought out performance testing.
DevOps or not DevOps… that is the question
So basically we are looking at Continuous Integration, Jenkins Pipelines, Docker Images, Version Control etc…
Both tools support the DevOps approach to shift-left performance testing in the form of Jenkins plugins and both JMeter and LoadRunner Controller have an image on Docker Hub.
From a Developer in Test perspective, JMeter is probably preferable given the ease with which a .jmx file can be built and executed as an integration test in code; this is especially true when dealing with web-based technologies and web services.
Add to this the fact that adding JMeter to custom Docker images for use in distributed Jenkins configurations, where multiple slave images run in parallel using pod templates defined in the pipelines, is relatively straightforward, and JMeter and DevOps become a very powerful combination.
JMeter can also be wrapped in the Taurus CLI, allowing tests to be built very quickly, which again fits nicely with testing in development.
Both tools support cloud based load injection with OctoPerf being the best example of supporting JMeter at scale.
So again we have demonstrated that not only is JMeter a good alternative but, protocols notwithstanding, a preferable option for DevOps.
Whilst there are a number of articles and forums for both tools online, the most valuable free advice and knowledge-base articles belong to JMeter, it being the open source tool.
Whilst LoadRunner has a significant amount of online information, posts, forums and knowledge bases, you do need to be a license holder to access the supported knowledge bases and white papers.
The other significant difference in this broadly defined category is that you can download or clone the source code for JMeter and freely contribute to the JMeter Issues Page or build and distribute your own JMeter Plugins.
This overlaps to a degree with the Learn and Educate section: if you are keen to get involved in the future development, maintenance and enhancement of the performance testing tool you are using, then JMeter is your option.
Functions and Libraries
Using external libraries is straightforward in both tools: in JMeter you just move any .jar files to the lib folder, restart, and you are away.
LoadRunner ships with pretty much the whole C standard library, with most of the functions you need already contained in this comprehensive tool.
The need to write custom libraries for either is unlikely for most performance testing requirements.
Functions can be created and shared using both tools by making sure they are available on JMeter’s classpath or creating custom header files in LoadRunner.
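To make the sharing idea above concrete, here is a sketch of the kind of small utility you might compile into a .jar, drop into JMeter's lib folder, and then call from any JSR223 element (or mirror as a custom header file in LoadRunner). The class and method names are hypothetical, not part of either tool:

```java
// Hypothetical example: a tiny reusable helper of the sort teams package
// into a .jar and place on JMeter's classpath so every script can use it.
public class TestDataUtils {

    // Builds a unique, human-readable order reference for test data,
    // e.g. "PERF-1620000000000-0007".
    public static String orderReference(long timestampMillis, int sequence) {
        return String.format("PERF-%d-%04d", timestampMillis, sequence);
    }

    public static void main(String[] args) {
        System.out.println(orderReference(1620000000000L, 7)); // PERF-1620000000000-0007
    }
}
```

Once the jar is on the classpath, a Groovy JSR223 element can simply call `TestDataUtils.orderReference(...)`, which is exactly the sharing mechanism described above.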
So as our question is ‘Is JMeter a good alternative to LoadRunner’ the answer has to again be yes.
LoadRunner primarily uses ANSI C, whilst JMeter favours Groovy or the Java Expression Language (JEXL), as they both perform better under load.
There is little else to say on this subject, and it doesn’t really add to the ‘good alternative’ question, except to say that the only limitation to scripting in any of these languages is your imagination, and any technical challenge you may face in your performance test development can be overcome using any of them.
Growth and Progression
The performance testing tool space is ever growing; there are many tools to choose from, many of them a product of the ever-increasing move towards DevOps.
Have JMeter and LoadRunner evolved since they first launched?
The answer has to be an emphatic yes as they would not have survived this long.
LoadRunner has, as discussed in the section on protocols, always had an eye on increasing its support for many widely used protocols, as well as enriching its many built-in functions and processes.
JMeter has evolved primarily through the open source community supporting the development of samplers, controllers, listeners and so on, not only through libraries but also through JMeter version updates.
JMeter 5.1 made a number of significant changes, most at a technical level but many at a UI level, making the tool much more intuitive for the beginner.
You are probably getting bored of the ‘if the protocol is supported mantra’ but this obviously applies here as well.
One of the critical aspects of testing tools is their usability. Both tools present their functionality in different ways, and whilst on the surface LoadRunner has a user interface that appears more intuitive, with performance testing tools it really is all about how easy it is to build reusable tests in an agile world against tight timescales, rather than how pretty the interface is.
Both tools provide a significant amount of functionality, have the ability to build very complex non-functional tests, and provide plenty of opportunities to design complexity through scripting and built-in functions.
The scripting language is a significant part of usability, and this was discussed in a previous section of this post, so perhaps your scripting language of choice may be the deciding factor in which tool you prefer: scripting can significantly enhance your testing tool’s capability, and using a language you are comfortable with and use on a regular basis can make a huge difference.
Aside from scripting language let’s look at comparing our tools against the four factors we believe can help you create tests in the real world against real applications.
These comparisons are very simplistic, as delving into the complexities would require a blog post of their own (which we may write in the future), but they are enough to show the distinction between the tools.
This section covers the basic raw recording of a test.
JMeter comes with a good variety of templates to ensure that you build a test plan that is representative of the goals of your test and the technology you are testing.
It is easy to record through Firefox with the proxy correctly configured, of which there are many examples in the JMeter User Manual.
In JMeter 5.1.1 a Recorder dialog floats above the application, making it easy to give each transaction a suitable name and/or introduce a delay.
This is a significant improvement over releases prior to 5.1.1, where you had to toggle between the application under test and JMeter to ensure a consistent naming convention as you recorded.
The JMeter templates manage the inclusion of a Header Manager and Cookie Manager where required,
and will populate these with the necessary information for each sampler; the Cookie Manager does the correlation of cookies for you (see next section).
LoadRunner uses its VuGen component to record scripts.
It provides the ability to record both single-protocol and multi-protocol scripts and will automatically create three functions (vuser_init, Action and vuser_end),
which is an elegant way of compartmentalising the set-up steps, the repeatable actions that define the test and, finally, the tear-down steps.
During recording a Recorder dialog floats above the application, where you can choose which function the test steps should be recorded in, set transaction start and end boundaries, add comments and rendezvous points, and create additional functions beyond the default three, amongst other things.
LoadRunner does not require the manual configuration of a proxy as the tool launches the application as part of the start recording process.
Correlation is the capturing of dynamic values into a variable or a property for re-use later in the test to ensure accurate re-execution of recorded tests, examples being session variables or authentication tokens.
In JMeter you typically use a Post Processor for your correlation; this is extremely powerful in its ability to target the extraction of data from any part of the response, which makes the correlation both precise and performant.
There are many and varied post processors with examples being:
- Boundary Extractor
- Regular Expression Extractor
- XPath Extractor
- JSON Path Extractor
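The extractors above all apply the same basic idea: pull a dynamic value out of a response body using either boundaries or a pattern, ready for re-use in a later request. As an illustrative sketch (plain Java showing the underlying logic, not the JMeter elements themselves):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: the extraction logic that JMeter's Boundary and
// Regular Expression Extractors apply to a response body.
public class CorrelationSketch {

    // Boundary-style extraction: everything between a left and right boundary.
    static String extractByBoundary(String body, String left, String right) {
        int start = body.indexOf(left);
        if (start < 0) return null;
        start += left.length();
        int end = body.indexOf(right, start);
        return end < 0 ? null : body.substring(start, end);
    }

    // Regex-style extraction: first capture group of the first match.
    static String extractByRegex(String body, String regex) {
        Matcher m = Pattern.compile(regex).matcher(body);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String body = "<input type=\"hidden\" name=\"csrf\" value=\"abc123\"/>";
        System.out.println(extractByBoundary(body, "value=\"", "\"/>"));           // abc123
        System.out.println(extractByRegex(body, "name=\"csrf\" value=\"([^\"]+)\"")); // abc123
    }
}
```

In a real test plan, the extracted value would be stored in a JMeter variable and substituted into subsequent samplers.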
In JMeter, correlation is a heavily manual process: the only approach is to execute the test until failure, determine the dynamic data that requires correlation, replace it with variables and re-execute the test.
For longer tests this can be a long-winded, time-consuming process, unless you use the correlation rules engine that OctoPerf provides.
As part of the VuGen post-generation activity, a scan for values to correlate happens automatically and candidates are displayed in the Design Studio window.
LoadRunner also provides the ability to create your own correlation rules before recording starts as long as you know the left and right boundary for the value you want to capture.
There is the ability to manually correlate values and, whilst the auto-correlation mechanisms are useful, it is more than likely that some manual correlation will be required (web_reg_save_param being the classic example of a correlation function);
the manual correlation of tests in LoadRunner is an exercise in execute, correlate and re-execute and, as with JMeter, a potentially long-winded but necessary process.
Parameterisation is the way variable or random data is provided to the test, through the use of flat files, random variables, database queries and so on. It is used to ensure that the data you feed into your test changes on a test-by-test or iteration-by-iteration basis, providing diversity during execution.
In JMeter, parameterisation is achieved using Config Elements, such as the CSV Data Set Config, to provide access to static data.
The way the data is accessed and shared amongst threads and thread groups can be configured as part of the Config Element.
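As a minimal sketch of what a CSV Data Set-style element does conceptually, the snippet below (plain Java, not JMeter code) hands each iteration the next row of data and wraps around at the end, the behaviour JMeter calls "Recycle on EOF":

```java
import java.util.List;

// Illustrative only: conceptually how a CSV-backed config element feeds
// each virtual-user iteration the next data row, recycling at end of file.
public class CsvParameterSketch {
    private final List<String[]> rows;
    private int cursor = 0;

    CsvParameterSketch(List<String[]> rows) { this.rows = rows; }

    // Returns the next row, wrapping back to the first row on EOF.
    String[] nextRow() {
        String[] row = rows.get(cursor);
        cursor = (cursor + 1) % rows.size();
        return row;
    }

    public static void main(String[] args) {
        CsvParameterSketch data = new CsvParameterSketch(List.of(
                new String[]{"alice", "pw1"},
                new String[]{"bob", "pw2"}));
        for (int iteration = 0; iteration < 3; iteration++) {
            System.out.println(data.nextRow()[0]); // alice, bob, alice
        }
    }
}
```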
In LoadRunner there is an option to select the text you want to parameterise and using a dialog set the type of parameter you want to use, examples being:
- Group Name
- Iteration Number
- Load Generator Name
- Random Number
- Unique Number
- User-Defined Function
- VUser Id
The input files or function frameworks are automatically created and the way the data is accessed by the virtual users can be described to a very granular level.
The ability to build your raw tests into a cohesive scenario that fulfils a requirement or representative business journey is at the core of performance testing.
Loading microservices, web sites, applications, databases and the like in isolation may help in identifying CPU, memory, database indexing or fundamental application issues, but ultimately running components concurrently, as users would in production conditions, is needed to expose any issues with shared components and architecture; to do this we need to build scenarios from our tests.
The more common scenarios are
- Peak Hour Load
- Soak Tests
- Scalability Tests
- Resilience Testing
In JMeter you use Thread Groups or Timers to create scenarios, and there are a number of third-party Thread Groups and Samplers that can assist with this.
The distinction between the two approaches is that Thread Groups allow you to build goal-oriented scenarios that are dynamic in nature, whilst Timers allow you to throttle load in a more static fashion. Either way, in JMeter it is a manual process that requires a fair degree of thought and planning.
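The throttling just mentioned is, at heart, simple arithmetic. The sketch below (illustrative, not JMeter's actual timer implementation) shows the pacing calculation a constant-throughput-style timer performs: to hold a target transaction rate across a number of threads, each thread waits out the remainder of its pacing interval after each transaction:

```java
// Illustrative only: the arithmetic behind throttling load to a target rate.
// To hold N transactions per minute across T concurrent threads, each thread
// should start one transaction every (60000 * T / N) milliseconds, so the
// delay after a transaction is that interval minus the response time.
public class PacingSketch {
    static long pacingDelayMillis(double targetPerMinute, int threads, long lastResponseMillis) {
        long intervalMillis = Math.round(60000.0 * threads / targetPerMinute);
        return Math.max(0, intervalMillis - lastResponseMillis);
    }

    public static void main(String[] args) {
        // 120 transactions/min over 10 threads = one per thread every 5 s;
        // a 1200 ms response leaves 3800 ms of pacing delay.
        System.out.println(pacingDelayMillis(120, 10, 1200)); // 3800
    }
}
```

Note that when the response time exceeds the pacing interval the delay is clamped to zero, i.e. the target rate can no longer be met, which is itself a useful test result.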
LoadRunner Controller or Performance Center is used to build and execute scenarios where you have a choice between
- Manual Scenario
- Goal-Oriented Scenario
Complex scenarios can be built manually or driven by the goal-oriented pre-configured settings, where users and load can be dynamically altered during testing to support alternative scenarios and goals.
You still need to give your design some thought but the graphical interface takes a fair amount of the difficulty out of the process.
So as to the question is JMeter a viable alternative to LoadRunner the answer has to be yes, but requires a bit more effort to build complex scenarios.
Runtime and Analysis
Let’s discuss the merits of each tool during the execution and analysis of our tests. It is important to note that LoadRunner effectively offers, and requires, no integration with any third-party tooling, whilst JMeter, due to its open source nature, has many wrappers and tools that offer abstraction in terms of execution, reporting, analysis and so on.
For the sake of our comparison we will compare native LoadRunner with native JMeter, we have already discussed the fact that both run in Jenkins pipelines and how the tools compare from this perspective so this will not be a factor in this Runtime and Analysis comparison.
As with the Usability section above these topics are complex and exhaustive and the aim of this post is to provide a high level comparison of the basics with more detailed comparisons to come in future Blog Posts.
The best way to generate analysis data from a JMeter test is to use the Simple Data Writer listener to output the results of your test to either a .csv or .xml formatted file.
It offers a significant number of configurable output options and is lightweight when it comes to capturing results.
Execution should always be performed in headless mode with no listeners apart from the Simple Data Writer. This does restrict the level of information that JMeter provides during the run but, whilst the output is relatively basic, it is enough to ensure the test is:
- not producing errors
- keeping to the required transactions-per-minute rate
- giving an indication of response times
Reporting in JMeter is very easy with the file produced by the Simple Data Writer during execution.
The file is used as input to one of the many Listeners that produce reports and graphs to satisfy most of your test analysis requirements; examples of listeners are:
- Aggregate Report
- Hits per Second
- Response Time Distribution
- Response Times Over Time
- Connect Times Over Time
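To show what these listeners are doing with the Simple Data Writer file, here is an illustrative sketch of the sort of aggregation an Aggregate Report performs. The rows are a simplified, assumed form of the CSV output (timestamp, elapsed, label, response code, success flag); the real column set is configurable in JMeter:

```java
import java.util.List;

// Illustrative only: Aggregate Report-style calculations over simplified
// Simple Data Writer rows of "timestamp,elapsed,label,responseCode,success".
public class AggregateSketch {

    // Average of the 'elapsed' field (second column) across all result lines.
    static long averageElapsedMillis(List<String> lines) {
        long total = 0;
        for (String line : lines) total += Long.parseLong(line.split(",")[1]);
        return total / lines.size();
    }

    // Count of lines whose 'success' field (fifth column) is false.
    static long errorCount(List<String> lines) {
        return lines.stream().filter(l -> !Boolean.parseBoolean(l.split(",")[4])).count();
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
                "1620000000000,250,Login,200,true",
                "1620000001000,420,Login,200,true",
                "1620000002000,1310,Login,500,false");
        System.out.println("avg=" + averageElapsedMillis(lines) + "ms errors=" + errorCount(lines));
        // avg=660ms errors=1
    }
}
```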
The LoadRunner approach to analysis is much more real-time with a host of available graphs, including:
- Running VUsers
- Trans Response Time
- Trans/Sec [Passed]
- Trans/Sec [Failed, Stopped]
- Hits per Second
These can be dragged and dropped in the LoadRunner Controller User Interface to get a real time view of the test in progress with a clear breakdown of response times on a transaction by transaction basis.
You can drill down into the Scenario Status in real time with information available at log level of Passed, Failed and Errored transactions.
The LoadRunner Controller also allows you to build in a set of Service Level Agreements that your application under test needs to meet as a Goal definition and then supports you in building scenarios to explicitly test and track against your particular Service Level Agreements and subsequently report against these.
There is also a LoadRunner Analysis application that takes the output of each test and provides a significant number of reports and graphs to display every aspect of your test. It also offers the ability to merge graphs to show comparisons between multiple sets of tests, as well as the ability to produce results in the form of an HTML page.
Is JMeter a viable alternative in this section?
JMeter is again a viable alternative to LoadRunner, with reporting that is equally comprehensive. LoadRunner is perhaps slightly easier to use with its intuitive user interfaces, but there is no real distinction between the quality and quantity of the reporting.
Conclusion, is it a good alternative?
There is a common theme running through this post, and that is:
If JMeter supports the protocols you are testing then it is a good alternative to LoadRunner…
If it does not then it is not. This, I think, sums up the content of this blog post.
You could argue that in the new world of DevOps it is preferable rather than just a good alternative.
This article may seem a little biased towards JMeter, but that is not really the intention; the purpose of the post was to discuss whether JMeter is a good alternative, therefore the emphasis was always going to be on JMeter.
LoadRunner is a great tool but I genuinely believe that if you are using an application that is supported, from a protocol perspective, by JMeter you should not only consider it but embrace it especially when building performance tests in a continuous integration environment.
To wrap up this blog post, it is important to note that there are numerous open source and commercial performance testing tools on the market, all with one common denominator: their ability to build performance testing assets.
The ability to use them to deliver your organisation’s performance requirements is the critical thing, and that is down to you or your Quality Assurance professionals and your investment in them.