When we think of performance testing we normally think of thousands of requests with thousands of users generating huge volumes on our application under test or increasing the load until the application under test fails or runs out of resources.
This is not a good approach to determining load for your performance tests, for several reasons, and can in some cases render your performance testing meaningless.
This post will outline some of the pitfalls that are commonly made when it comes to generating load and will look at ways to suggest improvements to your approach to load profiles.
As we said in the introduction, it is extremely common for performance testing to consist of huge volumes of load with very little thought given to the accuracy of those volumes, and the same is true for the number of concurrent users being simulated.
Another pitfall is not considering how the application you are testing will be used once in production; building load profiles that will not occur in production is not good practice.
Not generating enough load. This may seem odd given what has been said so far, but it is just as common as running your tests at volumes that are way too high. If your calculated hourly transaction rate does not lead to concurrency of requests, then you are not really performance testing.
Why these pitfalls lead to bad testing
Volumes that are unrealistically high can lead to performance issues being investigated by the QA or Development teams that will not materialise in production.
Scaling your application resources (memory, CPU, etc.) to support a load profile that will never be reached in production can lead to high costs for physical or virtual hardware that will not be used.
Running unrealistic loads might lead to architectural changes to parts of the system that are considered bottlenecks at a cost to the programmes in terms of time and effort.
Running tests that are unrealistic may lead to reputational damage to the QA Performance team whose responsibility it is to build and run these tests.
A better way to think about load
Look to emulate real events that your system will have to handle. If you are an insurance company you will have a period of renewals, or month-end processing, that you can simulate. Your organisation may run marketing campaigns that result in high volumes over a short period of time, which is again a good candidate for load profiling.
Think about the shift patterns of your end users and how user load and concurrent sessions might be staggered to account for staggered working and break times.
Hourly or daily business volumes will not be evenly distributed through the day and should not be evenly distributed through your tests; consider shorter tests with accurate concurrency rather than an even distribution.
Consider a suitable ramp-up period for your tests. Most applications do not go from zero load to peak volumes straight away; the load builds gradually, and this should be factored into your testing.
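To make the ramp-up idea concrete, here is a minimal sketch that computes a linear ramp-up schedule. The target of 100 users, 300-second ramp-up and 30-second steps are illustrative assumptions, not figures from any real load profile:

```python
def ramp_up_schedule(target_users, ramp_seconds, step_seconds=30):
    """Return (elapsed_seconds, active_users) pairs for a linear ramp-up."""
    schedule = []
    steps = ramp_seconds // step_seconds
    for i in range(1, steps + 1):
        elapsed = i * step_seconds
        # Users come online in proportion to how far through the ramp we are.
        users = round(target_users * elapsed / ramp_seconds)
        schedule.append((elapsed, users))
    return schedule

# Hypothetical profile: 100 users over a 300-second ramp-up.
print(ramp_up_schedule(100, 300))
# [(30, 10), (60, 20), ..., (300, 100)]
```

In JMeter this gradual build is usually handled by the Thread Group's ramp-up period rather than hand-rolled code, but the arithmetic is the same.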
Low volumes lead to tests that have no concurrency; if you have a load profile that requires only a small volume of business transactions, then consider whether you need to run a performance test at all.
If there is no concurrency then a single-user test may be enough. If you do need to run a higher-volume test then you should execute with at least two concurrent threads, as it is possible that even low-volume events will have times when requests are made in parallel, and you should test for this.
Best ways to determine load
We have considered above some better ways to think about load and we will now look at ways we can determine our load profiles from these considerations.
Talk to the business users to determine how they are seasonally affected and what the high-volume business traffic will be; these are the end users after all, and they will have the best information on how the system is used and how volumes fluctuate through the year.
This needs to be backed up with evidence from your current production systems as to what the actual volumes are and how they peak during the day, the week or the month.
Build up a complete picture of how the application you are testing will be used so you can ensure you test at representative volumes.
Emulate real events in your business at volumes at which you have evidence for and importantly look to understand that peak for one business process may not overlap with peaks of another.
For example, do not execute peak new business and peak renewals with peak marketing campaigns if this will never happen in production, think of where there are overlaps in peak volumes and emulate these.
Think about concurrency; this is the key when it comes to your load profiles. Consider at what levels your system must handle concurrency and understand how it performs under those levels of load by creating simple, isolated tests. Performance tests can become complex and difficult to maintain, which is not ideal.
Think about what it is you are trying to solve and build tests accordingly; a small number of well-targeted, well-thought-out tests will tell you more about the performance of your application than complex business scenarios.
Implement load profiles into your test
We have looked at ways of targeting load and identifying what is suitable for your organisation, but you have to turn this into a test, and managing load profiles is best done using JMeter with its many timers.
Let's look at these. Before we list them, it is important to state that these are the timers that ship with JMeter as standard; other custom timers are available, but these are the defaults:
- Constant Timer,
- Gaussian Random Timer,
- Uniform Random Timer,
- Constant Throughput Timer,
- Precise Throughput Timer,
- Synchronization Timer,
- BeanShell Timer,
- JSR223 Timer,
- Poisson Random Timer.
Their definitions are best taken from the JMeter reference pages and are largely self-explanatory.
Using these timers, you can create any variety of load profile you want, whether it's rigid and static or variable and random.
You can manage load at a very granular level and maintain complete control over your testing load.
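To illustrate the arithmetic behind a Constant Throughput-style timer, here is a small sketch of the delay each thread would need between samples to hold a target rate. The 600 requests per minute and 10 threads are assumed figures for the example, not values from any particular test plan:

```python
def delay_per_thread_ms(target_per_minute, threads):
    """Milliseconds each thread should wait between samples to hit the target rate."""
    # Each thread is responsible for an equal share of the overall throughput.
    per_thread_per_minute = target_per_minute / threads
    return 60_000 / per_thread_per_minute

# Hypothetical target: 600 requests/minute shared across 10 threads.
print(delay_per_thread_ms(600, 10))  # 1000.0 ms between samples per thread
```

JMeter's Constant Throughput Timer does this calculation for you (including the "all active threads" sharing mode), but seeing the numbers makes it easier to sanity-check the profile a timer will produce.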
Benefits of better load profiles
You will get better, more meaningful tests, and tests that will expose performance-related issues should they exist.
Well-managed profiles allow you to flex and change your tests to vary the load they generate, so your tests can fulfill multiple purposes, e.g. load test, soak test, scalability test or smoke test.
Feel confident that the application under test will perform should your results indicate this, or feel confident in raising defects should your tests expose problems. Only complete confidence in your tests and the load profiles they emulate can provide this.
The benefits of performance testing are normally questioned when:
- Performance issues are raised in test that do not materialise in production, even when no changes were made to mitigate them,
- Performance issues are found in production when performance testing found nothing.
Performance testing is, wrongly, seen as a luxury when it comes to QA, when it is easily the equal of all other QA activities. To justify the cost of performance testing, the quality of the testing must be evident, and a good, well-structured load profile is key to this.
Repeatability and Pacing
Once you have load profiles that meet your application's business requirements, it is important that you are confident your tests are repeatable and that each execution will give you consistency in terms of the number of transactions and load concurrency.
Consistency in your execution is critical, as you want to be able to measure your response times between tests and compare results; an inconsistent load means you will not be comparing like-for-like tests.
Most measures of consistency between tests check whether response times are the same; not identical, as that is not a realistic aim, but within a certain percentage difference (+/- 10%) or perhaps within one or two standard deviations.
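A minimal sketch of the percentage-difference check described above, using the +/- 10% tolerance as the default. The sample response times are invented for illustration:

```python
import statistics

def runs_consistent(run_a_ms, run_b_ms, tolerance=0.10):
    """True if the mean response times of two runs differ by no more than tolerance."""
    mean_a = statistics.mean(run_a_ms)
    mean_b = statistics.mean(run_b_ms)
    return abs(mean_a - mean_b) / mean_a <= tolerance

# Hypothetical response times (ms) from two executions of the same test.
baseline = [210, 195, 220, 205, 200]   # mean 206 ms
repeat   = [215, 205, 225, 210, 200]   # mean 211 ms
print(runs_consistent(baseline, repeat))  # True: roughly a 2.4% difference
```

The same idea extends to the standard-deviation variant mentioned above: compare the repeat run's mean against the baseline mean plus or minus one or two standard deviations of the baseline sample.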
This repeatability aspect of your testing all comes down to the pacing of your tests. If your pacing is too quick then you will execute a high load over a short period of time, which may lead to your infrastructure's resources (CPU, memory, etc.) being saturated, and therefore your response times will suffer.
Using JMeter timers (discussed earlier) in conjunction with the correct test duration and think time all contributes to even pacing and will ensure that your load remains consistent.
Let’s have a look at the principles of pacing:
pacing = test duration / (number of transactions / number of concurrent users)
Let's say we have a user journey that takes 180 seconds to complete as a JMeter test; this journey has been built with timers used to balance the load so that a single iteration meets the business expectations.
If we need to complete 1800 of these user journeys in a 1-hour period, and the expectation is that 100 users will be available, then we can determine our pacing from this so that we meet our expected load profile.
1800 user journeys / 100 users = 18. Each user has to be active for 18 user journeys
We have 3600 seconds for our test (technically we have 3600 seconds - 180 seconds, as we want to finish our last iteration within the hour, but that makes the maths look messy!).
Each user journey will have 3600 seconds / 18 = 200 seconds to execute.
This is effectively pacing and this is a simple example but remains the foundation of more complex ones.
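The worked example above can be sketched as a small helper; the 3600-second duration, 1800 journeys and 100 users are the figures from the example:

```python
def pacing_seconds(test_duration_s, total_transactions, concurrent_users):
    """Pacing = test duration / (number of transactions / number of concurrent users)."""
    iterations_per_user = total_transactions / concurrent_users
    return test_duration_s / iterations_per_user

# 1800 user journeys, 100 users, 1 hour: each user runs 18 iterations,
# so each iteration has 200 seconds in which to execute.
print(pacing_seconds(3600, 1800, 100))  # 200.0
```

Since the journey itself takes 180 seconds, the remaining 20 seconds per iteration is the idle time a pacing timer must insert to keep the load even.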
Many tools can calculate your pacing for you, and OctoPerf is no exception. OctoPerf has a really good feature as part of its advanced configuration that gives you fine control over pacing and helps ensure that your levels of concurrency are accurate and consistent with your agreed load profiles.
The link to the documentation on this feature is below and is worth reading as it will give you an insight into how using OctoPerf for your performance testing can really help you ensure your application will perform in production. OctoPerf advanced configuration
The dedication given to the script creation exercise and results analysis should also be reflected in the process of determining load.
The most advanced tests written using the modern approach to Continuous Integration and Continuous Delivery and integrated into pipelines are meaningless if what they test is inaccurate.
Focus on concurrency, focus on reality of your business, use evidence of how your business operates and emulate real business scenarios in your testing.
A simple set of tests run manually, targeting exactly how your application will be loaded in production, is much better than complex, shift-left, pipeline-driven, execute-on-check-in, cover-every-business-function tests.
That's not to say that using an agile approach to performance testing is wrong; it is exactly the right thing to be doing, just not at the expense of ignoring the correct load profiles and levels of concurrency at which you should be executing your tests.