There’s no denying the importance of a performance test campaign in the quality assurance process. But I have lost count of the customers requesting load tests with no specific objective (or just “improve the performance”). Running a test campaign with no objective in mind is quite risky: you might spend a lot of effort on useless improvements while the critical pain points remain.
I have worked on performance campaigns with no defined objective a couple of times. Usually, this means running tests until you run out of time or budget and optimizing whatever you can. It does not mean these campaigns were not successful, but they probably cost a lot more money than they should have.
First, you often end up with a larger infrastructure than needed, which of course does not come for free.
I once witnessed a customer purchase two large machines as application servers a couple of days before the go-live. These machines allowed the application to handle four times the standard load, but the application was never expected to have that many users connected. In the end, a lot of money was wasted that way.
Then there is the cost of the tests themselves. Even if you use open-source tools, you have to consider:
- The test infrastructure,
- The performance tester(s),
- All the contributors.
When you add all of this up, performance testing starts to look expensive, and a lot of companies would like you to think exactly that. But if you have a precise objective in mind, you can control the costs efficiently.
Defining an objective
You can’t blame a newcomer for not having a precise objective. It is the tester’s role to help define one by asking the right questions or by giving examples.
Common objectives are:
- Response times (or error rates) under X seconds (or XX percent) at standard load.
- Validate the load balancing/failover mechanism.
- Confirm that XXX users can be connected simultaneously.
- Check that background tasks have no impact on response times.
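Objectives like these are most useful when they are turned into automated pass/fail checks at the end of a test run. Here is a minimal sketch in plain Python, assuming you have already collected per-request response times and an error count from your load tool; the thresholds and sample numbers are made up for illustration, not taken from a real campaign:

```python
def percentile(samples, pct):
    """Return the pct-th percentile of a list of samples (nearest-rank)."""
    ordered = sorted(samples)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

def check_objectives(response_times_s, error_count, total_requests,
                     max_p95_s=3.0, max_error_rate=0.01):
    """Compare measured results against the agreed objectives.

    max_p95_s and max_error_rate stand for the objectives negotiated
    with the customer; the defaults here are purely illustrative.
    """
    p95 = percentile(response_times_s, 95)
    error_rate = error_count / total_requests
    return {
        "p95": p95,
        "error_rate": error_rate,
        "p95_ok": p95 <= max_p95_s,
        "error_rate_ok": error_rate <= max_error_rate,
    }

# Fabricated sample: 100 requests, mostly fast, a few slow, one error.
times = [0.4] * 90 + [2.5] * 9 + [6.0]
result = check_objectives(times, error_count=1, total_requests=100)
print(result["p95_ok"], result["error_rate_ok"])  # prints: True True
```

The point is not the code itself but the shape of the contract: once the objective is written down as a threshold, the campaign has an unambiguous exit criterion.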
This topic should be addressed as early as possible because it will shape the whole campaign.
Risk based testing
Performance testing is all about covering risks: testing the load balancing covers the risk of a load-balancing failure in production. With this in mind, to properly define objectives you have to consider the risks you would like covered. Cost-wise, you won’t be able to cover all of them, so focus on what matters most.
For example, let’s assume you test a website that will be accessed worldwide, around the clock. Amongst other things, you should run a 24-hour test (a soak test) to assess whether the server’s memory usage and network capacity remain stable over a long period of time.
It is all about assessing the circumstances that would put your system at risk and addressing them with a test. This is especially difficult for a first deployment where you have no statistics about the application usage.
What to do with my objectives
Now that you have assessed the risks and defined objectives accordingly, you can prepare the rest of your campaign. At this stage, you should be able to determine:
- The virtual user profiles to create and their complexity.
- The number of tests to be launched.
- The expertise required to prepare and analyze the tests.
My point is that defining objectives is critical to get a clear view of the work to be done and how long it will take. This will also tell you whether your tests can fit in the timeline and budget of the project.
Establish an objective first; it will help you assess the complexity of the test campaign.
To help you establish objectives, think about:
- Common objectives such as response times and number of users supported.
- Risk based testing and your desired coverage.
- Your budget.