Testing early is part of the shift-left strategy: the idea is to detect defects as early as possible, when the cost to fix them is still reasonable. Development processes already integrate most types of tests, but load testing usually comes last, which is why a quick reminder of its benefits is in order. During the development phase, fixing a performance issue should be quick and cheap. But as the application moves toward production, this cost increases a lot because:
- You have to get each new build/fix through the whole process again,
- Architecture defects require rethinking interactions between servers and extra deployments. For instance, adding a second backend server means you need to load-balance sessions. Effort already spent on deployments is also lost,
- The development team might be less involved in the project in later stages, and getting them to focus again and fix a defect can take more time. After all, a human brain can only handle so much context switching,
- Time runs short.
Consider also the impact of a performance defect in production, as we have seen earlier. And we should include other costs linked to production issues, such as:
- Downtime or corrective actions outside of business hours,
- Synchronization between all teams,
- An increasing number of people involved in the process.
Also consider that in non-critical situations, you might choose not to fix the defects to avoid extra costs. This can cost you a lot in the long run, in particular because old issues stack up with new ones, and stacked issues are a pain to solve. You could end up losing a lot of money and R&D flexibility.
We are only considering load testing in this training course, but whether the development process is agile or not does not change why you should run tests; it has more impact on planning and methods. In other words, agility is interesting to us because shorter cycles mean a different approach to testing. We will see a bit later the impact it has on the types of tests you will run, but in short, you want to run simpler tests on a regular basis. These tests are referred to as component tests, since you end up testing each component on its own.
Because of that, testing early does not replace a complete test; a combination of both is preferable. Run regular tests during each sprint and a more complete one before every major release. We will also cover this when talking about selecting the right testing environment.
For now we can sum up with:
- During each sprint we run component tests, testing each component independently to assess whether its individual performance makes sense,
- Before each major release we run a complete load test, since we must see how all components work together along with the infrastructure.
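A per-sprint component test can be sketched in a few lines of Python. This is a minimal, hedged example: the local HTTP server below only stands in for the component under test so the script is self-contained, and the request counts are arbitrary; in a real sprint test you would point the workers at the component's actual endpoint.

```python
import http.server
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Stand-in component: a trivial local HTTP server. Replace the URL below
# with the real component's endpoint in an actual component test.
class StubHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def probe(_):
    """Send one request; return (latency in seconds, success flag)."""
    start = time.perf_counter()
    try:
        with urlopen(url) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return time.perf_counter() - start, ok

# 50 requests from 10 concurrent workers: deliberately small, so the
# test stays fast enough to run on every build.
with ThreadPoolExecutor(max_workers=10) as pool:
    samples = list(pool.map(probe, range(50)))
server.shutdown()

latencies = [lat for lat, _ in samples]
error_rate = 1 - sum(ok for _, ok in samples) / len(samples)
print(f"median={statistics.median(latencies) * 1000:.1f}ms errors={error_rate:.0%}")
```

The point is not the tooling (a dedicated load testing tool does this better) but the shape: one component, a small fixed load, and a couple of comparable numbers out.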
Continuous integration of load tests is an important topic these days. Automated build processes are a must-have, and almost all companies rely on one. Functional tests are also very often part of the mix, but as usual performance testing is still behind and trying to catch up. The idea is of course to integrate load tests into each build process, the goal being to find out about regressions very quickly.
Continuous integration tools provide the means to automate tests, but also reporting that you can share with other teams or decision makers. It is important to have a seamless integration of your load tests to avoid a complex setup, and you also want a readable report in the end. This way one team can develop the test scripts (Dev team) and another can check the results (Perf team).
This helps with getting an early overview of performance. What is more interesting to discuss here is what kind of tests you can run as part of a CI process.
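To make the regression-detection idea concrete, here is a hedged sketch of a CI gate: it compares the current build's KPIs against a stored baseline and flags anything that degraded beyond a tolerance. The KPI names, baseline values, and 20% tolerance are all assumptions for illustration, not a standard.

```python
import sys

TOLERANCE = 1.20  # flag a KPI that degrades by more than 20% (assumed threshold)

def check_regression(baseline, current, tolerance=TOLERANCE):
    """Return the list of KPIs whose current value exceeds baseline * tolerance.

    Both dicts map KPI name -> value, where lower is better
    (e.g. response times, error rates).
    """
    return [kpi for kpi, ref in baseline.items()
            if current.get(kpi, float("inf")) > ref * tolerance]

# In a real pipeline these would be loaded from files: the baseline from a
# previous release, the current values from this build's load test run.
baseline = {"p95_ms": 180.0, "error_rate": 0.01}
current = {"p95_ms": 310.0, "error_rate": 0.005}

regressed = check_regression(baseline, current)
if regressed:
    print("performance regression in:", ", ".join(regressed))
    # sys.exit(1)  # a non-zero exit code marks the CI step as failed
```

The readable part of the report is the one-line verdict; the CI tool only needs the exit code to fail the build.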
Continuous integration KPIs
In agile processes the application will evolve between two test runs. Since the purpose is to compare performance, reliability of the test scripts is key. To guarantee this, a first step is to avoid long and complex test scenarios; otherwise you might spend too much time testing.
A simple solution is to re-use the component tests written for the dev environment, since they should be up to date and simple enough to run as part of CI. But this kind of “simple” test might not spot real-life issues.
So I would say the question is: how much effort are you willing to invest in load testing automation? Start simple at first, and see if you can spare the time to test more things with each build. See whether you have, or can find, the proper tools to automate more and more; some tools are more helpful than others at updating test scripts and datasets.
Another important question is which KPIs to select. Here again you need a good mix of simplicity and efficiency: stick to KPIs that remain comparable when the application changes. Response times and error rates are usually a good choice, as they are among the most basic and relevant metrics you can get. But feel free to look into more detail depending on your requirements.
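Both KPIs are cheap to compute from the raw samples a load test produces. The sketch below assumes samples recorded as (response time, HTTP status) pairs and treats any status of 400 or above as an error; the data, that convention, and the choice of the 95th percentile are illustrative assumptions.

```python
import statistics

# (response_time_ms, http_status) pairs, as a load test tool might record them
samples = [(120, 200), (95, 200), (310, 500), (140, 200), (105, 200),
           (450, 200), (88, 200), (130, 200), (99, 503), (160, 200)]

times = sorted(t for t, _ in samples)
# quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
# The inclusive method interpolates within the observed range, which
# behaves sensibly on the small sample sizes typical of CI runs.
p95 = statistics.quantiles(times, n=20, method="inclusive")[18]
error_rate = sum(1 for _, s in samples if s >= 400) / len(samples)

print(f"p95={p95:.0f}ms error_rate={error_rate:.0%}")
```

Because both numbers are independent of which pages or endpoints changed, they stay comparable from one build to the next, which is exactly what a CI comparison needs.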