A long time ago, Quality Assurance was executed after development. Performance testing was an activity executed when software was ready for production.
If a performance issue was found, most companies:
- Fixed the issue, which meant a complete new cycle of QA tests and performance tests was required,
- Or put the software live and decided to fix it as part of ongoing development,
- Or borrowed from the future. That’s technical debt.
Let’s be fair: this approach isn’t optimal.
The root cause is that performance testing is not considered viable until the software has reached a certain level of maturity.
The alternative is to test performance from the earliest stages of development. This is known as shift left performance testing, and it works; it just needs Quality Assurance teams to think a little differently about performance testing, and to have a performance test strategy in place to support it.
Let’s explore how to integrate load testing as part of your software development strategy.
Choosing the Right Tool
Open source tools offer lots of community support, are free to use, and provide many opportunities to give back to the open source community. There are many sources of information on how these tools can be integrated into Jenkins pipelines, so the mechanics of integrating them will not be discussed here.
Using Maven to support your Continuous Integration strategy, in the form of building and organising your development patterns, is important; again, this is something that OctoPerf can support with their Maven Plugin.
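OctoPerf’s own plugin aside, the general pattern is the same with any JMeter Maven integration. As a sketch, here is the kind of `pom.xml` fragment used with the open source `jmeter-maven-plugin` (the version number is an example; adapt the coordinates to the plugin you actually use):

```xml
<!-- Runs the .jmx scripts under src/test/jmeter as part of the build -->
<plugin>
  <groupId>com.lazerycode.jmeter</groupId>
  <artifactId>jmeter-maven-plugin</artifactId>
  <version>3.7.0</version>
  <executions>
    <execution>
      <id>performance-tests</id>
      <goals>
        <goal>jmeter</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn verify` executes the performance tests on every build.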
Self Contained Tests
Build a performance test for each service or piece of functionality. Make tests self-contained so they are portable across environments:
- Create the data you need,
- Use it in your test,
- And remove it afterwards.
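The create/use/remove pattern can be sketched in a few lines. The service client below is a stand-in invented for illustration; in practice the create and delete steps would call your application’s API, or run in JMeter setUp and tearDown thread groups:

```python
# Sketch of a self-contained performance test. FakeService is a
# hypothetical stand-in for the application under test.

class FakeService:
    """Stands in for the application's data API."""
    def __init__(self):
        self.records = {}

    def create(self, key, value):
        self.records[key] = value

    def read(self, key):
        return self.records[key]

    def delete(self, key):
        self.records.pop(key, None)

def run_performance_test(service):
    # 1. Create the data you need.
    service.create("perf_user_001", {"name": "load-test account"})
    try:
        # 2. Use it in your test.
        assert service.read("perf_user_001")["name"] == "load-test account"
    finally:
        # 3. Remove it afterwards, even if the test fails.
        service.delete("perf_user_001")

service = FakeService()
run_performance_test(service)
print("records left behind:", len(service.records))  # → records left behind: 0
```

Because cleanup happens in `finally`, the environment is left clean whether the test passes or not, which is what makes the script safe to run anywhere.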
Run each performance test in all your test environments as soon as it is promoted from development, at production levels of load and concurrency if your test environments can support this.
If they cannot, scale your test down to volumes and concurrency that can be supported.
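For example, if a test environment is sized at a fraction of production capacity, the load can be scaled down by the same ratio (all figures below are invented for illustration):

```python
# Hypothetical production targets and the fraction of production
# capacity the test environment can support.
production_users = 400
production_throughput_per_min = 1200
environment_capacity = 0.25  # test env sized at 25% of production

# Scale both concurrency and throughput by the same ratio so the
# per-user behaviour of the test is unchanged.
scaled_users = int(production_users * environment_capacity)
scaled_throughput = int(production_throughput_per_min * environment_capacity)

print(scaled_users, scaled_throughput)  # → 100 300
```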
Eventually, you will have a number of individual performance test scripts regularly running against a code base that is constantly evolving. This ensures your test scripts evolve with the code and remain stable.
Use the Simplified Property Function throughout your tests to abstract users, load, duration, environment, etc. away from hard-coded values in your test scripts; see this article on Flexible and Configurable Test Plans for information on how to do this.
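In JMeter, the Simplified Property Function is `${__P(name,default)}`: it reads a JMeter property and falls back to the default when the property is not set. A sketch, using example property names:

```
Number of Threads:  ${__P(users,10)}
Duration (seconds): ${__P(duration,300)}
Server Name or IP:  ${__P(host,localhost)}
```

Properties can then be overridden per run from the command line with `-J`, for example `jmeter -n -t my_test.jmx -Jusers=50 -Jduration=600`.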
Let’s assume you now have multiple performance test scripts all testing individual services, user journeys or database calls. Or any component that is part of your application under test.
If these were run in parallel, we would have a performance test. It might not cover the full functionality, as some of it may still be in development, and it might not run in an environment where full production volumes can be achieved.
Still, a performance test has been run against an evolving application. Analysis of the results can:
- Help developers address performance issues early,
- Give them confidence that their design patterns work,
- And ensure that connection pooling design is correct and performs well.
This type of information early in the delivery cycle is invaluable.
Test Early - Fail Fast
Running performance tests early in the software delivery process highlights issues that can influence common development practices and coding techniques, meaning that application quality is hugely improved.
Discovering performance issues early in the delivery process is always cheaper in terms of resources and time to fix.
Test Script Flexibility
We have built tests, we are running them regularly, and we are therefore regularly maintaining them. As the code changes you immediately know whether your tests work against the latest version; if not, you fix them and re-run.
Performing script maintenance on a daily basis resolves another big problem with large-scale performance tests that are run infrequently: it can take a while to get scripts written against an earlier version of the code working with the latest version, and sometimes tests have to be completely re-written because updating them would take longer.
The scripts now serve multiple purposes. They can be:
- Run in isolation to test components and services,
- Run in parallel to support early performance and load testing,
- And, if correctly parameterised, run in multiple environments under multiple load profiles.
Let’s see how!
You will start to see response time trends and find it easy to spot anomalies in system performance.
Run your tests in parallel as often as possible, perhaps at the end of each sprint or when a significant piece of functionality has been developed and is available to test.
Work with the functional Quality Assurance team to share knowledge and results.
Encourage all members of the programme team to take accountability for performance quality by sharing knowledge of how to build and execute scripts.
Formal Performance Testing
Whilst your performance testing strategy should be aimed at de-risking performance from the very early stages of the programme, you will at some point need to run a more formal set of tests.
To demonstrate how simple this can be with well maintained, regularly run performance test scripts, we will work through an example of these scripts serving multiple performance testing purposes.
Let’s assume you have 4 tests that cover:
- 1x application user journey,
- 2x internal rest service requests,
- 1x web service request.
You maintain and execute these daily, maybe in the Jenkins pipeline, maybe locally or on a remote server.
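As a sketch, a nightly run of these four tests in a declarative Jenkins pipeline could look like the following (script and properties file names are invented; `-q` loads an additional JMeter properties file):

```groovy
pipeline {
    agent any
    triggers { cron('H 2 * * *') } // run nightly
    stages {
        stage('Performance tests') {
            steps {
                sh 'jmeter -n -t user_journey.jmx   -q perf.properties -l user_journey.jtl'
                sh 'jmeter -n -t rest_service_a.jmx -q perf.properties -l rest_service_a.jtl'
                sh 'jmeter -n -t rest_service_b.jmx -q perf.properties -l rest_service_b.jtl'
                sh 'jmeter -n -t web_service.jmx    -q perf.properties -l web_service.jtl'
            }
        }
    }
}
```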
They cover the full business functionality of your application.
You are unlikely to have this low a number of tests, but this keeps our example simple.
Each test has 4 Simplified Property Functions that are managed with a properties file controlling:
- service or application url,
- number of users,
- throughput per minute,
- test duration.
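Such a properties file could be as simple as this (names and values are illustrative; typically you keep one file per environment):

```
# perf-test.properties -- example values for a TEST environment
service.url=https://test.example.com/api
users=25
throughput.per.minute=300
duration=600
```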
Let’s create an example file structure to hold our performance tests; remember, we have abstracted the values from the tests.