This post does not look at a particular aspect of JMeter, nor does it give a detailed overview of how to use a particular tool that will complement your performance testing with JMeter.
What it is about is the principles of push to production pipelines and performance testing. While this post is not specifically about JMeter, in my experience JMeter is one of the best performance testing tools for this type of pipeline integration.
What problem are we trying to solve?
Let’s consider how the world of application and technology development is moving.
Everyone seems to be focussed on Agile delivery and shifting their testing to the left, and if done correctly, with Agile principles followed, this can be very successful.
We’ve already discussed shift left testing and the principles behind the execution of JMeter tests from a Jenkins pipeline on this blog.
Now this is all good and speeds up the testing, and ultimately the time for the product to reach production, but there is also a move towards using CI/CD tools to ensure that application definitions, configurations, and environments are declarative and version controlled.
In essence, if the tools detect a change to any aspect of your application or infrastructure through version control, then a pipeline is spawned to ensure that they are all in sync.
The reality of this is that applications that are considered stable and require only a small change to the infrastructure will trigger a pipeline that synchronises changes to production. Clearly you need a robust set of internal controls for this to work in your organisation, but it will become a reality for some organisations soon, and it is worth discussing how performance testing fits into this. If you need more context on how this works, it is worth looking at this overview of 3 of the main providers in this space:
Good thing or bad thing?
It depends on the maturity of your organisation and the nature of your business: some organisations have more rigid controls around regulatory or sensitive changes, and for them this may not be an option. But in essence, for organisations whose software or business model suits this approach, we feel that a move towards more continuous delivery to production can only benefit end users, in that regular change, especially enhancements, will improve their day to day working practices.
What does this mean for performance testing?
We are not for one minute suggesting that code would be pushed straight to production from the first development phase, nor that your formal performance testing stages (which demonstrate that your application under test performs against a robust and sensible set of non-functional requirements) should not be followed.
Once an application is stable and in production, some organisations will deliver periodic changes to production as part of a formal release process, while other organisations will prefer to deliver change as soon as the development activity has completed and been checked in by the development teams.
This approach of regularly delivering change to production has an impact on how Quality Assurance is defined and delivered. While functional testing is not really going to be discussed in much detail here, you will, I think, see a shift towards more automation and more targeted functional tests.
What we can discuss is how your performance testing needs to adapt to this continuous process of small changes being pushed to production, because you do not want to spend a day or so running a set of performance tests before the code can be released; that defeats the object of what you are trying to do with your push to production pipelines.
Looking at an example
We are going to look at how, in theory, you can create a performance test that is triggered as part of these push to production pipelines and, if it passes, allows the pipeline to progress to the next stage of deployment.
Let’s first look at a theoretical process flow that replicates one that is common in Agile delivery.
In our process flow diagram, there is a very specific phase of performance testing that runs alongside the functional testing and before the business integration tests. This indicates that at this point in the delivery of code to production you would run a set of performance tests. Your organisation may execute performance testing later in the process, but this is here to illustrate the fact that in Agile delivery pipelines there is always a specific performance testing phase.
The performance testing phase may be run in a pipeline, or it may be a more manual process; the blog post mentioned earlier provides guidance on how you could implement JMeter running in a Jenkins pipeline.
In our example the decision points marked All Passed rely upon an activity that halts the process, which may be automated or manual, and possibly a quality gate before moving on to the next step. While this is a quick process in the Agile world, it is not really a ‘push to production’ process, as there are periods where the flow is halted along the way.
As stated earlier, we are not suggesting that this process will become redundant any time soon; this is what product delivery looks like today, and the reality is that it is what many companies still aspire to.
What we are building up to is discussing how this might change once the software is established and already in production and you need to deliver a small, or even medium, change.
So, we have briefly discussed some of the reasons you may, in your organisation, decide that pushing directly to production might be a good idea, so let’s look at what that means for performance testing.
Let’s take our theoretical process and look at it from a push to production perspective.
You can see that the push to production process relies upon a more automated approach. While we still have the same stages in the process that we had in the Agile flow discussed earlier, the checks before moving to the next phase in the process flow are automated.
The point of this Blog Post
So up until this point this post has been all about theoretical process flows, and as discussed, this type of continuous delivery to production with no intervention is aspirational for many companies and may be something that most do not achieve, or have no desire to achieve.
But we think that in the future this will become more commonplace for organisations that have a mature enough Agile approach and whose business model and nature of business allow it.
Adopting this process is all well and good, but you need to make sure that your performance tests are robust enough. As discussed earlier, JMeter is one of the best tools with which to implement an Agile and push to production strategy, and we are going to discuss how you can make your tests as robust as possible as you develop them, so that they can in future support the type of process flows we have been discussing.
JMeter best practices
If you do want to evolve your performance testing in line with a continuous delivery model, here are a few things we would recommend from a scripting perspective.
Add assertions to every sampler
Make sure that every sampler has an expected outcome measured in the form of an assertion. These tests are now part of your company’s gateway to releasing code into production, and you want to make sure that every step in your tests meets its expected result.
We also discuss assertions in JMeter in this blog post.
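Assertions cause JMeter to mark a sampler as failed in the results (JTL) file, which a pipeline stage can then gate on. A minimal sketch of such a gate, assuming a CSV-format JTL that includes the default `success` column (the file names used are hypothetical):

```python
import csv

def all_samplers_passed(jtl_path):
    """Return True only if every sampler in a CSV-format JTL file succeeded."""
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            # JMeter writes the literal strings "true"/"false" in the success column;
            # an assertion failure flips it to "false"
            if row["success"] != "true":
                return False
    return True
```

A pipeline step could call this against the JTL produced by the run and fail the build (exit non-zero) when it returns False.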
Parameterise all your environment variables
You may want your tests to run across multiple environments, and more than likely you will need to take the tests that you originally ran in a performance test environment and run them against your staging or pre-production environment. Parameterisation of key values should therefore become your standard approach in script development.
We’ve already discussed on this blog how you can effectively parameterise your JMeter tests.
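One common way to parameterise environment values is to have the test plan read JMeter properties with `${__P(name)}` and pass the values per environment on the non-GUI command line with `-J`. A sketch of a pipeline helper that builds that command; the plan path, hostnames and property names are illustrative assumptions:

```python
def jmeter_command(plan, env_props):
    """Build a non-GUI JMeter command line, passing environment-specific
    values as -J properties that the plan reads with ${__P(name)}."""
    cmd = ["jmeter", "-n", "-t", plan]
    for name, value in sorted(env_props.items()):
        cmd.append("-J{}={}".format(name, value))
    return cmd

# Hypothetical per-environment settings
staging = {"host": "staging.example.com", "threads": "10"}
```

The pipeline can then run, for example, `subprocess.run(jmeter_command("plan.jmx", staging), check=True)` with a different properties dictionary per environment, leaving the test plan itself unchanged.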
Use dynamic data, set up and tear down
Try to keep your tests self-contained, with the creation and destruction of data also part of the execution flow. If you rely on data being in the correct state in the environment you are targeting, then it is more than likely that your tests will eventually fail because of data issues.
We also have an article on data parameterisation on this blog.
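The shape of this is the same whether you use JMeter’s setUp and tearDown thread groups or a wrapper around the run: data is seeded first and always destroyed afterwards, even when the test fails. A sketch of that flow; the seed, test and destroy callables are hypothetical placeholders for whatever your application needs (for example, API calls that create and delete test accounts):

```python
def run_self_contained(seed_data, run_test, destroy_data):
    """Seed test data, run the test, and always destroy the data afterwards,
    mirroring JMeter's setUp and tearDown thread group behaviour."""
    seed_data()
    try:
        return run_test()
    finally:
        # teardown runs whether the test passed, failed, or raised
        destroy_data()
```

The try/finally is the important design choice: a half-torn-down environment is exactly the kind of data drift that makes unattended pipeline runs fail later.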
Consider what constitutes a pass or failure and build in error trapping
This is how we will ultimately determine whether a code release performs, and it starts with a robust set of non-functional requirements, as these define what we measure our results against. Once you have defined them, use assertions to fail a test should these requirements not be met.
Check our article on non-functional requirements.
Build a good output results strategy
How and where you output your results is important: empirical evidence of these automated test executions is required should there be performance issues with a release that was promoted to production using a fully automated approach and you need to re-check the output from your tests.
Articles on Dynatrace, should this be your analysis tool of choice, and on writing your own class files for reporting purposes can be found in the links below, both of which will help with results analysis.
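Whatever analysis tool you settle on, archiving a compact summary next to the raw JTL for each pipeline run gives you the empirical evidence to revisit later. A sketch that distils a CSV-format JTL into a JSON artifact (the file names are hypothetical):

```python
import csv
import json

def summarise_jtl(jtl_path, summary_path):
    """Write sample count, error count and mean elapsed time to a JSON file."""
    count, errors, total_ms = 0, 0, 0
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            count += 1
            total_ms += int(row["elapsed"])
            if row["success"] != "true":
                errors += 1
    summary = {
        "samples": count,
        "errors": errors,
        "mean_elapsed_ms": total_ms / count if count else 0,
    }
    with open(summary_path, "w") as f:
        json.dump(summary, f)
    return summary
```

Storing one such artifact per run, tagged with the build number, makes it easy to see when a release started to degrade performance.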
Execute your tests regularly
You need to be executing your tests on a regular basis, daily even, to ensure that they continue to run and that there have been no application, environment or data changes that might stop them from running when they are needed as part of the push to production pipelines.
This is a good practice to get into with any automation. It allows you to be constantly aware of how your tests are running and also keeps your familiarity with the process current.
The IT world constantly evolves, and it is important that performance testing evolves with it. That is why the implementation of these Agile performance test execution phases, including consideration of push to production pipelines, should always be on the agenda for your organisation.