In this post we are going to look at performance testing on large-scale programmes.
A few of the posts we write define techniques and approaches based on a single application under test, but sometimes you are faced with the prospect of performance testing:
- A new solution that replaces several legacy applications,
- A service migration from one cloud provider to another or one data center to another,
- An infrastructure update that covers multiple applications or services,
- A new solution that complements and integrates with existing software.
Now, especially in the case of migrating services, performance is key: you cannot afford a degradation in performance, as business users will already have become accustomed to the software and how it performs.
You can look to make it perform better but it is unlikely they will tolerate poorer performance just because you have migrated from one platform to another.
Equally, new solutions that replace legacy applications will (rightly or not) be expected to perform better than their predecessors. This is a challenge, as your new solution will undoubtedly have a different workflow and a different approach to delivering what the end-users want.
These types of large-scale programmes can on the surface seem complex from a Quality Assurance perspective, so we have put together this guide to help you understand some of the techniques you can use to ensure that the performance testing aspect is manageable rather than overwhelming. The sections below set out things to consider when performance testing large-scale programmes of work.
Get involved early
Understand the scale of the programme as early as possible, and understand which applications and systems are going to be affected, whether through migration, integration with a new solution or replacement.
If you are dealing with a migration or an infrastructure upgrade, then you will need to understand which services or applications are affected, as well as any applications that are not in scope for the migration/upgrade but integrate with ones that are.
If it’s a new solution, whether it replaces an existing solution or complements it, then you need to understand which applications the new solution will integrate with.
Make sure that you understand who the business owners are and who from a technology perspective can support you in setting up a test environment and any data requirements for the application or service that you have identified.
The biggest challenge when testing large-scale programmes of work is that they will more than likely, at some point, involve integration with legacy technology for which it is difficult to set up a test environment or secure developer assistance to support your testing. The earlier you can understand and plan for these challenges, the better.
If your large-scale programme is a migration of existing applications, then it is possible that some of the older technologies are difficult to test outside of production, due to inadequate test environments or a lack of the skills needed to build new ones.
Equally, if you are deploying a new solution that integrates with legacy software, you will undoubtedly face similar issues. Risk-assessing all the applications in scope for your programme, and determining which of your older applications can and should be performance tested, is important, and again something that should be done as early as possible so that everyone involved in the programme is aware of the scope of performance testing.
You might not be able to performance test every system; if you can justify this under a sensible risk assessment strategy, it allows you to focus on those applications that can be performance tested.
If your large programme of work involves a new application or set of services, then you may or may not be replacing an existing application. If you are, the business will have expectations around the performance of certain business processes, and you need to work with them to understand those expectations. The new solution will be different to the current solution, so performance will not be directly comparable; this needs to be documented and understood by key members of the programme.
If it’s a migration, then your aim should realistically be no degradation, and you should avoid being pressured into agreeing to better performance, as this is an unrealistic goal when re-platforming existing applications.
As with the previous two sections, this should be done as early as possible so that, prior to development if possible, you have defined which applications are being tested as part of the large-scale programme and the performance requirements against which you are being measured.
It is important to get this done before even starting to develop tests or consider your performance test approach.
Break down into services
When performance testing large programmes that affect multiple applications or services, it is important to performance test each application or service individually and not try to performance test all services in an integrated way.
The integration between services will need to be tested, but this should be a functional test activity. Performance testing end-to-end is not a good idea, especially if you have multiple test execution cycles, mainly because data integration and consistency are an issue and can be difficult to create and maintain.
Focusing on a single application running under a load profile you defined in your requirements gathering allows you to test applications at different stages in the programme, rather than waiting until all applications have been migrated or integrated before performance testing can begin.
Another benefit of performance testing in isolation is that you can target accurate load at the application without relying on upstream applications to generate it; the volumes and concurrency rates are completely under your control.
This is by no means shortcutting performance testing; it is a better, more robust way to approach it, and the only way to accurately performance test the applications that make up your programme of work.
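To make the point about controlled load concrete, here is a minimal sketch of an isolated load driver. Everything in it is illustrative: `call_service` is a hypothetical stand-in for a real request to the service under test, and the concurrency and iteration counts would come from your own requirements gathering.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a real call to the service under test;
# swap in your HTTP client of choice when running against a real environment.
def call_service() -> float:
    start = time.perf_counter()
    time.sleep(0.01)  # simulate a ~10 ms service response
    return time.perf_counter() - start

def run_load(concurrency: int, iterations: int) -> list[float]:
    """Drive a fixed number of calls at a fixed concurrency level,
    returning individual response times in seconds."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(lambda _: call_service(), range(iterations)))

timings = run_load(concurrency=5, iterations=20)
print(f"{len(timings)} calls, avg {sum(timings) / len(timings) * 1000:.1f} ms")
```

Because the driver, not an upstream system, decides the concurrency and volume, the same script can be pointed at each application in the programme as it becomes available.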
It is probable that large scale programmes deliver in iterations rather than as a single activity meaning that development will also be iterative.
Isolating your tests into services or applications, as above, will allow your testing to fit in with this approach. You may also find that you cannot define requirements and risk-assess all aspects of the programme in one go, so these activities, which we have already discussed, may need to be iterative too.
Move tests to regression during development
As you build and execute tests for a particular application or service, or as part of any new solution being delivered under the programme, you need to keep executing existing tests while development on other areas of the application, or other transformations, is in progress.
This puts you in the position of running performance regression tests and the more standard performance tests (peak hour load, scalability, soak etc.) in parallel, and possibly daily.
This is not as difficult as it may seem if you have a strong methodology and well defined and robust tests.
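A simple way to make daily regression runs self-checking is a pass/fail gate against the agreed baseline. This sketch assumes you have already captured a baseline p95 response time during requirements gathering; the function names, tolerance and timing figures are all illustrative, not a prescribed standard.

```python
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def regression_gate(baseline_p95: float, current: list[float],
                    tolerance: float = 0.10) -> bool:
    """Pass if the current run's p95 is within `tolerance` (here 10%)
    of the agreed baseline, i.e. no meaningful degradation."""
    return percentile(current, 95) <= baseline_p95 * (1 + tolerance)

# Illustrative numbers only: a baseline p95 of 800 ms and one day's timings.
run = [0.41, 0.52, 0.47, 0.80, 0.63, 0.55, 0.49, 0.71, 0.58, 0.61]
print(regression_gate(baseline_p95=0.80, current=run))  # → True
```

Wired into a scheduled pipeline, a failing gate flags a degradation the day it is introduced, which is exactly the "no degradation" goal discussed earlier for migrations.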
Be involved in the scrum of scrums
Most agile programmes will have a daily scrum of scrums where programme wide challenges and issues are discussed as well as providing a good overview of each strand of development and functional testing activity.
Getting involved in these will help you know the direction of the programme and the issues being faced by the various workstreams, which will help you understand when performance testing for the various applications being tested will be required.
Work closely with the functional QAs and Development Teams
Working with the functional QAs and the developers is always an important part of any performance tester’s role, regardless of the programme size or complexity. The important thing is to keep up the good practices and processes you have used with these teams in other programmes, and not to let the size of the programme overwhelm you.
We have already spoken about breaking down the applications and testing them independently; dealing with a single application at a time should ensure that you can continue to engage closely with the QA and development teams in the same way you already do.
Understand the technical objectives of the programme
This may sound like an obvious thing to be doing but it is sometimes easy to forget.
The better you understand the technical objectives of the programme, whether it’s re-platforming, replacing legacy software or applications, and the benefits this brings, the better you can approach your requirements gathering and risk assessment.
These large programmes are sometimes done out of necessity, where software or operating systems are end of life and need upgrading. This means you may just be moving to a newer version of the application, in which case you may already have tests that will work, as very little may have changed since the application was originally performance tested.
This will clearly save you time and effort and you may even have an ongoing performance regression test that runs meaning that these applications can be considered low risk from a performance testing perspective.
Sometimes, though, your organisation is developing a new product, or a solution to support a new product it is offering, in which case you will have no choice but to write performance tests from scratch for the new solution. You may, however, be able to de-risk the existing software it integrates with using the same approach as above, where you may already have performance tests in place.
It is also possible that you are migrating from one cloud platform to another, in which case you may already have a set of performance tests that run regularly, and you are simply running regression.
Understand how the new solution and legacy solution overlap and integrate
Again, this may sound obvious, but assuming you are building a new platform that integrates with many legacy applications, the way they integrate is important to understand, as you may need to work out how you can build tests that simulate the integration between systems.
You may be using ActiveMQ, Kafka or Fuse, or any number of technologies, to integrate new and existing applications, and knowing how to use your testing tools against these technologies is important to learn; the last thing you want is to be struggling to build tests because of technology constraints when you have tight deadlines.
Being able to test these integration technologies as early in the programme as possible, and understanding any limitations, will ensure that as the programme evolves and makes more and more use of common technology (which is likely), you will already have confidence that the integration platform can support the loads you need to test against to emulate production concurrency and load.
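When driving load into a messaging layer, the shape of the load matters as much as the volume: production traffic usually arrives at a roughly steady rate, not in one burst. Here is a minimal pacing sketch; `send_message` is a hypothetical stub standing in for a real producer call (for Kafka, ActiveMQ or similar), and the rate shown is purely illustrative.

```python
import time

# Hypothetical stub; replace with a real producer call
# (e.g. a Kafka or JMS client send) when testing against a broker.
sent: list[str] = []
def send_message(payload: str) -> None:
    sent.append(payload)

def paced_publish(messages: list[str], rate_per_sec: float) -> float:
    """Publish messages at a fixed rate to emulate production throughput.
    Returns the total elapsed time in seconds."""
    interval = 1.0 / rate_per_sec
    start = time.perf_counter()
    for i, msg in enumerate(messages):
        # Sleep until this message's scheduled slot to keep a steady pace,
        # rather than letting send latency drift the overall rate.
        delay = (start + i * interval) - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
        send_message(msg)
    return time.perf_counter() - start

# Illustrative rate: 50 messages paced at 100 messages/second.
elapsed = paced_publish([f"event-{i}" for i in range(50)], rate_per_sec=100)
```

The same pacing logic works regardless of the transport, which means it can be proven early against whichever integration technology the programme settles on.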
Large-scale transformation programmes can seem overwhelming, but compartmentalising them will make the job of performance testing them a lot easier.
It’s all about making sure the boundaries of what is being performance tested are known at an early stage in the programme, working closely with your colleagues in development and test, and isolating applications rather than trying to performance test the whole solution in parallel.
It also helps to maintain a regular understanding of the direction of the programme and the challenges it is facing, and to keep in contact with the QA and development teams.