In this blog post we are going to look at some uncommon performance tests. By this we mean scenarios that, in our experience, are not routinely executed but are run periodically at best.
These uncommon scenarios should not necessarily take priority over the more common performance scenarios, but they do add value by stressing parts of your application under test that the more conventional tests may miss.
We will discuss each scenario in turn and look at the benefits and some of the difficulties you may experience in designing these scenarios. We will also take time to give examples of when these scenarios would be useful.
Concurrent activity tests
You would think that this scenario is executed more often than it is. I am sure you have, as I have, seen numerous examples of websites crashing during big events, whether concerts or sales. Equally, when new platforms or services launch, the attention they initially attract can be uncharacteristically high. While these spikes may not recur after the initial launch, poor performance at this crucial stage in your application's life may put people off using it altogether.
What we are asking you to consider, when determining whether this scenario should form part of your performance testing, is whether there is a transaction, or a number of transactions, that under unusual conditions like those outlined above would be hit with a high degree of concurrency.
If there are, you should consider building a rendezvous point into your tests, or experimenting with how many concurrent requests the service or business transaction can accept while still meeting its non-functional requirements. Look at the impact on the system as a whole and understand whether the concurrent requests will cause your application to crash or stop performing for their duration. The data you gather from these tests will tell you whether, if performance is poor, you need a code fix or more hardware resources.
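As a concrete illustration, here is a minimal sketch in Python of a rendezvous point: a `threading.Barrier` holds every virtual user until the last one arrives, so all requests fire at the same instant. The names here (`fire_concurrent_requests`, `request_fn`) are ours, and `request_fn` stands in for whatever business transaction you are testing; load-testing tools such as LoadRunner or JMeter provide their own rendezvous mechanisms.

```python
import threading
import time

def fire_concurrent_requests(n_users, request_fn):
    """Release n_users threads at the same instant (a rendezvous point)
    and collect each request's latency in seconds."""
    barrier = threading.Barrier(n_users)
    results = []
    lock = threading.Lock()

    def worker():
        barrier.wait()  # every thread blocks here until the last one arrives
        start = time.perf_counter()
        request_fn()
        elapsed = time.perf_counter() - start
        with lock:
            results.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Comparing these latencies against the same transaction run sequentially shows you how much the concurrency itself costs.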
The other thing this type of scenario is good for is examining the business volumes you are load testing under and determining whether what your scenario is doing is correct.
Let’s consider a system that processes 3600 transactions per hour based on business data metrics. If you built a test that executed 1 transaction per second over an hour, you would feel you were testing against business requirements.
But consider that all the transactions for that hour happen in the first 10 minutes and the system is relatively idle for the remainder of the hour. This is a common pattern on systems that are not operational 24/7 and are started at the beginning of the working day. You would have tested at 1 transaction per second when in reality you should have tested at 6 transactions per second to accurately measure the impact of the load on the system.
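The arithmetic above can be captured in a small helper. This is our own illustrative function, not from any tool; `active_minutes` names the window in which the hour's load actually arrives:

```python
def required_rate(tx_per_hour, active_minutes=60):
    """Transactions per second the test must drive if the hour's load
    actually arrives inside `active_minutes` of that hour."""
    return tx_per_hour / (active_minutes * 60)

required_rate(3600)                     # spread evenly: 1.0 tx/s
required_rate(3600, active_minutes=10)  # concentrated in 10 minutes: 6.0 tx/s
```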
Of course, at such low volumes it does not look that bad, but with hundreds or thousands more transactions it quickly becomes a significant issue.
Long Soak / Endurance Test
We have added this one not because a soak / endurance test is uncommon but because the durations are normally not long enough. If you have a system that runs 24/7/365, how long are you soak testing for: a couple of hours, a day, a weekend?
A system that is going to be available with no regular restarts, albeit load balanced with redundancy, should be soak tested for at least a week. The volumes do not need to be as diverse as you would see in production, but in our opinion a week-long soak / endurance test is something you should consider. You do not need to make the scenario as complex as your shorter scenarios.
You are looking to include simple business transactions that you know will not consume data that might run out and that are stable from a code perspective. If your scenario fails overnight while you are not monitoring it, resume the test in the morning, as long as the reason for the failure was an issue with the tool you are using or the server it runs on, not the application under test. Having a test run continually for this long means you can share and demonstrate the system performing in real time.
Any or all members of the project team can monitor or use the application in real time under load, which will give the team real confidence that the application is stable.
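A minimal sketch of this "resume rather than abort" idea, assuming a simple harness you control yourself (the name `run_soak` and its parameters are ours, not from any particular tool): on a harness-side failure the loop pauses briefly and carries on instead of ending the soak.

```python
import time

def run_soak(transaction, duration_s, pause_on_error_s=60):
    """Drive `transaction` repeatedly until `duration_s` elapses.
    A failure is counted and the loop pauses, then resumes, so one
    overnight blip does not end a week-long soak. In a real harness
    you would also log the failure for triage the next morning."""
    stats = {"ok": 0, "failed": 0}
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        try:
            transaction()
            stats["ok"] += 1
        except Exception:
            stats["failed"] += 1
            time.sleep(pause_on_error_s)
    return stats
```

The failure count matters: if `failed` keeps growing and the cause is the application under test rather than the tooling, that is a finding, not a nuisance.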
Future Database Volume tests
As you are aware, a volume test is designed to determine the impact of database data volumes on the performance of your application. As tables grow you may start to see issues with missing indexes or poorly constructed SQL statements. Many organisations, when running performance tests, load data to the level of their current production systems. This is considered a very good test and one that should always be included in your performance testing.
Where this becomes uncommon is in pushing the volumes in your application's database tables well above what is currently in production, especially if you have no weeding policy in place to remove data from your production databases.
Being able to tell your operational teams how much data your application can hold, assuming you can find its limits, is extremely useful for operational monitoring. If you know that your current production system holds, for example, 5 years' worth of data, and you then double the data in your test environment and your application continues to meet its non-functional requirements, you can say with confidence that your application can support another 5 years' worth of data growth with no performance degradation.
The objective of this performance test is not necessarily to double or even quadruple data volumes and still have an application that performs; the objective is to know the limits of data growth. Suppose you determine that you have 1 year's worth of growth before performance is affected and there is no easy code fix. That is not a reason for your application not to go live; you simply know you have 1 year to decide what you are going to do about it. Having this knowledge is extremely important and useful.
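Pushing table volumes past production levels needs a way to generate data in bulk. Here is a minimal sketch using Python's built-in `sqlite3` (the `orders` table and its columns are purely illustrative): it bulk-inserts synthetic rows, and `EXPLAIN QUERY PLAN` shows the full-table scan you would hit on a column with a missing index, exactly the kind of issue that only surfaces as volumes grow.

```python
import sqlite3

def populate(conn, n_rows):
    """Insert n_rows of synthetic order data so table volumes can be
    pushed well past current production levels."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(id INTEGER PRIMARY KEY, customer TEXT, amount REAL)"
    )
    conn.executemany(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)",
        ((f"cust-{i % 500}", float(i % 100)) for i in range(n_rows)),
    )
    conn.commit()

def plan_for_customer_lookup(conn):
    """Return SQLite's query plan for a lookup on the customer column,
    so we can see whether it scans the table or uses an index."""
    rows = conn.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer = ?",
        ("cust-1",),
    ).fetchall()
    return " ".join(str(r) for r in rows)
```

Running the lookup before and after `CREATE INDEX` on `customer` makes the difference between a scan and an index search visible, which is the whole point of testing at inflated volumes.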
The “this does not currently happen” test
When developing a new application, either to replace an existing system or to offer your customers new functionality or business processes, it is common to base your performance test volumes on what you do currently, or, for a brand-new application, on numbers derived from customer base, product counts and so on. What can happen when you launch a new or replacement system is that it gets used in a way you did not anticipate, perhaps in a way you have not had the chance to performance test. Clearly, once a new system has gone live and you understand better how it is being used, you can update your performance regression tests to reflect this.
We think another, more uncommon approach is, during your initial pre-go-live testing, to consider loads and concurrency that do not currently occur in any of your live systems. Stress your systems in ways that you or the business do not forecast, and under loads that are inconsistent with those you see in production today.
We understand that the more common approach is to base all your performance volumes and concurrency models on current usage or business practices. But new systems, or even replacements for existing ones, give users opportunities to work differently, and your performance testing scenarios should reflect this.
Unusual world events test
What is becoming more common is that world events can and do have an impact on how software and hardware are used. Technology platforms manage so much of the world's business and lifestyle that changes in any of these can have a huge effect on the volumes and concurrency our technology solutions must handle.
This scenario is probably the most uncommon of all, because events in the real world are unpredictable even to the people whose job it is to plan for them. When planning your performance testing, consider what would happen if your user base suddenly increased dramatically, or the products you sell became far more in demand, with demand possibly outstripping supply, and build scenarios that reflect the loads that would be placed on your systems if these things happened.
We mentioned in the introduction that these should not replace your standard set of performance tests. If time permits, though, they are worth considering and may even become a regular set of scenarios that you execute.
Technology is constantly changing and becoming more prominent in every aspect of our lives. Therefore, the way we performance test it has to change constantly as well. So the common performance scenarios that are widely used could, and maybe should, be complemented by the uncommon ones.
Comments
An endurance test is mandatory for any application headed for production, and it should not be filed under “uncommon performance testing”.
For different volumes of data in the database, you need a tool or program that can add more data to your tables. I have created a number of populating programs with parameters to configure how much data to add.
Dedicated concurrent activity tests are an uncommon performance test, but perhaps they should be used more often to find deadlocks or poor global-variable management in the code. They also require relatively few resources to run.
I would add a new uncommon performance test called “infrastructure load testing”. The purpose is to check the infrastructure's limits before deploying the application under test, especially in a new environment. Deploy a very simple application, such as an “echo response” JSP or servlet, a large HTML page to compress, and a 1MB image to download. Then call the echo JSP at a high rate, compress the large HTML page, and stress the network with the large image. Check the load balancing, the monitoring, the log files, the network throughput (number of 1MB images downloaded) and the compressed-HTML throughput.
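A minimal, self-contained sketch of that idea in Python, using a throwaway local server purely so the example runs anywhere (the 1 MB payload stands in for the image download; a real infrastructure test would of course target the actual environment through its load balancer):

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

PAYLOAD = b"x" * (1024 * 1024)  # stands in for the 1 MB image

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()
        self.wfile.write(PAYLOAD)

    def log_message(self, *args):  # keep per-request logging quiet
        pass

def measure_throughput(n_requests):
    """Download the payload n_requests times from a throwaway local
    server; return (total bytes moved, elapsed seconds)."""
    server = ThreadingHTTPServer(("127.0.0.1", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_address[1]}/"
    total = 0
    start = time.perf_counter()
    for _ in range(n_requests):
        with urllib.request.urlopen(url) as resp:
            total += len(resp.read())
    elapsed = time.perf_counter() - start
    server.shutdown()
    server.server_close()
    return total, elapsed
```

Dividing total bytes by elapsed seconds gives the raw transfer throughput the environment sustained, a baseline to compare against once the real application is deployed.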