Everywhere you look on social media it's DevOps, Agile methodologies, Continuous Integration, Continuous Delivery. You could be forgiven for believing that most organisations and programmes follow these principles.
This is not true.
Many companies use a Waterfall model, also known as a linear-sequential life cycle model. The Waterfall model is the earliest SDLC approach used for software development: it illustrates the development process as a linear sequential flow, in which each phase begins only once the previous phase is complete and no phases overlap.
It is difficult to put a percentage on the number of organisations that follow this model, but it is high; probably more than half of all software programmes take this approach. Many companies prefer it, and many still need to follow it.
This is due to the way that stakeholders manage the development and release of features, and many organisations need to develop and release software this way for regulatory or compliance reasons.
Many of the posts we publish focus on ways that performance testing fits into Continuous Integration and Continuous Delivery. The Waterfall model is not going to disappear any time soon, so it's time to look at how you could build performance testing for a Waterfall model. It is not correct to say that Waterfall is the way software was developed, or that Continuous Integration is the way software should be developed; it comes down to the individual organisation and the client the software is being developed for.
What is Waterfall
The stages in a Waterfall model are:
- Requirement Gathering and analysis − All possible requirements of the system to be developed are captured in this phase and documented in a requirement specification document.
- System Design − The requirement specifications from the first phase are studied in this phase and the system design is prepared. This system design helps in specifying hardware and system requirements and in defining the overall system architecture.
- Implementation − With inputs from the system design, the system is first developed in small programs called units, which are integrated in the next phase. Each unit is developed and tested for its functionality, which is referred to as Unit Testing.
- Integration and Testing − All the units developed in the implementation phase are integrated into a system after testing of each unit. Post integration the entire system is tested for any faults and failures.
- Deployment of system − Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
- Maintenance − There are some issues which come up in the client environment. To fix those issues, patches are released.
Some situations where the use of the Waterfall model is most appropriate are:
- Requirements are very well documented, clear and fixed.
- Product definition is stable.
- Technology is understood and is not dynamic.
- There are no ambiguous requirements.
- Ample resources with required expertise are available to support the product.
- The project is short.
Before we start to look at how performance testing fits into a Waterfall Model, we are going to look at things you should avoid.
We will then move on to how you can leverage some of the benefits of the Waterfall Model to support your performance testing.
How you can avoid the pitfalls
If you are building a performance testing capability for a piece of software being developed using a Waterfall Model, there are a few things you need to avoid.
Avoid leaving your non-functional requirements undefined during the Requirements Analysis phase of the model. If they are not captured here, they cannot be included later in the programme. Make sure these requirements are signed off alongside the application requirements. You have the whole of the System Design stage to define your testing tools and your performance testing coverage, so avoid doing that during the Requirements Analysis phase.
Use this time in System Design wisely as once you move further through the Waterfall Model you will not have this time available to you.
Use the Implementation phase to start building your tests. As each of the components is completed, build your tests and execute them in isolation. The software will have been written and will have passed Unit Testing at this point, so do not wait until the Testing phase to begin. You want the test scripts to be completed before the end of the Implementation stage; avoid script development in the Testing phase if you can.
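Testing components in isolation during Implementation can be as simple as driving each unit under concurrent load and recording latency statistics. Here is a minimal sketch using only the Python standard library; `process_order` is a hypothetical stand-in for the component under test, and the call counts and worker numbers are illustrative.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def process_order(order_id):
    """Stand-in for the unit under test; replace with a call to the real component."""
    time.sleep(0.001)  # simulate a small amount of work
    return order_id

def measure_component(fn, calls=50, workers=10):
    """Call fn concurrently and return latency statistics in milliseconds."""
    latencies = []
    def timed_call(i):
        start = time.perf_counter()
        fn(i)
        latencies.append((time.perf_counter() - start) * 1000)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(timed_call, range(calls)))
    return {
        "count": len(latencies),
        "mean_ms": statistics.mean(latencies),
        "p95_ms": statistics.quantiles(latencies, n=20)[-1],  # 95th percentile
    }

stats = measure_component(process_order)
print(stats["count"])  # 50
```

A harness like this lets you feed results back to developers per component, long before the integrated system exists.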
Execute all your tests and scenarios as soon as the Testing phase begins and feed back results as soon as possible. Avoid waiting until the functional testing is completed, or you may run out of time.
One of the major drawbacks of the Waterfall model is that if the programme is running late, testing is normally the phase that gets shortened to try and meet the original timescales. This is why it is important to test early.
How to benefit from the advantages
You have a whole Requirements Analysis phase to make sure your non-functional requirements are signed off. Use this time to ensure they are part of the design, as that will ensure they must be fixed if they are not met.
Use the System Design phase to determine what performance tools you will use and how you will approach performance testing. Spend time with the development teams to ensure you understand the technology stack being used.
Use the Implementation stage to run your performance tests on individual components. Do this so you can start to provide feedback to the development teams. At this point changes to code will be relatively easy.
How performance testing fits into Waterfall
We have spent some time looking at some of the pitfalls and some of the advantages that the Waterfall Model brings to performance testing. Let’s look at how we would approach performance testing from the start of the programme to its completion.
Firstly make sure that you have a testable set of non-functional requirements included and approved in the programme. There is a post here that will provide some guidance on how non-functional requirements can be made testable. These are just as important as the functional programme requirements.
This will ensure that these non-functional requirements become part of the formal sign-off, so they cannot be ignored or treated as a lower priority than the functional requirements. If performance is not clearly articulated at this stage, the application can technically go live with poor performance, because performance was never seen as one of the deliverables of the programme.
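One way to keep non-functional requirements testable is to express them as machine-checkable thresholds that a test run is compared against. The sketch below assumes requirements and measured results held as simple dictionaries; the metric names and numbers are hypothetical.

```python
# Hypothetical non-functional requirements expressed as checkable thresholds.
REQUIREMENTS = {
    "login_p95_ms": 800,      # 95th percentile response time for login
    "search_p95_ms": 1200,    # 95th percentile response time for search
    "error_rate_pct": 1.0,    # maximum acceptable error rate
}

def check_results(measured, requirements):
    """Return (metric, measured, limit) tuples for each breached requirement."""
    return [
        (name, measured[name], limit)
        for name, limit in requirements.items()
        if measured.get(name, float("inf")) > limit
    ]

# Illustrative results from a test run.
measured = {"login_p95_ms": 640, "search_p95_ms": 1500, "error_rate_pct": 0.2}
breaches = check_results(measured, REQUIREMENTS)
print(breaches)  # [('search_p95_ms', 1500, 1200)]
```

Writing requirements in this shape makes the sign-off criteria unambiguous: a run either breaches a threshold or it does not.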
With Waterfall you have plenty of time to define these so make sure they are correct and accurate. The System Design phase is where the development teams assess the requirements and start to consider the architecture and technology to be used to build an application that meets the defined and approved requirements. Make sure you are involved in this phase.
Understanding the technology being used will help you determine which performance tools you will need for your performance testing. Use this phase as an opportunity to run proofs of concept on performance testing tools, if necessary, to determine their suitability. Of course, you may already have a preferred performance testing tool, in which case this may not be necessary.
This is also the time to define your performance testing scenarios and approach. You will have a clear set of requirements that you need to satisfy, so the next important step is to clearly define how you will test them.
Programmes that follow the Waterfall model traditionally have more robust documentation than Agile programmes, and it is likely that one of the criteria for exiting the System Design phase is that all testing documentation is complete. Make sure you document, and subsequently get approved, all the performance testing you want to run.
While Waterfall is document intensive, the upside is that once your documents are approved by the programme, you will be fully supported in all the performance testing you have defined in them. The Implementation phase is the point at which you want to start writing your tests: as each unit is developed, a test can be written.
There will more than likely be longer journey tests that require several components to be integrated before the test can be written. But in the first instance, focusing on each delivered component will allow you to start building your performance tests.
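Once components are integrated, a journey test simply chains the component calls and times the whole flow end to end. A minimal sketch, where `login`, `search` and `checkout` are hypothetical stand-ins for the real integrated components:

```python
import time

# Hypothetical component calls; replace with the real integrated components.
def login(user):
    time.sleep(0.001)
    return {"user": user}

def search(session, term):
    time.sleep(0.001)
    return {"results": [term]}

def checkout(session, item):
    time.sleep(0.001)
    return {"ordered": item}

def run_journey(user, term):
    """Time a full user journey across the integrated components."""
    start = time.perf_counter()
    session = login(user)
    results = search(session, term)
    order = checkout(session, results["results"][0])
    elapsed_ms = (time.perf_counter() - start) * 1000
    return order, elapsed_ms

order, elapsed_ms = run_journey("alice", "book")
print(order)  # {'ordered': 'book'}
```

Because the component-level tests already exist, a slow journey can quickly be narrowed down to the component that is responsible.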
In theory at the start of the Testing phase you should have written most, if not all, of your tests and will now be able to start executing your scenarios.
This will be an iterative process in a Waterfall Model with a standard re-test lifecycle.
Once testing is completed and signed off there will be no requirement to performance test in the Deployment phase.
When the Maintenance phase is reached, you will need to ensure that you build a manageable and sustainable performance regression test pack. There is a blog post on this subject here.
The regression pack needs to be executed regularly and not just as part of maintenance or enhancement testing.
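A regular regression run is most useful when each result is compared against a stored baseline, with an agreed tolerance before a degradation is flagged. A sketch under assumed data shapes; the metric names, numbers and 10% tolerance are all illustrative.

```python
# Compare a regression run against a stored baseline, flagging any metric
# that has degraded beyond an agreed tolerance.
TOLERANCE = 0.10  # allow 10% degradation before flagging a regression

baseline = {"checkout_p95_ms": 900, "search_p95_ms": 1100}
latest   = {"checkout_p95_ms": 1050, "search_p95_ms": 1120}

regressions = {
    name: (baseline[name], latest[name])
    for name in baseline
    if latest.get(name, 0) > baseline[name] * (1 + TOLERANCE)
}
print(regressions)  # {'checkout_p95_ms': (900, 1050)}
```

Running this comparison on a schedule, rather than only during maintenance work, gives early warning of gradual performance drift.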
The Waterfall Model is not going to disappear for reasons we have discussed in the introduction.
There are benefits and drawbacks when it comes to performance testing.
The main benefit is more time to ensure that performance requirements are included in the programme requirements, making them part of the programme sign-off.
The drawbacks are that the performance testing window is likely to be relatively small. Performance testing can only be executed when the Testing phase begins, although some individual component testing could be run in the Implementation phase.
How you approach performance testing is closely linked to the development methodology, so hopefully this post has given you some insight into performance testing under the Waterfall methodology.