Purpose of this chapter
In this chapter we will discuss the design of your test scripts. This process involves many techniques, but first we will cover the two main types of test scripts. We will then look at what makes our test scripts as realistic as possible. Finally, we need a way to validate our test scripts before running the tests.
Virtual users are meant to run concurrently during a test, so we must be careful when designing them. They must be optimized, and sometimes they even need unique data. We must also keep design time reasonable, since load testing campaigns are usually short. At the same time, they must be realistic so that our tests are meaningful.
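To illustrate the unique-data point, here is a minimal sketch of giving each concurrent virtual user its own test data. The function and field names are invented for the example, not taken from any particular tool:

```python
# Sketch: give each concurrent virtual user its own unique test data,
# so no two users submit the same account at the same time.
# Names (make_user_data, vu_index) are illustrative only.

def make_user_data(vu_index: int) -> dict:
    """Derive unique, repeatable data from the virtual user's index."""
    return {
        "username": f"loadtest_user_{vu_index:04d}",
        "email": f"loadtest_user_{vu_index:04d}@example.test",
    }

# One record per concurrent virtual user, generated up front.
pool = [make_user_data(i) for i in range(100)]
```

Deriving the data from the user's index keeps it both unique and reproducible from one test run to the next.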
The good thing is, we are not interested in simulating every aspect of a real user. We are mostly interested in what generates load on the server. We have already covered this when talking about test scenarios earlier, but here we will see what it means when actually designing our scripts.
Virtual users types
A protocol based virtual user completely ignores the client device. It is focused only on the network traffic between client and server. It is usually recorded through a proxy or a network sniffer.
A client based virtual user, by contrast, replays in the exact same conditions as a real user. For a web application, that means launching a browser (often a headless one). Because of that, each client based virtual user gets its own browser instance during the test. Ideally, traffic is captured by recording actions in a real browser as well.
Virtual users comparison
Now that we have defined both types of virtual users, let's see what they are useful for.
Protocol based virtual users are more complex to design. Since they only replay the network traffic, any client side computation has to be scripted.
For instance, on a single page application:
- A first call will load the website and/or application logic,
- The user inputs information on several screens,
- A single call will send all this data at once, probably in JSON format.
Our script may be composed of only two requests even though we had many interactions with the application. The last call might even send the data in a different format than it was input. Reproducing such processing in a test script can take some work.
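The client side work described above is exactly what a protocol based script has to reproduce. A minimal sketch, assuming the application merges the inputs of several screens into one JSON document (the screen and field names are invented for the example):

```python
import json

# Reproduce in the script what the single page application does in
# the browser: merge the inputs of several screens into the single
# JSON payload the final request sends. Field names are invented.
screen_1 = {"first_name": "Ada", "last_name": "Lovelace"}
screen_2 = {"street": "12 Analytical Way", "city": "London"}
screen_3 = {"newsletter": True}

def build_payload(*screens: dict) -> str:
    """Merge per-screen inputs into the single JSON body the server expects."""
    merged: dict = {}
    for screen in screens:
        merged.update(screen)
    return json.dumps(merged)

body = build_payload(screen_1, screen_2, screen_3)
# `body` would be the payload of the single POST request in the script.
```

In a real script this transformation can be far more involved (computed fields, client side identifiers, signatures), which is precisely why protocol based scripting takes work.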
Another issue with protocol based scripts is that they can only measure the server response time. Since only the network traffic is replayed to the server, you can only measure how long the server takes to respond.
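Measuring that server response time is simple at the protocol level: you time the request round trip and nothing else. A self-contained sketch, using a throwaway local HTTP server as a stand-in for the real system under test:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the system under test: a tiny local HTTP server.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# Protocol based measurement: only the request/response round trip is
# timed, so the result reflects server (plus network) time, never
# client side rendering.
start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    reply = response.read()
server_response_time = time.perf_counter() - start

server.shutdown()
```

Whatever happens in the browser afterwards (parsing, rendering, JavaScript execution) is simply outside what this measurement can see.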
At this point you might be wondering why bother with protocol testing at all. After all, running a real client for every virtual user would solve most of these issues, right?
Well yes and … no.
First, having the server response time is also a good thing, because it reflects only server behavior. That way you can test the server's performance under load without being influenced by client side performance.
Client based testing is much easier to prepare. You do not have to understand the underlying logic; instead, you define the interactions as a real user would. That is particularly useful when dealing with very complex applications that change often. This way you avoid being stuck in a design loop where scripting takes so long that the application has already changed by the time it is done.
Emulating the client will also give you a more realistic response time. But the response time in the browser highly depends on your machine configuration. Put simply, the more CPU you have, the faster your application will be. But how can you know the configuration of real users? You might think it best to use the smallest machine possible, but that can lead you into an optimization loop just to satisfy a small percentage of your users.
Another problem is that emulating a client per virtual user is expensive. You might think that increasing computing power makes this easier to overcome, but in reality the more computing power is available, the more resource hungry applications become. Because of that, it is still difficult to generate many clients from a single machine.
Another topic is that browsers are headed toward multi-core page rendering. Previously, a good practice was to launch one browser per available CPU core, minus one core for the central process that computes results. But with multi-core rendering, how can we reliably simulate real browsers during a test? Launching two browsers on the same machine will make both of them slower, so the higher the load you simulate per machine, the higher the response times. If you instead use a large machine to simulate many browsers, the opposite can happen: when only a few browsers are active at a time, they can use the abundant resources to get a response time far better than what your real users can expect.
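The old rule of thumb mentioned above is easy to express, though with multi-core rendering it is at best a starting point. A tiny sketch (the helper name is mine):

```python
import os

def browsers_per_machine(cores: int) -> int:
    """Old rule of thumb: one browser per CPU core, minus one core
    reserved for the central process that computes results."""
    return max(1, cores - 1)

# Size for the current machine; os.cpu_count() may return None.
cores = os.cpu_count() or 1
n_browsers = browsers_per_machine(cores)
```

With browsers now spreading a single page's rendering across several cores, a number derived this way should be validated by measurement rather than trusted blindly.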
Best of both worlds
In my opinion, pure client based load testing is a mistake. Unless the situation gives you no choice, the benefits are small compared to the costs and risks. Worse still in our situation, a server under load will have little to no impact on the client side rendering time. Based on this reasoning, I recommend running the load through protocol based users; once prepared, they are much more efficient resource-wise. Then run a client based user alongside to get the end-user response time. You do not need to run more than a few to do that.
This way, during your tests you will see both the server and the client response times, which is a good way to decompose the time spent on the different layers.
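The recommended split can be captured as a small test-plan sketch: many cheap protocol based users generate the load, while a handful of full browsers only sample end-user response time. The numbers and keys below are illustrative:

```python
# Illustrative mixed test plan: the server load comes from cheap
# protocol based users; a few full browsers only sample the
# end-user (client side) response time under that load.
test_plan = {
    "protocol_users": 500,  # generate the actual server load
    "browser_users": 3,     # measure end-user response time only
}

# Share of the load generated at the protocol level.
load_share = test_plan["protocol_users"] / sum(test_plan.values())
```

The exact numbers depend on your scenario; the point is the ratio, with almost all of the load carried by the protocol based users.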