Recording - Methodology Design



This time we discuss how to create a virtual user and which external resources make sense to test.

Recording and transactions


Preparing your test scripts manually would be error-prone, and creating every request from scratch is a tedious process. This is why it is easier to capture real traffic.

The most common solutions include browser network recording or using a proxy. When doing so, always use a new private window or clear your cache/cookies beforehand; otherwise you might not record some of the key processes of your application. Note that with your browser you will record traffic from your application only, whereas a proxy might record every background process occurring on your computer, in which case it is important to filter out unnecessary calls.
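If you use a proxy like mitmproxy for the recording, that filtering can even be automated. Here is a minimal sketch of such an addon, assuming mitmproxy is installed; the host names are placeholders to adapt to your own machine:

```python
# drop_noise.py - minimal mitmproxy addon sketch, run with: mitmdump -s drop_noise.py
# Assumption: mitmproxy is installed; the host names below are placeholders.
from mitmproxy import http

# Background traffic we do not want in the recording.
IGNORED_HOSTS = ("update.example-os.com", "telemetry.example-antivirus.com")

def request(flow: http.HTTPFlow) -> None:
    # Kill calls to ignored hosts so they never show up in the capture.
    if flow.request.pretty_host.endswith(IGNORED_HOSTS):
        flow.kill()
```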

Once you have recorded all this traffic, it is critical to group the various requests. These groups are usually called transactions or pages. They matter first for clarity in your script: you must be able to tell quickly which request belongs to which page. They matter for reporting as well, since these transactions will have statistics attached to them. So try to use short but accurate names to describe them, and prefix them with numbers to make the report easier to read and sort.

Example

Transactions example
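As a minimal sketch of what numbered transactions can look like, assuming a Locust script (most load testing tools provide an equivalent naming mechanism, and the URLs below are placeholders):

```python
# Minimal Locust sketch: each request is named after its transaction,
# with a numeric prefix so the report stays easy to read and sort.
# Assumption: Locust is installed; URLs are placeholders.
from locust import HttpUser, task, between

class Visitor(HttpUser):
    wait_time = between(1, 3)

    @task
    def browse(self):
        # "name" groups the request under a transaction in the statistics.
        self.client.get("/", name="01_Home")
        self.client.get("/catalog", name="02_Catalog")
        self.client.get("/catalog/item?id=42", name="03_Item_Details")
```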

Mobile native applications


When recording traffic from your browser, the device does not matter, unless there is a dedicated application to which each type of device is directed. That is easy to check by switching the user agent in Chrome, for instance.
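Here is a quick sketch of such a check, assuming Python with the requests library; the URL and user agent strings are placeholders:

```python
# Quick check: does the server answer differently for a mobile user agent?
# Assumption: the requests library is installed; URL and user agents are placeholders.
import requests

URL = "https://www.example.com/"
USER_AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "mobile": "Mozilla/5.0 (Linux; Android 13; Pixel 7) AppleWebKit/537.36 Mobile",
}

for device, agent in USER_AGENTS.items():
    response = requests.get(URL, headers={"User-Agent": agent})
    # A different final URL or content size hints at a dedicated mobile application.
    print(device, response.url, len(response.content))
```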

But for purely native applications it gets a bit more difficult. They still communicate over HTTP/S, but the whole purpose is to avoid the browser. In this case you have to rely on a proxy recorder like Fiddler or Charles Proxy.

The setup involves a lot of steps but is not complicated. What you need is a WiFi network available to both the computer and the mobile phone. Then you basically use the computer as a proxy for the mobile device. The procedure is documented in one of our blog posts.
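As a rough sketch of the principle, assuming mitmproxy is used as the proxy recorder (Fiddler and Charles Proxy work along the same lines): the computer listens on the WiFi interface and the phone points its proxy settings at it.

```python
# record_mobile.py - mitmproxy addon sketch for recording native mobile traffic.
# On the computer: mitmdump -s record_mobile.py --listen-host 0.0.0.0 --listen-port 8080
# On the phone: set the WiFi proxy to <computer IP>:8080 and install the mitmproxy
# CA certificate so HTTPS can be decrypted. File name and port are assumptions.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Append every request coming from the phone to a text file for later scripting.
    with open("recorded_requests.txt", "a") as out:
        out.write(f"{flow.request.method} {flow.request.pretty_url}\n")
```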

You could achieve the same with a network sniffer like Wireshark, but the setup required to record HTTPS is much more complex, and even then the traffic cannot be exported from Wireshark efficiently.

External resources


After recording your application's traffic, you should see a lot of external calls. Some might be relevant to test; others can be safely removed.

All web applications include this kind of traffic for various purposes:

  • Statistics
  • Delivering static content
  • Advertisement
  • Sponsored links
  • Social media

You will notice that some of them are static and don't matter much, like a Facebook thumbs-up icon. But others might be a lot more dynamic and thus complicated to handle. Their sheer number can also complicate your work, because they flood your results.

As with everything in life, there is no definitive answer as to what to keep or remove, but we can try to provide a few guidelines.

External resources guidelines

Does it have an impact on my servers?

Your first priority should be to test your own servers, so you want to keep all related calls, especially when you host all the resources yourself. Testing under real conditions means reproducing all of this traffic.

Am I likely to bring Google down by myself?

Depending on the third parties you rely on, it may not be relevant to test them. A few hundred requests per second won't bring Google's servers down, and testing them thoroughly would otherwise be expensive or complex. Not counting the risks, since load testing a third party may go against its terms of service.

Is there a large impact on response time or load time?

You should consider large files even when they are hosted by third parties, and even when you have established that these third parties are not a risk. The main reason is that you want response times that are as accurate as possible. Maybe these files are part of your application, or maybe they only slow it down under certain network conditions. Even if they are hosted by a reliable CDN, the impact is still real and users will see it, so you want to be aware of possible optimizations.

Is it even part of my page response time?

Consider statistics services: they usually execute a script on the page after it has loaded, which means they have no impact on the time each page takes to load. They also make complex calls that can take work to implement correctly, all that to add your load test to your production statistics. It is not beneficial in the end, since it will pollute your metrics (bounce rate, conversion rate, etc.). You can safely remove them from the equation.

Does it make sense to test from different locations?

If so, you should consider including them in your tests. Even a Content Delivery Network can perform poorly from another location, in particular if it is not configured well: it may take too much time to replicate data over the various nodes. That can be tricky to check manually; on the other hand, a distributed load test will allow you to see response times from different locations.

How much could it cost me?

Some of these third-party services bill based on your usage, which means the load test could generate additional costs or trigger billing alerts and limitations. That is still a valid test in the end, but you are better off anticipating these costs.

Resource selection example
