Running a load test to get a live report

This last OctoPerf tutorial guides you through the following steps:

  1. Creating a runtime Scenario in OctoPerf (the equivalent of a JMeter Thread Group, and much more),
  2. Configuring its User Profiles,
  3. Launching the test,
  4. Analysing the results while the load test is still running,
  5. Exporting the report when the test is complete.

Transcript

0s

Hello and welcome to this new tutorial.

2s

This is the 6th tutorial on OctoPerf's new version.

6s

Today we will launch a test and look at the reporting.

9s

We will use all the work we've done in the previous videos.

13s

Let's move to the runtime tab and configure our test.

17s

The test configuration relies on several load profiles.

21s

There is a default profile, "as recorded", in every runtime scenario, but we can configure it and add new profiles from the left menu.

31s

Each profile corresponds to one or several individual JMeter instances launched during the test, giving a lot of flexibility in the configuration.

41s

For instance if we take this profile, we can select the virtual user it will launch and the maximum number of concurrent users.

49s

We can also change the duration of the ramp-up phase, or of the rest of the test,

54s

and the upper section immediately updates to show us how many users will be running at any given time.
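As a rough illustration (hypothetical numbers, not OctoPerf's internal code), the preview boils down to a linear ramp-up followed by a plateau, something like:

```java
// Hypothetical sketch: concurrent users at time t for a linear ramp-up
// followed by a constant plateau. The numbers below are only examples.
public final class LoadProfilePreview {

    static int usersAt(double tSeconds, int maxUsers, double rampUpSeconds) {
        if (tSeconds <= 0) {
            return 0;
        }
        if (tSeconds >= rampUpSeconds) {
            return maxUsers; // plateau after the ramp-up phase
        }
        return (int) Math.ceil(maxUsers * tSeconds / rampUpSeconds);
    }

    public static void main(String[] args) {
        // Example: 50 users, 60 s ramp-up
        for (int t = 0; t <= 120; t += 30) {
            System.out.printf("t=%3ds -> %d users%n", t, usersAt(t, 50, 60));
        }
    }
}
```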

61s

In this tab we can also override the think times.

65s

Think times apply to every request and depending on the purpose of our test,

70s

we might want to use no think time at all or even a random value.

74s

The last option allows for a variable think time that guarantees every user generates a certain number of hits per unit of time.
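This is essentially throughput-based pacing. A minimal Java sketch of the idea, assuming a hypothetical ThroughputPacing helper rather than the actual OctoPerf or JMeter timer:

```java
// Hypothetical sketch of throughput-based pacing: before each request, the
// virtual user waits just long enough so that it issues roughly
// `targetHitsPerMinute` hits per minute, whatever the previous response time was.
public final class ThroughputPacing {

    private final long intervalMillis;
    private long nextSlot = System.currentTimeMillis();

    ThroughputPacing(double targetHitsPerMinute) {
        this.intervalMillis = (long) (60_000 / targetHitsPerMinute);
    }

    /** Blocks until the next request may start (the variable "think time"). */
    void waitForNextSlot() throws InterruptedException {
        long delay = nextSlot - System.currentTimeMillis();
        if (delay > 0) {
            Thread.sleep(delay);
        }
        nextSlot = Math.max(System.currentTimeMillis(), nextSlot) + intervalMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        ThroughputPacing pacing = new ThroughputPacing(120); // ~2 hits per second
        for (int i = 0; i < 5; i++) {
            pacing.waitForNextSlot();
            System.out.println("hit at " + System.currentTimeMillis());
        }
    }
}
```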

83s

Moving on to the locations, every profile can be launched from a different zone, either from the cloud or from on-premise locations.

91s

Let's launch this one from our previously installed agent.

95s

The browser tab allows us to select the user agent sent in every HTTP/S request.

102s

Bear in mind that these are not real devices, just a different User-Agent header in the HTTP/S requests.
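To make that concrete, here is a minimal, hypothetical Java example (placeholder URL) showing that "simulating" a device is nothing more than sending a different User-Agent header:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal illustration: the "device" is only a User-Agent header.
public final class UserAgentExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))   // placeholder URL
                .header("User-Agent",
                        "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)")
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```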

108s

If we are looking for a complete page load time, it is better to use our Selenium integration.

113s

There we can also reinitialize the user context by clearing cookies and cache on every run of a virtual user.

120s

Next is the bandwidth tab, where we can limit every user's bandwidth to one of the presets.
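As an illustrative aside, JMeter-style tools usually emulate bandwidth by capping the characters per second (cps) a connection may transfer, commonly derived as cps = kbit/s × 1024 / 8; the preset names and speeds below are examples, not OctoPerf's actual list:

```java
// Hypothetical sketch: converting a bandwidth preset (kbit/s) into a
// characters-per-second cap, the usual JMeter-style throttling unit.
public final class BandwidthPreset {

    static long kbitPerSecondToCps(double kbitPerSecond) {
        return (long) (kbitPerSecond * 1024 / 8);
    }

    public static void main(String[] args) {
        // Example speeds only, not OctoPerf's real presets.
        System.out.println("3G   (~1600 kbit/s) -> " + kbitPerSecondToCps(1600) + " cps");
        System.out.println("ADSL (~8000 kbit/s) -> " + kbitPerSecondToCps(8000) + " cps");
    }
}
```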

126s

The DNS tab is useful when dealing with a CDN or any other DNS-based resolution.

133s

Since JMeter, as a Java-based tool, tends to do only one DNS resolution for all users running on the same machine,

141s

we can force an individual resolution for every user and even specify a list of DNS servers to use.
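To illustrate the underlying issue: the JVM caches successful DNS lookups, so by default every user on a machine keeps hitting the same resolved IP. A minimal Java sketch of the workaround, with a placeholder host name (using specific DNS servers additionally requires a dedicated resolver, as JMeter's DNS Cache Manager provides):

```java
import java.net.InetAddress;
import java.security.Security;

// Rough sketch: disable the JVM's positive DNS cache so every lookup hits the
// resolver again, letting each virtual user potentially get a different IP
// (useful behind a CDN or any DNS-based load balancing).
public final class PerUserDnsResolution {
    public static void main(String[] args) throws Exception {
        Security.setProperty("networkaddress.cache.ttl", "0"); // no caching of successful lookups

        for (int user = 1; user <= 3; user++) {
            InetAddress[] addresses = InetAddress.getAllByName("example.com"); // placeholder host
            System.out.println("user " + user + " resolved " + addresses[0].getHostAddress());
        }
    }
}
```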

148s

As updating multiple profiles can be tedious, this button provides a way to apply a setting to all profiles at once.

156s

When the load policy is configured the way we want, we can just hit the Launch button.

160s

We will automatically start the required machines in the different zones.

165s

This process can take between 5 and 10 minutes depending on the number of required machines.

170s

If we launch a smaller test, or a full on-premise test, the startup is much shorter, at about 2 minutes.

177s

In any case, the test initialization logs will tell us what's happening.

182s

When the test is up and running we can access all the metrics in real time.

187s

Bear in mind that everything in this report can be configured, from graphs to blocks of text.

192s

And at any moment we can drag and drop a new graph to show the relevant counters.

197s

For instance on this first graph we can see the hits and response times.

202s

But if we prefer to compare our on-premise agent with the cloud, we can add a curve and use filters.

210s

We can of course select metrics other than the response time,

213s

there is a list of metrics right here, with hits, connect time, latency, error rate and many others.

222s

But let's stick to the average response time.

225s

Filters can be applied on zones, load generators or runtime profiles.

229s

Then we can choose to display response time for requests, containers or any element of the user profile.

237s

Now that we have filtered on the on-premise agent, let's edit the response time curve to compare it with our cloud load generators.

246s

Another thing we could do with this graph is to show a monitoring metric along with the response time.

252s

I'll remove the hits and add a new curve but this time we will go to the monitoring metric tab.

258s

In there we have a list of all the available monitors, including load generator monitoring.

264s

I will select the CPU idle metric from our Linux monitor and add it to the graph.

270s

We now have all the information we need to correlate the response times with the CPU of our server.

276s

Other elements of this report include the result table, which is very useful to get an overview of the response times for all requests.

284s

It can be edited to show additional metrics or just display containers instead of requests.

291s

This way, if we named all our containers, we get a very easy-to-understand list of response times.

297s

As we can see, we get a lot of errors on this particular step, so let's move forward to look into this.

303s

There are a lot of other graphs in between but we won't take time to look at all of them,

308s

just keep in mind that we can configure which metrics they display.

312s

We can add or remove the same kind of graphs from the left menu as well.

318s

In the error section we first see the error rate, which is quite high during the whole test.

323s

A quick look at the pie charts tells us that a percentage of the responses are 500 error codes.

328s

It does not seem very high, but we know these errors all happen on one particular step of the virtual user, which makes it problematic.

336s

To go deeper, we can next look at the list of all errors, with details available for each.

341s

We can see the error code, the request and the response, and that way we can analyze what the problem is.

351s

Another quick way to understand what is happening on our servers is to use the thresholds table.

356s

There we can see if an alert was raised, on which machine and for how long.

361s

It really helps correlate information.

365s

When the test is finished everything we configured will be available in the report.

371s

This report contains no mention of OctoPerf; feel free to use it as is.

375s

We can add our company name and logo on top of it and save it as a PDF file.

381s

This tutorial is now finished.

383s

More will come in the future, but this one was the last of our series on the new OctoPerf version.

389s

Keep in mind that everything you have seen can be achieved through your free account.

393s

I hope you found it useful; please comment under the video or send us your questions through the chat inside OctoPerf.

399s

Thanks for watching, take care, bye.