Gatling FrontLine 1.6 is out!
You will find our latest release note here: https://gatling.io/docs/frontline/1.6.0/FrontLine-Release-Notes.pdf
It ships many new features, and Gatling FrontLine is now easier to use on AWS Marketplace! It will take a few days for the AWS Marketplace version of Gatling FrontLine to be updated, but we will let you know as soon as it is available.
If you are not a user of Gatling FrontLine yet, test it now on AWS Marketplace or contact us to evaluate it!
By the way, next week we will be attending AWS re:Invent. Come say hi if you are there!
Feel free to send us your feedback!
The Gatling team
Let’s cut right to the chase and state how it is: mean and standard deviation aren’t useful in load testing.
Most of our time is spent looking at metrics, so we need to make sure it is spent as efficiently as possible. With that in mind, which metrics should we use to get a clear view of what is happening at any point in time? Are they actually useful? This series is all about digging into common metrics, understanding their common pitfalls, and avoiding missing changes in your application's behavior while load testing.
The mean, or arithmetic average, describes the central value of a data set, and is defined as the sum of all values divided by the number of values. Hence, for n values:

x̄ = (x₁ + x₂ + ⋯ + xₙ) / n
The arithmetic average, also written x̄, is a summary of central tendency; it is easy to use, easy to compute, and widely used.
The variance is a bit more involved. It describes how much values are spread around the mean. It is computed by subtracting the arithmetic average from each value of the data set, squaring the result, then averaging over all values:

σ² = ((x₁ − x̄)² + (x₂ − x̄)² + ⋯ + (xₙ − x̄)²) / n
Before we dive into its actual meaning, let's go straight to the standard deviation.
The standard deviation is simply the square root of the variance, and it is expressed in the same unit as the mean, whereas the variance is expressed in squared units. You can use either one as long as you are rigorous about which units you are using:

σ = √σ²
It is easier to think about the standard deviation as a description of variability rather than through its formula. In fact, this is all the mathematics we'll see today. Hope you're okay.
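The three definitions above can be sketched in a few lines of code. This is an illustrative snippet, not Gatling code; the response times are hypothetical values in milliseconds.

```python
import math

def mean(xs):
    # Sum of all values divided by the number of values.
    return sum(xs) / len(xs)

def variance(xs):
    # Average of the squared deviations from the mean.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def std_dev(xs):
    # Square root of the variance, in the same unit as the mean.
    return math.sqrt(variance(xs))

response_times = [120, 150, 130, 400, 125]  # hypothetical response times (ms)
print(mean(response_times))     # 185.0
print(std_dev(response_times))  # ~107.98
```

Note how the single slow response (400 ms) already pulls the mean well above the typical value, a point we'll come back to below.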
Distributions with the same arithmetic average can sometimes be differentiated by their standard deviation:
Sadly, when using the variance and/or standard deviation, you need to know which kind of distribution you are dealing with. Knowing how much your data set is spread around the mean doesn't account for much if you have no idea what the data looks like in the first place. Worse, how do you make sense of the standard deviation if your data is spread across multiple modes, i.e., a bimodal or multimodal distribution, as such:
Such a data set could be split into multiple sub data sets and studied individually. Arguably, that would be cumbersome, which would defeat our initial purpose of saving time when analyzing our data.
Furthermore, what happens when the mean and standard deviation are the same? Does that mean the data sets are the same? In fact, it is easy to craft distributions with this kind of property:
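Here is one hand-crafted toy example: a bimodal data set and a roughly unimodal one that share the exact same mean and standard deviation.

```python
from statistics import fmean, pstdev

bimodal  = [2, 2, 8, 8]          # two clusters, no value near the mean
unimodal = [0, 4, 4, 6, 6, 10]   # values concentrated around the mean

print(fmean(bimodal),  pstdev(bimodal))   # 5.0 3.0
print(fmean(unimodal), pstdev(unimodal))  # 5.0 3.0
```

Looking only at the mean and standard deviation, these two data sets are indistinguishable, yet their shapes tell very different stories.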
Some people went even further and squashed all sorts of shapes with the same average and standard deviation, on both axes, into a single animation:
As you now understand, the variance and standard deviation only truly make sense on Gaussian distributions, which are rarely encountered in the context of load testing. The most common cases are multimodal distributions, outliers or extreme values, long tails or skewed distributions, and so on.
The arithmetic average is very sensitive to outliers, and it won't tell us much about the shape of the distribution anyway. We will need a more powerful tool to deal with all these cases, which could be called extreme if they were not so common!
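That sensitivity to outliers is easy to demonstrate. In this sketch (hypothetical response times in milliseconds), a single timed-out request drags the mean far away from every typical value:

```python
from statistics import fmean

times = [100, 102, 98, 101, 99]   # five fast responses (ms)
with_outlier = times + [5000]     # one hypothetical timed-out request

print(fmean(times))          # 100.0
print(fmean(with_outlier))   # ~916.67 — nowhere near any actual response
```

One bad request out of six, and the mean reports a latency nine times higher than what almost every user experienced.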
As said earlier, these metrics are easy to use and compute. However, they are only reliable if the distribution is well behaved, i.e., symmetric. It is an understatement to say that this is rarely the case in the world of load testing.
Next time we’ll talk about metrics that are more robust and can handle these edge cases.
We released 3.0.0 three weeks ago. Since then, we’ve received a lot of feedback to help us stabilize Gatling 3. Thanks a lot to all of you! Gatling 3.0.1 is now out!
The most noticeable bug fixes are:
It also ships 4 new features:
Please check the full release notes for more details.
Stephane for the Gatling team
We continue our series of blog posts about HTTP/2. Gatling 3 now supports HTTP/2. We did a lot of research, and we want to share what we have learned along the way. Today's topic: weight and dependency.
If you want an introduction to what HTTP/2 is and the concept of multiplexing, check out our previous article here.
With multiplexing, you can have multiple requests on the same connection. But you may want to organize your requests, i.e., organize the way they are handled within the connection.
What if you want to prioritize one of them? What if you would like to execute one request before executing another one?
For that purpose, HTTP/2 gives you 2 mechanisms: stream dependencies and weights.
Let’s take a look at these concepts.
With multiplexing, you can trigger multiple requests at once.
When you launch a request, you create a stream. It’s possible to declare that this stream depends on the completion of another stream before it starts.
For example, if I have a request B that depends on the completion of a request A, I can declare this relationship in the stream associated with request B.
If it honors HTTP/2 dependencies, the server will begin by allocating its network resources to the completion of request A, and then proceed to the completion of request B (of course, this depends on the server implementation and on the optimizations that can be done for specific use cases).
But a dependency is not only a one-to-one relationship. I can set multiple dependencies on the same request. For example, I can declare that requests B, C and D all depend on the completion of request A. Furthermore, dependencies can have multiple levels.
As you can see, it is possible to build a whole tree of stream dependencies.
I can have this tree of dependencies:
Streams B and C have a common dependency on stream A. If I start a stream named D and declare it with a dependency on stream A as well, this is the dependency tree that we get:
But what if I want to add another level and make stream D depend on stream A while being executed before every other request? To do so, you can declare the dependency on stream A to be exclusive.
It's a flag that you can add on a dependency declaration, and it is false by default. Declaring stream D's dependency on stream A as exclusive makes D the only stream directly dependent on A: the former dependents of A become dependents of D instead. Therefore, by using the exclusive flag, the dependency tree will look like this:
HTTP/2 is able to fit any new stream into the existing dependency tree. Of course, you don’t have to use stream dependencies, you can just create “root” streams if you don’t want to bother with this mechanism.
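The re-parenting behavior of the exclusive flag can be sketched with a small toy model. This is illustrative only, not a real HTTP/2 implementation; the class and method names are made up for the example.

```python
class Stream:
    """A toy HTTP/2 stream node in a dependency tree."""
    def __init__(self, name):
        self.name = name
        self.children = []

    def add_dependent(self, child, exclusive=False):
        if exclusive:
            # An exclusive dependency makes the new child the sole
            # dependent: existing children are re-parented under it.
            child.children.extend(self.children)
            self.children = [child]
        else:
            self.children.append(child)

a, b, c, d = Stream("A"), Stream("B"), Stream("C"), Stream("D")
a.add_dependent(b)
a.add_dependent(c)
a.add_dependent(d, exclusive=True)  # D now sits between A and {B, C}

print([s.name for s in a.children])  # ['D']
print([s.name for s in d.children])  # ['B', 'C']
```

After the exclusive declaration, A's only child is D, and B and C hang off D, matching the tree shown above.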
What if we want to prioritize one request over another on the same level of the tree? Taking the same example, once requests A and D are done, I would like to allocate more network resources to request B than to request C.
In HTTP/2, when you create a stream, you assign it a weight: an integer between 1 and 256 (inclusive). For a lone root stream this integer is of little use, but when streams are siblings in a dependency tree, their weights represent the share of resources that should be allocated to each of them.
Say we assign stream B a weight of 10, and stream C a weight of 20. The total weight for this tree level is 10 + 20 = 30. Stream B will get 10/30 = 1/3 of the available resources, and stream C will get 20/30 = 2/3 of them. Once again, this is a hint for the server; nothing mandates following these weights precisely.
You now understand that with weights and dependencies, you can precisely describe how you want your network resources to be used when multiplexing.
However, these features are not widely used so far, since most people using HTTP/2 have switched over from HTTP/1 and don't yet configure how their requests are prioritized.
Most client implementations don't expose these mechanisms, and servers don't necessarily honor them. But nothing stops you from using these features in your server-to-server infrastructure to optimize things as much as you want.
Besides, I think these features will see wider adoption once HTTP/2 becomes more mainstream and protocol implementations become more mature.
Thanks for reading this article! Next week we are going to talk about server push in HTTP/2.
Alexandre for the Gatling team
Come and meet us at AWS re:Invent in Las Vegas, from November 26th to November 30th!
Gatling FrontLine is now available on AWS Marketplace, use it now for only $9: https://aws.amazon.com/marketplace/pp/B07DTWPZG8
Our booth is #2703, we can’t wait to meet you!
The Gatling team