We continue our series of blog posts about HTTP/2. Gatling 3 now supports HTTP/2. We did a lot of research along the way, and we want to share what we learned. Today’s topic: weight and dependency.
If you want an introduction to what HTTP/2 is and the concept of multiplexing, check out our previous article here.
With multiplexing, you can have multiple requests in flight on the same connection. But you may want to organize your requests, i.e., organize the way they are handled within the connection.
What if you want to prioritize one of them? What if you would like to execute one request before executing another one?
For that purpose, HTTP/2 gives you two mechanisms:

- stream dependencies
- stream weights
Let’s take a look at these concepts.
With multiplexing, you can trigger multiple requests at once.
When you launch a request, you create a stream. It’s possible to declare that this stream depends on the completion of another stream before it starts.
For example, say I have a request B that depends on the completion of a request A. I can declare this relationship in the stream associated with request B.
If it honors HTTP/2 dependencies, the server will begin by allocating its network resources to completing request A, and then proceed to the completion of request B (of course, this depends on the server implementation and the optimizations it can perform for specific use cases).
But the dependency is not only a one-to-one relationship. I can set multiple dependencies on the same request. For example, I can declare that the requests B, C and D depend on the completion of the request A. Furthermore, dependencies can have multiple levels.
As you can see, it is possible to build a whole tree of stream dependencies.
I can have this tree of dependencies:
The streams B and C have a common dependency on the stream A. If I start a stream named D and declare it with a dependency on the stream A as well, this is the dependency tree that we will get:
But what if I want to add another level and make the stream D depend on the stream A while being executed before every other request? To do so, you can declare the dependency on the stream A to be exclusive.
It’s a flag that you can set on a dependency declaration, and it is false by default. An exclusive dependency of the stream D on the stream A makes D the only stream dependent on A: the previous dependents of A are re-parented under D. By using the exclusive flag, the dependency tree will therefore look like this:
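The re-parenting behavior of the exclusive flag can be sketched in a few lines of Python. This is only an illustration of the tree manipulation described above; the `Stream` class and its method are made up for the example, not a real HTTP/2 library API:

```python
class Stream:
    """Toy model of an HTTP/2 stream in a dependency tree."""

    def __init__(self, name):
        self.name = name
        self.children = []

    def add_dependent(self, child, exclusive=False):
        """Declare `child` as dependent on this stream.

        With the exclusive flag, the new stream becomes the sole
        dependent: the previous dependents are re-parented under it.
        """
        if exclusive:
            child.children.extend(self.children)
            self.children = []
        self.children.append(child)


# The tree from the article: B and C both depend on A...
a, b, c, d = Stream("A"), Stream("B"), Stream("C"), Stream("D")
a.add_dependent(b)
a.add_dependent(c)

# ...then D declares an *exclusive* dependency on A:
a.add_dependent(d, exclusive=True)

print([s.name for s in a.children])  # ['D']
print([s.name for s in d.children])  # ['B', 'C']
```

After the exclusive declaration, D is the only child of A, and the former children B and C now depend on D.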
HTTP/2 is able to fit any new stream into the existing dependency tree. Of course, you don’t have to use stream dependencies; you can just create “root” streams if you don’t want to bother with this mechanism.
What if we want to prioritize one request over another one on the same level of the tree? Taking the same example, once requests A and D are done, I would like to allocate more network resources to request B than to request C.
In HTTP/2, when you create a stream, you can assign it a weight: an integer between 1 and 256 (inclusive). For a stream without siblings this integer has no effect, but when streams are siblings in the dependency tree, this number represents the share of the resources that should be allocated to each one of them.
Say we assign the stream B a weight of 10, and the stream C a weight of 20. The total weight for this tree level is 10 + 20 = 30. The stream B should get 10/30 = 1/3 of the available resources, and the stream C 2/3 of them. Once again, this is a hint for the server; it is not required to follow these weights precisely.
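This arithmetic can be captured in a tiny Python helper. The `shares` function is illustrative (not part of any HTTP/2 library); it just converts sibling weights into the fractions of resources each stream should get:

```python
from fractions import Fraction


def shares(weights):
    """Map each sibling stream's weight to its resource share."""
    total = sum(weights.values())
    return {name: Fraction(w, total) for name, w in weights.items()}


# Streams B and C are siblings under the same parent:
print(shares({"B": 10, "C": 20}))
# B gets 10/30 = 1/3 of the resources, C gets 20/30 = 2/3
```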
You now understand that with weight and dependency, you can precisely describe how you want your network resources to be used when multiplexing.
However, these features are not widely used so far: most people adopting HTTP/2 are migrating from HTTP/1.1 and don’t configure how their requests are prioritized yet.
Most client implementations don’t expose these mechanisms, and servers don’t necessarily honor them. But nothing stops you from using these features in your server-to-server infrastructure to optimize things as much as you want.
Besides, I think these features will be more widely used once HTTP/2 becomes more mainstream and protocol implementations mature.
Thanks for reading this article! Next week we are going to talk about server push in HTTP/2.
Alexandre for the Gatling team
One of the main features of Gatling 3 is HTTP/2 support. Its implementation is one of the main reasons we wrote a dedicated HTTP client for Gatling. If you want to know more about this new HTTP client, you can read our latest article, written by Stéphane Landelle, here.
HTTP/2, as its name suggests, is the second major version of the HTTP protocol. The previous version was HTTP/1.1, which was initially released in 1997.
In this article and the following ones, we are going to see what HTTP/2 is and explain the new features coming with this protocol.
Multiplexing: the ability to send different signals (here, requests) over the same communication link.
A limitation of the HTTP/1 protocol is that each HTTP connection can only handle one request at a time. This means that the only way to parallelize multiple requests is to open several HTTP connections. That is what browsers do: they open multiple connections (6 per remote host for most of them) to be able to launch up to 6 requests at a time to a given remote.
This is not very resource- or network-efficient. Opening more connections adds overhead for both the client and the server: a TLS handshake must be performed for each connection if you are using HTTPS, the TCP slow start process must be repeated, etc.
And what happens if you want to perform more than 6 requests at a given time? You guessed it: some of your requests are queued until one of the connections becomes available to handle them.
That explains the need for multiplexing. One TCP connection to rule them all.
There was one attempt to overcome this issue in HTTP/1.1: HTTP pipelining. With pipelining, an HTTP client can send multiple requests to a server at once. The problem is that the server has to answer all these requests in the order in which they were made by the client. This mandatory requirement leads to a performance-limiting phenomenon known as head-of-line blocking.
To picture it, imagine that a client sends a request A involving a lot of server-side computing, and a very easy-to-handle request B. The server, which can deal with several requests at once, processes request B very quickly, but can’t send the response because it is waiting for the end of A’s computation to respect the request order. A slow request becomes the bottleneck for all the following ones.
Regular HTTP request management vs HTTP pipelining
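A toy Python model can make head-of-line blocking concrete: with pipelining, a response cannot be sent before all previous responses, so a fast request is held back by a slow one. The function and the timings below are made up for the example:

```python
def pipelined_send_times(completion_times):
    """Time at which each response can be *sent* under pipelining.

    `completion_times` gives, per request (in request order), when its
    response is computed. A response cannot be sent before it is ready,
    nor before every previous response has been sent.
    """
    send_times = []
    latest = 0.0
    for ready in completion_times:
        latest = max(latest, ready)
        send_times.append(latest)
    return send_times


# Request A takes 5s to compute, request B only 1s.
# B's response is ready at t=1 but can only be sent at t=5, after A:
print(pipelined_send_times([5.0, 1.0]))  # [5.0, 5.0]
```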
This attempt was unsatisfactory, so the need for a good multiplexing system on top of HTTP remained.
HTTP/2 enables multiplexing without the issues faced by HTTP pipelining.
In HTTP/2, each request/response tuple is associated with a unique ID and is called a stream. When the client sends a request, it gives it an ID, and that ID is put in the server’s answer. Therefore, the client knows which request each answer belongs to, and there is no requirement to respect the request order as in HTTP pipelining. And therefore, no more head-of-line blocking!
Requests and responses belonging to different streams can flow through this single HTTP/2 connection at the same time.
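The client-side bookkeeping can be sketched as a dictionary keyed by stream ID: responses may arrive in any order, and each one is matched back to its request through the ID. The function names are illustrative, not a real client API:

```python
# Pending requests, keyed by the stream ID the client assigned.
pending = {}


def send_request(stream_id, request):
    """Record an in-flight request under its stream ID."""
    pending[stream_id] = request


def on_response(stream_id, status):
    """Match an incoming response to its request via the stream ID."""
    request = pending.pop(stream_id)
    return f"{status} answers {request}"


# Client-initiated stream IDs are odd in HTTP/2 (1, 3, 5, ...):
send_request(1, "GET /slow")
send_request(3, "GET /fast")

# The fast response arrives first; no head-of-line blocking:
print(on_response(3, "200 OK"))  # 200 OK answers GET /fast
print(on_response(1, "200 OK"))  # 200 OK answers GET /slow
```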
The HTTP/2 multiplexing
Thanks for reading this article! If you want to know more about HTTP/2, the next article in this series, about weight and dependency, is now available here!
Alexandre for the Gatling team