One of the main features of Gatling 3 is HTTP/2 support. Its implementation is one of the main reasons we wrote a dedicated HTTP client for Gatling. If you want to know more about this new HTTP client, you can read our latest article by Stéphane Landelle here.

HTTP/2, as its name suggests, is the second version of the HTTP protocol. The previous version was HTTP/1.1, which was initially released in 1997.

In this article and the following ones, we are going to see what HTTP/2 is and explain the new features this protocol brings.

Multiplexing in HTTP

What is multiplexing and why do we want it?

Multiplexing: the ability to send different signals (here, requests) over the same communication link.

A limitation of the HTTP/1 protocol is that each HTTP connection can only handle one request at a time. This means that the only way to parallelize multiple requests is to open several HTTP connections. That is what browsers do: they open multiple connections (6 per remote host for most of them) so they can issue up to 6 requests at a time to a given remote.

This is neither resource nor network efficient. Opening more connections adds overhead for both the client and the server: a TLS handshake must be performed for each connection if you are using HTTPS, the TCP slow start process must be repeated, etc.

And what happens if you want to perform more than 6 requests at a given time? You guessed it: some of your requests are queued until one of the connections becomes available to handle them.

That explains the need for multiplexing. One TCP connection to rule them all.

From Pipelining to HTTP/2

There was one attempt to overcome this issue in HTTP/1.1: HTTP pipelining. With pipelining, an HTTP client can send multiple requests to a server at once. The problem is that the server has to answer all these requests in the order in which the client made them. This mandatory ordering leads to a performance-limiting phenomenon known as head-of-line blocking.

To picture this, imagine that a client sends a request A that involves a lot of server-side computing, and a very cheap request B. The server, which can handle several requests at once, deals with request B very quickly, but cannot send its response: it has to wait for A's computation to finish in order to respect the request order. Slow requests thus become the bottleneck for all subsequent ones.
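This ordering constraint can be sketched as a toy model (plain Scala, not real protocol code; the request names and per-request costs are made up for illustration):

```scala
// Toy model of HTTP/1.1 pipelining; "costMs" is a made-up stand-in
// for server processing time, not a real protocol field.
case class Request(id: String, costMs: Int)

// The client pipelines a slow request A and a cheap request B.
val pipelined = List(Request("A", 500), Request("B", 10))

// The server may finish them in cost order...
val completionOrder = pipelined.sortBy(_.costMs).map(_.id)

// ...but pipelining forces responses onto the wire in request order,
// so B's ready response sits blocked behind A: head-of-line blocking.
val wireOrder = pipelined.map(_.id)

println(completionOrder) // List(B, A)
println(wireOrder)       // List(A, B)
```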

Regular HTTP requests management vs HTTP pipelining

This attempt was unsatisfactory, so the need for a good multiplexing system on top of HTTP remained.

HTTP/2 enables multiplexing without the issues faced by HTTP pipelining.

In HTTP/2, each request/response pair is associated with a unique ID and is called a stream. When the client sends a request, it assigns it an ID, and that ID is included in the server's answer. The client therefore knows which request each answer belongs to, and there is no requirement to respect the request order as in HTTP pipelining. Hence, no more head-of-line blocking!
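A toy sketch of the idea (plain Scala, not the real binary framing; the odd stream IDs reflect the fact that client-initiated HTTP/2 streams use odd IDs, and the payloads are made up):

```scala
// Toy model of HTTP/2 streams: every frame carries the ID of the
// stream it belongs to.
case class Frame(streamId: Int, payload: String)

// Two requests in flight on the same connection, tagged by stream ID.
val inFlight = Map(1 -> "GET /slow", 3 -> "GET /fast")

// Responses may arrive in any order: stream 3 completes first.
val responses = List(Frame(3, "fast body"), Frame(1, "slow body"))

// The client matches each response back to its request by stream ID,
// so ordering is irrelevant and there is no head-of-line blocking
// at the HTTP layer.
val matched = responses.map(f => inFlight(f.streamId) -> f.payload).toMap
```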

Requests and responses belonging to different streams can flow through this single HTTP/2 connection at the same time.

The HTTP/2 multiplexing

Thanks for reading this article! If you want to know more about HTTP/2, the next article of this series, about weight and dependency, is now available here!

Alexandre for the Gatling team

We recently released the release candidates for Gatling 3 and wrote a blog post about all the new features: read it here! If you missed them, you will also find posts about the support of the closed workload model and the new feeder features. Today, we will talk about our new HTTP client for Gatling 3!

Parting Ways with AsyncHttpClient

When I created Gatling 7 years ago, I was looking for a Java library implemented on top of Netty that was truly non-blocking at its core. AsyncHttpClient (AHC) was exactly that (kudos to Jean-François Arcand!).

At some point, I took over as the core maintainer, and I believe it benefited both projects: AHC was actively maintained and gained performance improvements because it was used as Gatling's foundation, and Gatling benefited from feedback and contributions from a project with a larger user base.

Then, at some point, I realized things were not as simple:

When we at Gatling Corp decided to implement HTTP/2 support for Gatling 3, we realized that the existing AHC code base was not suited to it, and that we had no idea what a generic HTTP/2 client API should look like, whereas we had a pretty good idea of what we wanted for Gatling.

A brand new internal HTTP client

The natural yet difficult decision was to implement a new HTTP client mostly from scratch (we reused some helpers from AHC that I had implemented over the years).

This client is not a public library. It’s intended for internal Gatling usage only, so we can break things any time we want if we realize we were wrong with our design choices.

This new client tackles one of the most complex aspects of the AHC implementation: concurrency. In AHC, requests can be generated from some thread (outside Netty), response chunks are processed on Netty eventloop threads, and timeouts are triggered from yet other dedicated threads. This results in very complex concurrency handling that could cause weird bugs in older Gatling versions, such as virtual users being lost and simulations not terminating.

With this new client, virtual users have affinity with a Netty eventloop, and all their HTTP events (sending requests, handling response chunks, timeouts) are processed on the same thread. No more concurrency issues!
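The idea can be sketched with plain JDK executors (hypothetical names, not Gatling's actual internals): pin each virtual user to one single-threaded executor and post all of its events there.

```scala
import java.util.concurrent.{ConcurrentHashMap, Executors, TimeUnit}

// A small pool of "eventloops": single-threaded executors.
val eventLoops = Vector.fill(4)(Executors.newSingleThreadExecutor())

// Affinity: a given user always maps to the same eventloop.
def eventLoopFor(userId: Long) = eventLoops((userId % eventLoops.size).toInt)

val loop = eventLoopFor(userId = 42L)

// Record which threads actually run this user's events.
val observedThreads = ConcurrentHashMap.newKeySet[String]()

// Request send, response chunk and timeout all post to the same loop,
// so the user's state needs no cross-thread synchronization.
for (event <- List("send request", "response chunk", "timeout")) {
  loop.execute(() => observedThreads.add(Thread.currentThread().getName))
}

eventLoops.foreach(_.shutdown())
eventLoops.foreach(_.awaitTermination(5, TimeUnit.SECONDS))
// observedThreads now contains exactly one thread name
```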

We hope to be able to generalize this concept of eventloop affinity to the rest of Gatling in a future version.

BoringSSL support enabled by default

For better performance, the new client by default uses netty-tcnative and its native BoringSSL-based SSLEngine implementation instead of the default one from the JDK. BoringSSL is Google's fork of OpenSSL.


Coming up next week: a blog post about HTTP/2, by Alexandre Chaouat! Stay tuned!

Stephane for the Gatling team

After several public announcements and several release candidates, we are thrilled to tell you today that Gatling 3.0.0 is finally here!

Since 3.0.0-RC4, we've fixed quite a few bugs; please check the release notes for more details.

The whole Gatling team has put a great deal of work into this new major version, so we hope you’ll like it.

Thanks a lot to our users and to all the contributors who gave the release candidates a try, provided feedback and helped make it stable. Kudos to them!

Stephane for the Gatling team

Come and meet Gatling Corp at AWS re:Invent! Our booth is #2703.

From November 25th to 30th, we will present Gatling FrontLine, which is now available on AWS Marketplace.

If you can attend, we would be thrilled to meet you there!

The Gatling team

Gatling 3 comes with many new features. We already talked about the closed workload model last week. Today, we talk about feeders.

First of all, why do we need feeders for load testing? (You will also find the following definition in our documentation.)

We need dynamic data so that virtual users don't all play exactly the same scenario, otherwise we would end up with behavior completely different from the live system's (due to caching, JIT, etc.). This is where Feeders are useful.

Feeders are data sources containing all the values you want to use in your scenarios. There are several types of Feeders, the simplest being the CSV Feeder.
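Conceptually, a Gatling feeder is just an `Iterator[Map[String, Any]]` of records: each time a virtual user feeds, it pulls the next record and gets its values injected into the session. A minimal hand-rolled sketch of the idea (plain Scala, not the Gatling DSL; the records and the `circular` helper are illustrative):

```scala
// A feeder is essentially an iterator of records (column name -> value).
type Feeder[T] = Iterator[Map[String, T]]

val records = List(
  Map("username" -> "alice", "password" -> "pw1"),
  Map("username" -> "bob",   "password" -> "pw2")
)

// "circular" strategy: start over from the beginning once exhausted,
// so the feeder never runs dry.
def circular[T](data: List[Map[String, T]]): Feeder[T] =
  Iterator.continually(data).flatten

val feeder = circular(records)
val first  = feeder.next() // alice's record
val second = feeder.next() // bob's record
val third  = feeder.next() // wraps around to alice again
```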


Let’s now have a look at the new features for CSV-like feeders in Gatling 3 (you can also have a look at the documentation).


Since Gatling 1, file-based feeders have been fully loaded in memory when the Simulation is instantiated. The rationale was to avoid disk access while the load test is running.

The downside is huge memory usage with very large feeder files, all the more as, prior to Java 9, Java's String was implemented as a UTF-16 char array, hence using twice as much space for US-ASCII characters.

Gatling 3 introduces a batch option so that file-based feeders can be loaded in memory in chunks. Note that some strategies then work a bit differently; for example, random only picks a random entry in the current chunk, not in the full data.

// load file in chunks of 2,000 lines (default)
// records are picked randomly in current chunk
// when current chunk is empty, move to next chunk
// when last chunk is empty, move back to beginning of file
val feeder = csv("file.csv").batch.random


If you’re dealing with large feeder files, it might be complicated to push them into your git repo.

Gatling 3 introduces unzip to deal with gzip and zip (single entry) archives. As feeders are text based, this might help you save a lot of space.

// unzip archive before loading it (the .zip file name is illustrative)
val feeder = csv("file.csv.zip").unzip.random

Grabbing the whole data

Some people have been using feeders as convenient CSV parsers, then hacking Gatling internals to get hold of the underlying data and manipulate it their own way.

Instead of hacking feeder internals (which do change in Gatling 3), you should now use the readRecords option.

// load the whole file content in memory and return it as a sequence of records
val records: Seq[Map[String, Any]] = csv("file.csv").readRecords
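Since readRecords hands back plain Scala collections, you can then reshape the data however you like; for example (with made-up records standing in for parsed file content), indexing the records by a column:

```scala
// What readRecords returns: a plain sequence of records.
// The values here are made up for illustration.
val records: Seq[Map[String, Any]] = Seq(
  Map("id" -> "1", "name" -> "alice"),
  Map("id" -> "2", "name" -> "bob")
)

// Reshape it freely, e.g. index records by the "id" column.
val byId: Map[Any, Map[String, Any]] = records.map(r => r("id") -> r).toMap
```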


This feature is only available to FrontLine customers. While the option is available in Gatling OSS, it's a simple noop there.

When dealing with distributed tests, you might want to make sure that virtual users don't use the same entries in your file-based feeders, even though they sit on different cluster nodes.

shard makes FrontLine automatically take care of distributing the data so that each cluster node only uses a dedicated slice.

// assuming file.csv contains 5,000 records
// and you're deploying a cluster of 5 injectors
// the first injector will use the first 1,000 records
// the second injector will use the next 1,000 records
// and so on
val feeder = csv("file.csv").shard


Our next post will be about our new HTTP client. Stay tuned!

Keep sending us feedback about the Gatling 3 release candidates before the stable version ships!

Stephane for the Gatling team