Please take a number

Gatling 3 comes with many new features. We already covered the closed workload model last week. Today, we talk about feeders.

First of all, why do we need feeders for load testing (you will find the following definition also in our documentation)?

We need dynamic data so that all virtual users don’t play exactly the same scenario, which would result in behavior completely different from the live system (due to caching, JIT compilation, etc.). This is where Feeders come in handy.

Feeders are data sources containing all the values you want to use in your scenarios. There are several types of Feeders, the most simple being the CSV Feeder.
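As a quick refresher, here’s a minimal sketch of how a feeder plugs into a scenario (the file name, column name, and endpoint are made up for illustration):

```scala
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class SearchSimulation extends Simulation {
  // each call to feed pops one record and injects it into the user's Session
  val searchFeeder = csv("search.csv").circular // wrap around when exhausted

  val scn = scenario("Search")
    .feed(searchFeeder)
    // "${query}" resolves to the "query" column of the injected record
    .exec(http("search").get("/search?q=${query}"))
}
```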

Let’s now have a look at the new features for CSV-like feeders in Gatling 3 (you can also check the documentation).


Since Gatling 1, file-based feeders have been fully loaded in memory when the Simulation was instantiated. The rationale was to avoid disk access while the load test is running.

The downside is huge memory usage with very large feeder files, all the more as, prior to Java 9, Java Strings were implemented as UTF-16 char arrays, hence using twice as much space for US-ASCII characters.

Gatling 3 introduces a batch option so file-based feeders can be loaded in memory in chunks. Note that some strategies then work a bit differently: for example, random only picks a random entry in the current chunk, not in the full data set.

// load file in chunks of 2,000 lines (default)
// records are picked randomly in current chunk
// when current chunk is empty, move to next chunk
// when last chunk is empty, move back to beginning of file
val feeder = csv("file.csv").batch.random
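To make the semantics concrete, here’s a pure-Scala sketch (not Gatling’s actual implementation) of the chunked random strategy described in the comments above:

```scala
import scala.util.Random

// pick randomly within the current chunk, move to the next chunk once it
// is exhausted, and loop back to the beginning of the data when done
def batchRandom[T](records: Seq[T], chunkSize: Int, rng: Random): Iterator[T] =
  Iterator
    .continually(records.grouped(chunkSize)) // re-read chunks forever
    .flatten
    .flatMap(chunk => rng.shuffle(chunk))    // random order inside a chunk

val picks = batchRandom(1 to 10, 5, new Random(0)).take(10).toSeq
// the first 5 picks are a permutation of 1..5, the next 5 of 6..10
```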


If you’re dealing with large feeder files, it might be complicated to push them into your git repo.

Gatling 3 introduces unzip to deal with gzip and zip (single entry) archives. As feeder files are text based, this might help you save a lot of space.

// unzip the archive before loading it
val feeder = csv("file.csv.zip").unzip.random

Grabbing the whole data set

Some people have been using feeders as convenient CSV parsers and then hacking Gatling internals to get a hold of the underlying data and manipulate it their own way.

Instead of hacking feeder internals (which do change in Gatling 3), you should now use the readRecords option.

// load the whole file content in memory and return it as a sequence of records
val records: Seq[Map[String, Any]] = csv("file.csv").readRecords
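Since readRecords hands you plain Scala collections, you can post-process the data with ordinary collection operations. A small sketch (the records below are stubbed inline with made-up column names; in a real Simulation they would come from readRecords):

```scala
// stand-in for what a readRecords call would return
val records: Seq[Map[String, Any]] = Seq(
  Map("username" -> "alice", "active" -> "true"),
  Map("username" -> "bob",   "active" -> "false"),
  Map("username" -> "carol", "active" -> "true")
)

// regular collection operations apply: filter, map, groupBy...
val activeUsers = records.filter(_("active") == "true")

// an Iterator[Map[String, Any]] is itself a valid Feeder,
// so the result can be passed straight to .feed(...)
val feeder = activeUsers.iterator
```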


This last feature is only available to FrontLine customers. While the shard option is also exposed in Gatling OSS, it’s a simple noop there.

When dealing with distributed tests, you might want to make sure that virtual users don’t use the same entries from your file-based feeders, even though they sit on different cluster nodes.

shard makes FrontLine automatically take care of distributing the data so that each cluster node only uses a dedicated slice.

// assuming file.csv contains 5,000 records
// and you're deploying a cluster of 5 injectors
// first injector will use the first 1,000 records
// second injector will use the next 1,000 records
// and so on
val feeder = csv("file.csv").shard
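The distribution described in the comments above boils down to contiguous slicing. A pure-Scala sketch (not FrontLine’s actual implementation, and ignoring the remainder when the record count doesn’t divide evenly):

```scala
// node i out of n gets the i-th contiguous slice of the records
def shardSlice[T](records: Seq[T], nodeIndex: Int, nodeCount: Int): Seq[T] = {
  val perNode = records.size / nodeCount
  records.slice(nodeIndex * perNode, (nodeIndex + 1) * perNode)
}

val records = (1 to 5000).toSeq
val firstInjector = shardSlice(records, 0, 5) // records 1 to 1000
```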

Our next post will be about our new HTTP client. Stay tuned!

Keep sending us feedback on the Gatling 3 release candidates before the stable version ships!

Stephane from the Gatling team
