Feeders

Feeder is a type alias for Iterator[Map[String, T]], meaning that the component created by the feed method will poll Map[String, T] records and inject their content.

It’s very simple to build a custom one. For example, here’s how one could build a random email generator:

import scala.util.Random
val feeder = Iterator.continually(Map("email" -> (Random.alphanumeric.take(20).mkString + "@foo.com")))

The structure DSL provides a feed method.

feed(feeder)

This defines a workflow step where every virtual user feeds on the same Feeder.

Every time a virtual user reaches this step, it will pop a record out of the Feeder, which will be injected into the user’s Session, resulting in a new Session instance.

If the Feeder can’t produce enough records, Gatling will complain about it and your simulation will stop.
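For example, here’s a minimal sketch of a scenario that feeds from the random email feeder above and then references the injected attribute with the Gatling EL (the request name, path and form parameter are hypothetical):

// beware: you need to import the core and http modules
import io.gatling.core.Predef._
import io.gatling.http.Predef._

val scn = scenario("Create user")
  .feed(feeder) // pops one record and adds "email" to the Session
  .exec(
    http("create_user")
      .post("/users")
      .formParam("email", "${email}") // resolved from the Session
  )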

Note

You can also feed multiple records at once. If so, attribute names will be suffixed. For example, if the columns are named “foo” and “bar” and you’re feeding 2 records at once, you’ll get “foo1”, “bar1”, “foo2” and “bar2” session attributes.

feed(feeder, 2)
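For instance, with the random email feeder above and 2 records per feed, here’s a sketch of how the suffixed attributes could be used (the request itself is a hypothetical illustration):

// feeding 2 records at once exposes "email1" and "email2"
feed(feeder, 2)
  .exec(
    http("invite_pair")
      .post("/invitations")
      .formParam("first", "${email1}")
      .formParam("second", "${email2}")
  )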

Strategies

Gatling provides multiple strategies for the built-in feeders:

.queue // default behavior: use an Iterator on the underlying sequence
.random // randomly pick an entry in the sequence
.shuffle // shuffle entries, then behave like queue
.circular // go back to the top of the sequence once the end is reached
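For example, a sketch of applying these strategies to a file-based feeder (see File Based Feeders below; foo.csv is a placeholder):

val queueFeeder = csv("foo.csv")             // .queue is the default
val randomFeeder = csv("foo.csv").random     // pick a random record each time
val shuffleFeeder = csv("foo.csv").shuffle   // shuffle once, then behave like queue
val circularFeeder = csv("foo.csv").circular // wrap around at the end of the file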

Warning

When using the default queue strategy, make sure that your dataset contains enough records. If your feeder runs out of records, behavior is undefined and Gatling will forcefully shut down.

Implicits

An Array[Map[String, T]] or an IndexedSeq[Map[String, T]] can be implicitly turned into a Feeder. For example:

val feeder = Array(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
).random

File Based Feeders

Gatling provides various file-based feeders.

When using the bundle distribution, files must be in the user-files/resources directory. This location can be overridden, see Configuration.

When using a build tool such as Maven, files must be placed in src/main/resources or src/test/resources.

CSV feeders

Gatling provides several built-ins for reading character-separated values files.

Our parser honors the RFC4180 specification.

The only difference is that header fields get trimmed of wrapping whitespace.

val csvFeeder = csv("foo.csv") // use a comma separator
val tsvFeeder = tsv("foo.tsv") // use a tabulation separator
val ssvFeeder = ssv("foo.ssv") // use a semicolon separator
val customSeparatorFeeder = separatedValues("foo.txt", '#') // use your own separator

By default, those built-ins load all the data in memory, so Gatling doesn’t perform disk access while the simulation is running.

However, if your files are very large, they might not fit in memory. In that case, you can use batch mode.

batch must be the first option to be configured.

When in batch mode, random and shuffle can of course only operate on an internal buffer of records, not on the full data. The default size of this buffer is 2,000 records and can be changed.

val csvFeeder = csv("foo.csv").batch.random
val csvFeeder2 = csv("foo.csv").batch(200).random // tune internal buffer size

Also, if your files are very large, you can provide them zipped and ask Gatling to unzip them on the fly:

val csvFeeder = csv("foo.csv.zip").unzip

Supported formats are gzip and zip (but the archive must contain only a single file).

Finally, if you want to run distributed with FrontLine and want to distribute data so that users don’t use the same data when they run on different cluster nodes, you can use the shard option. For example, if you have a file with 30,000 records deployed on 3 nodes, each will use a 10,000-record slice.

shard is only effective when running with FrontLine; otherwise it’s just a no-op.

val csvFeeder = csv("foo.csv").shard

JSON feeders

Some might want to use data in JSON format instead of CSV:

val jsonFileFeeder = jsonFile("foo.json")
val jsonUrlFeeder = jsonUrl("http://me.com/foo.json")

For example, the following JSON:

[
  {
    "id":19434,
    "foo":1
  },
  {
    "id":19435,
    "foo":2
  }
]

will be turned into:

record1: Map("id" -> 19434, "foo" -> 1)
record2: Map("id" -> 19435, "foo" -> 2)

Note that the root element must of course be an array.
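As with any other feeder, the records can then be injected into virtual users; for example, a sketch where the request path is a hypothetical illustration:

// pop a record from foo.json and reference its "id" attribute
feed(jsonFileFeeder)
  .exec(
    http("get_item")
      .get("/items/${id}")
  )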

JDBC feeder

Gatling also provides a built-in that reads from a JDBC connection.

// beware: you need to import the jdbc module
import io.gatling.jdbc.Predef._

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

Just like the file parser built-ins, this returns a RecordSeqFeederBuilder instance.

  • The databaseUrl must be a JDBC URL (e.g. jdbc:postgresql:gatling),
  • the username and password are the credentials to access the database,
  • sql is the query that will get the values needed.

Only JDBC4 drivers are supported, so that they automatically register with the DriverManager.
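For example, a sketch combining jdbcFeeder with one of the strategies above (the JDBC URL, credentials and query are hypothetical placeholders):

// loop over users read from a PostgreSQL database
val userFeeder =
  jdbcFeeder("jdbc:postgresql:gatling", "gatling", "s3cr3t", "SELECT login, password FROM users")
    .circular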

Note

Do not forget to add the required JDBC driver jar to the classpath (the lib folder in the bundle).

Sitemap Feeder

Gatling supports a feeder that reads data from a Sitemap file.

// beware: you need to import the http module
import io.gatling.http.Predef._

val feeder = sitemap("/path/to/sitemap/file")

The following Sitemap file:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii</loc>
    <changefreq>weekly</changefreq>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand</loc>
    <lastmod>2004-12-23</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>

will be turned into:

record1: Map(
           "loc" -> "http://www.example.com/",
           "lastmod" -> "2005-01-01",
           "changefreq" -> "monthly",
           "priority" -> "0.8")

record2: Map(
           "loc" -> "http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii",
           "changefreq" -> "weekly")

record3: Map(
           "loc" -> "http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand",
           "lastmod" -> "2004-12-23",
           "changefreq" -> "weekly")

Redis feeder

This feature was originally contributed by Krishnen Chedambarum.

Gatling can read data from Redis using one of the following Redis commands.

  • LPOP - remove and return the first element of the list
  • SPOP - remove and return a random element from the set
  • SRANDMEMBER - return a random element from the set

By default, RedisFeeder uses the LPOP command:

import com.redis._
import io.gatling.redis.feeder.RedisFeeder

val redisPool = new RedisClientPool("localhost", 6379)

// use a list, so there's one single value per record, which is here named "foo"
val feeder = RedisFeeder(redisPool, "foo")

An optional third parameter is used to specify the desired Redis command:

// read data using SPOP command from a set named "foo"
val feeder = RedisFeeder(redisPool, "foo", RedisFeeder.SPOP)

Note that, since v2.1.14, Redis supports mass insertion of data from a file. It is possible to load millions of keys into Redis in a few seconds, and Gatling will then read them directly from memory.

For example, here is a simple Scala function that generates a file with 1 million different URLs, ready to be loaded into a Redis list named URLS:

import java.io.{ File, PrintWriter }
import io.gatling.redis.util.RedisHelper._

def generateOneMillionUrls(): Unit = {
  val writer = new PrintWriter(new File("/tmp/loadtest.txt"))
  try {
    for (i <- 0 until 1000000) {
      val url = "test?id=" + i
      // note the list name "URLS" here
      writer.write(generateRedisProtocol("LPUSH", "URLS", url))
    }
  } finally {
    writer.close()
  }
}

The URLs can then be loaded into Redis using the following command:

`cat /tmp/loadtest.txt | redis-cli --pipe`
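The mass-inserted data can then be consumed like any other list-backed feeder; for example, a sketch reusing the redisPool defined above (the request path is hypothetical):

// each record exposes a single attribute named after the list, here "URLS"
val urlFeeder = RedisFeeder(redisPool, "URLS")

feed(urlFeeder)
  .exec(
    http("url")
      .get("/${URLS}") // e.g. "/test?id=42"
  )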

Converting

Sometimes, you might want to convert the raw data you got from your feeder.

For example, a csv feeder would give you only Strings, but you might want to convert one of the attributes into an Int.

convert(conversion: PartialFunction[(String, T), Any]) takes:

  • a PartialFunction, meaning that you only define it for the scope you want to convert; non-matching attributes will be left unchanged
  • whose input is a (String, T) couple where the first element is the attribute name, and the second one the attribute value
  • and whose output is Any, whatever you want

For example:

csv("myFile.csv").convert {
  case ("attributeThatShouldBeAnInt", string) => string.toInt
}

Grabbing Records

Sometimes, you might just want to reuse Gatling's convenient built-in feeders for custom needs and get your hands on the actual records.

readRecords returns a Seq[Map[String, Any]].

val records: Seq[Map[String, Any]] = csv("myFile.csv").readRecords

Warning

Beware that each readRecords call will read the underlying source, e.g. parse the CSV file.

Non Shared Data

Sometimes, you might want all virtual users to play all the records in a file, a behavior that Feeder doesn’t support.

Still, it’s quite easy to build thanks to flattenMapIntoAttributes, e.g.:

val records = csv("foo.csv").readRecords

foreach(records, "record") {
  exec(flattenMapIntoAttributes("${record}"))
}

User Dependent Data

Sometimes, you might want to filter the injected data depending on some information from the Session.

Feeder can’t achieve this as it’s just an Iterator, so it’s unaware of the context.

You’ll then have to write your own injection logic, but you can of course reuse Gatling parsers.

Consider the following example, where you have 2 files and want to inject data from the second one, depending on what has been injected from the first one.

In userProject.csv:

user, project
bob, aProject
sue, bProject

In projectIssue.csv:

project,issue
aProject,1
aProject,12
aProject,14
aProject,15
aProject,17
aProject,5
aProject,7
bProject,1
bProject,2
bProject,6
bProject,64

Here’s how you can randomly inject an issue, depending on the project:

import io.gatling.core.feeder._
import java.util.concurrent.ThreadLocalRandom

// index records by project
val recordsByProject: Map[String, Seq[Record[Any]]] =
  csv("projectIssue.csv").readRecords.groupBy { record => record("project").toString }

// convert the Map values to get only the issues instead of the full records
val issuesByProject: Map[String, Seq[Any]] =
  recordsByProject.mapValues { records => records.map { record => record("issue") } }

// inject project
feed(csv("userProject.csv"))

  .exec { session =>
    // fetch project from session
    session("project").validate[String].map { project =>

      // fetch project's issues
      val issues = issuesByProject(project)

      // randomly select an issue
      val selectedIssue = issues(ThreadLocalRandom.current.nextInt(issues.length))

      // inject the issue in the session
      session.set("issue", selectedIssue)
    }
  }