Response times and distribution
Gatling and Gatling Enterprise provide you with response times for all your requests or groups of requests. With Gatling’s Domain Specific Language (DSL), you can define groups of requests to get a transactional view in your reports: for instance, you can group all the requests related to a login process.
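To illustrate the idea of a transactional view, here is a minimal sketch (plain Python, not Gatling’s DSL; the request names and timings are made up) of how per-request response times roll up into a group-level timing:

```python
# Sketch: roll individual request timings (ms) up into named groups,
# the way a "login" group aggregates its underlying requests.
# All names and numbers are illustrative, not Gatling output.

requests = [
    ("login", "GET /login", 120),
    ("login", "POST /login", 340),
    ("checkout", "POST /cart", 95),
    ("checkout", "POST /pay", 410),
]

def group_times(samples):
    """Sum request response times per group (a cumulated-time view)."""
    totals = {}
    for group, _name, ms in samples:
        totals[group] = totals.get(group, 0) + ms
    return totals

print(group_times(requests))  # {'login': 460, 'checkout': 505}
```

In a real simulation, Gatling computes these group timings for you; the point is simply that a group’s response time reflects the whole transaction, not a single request.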
Our reports display distributions, meaning what you analyze is how the worst percentiles are distributed. Percentiles are fractions of your requests: the 90th percentile is the maximum response time of the fastest 90% of your requests, the 95th percentile that of the fastest 95%, and so on. Your aim is to investigate how the slowest requests are affected in terms of performance.
A lot of tools provide you with means and standard deviations, but this isn’t enough: you need to look at the percentiles. If 90% of your requests are below 200 milliseconds and 10% are above 10 seconds, you have a performance bottleneck, and you are losing 10% of your users to slow response times (the commonly accepted threshold for page load time is 2 seconds).
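A small sketch (plain Python, with the illustrative numbers from above) shows why the mean hides this kind of bottleneck: with 90% of requests at 200 ms and 10% at 10 seconds, the mean stays deceptively moderate while the 95th percentile exposes the problem:

```python
# Illustrative response times: 90 fast requests (200 ms), 10 slow ones (10 s).
times_ms = [200] * 90 + [10_000] * 10

def percentile(samples, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

mean = sum(times_ms) / len(times_ms)
print(mean)                      # 1180.0 -- looks almost acceptable
print(percentile(times_ms, 90))  # 200 -- the fastest 90% are fine
print(percentile(times_ms, 95))  # 10000 -- the slow tail is exposed
```

The mean alone would suggest a tolerable 1.2-second experience; only the high percentiles reveal that one user in ten waits 10 seconds.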
To check this, Gatling and Gatling Enterprise let you define “assertions” for a specific simulation. Assertions are acceptance criteria that you define according to your business requirements (for instance, you want 99% of your users to experience response times below 2 seconds, because this is the standard value in your market). When running your tests, Gatling and Gatling Enterprise tell you whether these requirements are met.
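As a rough analogy (plain Python, not Gatling’s assertion DSL; the helper names and the 2-second threshold are illustrative, taken from the example above), an assertion is simply an acceptance check evaluated against the measured distribution:

```python
# Sketch of an acceptance criterion: "the 99th percentile must be under 2000 ms".
# Hypothetical helper, not part of Gatling's API.

def percentile(samples, p):
    """Nearest-rank percentile: the value below which p% of samples fall."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def p99_below(samples_ms, threshold_ms=2000):
    """Return True when the 99th-percentile response time meets the requirement."""
    return percentile(samples_ms, 99) < threshold_ms

good_run = [150] * 99 + [1800]        # slow tail still under 2 s -> passes
bad_run = [150] * 98 + [2500, 9000]   # 99th percentile over 2 s -> fails

print(p99_below(good_run))  # True
print(p99_below(bad_run))   # False
```

In Gatling, the equivalent check runs automatically at the end of the simulation and fails the build when the criterion isn’t met, which is what makes assertions useful in automated pipelines.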
Advanced metrics (in Gatling Enterprise)
Gatling Enterprise comes with advanced metrics to analyze your tests: run comparisons, TCP connection metrics, DNS resolution metrics, and injector monitoring.
Comparing runs helps you detect performance regressions from one test to the next and investigate their causes. This is especially valuable if you have implemented continuous load testing in your organization.
TCP connections are often a bottleneck, a limitation that comes from your infrastructure. When running load tests, testers mostly focus on response times, yet servers that cannot open and close enough TCP connections per second are a very common performance bottleneck.
For companies that run their own DNS servers, DNS resolution can be a bottleneck too.
Injector monitoring is very important in terms of methodology. When you detect slow response times, you need to make sure they aren’t caused by your load testing solution itself. Many factors can affect its behavior: your testing infrastructure may not be sized properly, or another application running on the same infrastructure may have skewed your metrics. Always check that the load injectors’ behavior wasn’t impacted.