Customer stories

How InPost keeps parcel delivery fast and reliable with Gatling Enterprise Edition

About the company

InPost is one of Europe’s leading out‑of‑home parcel networks, operating dense locker and courier networks across nine markets. The company’s digital platform coordinates thousands of services and billions of inter‑service calls to keep parcels moving from checkout to locker pickup—often within a day in Poland (D+1).

Statistics

  • Industry: Transportation
  • Location: Europe
  • Revenue: €2.552 billion
  • Employees: 5,000+

Key metrics

  • Parcel volume: from 1.6 million to over 10 million parcels per day
  • Gatling Enterprise users: 60+ users across 30 teams

10M+

Parcels a day

≥99%

Success rate

1–5,000

Requests per second (RPS)

Why performance matters at InPost

InPost’s biggest challenge is the pace of growth across countries. Business leadership mandates proof that the platform can handle anticipated surges. That translates into:

  • Billions of hourly requests across microservices, with strict latency expectations
  • Clear internal standards (e.g., <500 ms for typical backend service responses at 100 RPS; ≥99% success thresholds via assertions)
  • Database targets of <1 ms on average, driving aggressive indexing, caching, and architecture choices
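Standards like these map directly onto Gatling assertions. A minimal sketch in Gatling’s Java DSL, assuming a hypothetical internal service URL and endpoint (not InPost’s actual configuration):

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;
import java.time.Duration;

public class ParcelServiceSimulation extends Simulation {

  // Hypothetical internal endpoint, standing in for a typical backend service
  HttpProtocolBuilder httpProtocol =
      http.baseUrl("https://parcel-service.internal.example");

  ScenarioBuilder scn = scenario("Parcel status")
      .exec(http("get status").get("/v1/parcels/status"));

  {
    setUp(scn.injectOpen(constantUsersPerSec(100).during(Duration.ofMinutes(10))))
        .protocols(httpProtocol)
        .assertions(
            // ≥99% success threshold, enforced globally
            global().successfulRequests().percent().gte(99.0),
            // <500 ms for typical backend responses at 100 RPS
            global().responseTime().percentile(95.0).lt(500));
  }
}
```

A run that misses either threshold is marked failed, which is what makes these checks usable as a must‑pass release gate rather than a report to read after the fact.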
“Observability is a culture here. If you want to ship code, you need to understand performance and how your services behave.”

Mateusz Piasta, Site Reliability Engineer, InPost

Before choosing Gatling Enterprise Edition, teams used Gatling Community Edition, ghz, and JMeter, but scaling both people and infrastructure was painful: manual machine tuning, fragmented visibility, and extra effort to visualize tests.

Why they chose Gatling Enterprise Edition, and how they use it today

Three drivers for going Enterprise:

  1. Visualization and scalability: moving beyond self‑hosted metrics stacks (e.g., InfluxDB) and manual exports.
  2. Protocol coverage: gRPC, MQTT, JDBC, WebSockets—critical for InPost’s Google Cloud environments and event‑driven flows.
  3. Distributed, managed load: autoscaled Kubernetes load generators and Private Locations eliminate machine babysitting and enable concurrency across many teams.

Additional practices:

  • GitLab CI builds and ships simulations; request headers tag load‑test traffic for Dynatrace so every run is visible in APM alongside Gatling’s real‑time charts.
  • Hybrid model: Community Edition in CI to prepare large datasets, then Enterprise to execute high‑value scenarios without burning credits.
  • Self‑service onboarding: 5 minutes for no‑code URL checks; 30–60 minutes for test‑as‑code with a shared README and standards.
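The CI half of that workflow can be sketched as a GitLab job that packages the simulations and triggers an Enterprise run. The Maven goals below follow the Gatling Maven plugin’s Enterprise goals; the job name, image, and variable name are illustrative, not InPost’s actual pipeline:

```yaml
# Illustrative GitLab CI job: package simulations, then start an
# Enterprise run. Assumes the Gatling Maven plugin is configured and
# an API token is provided as a CI/CD variable (name is an assumption).
performance-test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  script:
    - mvn --batch-mode gatling:enterprisePackage  # bundle simulations into a deployable artifact
    - mvn --batch-mode gatling:enterpriseStart    # trigger the run on Gatling Enterprise
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```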

“With autoscaled generators we stopped maintaining boxes. We likely saved FTEs—now we pay a subscription instead of paying with people’s time.”

Mateusz Piasta

InPost’s testing strategy

Performance tests are a must‑pass gate before releases. Teams codify guardrails using global and detailed assertions (e.g., ≥99% success, latency ceilings). Typical service targets are 1–5,000 RPS, with a minimum bar of 10 RPS on 1 vCPU/2 GB RAM for the smallest K8s footprint. Teams:

  • Prioritize business‑critical flows (pay, be paid; core parcel lifecycle)
  • Run peak‑day simulations (8–9 hours) that mirror 08:00–17:00 curves
  • Occasionally run end‑to‑end, long‑haul tests to surface slow‑burn issues
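A peak‑day curve like this can be expressed as a sequence of open‑model injection phases in Gatling’s Java DSL. The rates, durations, and endpoint below are illustrative, not InPost’s actual profile:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.*;
import io.gatling.javaapi.http.*;
import java.time.Duration;

public class PeakDaySimulation extends Simulation {

  HttpProtocolBuilder httpProtocol =
      http.baseUrl("https://parcel-service.internal.example"); // hypothetical host

  // Single placeholder request; a real peak-day scenario chains the
  // business-critical flows (order, pickup, locker delivery, retrieval)
  ScenarioBuilder scn = scenario("Peak-day parcel flow")
      .exec(http("parcel lifecycle step").get("/v1/parcels/status"));

  {
    setUp(scn.injectOpen(
            // 08:00-09:00: morning ramp from baseline toward peak
            rampUsersPerSec(10).to(200).during(Duration.ofHours(1)),
            // 09:00-16:00: sustained plateau around peak load
            constantUsersPerSec(200).during(Duration.ofHours(7)),
            // 16:00-17:00: wind-down back to baseline
            rampUsersPerSec(200).to(10).during(Duration.ofHours(1))))
        .protocols(httpProtocol);
  }
}
```

Chaining phases this way reproduces the shape of a business day in a single 9‑hour run instead of testing each load level in isolation.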

The five‑day logistics simulation

Among InPost’s most impressive validations was a five‑day continuous logistics test, designed to mimic the complete parcel journey—from a Saturday purchase to final locker pickup days later. The test emulated the real‑world logistics rhythm in Poland (D+1 delivery standard) and surrounding markets, layering weekend backlogs and weekday surges to reproduce the full system load cycle.

The simulation captured every stage: orders placed, courier pickups, sorting center operations, locker deliveries, and parcel retrievals—some occurring within hours, others after the maximum two‑day storage period. Running the test over multiple business days exposed bottlenecks invisible in short runs and led to fundamental architectural tuning.

“The five‑day test revealed bottlenecks from code to cables. We tuned caches, optimized database indexing, even replaced physical connectors between servers and storage arrays. It literally helped us strengthen our logistics software backbone.”

Mateusz Piasta

As a result, InPost scaled its IT network from 1.6 million to over 10 million parcels per day, uncovering resource saturation points and validating the performance of its end‑to‑end logistics ecosystem under real operating conditions.

Test maintenance

A single shared repository enables code reuse and peer review. Product teams own scenarios; the platform team supports environments, auth secrets, and permissions.

Results with Gatling Enterprise Edition

Early regression detection

  • Discovered an unexpected 2× RPS jump before production; root cause: unnecessary HTTP→HTTPS redirect in an internal path. Fixed before it hit users.

Scalability and cost control

  • Dynamic provisioning removed idle capacity and manual ops. Generators scale to zero when unused and spin up within minutes when needed.

Right‑sizing and resilience

  • Long‑run tests, including the five‑day logistics simulation, drove end‑to‑end tuning: DB indexing and vacuuming, cache sizing, GC configuration, and deployment shapes. Findings even influenced hardware—migrating one hot path to a physical server, then upgrading disk interconnects and NVMe arrays after revealing a storage cable bottleneck.

Unified visibility

  • Engineers correlate Gatling charts (RPS, errors, latencies) with Dynatrace/OpenTelemetry to pinpoint code hotspots, DB connections, and third‑party latencies.
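That correlation works because load‑test traffic is tagged so the APM can tell it apart from real users. One common pattern uses Dynatrace’s load‑testing header convention; the host and tag values below are illustrative:

```java
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.http.*;

public class TaggedProtocol {
  // The x-dynatrace-test header carries key-value pairs (e.g. test step
  // name, script name, test run name) so each request shows up in
  // Dynatrace dashboards alongside Gatling's own charts.
  static HttpProtocolBuilder tagged = http
      .baseUrl("https://parcel-service.internal.example") // hypothetical host
      .header("x-dynatrace-test",
              "TSN=peak-day;LSN=parcel-status;LTN=run-42"); // illustrative values
}
```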
“Run comparisons are straightforward—pick the metric, compare, done. What we’d love next is automated baseline comparison that can fail a run when it regresses.”  

Mateusz Piasta

InPost’s favorite features

  • Private Locations + autoscaled K8s generators for secure, elastic load
  • Protocol breadth (gRPC, MQTT, JDBC, WebSockets)
  • Run comparison & centralized reporting
  • Test‑as‑code with GitLab CI, plus simple no‑code smoke tests when speed matters

Gatling’s impact on InPost

  • From siloed performance testing to an organization‑wide practice: 60+ users across 30 teams after a few months, trending to ~100
  • Faster onboarding and less toil: minutes for no‑code checks; ~1 hour for code‑based onboarding
  • Auditable reliability for leadership: clear pass/fail gates and reproducible benchmarks

What’s next for InPost

  • Embed Gatling deeper into CI/CD so performance tests run before every production release
  • Launch an internal Microsoft Teams channel to build community and speed up Q&A
  • Explore automated baseline gating and AI‑assisted insights and summaries

Your all-in-one load testing platform

Start building a performance strategy that scales with your business.
