
How Tape à l’œil protects digital performance during retail peaks with Gatling Enterprise
From Scala to Java across client applications.
About the company
Tape à l’œil (TAO) is a French-born retailer specializing in children’s fashion for ages 0 to 16.
With a blend of brick-and-mortar locations and a robust e-commerce platform, TAO reaches customers across France, Belgium, and select international markets.
Their digital success depends on the ability to scale operations during high-traffic periods like January sales and TAO Days, while maintaining a smooth, responsive customer experience across devices.
Statistics
Location: France/Belgium
Industry: Retail (children’s fashion)
Digital footprint: 350+
Store presence: 300 stores (250 owned, 50 affiliated)
Gatling usage since: 2017 (Enterprise since 2022)
CI/CD stack: GitHub Actions
Monitoring tools: Dynatrace, Datadog
Test frequency: ~1 test every 1–2 months (goal: monthly in CI/CD)
Peak load periods: Sales (Jan/Jul), TAO Days, Christmas, back-to-school
Why they needed Gatling
Sales surges, API complexity, and mobile memory bottlenecks demanded more than intuition: TAO needed simulation-based certainty.
- Past traffic spikes had caused outages during key events
- Non-functional requirements had to be validated under peak concurrency
- A mobile app architecture issue caused RAM spikes during login
- Complex promotional tools required realistic load to verify performance
Gatling helped the team shift from reactive troubleshooting to structured, scenario-based testing that replicates real customer behavior.
The challenge: From theory to validation
TAO faced growing pressure to validate performance, not just assume it. One critical case involved mobile app users triggering 8+ API calls on launch. The app worked in QA, but when real users hit the platform at scale, RAM usage spiked.
Another challenge came from the way TAO handled pre-sale carts. Thousands of customers would prepare baskets before big sales events, only to check out en masse at 8 AM on launch day. Without simulating that burst load—including the cart transfer and promotion logic—the backend couldn’t cope.
Even with auto-scaling enabled, their environments couldn’t be trusted blindly. “Autoscaling isn’t magic,” explained Nordine El Mojahid, TAO’s Head of Digital IT. “We needed to warm up our pods before the real load hit.”
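In Gatling's Java DSL, that pattern can be expressed as an open-model injection profile: a gentle ramp to give autoscaling time to spin up pods, followed by an 8 AM-style burst of prepared baskets. This is a minimal sketch, not TAO's actual script; the base URL, endpoints, and user counts are all hypothetical.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class WarmUpThenBurstSimulation extends Simulation {

  // Hypothetical staging base URL
  HttpProtocolBuilder httpProtocol = http.baseUrl("https://staging.example.com");

  // Hypothetical pre-sale checkout journey: transfer the prepared basket, then pay
  ScenarioBuilder checkout = scenario("Pre-sale checkout")
      .exec(http("Transfer cart").post("/api/cart/transfer"))
      .exec(http("Checkout").post("/api/checkout"));

  {
    setUp(
        checkout.injectOpen(
            // Warm-up ramp: gives autoscaling time to add pods before the surge
            rampUsersPerSec(1).to(20).during(600),
            // Quiet period, then the sales-day burst hits all at once
            nothingFor(30),
            atOnceUsers(1000)
        )
    ).protocols(httpProtocol);
  }
}
```

Running a profile like this against staging surfaces whether pods are warm before the burst arrives, rather than discovering it live at 8 AM.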
What they achieved with Gatling Enterprise
- Reduced memory usage by consolidating multiple mobile API calls into one
- Validated performance improvements after switching JSON serializers
- Simulated complex carts of 10+ items for sales-day benchmarking
- Ran warm-up tests to trigger autoscaling before traffic surges
- Built dedicated test scripts for mobile and web customer journeys
- Migrated from Scala to Java for easier scripting and broader adoption
- Performed load testing exclusively in staging while mimicking production behavior
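The sales-day carts of 10+ items mentioned above are the kind of thing a feeder plus a repeat loop handles well in Gatling's Java DSL. The sketch below is illustrative only: the endpoints, the `products.csv` file (assumed to have a `productId` column), and the injection numbers are assumptions, not TAO's real setup.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.FeederBuilder;
import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class SalesDayCartSimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http
      .baseUrl("https://staging.example.com") // hypothetical staging URL
      .contentTypeHeader("application/json");

  // Hypothetical CSV feeder with a productId column, sampled at random
  FeederBuilder<String> products = csv("products.csv").random();

  ScenarioBuilder salesDayCart = scenario("Sales-day cart")
      .exec(http("Login").post("/api/login"))
      // Build a 10-item basket: one randomly fed product per iteration
      .repeat(10).on(
          feed(products)
              .exec(http("Add item").post("/api/cart/items")
                  .body(StringBody("{\"productId\":\"#{productId}\",\"qty\":1}")))
      )
      .exec(http("Apply promotion").post("/api/cart/promotions"))
      .exec(http("Checkout").post("/api/checkout"));

  {
    setUp(salesDayCart.injectOpen(rampUsers(500).during(120)))
        .protocols(httpProtocol);
  }
}
```

Feeding product IDs from data rather than hard-coding them keeps carts varied, which matters when the point is to exercise promotion and pricing logic under realistic diversity.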
Solution: simulate what matters, not just what’s easy
The team built specific load testing scenarios using Gatling Enterprise:
- A mobile script mimicking full user behavior: login, browse, cart, checkout
- A web simulation using closed injection models for homepage, product lists, and cart events
- Load tests for backend services like the commercial operations engine to validate complex pricing logic
- Performance regressions tracked using Gatling dashboards and correlated with Dynatrace and Datadog
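The closed injection model used for the web simulation caps the number of concurrent users rather than the arrival rate, which fits browsing journeys where a pool of visitors moves between pages. A minimal sketch, with assumed page paths and concurrency figures rather than TAO's actual values:

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class WebJourneySimulation extends Simulation {

  // Hypothetical staging base URL
  HttpProtocolBuilder httpProtocol = http.baseUrl("https://staging.example.com");

  // Homepage -> product list -> cart, with think time between steps
  ScenarioBuilder webJourney = scenario("Web journey")
      .exec(http("Homepage").get("/"))
      .pause(2)
      .exec(http("Product list").get("/c/kids")) // hypothetical category path
      .pause(2)
      .exec(http("Add to cart").post("/cart/add"));

  {
    setUp(
        webJourney.injectClosed(
            // Closed model: hold a fixed pool of concurrent users, then grow it
            constantConcurrentUsers(100).during(300),
            rampConcurrentUsers(100).to(400).during(300)
        )
    ).protocols(httpProtocol);
  }
}
```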
What Tape à l’œil says
“Before Gatling, going live felt like crossing our fingers and hoping it held. Now we can simulate, measure, and move forward with clarity.”
Nordine El Mojahid, Head of Digital IT,
TAO
The result: Fewer surprises, more confidence
- Improved memory usage by optimizing startup calls in the mobile app
- Detected and fixed issues in promotional pricing engines before campaign days
- Validated architectural changes (e.g., switching JSON libraries) through load simulation
- Reduced backend load by consolidating multiple endpoints into a single optimized one
- Logged 15–20 performance-related tickets annually, many directly linked to Gatling testing
What’s next for Tape à l’œil
- Automate tests into CI/CD via GitHub Actions
- Run monthly performance simulations to catch regressions before they hit production
- Adopt reference-tier benchmarking for performance thresholds
- Correlate Gatling results with Datadog metrics using upcoming integrations
- Explore AI-powered reporting and test suggestions via Gatling Studio
- Use Postman-to-Gatling imports to simplify test script creation