How Tikamoon makes load testing a non-negotiable step before every release

About the company

Tikamoon is a French Digital Native Vertical Brand (DNVB) specializing in the design and sale of sustainable solid wood furniture.

Founded around fifteen years ago, the company primarily serves B2C customers (with some B2B activity), and sells mainly online across Europe and the United States. France remains its historical and largest market.

Statistics
Industry
Retail
Location
France
Revenue
$205M
Employees
200+
Key metrics
  • 2 pre-release test runs per week
  • 1 annual infrastructure limit test
  • 8 Gatling Enterprise users (developers across 3 teams)

When performance became a business risk

For Tikamoon, performance is not a technical nice-to-have but a condition for doing business. As a digital-native furniture brand operating almost entirely online, the company’s e-commerce platform sits at the center of revenue, customer experience, and brand promise. Any slowdown or instability has an immediate impact. Loïc Chero leads IT for the e-commerce scope and manages the development teams behind Tikamoon’s internal solutions. His role spans technical leadership, cross-team coordination, cybersecurity, and web performance—at the intersection of delivery and platform reliability.

“The website is the heart of our revenue. Everything depends on it,” explains Loïc. To protect that platform, Tikamoon relies on Gatling Enterprise Edition to validate performance before every production release, detect regressions early, and understand the real limits of its infrastructure long before traffic does it for them.

Tikamoon’s approach is pragmatic: build internally where differentiation matters, and adopt proven tools when the market already offers a strong solution.

Tikamoon has tracked web performance for years. Load testing began around 2020, initiated by a developer deeply invested in web performance, but it became a priority when traffic peaks started exposing hard limits: commercial events such as sales periods and Black Friday brought enough visitors that the platform could no longer handle the load.

Over time, that foundation evolved into a shared discipline—one that shapes release decisions week after week. 

That experience set two long-term objectives:

  • Guarantee the platform can sustain traffic peaks, including the unexpected ones

  • Prevent regressions over time, ensuring that application or infrastructure changes never reduce the capacity Tikamoon already knows it can sustain

Why Tikamoon chose Gatling

Tikamoon adopted Gatling early, initially writing simulations in Scala. The decision came down to strong performance compared with alternatives and the ability to model their website realistically at scale.

For Tikamoon, scenario fidelity matters: not all pages behave the same way under load, and they needed a tool flexible enough to reflect that reality while producing results detailed enough to analyze and act upon.

Today, the team runs and analyzes its load tests using Gatling Enterprise Edition.

A key evolution: from Scala to JavaScript for team ownership

This year, Tikamoon made a major shift: they rewrote their simulations in JavaScript.

The change was deliberate. When load testing was implemented, it was driven by a single developer who naturally chose the language that suited him at the time—and Gatling did not yet offer the breadth of language options it does today. But Tikamoon’s goal evolved: they wanted the entire team responsible for the e-commerce platform to be able to understand the scenarios, review them, and challenge them with a critical eye.

Rewriting in JavaScript helped Tikamoon align load testing with the language their developers already use daily in an e-commerce context. It also created an opportunity to simplify and reset parts of the testing logic that had accumulated over time.

“This year we rewrote our simulations in JavaScript so the whole team could really take ownership. It’s the most accessible language for our developers; and in e-commerce, we use it every day,” explains Loïc.

Test design: realistic navigation, focused on the website

Tikamoon focuses its load testing on the website layer and uses HTTP only.

The reason is simple: today, the load risk is concentrated on the front-end experience. APIs do not yet see enough volume to justify dedicated load testing, although the team sees it as a potential future topic.

To keep tests representative, Tikamoon structures its simulations around page types and traffic distribution rather than a theoretical end-to-end journey. Their scenario is designed to navigate the site the way real users do, with the weight of each page type matching its importance in production traffic: product pages—by far the most visited—make up the largest share of interactions, followed by listing pages and then the homepage. Lower-traffic pages are still included, but with a much smaller weight so the overall journey stays aligned with what happens in production.
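In plain JavaScript, the language the team now writes its simulations in, that weighting logic can be sketched as a cumulative draw over page types. The page names and weights below are hypothetical illustrations, not Tikamoon's actual production mix:

```javascript
// Hypothetical traffic mix; in practice the weights come from production analytics.
const TRAFFIC_MIX = [
  { page: "product", weight: 0.65 }, // product pages dominate real traffic
  { page: "listing", weight: 0.25 },
  { page: "home", weight: 0.10 },
];

// Pick a page type according to its weight, given a uniform draw in [0, 1).
// A virtual user would call this with Math.random() at each navigation step.
function pickPageType(mix, draw) {
  let cumulative = 0;
  for (const { page, weight } of mix) {
    cumulative += weight;
    if (draw < cumulative) return page;
  }
  return mix[mix.length - 1].page; // guard against floating-point rounding
}
```

Sampling this function many times reproduces the production distribution, so the simulated journey keeps the same shape as real traffic.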

Tikamoon continuously compares this simulated mix to production analytics. If real traffic evolves, the scenario must evolve too, otherwise the benchmark loses meaning.
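A minimal sketch of such a drift check in JavaScript; the 5-point threshold and the page names are assumptions for illustration, not Tikamoon's actual values:

```javascript
// Flag page types whose simulated share has drifted from production analytics.
// Both arguments map page type -> share of traffic (fractions summing to 1).
function mixDrift(scenarioMix, productionMix, threshold = 0.05) {
  const pages = new Set([
    ...Object.keys(scenarioMix),
    ...Object.keys(productionMix),
  ]);
  const drifted = [];
  for (const page of pages) {
    const diff = Math.abs((scenarioMix[page] || 0) - (productionMix[page] || 0));
    if (diff > threshold) drifted.push(page);
  }
  return drifted; // non-empty result means the scenario needs updating
}

const scenarioMix = { product: 0.65, listing: 0.25, home: 0.10 };
const productionMix = { product: 0.58, listing: 0.30, home: 0.12 };
```

Running such a check periodically turns "the scenario must evolve too" into a concrete, reviewable signal.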

Two tests, two objectives: release confidence and capacity clarity

Tikamoon runs two main scenarios, each serving a distinct purpose.

Pre-release validation: linearity between versions

The first scenario is a moderate ramp-up test used to validate performance linearity between application versions under controlled load.

It is executed before every production deployment—typically multiple times per week—and acts as a hard gate. The team compares the new version against the previous one and checks that response times and behavior remain consistent for the same traffic distribution and page mix.

“No Gatling run, no production. It’s not a suggestion. It’s a rule.”

If a regression is detected, the release is blocked until the issue is understood and resolved. For Tikamoon, the goal is to avoid discovering problems after deployment—when customers would already be feeling the impact.
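The gate logic described above can be sketched in JavaScript. The metric names, numbers, and 10% tolerance below are illustrative assumptions, not Tikamoon's actual thresholds:

```javascript
// Hypothetical p95 response times (ms) per page type, taken from two runs.
const baselineRun = { product: 180, listing: 240, home: 120 };
const candidateRun = { product: 185, listing: 310, home: 118 };

// Block the release if any page type degrades beyond the tolerance.
function regressionGate(baseline, candidate, tolerance = 0.10) {
  const regressions = [];
  for (const [page, baseMs] of Object.entries(baseline)) {
    const candMs = candidate[page];
    if (candMs > baseMs * (1 + tolerance)) {
      regressions.push({ page, baseMs, candMs });
    }
  }
  return { pass: regressions.length === 0, regressions };
}
```

Because each run is compared against the previous version under the same traffic mix, a failing gate points directly at the release that introduced the regression.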

Capacity limit testing: find the breaking point and the weakest link

The second scenario is designed to push the platform until it breaks.

This is not about mirroring current traffic. It’s about deliberately injecting a much higher load to identify:

  • The maximum sustainable traffic threshold

  • The safety margin above normal conditions

  • The first component that fails under stress

Tikamoon plans to run this exercise roughly once per year. It takes time: the team iterates and converges on the failure point by gradually increasing and decreasing the injected load until the threshold is clear.
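That converge-by-iteration approach resembles a bisection over load levels. A minimal JavaScript sketch, where `sustains` is a hypothetical stand-in for running a full test at a given load and checking it stayed healthy:

```javascript
// Converge on the highest load the platform sustains. `sustains(load)` is a
// placeholder for a real test run; `step` is the precision we stop at.
function findBreakingPoint(sustains, low, high, step = 50) {
  while (high - low > step) {
    const mid = Math.round((low + high) / 2);
    if (sustains(mid)) {
      low = mid; // platform held: push higher on the next run
    } else {
      high = mid; // platform failed: back off
    }
  }
  return low; // highest load confirmed sustainable, within `step`
}
```

Each iteration halves the interval between a load the platform survived and one it did not, which is why the exercise takes several runs but converges on a clear threshold.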

The last time they ran it, the test confirmed a maximum capacity deliberately kept well above average traffic, because in e-commerce, unexpected spikes can happen outside controlled marketing campaigns.

Just as importantly, this stress testing reveals which part of the system breaks first—database, cache, application layer, etc. That insight becomes a decision-making tool: it shows where the weakest link is, and whether reinforcing it would be a quick fix or a complex change requiring anticipation.

How Tikamoon runs, maintains, and operationalizes load testing

Tikamoon’s load testing is designed to fit the reality of day-to-day delivery, not to become an overhead.

Ownership and maintenance

The main pre-release scenario is frozen and versioned in GitHub, giving full visibility to the development team. That stability is intentional: if the scenario changed constantly, comparisons across versions would lose their value.

Before freezing it, Tikamoon benchmarked multiple variants and load levels to land on a scenario that is representative of average traffic and sensitive enough to detect performance degradations quickly.

The annual capacity tests are the exception: for those, the team intentionally varies injection speed and volume to explore the system’s limits.

Who runs the tests (and who analyzes them)

Load testing is not confined to a single specialist. Loïc describes seven people using Gatling regularly, plus a DevOps profile who is particularly involved during capacity-limit exercises.

Tests are launched manually in a pre-production environment designed to be as close as possible to production. While automation is considered, Tikamoon has chosen pragmatism: environments are powered down outside working hours for consumption reasons, and deployments sometimes require manual post-release steps before the system is in a meaningful state for testing.

Given that launching a run is fast and simple in Gatling Enterprise Edition, manual execution remains the most efficient approach in their context.

Performance thresholds and SDLC impact

At Tikamoon, load testing is embedded into the delivery process as a release gate. The pre-release scenario creates a consistent benchmark that allows teams to validate “linearity” and detect regressions before production.

When something changes, the benefit is immediate: because tests happen before each deployment, the team can pinpoint the responsible release quickly—and investigate within a limited scope, instead of chasing performance drift across weeks of accumulated changes.

For Tikamoon, this is where load testing proves its value: not just running tests, but using them as a disciplined mechanism to protect release quality and keep performance predictable over time.

Key Gatling Enterprise Edition capabilities Tikamoon relies on

For Tikamoon, comparison is central: it turns raw results into a decision tool for the release process. Two capabilities stand out in their workflow:

  • Run comparison to validate linearity and spot regressions quickly
  • Result dashboards to drill into what changed: global behavior vs. specific page types, error rate shifts, redirect differences, and response time behavior across the scenario

Benefits: confidence, clarity, and shared protection

For Tikamoon, the biggest outcome isn’t a single KPI; it’s operational confidence.

Developers can’t reliably predict the performance impact of every change from local testing alone, especially in environments that are inherently variable. Gatling provides a controlled way to validate the platform under load before customers ever feel the impact.

When a regression appears, the team can identify the release that introduced it and focus investigation immediately—because performance testing is tied directly to the deployment rhythm.

That confidence extends beyond engineering. Performance is treated as a business concern inside Tikamoon: leadership and marketing understand the cost of instability during traffic peaks, and load testing is seen as a shared safety net for protecting revenue and customer experience.

Next steps

Tikamoon’s approach is mature and intentionally stable: load testing is a safeguard embedded in the way software ships.

They don’t plan major workflow changes in the short term. Automation may come later, but it isn’t the priority while the current process is fast, reliable, and effective.

Their main focus is vigilance: ensuring the scenario remains representative as the platform evolves. New page types and changes in user behavior can quietly break test relevance, so Tikamoon periodically re-checks scenario distribution against production analytics—and updates when necessary. They also plan to rerun capacity tests to keep infrastructure limits and safety margins well understood.
