How Houghton Mifflin Harcourt improves back-to-school readiness
About the company
Houghton Mifflin Harcourt (HMH) is an education technology company that supports digital learning experiences used by students, teachers, and school administrators across the United States.
For HMH, performance is not a secondary concern. When students and teachers cannot access the platform, learning stops. That makes performance testing a core part of protecting user experience, operational continuity, and brand trust.

50+ load generator pods
40+ simulations across 10–15 apps
Why performance matters at HMH
For HMH, the most critical period of the year is Back-to-School. Over roughly two months, traffic climbs sharply as students and teachers return to class and begin using online systems at scale.
To prepare for that surge, HMH tests systems at one, two, three, and up to five times normal load. The goal is simple: validate that core services can absorb demand before the peak arrives.
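A stepped profile like this is straightforward to express in the Gatling Java SDK that HMH uses. The sketch below is illustrative only: the base URL, endpoint, baseline rate, and durations are hypothetical placeholders, not HMH's actual figures.

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// Sketch of a stepped readiness test: hold 1x, 2x, 3x, then 5x of a
// hypothetical 100 users/sec baseline. All values are placeholders.
public class BackToSchoolReadinessSimulation extends Simulation {

  HttpProtocolBuilder httpProtocol = http.baseUrl("https://example.invalid");

  ScenarioBuilder scn = scenario("Back-to-School readiness")
      .exec(http("login").post("/api/login"));

  {
    setUp(
        scn.injectOpen(
            constantUsersPerSec(100).during(600), // 1x normal load, 10 min
            constantUsersPerSec(200).during(600), // 2x
            constantUsersPerSec(300).during(600), // 3x
            constantUsersPerSec(500).during(600)  // 5x, the peak target
        )
    ).protocols(httpProtocol);
  }
}
```

Because the load profile lives in code alongside the scenario, the multipliers being validated are visible and reviewable before each run.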
The stakes are high. If users cannot log in during Back-to-School, the impact reaches far beyond engineering. Performance issues during peak periods can trigger customer escalations, legal exposure, financial penalties, and brand damage. As Muzzamil Shaikh, Staff Performance Engineer at HMH, puts it: “if there’s a school-wide outage people could file a legal suit.” That is why HMH treats performance as a business-critical discipline.
Why HMH chose Gatling
HMH has used Gatling for almost ten years along with other testing software, but Gatling has become the clear standard, accounting for the large majority of their performance testing activity.
That preference comes from a mix of technical and organizational fit. Gatling’s code-based approach matches the way HMH wants to build and maintain tests. “With Gatling, everything is visible in the form of code. So that has really helped us,” adds Muzzamil. The team uses Java with the Gatling Java SDK, which aligns well with internal engineering skills and makes script maintenance easier over time.
That said, HMH’s move to Gatling Enterprise Edition was driven by a very specific scaling problem.
The moment Gatling Enterprise Edition became necessary
Before moving to Gatling Enterprise Edition, HMH used Gatling with an in-house load generation setup running on Apache. That setup worked for isolated one-to-one service tests. It broke down when the team needed to simulate realistic, large-scale traffic across all services together.
For Back-to-School readiness, HMH needed to test around 50 services in parallel. That exposed the limits of their previous approach.
Some services failed to start. Others started but produced no reports. Load generators did not always spin up correctly. Resource issues inside containers made execution unreliable. During long-running tests, the team also lacked live visibility into what was happening.
That was the turning point. HMH pulled the trigger and made the switch to Gatling Enterprise Edition. The first result was immediate and practical. “We were able to run all our 50 tests together,” Muzzamil explains.
HMH’s testing strategy
Performance testing at HMH is embedded in the release process. The team runs tests almost daily, with two to three tests typically executed before release.
Execution is integrated into CI/CD. HMH previously used Concourse pipelines and has since moved to GitHub Actions, which pulls the latest test code and launches Gatling Enterprise Edition runs.
The team’s approach starts with production reality. They use Datadog to analyze volumetric patterns, identify heavily used APIs, study peak periods, and understand which endpoints receive the most traffic. That production data helps shape their simulations.
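Turning that volumetric data into a simulation can be as simple as normalizing per-endpoint request counts into traffic-mix percentages. The sketch below uses hypothetical endpoints and counts, not HMH's actual Datadog numbers.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class VolumetricWeights {

    // Convert observed per-endpoint request counts (stand-ins for
    // production volumetric data) into percentage weights that a
    // simulation's traffic mix can follow.
    static Map<String, Double> toWeights(Map<String, Long> counts) {
        long total = counts.values().stream().mapToLong(Long::longValue).sum();
        Map<String, Double> weights = new LinkedHashMap<>();
        counts.forEach((endpoint, count) ->
            weights.put(endpoint, 100.0 * count / total));
        return weights;
    }

    public static void main(String[] args) {
        // Hypothetical daily request counts per endpoint.
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("/api/login", 500_000L);
        counts.put("/api/assignments", 300_000L);
        counts.put("/api/content", 200_000L);

        toWeights(counts).forEach((endpoint, weight) ->
            System.out.printf("%s -> %.1f%%%n", endpoint, weight));
    }
}
```

Weights like these map naturally onto a weighted traffic mix in a simulation (for example, Gatling's `randomSwitch` distribution), so the test exercises endpoints in roughly the same proportions production sees.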
What HMH tests
Today, HMH tests 10 to 15 different applications and maintains more than 40 simulations. Most of those tests are API-based and often organized as one simulation per service.
That approach works well for service-level validation and fits HMH’s code-based performance testing model. Tests are maintained in code, integrated into CI/CD, and used to validate critical services under load.
Muzzamil also shared a recent example where HMH compared an existing API-based script with a UI-based simulation for the same flow. “When we migrated our script to the UI it uncovered a performance issue of that particular service which we were not able to simulate from the API.”
In that case, the API script exercised four services with manually calculated request rates. The UI-based simulation showed that one service was actually being called more frequently than the API test had modeled. That exposed a real performance issue, which was raised to developers and is now being fixed.
For HMH, the lesson was not to move away from API testing, but to make sure critical user journeys are modeled as accurately as possible when validating end-to-end behavior.
How Gatling Enterprise Edition supports the team today
For HMH, the most valuable features of Gatling Enterprise Edition are the ones that improve visibility and scale.
Live visualization during execution is one of the biggest wins. Private Locations and the hybrid SaaS architecture are another. HMH primarily runs tests from its private AWS environment and can scale to more than 50 load generator pods for large all-in-one tests.
The team also uses Gatling Enterprise Edition as part of a broader ecosystem that includes AWS and Datadog. While Datadog supports observability and volumetric analysis, Gatling Enterprise Edition is what allows the team to run realistic tests at scale and observe how those tests behave in real time.
The impact
HMH now approaches performance testing with more confidence.
By testing at up to five times normal traffic before peak periods, the team has reduced the chance that performance problems will first appear in production. Last year, HMH saw fewer performance issues because they were able to validate behavior more thoroughly ahead of time.
The team also has stronger confidence heading into the next Back-to-School period, especially as they continue broadening the realism of their testing approach.
Advice for teams scaling performance testing
HMH’s advice is practical. Muzzamil advises teams to “have a dedicated performance environment and a stable functional build, so your scripts don’t fail because of unrelated functional issues. Make sure you’re testing against a populated database, not an empty one.”
He also stresses the importance of being mindful of cloud costs: “If your environment is in the cloud, shut it down when you’re not testing and bring it up again when you need to run performance tests.” That level of operational discipline matters just as much as tooling.
What’s next for HMH
HMH expects performance testing adoption to grow beyond the core performance engineering team. Previously unused repositories are becoming active again as more teams seek access and want to run their own tests.
Next priorities include:
- Enabling more teams to adopt Gatling Enterprise Edition through pipelines and standardized setup
- Expanding UI-level performance testing for more realistic end-user journeys
- Improving testing environment isolation with a dedicated performance environment
- Continuing to mature performance practices across teams
For HMH, Gatling Enterprise Edition is not just a way to run tests. It is the foundation for scaling performance validation across a complex education platform during the periods when reliability matters most.
As the team expands performance testing across more services, more teams, and more realistic user journeys, Gatling Enterprise Edition continues to provide the visibility and scale they need to prepare with confidence.

