
Working Through a Monorepo Migration with 25 Developers

When I was asked to lead a frontend consolidation into a monorepo, I thought it would be mostly technical work. Hard, yes, but predictable. I was wrong. The technical part was difficult, but still manageable. The real challenge was people, habits, and fear of change.

The old setup was painful in the way only large legacy systems can be. We had 11 services, 33 servers, and five teams that had learned to survive inside that architecture. Every service drifted over time. Different library versions, different configs, different release rhythms. Shared code existed, but as separate libraries that needed their own maintenance cycle. Any change in shared logic could break another service in another repository, and then you spent half a day just figuring out where the blast radius started.

From the outside, consolidation looked obviously correct. From inside, it was a threat to everyone’s routine.

We started with almost no safety net. Legacy code, zero automated tests, and 25 developers who still had feature deadlines while the platform was moving under their feet. I tried to keep changes inside service code minimal because each team still owned its domain. Even with that principle, migration was messy. I copied services step by step, forced them to build together, then fixed whatever exploded next.

At first I hoped this would be solved by better tooling choices. It was not that simple. Package manager experiments burned time. Bundler migration burned more. Legacy dependencies were built for assumptions that no longer existed. For a while the process felt absurdly repetitive: new build error, new workaround, new conflict, repeat tomorrow.

The React upgrade was another reality check. React 18 didn’t "break" good code. It exposed bad patterns we had been ignoring for years. Some developers felt like the migration was causing problems. My view was the opposite: the migration made hidden problems visible. Painful, but useful.
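To make "exposed bad patterns" concrete: the classic example is an effect that subscribes to something but never returns a cleanup. React 18's StrictMode mounts, unmounts, and remounts components in development, so the missing cleanup shows up as duplicated subscriptions. Here is a minimal sketch in plain TypeScript (no React, all names illustrative) that simulates that mount cycle:

```typescript
type Listener = () => void;

// A toy store standing in for any external subscription target.
class Store {
  private listeners: Listener[] = [];
  subscribe(fn: Listener): () => void {
    this.listeners.push(fn);
    // The unsubscribe function -- the cleanup the legacy code ignored.
    return () => {
      this.listeners = this.listeners.filter((l) => l !== fn);
    };
  }
  get count(): number {
    return this.listeners.length;
  }
}

const store = new Store();

// Legacy pattern: subscribe and throw the cleanup away.
function leakyEffect(): void {
  store.subscribe(() => {});
}

// Fixed pattern: keep the cleanup and run it on "unmount".
function safeEffect(): () => void {
  return store.subscribe(() => {});
}

// Simulate StrictMode's mount -> unmount -> mount cycle.
leakyEffect(); // mount #1: no cleanup exists to run on unmount
leakyEffect(); // mount #2: a second listener lingers
console.log(store.count); // 2 -- the leak the double mount makes visible

const cleanup = safeEffect(); // mount #1
cleanup();                    // unmount runs the cleanup
safeEffect();                 // mount #2
console.log(store.count);     // 3: the two leaked listeners plus one live
```

The code had always leaked like this; React 17 just never exercised the path that made it visible.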

The DevOps part became political very quickly. I asked for monorepo-style pipeline behavior: install once, build in parallel, deploy selectively. The first answer was basically, "That’s not how we do it." We got 11 separate jobs and a sequential meta-pipeline. Same repo cloned repeatedly, same dependencies installed repeatedly, same waste repeated in every run. The full cycle took more than 1.5 hours.

That was the moment I stopped waiting for alignment and just built the flow myself. Parallel execution, caching, cleaner build graph. Time dropped to around 25 minutes, and with warm cache it was down to minutes. I was asked not to touch it anymore, but teams kept using it because it was simply better.
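The core of the "cleaner build graph" idea can be sketched briefly. Group packages into waves: a package is ready once everything it depends on is built, and every package in a wave can build in parallel. This is a simplified illustration with invented package names, not our actual pipeline:

```typescript
type Graph = Record<string, string[]>; // package -> packages it depends on

// Partition the dependency graph into parallelizable "waves".
function buildWaves(graph: Graph): string[][] {
  const waves: string[][] = [];
  const done = new Set<string>();
  while (done.size < Object.keys(graph).length) {
    // A package is ready when every dependency is already built.
    const ready = Object.keys(graph).filter(
      (pkg) => !done.has(pkg) && graph[pkg].every((d) => done.has(d))
    );
    if (ready.length === 0) throw new Error("dependency cycle");
    ready.forEach((pkg) => done.add(pkg));
    waves.push(ready);
  }
  return waves;
}

// Toy version of the layout: shared packages first, services on top.
const graph: Graph = {
  "ui-kit": [],
  "api-client": [],
  "service-a": ["ui-kit", "api-client"],
  "service-b": ["ui-kit"],
};

console.log(buildWaves(graph));
// [["ui-kit", "api-client"], ["service-a", "service-b"]]
```

With one install at the top, a warm dependency cache, and each wave running as parallel jobs instead of a sequential meta-pipeline, the 11-job serial flow collapses to a handful of wave boundaries; that is essentially where the 90-to-25-minute drop came from.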

The hardest days were not technical. The hardest days were when people asked to roll everything back. One team lead pushed for full rollback because his team had deadlines and no capacity to adapt. I offered partial rollback for his service while keeping the rest moving. He still wanted the entire thing undone.

That week was rough. I spent most of it proving, over and over, that migration was not vanity work. Build comparisons, dependency scenarios, defect tracking, repeated demos. By the end I managed to keep the effort alive, but mentally that was the lowest point of the project. I remember thinking, more than once, that maybe rolling everything back would be easier than dragging everyone through resistance.

After rollout, communication became its own full-time job. Same questions every day. Branch strategy, deployment order, merge conflicts between teams, service-level rollback paths. I wrote docs, wiki pages, and walkthroughs. Almost nobody read them. Most people still preferred direct answers in chat, even for the same issue they had asked about yesterday.

That was the point where I finally accepted something simple: you cannot force every good practice at once. If you try, you burn energy on symbolic battles and lose momentum on important ones.

In the end, the migration held. Bugs directly related to migration stayed low in the stabilization period, especially considering we moved 11 services with no test baseline and upgraded multiple core layers at the same time. Infrastructure dropped from 33 servers to 3. Build time collapsed from around 90 minutes to around 25 minutes. Shared code moved into unified internal packages instead of scattered dependency islands.

Was it worth it? Yes, but not for the reason people usually expect.

The biggest gain was not a number. It was changing the direction of the system. Before migration, every month added more friction. After migration, the platform finally had a shape where improvement was easier than survival. That is the difference between a system that slowly decays and one that can still evolve.

The thing I underestimated most was the human cost. The technical solution was maybe 20 percent of the work. Working through fear, politics, ownership boundaries, and change fatigue was the other 80 percent.

If I had to summarize the whole experience in one line, it would be this: migrating legacy systems is rarely a tooling problem first. It is a trust problem first.