How I Migrated 11 Microservices to a Monorepo (And Why I Almost Rolled It All Back)
The Beginning: "It Was About Time"
When I was first asked to handle the frontend consolidation, my initial thought was: "Finally." The problems with the old architecture had been accumulating for years.
33 servers. 22 for production (2 for each of the 11 services) plus 11 for development. Sounds harmless until you realize what it means: want to add a staging environment? Get ready to scale the entire fleet again. A massive undertaking.
Each service was updated differently. Even though we tried to keep them consistent, over time each started living its own life - different library versions, different configurations, constant dependency conflicts during installation.
Code sharing? Separate shared libraries that themselves needed maintenance, updates, and compatibility checks against each service. Sharing was organized at the most basic level.
The worst part: you change something in a shared library and have no quick access to the repositories that consume it. Something breaks elsewhere and it's unclear why. You have to switch between repositories, search, and verify.
In short, there was a lot of hope for consolidation. Plus a chance to update frameworks - the project was built from legacy code that hadn't been updated in years. There was no architecture to speak of.
Main Concern: Legacy Code and 5 Teams
Before starting work, I was most worried about the legacy code. We had plenty of it, and zero tests.
The second problem was organizational: 5 teams of 5 people each, with each team responsible for its own service. They needed the ability to deploy both separately and together, and to independently roll back their code if something went wrong.
They needed freedom + standardization simultaneously. These issues had to be solved first.
At that point, I hadn't yet realized that the most difficult part would be retraining. Many were used to working the old way and didn't want to change anything.
First Step: Copy-Paste and Prayer
I started by simply copying one service as-is and trying to make it work with another. Started with two, moved bit by bit, checking each time: does everything work, does it build, does it even start?
My principle: minimal intervention in the services themselves. These are other teams' areas of responsibility. But I still had to refactor some libraries and write shims for legacy code.
Package Manager Hell: yarn vs pnpm vs "Everything Broke"
Most of the problems came from legacy libraries. One service used a new version of some package, another used an old one. Yarn 1.x refused to handle the different versions - its workspaces support isn't as good as in recent releases.
Tried upgrading to yarn 4. Didn't work. Again, due to legacy libraries.
Tried pnpm. Didn't notice much difference compared to yarn - maybe just better isolation. Speed and build sizes were roughly the same.
Dependency installation took a very long time, and testing different package managers ate more time than I expected. At some point I got tired of experimenting and stayed on yarn 1.x - at least it worked stably.
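For context, the yarn 1.x workspaces setup looked roughly like this root package.json (the glob paths and library name are hypothetical). Yarn 1's `resolutions` field is what let me force a single version of a package when two services disagreed:

```json
{
  "private": true,
  "workspaces": ["apps/*", "packages/*"],
  "resolutions": {
    "some-legacy-lib": "1.2.3"
  }
}
```

With this in place, `yarn install` at the root hoists shared dependencies once instead of once per service.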
Webpack: Wanted Vite, But Legacy Said No
Personally, I like Vite - fast, modern. But it doesn't work with CommonJS (old syntax with require()). And our legacy libraries are exactly on CommonJS. Had to give it up.
The original project was on Webpack 4 with tons of customizations written 5 years ago. Nobody understood what they were for. It just worked and nobody touched it.
Decided to start with Webpack 5 from a clean install, with gradual customization for each service. Webpack 5 supports better tree shaking and bundle optimization.
Spent most of my time on old libraries. They refused to install or build and threw lots of errors:
Module not found: Error: Can't resolve 'crypto'
Module not found: Error: Can't resolve 'stream'
Module not found: Error: Can't resolve 'buffer'
Webpack 5 removed automatic polyfills for Node.js modules. Old libraries expected them to be there. Had to add polyfills manually:
resolve: {
  fallback: {
    crypto: require.resolve('crypto-browserify'),
    stream: require.resolve('stream-browserify'),
    buffer: require.resolve('buffer/'),
    // ... about 10 more
  }
}
Every day brought a new polyfill, a new error, a new workaround. After a week of attempts, I finally got everything running stably.
React 18: Migration Revealed What Was Already Broken
When migrating to React 18, I discovered that most developers don't understand how hooks work. On top of that, React 18 switched to automatic batching of state updates.
All the errors that surfaced during migration came down to incorrect hook usage that React 17 silently forgave:
- Async code in useEffect without cleanup
- Setting states one after another and expecting them to update synchronously
- Missing entries in the dependency array
Had to explain the React update cycle, new batching. "Why did it work before but doesn't work now?" - the most common question.
My answer: it didn't work correctly before either, React 17 just hid the problem. React 18 is more honest.
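The "setting states one after another" failure mode is easiest to see in a minimal sketch. This is plain Node, not React itself - just a toy state container that queues updates and applies them in one batch, the way React 18 batches state updates:

```javascript
// Minimal simulation (plain Node, NOT React) of batched state updates.
// `set` queues an update; `flush` applies the whole batch at once.
function createState(initial) {
  let state = initial;
  const queue = [];
  return {
    get: () => state,
    set(update) {
      queue.push(update);
    },
    flush() {
      // Apply the batch: plain values replace state,
      // functional updates receive the latest state.
      for (const u of queue) {
        state = typeof u === 'function' ? u(state) : u;
      }
      queue.length = 0;
      return state;
    },
  };
}

// Value-based updates: both calls read the same stale snapshot (0),
// so two "increments" collapse into one.
const count = createState(0);
count.set(count.get() + 1);
count.set(count.get() + 1);
console.log(count.flush()); // 1, not 2

// Functional updates compose correctly inside the batch.
const count2 = createState(0);
count2.set((c) => c + 1);
count2.set((c) => c + 1);
console.log(count2.flush()); // 2
```

The same code "worked" under React 17 only when each set triggered an immediate re-render between the two reads - which was never guaranteed.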
I consider the migration successful. Problems that needed fixing anyway were identified and corrected. Better to catch them now than have them blow up in production who knows when.
DevOps: "That's Not How It Works"
The perennial problem of large companies: many departments responsible for different things, scattered across different cities.
The company had never used monorepo. When I asked the DevOps team to set up Jenkins with the ability to run either one service, or several, or all, they said:
"That's not how it works."
They created 11 separate build jobs plus one meta-pipeline that runs all 11 sequentially.
What this meant:
- 11× git clone of the same repository
- 11× yarn install of the same dependencies
- 11× builds sequentially
Total: over 1.5 hours for each update.
I tried to explain that monorepo is one repository, dependencies need to be installed once, then Turborepo builds services in parallel. They didn't listen. "We've always done it this way."
Had to create it myself. Set up parallelism, Turborepo caching. Reduced build time to 25 minutes. With cache - to several minutes.
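The Turborepo side of that setup is a small turbo.json at the repo root - something like this sketch (in Turborepo 1.x the key is "pipeline"; 2.x renamed it to "tasks"):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    }
  }
}
```

`turbo run build` then builds all services in parallel, caching outputs so unchanged services are skipped, and `turbo run build --filter=<service>` builds just one.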
The DevOps department didn't like it. I was asked not to touch the settings anymore. But the solution turned out to be so much more convenient that it was kept. Development teams voted with their feet - everyone started using my pipeline.
There were other disagreements. I suggested deploying to /var/www/ - the standard web folder. They replied: "We've always deployed from the home folder and will do the same."
In this case, I didn't insist. Basically, it's not my area of responsibility. Choose your battles.
"Roll Everything Back"
Not everyone took the updates positively.
There was a moment when a team lead from another team asked to roll everything back. Motivation: "The team doesn't have time to migrate, we have deadlines."
I suggested a compromise: roll back only his service and keep the rest. Monorepo allows this - you can set up deployment of one service separately.
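The per-service rollback I offered is mechanically simple: restore only that service's directory to the last good commit. A self-contained demo in a throwaway repo (service names hypothetical):

```shell
# Demo in a throwaway repo: two services; roll back only service-a
set -e
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email demo@example.com && git config user.name demo
mkdir -p apps/service-a apps/service-b
echo v1 > apps/service-a/version; echo v1 > apps/service-b/version
git add -A && git commit -qm "good release"
GOOD=$(git rev-parse HEAD)
echo v2-broken > apps/service-a/version; echo v2 > apps/service-b/version
git add -A && git commit -qm "release with a broken service-a"
# Restore only service-a's directory to the known-good commit
git checkout "$GOOD" -- apps/service-a
git commit -qm "Roll back service-a to $GOOD"
cat apps/service-a/version   # v1 (rolled back)
cat apps/service-b/version   # v2 (untouched)
```

Combined with per-service deploys, this means one team's rollback never drags the other ten services with it.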
But he insisted on a complete rollback. Said the migration creates more problems than it solves.
Had to fill out lots of documentation and run tests to convince everyone that the migration was necessary. Spent a whole week on:
- Installing/removing dependencies in different combinations
- Builds in different modes
- Comparing bundle sizes
- Build time measurements
- Demonstrating the convenience of working with shared code
In the end, I managed to convince them. The team agreed to continue the migration.
But there was a moment when I myself thought: "Maybe we really should roll it back? Why did I get into this?"
It's very difficult to prove the necessity of updates, because part of the payoff is a foundation for the future: shared code, longer support, reduced infrastructure. None of it is visible immediately - it pays off over months.
Coordinating 25 People: Nobody Reads Documentation
After other developers got involved, questions started coming. The main ones were related to transitioning to the new architecture:
- How do we now work with Git branches in monorepo?
- What's the deployment order?
- How to merge if multiple teams change code simultaneously?
- How to roll back only your service without touching others?
I filled out documentation. Wrote a Wiki. Conducted training sessions.
Result? Almost nobody read the documentation. Had to explain to everyone personally. The same questions 10 times.
Still can't convince the team to commit yarn.lock and install from it. Someone still adds it to .gitignore.
I explained. Provided documentation. Filled out the wiki. Useless. People are used to working the old way and don't want to change.
Decided to give up on this front. Choose your battles. Yarn.lock isn't critical, monorepo works without it.
Results: Less Than 5 Percent Bugs
I analyzed commits for the 2-3 weeks after all developers were fully onboarded.
Less than 5 percent of commits were related to bugs caused by migration. Most were normal development bugs unrelated to migration.
I consider the migration successful. Especially considering we had:
- Zero automated tests
- Legacy code 5+ years old
- 25 developers, 5 teams
- Three major upgrades simultaneously (React, Node, Webpack)
What we got:
Build time: 1.5 hours → 25 minutes (with cache - several minutes)
Servers: 33 → 3
Shared code: now in packages/ui and packages/utils instead of separate npm packages
Deployment: one command, one PR, one CI/CD pipeline
Technical debt:
- React 18 ✅
- Webpack 5 ✅
- Node 22 ✅
- Configuration standardization ✅
- Foundation for micro-frontends ✅
Trade-offs:
The bundle grew by 5.5 MB (gzip: +2-3 MB). This is temporary - shared chunks between services aren't configured yet. After that optimization, I expect a 30-40 percent reduction.
Yarn workspaces don't work perfectly in 1.x version, but migration to the latest yarn version is postponed to the second phase.
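The shared-chunk optimization I have in mind is standard webpack 5 splitChunks configuration - roughly this sketch (names and thresholds are hypothetical, not the final config):

```javascript
// Hypothetical webpack 5 sketch: pull node_modules code used by
// several services into common chunks instead of duplicating it.
module.exports = {
  // ...existing config
  optimization: {
    splitChunks: {
      chunks: 'all',
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          reuseExistingChunk: true,
        },
      },
    },
  },
};
```

Since all services now build from one config, each shared library should land in the bundle once instead of eleven times.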
What I Would Do Differently
1. Incremental Approach
Could have done the monorepo merge first, then the version upgrades. But then there wouldn't have been the deadline pressure from server consolidation - and without deadlines, such projects drag on for years.
2. More Time for Communication
Technical solution - 20 percent of the work. People management - 80 percent. Underestimated this.
3. At Least Some Tests
Zero tests - Russian roulette. Got lucky with 5 percent bug rate, but that's luck, not skill. Next time I would insist on minimal test coverage before migration.
4. Not Try to Push yarn.lock
Spent a month trying to convince. Useless. Should have given up immediately and spent time on something more important.
Was It Worth It?
Yes.
There were many moments when I thought about quitting. Resistance from DevOps. Requests to roll everything back. Team misunderstanding. Legacy code that doesn't want to work.
But without this there wouldn't have been:
- Infrastructure reduction (saving company money)
- Build speed (saving developers' time)
- Foundation for shared code (future refactoring)
- Technical debt that would have to be addressed sooner or later anyway
The biggest achievement, I think, is code consolidation. The ability to extract common parts. Shared logic for build and deployment. Unified Webpack settings. Standardization.
This laid a good foundation for further refactoring and let us get rid of the libraries we used to share code between projects. Everything lives in the shared part now: no update headaches, no separate npm-module hassle, no per-service installation.
Now everything is in one place and connected during build.
Metrics (for those who love numbers):
- 25 developers, 5 teams
- 11 services + 2 libraries → 1 monorepo
- 33 servers → 3 servers
- Build: 90+ minutes → 25 minutes (roughly 70 percent faster; over 90 percent with cache)
- Bug rate: less than 5 percent of commits related to migration
- Zero automated tests (yes, I know)
- React 17→18, Node 16→22, Webpack 4→5
- 2 weeks of active work