“We’re going to move to microservices.” This is a phrase heard regularly at the start of engagements. Rarely because the team has analysed its constraints and concluded that it is the right architecture. More often because it is what others do, because recruitment requires it, or because a consultant recommended it without digging into the context.
The result, too often: exploding operational complexity, increased latency, and a team spending more time orchestrating services than delivering value.
The problem is not the monolith, it’s the poorly structured monolith
The tech discourse constantly conflates two things that must be kept distinct:
The big ball of mud monolith: everything is coupled to everything, responsibilities are mixed together, changing one line can break anything. This is indeed a problem. But it is not a deployment topology problem. It is an internal structure problem.
The modular monolith: a single deployable artefact, but with internal modules with clear responsibilities, explicit interfaces between modules, and strict rules about who can call whom. This is a perfectly viable architecture for a very large range of systems.
The confusion between the two pushes teams to “solve” an internal structure problem by adding distribution complexity. That is treating the wrong problem with the wrong tool.
What microservices deliver, and what they cost
The real benefits
- Independent scalability: scale the image processing service without scaling the user service
- Independent deployment: one team can ship their service without coordinating with others
- Fault isolation: a service going down does not take the whole system with it
- Technological heterogeneity: each service can use the stack adapted to its problem
The real costs, often underestimated
- Operational complexity: you now have 30 services to monitor, deploy, debug. The error surface explodes
- Network latency: a local call becomes a network call. On workflows that chain 10 services, this accumulates
- Distributed transactions: what was a local ACID transaction becomes a distributed consistency problem (saga pattern, compensation, idempotence)
- Discovery and observability: without distributed tracing (Jaeger, Tempo), understanding why a request fails becomes a detective exercise
- Integration tests: testing interactions between services is far more complex than testing modules inside a single process
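The distributed-transaction cost deserves a concrete picture. Below is a minimal sketch of the saga pattern with compensation; all names (`SagaStep`, `runSaga`, the step names) are hypothetical, and a real implementation would also need persisted state and idempotent handlers to survive crashes mid-saga.

```typescript
// Sketch of a saga: each step has a forward action and a compensation.
// On failure, already-completed steps are undone in reverse order.
interface SagaStep {
  name: string;
  act: () => void;        // forward action
  compensate: () => void; // undo; must be safe to call after act succeeded
}

function runSaga(steps: SagaStep[]): { ok: boolean; compensated: string[] } {
  const done: SagaStep[] = [];
  const compensated: string[] = [];
  for (const step of steps) {
    try {
      step.act();
      done.push(step);
    } catch {
      // Roll back everything that already succeeded, most recent first.
      for (const s of done.reverse()) {
        s.compensate();
        compensated.push(s.name);
      }
      return { ok: false, compensated };
    }
  }
  return { ok: true, compensated };
}

// Usage: the payment fails, so the stock reservation is compensated.
const result = runSaga([
  { name: "reserve-stock", act: () => {}, compensate: () => {} },
  { name: "charge-card", act: () => { throw new Error("card declined"); }, compensate: () => {} },
]);
// result.ok === false, result.compensated === ["reserve-stock"]
```

What was a single `BEGIN … COMMIT` in the monolith becomes this kind of explicit orchestration, plus the retry and idempotence machinery around it.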
Martin Fowler has been making this point for years in his “Monolith First” essay: don’t start with microservices. This is not a rejection of microservices; it is a recognition that the entry cost is high and only justified in certain contexts.
The real decision criteria
Criterion 1: team size and structure
Conway’s Law says that a system’s architecture tends to reflect the communication structure of the organisation that produces it. That is an observation, not a prescription.
Microservices are natural when you have multiple autonomous product teams that need to be able to ship independently. If you have a single team of 5 to 8 developers, microservices add inter-service friction without delivering the autonomy benefits.
Question to ask yourself: is organisational coupling the real problem? If so, microservices may help. If not, they add complexity without solving anything.
Criterion 2: differentiated scalability requirements
If all parts of your system have the same scalability requirements (they grow together, they are used in the same way), a horizontally scalable monolith is sufficient.
Microservices add value when a specific part of the system has radically different requirements from the rest. Concrete example: an on-demand report generation service, CPU-intensive, that must not affect the latency of the main transactional service.
Criterion 3: operational maturity
Microservices require a mature deployment, monitoring and observability infrastructure. Without Kubernetes (or equivalent), without a service mesh or gateway, without distributed tracing, without granular alerting per service, microservices are a promise without a safety net.
Blunt question: do you have an ops team capable of maintaining this infrastructure? If the answer is no, you are going to create an operational debt greater than the technical debt you are trying to resolve.
Criterion 4: compliance and isolation requirements
In regulated contexts (health data, financial data, sensitive personal data), isolation of data between domains can be a compliance requirement. Microservices with dedicated databases per service naturally satisfy this requirement.
A monolith with a shared database can also satisfy it via separate schemas and database-level access controls, but it is less natural.
The modular monolith: how to build it correctly
If the modular monolith is the right answer for your context, here are the principles that make it viable in the long term.
Define explicit module boundaries
Each module has a clear responsibility, a documented public interface, and a private internal space. Modules do not call each other directly: they go through the public interfaces.
In Java: packages with package-private access on implementations. In Go: packages with exported and unexported identifiers. In TypeScript: barrel files and import rules enforced by ESLint or dependency-cruiser.
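As a sketch of what such a boundary looks like in TypeScript (the `billing` module and every name in it are hypothetical): the module exposes one documented interface, the implementation stays private, and the barrel file is the only import point for other modules.

```typescript
// Sketch of an explicit module boundary. In a real codebase each section
// below would be a separate file, and only index.ts would be importable
// from outside the module (enforced by ESLint or dependency-cruiser rules).

// --- billing/api.ts: the documented public interface ---
export interface Invoice {
  id: string;
  orderId: string;
  amount: number;
}

export interface BillingApi {
  createInvoice(orderId: string, amount: number): Invoice;
}

// --- billing/invoice-service.ts: private implementation, never re-exported ---
class DefaultBillingService implements BillingApi {
  private seq = 0;
  createInvoice(orderId: string, amount: number): Invoice {
    this.seq += 1;
    return { id: `inv-${this.seq}`, orderId, amount };
  }
}

// --- billing/index.ts: the barrel, the module's only entry point ---
export function billingModule(): BillingApi {
  return new DefaultBillingService();
}
```

Other modules depend on `BillingApi`, never on `DefaultBillingService`; swapping the implementation, or later extracting the module into a service, does not touch callers.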
One module = one logical database
Even if everything physically shares the same database, each module must have its own schema or set of tables, and never directly access other modules’ tables. This is the hardest rule to maintain, and the most important.
If your modular monolith respects this rule, migration to microservices later (if the context requires it) is feasible: you just need to extract the module and its database into a separate service.
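One way to make the rule hard to break is to hand each module a database handle that is already scoped to its own schema. The sketch below uses a generic `exec` callback as a stand-in for your actual driver; `ModuleDb` and the table names are assumptions for illustration.

```typescript
// Sketch: a schema-scoped query helper per module. A module receives only
// its own ModuleDb, so it cannot name another module's tables by accident.
type Row = Record<string, unknown>;

class ModuleDb {
  constructor(
    private schema: string,
    private exec: (sql: string) => Row[], // stand-in for a real driver call
  ) {}

  // Every query is forced through the module's own schema prefix.
  selectAll(table: string): Row[] {
    return this.exec(`SELECT * FROM ${this.schema}.${table}`);
  }
}

// Usage: the billing module can only reach billing.* tables.
const issued: string[] = [];
const billingDb = new ModuleDb("billing", (sql) => {
  issued.push(sql); // record the SQL instead of hitting a database
  return [];
});
billingDb.selectAll("invoices");
// issued[0] === "SELECT * FROM billing.invoices"
```

Combined with database-level grants (each module's credentials limited to its own schema), the rule stops being a convention and becomes a constraint.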
Synchronous and asynchronous interfaces between modules
Modules communicate via explicit interfaces. For synchronous calls: typed methods or interfaces. For events: an internal event bus (simple Go channel, Node event emitter, Spring ApplicationEventPublisher).
Internal asynchrony lets you test fault tolerance and naturally prepares the system for possible later distribution.
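A minimal in-process event bus can be sketched on top of Node's `EventEmitter` (the `EventBus` wrapper and topic names here are assumptions, not a prescribed API). Publisher and subscriber share only the topic and the payload type, which is exactly the contract you would keep if the modules were later split into services behind a message broker.

```typescript
import { EventEmitter } from "node:events";

// Sketch of an internal event bus wrapping Node's EventEmitter.
interface OrderPlaced {
  orderId: string;
  total: number;
}

class EventBus {
  private emitter = new EventEmitter();

  publish<T>(topic: string, event: T): void {
    this.emitter.emit(topic, event);
  }

  subscribe<T>(topic: string, handler: (event: T) => void): void {
    this.emitter.on(topic, handler);
  }
}

// Usage: the billing module reacts to orders without importing the orders module.
const bus = new EventBus();
const received: OrderPlaced[] = [];
bus.subscribe<OrderPlaced>("order.placed", (e) => received.push(e));
bus.publish<OrderPlaced>("order.placed", { orderId: "o-1", total: 42 });
// received now holds the one published event
```

Note that `EventEmitter` delivers synchronously and in-memory; if handlers must survive a crash, you would back the same interface with an outbox table or a queue.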
When moving to microservices is the right decision
There are cases where microservices are the right answer:
- Multiple teams with different delivery cycles, where deployment autonomy is a real need
- Part of the system with radically different scalability requirements, such as a real-time processing function that must scale independently
- Integration of heterogeneous existing systems, with microservices as an integration layer around legacy systems that cannot be modified
- Strong isolation requirements for compliance, where each data domain sits in its own perimeter
And often, the right answer is a hybrid system: a modular monolith for the core of the system, with a few services extracted for specific well-identified needs.
Conclusion
Architecture is not a prestige competition. Just because Netflix and Uber use microservices does not mean that is the right answer to your problem. They have teams of thousands of people and scalability constraints you probably do not have.
The question is not “microservices or monolith?”. The question is “which architecture minimises accidental complexity and maximises the team’s ability to deliver value sustainably?”. Sometimes that is microservices. Often, it is a good modular monolith.
Are you in the process of defining your architecture trajectory? Let’s talk.