
In Defence of the
Majestic Monolith

Small SaaS teams don't need microservices. They need a well-structured monolith. Here's why the industry's obsession with distributed systems is costing founders time, money, and shipping velocity.

Henning Botha
February 2026 · 8 Min Read · Opinion

I was on a call last year with a founder who had just hired two developers to build the first version of their SaaS product. They had $80k in runway. They needed to ship something in three months or they were out of business. When I asked what their architecture looked like, they told me, somewhat proudly, that they were building microservices on Kubernetes.

They burned through their runway setting up infrastructure. They never shipped. The company is no longer around.

This is not a rare story. It plays out constantly in the startup world, usually with the same cast of characters: engineers who learned distributed systems at a big tech company, now applying Netflix-scale architecture to a product with zero users. The vocabulary sounds impressive. The results are catastrophic.

"Start-ups should not use microservices. That's a Netflix problem." — Martin Fowler

Fowler said this years ago and it's still true. What changed is that the tooling has gotten so good — Kubernetes, Docker Compose, service meshes — that it's now easy to build the wrong architecture quickly. That's not progress. That's a faster route to the wrong destination.

What a Monolith Actually Is (Not What You're Thinking)

When most developers hear "monolith," they picture a sprawling ball of mud — thousands of lines of unstructured code, global state everywhere, every function calling every other function. A legacy PHP application from 2004. That's not what I mean.

A majestic monolith is a single deployable unit with clear internal structure. The code is modular. Concerns are separated. There are clear boundaries between the auth layer, the billing layer, the core domain logic, and the API layer. It's just that those boundaries exist as code organisation within a single application, not as network calls between separate services.

The key insight: Microservices enforce module boundaries via network boundaries. A well-structured monolith enforces the same module boundaries via code conventions and directory structure. Both can work. Only one of them requires you to think about service discovery, distributed tracing, network partitions, and eventual consistency before you've validated your product idea.

Shopify ran as a Rails monolith for most of its history, serving millions of merchants and billions of dollars in GMV. Stack Overflow, famous for serving enormous traffic from a small number of servers, still runs as a monolith. Basecamp, GitHub (for a long time), and Notion all ran or run as monoliths at significant scale. The ceiling on a well-built monolith is far higher than most developers believe.

The Real Costs of Premature Microservices

Let me be specific about what microservices cost a small team, because the costs are often invisible until they compound:

Operational overhead: Every service needs its own CI/CD pipeline, its own logging, its own health checks, its own deployment configuration, and its own secrets and environment variables. What is one configuration file in a monolith becomes a distributed configuration management problem across a dozen repositories. Someone on the team now spends 30% of their time on DevOps instead of product.

Local development complexity: Running a microservices architecture locally requires orchestrating multiple services. Docker Compose helps but doesn't eliminate the problem — every developer needs to spin up every service they need to touch, and the chances of version drift between local and production environments increase with every service added. Compare this to a monolith: one command, one process, everything running.

Distributed debugging: When something goes wrong in a monolith, you read the logs. When something goes wrong in a microservices architecture, you pull up your distributed tracing tool, correlate request IDs across four service logs, identify which service introduced latency, and figure out why the retry logic on service C is sending 3x the expected traffic to service D. Debugging time that used to take minutes takes hours.

Data consistency nightmares: In a monolith, a database transaction either commits or rolls back. You get strong consistency for free. In microservices, cross-service operations that need to be atomic require either two-phase commit (complex, brittle) or saga patterns (complex, eventually consistent). A feature that would be a single database transaction in a monolith becomes a distributed coordination problem with multiple failure modes.
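To make the contrast concrete, here is a minimal in-memory sketch of what "commit or roll back" buys you inside a single process. The `Accounts` type and `transfer` helper are illustrative names, not a real ORM API:

```typescript
// A toy stand-in for a database transaction: either both writes apply,
// or neither does. `Accounts` and `transfer` are illustrative only.
type Accounts = Map<string, number>;

function transfer(accounts: Accounts, from: string, to: string, amount: number): boolean {
  const snapshot = new Map(accounts); // BEGIN: remember state for rollback
  try {
    const fromBalance = accounts.get(from);
    const toBalance = accounts.get(to);
    if (fromBalance === undefined || toBalance === undefined) throw new Error("unknown account");
    if (fromBalance < amount) throw new Error("insufficient funds");
    accounts.set(from, fromBalance - amount);
    accounts.set(to, toBalance + amount);
    return true; // COMMIT: both writes land together
  } catch {
    for (const [key, value] of snapshot) accounts.set(key, value); // ROLLBACK
    return false;
  }
}
```

In a microservices version of the same feature, the two writes live in two services' databases, and there is no shared snapshot to restore. That gap is exactly what sagas and two-phase commit exist to fill.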

The acid test: If an engineer on your team can't explain exactly how a multi-service operation would remain consistent in the face of a network partition, you're not ready for microservices. If your entire team can't explain it, you're absolutely not ready.

Velocity tax: Adding a new feature to a monolith is usually one PR in one repository. Adding the same feature to a microservices architecture might require coordinated PRs across three repositories, with careful attention to API versioning and deployment order. This isn't a small overhead; it's a tax paid on every single feature you ship.

When Microservices Actually Make Sense

I'm not anti-microservices. I'm anti-premature-microservices. There are legitimate reasons to extract services from a monolith:

Independent scaling requirements: If your image processing pipeline needs 50x more compute than your API during batch jobs, extracting it as a separate service (or worker) lets you scale it independently without scaling the whole application. This is a real need — but it's a need that emerges from actual usage data, not one you architect for speculatively before you have users.

Team topology: Conway's Law states that organisations design systems that mirror their communication structures. If you have 8 teams of 8 engineers, each owning a distinct product domain, microservices with strong service ownership boundaries map well to your team structure. If you have a team of 4 engineers, you have one team. Build one thing.

Genuinely different technology requirements: The QuartzBot architecture has a Python FastAPI service for the AI pipeline alongside a Next.js frontend. That's not microservices for its own sake — Python is the right language for LangChain, NumPy, and pgvector operations. The services are separated by technology constraint, not architectural ideology.

Compliance and data isolation: If parts of your system handle regulated data (PCI, HIPAA, GDPR) and parts don't, isolation at the service level can simplify your compliance scope. This is a real architectural driver — but one that applies to a small fraction of applications.

The Path: Monolith First, Extract Later

The pattern I recommend for every new SaaS product is the same: build a well-structured monolith first. Ship fast. Learn what's actually slow, what actually needs scaling, what actually belongs in a separate service. Then extract, with evidence.

This is the opposite of how most technical founders approach it, because designing a distributed system feels like doing serious engineering. It feels like you're building something that can scale to a million users. In reality, you're building complexity that doesn't survive contact with your first hundred users' actual behaviour, let alone a million.

The irony is that a monolith, built well, is far more likely to get you to product-market fit — and therefore to the scale where microservices might actually be warranted — than a microservices architecture that burns your runway before you've validated anything.

Practical Guidance: Structuring a Monolith for Future Extraction

If you're convinced, here's how to build a monolith that's ready to be broken apart if and when the time comes:

Organise by domain, not by layer. Instead of /models, /controllers, /services, structure your code as /billing, /auth, /notifications, /core. Each domain module owns its own models, business logic, and data access. When you eventually extract a domain into a service, the code boundary already exists.
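Concretely, a domain-first layout might look like this (module names illustrative):

```
src/
  billing/          # owns billing models, logic, and data access
    index.ts        # the only file other domains import from
  auth/
  notifications/
  core/             # shared domain logic
```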

Treat domain boundaries as API contracts. Even inside a monolith, establish the convention that Domain A does not reach directly into Domain B's database tables. Domain B exposes a function interface. Domain A calls it. This discipline makes future extraction trivial — you're just replacing a function call with an HTTP call.
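A sketch of that convention in TypeScript, with both "domains" collapsed into one file for brevity. All names here are hypothetical; the point is that billing never reads auth's internal data, only the exposed function:

```typescript
// The auth "domain": owns its data, exposes a narrow interface.
const auth = (() => {
  // Private to the auth domain; the monolith equivalent of "its tables".
  const users = new Map<string, { id: string; plan: "free" | "pro" }>([
    ["u1", { id: "u1", plan: "pro" }],
  ]);
  return {
    getPlan(userId: string): "free" | "pro" | undefined {
      return users.get(userId)?.plan;
    },
  };
})();

// The billing "domain": calls auth's interface, never its storage.
const billing = {
  monthlyChargeInDollars(userId: string): number {
    // The contract: a plain function call today, an HTTP call after extraction.
    const plan = auth.getPlan(userId);
    if (plan === undefined) throw new Error("unknown user");
    return plan === "pro" ? 29 : 0;
  },
};
```

Extraction later means reimplementing `auth.getPlan` as a network client with the same signature; none of billing's call sites need to change.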

Use a job queue for async work from day one. Background jobs — sending emails, processing uploads, generating reports — should never run inline in a request. Use a job queue (Sidekiq, BullMQ, anything). This gives you the operational profile of an async worker without the complexity of a separate service. When you do need to scale the worker separately, you already have the separation.

Measure before you extract. When you experience actual performance problems, profile first. You will almost always discover that 80% of your latency is in 20% of your code — usually a specific database query or a synchronous operation that should be async. Fix those first. Extract a service only when profiling shows that extraction is the right answer, not the first answer.

Closing: Ship the Thing

The best architecture for a startup is the one that lets you ship fast, iterate quickly, and keep your options open. A well-structured monolith gives you all three. Premature microservices give you none of them.

At HJB CodeForge, every product we build starts as a monolith. Net Terms Tracker is a Remix monolith deployed as a single Fly.io app. Track & Thrive is a Next.js monolith. QuartzBot has a two-service architecture — but only because Python and JavaScript are each the right tool for their respective jobs, not because we thought we needed a distributed system.

The decision to split should always be driven by evidence — scaling constraints, team topology, compliance requirements — not by architectural fashion. Build the simplest thing that ships. Then let real usage data tell you what to build next.

Henning Botha
Founder, HJB CodeForge. I help founders make pragmatic architecture decisions and ship software that works.