Why .NET 10 Makes Modular Monoliths More Viable Than Microservices

Microservices were never meant to be the default starting point.
They were a response to a very specific set of problems: large teams, independent deployment requirements, organisational scaling, and systems that had already outgrown a single codebase. Somewhere along the way, that nuance was lost. Microservices became a goal rather than a tool. The result is familiar: systems that are operationally complex long before they are functionally complex, and teams spending more time managing infrastructure, retries, contracts, and observability than delivering business capability. Debugging becomes an exercise in tracing distributed failures instead of understanding domain logic.
This isn’t because microservices are flawed. It’s because they are expensive, and that cost is paid every single day you operate them.
What has changed over the last few runtime releases, and becomes much harder to ignore in .NET 10, is that the alternative has become significantly stronger.
Not the old-style monolith. A modern modular monolith, built around explicit boundaries, asynchronous workflows, and in-process messaging. One that keeps the architectural discipline people originally reached for microservices to achieve, without paying the network tax.
.NET 10 materially changes the cost of doing the right thing inside a process.
The Cost
Architecture is ultimately about trade-offs, and trade-offs are driven by cost. Not just financial cost, but cognitive cost, operational cost, and failure cost. A network boundary is not just slower than a method call. It introduces an entirely different failure model. Once you cross a process boundary, you must assume partial failure as the default. You design for retries, idempotency, backoff, circuit breaking, message duplication, and timeouts. Even when everything is healthy, latency is measured in milliseconds rather than nanoseconds.
An in-process boundary, by contrast, fails fast and predictably. There is no transport, no serialisation, no handshake, no packet loss, no retry storm. Historically, developers accepted the network cost because in-process architectures tended to degrade into unstructured, tightly coupled systems.
That trade-off has shifted.
.NET 10 makes it cheap to build structured in-process systems. Async execution is cheaper. Coordination is cheaper. Cancellation is cheaper. Observability is cheaper. These are not headline features, but together they change what is viable.
What .NET 10 Changes in Practice
The most important improvements in .NET 10 are not APIs you call directly. They are changes in behaviour. Async state machines allocate less and resume more efficiently. The ThreadPool reacts more intelligently under mixed workloads. Cancellation propagation is faster and more predictable. Diagnostic activity flows with less overhead. These things compound.
In earlier runtimes, building internal pipelines of asynchronous work could quickly put pressure on the ThreadPool, increase allocation rates, and create pathological scheduling behaviour under load. This pushed teams toward external queues, message brokers, or separate services simply to regain stability.
In .NET 10, that pressure is largely gone. Internal asynchronous workflows are now cheap enough to be the default rather than the exception.
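As a rough illustration of such an internal pipeline (the capacity and names here are arbitrary, not from any specific library), a bounded System.Threading.Channels channel gives you asynchronous hand-off with built-in backpressure, entirely in-process:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Bounded capacity means producers wait when consumers fall behind,
// giving in-process backpressure without an external broker.
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(100)
{
    FullMode = BoundedChannelFullMode.Wait
});

// Producer: enqueue work items, then signal completion.
var producer = Task.Run(async () =>
{
    for (var i = 0; i < 1000; i++)
        await channel.Writer.WriteAsync(i);
    channel.Writer.Complete();
});

// Consumer: drain the channel in the same process, no transport involved.
long sum = 0;
await foreach (var item in channel.Reader.ReadAllAsync())
    sum += item;

await producer;
Console.WriteLine(sum); // prints 499500
```

The same shape that would otherwise justify a message broker becomes a few lines of standard library code.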
This matters because modular monoliths live or die on internal messaging.
From HTTP Chains to In-Process Workflows
Take a common microservices setup. A request enters an API gateway, flows through multiple services, and each hop introduces latency and failure risk.
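Sketched out, with illustrative service names, the request path looks something like this:

```
Client ──HTTP──▶ API Gateway ──HTTP──▶ Users ──HTTP──▶ Billing ──HTTP──▶ Notifications
```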

Every arrow here represents a network call, even if all services are deployed in the same cluster. Each hop requires retries, timeouts, and tracing just to understand what happened when something goes wrong.
Now compare this to a modular monolith using in-process messaging.
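The same capabilities, with illustrative module names, inside one process:

```
Client ──HTTP──▶ Host process
                   Users module ── publishes UserCreated ──▶ in-process dispatcher
                                                               ├──▶ Billing module handler
                                                               └──▶ Notifications module handler
```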

There is still decoupling. There are still explicit boundaries. There is still asynchronous execution. What’s missing is the network.
In .NET 10, this pattern scales far further than it used to.
In-Process Messaging That Holds Together
The key mistake that doomed so many monoliths was direct coupling. Modules reached into each other’s internals because it was easy. The solution is not to add HTTP calls. It is to add structure.
A simple example illustrates the point.
public interface IDomainEvent { }
public sealed record UserCreated(Guid UserId) : IDomainEvent;
Modules do not call each other directly. They publish events.
public sealed class UserService
{
    private readonly IEventDispatcher _dispatcher;

    public UserService(IEventDispatcher dispatcher)
    {
        _dispatcher = dispatcher;
    }

    public async Task CreateUserAsync(User user, CancellationToken ct)
    {
        // Persist user
        await _dispatcher.PublishAsync(new UserCreated(user.Id), ct);
    }
}
Handlers live in other modules, completely unaware of who raised the event.
public sealed class BillingUserCreatedHandler
    : IEventHandler<UserCreated>
{
    public Task Handle(UserCreated evt, CancellationToken ct)
    {
        // Initialise billing account
        return Task.CompletedTask;
    }
}
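IEventDispatcher and IEventHandler<T> are deliberately left abstract here. As a minimal sketch of what could sit behind them (a registry-based dispatcher; a real implementation would more likely resolve handlers from the DI container), the shape might be:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public interface IDomainEvent { } // as defined earlier

public interface IEventHandler<in TEvent> where TEvent : IDomainEvent
{
    Task Handle(TEvent evt, CancellationToken ct);
}

public interface IEventDispatcher
{
    Task PublishAsync<TEvent>(TEvent evt, CancellationToken ct)
        where TEvent : IDomainEvent;
}

// Registry-based dispatcher: handlers register per event type and are
// awaited in turn. A failing handler throws immediately -- in-process
// failure semantics, no ambiguous half-delivered message.
public sealed class InProcessEventDispatcher : IEventDispatcher
{
    private readonly Dictionary<Type, List<object>> _handlers = new();

    public void Subscribe<TEvent>(IEventHandler<TEvent> handler)
        where TEvent : IDomainEvent
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            _handlers[typeof(TEvent)] = list = new List<object>();
        list.Add(handler);
    }

    public async Task PublishAsync<TEvent>(TEvent evt, CancellationToken ct)
        where TEvent : IDomainEvent
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list))
            return; // no subscribers is a valid state

        foreach (var handler in list.Cast<IEventHandler<TEvent>>())
            await handler.Handle(evt, ct);
    }
}
```

The publisher never learns which modules subscribed; the boundary is the event type itself.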
In .NET 10, patterns like this are cheap enough to use everywhere. The runtime no longer punishes you for doing the right thing. Cancellation flows correctly. Backpressure is manageable. Observability can be layered on using Activity without paying a large overhead. You end up with a system that behaves like a distributed system, but runs like a single process.
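Layering that Activity-based observability on takes only a few lines. A hedged sketch (the source name, tag key, and wrapper method are invented for illustration):

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public static class MessagingDiagnostics
{
    // When no listener (e.g. an OpenTelemetry exporter) subscribes to this
    // source, StartActivity returns null and the instrumentation is near-free.
    private static readonly ActivitySource Source = new("App.InProcessMessaging");

    public static async Task TracedPublishAsync<TEvent>(
        TEvent evt,
        Func<TEvent, CancellationToken, Task> dispatch,
        CancellationToken ct)
    {
        using var activity = Source.StartActivity($"publish {typeof(TEvent).Name}");
        activity?.SetTag("event.type", typeof(TEvent).FullName);

        try
        {
            await dispatch(evt, ct);
        }
        catch (Exception ex)
        {
            // Mark the span failed, then let normal exception flow continue.
            activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
            throw;
        }
    }
}
```

The same spans feed whatever tracing backend you already use, without the cost of distributed context propagation.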
Failure Semantics Are the Real Cost of Distribution
When you cross a network boundary, performance is not the biggest thing you give up. Predictability is.
An in-process call either succeeds or fails. If it fails, it fails immediately and deterministically. An exception is thrown, the stack unwinds, and the system remains in a known state. You can reason about it locally. You can test it easily. You can usually fix it by reading the code.
A network call does not fail like this.
When a request times out, you do not know whether the operation failed, partially succeeded, or completed successfully but lost its response. The caller is left in an ambiguous state, and ambiguity is poison to simple reasoning.
That ambiguity is why distributed systems require an entirely different set of patterns. Retries are no longer an optimisation but a necessity. Idempotency stops being a nice-to-have and becomes mandatory. Compensating actions appear, not because the domain demands them, but because the transport does. Observability shifts from helpful to essential, because without it you cannot reconstruct what actually happened.
This is the point where many systems quietly cross a line.
Once failure becomes ambiguous, every interaction must be designed as if it might run twice, or not at all, or succeed in isolation while failing in aggregate. The business logic does not change, but the mental model does. You stop writing code that describes intent, and start writing code that defensively survives uncertainty.
This is not free. It increases cognitive load, test complexity, and operational overhead long before it delivers any architectural benefit.
By contrast, in-process boundaries preserve simple failure semantics. If a module throws, the operation stops. If a transaction fails, state is rolled back. There is no need to ask whether a retry is safe, because retries are not implicit. There is no need for compensating logic, because partial success cannot leak past the boundary.
This difference explains more about architectural complexity than latency ever could.
It explains why distributed systems need sagas. It explains why message deduplication exists. It explains why debugging often involves log correlation rather than code inspection. Most importantly, it explains why introducing a network boundary too early permanently changes how the system must be written.
A modular monolith delays that cost.
It allows you to structure your system around clear boundaries and asynchronous workflows while retaining deterministic failure behaviour. You still model concurrency. You still handle cancellation. You still design for resilience. But you do so without the added burden of transport-level uncertainty.
This is the real advantage .NET 10 amplifies.
Not that it makes systems faster, but that it makes disciplined in-process design cheap enough to remain attractive. As long as your boundaries live inside a process, failure stays local, reasoning stays simple, and complexity grows with the domain rather than the infrastructure.

Why This Changes the Architecture Decision
Microservices force you to pay the full distributed systems cost up front. Modular monoliths let you defer that cost until it is justified.
With .NET 10, the ceiling for how far you can push an in-process architecture is much higher. You can scale vertically, then horizontally, and still preserve clean module boundaries. You can deploy as a single unit without sacrificing internal autonomy. You can reason about behaviour without reconstructing a distributed trace for every bug. Most importantly, you can keep the system understandable.
That is the real win.
Microservices are still the right answer in some cases. Organisational scale, regulatory boundaries, and independent vendor ownership all justify them. But for many systems, they were adopted as a workaround for limitations that no longer exist.
.NET 10 doesn’t eliminate the need for microservices. It makes not needing them a far more realistic option!






