<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Dotnet Digest]]></title><description><![CDATA[Microsoft .Net Engineering Blog]]></description><link>https://dotnetdigest.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1760176231633/e7b92dcc-990a-48c8-ba4b-7ff4061bc7d6.png</url><title>Dotnet Digest</title><link>https://dotnetdigest.com</link></image><generator>RSS for Node</generator><lastBuildDate>Fri, 10 Apr 2026 09:27:28 GMT</lastBuildDate><atom:link href="https://dotnetdigest.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Microsoft Agent Framework .NET 1.0]]></title><description><![CDATA[Microsoft Agent Framework 1.0 is not just another wrapper around an LLM endpoint. Microsoft is positioning it as the successor to the agent work that previously lived across Semantic Kernel and AutoGe]]></description><link>https://dotnetdigest.com/microsoft-agent-framework-net-1-0</link><guid isPermaLink="true">https://dotnetdigest.com/microsoft-agent-framework-net-1-0</guid><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[software development]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[C#]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[agentic AI]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Mon, 06 Apr 2026 12:06:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/b42dd562-5796-4161-9272-a56dbc9e611a.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microsoft Agent Framework 1.0 is not just another wrapper around an LLM endpoint. 
Microsoft is positioning it as the successor to the agent work that previously lived across Semantic Kernel and AutoGen, and the core 1.0 release is described as battle-tested, stabilised, and supported with backward compatibility going forward. At the same time, the current documentation and package still show some mixed signals, with some Learn pages still carrying preview notes and several surrounding packages still using prerelease versioning, so you need to approach 1.0 as a stable core with an environment that is still settling around it.</p>
<p>That tension is exactly why this release is worth understanding. Up to now, most .NET teams interested in agents have been forced into one of two bad choices. They either built thin prompt wrappers and called them agents, or they jumped into orchestration patterns without a solid runtime model for sessions, tools, middleware, durability, hosting, and protocol interoperability. Agent Framework is Microsoft’s attempt to give .NET teams a real application model for this space, not just a demo model. The official overview reduces the framework to two big ideas, agents and workflows, but that simple split is the right place to start because it tells you what the framework is trying to solve and what it is not.</p>
<p>The first capability is agents. In Microsoft’s model, an agent is the runtime boundary around an LLM-backed interaction. It can process input, call tools, work with MCP servers, manage conversational context, and produce responses. The second capability is workflows. Workflows are graph-based compositions that let multiple agents and functions cooperate in explicit execution paths with routing, checkpointing, and human-in-the-loop support. That separation is crucial. An agent is where reasoning and tool use happen. A workflow is where execution policy happens. If you blur those two concerns, your design gets messy fast.</p>
<p>The most important sentence in the current overview is also the most sobering one: if you can write a function to handle the task, do that instead of using an AI agent. That is not a disclaimer buried in the docs. It is the design principle that should govern every production use of the framework. Agents are for open-ended interpretation, ambiguous intent, context-sensitive decisions, and tool-guided reasoning. Workflows are for controlled multi-step execution. Plain .NET code is still the right answer whenever the work is deterministic. Developers who ignore this end up building expensive, slow, flaky systems that would have been better as ordinary application code.</p>
<img src="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/e369b368-990b-4f22-a732-13ac6e843020.png" alt="" style="display:block;margin:0 auto" />

<h2>Why .NET needed this framework</h2>
<p>The old split between Semantic Kernel and AutoGen was always awkward in practice. Semantic Kernel leaned toward enterprise concerns such as filters, telemetry, structure, and service integration. AutoGen pushed harder into multi-agent patterns and orchestration. Microsoft Agent Framework combines those lines of thought into one model, explicitly calling out session-based state management, type safety, middleware, telemetry, and graph-based workflows as core features. That is a much better fit for how real .NET systems are built, because .NET teams care less about agent demos and more about stable abstractions, observable behaviour, and predictable hosting.</p>
<p>It is also good that the framework is not pinned to one model provider. The current providers overview for .NET calls out Azure OpenAI, OpenAI, Foundry, Anthropic, Ollama, GitHub Copilot, Copilot Studio, and custom providers. Underneath that, the framework leans heavily on Microsoft.Extensions.AI.IChatClient for chat-client-based scenarios, which is a very .NET way to structure the problem because it favours composition, dependency injection, decoration, and provider interchangeability over magical SDK lock-in.</p>
<p>This gives you a clean mental model. Your application owns the domain. The agent owns open-ended reasoning. Tools own deterministic side effects. Workflows own execution order and recovery. Hosting owns transport. That is much healthier than the common anti-pattern where the prompt tries to own everything at once.</p>
<h2>The right mental model before you write a single line of code</h2>
<p>You should think of Agent Framework as a layered execution pipeline, not as a chatbot library. The pipeline documentation is one of the most useful parts of the current material because it shows where the framework expects you to plug in behaviour. At the outer layer, agent middleware and telemetry wrap the run. Then the raw agent resolves context providers and gathers per-run middleware. Then the chat client pipeline handles model calls, tool invocation, and provider-specific communication. After that, responses flow back through the same layers, and context providers are notified so history can be stored.</p>
<p>That architecture is valuable for two reasons. First, it stops you from shoving every concern into prompts. Second, it gives you multiple places to enforce policy. You can intercept a whole run, a tool invocation, or the low-level model call. That means content filtering, redaction, budget enforcement, audit logging, and approval are all first-class runtime concerns instead of brittle prompt instructions that the model may or may not obey. Microsoft’s middleware docs explicitly call out logging, security validation, error handling, and result transformation as intended use cases.</p>
<p>Once you see the framework this way, a lot of design questions become easier. Should this compliance rule live in the prompt, in a tool, or in middleware? Usually middleware. Should this long-running business process be one giant agent? Usually no, it should be a workflow. Should a database write be done by the model? Never. It should be a deterministic tool or plain application code invoked under policy.</p>
<h2>The first serious agent in .NET</h2>
<p>The quickest way into the framework on .NET is still a single agent backed by a Foundry or Azure OpenAI-style chat client. The current quickstart shows AIAgent created from AIProjectClient and invoked with either RunAsync or RunStreamingAsync. It also shows AgentSession for multi-turn memory, which is where things start to become genuinely useful for applications instead of toy prompts.</p>
<p>The example below is a pattern-focused draft based on the current .NET docs. It reflects the present API shape, but you should still verify exact package and namespace combinations in your chosen provider stack because the surrounding ecosystem is moving faster than the core 1.0 announcement.</p>
<pre><code class="language-csharp">using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

// Verify the exact client chain against your chosen provider stack.
IChatClient chatClient = new AIProjectClient(new Uri(endpoint), new AzureCliCredential())
    .GetProjectOpenAIClient()
    .GetProjectResponsesClient()
    .AsIChatClient(deploymentName);

AIAgent agent = new ChatClientAgent(
    chatClient,
    "You are a claims intake assistant. Collect the facts needed to open a claim.",
    "ClaimsAgent");

// AgentSession gives the agent multi-turn memory across these calls;
// verify the session factory name for your package version.
AgentSession session = agent.GetNewSession();

Console.WriteLine(await agent.RunAsync("My name is Patrick. I have a comprehensive policy and a cracked windscreen.", session));
Console.WriteLine(await agent.RunAsync("What details do you still need from me?", session));
</code></pre>
<p>The important thing here is not the syntax. It is the contract. The agent owns conversation flow and reasoning. The session owns continuity. The instructions define role and boundaries. The rest of your system should decide what the agent is allowed to do. That means tools for deterministic lookups, middleware for governance, and application code for actual business operations.</p>
<h2>Sessions are not a nice-to-have, they are the centre of the design</h2>
<p>A lot of engineers still treat conversational state as a UI concern. Agent Framework does not. The storage documentation makes it clear that storage controls where conversation history lives, how much history is loaded, and how reliably sessions can be resumed. The built-in model distinguishes between local session state, where full history lives in AgentSession.state, and service-managed storage, where the service owns the conversation and the session points to it via a service session identifier.</p>
<p>That distinction has architectural consequences. Local session state is fine for narrow services, internal automation, or short-lived calls where your app can carry the history. Service-managed storage becomes more attractive when your provider offers durable server-side conversation management. The trade-off is control. If you keep state locally, you own truncation strategy, persistence, encryption, residency, and replay. If the service manages it, you gain convenience but must be much more deliberate about compliance, retention, cross-border concerns, and audit boundaries. Microsoft explicitly warns that third-party servers and agents can affect where your data flows, which is not a footnote for regulated teams. It is a design constraint.</p>
<img src="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/64910f70-34d4-4b15-b0e3-a14d6a850327.png" alt="" style="display:block;margin:0 auto" />

<p>The practical lesson is simple. Do not start with “How do I make the model remember?” Start with “Who should own conversation state in this system?” That question belongs to your architecture, not to prompt engineering.</p>
<h2>Middleware is where expert teams separate themselves from demo teams</h2>
<p>Agent Framework’s middleware model is one of its strongest features. The current docs describe three distinct middleware types: agent run middleware, function-calling middleware, and IChatClient middleware. That split is not accidental. It mirrors the real places where production failures happen. Sometimes you need to reject a run before it reaches the model. Sometimes you need to wrap a tool call in approval or audit logic. Sometimes you need to instrument the raw model call itself.</p>
<p>A strong .NET design will use middleware to enforce runtime policy instead of treating prompts as law. Prompts are guidance. Middleware is policy. If a model tries to call a dangerous tool, you want code, not wording, to decide whether that happens. If a model request might leak sensitive content, you want code, not wording, to redact or block it. If a request exceeds cost thresholds or token budgets, you want code, not wording, to route it elsewhere.</p>
<p>Here is the kind of middleware pattern that makes sense in a real service.</p>
<pre><code class="language-csharp">using Microsoft.Agents.AI; 
using Microsoft.Extensions.AI;

static async Task&lt;AgentRunResponse&gt; BudgetAndLoggingMiddleware(
    IEnumerable&lt;ChatMessage&gt; messages,
    AgentSession? session,
    AgentRunOptions? options,
    AIAgent innerAgent,
    CancellationToken stopToken)
{
    var joined = string.Join(Environment.NewLine, messages.Select(m =&gt; $"{m.Role}: {m.Text}"));

    if (joined.Length &gt; 12_000)
    {
        throw new InvalidOperationException("Input too large for this agent. Route to batch workflow instead.");
    }

    Console.WriteLine($"[{DateTimeOffset.UtcNow:u}] Agent run starting. Session: {session?.Id}");

    var response = await innerAgent.RunAsync(messages, session, options, stopToken);

    Console.WriteLine($"[{DateTimeOffset.UtcNow:u}] Agent run completed. Messages returned: {response.Messages.Count}");

    return response;
}
</code></pre>
<p>And then you attach it with the builder pattern the framework documents for agent run middleware.</p>
<pre><code class="language-csharp">var guardedAgent = agent
    .AsBuilder()
    .Use(runFunc: BudgetAndLoggingMiddleware, runStreamingFunc: null)
    .Build();
</code></pre>
<p>That pattern lines up directly with the current middleware API guidance, including the agent builder flow and the fact that middleware forms a chain around agent execution.</p>
<p>The deeper point is that this makes Agent Framework feel like normal .NET. You are not throwing your engineering discipline away because the application happens to use an LLM. You are applying the same discipline through a runtime that actually gives you seams to plug into.</p>
<h2>Tools and MCP are where agent systems either become useful or dangerous</h2>
<p>Without tools, most agents are just articulate text generators. With tools, they become operational. Agent Framework supports both traditional function tools and MCP-based tools, but the capability matrix is not uniform across providers. The current tools documentation shows that function tools are broadly supported, while features such as tool approval, code interpreter, file search, web search, hosted MCP tools, local MCP tools, and image generation depend on the provider and on whether you are using chat completions, responses, assistants, Foundry, Anthropic, or something else.</p>
<p>That matrix is one of the most important things to understand before committing to a provider strategy. If your design assumes file search, hosted MCP, or approval flows, you cannot pick a provider first and discover the limitations later. The framework gives you a unified programming model, not identical capability under every backend. That is a meaningful difference. Uniform API shape is not the same as uniform execution semantics.</p>
<p>MCP support is especially interesting because it gives Agent Framework a standard way to connect models to external tools and context sources. The current docs describe local MCP tool support and position it as a standardised way for agents to access external tools and services. Common servers include GitHub, filesystem, and SQLite examples. That makes MCP a strong fit when you want reusable tool surfaces and standardised external capability boundaries rather than a pile of one-off function definitions.</p>
<p>Here is the design rule I would use in production. If the operation is a business capability owned by your application, expose it as a deterministic function tool with narrow parameters, validation, and audit. If the capability comes from an external standardised tool ecosystem, MCP is attractive. If the operation has side effects that matter to the business, add approval, policy, or workflow checkpoints around it. Do not let “the model decided to do it” become your control plane.</p>
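<p>As a sketch of that rule, here is what a narrow, deterministic function tool can look like using <code>AIFunctionFactory</code> from Microsoft.Extensions.AI. The policy lookup, parameter names, and returned value are hypothetical; the point is the shape: small surface, validated input, deterministic output, with audit and approval living around the tool rather than inside the prompt.</p>
<pre><code class="language-csharp">using System.ComponentModel;
using Microsoft.Extensions.AI;

// Hypothetical business capability exposed as a tool: narrow parameters,
// explicit validation, deterministic behaviour.
[Description("Looks up the excess amount in euro for a policy by its reference.")]
static decimal GetPolicyExcess(
    [Description("Policy reference, for example POL-12345.")] string policyReference)
{
    if (string.IsNullOrWhiteSpace(policyReference) || !policyReference.StartsWith("POL-"))
    {
        throw new ArgumentException("Unrecognised policy reference format.", nameof(policyReference));
    }

    // A real implementation would call a repository here, wrapped in audit logging.
    return 250m;
}

// Wrap the method so the agent can invoke it as a tool under runtime policy.
AIFunction excessTool = AIFunctionFactory.Create(GetPolicyExcess);
</code></pre>
<p>The tool is then handed to the agent's tool list, and middleware or approval flows decide whether a given invocation is actually allowed to run.</p>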
<h2>Workflows are the feature that turns this from an agent library into an application framework</h2>
<p>Single-agent systems are useful, but the real power of Microsoft Agent Framework is in workflows. The framework’s workflow material describes graph-based execution, sequential orchestration, handoff, edges, executors, checkpoints, and human-in-the-loop patterns. The official sequential docs show agents processing work in turn and passing their full conversation history forward. The handoff docs describe a mesh-style topology where control can be transferred between agents without a central orchestrator. This is the part of the framework that matters most for serious systems.</p>
<p>That distinction between sequential and handoff is not academic. Sequential orchestration is a pipeline. You know the order in advance. It is ideal for transforms, review chains, structured enrichment, and staged analysis. Handoff is a delegation model. Control moves based on context. It is better for expert routing, triage, and conversational ownership changes. If you pick the wrong pattern, you either over-constrain the system or let it wander. Microsoft’s handoff documentation explicitly contrasts handoff with agent-as-tools, and that is a useful separation. In handoff, ownership moves. In agent-as-tools, the primary agent remains in charge.</p>
<p>The checkpoint model is particularly strong. The workflow checkpoint docs explain that checkpoints are created at the end of supersteps and capture executor state, pending messages, requests and responses, and shared state. You can then restore a run from a checkpoint or rehydrate into a new run. For long-running or approval-heavy business processes, this is far better than stuffing everything into one endless conversation thread.</p>
<p>Here is a workflow-shaped example in .NET that follows the current sequential orchestration pattern shown in the docs, but reframes it into a business scenario a .NET team might actually use.</p>
<pre><code class="language-csharp">using Azure.AI.Projects; 
using Azure.Identity; 
using Microsoft.Agents.AI; 
using Microsoft.Agents.AI.Workflows; 
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT") ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";

IChatClient chatClient = new AIProjectClient(new Uri(endpoint), new AzureCliCredential())
    .GetProjectOpenAIClient()
    .GetProjectResponsesClient()
    .AsIChatClient(deploymentName);

var intakeAgent = new ChatClientAgent(
    chatClient,
    """
    You triage incoming underwriting submissions.
    Extract the key facts and list missing information.
    """,
    "IntakeAgent");

var riskAgent = new ChatClientAgent(
    chatClient,
    """
    You assess underwriting risk.
    Classify the case as low, medium, or high risk and explain why.
    """,
    "RiskAgent");

var summaryAgent = new ChatClientAgent(
    chatClient,
    """
    Produce a final summary for a human underwriter. Use plain language.
    Include only facts present in the conversation.
    """,
    "SummaryAgent");

var workflow = AgentWorkflowBuilder.BuildSequential([intakeAgent, riskAgent, summaryAgent]);

var input = new List&lt;ChatMessage&gt;
{
    new(ChatRole.User, "Small commercial property in Cork. Prior water damage claim two years ago. Re-roofed in 2024.")
};

CheckpointManager checkpointManager = CheckpointManager.CreateInMemory();

await using var run = await InProcessExecution.RunStreamingAsync(workflow, input, checkpointManager);

await run.TrySendMessageAsync(new TurnToken(emitEvents: true));

await foreach (var evt in run.WatchStreamAsync())
{
    switch (evt)
    {
        case AgentResponseUpdateEvent update:
            Console.Write(update.Update.Text);
            break;

        case SuperStepCompletedEvent superStep:
            Console.WriteLine("\nCheckpoint captured at superstep.");
            break;

        case WorkflowOutputEvent output:
            Console.WriteLine("\nWorkflow completed.");
            Console.WriteLine(output.Data);
            break;
    }
}
</code></pre>
<p>This is where Agent Framework starts to feel like workflow infrastructure rather than prompt chaining. The run is explicit. The events are explicit. Recovery is explicit. That is what mature systems need.</p>
<h2>Hosting is protocol, not business logic</h2>
<p>The hosting story is another area where Agent Framework shows more maturity than most agent libraries. The Learn documentation separates hosting from agent behaviour and presents multiple exposure paths. In ASP.NET Core, the framework provides hosting libraries to register agents and workflows with dependency injection. The docs show AddAIAgent, AddWorkflow, in-memory session store configuration, workflow-to-agent conversion, and protocol adapters. The framework can expose agents via A2A, OpenAI-compatible endpoints, AG-UI, and Azure Functions durable hosting.</p>
<p>This is the right design. Your agent should not care whether it is called by an internal service, a browser client, another agent, or a standards-based protocol endpoint. Your hosting layer should adapt transport into the agent runtime, not the other way around. Microsoft’s own hosting material describes those libraries as protocol adapters around AIAgent, which is exactly the correct abstraction.</p>
<p>A stripped-down ASP.NET Core pattern looks like this.</p>
<pre><code class="language-csharp">using Azure.AI.Projects; 
using Azure.Identity; 
using Microsoft.Extensions.AI; 
using Microsoft.Agents.AI;

var builder = WebApplication.CreateBuilder(args);

var endpoint = builder.Configuration["AZURE_OPENAI_ENDPOINT"] ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

var deploymentName = builder.Configuration["AZURE_OPENAI_DEPLOYMENT_NAME"] ?? "gpt-4o-mini";

IChatClient chatClient = new AIProjectClient(new Uri(endpoint), new AzureCliCredential())
    .GetProjectOpenAIClient()
    .GetProjectResponsesClient()
    .AsIChatClient(deploymentName);

builder.Services.AddKeyedSingleton("chat-model", chatClient);

// Register an agent against the keyed chat client. The exact AddAIAgent
// overload varies by hosting package version, so verify it against the docs.
builder.AddAIAgent(
    "support-agent",
    instructions: "You answer questions about customer policies. Be concise and factual.");

var app = builder.Build();
</code></pre>
<p>That example mirrors the current hosting model, including keyed chat client registration and AddAIAgent. The interesting part is not the ceremony. It is what comes next. You can expose that same agent through A2A for agent-to-agent communication or through OpenAI-compatible endpoints for clients that already speak Chat Completions or Responses. The OpenAI integration docs explicitly position Chat Completions as the simpler stateless compatibility path and OpenAI-compatible hosting as a way to present your agent behind familiar APIs. The A2A integration docs position agent cards, message exchange, long-running tasks, and inter-framework interoperability as first-class concerns.</p>
<p>All of this matters because protocol interoperability is where many agent systems will live or die. Internal teams will not all standardise on the same framework. If you can expose an agent as A2A and consume another one through the same protocol, you are buying yourself architectural room to evolve.</p>
<h2>Azure Functions durable hosting is where this becomes very interesting for serverless .NET developers</h2>
<p>For teams already deep in Azure Functions, the durable hosting story is compelling. Microsoft’s Azure Functions integration docs describe durable task-based hosting with built-in HTTP endpoints, orchestration-based invocation, state persistence, and automatic scaling. The functions host can be configured with <code>ConfigureDurableAgents(options =&gt; options.AddAIAgent(agent))</code>, which turns an Agent Framework agent into a durable hosted service.</p>
<p>That is a strong fit for long-running conversations, approval workflows, batch-style agent execution, and event-driven systems where you want serverless economics but still need a durable agent runtime. It is also a natural fit for the kinds of systems .NET teams actually own, such as claims intake, underwriting triage, document analysis, support automation, and compliance review. The framework’s durable angle gives you a much cleaner story for pause and resume than trying to bolt durability onto an in-memory chat loop.</p>
<p>The design caution here is the same one I would apply to any serverless workflow. Do not make Azure Functions the place where you improvise architecture. Keep deterministic work deterministic. Keep tool calls narrow. Keep state boundaries explicit. Use checkpoints and durable execution to support business process needs, not as an excuse to blur concerns.</p>
<h2>What the framework gets right, and where you still need to be careful</h2>
<p>The biggest thing Agent Framework gets right is that it treats agent systems like software systems. You see that in sessions, middleware, checkpointing, protocol adapters, and workflow graphs. This is not a prompt toy wearing enterprise clothes. It is clearly designed for teams that need composition, hosting, recovery, and policy.</p>
<p>It also gets the provider story mostly right. By leaning on IChatClient and separating provider choice from agent and hosting patterns, Microsoft gives .NET teams a path to avoid hard lock-in. That does not remove provider differences, but it does mean your application architecture can stay more stable than your inference backend.</p>
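<p>A minimal sketch of what that composition looks like in practice, assuming the Microsoft.Extensions.AI pipeline helpers. Here <code>innerClient</code> and <code>GetProviderClient</code> are hypothetical stand-ins for whichever provider-specific <code>IChatClient</code> you construct; the decorators shown are the documented pipeline extensions.</p>
<pre><code class="language-csharp">using Microsoft.Extensions.AI;
using Microsoft.Extensions.Logging;

// innerClient stands in for the provider-specific client (Azure OpenAI,
// OpenAI, Ollama, ...). Everything below stays the same when you swap it.
IChatClient innerClient = GetProviderClient(); // hypothetical factory

using var loggerFactory = LoggerFactory.Create(b =&gt; b.AddConsole());

IChatClient pipeline = innerClient
    .AsBuilder()
    .UseFunctionInvocation()   // automatic tool-call handling
    .UseLogging(loggerFactory) // logs requests and responses around the model call
    .Build();
</code></pre>
<p>Because every decorator is itself an <code>IChatClient</code>, the agent and hosting layers never need to know which backend sits at the bottom of the stack.</p>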
<p>Where you still need to be careful is ecosystem maturity. The 1.0 announcement is real, and the core package is on NuGet as 1.0.0, but a number of adjacent packages and docs still carry prerelease markers or preview language. Workflows, durable task support, A2A hosting, and some integrations are still visibly in that transition zone. That does not make them unusable. It means you should lock versions carefully, validate examples against the specific provider stack you choose, and expect some documentation drift while the platform converges.</p>
<p>The other risk is cultural, not technical. Teams will still be tempted to use agents where they should use code. No framework can save you from that. Microsoft’s own guidance says as much. The right way to use Agent Framework is to narrow the surface area where the model is allowed to think, surround it with middleware and tools, and let workflows handle explicit process. If you do that, you get a powerful application model. If you do not, you get a more elaborate way to be unpredictable.</p>
<h2>Where I think Microsoft Agent Framework .NET 1.0 fits best</h2>
<p>I would use it when the problem has all four of these traits. The task has genuine ambiguity. The system benefits from tool use. The process needs state across turns or stages. The application needs real hosting and governance rather than a notebook demo. That includes support assistants with approvals, document triage pipelines, guided data collection, internal operations copilots, multi-step research workflows, and controlled delegation across specialist agents.</p>
<p>I would not use it for plain CRUD, deterministic workflows that can already be expressed cleanly in code, or “we want AI somewhere in the architecture” projects with no clear reasoning boundary. In those cases, ordinary .NET remains the better answer. That is not a knock on the framework. It is exactly the discipline the framework itself is asking you to keep.</p>
<img src="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/933df6f7-46e8-400b-9816-90a9ab661cf9.png" alt="" style="display:block;margin:0 auto" />

<p>Microsoft Agent Framework .NET 1.0 is the first time Microsoft’s agent story feels like it has an application architecture behind it rather than just a set of AI demos. The real value is not that you can create a chatty assistant in a few lines. You could already do that. The value is that you now have a coherent runtime model for agents, sessions, tools, middleware, workflows, checkpoints, and hosting in the same .NET-shaped world. That is what makes the release significant.</p>
<p>If you are a .NET engineer, the right way to think about this framework is not “How do I build an agent?” It is “Where does non-deterministic reasoning belong in my architecture, and how do I constrain it so the rest of the system stays reliable?” Microsoft Agent Framework 1.0 gives you a much better answer to that question than the ecosystem had a year ago.</p>
<p>REF: <a href="https://learn.microsoft.com/en-us/agent-framework/get-started/">https://learn.microsoft.com/en-us/agent-framework/get-started/</a></p>
]]></content:encoded></item><item><title><![CDATA[What Serious .NET Performance Engineering Looks Like in 2026]]></title><description><![CDATA[Last year I wrote about unlocking performance C# and .NET. This post is a follow up to that one. I am not revisiting the old argument because the fundamentals changed. I am revisiting it because the p]]></description><link>https://dotnetdigest.com/what-serious-net-performance-engineering-looks-like-in-2026</link><guid isPermaLink="true">https://dotnetdigest.com/what-serious-net-performance-engineering-looks-like-in-2026</guid><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[dotnet]]></category><category><![CDATA[software design]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[low level programming]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sun, 22 Mar 2026 20:11:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/12cb78e0-10b6-4a7d-954d-73ea461cc5ad.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Last year I wrote about <a href="https://fullstackcity.com/unlocking-performance-and-the-advantages-of-low-level-programming-in-c-net">unlocking performance in C# and .NET</a>. This post is a follow-up to that one. I am not revisiting the old argument because the fundamentals changed. I am revisiting it because the platform changed. Since .NET 10 shipped in November 2025, the runtime has moved again. The JIT sees through more abstractions, the GC story is more adaptive, Native AOT is more practical, diagnostics are stronger, and the platform now gives clearer guidance on synchronisation and hot path design. That means serious .NET performance work in 2026 looks different from what it did even a year ago.</p>
<p>The first thing to understand is that modern .NET performance engineering is not about writing clever code for its own sake. It is about using the lowest level technique that solves a real measured problem. That sounds obvious, but plenty of teams still get this wrong. They see a profiler flame, rewrite code with <code>Span&lt;T&gt;</code>, <code>stackalloc</code>, pooling, or even <code>unsafe</code>, and then discover the real bottleneck was lock contention, database shape, thread pool starvation, or a dependency call three layers down. In 2026, the runtime is good enough that you should assume less and measure more. The job is no longer to memorise folklore. The job is to know what the current runtime can already optimise, then go lower level only when the workload proves it is worth the complexity.</p>
<h2>What actually changed after .NET 10 shipped</h2>
<p>The headline change is not a single API. It is that the runtime got better at removing abstraction cost. The .NET 10 runtime includes improvements in JIT inlining, devirtualisation, stack allocation, loop inversion, code generation for struct arguments, and AVX10.2 support. The runtime team also calls out improved code layout, which matters because hot code density and branch behaviour directly affect real throughput on modern CPUs. This is the kind of improvement that changes advice. Some patterns that used to be suspicious are now cheaper than many developers still assume.</p>
<p>That means the old rule of thumb, "avoid every abstraction in hot paths," is now too blunt. A better rule is this: write clear, allocation aware code first, benchmark it on your actual target runtime, and only then decide whether the abstraction still costs enough to justify specialised code. That shift is important. The JIT in .NET 10 is better at turning straightforward code into efficient machine code than many teams give it credit for. Serious engineering in 2026 is not about competing with the runtime. It is about helping it when needed and getting out of its way when it already knows what to do.</p>
<img src="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/c96da8bd-bf8b-4557-894c-8e038b6651ea.png" alt="" style="display:block;margin:0 auto" />

<h2>Performance engineering starts with proof</h2>
<p>If you are serious about performance, the first skill is not low level coding. It is diagnosis. Microsoft’s current diagnostics guidance is mature enough now that there is little excuse for guessing. <code>dotnet-counters</code> gives you live visibility into runtime behaviour, and the built in runtime metrics surface measurements through <code>System.Diagnostics.Metrics</code> for areas like GC, JIT, exceptions, CPU, assembly loading, and memory. That changes how you should approach production tuning. You should be asking what kind of pressure you have before you touch code. Is this allocation churn, excessive GC, a queueing problem, starvation, a bad synchronisation strategy, or just slow I/O wearing a performance costume?</p>
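<p>As a sketch of what that instrumentation looks like in code, the snippet below publishes custom measurements through <code>System.Diagnostics.Metrics</code> so they appear alongside the built in runtime counters. The meter name <code>MyApp.Ingestion</code> and the instrument names are illustrative assumptions, not an established convention.</p>
<pre><code class="language-csharp">using System.Diagnostics.Metrics;

public static class IngestionMetrics
{
    // The meter and instrument names below are hypothetical examples.
    private static readonly Meter Meter = new("MyApp.Ingestion", "1.0");

    private static readonly Counter&lt;long&gt; MessagesProcessed =
        Meter.CreateCounter&lt;long&gt;("ingestion.messages");

    private static readonly Histogram&lt;double&gt; BatchDuration =
        Meter.CreateHistogram&lt;double&gt;("ingestion.batch.duration", unit: "ms");

    public static void RecordBatch(int messageCount, double elapsedMs)
    {
        MessagesProcessed.Add(messageCount);
        BatchDuration.Record(elapsedMs);
    }
}
</code></pre>
<p>Something like <code>dotnet-counters monitor --counters MyApp.Ingestion</code> then shows the numbers live, which is usually enough to separate allocation churn from queueing or I/O before you change any code.</p>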
<p>This matters because low level code has a maintenance cost. If the bottleneck is actually database latency, a slow downstream API, oversized JSON payloads, or over-serialised work, then rewriting parsing code with spans may buy you nothing. On the other hand, if the evidence shows heavy allocation churn, high GC pause frequency, a true CPU hot loop, or avoidable copying in a high throughput pipeline, then lower level techniques become justified. The sequence should always be measure, identify, change, re-measure. Not guess, optimise, and hope.</p>
<h2>Memory is still where the real wins live</h2>
<p>Most real performance wins in managed systems still come from memory behaviour. Allocation rate influences GC pressure. Object shape affects locality. Copies inflate both CPU and memory bandwidth costs. Temporary materialisation multiplies the problem because it creates short lived garbage and disrupts cache friendliness at the same time. None of that is new. What is new is that .NET 10 expands the set of scenarios where the runtime itself can use stack allocation and escape analysis more effectively. Microsoft calls out improvements to stack allocations and code generation in .NET 10, and the runtime team has been steadily widening the cases where short lived state can stay off the heap.</p>
<p>That means the practical question in 2026 is not "should I always use <code>stackalloc</code>?" It is "is this data genuinely ephemeral, small, and local enough that stack friendly handling matters?" Parsing state, temporary token buffers, framing headers, transient slices, and short lived working memory are good candidates. Long lived domain models, response graphs, workflow state, and ordinary business objects are not. The more your data is really just a temporary view over bytes or characters, the more the low level memory tools pay off. The more it is actual business state with a meaningful lifetime, the less useful those tricks usually become.</p>
<p>A good modern example is parsing over spans rather than creating throwaway strings and arrays.</p>
<pre><code class="language-csharp">using System.Buffers.Binary;

public static class MessageHeaderParser
{
    public static bool TryRead(ReadOnlySpan&lt;byte&gt; buffer, out MessageHeader header)
    {
        header = default;

        if (buffer.Length &lt; 12)
            return false;

        var version = BinaryPrimitives.ReadInt32LittleEndian(buffer[..4]);
        var messageType = BinaryPrimitives.ReadInt32LittleEndian(buffer.Slice(4, 4));
        var payloadLength = BinaryPrimitives.ReadInt32LittleEndian(buffer.Slice(8, 4));

        header = new MessageHeader(version, messageType, payloadLength);
        return true;
    }
}

public readonly record struct MessageHeader(int Version, int MessageType, int PayloadLength);
</code></pre>
<p>This is the kind of code that earns its place in a hot parser, transport layer, ingestion service, or internal protocol library. It avoids copies, avoids allocations, and expresses exactly what the machine needs to do. The same pattern would be overkill in a controller action whose latency is dominated by network and database work.</p>
<h2>The GC story is more important now</h2>
<p>One of the most important developments in recent .NET is the GC shift around Dynamic Adaptation To Application Sizes, or DATAS. Microsoft documents that DATAS is enabled by default starting in .NET 9, and the wider .NET 9 guidance notes that garbage collection now uses dynamic adaptation to application size by default instead of traditional Server GC behaviour. In plain terms, the runtime is making more adaptive decisions about memory based on actual application needs. That is not a small tweak. It changes how some services behave under load and how memory scales with long lived data.</p>
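<p>If you want to compare behaviour with and without the adaptive mode while you re-baseline a service, the switch is exposed through the usual GC configuration knobs. The sketch below is an example under stated assumptions: it uses the documented <code>GarbageCollectionAdaptationMode</code> MSBuild property (the environment variable equivalent is <code>DOTNET_GCDynamicAdaptationMode</code>) to turn DATAS off for one project during measurement.</p>
<pre><code class="language-xml">&lt;!-- Sketch: opt this project out of dynamic GC adaptation (DATAS)
     while measuring, then remove the override once you have data. --&gt;
&lt;PropertyGroup&gt;
  &lt;GarbageCollectionAdaptationMode&gt;0&lt;/GarbageCollectionAdaptationMode&gt;
&lt;/PropertyGroup&gt;
</code></pre>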
<p>The practical implication is simple. After upgrading runtimes, you need to re-measure memory behaviour instead of trusting old instincts. Some applications will see better memory efficiency immediately. Some may need tuning. Some hand-optimised code written under older GC assumptions may no longer be justified. This is a recurring theme in serious performance work. Runtime upgrades can change the cost model enough that the best optimisation is sometimes to delete old cleverness and rely on the newer platform.</p>
<p>That said, the timeless rules still apply. If your code needlessly allocates, the GC still has to clean up the mess. DATAS does not rescue wasteful design. If a hot request path repeatedly builds temporary lists, duplicates strings, creates wrapper objects, or materialises intermediate projections it never needed, you will still pay for that. Modern GC makes good applications better. It does not make sloppy memory behaviour free.</p>
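<p>A contrived sketch makes the point. Both methods below compute the same total, but the first materialises two intermediate lists and a projection on every call, while the second allocates nothing; the names are illustrative.</p>
<pre><code class="language-csharp">using System.Linq;

public static class OrderTotals
{
    // Allocates a copy of the input and an intermediate list per call.
    public static decimal TotalWasteful(int[] quantities, decimal unitPrice) =&gt;
        quantities.ToList()
                  .Select(q =&gt; q * unitPrice)
                  .ToList()
                  .Sum();

    // Single pass over the input, no intermediate collections.
    public static decimal TotalLean(int[] quantities, decimal unitPrice)
    {
        decimal total = 0;
        foreach (var q in quantities)
            total += q * unitPrice;
        return total;
    }
}
</code></pre>
<p>Whatever the GC mode, the second shape simply gives the collector less to do.</p>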
<h2>Synchronisation got more concrete</h2>
<p>There is also a more practical shift in synchronisation guidance. Starting with .NET 9 and C# 13, the <code>lock</code> statement has first class support for <code>System.Threading.Lock</code>, and Microsoft now recommends locking a dedicated <code>Lock</code> instance for best performance. This matters because contention costs are real and because many developers still treat locking as a generic language feature rather than a specific runtime choice with different performance characteristics.</p>
<p>That does not mean "use more locks." It means if you genuinely need a synchronous critical section, use the modern primitive intentionally. More importantly, it should push you to think harder about how much shared mutable state your design really requires. In high throughput systems, contention often hurts more than individual instruction cost. A service can have excellent microbenchmarks and still collapse under shared state pressure because too much work funnels through a single lock, queue, cache, or mutable structure. Serious performance engineering looks at coordination cost, not just raw CPU time.</p>
<p>Here is the kind of pattern that is reasonable in modern .NET when you do need a tight synchronous critical section.</p>
<pre><code class="language-csharp">using System.Threading;

public sealed class InMemorySequence
{
    private readonly Lock _gate = new();
    private long _value;

    public long Next()
    {
        lock (_gate)
        {
            _value++;
            return _value;
        }
    }
}
</code></pre>
<p>This is not exciting code, which is exactly why it matters. Serious performance work is often like this. It is not about showing off. It is about using the most appropriate primitive for a measured need.</p>
<h2>Channels, streaming, and back pressure</h2>
<p>The runtime and library improvements around channels and pipelines are important because many real systems are not just request response applications. They are ingestion systems, telemetry collectors, message processors, file handlers, stream parsers, document pipelines, and background dispatchers. Microsoft’s .NET 10 performance work includes channel improvements such as reduced memory use and an unbuffered channel implementation. That is exactly the kind of improvement that matters in services where the real problem is moving data through the system without blowing up memory, latency, or coordination overhead.</p>
<p>This is where low level techniques are absolutely justified. If you are building a high throughput gateway, a webhook intake service, a file processor, a log ingestion pipeline, a realtime event processor, or anything else that repeatedly handles bytes, buffers, records, and bounded work, then spans, channels, pooling, slicing, and back pressure aware design are not niche. They are the right tools. In these workloads the gains are not theoretical. Fewer copies, fewer allocations, and better flow control can directly improve throughput, reduce memory footprint, and smooth out tail latency.</p>
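<p>A minimal sketch of that style using a bounded channel looks like this. The capacity and type names are assumptions for illustration; the important detail is <code>BoundedChannelFullMode.Wait</code>, which turns a full buffer into back pressure on producers rather than unbounded memory growth.</p>
<pre><code class="language-csharp">using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class EventPipeline
{
    private readonly Channel&lt;string&gt; _channel =
        Channel.CreateBounded&lt;string&gt;(new BoundedChannelOptions(1_000)
        {
            SingleReader = true,
            // When the buffer is full, WriteAsync waits, so back pressure
            // propagates to producers instead of inflating the heap.
            FullMode = BoundedChannelFullMode.Wait
        });

    public ValueTask PublishAsync(string evt) =&gt; _channel.Writer.WriteAsync(evt);

    public void Complete() =&gt; _channel.Writer.Complete();

    public async Task&lt;int&gt; ConsumeAsync(Func&lt;string, ValueTask&gt; processAsync)
    {
        var processed = 0;
        await foreach (var evt in _channel.Reader.ReadAllAsync())
        {
            await processAsync(evt);
            processed++;
        }
        return processed;
    }
}
</code></pre>
<p>A webhook intake or log ingestion service built on this shape degrades gracefully: when consumers fall behind, publishers slow down instead of the process running out of memory.</p>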
<img src="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/3f207d73-3f73-434f-838c-331f1d4811ab.png" alt="" style="display:block;margin:0 auto" />

<h2>Native AOT is now part of the serious toolbox</h2>
<p>A few years ago, Native AOT still felt like something many teams watched from a distance. In 2026 it belongs in any real performance conversation, but with clear boundaries. Microsoft’s Native AOT guidance is explicit that it produces self contained native executables with faster startup and lower memory usage, and that the benefits are most significant for workloads with high instance counts, such as cloud infrastructure and hyperscale services. That is the key point. Native AOT is not a general badge of performance virtue. It is a concrete tradeoff that matters most when startup time, density, and footprint have operational value.</p>
<p>That makes it a strong fit for short lived workers, edge processes, command line tools, sidecars, control plane services, serverless style apps, and narrow APIs where cold start and memory per instance genuinely matter. It is a weaker fit for dynamic, reflection heavy, plugin oriented, or framework style applications that depend on runtime discovery and flexible composition. Serious engineers do not force Native AOT where it does not belong. They use it when the workload shape clearly rewards it.</p>
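<p>Opting in is deliberately simple, which makes it cheap to trial on one candidate service. A minimal project sketch looks like this; trimming and AOT warnings during publish are the signal that a dependency relies on runtime reflection the compiler cannot preserve.</p>
<pre><code class="language-xml">&lt;!-- Sketch: enable Native AOT publishing for a single service. --&gt;
&lt;PropertyGroup&gt;
  &lt;PublishAot&gt;true&lt;/PublishAot&gt;
&lt;/PropertyGroup&gt;
</code></pre>
<p>Publishing with <code>dotnet publish -c Release -r linux-x64</code> then produces a self contained native executable.</p>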
<h2>The use cases that really justify low level techniques</h2>
<p>This is the part that matters most. Low level techniques are justified when the code is hot, repeated, measurable, and structurally close to the machine. That includes serialisers, parsers, protocol handlers, ingestion services, queue dispatchers, realtime stream processors, telemetry systems, compression routines, caching internals, transport libraries, and internal platform components used at very high frequency. These are the places where a few allocations saved per operation, a few copies avoided, a tighter loop, or a more appropriate synchronisation primitive can compound into meaningful throughput and latency gains.</p>
<p>They are also justified in cloud environments where density is the business case. If a change reduces memory per instance, improves cold start, or lifts requests per core in a service that runs across many containers or functions, the savings are real. This is where Native AOT, smaller working sets, pooling, and careful buffer ownership move from technical niceties into financially meaningful engineering choices. Microsoft’s own Native AOT guidance explicitly ties the strongest benefits to high instance deployments, which is exactly why these techniques matter more in infrastructure style services than in ordinary line of business modules.</p>
<p>They are not justified because a method looks elegant in a benchmark, because an article made spans look cool, or because someone wants to say they write "systems level C#." They are usually not justified in CRUD endpoints, workflow orchestration code, standard business services, admin portals, or request handlers whose real cost is external I/O. In those places, query shape, batching, caching strategy, dependency latency, and concurrency design usually matter far more than hand tuned memory work. That is the line mature teams learn to hold.</p>
<h2>A realistic checklist for 2026</h2>
<p>A good test is to ask three questions. Is this code on a hot path? Is the current cost measurable and material? Will the lower level version stay understandable enough to maintain safely? If the answer is no to any of those, you probably should not do it.</p>
<p>That sounds conservative, but it is actually the posture that lets you move faster. Modern .NET already gives you a stronger baseline. The JIT is better. The runtime is more adaptive. The tooling is good enough to see what is happening. The serious engineer in 2026 is not the person who reaches for the sharpest technique first. It is the person who can identify the actual bottleneck, apply the right level of specialisation, and stop when the added complexity stops paying for itself.</p>
<img src="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/30e9404b-55e9-4615-9977-fbf3217d75d3.webp" alt="" style="display:block;margin:0 auto" />

<p>Low level programming still exists in C# and .NET. It matters a lot. But the reason it matters in 2026 is more nuanced than it was a year ago. The runtime is increasingly capable of removing abstraction cost for you. That raises the bar for manual optimisation. You now need a better reason to go low level, but when you do have that reason, the payoff can still be enormous.</p>
<p>That is what serious .NET performance engineering looks like in 2026. Measure first. Understand the runtime you are actually shipping on. Use lower level techniques where the workload earns them.</p>
]]></content:encoded></item><item><title><![CDATA[Designing High-Performance APIs in ASP.NET Core]]></title><description><![CDATA[When traffic increases, many .NET services begin to fail in predictable ways. Memory spikes appear during large uploads. Latency increases under moderate concurrency. Thread pools stall. CPU usage cli]]></description><link>https://dotnetdigest.com/designing-high-performance-apis-in-asp-net-core</link><guid isPermaLink="true">https://dotnetdigest.com/designing-high-performance-apis-in-asp-net-core</guid><category><![CDATA[Software Engineering]]></category><category><![CDATA[api]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[.NET]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Wed, 04 Mar 2026 20:11:52 GMT</pubDate><enclosure url="https://cdn.hashnode.com/uploads/covers/67c36038c69a4b7143c5fc49/d76739bf-baf1-47d7-8908-c9a93d5f1d43.jpg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When traffic increases, many .NET services begin to fail in predictable ways. Memory spikes appear during large uploads. Latency increases under moderate concurrency. Thread pools stall. CPU usage climbs even though the service performs very little actual work. These failures rarely come from obvious mistakes. They emerge from small architectural decisions. Buffering entire request bodies. Serialising large responses unnecessarily. Allowing hidden allocations inside middleware.</p>
<p>High-performance APIs are not created by accident. They require intentional design choices across the entire request pipeline.</p>
<p>Below we'll walk through the architecture patterns used to build high-performance APIs in ASP.NET Core: how to reduce allocations, avoid buffering, stream large payloads, optimise serialisation, and design APIs that scale without rewriting them later.</p>
<p>The examples focus on real patterns used in production systems handling large documents, streaming uploads, and high-throughput services.</p>
<h2>Understanding the ASP.NET Core Request Pipeline</h2>
<p>Before optimising anything, it is important to understand how a request moves through ASP.NET Core.</p>
<p>Every HTTP request flows through a middleware pipeline before reaching an endpoint.</p>
<img src="https://miro.medium.com/1%2A6FS9RonyKlJEgOM3ADqnkQ.png" alt="" style="display:block;margin:0 auto" />

<img src="https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/media/image5-9.png" alt="" style="display:block;margin:0 auto" />

<p>The pipeline begins inside Kestrel, the web server responsible for parsing HTTP messages and managing connections. After Kestrel processes the request, it passes the request through a chain of middleware components. Each middleware can inspect, modify, or short-circuit the request.</p>
<p>Eventually the request reaches an endpoint such as a controller action or minimal API handler.</p>
<p>The key performance insight is simple.</p>
<p>Every step in this pipeline can allocate memory, block threads, or buffer data. If those operations happen repeatedly under load, the entire API slows down.</p>
<p>Designing high-performance APIs therefore means minimising work at every stage of the pipeline.</p>
<h2>The Hidden Cost of Allocations</h2>
<p>Modern .NET applications are fast, but allocation patterns still matter. Every object allocated during a request contributes to memory pressure. Under load this leads to frequent garbage collection cycles. When GC pauses increase, latency increases.</p>
<p>Look at a simple endpoint returning a list of items.</p>
<pre><code class="language-csharp">[HttpGet]
public IActionResult GetProducts()
{
    var products = new List&lt;Product&gt;();

    for (int i = 0; i &lt; 1000; i++)
    {
        products.Add(new Product
        {
            Id = i,
            Name = $"Product {i}"
        });
    }

    return Ok(products);
}
</code></pre>
<p>This endpoint allocates:</p>
<ul>
<li><p>The list</p>
</li>
<li><p>One thousand Product objects</p>
</li>
<li><p>Several strings</p>
</li>
<li><p>JSON serialisation buffers</p>
</li>
</ul>
<p>None of these allocations appear large individually. Under heavy traffic, however, they accumulate quickly. One strategy is reducing temporary allocations.</p>
<p>For example, pooling buffers rather than allocating them repeatedly.</p>
<pre><code class="language-csharp">var buffer = ArrayPool&lt;byte&gt;.Shared.Rent(8192);

try
{
    // Rent can return a larger array than requested, so read into the
    // requested window and keep the count; only bytesRead bytes are valid.
    int bytesRead = await stream.ReadAsync(buffer.AsMemory(0, 8192));
}
finally
{
    ArrayPool&lt;byte&gt;.Shared.Return(buffer);
}
</code></pre>
<p>Memory pools significantly reduce allocation pressure when handling streaming workloads.</p>
<h2>Streaming Instead of Buffering</h2>
<p>One of the most common performance mistakes is buffering entire request bodies. By default many frameworks read incoming files completely into memory before processing them. This is convenient but extremely inefficient.</p>
<p>A better approach is streaming.</p>
<img src="https://umakantv.com/assets/images/buffer-vs-stream/buffered-data-transfer-flow-diagram.png" alt="" style="display:block;margin:0 auto" />

<img src="https://umakantv.com/assets/images/buffer-vs-stream/avg-time-comparison.png" alt="" style="display:block;margin:0 auto" />

<p>Buffering works like this.</p>
<p>Client uploads file → server reads entire file → server processes file.</p>
<p>Streaming works differently.</p>
<p>Client uploads file → server processes chunks as they arrive.</p>
<p>This approach prevents large memory spikes.</p>
<p>ASP.NET Core makes streaming straightforward.</p>
<pre><code class="language-csharp">[HttpPost("upload")]
public async Task Upload(CancellationToken stopToken)
{
    var reader = HttpContext.Request.BodyReader;

    while (true)
    {
        var result = await reader.ReadAsync(stopToken);
        var buffer = result.Buffer;

        foreach (var segment in buffer)
        {
            ProcessChunk(segment.Span);
        }

        reader.AdvanceTo(buffer.End);

        if (result.IsCompleted)
            break;
    }
}
</code></pre>
<p>This endpoint never buffers the entire request. Data flows through the system as it arrives.</p>
<p>This approach becomes critical when handling large documents, video files, or email attachments.</p>
<h2>Leveraging System.IO.Pipelines</h2>
<p>ASP.NET Core internally uses System.IO.Pipelines to achieve high throughput networking.</p>
<p>Pipelines provide a high-performance abstraction over streams that avoids unnecessary copying.</p>
<img src="https://miro.medium.com/v2/resize%3Afit%3A1400/1%2AV_tHZuLnhuIzbv1gE2-yOQ.png" alt="" style="display:block;margin:0 auto" />

<img src="https://learn.microsoft.com/en-us/dotnet/standard/io/media/pipelines/resume-pause.png" alt="" style="display:block;margin:0 auto" />

<p>Pipelines introduce two main components.</p>
<p>PipeReader consumes incoming data.</p>
<p>PipeWriter produces outgoing data.</p>
<p>Because pipelines operate on memory segments rather than copying buffers repeatedly, they significantly improve throughput.</p>
<p>A simplified example looks like this.</p>
<pre><code class="language-csharp">PipeReader reader = HttpContext.Request.BodyReader;

while (true)
{
    var result = await reader.ReadAsync();
    var buffer = result.Buffer;

    SequencePosition? position;

    do
    {
        position = buffer.PositionOf((byte)'\n');

        if (position != null)
        {
            var line = buffer.Slice(0, position.Value);
            ProcessLine(line);
            buffer = buffer.Slice(buffer.GetPosition(1, position.Value));
        }

    } while (position != null);

    reader.AdvanceTo(buffer.Start, buffer.End);

    if (result.IsCompleted)
        break;
}
</code></pre>
<p>This model allows applications to parse incoming data efficiently without copying buffers repeatedly.</p>
<h2>Optimising JSON Serialisation</h2>
<p>Serialisation often becomes a major bottleneck.</p>
<p>Large responses require converting objects into JSON, which involves allocations, reflection, and encoding operations.</p>
<p>ASP.NET Core uses System.Text.Json, which is significantly faster than older serialisers. However, performance still depends on how it is used. For example, returning large object graphs can dramatically increase serialisation cost.</p>
<p>Instead of returning domain objects directly, it is often better to return optimised DTOs.</p>
<pre><code class="language-csharp">public record ProductResponse(int Id, string Name);
</code></pre>
<p>Another technique is streaming JSON responses.</p>
<pre><code class="language-csharp">// Passing the IAsyncEnumerable&lt;Product&gt; itself lets System.Text.Json
// write a valid JSON array element by element, instead of emitting one
// detached JSON object per item or buffering the whole sequence first.
await JsonSerializer.SerializeAsync(
    Response.Body,
    repository.StreamProducts(),
    cancellationToken: stopToken);
</code></pre>
<p>Streaming responses prevents building large in-memory object graphs before serialisation.</p>
<h2>Avoiding Blocking Operations</h2>
<p>Blocking operations are another common source of performance problems. Synchronous database calls, file operations, or network requests block threads. Under load, thread pools become exhausted.</p>
<p>The correct approach is fully asynchronous APIs.</p>
<pre><code class="language-csharp">public async Task&lt;Product?&gt; GetProductAsync(int id, CancellationToken stopToken)
{
    return await db.Products
        .Where(p =&gt; p.Id == id)
        .Select(p =&gt; new Product(p.Id, p.Name))
        .FirstOrDefaultAsync(stopToken);
}
</code></pre>
<p>The key rule is simple. Never block inside the request pipeline. Even short blocking operations compound under concurrency.</p>
<h2>Designing APIs for Concurrency</h2>
<p>High-performance APIs must assume thousands of concurrent requests. Concurrency design often determines scalability more than raw CPU performance.</p>
<img src="https://archie9211.com/images/async-processing-diagram.svg" alt="" style="display:block;margin:0 auto" />

<p>Async I/O allows threads to return to the thread pool while waiting for operations such as network or disk access. This dramatically increases throughput. Without async patterns, each request holds a thread for its entire lifetime. Under load this leads to thread pool starvation.</p>
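<p>The failure mode is easiest to see side by side. In this sketch, with illustrative names, the blocking variant parks a thread pool thread for the entire wait, which is exactly what starves the pool under load.</p>
<pre><code class="language-csharp">using System.Threading.Tasks;

public static class ReportClient
{
    public static async Task&lt;int&gt; FetchCountAsync()
    {
        await Task.Delay(10); // stands in for a database or HTTP call
        return 42;
    }

    // Anti-pattern: sync over async. The calling thread is held for the
    // whole wait, so concurrency multiplies into thread pool starvation.
    public static int FetchCountBlocking() =&gt;
        FetchCountAsync().GetAwaiter().GetResult();
}
</code></pre>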
<h2>Caching Strategies</h2>
<p>Caching is another powerful performance tool. Many APIs repeatedly perform expensive operations that return identical results. Caching eliminates redundant work. Common caching layers include in-memory caching, distributed caching, and CDN caching.</p>
<p>For example, using the built-in memory cache.</p>
<pre><code class="language-csharp">if (!cache.TryGetValue("products", out List&lt;Product&gt; products))
{
    products = await repository.GetProductsAsync(stopToken);

    cache.Set("products", products,
        TimeSpan.FromMinutes(5));
}
</code></pre>
<p>Caching reduces database load and improves response latency. However it must be used carefully to avoid stale data.</p>
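<p>The stampede case deserves particular care: when a popular entry expires, every concurrent request can race to rebuild it at once. A hand-rolled sketch of single flight loading, built on a <code>ConcurrentDictionary</code> of <code>Lazy&lt;Task&lt;T&gt;&gt;</code>, shows the idea; a production version would also need expiry and eviction of failed loads.</p>
<pre><code class="language-csharp">using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class SingleFlightCache&lt;TKey, TValue&gt; where TKey : notnull
{
    private readonly ConcurrentDictionary&lt;TKey, Lazy&lt;Task&lt;TValue&gt;&gt;&gt; _entries = new();

    // Concurrent callers for the same key share one in-flight load,
    // because only the Lazy stored in the dictionary is ever awaited.
    public Task&lt;TValue&gt; GetOrAddAsync(TKey key, Func&lt;TKey, Task&lt;TValue&gt;&gt; factory)
    {
        var entry = _entries.GetOrAdd(
            key,
            k =&gt; new Lazy&lt;Task&lt;TValue&gt;&gt;(() =&gt; factory(k)));
        return entry.Value;
    }
}
</code></pre>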
<h2>Benchmarking and Observability</h2>
<p>Performance optimisation without measurement is guesswork. High-performance APIs rely heavily on benchmarking and telemetry.</p>
<img src="https://docs.oracle.com/en/solutions/oci-apm-for-microservices/img/apm-microservices-open-telemetry.png" alt="" style="display:block;margin:0 auto" />

<p>Tools such as BenchmarkDotNet, OpenTelemetry, and Application Insights allow developers to measure latency, allocations, and throughput.</p>
<p>Example benchmark setup.</p>
<pre><code class="language-csharp">[MemoryDiagnoser]
public class SerializationBenchmarks
{
    private ProductResponse product = new(1, "Test");

    [Benchmark]
    public string Serialize()
    {
        return JsonSerializer.Serialize(product);
    }
}
</code></pre>
<p>Benchmarks reveal hidden costs that are impossible to detect through intuition alone.</p>
<h2>Building APIs That Scale</h2>
<p>Designing high-performance APIs is not about micro-optimisations. It is about architecture. Streaming instead of buffering. Async instead of blocking. Efficient serialisation instead of large object graphs. When these principles are applied consistently, APIs become resilient under heavy load. Systems handling large files, high traffic, or complex workloads benefit dramatically from these techniques.</p>
<p>ASP.NET Core provides the tools required to build extremely fast services. The challenge lies in using those tools intentionally. The difference between an API that works and an API that scales often comes down to a few critical design decisions.</p>
<p>Making those decisions early prevents costly rewrites later.</p>
]]></content:encoded></item><item><title><![CDATA[Enforcing Vertical Slice Architecture in .NET]]></title><description><![CDATA[As A kind of follow up to a post I wrote the other day on FullStackCity I specifically wanted to look at Vertical Slice in an example this time,
Vertical slice architecture works because it aligns cod]]></description><link>https://dotnetdigest.com/enforcing-vertical-slice-architecture-in-net</link><guid isPermaLink="true">https://dotnetdigest.com/enforcing-vertical-slice-architecture-in-net</guid><category><![CDATA[software architecture]]></category><category><![CDATA[.NET]]></category><category><![CDATA[C#]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sun, 22 Feb 2026 11:40:45 GMT</pubDate><enclosure url="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/67c36038c69a4b7143c5fc49/3532be8d-5cb5-4d92-b663-ab19e90456dd.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As a kind of follow-up to a post I wrote the other day on <a href="https://fullstackcity.com/enforcing-architecture-in-net">FullStackCity</a>, this time I specifically wanted to look at vertical slice architecture through a worked example.</p>
<p>Vertical slice architecture works because it aligns code with outcomes, not layers. A slice is a thin, end-to-end path through the system for one capability, one use case, one verb. You ship "Create Order" as a coherent unit. You can read it, change it, test it, and delete it without spelunking through a maze of cross cutting abstractions.</p>
<p>The catch is that vertical slice architecture is easy to describe and surprisingly easy to violate. A developer wants to reuse a handler from another slice. A validator grabs a repository from a different feature because it is already there. A new shared project appears called Common that quietly becomes a dumping ground. You still have folders named Features, but the system has drifted back into accidental layering and hidden coupling.</p>
<p><a href="https://github.com/BenMorris/NetArchTest">NetArchTest</a> is again how you stop that drift. You take the rules you mean when you say "vertical slices" and you encode them as tests that execute in CI. If someone reaches across a slice boundary, the build breaks. If someone introduces a dependency you forbid, the build breaks. Vertical slice stops being a preference and becomes a contract.</p>
<p>This post shows how to do that in a real .NET solution. The example here is a SaaS scheduling example called <strong>ClinicFlow</strong>, a system that manages appointments, patients, and clinician calendars. The exact domain does not matter. The rules and patterns do.</p>
<h2>What you are enforcing in a vertical slice system</h2>
<p>Vertical slice is not feature folders alone. It is a set of constraints about dependency direction and coupling. If you cannot state those constraints clearly, you cannot enforce them.</p>
<p>In this article, a Slice is a namespace root like:</p>
<p><code>ClinicFlow.Features.Appointments.CreateAppointment</code><br /><code>ClinicFlow.Features.Appointments.CancelAppointment</code><br /><code>ClinicFlow.Features.Patients.RegisterPatient</code></p>
<p>Each slice is allowed to contain its own endpoint, request/response models, handler, validation, and persistence access strategy. Shared code is allowed, but it must be deliberately small and clearly named, not an accidental kitchen-sink.</p>
<p>The core constraints we will enforce are these:</p>
<p>A slice must not depend on other slices directly. A slice may depend on shared building blocks, but it should not call another slice’s handler, validator, or internal types.</p>
<p>The API layer must not reference infrastructure details. It should reference slices, and slices should own their own wiring and dependencies.</p>
<p>The domain model should not depend on infrastructure, and preferably should not depend on any specific slice. Domain is the stable core. Slices orchestrate.</p>
<p>Infrastructure should not leak into slice code except through explicit ports or well-known boundaries you choose to permit.</p>
<p>You can relax or tighten any of these. The point is to make them explicit, then enforce them.</p>
<img src="https://cloudmate-test.s3.us-east-1.amazonaws.com/uploads/covers/67c36038c69a4b7143c5fc49/3a31b644-e82c-445d-8744-0c31b51e5983.png" alt="" style="display:block;margin:0 auto" />

<p>The forbidden links are the ones NetArchTest will enforce.</p>
<h2>A concrete solution layout that is easy to test</h2>
<p>A workable .NET layout for this style looks like this:</p>
<p><code>ClinicFlow.Api</code></p>
<p><code>ClinicFlow.Features</code></p>
<p><code>ClinicFlow.Domain</code></p>
<p><code>ClinicFlow.Infrastructure</code></p>
<p><code>ClinicFlow.Shared</code></p>
<p><code>ClinicFlow.ArchitectureTests</code></p>
<p>You can also keep Features inside the API project if you want one deployable. NetArchTest works either way. What matters is that slices are discoverable by namespace and that the projects reflect dependency intent.</p>
<p>A typical slice namespace might look like this:</p>
<p><code>ClinicFlow.Features.Appointments.CreateAppointment</code></p>
<ul>
<li><p><code>Endpoint</code></p>
</li>
<li><p><code>Request</code></p>
</li>
<li><p><code>Response</code></p>
</li>
<li><p><code>Handler</code></p>
</li>
<li><p><code>Validator</code></p>
</li>
<li><p><code>Db</code> access, either via a port or direct EF Core access if you intentionally allow it</p>
</li>
</ul>
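<p>To make the slice shape concrete, here is a minimal sketch of what the <code>CreateAppointment</code> slice might contain. The types and the validation rule are illustrative, not taken from a real ClinicFlow codebase, and persistence is left as a comment.</p>
<pre><code class="language-csharp">namespace ClinicFlow.Features.Appointments.CreateAppointment;

public sealed record Request(Guid PatientId, DateTime Start, DateTime End);

public sealed record Response(Guid AppointmentId);

public sealed class Validator
{
    // Illustrative rule: an appointment must start before it ends.
    public bool IsValid(Request request) =&gt; request.Start &lt; request.End;
}

public sealed class Handler
{
    private readonly Validator _validator = new();

    public Response Handle(Request request)
    {
        if (!_validator.IsValid(request))
            throw new ArgumentException("Start must be before End.");

        // Persistence would happen here, via a port or direct access.
        return new Response(Guid.NewGuid());
    }
}
</code></pre>
<p>Everything the use case needs lives under one namespace, which is exactly what makes namespace based rules enforceable.</p>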
<p>You do not need MediatR for vertical slice architecture. Many teams assume you do, but it is optional, and now that MediatR is no longer free it is a cost you can simply avoid. NetArchTest does not care either way. It inspects compiled dependencies, not runtime behaviour.</p>
<h2>Set up the architecture test project</h2>
<p>Create a test project that references the assemblies you want to check.</p>
<p><code>ClinicFlow.ArchitectureTests</code> references:</p>
<ul>
<li><p><code>ClinicFlow.Api</code> (optional but useful)</p>
</li>
<li><p><code>ClinicFlow.Features</code></p>
</li>
<li><p><code>ClinicFlow.Domain</code></p>
</li>
<li><p><code>ClinicFlow.Infrastructure</code></p>
</li>
<li><p><code>ClinicFlow.Shared</code></p>
</li>
</ul>
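<p>If you are starting from scratch, the test project and its references can be scaffolded from the CLI. This assumes each project lives in a folder of the same name:</p>
<pre><code class="language-plaintext">dotnet new xunit -n ClinicFlow.ArchitectureTests
dotnet add ClinicFlow.ArchitectureTests reference ClinicFlow.Features ClinicFlow.Domain ClinicFlow.Infrastructure ClinicFlow.Shared
</code></pre>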
<p>Install the package:</p>
<pre><code class="language-plaintext">dotnet add ClinicFlow.ArchitectureTests package NetArchTest.Rules
</code></pre>
<p>Add a normal xUnit or NUnit test setup. The architecture tests run like any other test in CI.</p>
<h2>Create assembly markers so tests are stable</h2>
<p>You need reliable ways to grab the correct assemblies. Do not load by string name if you can avoid it. Add one marker type per assembly.</p>
<p>In <code>ClinicFlow.Features</code>:</p>
<pre><code class="language-csharp">namespace ClinicFlow.Features;

public sealed class FeaturesAssemblyMarker { }
</code></pre>
<p>In <code>ClinicFlow.Domain</code>:</p>
<pre><code class="language-csharp">namespace ClinicFlow.Domain;

public sealed class DomainAssemblyMarker { }
</code></pre>
<p>In <code>ClinicFlow.Infrastructure</code>:</p>
<pre><code class="language-csharp">namespace ClinicFlow.Infrastructure;

public sealed class InfrastructureAssemblyMarker { }
</code></pre>
<p>In <code>ClinicFlow.Shared</code>:</p>
<pre><code class="language-csharp">namespace ClinicFlow.Shared;

public sealed class SharedAssemblyMarker { }
</code></pre>
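<p>The API boundary tests later in this post resolve the API assembly the same way, so if <code>ClinicFlow.Api</code> is part of your checks, add a marker there too:</p>
<pre><code class="language-csharp">namespace ClinicFlow.Api;

public sealed class ApiAssemblyMarker { }
</code></pre>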
<p>This keeps the tests robust across renames.</p>
<h2>Rule 1 - Domain must not depend on Infrastructure</h2>
<p>This is not unique to vertical slice, but it is still the first rule you should enforce because it stops the most expensive kind of coupling.</p>
<pre><code class="language-csharp">using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Domain;

public sealed class CleanCoreTests
{
    [Fact]
    public void Domain_Should_Not_Depend_On_Infrastructure()
    {
        var result = Types.InAssembly(typeof(DomainAssemblyMarker).Assembly)
            .ShouldNot()
            .HaveDependencyOn("ClinicFlow.Infrastructure")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    private static string FormatFailure(TestResult result)
        =&gt; "Architecture rule failed:\n" + string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;());
}
</code></pre>
<p>This catches both direct references and indirect references that become compile-time dependencies.</p>
<h2>Rule 2 - Slices must not depend on other slices</h2>
<p>This is the rule that makes vertical slices real.</p>
<p>The tricky part is that a slice is not a language concept. You define it. Here we define a slice as:</p>
<p><code>ClinicFlow.Features.&lt;Area&gt;.&lt;UseCase&gt;</code></p>
<p>So anything under <code>ClinicFlow.Features.Appointments.CreateAppointment</code> is one slice. Anything under <code>ClinicFlow.Features.Appointments.CancelAppointment</code> is another slice.</p>
<p>We want to prevent types in one slice from referencing types in another slice.</p>
<p>NetArchTest can check dependencies for a set of types, but we need to group types by slice and then check each slice against all other slices.</p>
<p>This is one place where a little helper code makes the tests readable and scalable.</p>
<pre><code class="language-csharp">using System.Reflection;
using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Features;

public sealed class SliceIsolationTests
{
    private const string FeaturesRoot = "ClinicFlow.Features.";

    [Fact]
    public void Slices_Should_Not_Depend_On_Other_Slices()
    {
        var featuresAssembly = typeof(FeaturesAssemblyMarker).Assembly;
        var sliceRoots = GetSliceRoots(featuresAssembly);

        foreach (var sliceRoot in sliceRoots)
        {
            foreach (var otherSliceRoot in sliceRoots)
            {
                if (sliceRoot == otherSliceRoot)
                    continue;

                var result = Types.InAssembly(featuresAssembly)
                    .That()
                    .ResideInNamespaceStartingWith(sliceRoot)
                    .ShouldNot()
                    .HaveDependencyOn(otherSliceRoot)
                    .GetResult();

                Assert.True(
                    result.IsSuccessful,
                    $"Slice '{sliceRoot}' must not depend on slice '{otherSliceRoot}'.\n" +
                    string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;())
                );
            }
        }
    }

    private static IReadOnlyList&lt;string&gt; GetSliceRoots(Assembly assembly)
    {
        var namespaces = assembly.GetTypes()
            .Where(t =&gt; t.Namespace is not null &amp;&amp; t.Namespace.StartsWith(FeaturesRoot, StringComparison.Ordinal))
            .Select(t =&gt; t.Namespace!)
            .Distinct()
            .ToArray();

        var sliceRoots = new HashSet&lt;string&gt;(StringComparer.Ordinal);

        foreach (var ns in namespaces)
        {
            var root = TryGetSliceRoot(ns);
            if (root is not null)
                sliceRoots.Add(root);
        }

        return sliceRoots.OrderBy(x =&gt; x, StringComparer.Ordinal).ToArray();
    }

    private static string? TryGetSliceRoot(string @namespace)
    {
        if (!@namespace.StartsWith(FeaturesRoot, StringComparison.Ordinal))
            return null;

        var parts = @namespace.Split('.');
        if (parts.Length &lt; 4)
            return null;

        // parts[0]=ClinicFlow, [1]=Features, [2]=Area, [3]=UseCase
        return $"{parts[0]}.{parts[1]}.{parts[2]}.{parts[3]}";
    }
}
</code></pre>
<p>This test dynamically discovers slice roots by scanning namespaces in the Features assembly. It then asserts no slice has a compile time dependency on another slice root namespace.</p>
<p>If someone imports a request model from a different use case, or calls another handler directly, the test fails.</p>
<p>This is the main vertical slice enforcement win.</p>
<p>If your team sees these failures and still wants cross slice calls, at least you are arguing about an explicit rule, not vibes.</p>
<h2>Rule 3 - Only Shared is allowed as the cross slice seam</h2>
<p>Slice isolation often triggers the next question: how do slices share anything?</p>
<p>The answer is that sharing should be explicit and constrained. You choose a seam project or seam namespace and you allow dependencies on it.</p>
<p>In this example, the allowed seam is <code>ClinicFlow.Shared</code>. That project might contain:</p>
<ul>
<li><p>Result types</p>
</li>
<li><p>Error primitives</p>
</li>
<li><p>Simple contracts</p>
</li>
<li><p>Ports (interfaces) that infrastructure implements</p>
</li>
<li><p>Cross-cutting abstractions that are stable and intentionally small</p>
</li>
</ul>
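<p>As an illustration, a port in <code>ClinicFlow.Shared</code> can be as small as this. The names are hypothetical; the point is that the interface describes a capability, and only Infrastructure knows how it is implemented.</p>
<pre><code class="language-csharp">namespace ClinicFlow.Shared.Ports;

public sealed record AppointmentRecord(Guid Id, Guid PatientId, DateTime Start);

public interface IAppointmentStore
{
    Task AddAsync(AppointmentRecord record, CancellationToken ct);
}
</code></pre>
<p>Slices depend on <code>IAppointmentStore</code>, an EF Core adapter in Infrastructure implements it, and the API host wires the two together.</p>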
<p>What you do not want is slices depending on other slices through Shared accidentally, by moving random things into Shared until the tests pass. The enforcement has to push you toward good structure, not just pass.</p>
<p>So you add a rule that slices may depend on Domain and Shared, but should not depend on Infrastructure directly unless you intentionally allow it.</p>
<p>Here is the strict version:</p>
<pre><code class="language-csharp">using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Features;

public sealed class SliceDependencyDirectionTests
{
    [Fact]
    public void Features_Should_Not_Depend_On_Infrastructure()
    {
        var result = Types.InAssembly(typeof(FeaturesAssemblyMarker).Assembly)
            .ShouldNot()
            .HaveDependencyOn("ClinicFlow.Infrastructure")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    private static string FormatFailure(TestResult result)
        =&gt; "Architecture rule failed:\n" + string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;());
}
</code></pre>
<p>This forces infrastructure access to happen through ports in Shared, or through composition in the API host.</p>
<p>If you prefer "feature owns persistence" and you allow EF Core in slices, do not pretend you are enforcing something you are not. Instead, define a narrower rule that still protects you, for example: slices may depend on EF Core, but not on Infrastructure project namespaces, or not on specific infrastructure implementations. The enforcement should match your intended style.</p>
<h2>Rule 4 - API endpoints must not depend on Infrastructure</h2>
<p>Even if your slices are clean, the API host can still become a mess if endpoints or controllers pull in infrastructure services directly. You want the host to be wiring and transport, not business logic.</p>
<p>This test assumes your API project namespace starts with <code>ClinicFlow.Api</code>.</p>
<pre><code class="language-csharp">using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Api;

public sealed class ApiBoundaryTests
{
    [Fact]
    public void Api_Should_Not_Depend_On_Infrastructure()
    {
        var result = Types.InAssembly(typeof(ApiAssemblyMarker).Assembly)
            .ShouldNot()
            .HaveDependencyOn("ClinicFlow.Infrastructure")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    private static string FormatFailure(TestResult result)
        =&gt; "Architecture rule failed:\n" + string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;());
}
</code></pre>
<p>If you use minimal APIs and everything is in Program.cs, this still works because Program.cs compiles into the same assembly and will be scanned.</p>
<p>If you intentionally place composition root registrations in API that reference Infrastructure, you can scope the rule to only endpoints. For example, put endpoints into <code>ClinicFlow.Api.Endpoints</code> and test just that namespace.</p>
<h2>Rule 5 - Slices must follow your naming conventions</h2>
<p>Naming sounds cosmetic until you are debugging production at 2 a.m. Consistent naming is navigational infrastructure. It is worth enforcing.</p>
<p>If your rule is "each slice has exactly one handler named <code>Handler</code> or ending with <code>Handler</code>," enforce it.</p>
<p>For a pattern like <code>CreateAppointmentHandler</code>, you can do:</p>
<pre><code class="language-csharp">using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Features;

public sealed class NamingConventionTests
{
    [Fact]
    public void Types_Named_Handler_Should_Reside_In_Features()
    {
        var result = Types.InAssembly(typeof(FeaturesAssemblyMarker).Assembly)
            .That()
            .HaveNameEndingWith("Handler")
            .Should()
            .ResideInNamespaceContaining("Features")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    private static string FormatFailure(TestResult result)
        =&gt; "Architecture rule failed:\n" + string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;());
}
</code></pre>
<p>You can tighten this by filtering only namespaces that match your handler conventions, such as requiring handlers to live inside a slice root and not in random helper namespaces.</p>
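<p>One way to express "lives directly in a slice root" is a small predicate, shown here as a sketch that assumes the four part <code>ClinicFlow.Features.&lt;Area&gt;.&lt;UseCase&gt;</code> convention used throughout this post:</p>
<pre><code class="language-csharp">using System.Text.RegularExpressions;

public static class SliceNamespaces
{
    // Matches exactly ClinicFlow.Features.&lt;Area&gt;.&lt;UseCase&gt;, nothing deeper.
    private static readonly Regex SliceRoot =
        new(@"^ClinicFlow\.Features\.[^.]+\.[^.]+$", RegexOptions.Compiled);

    public static bool IsSliceRoot(string ns) =&gt; SliceRoot.IsMatch(ns);
}
</code></pre>
<p>With that in place, an ordinary test can assert that every type ending in <code>Handler</code> has a namespace for which <code>IsSliceRoot</code> returns true.</p>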
<h2>Rule 6 - No "Shared dumping ground" namespace patterns</h2>
<p>This is the rule most teams skip, and it is why their slice isolation becomes a game of whack-a-mole. They move types into Shared until tests pass, then Shared becomes the new monolith inside the monolith.</p>
<p>You can enforce "Shared must be thin" by banning certain patterns. For example, forbid Shared from depending on Features, and keep Shared from referencing Infrastructure.</p>
<pre><code class="language-csharp">using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Shared;

public sealed class SharedKernelTests
{
    [Fact]
    public void Shared_Should_Not_Depend_On_Features()
    {
        var result = Types.InAssembly(typeof(SharedAssemblyMarker).Assembly)
            .ShouldNot()
            .HaveDependencyOn("ClinicFlow.Features")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    [Fact]
    public void Shared_Should_Not_Depend_On_Infrastructure()
    {
        var result = Types.InAssembly(typeof(SharedAssemblyMarker).Assembly)
            .ShouldNot()
            .HaveDependencyOn("ClinicFlow.Infrastructure")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    private static string FormatFailure(TestResult result)
        =&gt; "Architecture rule failed:\n" + string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;());
}
</code></pre>
<p>This keeps the seam from becoming a back door.</p>
<h2>A realistic vertical slice example to illustrate "what not to do"</h2>
<p>Imagine you have two slices:</p>
<p><code>ClinicFlow.Features.Appointments.CreateAppointment</code></p>
<p><code>ClinicFlow.Features.Appointments.CancelAppointment</code></p>
<p>A developer wants to reuse cancellation logic in the create flow to prevent double booking and they do this:</p>
<p><code>CreateAppointmentHandler</code> references <code>CancelAppointmentHandler</code> to clear previous pending slots.</p>
<p>It compiles. It might even work. It is also a structural error because it makes slice A depend on slice B. The system now has hidden coupling between use cases. Refactoring becomes riskier because changes in CancelAppointment can break CreateAppointment.</p>
<p>The slice isolation rule above catches this immediately because <code>CreateAppointment</code> now has a dependency on the other slice root namespace. The build fails and the developer is forced into a better design.</p>
<p>What is the better design? Usually one of these: move shared logic into Domain if it is domain policy, move it into Shared if it is a stable cross cutting primitive, or duplicate small orchestration logic if it is slice specific. Duplication is not a sin in vertical slice architecture when it is small and it avoids coupling. The cost of duplication is often lower than the cost of accidental dependency networks.</p>
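<p>For example, if the double booking rule is genuinely domain policy, it can live in Domain as a pure function that both slices call. This sketch uses hypothetical names:</p>
<pre><code class="language-csharp">namespace ClinicFlow.Domain.Appointments;

public static class BookingPolicy
{
    // Two appointments conflict when their half-open time ranges overlap.
    public static bool Conflicts(
        DateTime existingStart, DateTime existingEnd,
        DateTime proposedStart, DateTime proposedEnd)
        =&gt; proposedStart &lt; existingEnd &amp;&amp; existingStart &lt; proposedEnd;
}
</code></pre>
<p>Both <code>CreateAppointment</code> and <code>CancelAppointment</code> can call this without ever referencing each other, so the slice isolation rule stays green.</p>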
<p>NetArchTest does not tell you which option to pick. It just makes the coupling visible and expensive, which is what you want.</p>
<h2>Enforcing "slice owns its endpoint" without becoming dogmatic</h2>
<p>Many vertical slice teams want a simple rule: endpoints should live inside slices, not in a central Controllers folder. This makes each slice truly end-to-end.</p>
<p>If you follow this, you can enforce that API contains no controllers at all, or that API endpoint types must reside under <code>ClinicFlow.Features</code>.</p>
<p>One practical approach is to keep only hosting and wiring in <code>ClinicFlow.Api</code>, and put endpoints in <code>ClinicFlow.Features.*.*.Endpoint</code>.</p>
<p>You can then enforce that the API assembly does not contain types with names like <code>*Controller</code> or does not contain any types in <code>ClinicFlow.Api.Controllers</code>.</p>
<pre><code class="language-csharp">using NetArchTest.Rules;
using Xunit;
using ClinicFlow.Api;

public sealed class EndpointPlacementTests
{
    [Fact]
    public void Api_Should_Not_Contain_Controllers()
    {
        var result = Types.InAssembly(typeof(ApiAssemblyMarker).Assembly)
            .ShouldNot()
            .HaveNameEndingWith("Controller")
            .GetResult();

        Assert.True(result.IsSuccessful, FormatFailure(result));
    }

    private static string FormatFailure(TestResult result)
        =&gt; "Architecture rule failed:\n" + string.Join("\n", result.FailingTypeNames ?? Array.Empty&lt;string&gt;());
}
</code></pre>
<p>If you do use controllers intentionally, skip this rule. Enforce what you actually mean, not what sounds pure.</p>
<h2>How to introduce these rules into an existing solution without chaos</h2>
<p>If your codebase already has cross-slice references, turning on strict slice isolation will fail instantly. That is fine, but you need a strategy to adopt it without stalling delivery.</p>
<p>A pragmatic approach is to start with a small set of hard rules that you do not compromise on, then expand over time.</p>
<p>Begin with the clean core rule, Domain must not depend on Infrastructure. This is usually achievable quickly and it stops the worst violations.</p>
<p>Then enforce that Shared cannot depend on Features and Infrastructure. This prevents escape hatches.</p>
<p>Then enable slice isolation gradually. You can do this by limiting the test to a subset of feature areas first, such as enforcing only within <code>ClinicFlow.Features.Appointments.*</code> until that area is clean, then expanding to other areas.</p>
<p>If you need that selective enforcement, you can filter sliceRoots before running the pairwise checks. Keep it temporary and tracked. Architecture tests should converge toward full enforcement, not become a permanent set of exceptions.</p>
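<p>That filtering can be a simple allow list applied before the pairwise loop. The helper below is a sketch; the area prefixes are whatever you have already cleaned up:</p>
<pre><code class="language-csharp">using System;
using System.Collections.Generic;
using System.Linq;

public static class SliceFilter
{
    // Keep only slice roots under the areas you currently enforce.
    public static IReadOnlyList&lt;string&gt; OnlyAreas(
        IEnumerable&lt;string&gt; sliceRoots, params string[] areaPrefixes)
        =&gt; sliceRoots
            .Where(root =&gt; areaPrefixes.Any(
                prefix =&gt; root.StartsWith(prefix, StringComparison.Ordinal)))
            .ToArray();
}
</code></pre>
<p>Call it as <code>SliceFilter.OnlyAreas(sliceRoots, "ClinicFlow.Features.Appointments.")</code> inside the test, and delete the filter once the whole assembly is clean.</p>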
<h2>What NetArchTest cannot do and how you compensate</h2>
<p>NetArchTest enforces compile time dependency structure. It does not understand runtime behaviour. It will not catch "we are coupled through a database table" or "we are coupled through message schemas" unless that coupling surfaces as code dependencies.</p>
<p>So you use NetArchTest for what it is excellent at: preventing direct type coupling across boundaries, stopping forbidden references, and enforcing structural conventions. You complement it with other guardrails: project reference restrictions, package boundaries, code review focus, and sometimes Roslyn analysers for rules that NetArchTest cannot express cleanly.</p>
<p>The net effect is that architecture becomes harder to accidentally violate than to follow. That is the goal.</p>
<p>Vertical slice architecture is not a folder structure. It is a dependency discipline. If slices can call each other freely, you have not built vertical slices, you have built a new naming scheme for a layered system and a few folders with opinions!</p>
<p>NetArchTest gives you a simple, mechanical enforcement tool. You define what a slice is in your solution, you codify what is allowed and forbidden, and you run those rules on every build. Over time, this prevents the exact kind of coupling that makes refactoring painful and delivery slow.</p>
]]></content:encoded></item><item><title><![CDATA[Hexagonal Architecture in Modern .NET]]></title><description><![CDATA[Architecture discussions often start with diagrams. They focus on shapes, layers, or folder structures. That is not where real architectural problems come from. They come from change. Modern .NET systems live in a constantly shifting environment. Dat...]]></description><link>https://dotnetdigest.com/hexagonal-architecture-in-modern-net</link><guid isPermaLink="true">https://dotnetdigest.com/hexagonal-architecture-in-modern-net</guid><category><![CDATA[Hexagonal Architecture]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[software engineer]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Mon, 16 Feb 2026 22:40:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771281471631/c6559c35-3fa8-46d1-aabb-36cef85b3601.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Architecture discussions often start with diagrams. They focus on shapes, layers, or folder structures. That is not where real architectural problems come from. They come from change. Modern .NET systems live in a constantly shifting environment. Databases evolve. Cloud infrastructure changes. New integrations appear. Performance requirements force caching and messaging layers to be introduced. Over time, systems must adapt or they become brittle.</p>
<p>Hexagonal architecture provides a way to protect the core business logic of a system from the volatility of the outside world. If your system has a well defined domain surrounded by uncertain infrastructure then this pattern could well save you hours of pain.</p>
<h2 id="heading-the-modern-net-reality">The modern .NET reality</h2>
<p>Today’s .NET applications look very different from the classic MVC layered systems of the past.</p>
<p>We now build systems using minimal APIs, dependency injection everywhere, EF Core, background workers, distributed caching, and messaging infrastructure. Many developers also adopt vertical slice architecture, where each feature is implemented as an independent unit. This evolution introduces a new risk. Modern applications interact with far more external systems than before. Without strong boundaries, business logic quickly becomes tightly coupled to infrastructure.</p>
<p>You start seeing patterns like this.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">CreateUser</span>(<span class="hljs-params">CreateUserRequest request</span>)</span>
{
    <span class="hljs-keyword">var</span> user = <span class="hljs-keyword">new</span> User(request.Email);

    _dbContext.Users.Add(user);

    <span class="hljs-keyword">await</span> _dbContext.SaveChangesAsync();

    <span class="hljs-keyword">await</span> _redisCache.SetStringAsync(<span class="hljs-string">$"user:<span class="hljs-subst">{user.Id}</span>"</span>, user.Email);
}
</code></pre>
<p>This looks harmless. But the domain logic is now directly tied to EF Core and Redis. Changing persistence or caching strategies would require modifying business logic.</p>
<p>Hexagonal architecture prevents this coupling.</p>
<h2 id="heading-the-core-principle">The core principle</h2>
<p>The central rule is simple.</p>
<p>All dependencies must point inward toward the domain.</p>
<p>The domain must never depend on infrastructure.</p>
<p>Instead of directly using concrete technologies, the domain defines ports, which are abstractions describing what it needs from the outside world.</p>
<p>Adapters implement these ports using real infrastructure.</p>
<p>This ensures the domain remains stable even as infrastructure evolves.</p>
<p><img src="https://www.happycoders.eu/wp-content/uploads/2023/01/hexagonal-architecture.v2-600x431.png" alt="Hexagonal architecture with business logic in the core (“application”), ports, adapters, and external components
" /></p>
<h2 id="heading-a-real-code-example">A real code example</h2>
<p>Consider a simple use case: creating a user.</p>
<p>In hexagonal architecture, the domain defines what it needs without referencing infrastructure.</p>
<h3 id="heading-step-1-define-a-port-in-the-domain">Step 1. Define a port in the domain</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IUserRepository</span>
{
    <span class="hljs-function">Task <span class="hljs-title">AddAsync</span>(<span class="hljs-params">User user, CancellationToken stopToken</span>)</span>;
}
</code></pre>
<p>This interface belongs to the domain. It represents a capability, not a technology.</p>
<p>The domain does not know whether this will be implemented using EF Core, a REST API, or a message queue.</p>
<h3 id="heading-step-2-implement-the-port-in-infrastructure">Step 2. Implement the port in infrastructure</h3>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">internal</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">EfUserRepository</span>(<span class="hljs-params">UserDbContext db</span>)
    : IUserRepository</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">AddAsync</span>(<span class="hljs-params">User user, CancellationToken stopToken</span>)</span>
    {
        db.Users.Add(user);
        <span class="hljs-keyword">await</span> db.SaveChangesAsync(stopToken);
    }
}
</code></pre>
<p>This adapter translates domain operations into concrete persistence logic.</p>
<p>The domain remains completely unaware of EF Core.</p>
<h3 id="heading-step-3-use-the-port-in-an-application-handler">Step 3. Use the port in an application handler</h3>
<p>This is where modern .NET patterns come in.</p>
<p>A vertical slice handler orchestrates the use case.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">internal</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">CreateUserHandler</span>(<span class="hljs-params">
    IUserRepository repository</span>)</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Handle</span>(<span class="hljs-params">CreateUserCommand command,
        CancellationToken stopToken</span>)</span>
    {
        <span class="hljs-keyword">var</span> user = User.Create(command.Email);

        <span class="hljs-keyword">await</span> repository.AddAsync(user, stopToken);
    }
}
</code></pre>
<p>The handler depends only on abstractions defined by the domain.</p>
<p>Infrastructure details are invisible.</p>
<h2 id="heading-how-minimal-apis-fit-naturally">How minimal APIs fit naturally</h2>
<p>Modern .NET minimal APIs integrate cleanly with this approach.</p>
<p>The endpoint simply delegates to the handler.</p>
<pre><code class="lang-csharp">app.MapPost(<span class="hljs-string">"/users"</span>, <span class="hljs-keyword">async</span> (
    CreateUserCommand cmd,
    CreateUserHandler handler,
    CancellationToken stopToken) =&gt;
{
    <span class="hljs-keyword">await</span> handler.Handle(cmd, stopToken);
    <span class="hljs-keyword">return</span> Results.Ok();
});
</code></pre>
<p>Notice what is missing.</p>
<p>There is no EF Core. No Redis. No infrastructure logic. The endpoint remains thin and focused on HTTP concerns.</p>
<h2 id="heading-introducing-additional-infrastructure-without-touching-the-domain">Introducing additional infrastructure without touching the domain</h2>
<p>This is where hexagonal architecture shines. Suppose performance requirements demand caching. You do not change domain logic. Instead, you introduce a new adapter.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">internal</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">CachedUserRepository</span>(<span class="hljs-params">
    IUserRepository inner,
    IDistributedCache cache</span>)
    : IUserRepository</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">AddAsync</span>(<span class="hljs-params">User user, CancellationToken stopToken</span>)</span>
    {
        <span class="hljs-keyword">await</span> inner.AddAsync(user, stopToken);

        <span class="hljs-keyword">await</span> cache.SetStringAsync(
            <span class="hljs-string">$"user:<span class="hljs-subst">{user.Id}</span>"</span>,
            user.Email,
            stopToken);
    }
}
</code></pre>
<p>The domain remains untouched. The application logic remains untouched. Only the infrastructure layer evolves.</p>
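<p>Stripped of EF Core and Redis, the composition is just nesting one implementation inside another. This self-contained toy version uses fake in-memory types with hypothetical names to show the shape; in the real system the DI container performs the same nesting at the composition root.</p>
<pre><code class="language-csharp">using System.Collections.Generic;

public interface IUserStore
{
    void Add(string id, string email);
}

public sealed class InMemoryUserStore : IUserStore
{
    public readonly Dictionary&lt;string, string&gt; Rows = new();
    public void Add(string id, string email) =&gt; Rows[id] = email;
}

// Decorator: writes through to the inner store, then primes the cache.
public sealed class CachedUserStore : IUserStore
{
    private readonly IUserStore _inner;
    public readonly Dictionary&lt;string, string&gt; Cache = new();

    public CachedUserStore(IUserStore inner) =&gt; _inner = inner;

    public void Add(string id, string email)
    {
        _inner.Add(id, email);
        Cache["user:" + id] = email;
    }
}
</code></pre>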
<h2 id="heading-how-this-works-with-vertical-slice-architecture">How this works with vertical slice architecture</h2>
<p>Vertical slice architecture and hexagonal architecture address different concerns. Vertical slice focuses on organising code around features. Hexagonal focuses on protecting the domain.</p>
<p>Together they create a powerful combination.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1771280294192/e101beab-b0e5-4338-8e6f-f9456f2592dd.png" alt class="image--center mx-auto" /></p>
<p>The handler orchestrates the use case. The domain enforces business rules. Ports define boundaries. Adapters implement infrastructure.</p>
<p>This structure scales extremely well in large systems.</p>
<h2 id="heading-in-real-enterprise-systems">In real enterprise systems</h2>
<p>In small applications, tight coupling may never cause serious problems. In long-lived enterprise systems, it always does. Systems evolve. Databases change. Caching layers are introduced. Messaging becomes necessary. Integrations multiply. Without hexagonal boundaries, each change spreads throughout the codebase. With hexagonal architecture, change remains localised to adapters.</p>
<p>This drastically reduces risk over time.</p>
<h2 id="heading-the-balance">The balance</h2>
<p>The most common way developers go too far with hexagonal architecture is by turning it into an abstraction factory instead of a protection mechanism. They start creating ports for everything, not just for the parts of the system that are truly volatile. You end up seeing interfaces wrapping simple EF Core queries, adapters that add no real value, and multiple layers of indirection that make the code harder to follow without actually improving flexibility. At that point, the architecture is no longer protecting the domain, it is just adding friction.</p>
<p>Another sign of overdoing it is when the cost of change inside the application becomes higher than the cost of change in infrastructure. For example, if adding a simple new feature requires creating an interface, an adapter, a decorator, a registration class, and several test doubles, you have crossed the line. The purpose of hexagonal architecture is to reduce the impact of external change, not to make everyday development slower or more complex. Developers also run into trouble when they apply hexagonal boundaries to unstable parts of the system. Early in a project, business rules are still evolving rapidly. If you lock those areas behind rigid abstractions too soon, you actually make the system harder to adapt. Hexagonal works best when the domain concepts are reasonably well understood and likely to remain stable over time. Protecting something that is still in flux just creates unnecessary churn.</p>
<p>The healthy balance is to be selective. Apply hexagonal boundaries where infrastructure volatility is high and domain stability is strong. Leave simple CRUD paths, reporting queries, and transient workflows more direct and pragmatic. In other words, treat hexagonal architecture like armour. You put it around the parts of the system that must endure long-term pressure, not around everything by default. It provides a simple but powerful principle for modern .NET systems. Keep business logic at the centre. Define clear boundaries using ports. Implement infrastructure at the edges using adapters. This approach allows systems to evolve safely as requirements and technologies change.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=k_GkYMd8Ouc">https://www.youtube.com/watch?v=k_GkYMd8Ouc</a></div>
]]></content:encoded></item><item><title><![CDATA[The Real Observability Story in .NET 10]]></title><description><![CDATA[For years, .NET’s observability has been described as “good, but manual”. The building blocks have existed since .NET 5 and matured significantly in .NET 6 through .NET 8. Activity, ActivitySource, DiagnosticSource, EventCounters, Meter, ILogger, and...]]></description><link>https://dotnetdigest.com/the-real-observability-story-in-net-10</link><guid isPermaLink="true">https://dotnetdigest.com/the-real-observability-story-in-net-10</guid><category><![CDATA[observability]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[.NET 10]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Fri, 30 Jan 2026 19:35:36 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769801612514/1afed61d-d194-44c1-8c06-98eae1c58e9a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For years, .NET’s observability has been described as “good, but manual”. The building blocks have existed since .NET 5 and matured significantly in .NET 6 through .NET 8. Activity, ActivitySource, DiagnosticSource, EventCounters, Meter, ILogger, and later OpenTelemetry hooks have all been there. What was missing was not capability, but coherence. You could instrument almost anything, but you had to know exactly where to hook in, what to name things, and how to correlate them.</p>
<p>.NET 10 does not introduce a single headline observability feature. Instead, it tightens the system in ways that only become obvious once you are running real production workloads. The changes are subtle, but they reduce friction, remove ambiguity, and make it much harder to accidentally build an unobservable system.</p>
<p>This post explains what has actually changed, why it matters, and how it alters the way you should think about instrumentation going forward.</p>
<h3 id="heading-from-available-to-expected">From Available to Expected</h3>
<p>Before .NET 10, observability APIs were opt-in in a practical sense. Even when libraries emitted activities or metrics, there was no strong expectation that applications would consume them consistently. Naming conventions varied. Correlation worked if you were careful. Metrics often required explicit wiring that teams postponed until after the first incident.</p>
<p>.NET 10 quietly moves observability from an optional concern to an assumed capability. Framework-level components now emit richer, more consistent telemetry by default, and the APIs push you toward OpenTelemetry-aligned patterns rather than bespoke conventions.</p>
<p>The important consequence is this: instrumentation is no longer something you add later. If you build on top of the .NET 10 defaults, you are observable unless you actively opt out.</p>
<h3 id="heading-activitysource-grows-up">ActivitySource Grows Up</h3>
<p>ActivitySource has been the backbone of distributed tracing in .NET since .NET 5, but it suffered from two long-standing issues. First, many libraries created sources without clear ownership or consistent naming. Second, there was no strong alignment with OpenTelemetry schemas, which made downstream analysis brittle.</p>
<p>.NET 10 addresses this by allowing ActivitySource to be explicitly associated with a telemetry schema. This seems minor, but it has a big effect on trace quality in multi-service systems.</p>
<p>Consider a service emitting a custom activity in .NET 8:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">readonly</span> ActivitySource Source =
    <span class="hljs-keyword">new</span>(<span class="hljs-string">"Billing.Service"</span>);

<span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> activity = Source.StartActivity(<span class="hljs-string">"Invoice.Generate"</span>);
</code></pre>
<p>This works, but downstream systems have no guarantee about attribute names, semantic meaning, or compatibility with standard dashboards.</p>
<p>In .NET 10, you can anchor this source to a known schema:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">readonly</span> ActivitySource Source =
    <span class="hljs-keyword">new</span>(<span class="hljs-keyword">new</span> ActivitySourceOptions(<span class="hljs-string">"Billing.Service"</span>)
    {
        Version = <span class="hljs-string">"1.0.0"</span>,
        TelemetrySchemaUrl = <span class="hljs-string">"https://opentelemetry.io/schemas/1.21.0"</span>
    });
</code></pre>
<p>Now the runtime, exporters, and backends have a shared understanding of what this telemetry represents. Over time, this makes cross-service analysis far more reliable, especially in heterogeneous systems where not everything is written in .NET.</p>
<p>This is one of those changes that does nothing for a single service, but pays dividends once you operate dozens.</p>
<h3 id="heading-metrics-are-no-longer-second-class">Metrics Are No Longer Second-Class</h3>
<p>Metrics in .NET have historically lagged tracing in adoption. The Meter API has existed, but framework-level metrics were sparse, inconsistent, or too low-level to be directly useful.</p>
<p>.NET 10 expands built-in metrics across ASP.NET Core, identity pipelines, and request handling in ways that reduce the need for custom instrumentation. You get clearer visibility into request lifetimes, authentication overhead, and internal pipeline behaviour without writing any code.</p>
<p>This is important because metrics answer different questions than traces. Traces tell you why something failed. Metrics tell you that something is degrading before it fails.</p>
<p>In practical terms, .NET 10 pushes teams toward a healthier balance between tracing and metrics. You can rely on the platform for baseline signals, and reserve custom meters for domain-specific measurements that actually matter.</p>
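<p>For those domain-specific measurements, the shape of the code has not changed. A minimal sketch using the built-in <code>System.Diagnostics.Metrics</code> API, with illustrative names:</p>
<pre><code class="lang-csharp">using System.Diagnostics.Metrics;

public static class BillingMetrics
{
    // One Meter per module or service; create it once and reuse it.
    private static readonly Meter BillingMeter = new("Billing.Service", "1.0.0");

    private static readonly Counter&lt;long&gt; InvoicesGenerated =
        BillingMeter.CreateCounter&lt;long&gt;("billing.invoices.generated");

    public static void RecordInvoice(string region) =&gt;
        // A low-cardinality tag keeps the metric cheap but still sliceable.
        InvoicesGenerated.Add(1, new KeyValuePair&lt;string, object?&gt;("region", region));
}
</code></pre>
<p>The OpenTelemetry SDK, or a plain <code>MeterListener</code>, can observe this counter without the domain code knowing which backend is attached.</p>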
<h3 id="heading-logs-traces-and-metrics-finally-line-up">Logs, Traces, and Metrics Finally Line Up</h3>
<p>One of the most common observability failures in .NET systems is broken correlation. Logs exist, traces exist, metrics exist, but stitching them together requires luck and institutional knowledge.</p>
<p>.NET 10 improves correlation by tightening the integration between logging scopes, activities, and meters. When an activity is active, logs written through ILogger now consistently inherit trace context without additional configuration. Metrics emitted inside an activity can be associated with the same logical operation.</p>
<p>The result is a simpler mental model. If code runs inside an activity, everything it emits belongs to that operation unless you explicitly say otherwise.</p>
<p>This may sound obvious, but in earlier versions it was easy to break accidentally, especially in asynchronous or event-driven flows.</p>
<h3 id="heading-observability-in-asynchronous-and-event-driven-systems">Observability in Asynchronous and Event-Driven Systems</h3>
<p>This is where .NET 10’s improvements matter most. Modern .NET systems rely heavily on asynchronous messaging, background processing, and serverless execution. These are precisely the areas where observability has traditionally fallen apart.</p>
<p>In .NET 10, framework components that bridge execution boundaries do a better job of preserving context. Activities flow more reliably across async boundaries. Background work triggered by ASP.NET Core or hosted services retains correlation information. This reduces the number of “orphaned” traces that start in the middle of nowhere.</p>
<p>For systems built around queues, event handlers, or orchestrators, this change alone can cut investigation time dramatically.</p>
<h3 id="heading-a-concrete-example-request-to-background-work">A Concrete Example: Request to Background Work</h3>
<p>Take a simple flow where an HTTP request enqueues background processing.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769801211265/a1e7136a-d2d9-4859-a70d-432e0a085e31.png" alt class="image--center mx-auto" /></p>
<p>In earlier versions of .NET, it was common for the trace to stop at the queue boundary unless you manually propagated context. Logs from the worker would appear disconnected, even though they were causally related.</p>
<p>With .NET 10’s improved defaults and OpenTelemetry alignment, the expectation is that this context flows naturally if you use supported integrations. The trace becomes continuous, and logs from the worker can be correlated back to the originating request.</p>
<p>This does not remove the need for good design, but it raises the baseline.</p>
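<p>When a queue sits outside the supported integrations, the manual version of that propagation is still worth knowing. A sketch with illustrative names, where the enqueued message carries the <code>ActivityContext</code> alongside its payload:</p>
<pre><code class="lang-csharp">using System.Diagnostics;

public sealed record WorkItem(string Payload, ActivityContext ParentContext);

public static class OrderQueue
{
    public static readonly ActivitySource Source = new("Orders.Processing");

    // Producer: capture the current trace context at enqueue time.
    public static WorkItem Enqueue(string payload) =&gt;
        new(payload, Activity.Current?.Context ?? default);

    // Consumer: start the worker span as a child of the originating request,
    // so the trace continues across the queue boundary.
    public static void Process(WorkItem item)
    {
        using var activity = Source.StartActivity(
            "Order.Process", ActivityKind.Consumer, item.ParentContext);
        // ... handle the payload; logs written here share the request's trace id.
    }
}
</code></pre>
<p>Note that <code>StartActivity</code> returns <code>null</code> unless a listener (such as the OpenTelemetry SDK) is sampling the source, which is why the result is consumed via <code>using var</code> rather than assumed non-null.</p>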
<h3 id="heading-blazor-maui-and-client-side-visibility">Blazor, MAUI, and Client-Side Visibility</h3>
<p>Observability has traditionally focused on servers. .NET 10 extends the same principles into client-side frameworks, particularly Blazor WebAssembly and MAUI.</p>
<p>These environments now emit richer diagnostic information through the same Activity and Meter APIs. That means client-side rendering delays, layout measurements, and lifecycle events can be observed using the same tooling you already use on the server.</p>
<p>The significance here is consistency. You no longer need a separate mental model for “frontend telemetry” and “backend telemetry”. It is all the same system, expressed at different layers.</p>
<h3 id="heading-what-has-not-changed">What Has Not Changed</h3>
<p>It is important to be clear about what .NET 10 does not do. It does not replace OpenTelemetry. It does not force a specific backend. It does not automatically make your system observable if you ignore instrumentation entirely.</p>
<p>What it does is remove excuses. The platform now provides sensible defaults, stronger conventions, and better alignment with industry standards. If your system is still opaque, it is almost certainly a design choice rather than a tooling limitation.</p>
<h3 id="heading-how-this-should-change-your-architecture">How This Should Change Your Architecture</h3>
<p>The most important takeaway is not an API detail. It is a shift in responsibility.</p>
<p>In earlier versions of .NET, observability was something platform teams bolted on. In .NET 10, it becomes part of application design. Activities define boundaries. Metrics describe behaviour. Logs explain decisions.</p>
<p>If you are building modular monoliths, this aligns perfectly with explicit boundaries. Each module owns its ActivitySource. Each boundary emits metrics that describe its health. Cross-module calls become traceable without leaking internals.</p>
<p>If you are building distributed systems, the same principles apply at a larger scale.</p>
<h3 id="heading-net-10s-observability">.NET 10’s Observability</h3>
<p>The improvements are easy to miss because they are not flashy. There is no single feature to demo. No screenshot that sells the idea. What you get instead is a platform that assumes you care about understanding your system in production. It nudges you toward better defaults, stronger conventions, and fewer foot-guns. For people who already value observability, .NET 10 reduces effort and increases signal quality. For those who have postponed it, the platform is quietly removing the reasons why.</p>
<p>That is exactly the kind of change that matters long after the release notes are forgotten.</p>
]]></content:encoded></item><item><title><![CDATA[Why .NET 10 Makes Modular Monoliths More Viable Than Microservices]]></title><description><![CDATA[Microservices were never meant to be the default starting point.
They were a response to a very specific set of problems, large teams, independent deployment requirements, organisational scaling, and systems that had already outgrown a single codebas...]]></description><link>https://dotnetdigest.com/why-net-10-makes-modular-monoliths-more-viable-than-microservices</link><guid isPermaLink="true">https://dotnetdigest.com/why-net-10-makes-modular-monoliths-more-viable-than-microservices</guid><category><![CDATA[Modular Monolith]]></category><category><![CDATA[.NET 10]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[development]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sat, 24 Jan 2026 15:03:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1769266791566/9b447fe1-c8e0-4730-bc02-a8b17e6b6bdc.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Microservices were never meant to be the default starting point.</p>
<p>They were a response to a very specific set of problems: large teams, independent deployment requirements, organisational scaling, and systems that had already outgrown a single codebase. Somewhere along the way, that nuance was lost. Microservices became a goal rather than a tool. The result is familiar: systems that are operationally complex long before they are functionally complex, teams spending more time managing infrastructure, retries, contracts, and observability than delivering business capability, and debugging that becomes an exercise in tracing distributed failures instead of understanding domain logic.</p>
<p>This isn’t because microservices are flawed. It’s because they are expensive, and that cost is paid every single day you operate them.</p>
<p>What has changed over the last few runtime releases, and becomes much harder to ignore in .NET 10, is that the alternative has become significantly stronger.</p>
<p>Not the old-style monolith. A modern modular monolith, built around explicit boundaries, asynchronous workflows, and in-process messaging. One that keeps the architectural discipline people originally reached for microservices to achieve, without paying the network tax.</p>
<p>.NET 10 materially changes the cost of doing the right thing <em>inside a process</em>.</p>
<h2 id="heading-the-cost">The Cost</h2>
<p>Architecture is ultimately about trade-offs, and trade-offs are driven by cost. Not just financial cost, but cognitive cost, operational cost, and failure cost. A network boundary is not just slower than a method call. It introduces an entirely different failure model. Once you cross a process boundary, you must assume partial failure as the default. You design for retries, idempotency, backoff, circuit breaking, message duplication, and timeouts. Even when everything is healthy, latency is measured in milliseconds rather than nanoseconds.</p>
<p>An in-process boundary, by contrast, fails fast and predictably. There is no transport, no serialisation, no handshake, no packet loss, no retry storm. Historically, developers accepted the network cost because in-process architectures tended to degrade into unstructured, tightly coupled systems.</p>
<p>That trade-off has shifted.</p>
<p>.NET 10 makes it cheap to build <em>structured</em> in-process systems. Async execution is cheaper. Coordination is cheaper. Cancellation is cheaper. Observability is cheaper. These are not headline features, but together they change what is viable.</p>
<h2 id="heading-what-net-10-changes-in-practice">What .NET 10 Changes in Practice</h2>
<p>The most important improvements in .NET 10 are not APIs you call directly. They are changes in behaviour. Async state machines allocate less and resume more efficiently. The ThreadPool reacts more intelligently under mixed workloads. Cancellation propagation is faster and more predictable. Diagnostic activity flows with less overhead. These things compound.</p>
<p>In earlier runtimes, building internal pipelines of asynchronous work could quickly put pressure on the ThreadPool, increase allocation rates, and create pathological scheduling behaviour under load. This pushed teams toward external queues, message brokers, or separate services simply to regain stability.</p>
<p>In .NET 10, that pressure is largely gone. Internal asynchronous workflows are now cheap enough to be the default rather than the exception.</p>
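<p>A sketch of the kind of internal pipeline this makes viable, using <code>System.Threading.Channels</code> from the BCL. The bounded capacity gives you backpressure without a broker; the names and capacity here are illustrative:</p>
<pre><code class="lang-csharp">using System.Threading.Channels;

var channel = Channel.CreateBounded&lt;string&gt;(new BoundedChannelOptions(capacity: 100)
{
    // Writers wait when the channel is full: built-in backpressure.
    FullMode = BoundedChannelFullMode.Wait
});

// Producer side: a request handler or module publishes work and returns.
await channel.Writer.WriteAsync("order-1");
await channel.Writer.WriteAsync("order-2");
channel.Writer.Complete();

// Consumer side: a hosted service drains the channel in the same process.
var processed = new List&lt;string&gt;();
await foreach (var item in channel.Reader.ReadAllAsync())
{
    processed.Add(item);
}
</code></pre>
<p>In earlier runtimes this style of always-async plumbing carried a real scheduling and allocation cost; in .NET 10 it is cheap enough to be the default seam between modules.</p>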
<p>This is great because <strong>modular monoliths live or die on internal messaging</strong>.</p>
<h2 id="heading-from-http-chains-to-in-process-workflows">From HTTP Chains to In-Process Workflows</h2>
<p>Take a common microservices setup. A request enters an API gateway, flows through multiple services, and each hop introduces latency and failure risk.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769264923757/0fc9bbda-41aa-4e0f-834d-42f2917c967b.png" alt class="image--center mx-auto" /></p>
<p>Every arrow here represents a network call, even if all services are deployed in the same cluster. Each hop requires retries, timeouts, and tracing just to understand what happened when something goes wrong.</p>
<p>Now compare this to a modular monolith using in-process messaging.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769264973375/e1774799-94b1-4ed5-bdd1-10bb0e557a92.png" alt class="image--center mx-auto" /></p>
<p>There is still decoupling. There are still explicit boundaries. There is still asynchronous execution. What’s missing is the network.</p>
<p>In .NET 10, this pattern scales far further than it used to.</p>
<h2 id="heading-in-process-messaging-that-holds-together">In-Process Messaging That Holds Together</h2>
<p>The key mistake that doomed so many monoliths was direct coupling. Modules reached into each other’s internals because it was easy. The solution is not to add HTTP calls. It is to add <em>structure</em>.</p>
<p>A simple example illustrates the point.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IDomainEvent</span> { }

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> record <span class="hljs-title">UserCreated</span>(<span class="hljs-params">Guid UserId</span>) : IDomainEvent</span>;
</code></pre>
<p>Modules do not call each other directly. They publish events.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">UserService</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IEventDispatcher _dispatcher;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">UserService</span>(<span class="hljs-params">IEventDispatcher dispatcher</span>)</span>
    {
        _dispatcher = dispatcher;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">CreateUserAsync</span>(<span class="hljs-params">User user, CancellationToken ct</span>)</span>
    {
        <span class="hljs-comment">// Persist user</span>
        <span class="hljs-keyword">await</span> _dispatcher.PublishAsync(<span class="hljs-keyword">new</span> UserCreated(user.Id), ct);
    }
}
</code></pre>
<p>Handlers live in other modules, completely unaware of who raised the event.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">BillingUserCreatedHandler</span> 
    : <span class="hljs-title">IEventHandler</span>&lt;<span class="hljs-title">UserCreated</span>&gt;
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> Task <span class="hljs-title">Handle</span>(<span class="hljs-params">UserCreated evt, CancellationToken ct</span>)</span>
    {
        <span class="hljs-comment">// Initialise billing account</span>
        <span class="hljs-keyword">return</span> Task.CompletedTask;
    }
}
</code></pre>
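<p>The article leaves <code>IEventDispatcher</code> and <code>IEventHandler&lt;T&gt;</code> undefined. One minimal in-process implementation, shown here as a sketch rather than a recommendation (a real system would likely resolve handlers from the DI container), could look like this:</p>
<pre><code class="lang-csharp">// IDomainEvent as defined earlier in the article.
public interface IEventHandler&lt;in TEvent&gt; where TEvent : IDomainEvent
{
    Task Handle(TEvent evt, CancellationToken ct);
}

public interface IEventDispatcher
{
    Task PublishAsync&lt;TEvent&gt;(TEvent evt, CancellationToken ct)
        where TEvent : IDomainEvent;
}

public sealed class InProcessEventDispatcher : IEventDispatcher
{
    // Event type -&gt; subscribed handlers, erased to a common delegate shape.
    private readonly Dictionary&lt;Type, List&lt;Func&lt;IDomainEvent, CancellationToken, Task&gt;&gt;&gt; _subscriptions = new();

    public void Subscribe&lt;TEvent&gt;(IEventHandler&lt;TEvent&gt; handler)
        where TEvent : IDomainEvent
    {
        if (!_subscriptions.TryGetValue(typeof(TEvent), out var list))
            _subscriptions[typeof(TEvent)] = list = new();
        list.Add((evt, ct) =&gt; handler.Handle((TEvent)evt, ct));
    }

    public async Task PublishAsync&lt;TEvent&gt;(TEvent evt, CancellationToken ct)
        where TEvent : IDomainEvent
    {
        if (_subscriptions.TryGetValue(typeof(TEvent), out var handlers))
            foreach (var handle in handlers)
                await handle(evt, ct);
    }
}
</code></pre>
<p>Handlers are awaited sequentially here; whether publishing should instead queue work, run handlers concurrently, or isolate failures is exactly the kind of policy an in-process dispatcher lets you change in one place.</p>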
<p>In .NET 10, patterns like this are cheap enough to use everywhere. The runtime no longer punishes you for doing the right thing. Cancellation flows correctly. Backpressure is manageable. Observability can be layered on using <code>Activity</code> without paying a large overhead. You end up with a system that behaves like a distributed system, but runs like a single process.</p>
<h2 id="heading-failure-semantics-are-the-real-cost-of-distribution">Failure Semantics Are the Real Cost of Distribution</h2>
<p>When you cross a network boundary, performance is not the biggest thing you give up. Predictability is.</p>
<p>An in-process call either succeeds or fails. If it fails, it fails immediately and deterministically. An exception is thrown, the stack unwinds, and the system remains in a known state. You can reason about it locally. You can test it easily. You can usually fix it by reading the code.</p>
<p>A network call does not fail like this.</p>
<p>When a request times out, you do not know whether the operation failed, partially succeeded, or completed successfully but lost its response. The caller is left in an ambiguous state, and ambiguity is poison to simple reasoning.</p>
<p>That ambiguity is why distributed systems require an entirely different set of patterns. Retries are no longer an optimisation but a necessity. Idempotency stops being a nice-to-have and becomes mandatory. Compensating actions appear, not because the domain demands them, but because the transport does. Observability shifts from helpful to essential, because without it you cannot reconstruct what actually happened.</p>
<p>This is the point where many systems quietly cross a line.</p>
<p>Once failure becomes ambiguous, every interaction must be designed as if it might run twice, or not at all, or succeed in isolation while failing in aggregate. The business logic does not change, but the mental model does. You stop writing code that describes intent, and start writing code that defensively survives uncertainty.</p>
<p>This is not free. It increases cognitive load, test complexity, and operational overhead long before it delivers any architectural benefit.</p>
<p>By contrast, in-process boundaries preserve simple failure semantics. If a module throws, the operation stops. If a transaction fails, state is rolled back. There is no need to ask whether a retry is safe, because retries are not implicit. There is no need for compensating logic, because partial success cannot leak past the boundary.</p>
<p>This difference explains more about architectural complexity than latency ever could.</p>
<p>It explains why distributed systems need sagas. It explains why message deduplication exists. It explains why debugging often involves log correlation rather than code inspection. Most importantly, it explains why introducing a network boundary too early permanently changes how the system must be written.</p>
<p>A modular monolith delays that cost.</p>
<p>It allows you to structure your system around clear boundaries and asynchronous workflows while retaining deterministic failure behaviour. You still model concurrency. You still handle cancellation. You still design for resilience. But you do so without the added burden of transport-level uncertainty.</p>
<p>This is the real advantage .NET 10 amplifies.</p>
<p>Not that it makes systems faster, but that it makes disciplined in-process design cheap enough to remain attractive. As long as your boundaries live inside a process, failure stays local, reasoning stays simple, and complexity grows with the domain rather than the infrastructure.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1769266241622/d8e98a54-5780-401f-8503-1f3704feca58.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-why-this-changes-the-architecture-decision">Why This Changes the Architecture Decision</h2>
<p>Microservices force you to pay the full distributed systems cost up front. Modular monoliths let you defer that cost until it is justified.</p>
<p>With .NET 10, the ceiling for how far you can push an in-process architecture is much higher. You can scale vertically, then horizontally, and still preserve clean module boundaries. You can deploy as a single unit without sacrificing internal autonomy. You can reason about behaviour without reconstructing a distributed trace for every bug. Most importantly, you can keep the system understandable.</p>
<p>That is the real win.</p>
<p>Microservices are still the right answer in some cases. Organisational scale, regulatory boundaries, and independent vendor ownership all justify them. But for many systems, they were adopted as a workaround for limitations that no longer exist.</p>
<p>.NET 10 doesn’t eliminate the need for microservices. It makes not needing them a far more realistic option!</p>
]]></content:encoded></item><item><title><![CDATA[Replacing switch Statements with Action Delegates in C#]]></title><description><![CDATA[Most of us start with switch statements and keep using them long after they’ve stopped being comfortable.
That’s not because switch is bad. It’s because switch quietly becomes a coordination mechanism, not a control structure. Over time it starts to ...]]></description><link>https://dotnetdigest.com/replacing-switch-statements-with-action-delegates-in-c</link><guid isPermaLink="true">https://dotnetdigest.com/replacing-switch-statements-with-action-delegates-in-c</guid><category><![CDATA[C#]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Switch case]]></category><category><![CDATA[delegate]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Mon, 12 Jan 2026 19:38:56 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768246626420/d5b7ded6-1c4d-4f0c-9822-8fac49cd0f66.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most of us start with <code>switch</code> statements and keep using them long after they’ve stopped being comfortable.</p>
<p>That’s not because <code>switch</code> is bad. It’s because <code>switch</code> quietly becomes a <em>coordination mechanism</em>, not a control structure. Over time it starts to own decisions, workflows, validation rules, side effects, and error handling. When that happens, the code still compiles, still works, and still looks reasonable in isolation. It just stops scaling.</p>
<p>Below we’ll look at a different approach: replacing <code>switch</code> statements with action delegates and function maps. Not as a stylistic preference, but as a way to make behaviour explicit, composable, and testable.</p>
<p>We’ll look at where <code>switch</code> breaks down, how delegates change the shape of your code, and how this pattern fits naturally with modern C#, CQRS, and vertical-slice architectures.</p>
<h2 id="heading-why-switch-starts-to-hurt">Why <code>switch</code> Starts to Hurt</h2>
<p>At small scale, a <code>switch</code> is harmless. You inspect a value and choose a branch.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">switch</span> (command.Type)
{
    <span class="hljs-keyword">case</span> CommandType.Create:
        HandleCreate(command);
        <span class="hljs-keyword">break</span>;

    <span class="hljs-keyword">case</span> CommandType.Update:
        HandleUpdate(command);
        <span class="hljs-keyword">break</span>;

    <span class="hljs-keyword">case</span> CommandType.Delete:
        HandleDelete(command);
        <span class="hljs-keyword">break</span>;

    <span class="hljs-keyword">default</span>:
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidOperationException();
}
</code></pre>
<p>This looks fine. The problem is not this code. The problem is what it turns into six months later.</p>
<p>The handlers grow logic. Validation leaks into the switch. Logging sneaks in. A permission check gets added for one case but not the others. Then you need async. Then you need retries. Then a new command type arrives and the only safe place to put it is inside the same switch.</p>
<p>What you now have is implicit orchestration. The switch decides <em>what</em> happens and <em>how</em> it happens, but the rules are scattered across cases.</p>
<p>At that point the switch is no longer branching. It’s coordinating.</p>
<h2 id="heading-the-structural-problem-with-switch">The Structural Problem with <code>switch</code></h2>
<p>A <code>switch</code> statement encodes behaviour in a closed structure. Every new case requires modifying the same block. That violates the Open/Closed Principle in practice, even if it doesn’t in theory.</p>
<p>More importantly, <code>switch</code> ties <strong>decision logic</strong> and <strong>execution logic</strong> together. You cannot reason about behaviour without scanning the entire statement.</p>
<p>Here’s the mental model most switches create:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768244347683/238ae638-d6a0-402d-b185-707ae3b641a3.png" alt class="image--center mx-auto" /></p>
<p>The switch is the hub. Everything depends on it.</p>
<p>Now contrast that with a delegate-based model.</p>
<h2 id="heading-introducing-action-delegates-as-behaviour-maps">Introducing Action Delegates as Behaviour Maps</h2>
<p>An <code>Action</code> or <code>Func</code> delegate lets you treat behaviour as data. Instead of asking “which branch should I execute?”, you ask “which behaviour applies to this key?”</p>
<p>The simplest version looks like this:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Dictionary&lt;CommandType, Action&lt;Command&gt;&gt; _handlers =
    <span class="hljs-keyword">new</span>()
    {
        [<span class="hljs-meta">CommandType.Create</span>] = HandleCreate,
        [<span class="hljs-meta">CommandType.Update</span>] = HandleUpdate,
        [<span class="hljs-meta">CommandType.Delete</span>] = HandleDelete
    };
</code></pre>
<p>Execution becomes trivial:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">if</span> (!_handlers.TryGetValue(command.Type, <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> handler))
{
    <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidOperationException(<span class="hljs-string">$"Unsupported command type <span class="hljs-subst">{command.Type}</span>"</span>);
}

handler(command);
</code></pre>
<p>At first glance this looks like a lateral move. The real difference appears as the code evolves.</p>
<h2 id="heading-behaviour-becomes-explicit">Behaviour Becomes Explicit</h2>
<p>Each delegate is now a first-class unit of behaviour. It can be passed around, wrapped, composed, or replaced.</p>
<p>That means validation, logging, permissions, retries, and metrics no longer need to live inside a <code>switch</code>.</p>
<p>For example, you can introduce cross-cutting concerns without touching the dispatch logic.</p>
<pre><code class="lang-csharp">_handlers[CommandType.Create] = command =&gt;
{
    logger.LogInformation(<span class="hljs-string">"Handling {Type}"</span>, command.Type);
    HandleCreate(command);
};
</code></pre>
<p>Or with a helper:</p>
<pre><code class="lang-csharp"><span class="hljs-function">Action&lt;Command&gt; <span class="hljs-title">WithLogging</span>(<span class="hljs-params">Action&lt;Command&gt; inner</span>)</span>
{
    <span class="hljs-keyword">return</span> command =&gt;
    {
        logger.LogInformation(<span class="hljs-string">"Handling {Type}"</span>, command.Type);
        inner(command);
    };
}
</code></pre>
<p>You cannot do this cleanly with a <code>switch</code> without duplicating code or introducing flags and conditionals.</p>
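<p>That composability is the real payoff. Because each decorator takes a handler and returns a handler, cross-cutting concerns stack at registration time while the dispatch loop stays untouched. A sketch with hypothetical wrappers (the names and the commented registration line are illustrative, not from the earlier examples):</p>
<pre><code class="lang-csharp">public static class Handlers
{
    // Each decorator wraps an Action&lt;T&gt; and returns a new Action&lt;T&gt;.
    public static Action&lt;T&gt; WithValidation&lt;T&gt;(Action&lt;T&gt; inner) where T : class =&gt;
        item =&gt;
        {
            if (item is null) throw new ArgumentNullException(nameof(item));
            inner(item);
        };

    public static Action&lt;T&gt; WithTiming&lt;T&gt;(Action&lt;T&gt; inner) where T : class =&gt;
        item =&gt;
        {
            var sw = System.Diagnostics.Stopwatch.StartNew();
            inner(item);
            Console.WriteLine($"handled in {sw.ElapsedMilliseconds} ms");
        };
}

// Composition reads inside-out: validation runs first, timing wraps everything.
// _handlers[CommandType.Create] = Handlers.WithTiming(Handlers.WithValidation(HandleCreate));
</code></pre>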
<h2 id="heading-from-branching-to-dispatching">From Branching to Dispatching</h2>
<p>Really, this is not a refactor. It’s a shift in mindset.</p>
<p>A <code>switch</code> is branching logic. A delegate map is dispatch logic.</p>
<p>That difference is important when your system grows.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768245219937/9292af85-596b-45ad-a811-a36e8af1273f.png" alt class="image--center mx-auto" /></p>
<p>The dispatcher doesn’t care what handlers do. It only cares that a handler exists.</p>
<p>This separation is subtle but powerful. The dispatcher becomes stable. Handlers evolve independently.</p>
<h2 id="heading-async-and-error-handling-stop-being-special">Async and Error Handling Stop Being Special</h2>
<p>One of the first pain points with large <code>switch</code> statements is async behaviour.</p>
<p>You end up with something like this:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">switch</span> (command.Type)
{
    <span class="hljs-keyword">case</span> CommandType.Create:
        <span class="hljs-keyword">await</span> HandleCreateAsync(command);
        <span class="hljs-keyword">break</span>;

    <span class="hljs-keyword">case</span> CommandType.Update:
        <span class="hljs-keyword">await</span> HandleUpdateAsync(command);
        <span class="hljs-keyword">break</span>;

    <span class="hljs-keyword">case</span> CommandType.Delete:
        HandleDelete(command);
        <span class="hljs-keyword">break</span>;
}
</code></pre>
<p>Now half the cases are async, half are not, and the switch controls execution semantics.</p>
<p>With delegates, everything normalises naturally.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Dictionary&lt;CommandType, Func&lt;Command, Task&gt;&gt; _handlers =
    <span class="hljs-keyword">new</span>()
    {
        [<span class="hljs-meta">CommandType.Create</span>] = HandleCreateAsync,
        [<span class="hljs-meta">CommandType.Update</span>] = HandleUpdateAsync,
        [<span class="hljs-meta">CommandType.Delete</span>] = command =&gt;
        {
            HandleDelete(command);
            <span class="hljs-keyword">return</span> Task.CompletedTask;
        }
    };
</code></pre>
<p>Dispatching becomes uniform.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> _handlers[command.Type](command);
</code></pre>
<p>Error handling becomes composable instead of structural.</p>
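<p>As a sketch of that idea, a decorator in the same shape as <code>WithLogging</code> can centralise error policy for the async handler map (the logging call and rethrow policy here are illustrative choices, not part of the original example):</p>

```csharp
using System;
using System.Threading.Tasks;

enum CommandType { Create, Update, Delete }
record Command(CommandType Type);

static class ErrorHandling
{
    // Wraps any async handler with uniform error handling. The policy lives
    // here once instead of being repeated inside every case of a switch.
    public static Func<Command, Task> WithErrorHandling(Func<Command, Task> inner) =>
        async command =>
        {
            try
            {
                await inner(command);
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine($"Handler for {command.Type} failed: {ex.Message}");
                throw; // rethrow so callers still observe the failure
            }
        };
}
```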
<h2 id="heading-validation-as-behaviour">Validation as Behaviour</h2>
<p>Validation inside a switch usually looks like conditional noise.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">case</span> CommandType.Create:
    <span class="hljs-keyword">if</span> (<span class="hljs-keyword">string</span>.IsNullOrEmpty(command.Name))
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> ValidationException();

    HandleCreate(command);
    <span class="hljs-keyword">break</span>;
</code></pre>
<p>That ties validation to control flow.</p>
<p>With delegates, validation becomes part of the behaviour itself.</p>
<pre><code class="lang-csharp">_handlers[CommandType.Create] =
    command =&gt;
    {
        ValidateCreate(command);
        HandleCreate(command);
    };
</code></pre>
<p>Even better, you can <em>decorate</em> behaviour.</p>
<pre><code class="lang-csharp"><span class="hljs-function">Action&lt;Command&gt; <span class="hljs-title">WithValidation</span>(<span class="hljs-params">
    Action&lt;Command&gt; inner,
    Action&lt;Command&gt; validate</span>)</span>
{
    <span class="hljs-keyword">return</span> command =&gt;
    {
        validate(command);
        inner(command);
    };
}
</code></pre>
<p>Now composition is explicit and reusable.</p>
<h2 id="heading-a-cqrs-friendly-pattern">A CQRS-Friendly Pattern</h2>
<p>If you’re using CQRS or vertical slices, this approach fits naturally.</p>
<p>Instead of a central <code>switch</code> on command type, each command registers its own handler.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">ICommandHandler</span>
{
    CommandType Type { <span class="hljs-keyword">get</span>; }
    <span class="hljs-function">Task <span class="hljs-title">Handle</span>(<span class="hljs-params">Command command</span>)</span>;
}
</code></pre>
<p>Registration becomes data-driven.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> handlers = handlerInstances
    .ToDictionary(h =&gt; h.Type, h =&gt; h.Handle);
</code></pre>
<p>Dispatch becomes trivial.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> handlers[command.Type](command);
</code></pre>
<h2 id="heading-testing-improves-without-trying">Testing Improves Without Trying</h2>
<p>Switch-heavy code is awkward to test because behaviour is entangled.</p>
<p>Delegate-based code isolates behaviour by default.</p>
<p>You can test a handler in isolation. You can test the dispatcher with a fake delegate. You can inject alternative behaviours for edge cases.</p>
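<p>For instance, testing the dispatcher needs nothing more than a dictionary and a recording delegate (a sketch; the names here are made up for illustration):</p>

```csharp
using System;
using System.Collections.Generic;

enum CommandType { Create, Update, Delete }
record Command(CommandType Type);

static class DispatcherTest
{
    public static int Run()
    {
        var calls = new List<CommandType>();

        // Fake delegate: records the call instead of doing real work.
        var handlers = new Dictionary<CommandType, Action<Command>>
        {
            [CommandType.Create] = c => calls.Add(c.Type)
        };

        if (!handlers.TryGetValue(CommandType.Create, out var handler))
            throw new InvalidOperationException("Unsupported command type");

        handler(new Command(CommandType.Create));
        return calls.Count;
    }
}
```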
<h2 id="heading-when-switch-is-still-fine">When <code>switch</code> Is Still Fine</h2>
<p>This is not an anti-switch campaign!</p>
<p>A <code>switch</code> is still fine when:</p>
<ul>
<li><p>The logic is trivial.</p>
</li>
<li><p>The cases are truly symmetric.</p>
</li>
<li><p>The behaviour will not grow.</p>
</li>
<li><p>There are no cross-cutting concerns.</p>
</li>
</ul>
<p>Configuration parsing, small enums, formatting decisions. These are legitimate uses.</p>
<p>The moment behaviour grows, or starts to differ meaningfully between cases, delegates win.</p>
<h2 id="heading-a-migration-strategy-that-doesnt-hurt">A Migration Strategy That Doesn’t Hurt</h2>
<p>You don’t need a big rewrite.</p>
<p>A practical approach is <strong>to extract the switch into a delegate map,</strong> then evolve from there.</p>
<p>Start here:</p>
<pre><code class="lang-csharp">Action&lt;Command&gt; handler = command.Type <span class="hljs-keyword">switch</span>
{
    CommandType.Create =&gt; HandleCreate,
    CommandType.Update =&gt; HandleUpdate,
    CommandType.Delete =&gt; HandleDelete,
    _ =&gt; <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidOperationException()
};

handler(command);
</code></pre>
<p>Then lift it into a dictionary when it starts to grow.</p>
<p>This lets you keep modern switch expressions while escaping their limitations.</p>
<p>Replacing <code>switch</code> statements with action delegates is perfect for making behaviour explicit, reducing coordination points, and letting systems grow without central bottlenecks.</p>
<p>You end up with code that is:</p>
<ul>
<li><p>Easier to extend without modification</p>
</li>
<li><p>Easier to reason about in isolation</p>
</li>
<li><p>Easier to test without scaffolding</p>
</li>
<li><p>Easier to compose with cross-cutting concerns</p>
</li>
</ul>
<p>Most importantly, you stop asking “what does this switch do?” and start asking “what behaviours exist in this system?”</p>
<p>That shift changes how you design code.</p>
]]></content:encoded></item><item><title><![CDATA[The Hidden Contract Between ASP.NET Core and Kestrel]]></title><description><![CDATA[Most ASP.NET Core developers think about HTTP requests in terms of controllers, minimal APIs, or middleware. Very few think about Kestrel.
When production issues show up a, the root cause often lives below the abstraction boundary. Not in your endpoi...]]></description><link>https://dotnetdigest.com/the-hidden-contract-between-aspnet-core-and-kestrel</link><guid isPermaLink="true">https://dotnetdigest.com/the-hidden-contract-between-aspnet-core-and-kestrel</guid><category><![CDATA[kestrel]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[asp.net core]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sat, 10 Jan 2026 10:04:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1768039391425/dfc4d85a-047a-4ce8-a8d9-635553353978.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most <a target="_blank" href="http://ASP.NET">ASP.NET</a> Core developers think about HTTP requests in terms of controllers, minimal APIs, or middleware. Very few think about Kestrel.</p>
<p>When production issues show up, the root cause often lives below the abstraction boundary. Not in your endpoint code, but in the contract between <a target="_blank" href="http://ASP.NET">ASP.NET</a> Core and Kestrel.</p>
<p>This post is about that contract. The parts you never explicitly agreed to, but rely on every day.</p>
<h2 id="heading-kestrel-is-not-just-a-web-server">Kestrel Is Not “Just a Web Server”</h2>
<p>Kestrel is not a thin wrapper around sockets. It is an async, backpressure-aware, streaming HTTP engine built on top of <a target="_blank" href="http://System.IO"><code>System.IO</code></a><code>.Pipelines</code>.</p>
<p><a target="_blank" href="http://ASP.NET">ASP.NET</a> Core sits <em>on top</em> of Kestrel. It does not own the network. It consumes a stream of bytes that Kestrel controls.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768037314390/f98608cf-cb2d-43ba-9987-852ab3e20601.png" alt class="image--center mx-auto" /></p>
<p>Once you see the layers clearly, a lot of “mystery behaviour” stops being mysterious.</p>
<h2 id="heading-the-request-body-is-a-stream-not-a-value">The Request Body Is a Stream, Not a Value</h2>
<p>One of the most important details many developers miss is this:</p>
<blockquote>
<p>The request body is not a byte array. It is a stream backed by Kestrel’s pipeline.</p>
</blockquote>
<p>When you access <code>HttpRequest.Body</code>, you are not reading from memory you own. You are reading from a pipeline that Kestrel is filling from the network.</p>
<p>If you do not read it, it does not magically disappear.</p>
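<p>A hedged sketch makes the streaming nature visible. In ASP.NET Core the stream would be <code>HttpRequest.Body</code>; the helper below works over any <code>Stream</code>, and the 8 KB buffer size is an arbitrary choice:</p>

```csharp
using System.IO;
using System.Threading.Tasks;

static class BodyReading
{
    // Sketch of consuming a request body incrementally. Any Stream works
    // here, so the loop can be exercised with a MemoryStream in a test.
    public static async Task<long> DrainAsync(Stream body)
    {
        var buffer = new byte[8192]; // arbitrary chunk size
        long total = 0;
        int read;

        // Each ReadAsync returns whatever bytes are currently available;
        // with Kestrel the stream is backed by the network pipeline, so
        // chunks arrive as the client sends them, not all at once.
        while ((read = await body.ReadAsync(buffer)) > 0)
            total += read;

        return total;
    }
}
```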
<h2 id="heading-what-happens-if-you-dont-read-the-request-body">What Happens If You Don’t Read the Request Body</h2>
<p>Look at an endpoint that exits early.</p>
<pre><code class="lang-csharp">app.MapPost(<span class="hljs-string">"/upload"</span>, <span class="hljs-keyword">async</span> (HttpContext context) =&gt;
{
    <span class="hljs-keyword">if</span> (!context.Request.Headers.ContainsKey(<span class="hljs-string">"X-Valid"</span>))
        <span class="hljs-keyword">return</span> Results.BadRequest();

    <span class="hljs-comment">// the body is never read on the early-exit path above</span>
    <span class="hljs-keyword">return</span> Results.Ok();
});
</code></pre>
<p>This looks harmless. In reality, you have just violated the contract.</p>
<p>Kestrel is still receiving bytes from the client. Those bytes are still being buffered. Until the request completes, Kestrel cannot safely reuse that connection.</p>
<p>Under load, this can lead to:</p>
<ul>
<li><p>socket buffers filling up</p>
</li>
<li><p>memory pressure inside Kestrel</p>
</li>
<li><p>connection starvation</p>
</li>
<li><p>slow clients affecting unrelated requests</p>
</li>
</ul>
<p>This is not theoretical. It shows up in production as “random” slowdowns.</p>
<h2 id="heading-backpressure-is-real-and-youre-part-of-it">Backpressure Is Real (And You’re Part of It)</h2>
<p>Backpressure in Kestrel is not an abstract concept, and it is not something that happens entirely below your code. Kestrel actively regulates how fast data flows from the network into memory, and that regulation depends directly on how your application consumes the request body. The server can only make forward progress when your code reads data from the pipeline.</p>
<p>If an endpoint reads the request body slowly, Kestrel has to slow down how much data it accepts from the client. If the endpoint blocks while reading, the thread pool becomes involved and progress slows even further. If the endpoint never reads the body at all, Kestrel is left buffering data it cannot safely release, and backpressure builds immediately.</p>
<p>In all of these cases, the system behaves exactly as designed, but the impact is often misunderstood. The slowdown is not confined to the single endpoint that caused it. Because connections, buffers, and threads are shared, backpressure introduced by one piece of code can ripple outward and affect unrelated requests. This is why consuming request bodies correctly is not just a local concern, but a system-wide responsibility.</p>
<p>If your endpoint:</p>
<ul>
<li><p>reads the body slowly</p>
</li>
<li><p>blocks while reading</p>
</li>
<li><p>never reads the body at all</p>
</li>
</ul>
<p>then Kestrel cannot make progress.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768037512461/90c2d494-d2f2-4044-9610-8f09783a370a.png" alt class="image--center mx-auto" /></p>
<p>This is why one slow or misbehaving endpoint can degrade overall throughput.</p>
<h2 id="heading-why-slow-clients-affect-fast-ones">Why Slow Clients Affect Fast Ones</h2>
<p>HTTP/1.1 connections are reused. Even with HTTP/2, streams still share underlying resources.</p>
<p>If a client sends data slowly and your code reads it synchronously or inefficiently, the pipeline backs up. Kestrel’s buffers grow. Thread pool work piles up.</p>
<p>From the outside, it looks like unrelated requests are getting slower.</p>
<p>From the inside, the system is doing exactly what it was designed to do.</p>
<h2 id="heading-the-body-lifetime-contract">The Body Lifetime Contract</h2>
<p>There is an implicit rule that is rarely stated clearly:</p>
<blockquote>
<p>If <a target="_blank" href="http://ASP.NET">ASP.NET</a> Core hands you a request body, you are expected to either consume it or explicitly discard it.</p>
</blockquote>
<p>Discarding is not automatic.</p>
<p>If you return early, you should drain the body.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> context.Request.Body.CopyToAsync(Stream.Null);
</code></pre>
<p>This feels unnecessary until you see what happens under load without it.</p>
<h2 id="heading-reading-the-body-changes-scheduling">Reading the Body Changes Scheduling</h2>
<p>Reading from <code>Request.Body</code> is not just a logical operation, it is a scheduling decision. The moment you start consuming the body you are interacting directly with Kestrel’s IO pipeline, not with an in-memory buffer that already exists. That distinction is important because the read is tied to real network IO and real completion signals from the operating system. When you <code>await</code> a body read, Kestrel is free to yield the current thread while it waits for data to arrive. The thread is returned to the pool, other work can run, and nothing is blocked while the socket waits. When the OS signals that more data is available, the read completes and the continuation is scheduled back onto the thread pool. From the outside this looks simple, but internally it is carefully balanced to keep throughput high under load.</p>
<p>If, instead, you block while reading the body, you break that balance. The thread remains occupied while waiting on IO that cannot complete any faster. Under low load this often goes unnoticed. Under sustained load it leads to thread pool starvation, increased latency, and cascading slowdowns in unrelated requests.</p>
<p>This is one of the main ways teams end up with “async” code that still behaves like synchronous code under pressure. The code compiles, the tests pass, and everything looks fine until the system is forced to operate at scale. At that point, the difference between awaiting IO and blocking on it becomes visible, and the cost is paid by the entire process, not just the endpoint that caused it.</p>
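<p>The difference is easy to see side by side (a sketch; both helpers read from any <code>Stream</code>, but only the second keeps the thread pool healthy under load):</p>

```csharp
using System.IO;
using System.Threading.Tasks;

static class ReadStyles
{
    // Anti-pattern: sync-over-async. The calling thread is parked until the
    // read completes, which under sustained load starves the thread pool.
    public static int ReadBlocking(Stream body, byte[] buffer) =>
        body.ReadAsync(buffer, 0, buffer.Length).GetAwaiter().GetResult();

    // Correct: awaiting frees the thread while the socket waits for data,
    // and the continuation is rescheduled when the OS signals completion.
    public static async Task<int> ReadProperlyAsync(Stream body, byte[] buffer) =>
        await body.ReadAsync(buffer, 0, buffer.Length);
}
```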
<h2 id="heading-kestrel-does-not-buffer-infinite-data">Kestrel Does Not Buffer Infinite Data</h2>
<p>Kestrel does not buffer data indefinitely. It enforces internal limits to protect the process and the machine it is running on. Those limits are usually generous enough that you never notice them during development or light testing, which is why many teams are surprised when they are reached in production. When those limits are hit, Kestrel’s behaviour changes in ways that can be difficult to diagnose from application code alone. Requests may appear to stall partway through processing. Connections can be closed earlier than expected. Clients may see resets or timeouts without any obvious error being logged in the application itself. From the perspective of your controller or endpoint, everything looks normal, because the failure is happening below that abstraction boundary.</p>
<p>This is one of the reasons understanding Kestrel’s role becomes more important than understanding middleware order as systems grow. Middleware only governs how requests are handled once they are flowing. Kestrel governs whether those requests can continue flowing at all. When pressure builds at the server and socket level, no amount of tidy endpoint code can compensate for ignoring how data is buffered, consumed, and released underneath.</p>
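<p>Those limits are configurable, and making them explicit is usually better than discovering the defaults under load. A minimal sketch, with illustrative values rather than recommendations (<code>builder</code> is the usual <code>WebApplicationBuilder</code>):</p>

```csharp
// Program.cs sketch: making Kestrel's protective limits explicit instead of
// relying on defaults. The values are illustrative, not recommendations.
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    // Cap request body size so one client cannot force unbounded buffering.
    options.Limits.MaxRequestBodySize = 10 * 1024 * 1024; // 10 MB

    // Disconnect clients that trickle the body too slowly (these happen to
    // be Kestrel's defaults, shown here so they are visible and tunable).
    options.Limits.MinRequestBodyDataRate =
        new MinDataRate(bytesPerSecond: 240, gracePeriod: TimeSpan.FromSeconds(5));

    options.Limits.MaxConcurrentConnections = 1000;
});
```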
<h2 id="heading-why-this-is-hard-to-debug">Why This Is Hard to Debug</h2>
<p>These problems are hard to debug because they almost never appear in local testing. Local clients are fast, payloads are usually small, and connections tend to be short-lived. Under those conditions, the system rarely experiences enough pressure for Kestrel’s internal behaviour to matter. Everything looks healthy, and any inefficiencies are effectively masked.</p>
<p>Production environments are very different. Clients vary widely in behaviour and quality. Payload sizes increase. Connections are reused far more aggressively. Network latency becomes uneven and unpredictable. These factors combine to push the server into states that simply never occur on a developer machine.</p>
<p>It is only under these real conditions that the abstraction starts to leak. The same code that looked perfectly fine in development can suddenly exhibit stalls, timeouts, or cascading slowdowns, not because the application logic changed, but because the underlying assumptions about IO and buffering no longer hold.</p>
<hr />
<h2 id="heading-a-better-mental-model">A Better Mental Model</h2>
<p>Stop thinking of <a target="_blank" href="http://ASP.NET">ASP.NET</a> Core as “handling requests”.</p>
<p>Start thinking of it as <strong>consuming a stream that Kestrel owns</strong>.</p>
<p>Your responsibility is to:</p>
<ul>
<li><p>read it correctly</p>
</li>
<li><p>read it promptly</p>
</li>
<li><p>or explicitly discard it</p>
</li>
</ul>
<p>Everything else flows from that.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1768038824353/039a536f-1814-4530-95e3-16df88066232.png" alt class="image--center mx-auto" /></p>
<p>Skip the middle step, and the system pays the price.</p>
<h2 id="heading-why-this-is-important-for-senior-engineers">Why This is Important for Senior Engineers</h2>
<p>These issues don’t show up in tutorials. By the time they surface, they are expensive. Understanding the contract between <a target="_blank" href="http://ASP.NET">ASP.NET</a> Core and Kestrel gives you a lever most teams don’t even know exists.</p>
<p><a target="_blank" href="http://ASP.NET">ASP.NET</a> Core is an excellent framework. Kestrel is an excellent server. But neither can protect you from ignoring the rules of streaming IO. Once you internalise that the request body is <em>not yours</em> until you consume it, a lot of production behaviour suddenly makes sense.</p>
]]></content:encoded></item><item><title><![CDATA[High-Performance Networking with .NET 10 - MsQuic, Sockets, and the New I/O Reality]]></title><description><![CDATA[If you want to write a .NET networking post today, the interesting work is in latency control, backpressure, memory reuse, and what happens when you push tens of thousands of concurrent connections through a real service. .NET 10 is a good moment to ...]]></description><link>https://dotnetdigest.com/high-performance-networking-with-net-10-msquic-sockets-and-the-new-io-reality</link><guid isPermaLink="true">https://dotnetdigest.com/high-performance-networking-with-net-10-msquic-sockets-and-the-new-io-reality</guid><category><![CDATA[msquic]]></category><category><![CDATA[networking]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[.NET]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Mon, 29 Dec 2025 14:12:30 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767017377276/cecb16bd-0896-486c-9509-a93f77fcd56d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you want to write a .NET networking post today, the interesting work is in latency control, backpressure, memory reuse, and what happens when you push tens of thousands of concurrent connections through a real service. .NET 10 is a good moment to revisit this because the runtime and libraries keep shaving allocations and adding higher-level primitives that are finally worth using in production.</p>
<p>This post will build a small but serious component, a low-latency ingest gateway that can accept messages over QUIC (HTTP/3 style transport), fall back to TCP when needed, and keep CPU and allocations predictable under load. You will use System.Net.Quic (MsQuic under the hood on supported platforms), compare it against raw sockets for a tight loop protocol, and then decide where Kestrel fits when you actually want HTTP. For readers who only used QUIC via “turn on HTTP/3”, you will make QUIC feel like a tool, not magic.</p>
<h3 id="heading-what-changed-in-net-10-that-is-interesting-for-networking-people">What changed in .NET 10 that is interesting for networking people</h3>
<p>The .NET team’s own networking write-up for .NET 10 is worth anchoring on because it calls out improvements across HTTP, sockets, and new WebSocket APIs. That’s interesting because networking performance is often death by a thousand small costs rather than one big fix.</p>
<p>Two specific areas you can lean on in a high-signal post:</p>
<ul>
<li><p>HTTP/3 in ASP.NET Core and Kestrel is no longer just an experiment for most teams. If you are building mobile-facing or variable network clients, QUIC’s connection migration is a real advantage you can measure.</p>
</li>
<li><p>Library-level networking work continues to reduce allocations and improve hot paths. That changes the trade off between “DIY sockets” and “use the platform”.</p>
</li>
</ul>
<h3 id="heading-the-architecture-you-will-build">The architecture you will build</h3>
<p>You are going to build an ingest gateway with two listeners:</p>
<p>A QUIC listener that speaks a tiny binary protocol. It accepts a bidirectional stream per request, reads a length-prefixed frame, validates a small header, and forwards the payload to a channel for processing.</p>
<p>A TCP listener that speaks the same protocol for environments where QUIC is blocked, or where you do not control the client stack.</p>
<p>The important part is not the protocol. It is what you do around it, pooling, backpressure, cancellation, and timing. If your gateway collapses under slow clients or bursts, the protocol is irrelevant.</p>
<h3 id="heading-quic-in-net-without-the-training-wheels">QUIC in .NET without the training wheels</h3>
<p>Kestrel HTTP/3 is great when you want HTTP. But if your goal is “fast, cheap, predictable framing”, QUIC streams are a better match than HTTP request parsing. The point of this section is to show QUIC as a general transport primitive.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://dotnetdigest.com/msquic-the-transport-shift-that-will-redefine-distributed-net-systems">https://dotnetdigest.com/msquic-the-transport-shift-that-will-redefine-distributed-net-systems</a></div>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://dotnetdigest.com/building-a-quic-service-in-net-with-msquic">https://dotnetdigest.com/building-a-quic-service-in-net-with-msquic</a></div>
<p>Here is a minimal QUIC server loop using <code>System.Net.Quic</code>. This is intentionally small so you can expand it into something production-level.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> System.Buffers;
<span class="hljs-keyword">using</span> System.Net;
<span class="hljs-keyword">using</span> System.Net.Quic;
<span class="hljs-keyword">using</span> System.Net.Security;
<span class="hljs-keyword">using</span> System.Security.Cryptography.X509Certificates;
<span class="hljs-keyword">using</span> System.Threading.Channels;

<span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicIngestServer</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Channel&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; _inbox;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QuicIngestServer</span>(<span class="hljs-params">Channel&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; inbox</span>)</span> =&gt; _inbox = inbox;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">RunAsync</span>(<span class="hljs-params">IPEndPoint endPoint, X509Certificate2 cert, CancellationToken stopToken</span>)</span>
    {
        <span class="hljs-keyword">var</span> ssl = <span class="hljs-keyword">new</span> SslServerAuthenticationOptions
        {
            ServerCertificate = cert,
            ApplicationProtocols = [<span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"ingest-v1"</span>)]
        };

        <span class="hljs-keyword">var</span> options = <span class="hljs-keyword">new</span> QuicListenerOptions
        {
            ListenEndPoint = endPoint,
            ApplicationProtocols = ssl.ApplicationProtocols,
            ConnectionOptionsCallback = (_, _, _) =&gt;
                ValueTask.FromResult(<span class="hljs-keyword">new</span> QuicServerConnectionOptions
                {
                    ServerAuthenticationOptions = ssl
                })
        };

        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> listener = <span class="hljs-keyword">await</span> QuicListener.ListenAsync(options, stopToken);

        <span class="hljs-keyword">while</span> (!stopToken.IsCancellationRequested)
        {
            QuicConnection connection = <span class="hljs-keyword">await</span> listener.AcceptConnectionAsync(stopToken);
            _ = Task.Run(() =&gt; HandleConnectionAsync(connection, stopToken), stopToken);
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleConnectionAsync</span>(<span class="hljs-params">QuicConnection connection, CancellationToken stopToken</span>)</span>
    {
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> (connection)
        {
            <span class="hljs-keyword">while</span> (!stopToken.IsCancellationRequested)
            {
                QuicStream stream;
                <span class="hljs-keyword">try</span>
                {
                    stream = <span class="hljs-keyword">await</span> connection.AcceptInboundStreamAsync(stopToken);
                }
                <span class="hljs-keyword">catch</span>
                {
                    <span class="hljs-keyword">return</span>;
                }

                _ = Task.Run(() =&gt; HandleStreamAsync(stream, stopToken), stopToken);
            }
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleStreamAsync</span>(<span class="hljs-params">QuicStream stream, CancellationToken stopToken</span>)</span>
    {
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> (stream)
        {
            <span class="hljs-keyword">byte</span>[] lenBuf = ArrayPool&lt;<span class="hljs-keyword">byte</span>&gt;.Shared.Rent(<span class="hljs-number">4</span>);
            <span class="hljs-keyword">try</span>
            {
                <span class="hljs-keyword">await</span> ReadExactAsync(stream, lenBuf.AsMemory(<span class="hljs-number">0</span>, <span class="hljs-number">4</span>), stopToken);
                <span class="hljs-keyword">int</span> len = BitConverter.ToInt32(lenBuf, <span class="hljs-number">0</span>);

                <span class="hljs-keyword">if</span> (len &lt;= <span class="hljs-number">0</span> || len &gt; <span class="hljs-number">1</span>_000_000) <span class="hljs-keyword">return</span>;

                <span class="hljs-keyword">byte</span>[] payload = ArrayPool&lt;<span class="hljs-keyword">byte</span>&gt;.Shared.Rent(len);
                <span class="hljs-keyword">try</span>
                {
                    <span class="hljs-keyword">await</span> ReadExactAsync(stream, payload.AsMemory(<span class="hljs-number">0</span>, len), stopToken);

                    <span class="hljs-comment">// Backpressure lives here.</span>
                    <span class="hljs-comment">// If downstream is slow, this will naturally throttle readers.</span>
                    <span class="hljs-comment">// Copy before enqueueing: the pooled buffer is returned in the</span>
                    <span class="hljs-comment">// finally block below, so the channel must own its own memory.</span>
                    <span class="hljs-keyword">await</span> _inbox.Writer.WriteAsync(payload.AsMemory(<span class="hljs-number">0</span>, len).ToArray(), stopToken);
                }
                <span class="hljs-keyword">finally</span>
                {
                    ArrayPool&lt;<span class="hljs-keyword">byte</span>&gt;.Shared.Return(payload);
                }
            }
            <span class="hljs-keyword">finally</span>
            {
                ArrayPool&lt;<span class="hljs-keyword">byte</span>&gt;.Shared.Return(lenBuf);
            }
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ReadExactAsync</span>(<span class="hljs-params">QuicStream stream, Memory&lt;<span class="hljs-keyword">byte</span>&gt; buffer, CancellationToken stopToken</span>)</span>
    {
        <span class="hljs-keyword">int</span> read = <span class="hljs-number">0</span>;
        <span class="hljs-keyword">while</span> (read &lt; buffer.Length)
        {
            <span class="hljs-keyword">int</span> n = <span class="hljs-keyword">await</span> stream.ReadAsync(buffer.Slice(read), stopToken);
            <span class="hljs-keyword">if</span> (n == <span class="hljs-number">0</span>) <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidOperationException(<span class="hljs-string">"Peer closed stream early."</span>);
            read += n;
        }
    }
}
</code></pre>
<p>What makes this cool is what you do next:</p>
<p>You measure stream-per-request versus stream-reuse and show where head-of-line blocking goes away compared to TCP. You show what happens when you accept 20,000 connections, then start killing networks on the client side. You tie the observed behaviour back to QUIC features like multiplexed streams and, for HTTP/3 scenarios, connection migration.</p>
<h3 id="heading-tcp-sockets-still-matter-but-the-rules-change">TCP sockets still matter, but the rules change</h3>
<p>Raw sockets are still the baseline for lowest overhead. The mistake most posts make is pretending sockets are always faster in real apps. They can be, but only if you also own everything around them, buffers, framing, concurrency, and pressure.</p>
<p>In your TCP version, you should make one hard point: you do not <code>await ReceiveAsync</code> forever and hope. You implement backpressure with bounded channels, you cap per-connection memory, and you drop slow consumers deliberately. When you do that, raw sockets stop being mysterious and start being comparable.</p>
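<p>As a sketch of that policy, here is what a bounded, drop-aware dispatch queue can look like with <code>System.Threading.Channels</code>. The type and its names are illustrative, not from any particular server:</p>

```csharp
using System;
using System.Threading.Channels;

// Hypothetical per-connection dispatch queue: a bounded channel enforces the
// memory cap, and a failed TryWrite is an explicit, counted drop decision.
sealed class ConnectionQueue
{
    private readonly Channel<byte[]> _channel;
    public int Dropped { get; private set; }

    public ConnectionQueue(int capacity) =>
        _channel = Channel.CreateBounded<byte[]>(new BoundedChannelOptions(capacity)
        {
            SingleReader = true,
            // With Wait mode, TryWrite returns false when the queue is full,
            // so the producer decides the policy instead of buffering forever.
            FullMode = BoundedChannelFullMode.Wait
        });

    public bool TryEnqueue(byte[] frame)
    {
        if (_channel.Writer.TryWrite(frame)) return true;
        Dropped++; // slow consumer: drop deliberately and record it
        return false;
    }

    public ChannelReader<byte[]> Reader => _channel.Reader;
}
```

<p>The consumer drains <code>Reader</code> at its own pace; the producer never blocks and memory never grows past the cap.</p>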
<p>If you want to keep the post tight, do not paste a full socket server. Instead, show the two hottest sections:</p>
<ol>
<li><p>the framing read loop that uses <code>Socket.ReceiveAsync</code> into pooled buffers</p>
</li>
<li><p>the dispatcher that enforces a bounded queue and applies policy when full</p>
</li>
</ol>
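<p>A sketch of the first of those two hot sections, assuming the same 4 byte little endian length prefix as the QUIC handler earlier. The names and the frame size cap are illustrative:</p>

```csharp
using System;
using System.Buffers;
using System.Buffers.Binary;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

static class TcpFraming
{
    // Hot loop: read length-prefixed frames into pooled buffers, one at a time.
    public static async Task ReadFramesAsync(
        Socket socket,
        Func<ReadOnlyMemory<byte>, ValueTask> onFrame,
        CancellationToken ct)
    {
        byte[] lenBuf = ArrayPool<byte>.Shared.Rent(4);
        try
        {
            while (!ct.IsCancellationRequested)
            {
                // A clean close at a frame boundary ends the loop.
                if (!await TryReadExactAsync(socket, lenBuf.AsMemory(0, 4), ct)) break;

                int length = BinaryPrimitives.ReadInt32LittleEndian(lenBuf);
                if (length < 0 || length > 1_000_000)
                    throw new InvalidOperationException("Frame size out of bounds.");

                byte[] payload = ArrayPool<byte>.Shared.Rent(length);
                try
                {
                    if (!await TryReadExactAsync(socket, payload.AsMemory(0, length), ct))
                        throw new InvalidOperationException("Peer closed connection mid-frame.");
                    await onFrame(payload.AsMemory(0, length));
                }
                finally
                {
                    ArrayPool<byte>.Shared.Return(payload);
                }
            }
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(lenBuf);
        }
    }

    // Returns false on a clean close before any byte; throws on close mid-buffer.
    private static async Task<bool> TryReadExactAsync(Socket socket, Memory<byte> buffer, CancellationToken ct)
    {
        int read = 0;
        while (read < buffer.Length)
        {
            int n = await socket.ReceiveAsync(buffer.Slice(read), SocketFlags.None, ct);
            if (n == 0)
            {
                if (read == 0) return false;
                throw new InvalidOperationException("Peer closed connection mid-frame.");
            }
            read += n;
        }
        return true;
    }
}
```

<p>Note how closely this mirrors the QUIC read loop: the framing logic is identical, which is exactly what makes the two hot paths comparable in a benchmark.</p>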
<p>Then compare those two hot sections to the QUIC stream handler above. That is the comparison people care about.</p>
<h3 id="heading-where-kestrel-fits-in-a-performance-discussion">Where Kestrel fits in a performance discussion</h3>
<p>Kestrel is not slow; it is just doing more. It gives you TLS, HTTP parsing, routing, middleware, logging hooks, and observability integration. The post should explain that if you want HTTP, you should not throw all of that away for a custom protocol unless you can prove the win in your own workload.</p>
<p>If you do want HTTP/3, show the minimum config to enable it and then measure it. The Microsoft doc on HTTP/3 in Kestrel is the authoritative reference here, including QUIC behaviour like connection migration.</p>
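<p>For reference, the minimum Kestrel configuration is small. A minimal sketch in <code>Program.cs</code>, with the port number chosen arbitrarily:</p>

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(5001, listenOptions =>
    {
        // Offer HTTP/1.1 and HTTP/2 alongside HTTP/3; clients discover HTTP/3
        // via the Alt-Svc response header and switch on a later request.
        listenOptions.Protocols = HttpProtocols.Http1AndHttp2AndHttp3;
        // HTTP/3 requires TLS, so UseHttps is mandatory here.
        listenOptions.UseHttps();
    });
});

var app = builder.Build();
app.MapGet("/", () => "hello");
app.Run();
```

<p>The interesting part is not the config, it is what you measure afterwards under your own traffic shape.</p>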
<p>A clean way to structure this section is to treat Kestrel as a control layer and QUIC streams or TCP sockets as a data layer. Your control layer can be normal HTTP, OpenAPI, auth, throttling rules, and config updates. Your data layer can be the ultra-lean protocol used for high-rate ingest.</p>
<h3 id="heading-websockets-in-net-10-the-bit-most-people-will-miss">WebSockets in .NET 10 - the bit most people will miss</h3>
<p>A lot of systems still use WebSockets for realtime feeds because the client framework is easy. .NET 10 added new WebSocket APIs that change how you can shape streaming code, and this is exactly the kind of material that has not been done to death yet. It is called out explicitly in the .NET 10 networking improvements material and in coverage of the release. If you include WebSockets, do it with purpose: show one scenario where WebSockets are the right choice for browser clients, then show your QUIC protocol for service-to-service.</p>
<h3 id="heading-how-to-benchmark-this">How to benchmark this</h3>
<p>If you publish one graph and call it a day, senior readers will ignore you. Do three measurements and explain what they mean:</p>
<ul>
<li><p>Throughput - messages per second at fixed payload sizes.</p>
</li>
<li><p>Tail latency - p95 and p99 under burst and under slow clients.</p>
</li>
<li><p>CPU cost - cores consumed at target throughput.</p>
</li>
</ul>
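<p>For the tail latency numbers, compute percentiles from the full set of raw samples rather than averaging intermediate results. A minimal nearest rank sketch, with made-up sample values:</p>

```csharp
using System;

// Nearest-rank percentile over a sorted sample set.
static double Percentile(double[] sorted, double p)
{
    int rank = (int)Math.Ceiling(p / 100.0 * sorted.Length);
    return sorted[Math.Clamp(rank - 1, 0, sorted.Length - 1)];
}

// Illustrative per-request latencies in milliseconds, not real measurements.
double[] latencies = { 1.2, 0.9, 3.4, 1.1, 15.0, 1.0, 1.3, 2.2, 1.1, 0.8 };
Array.Sort(latencies);
Console.WriteLine($"p95={Percentile(latencies, 95)}ms p99={Percentile(latencies, 99)}ms");
```

<p>With only ten samples, p95 and p99 both collapse to the maximum, which is itself the point: tail percentiles need large sample counts before they mean anything.</p>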
<p>Then use two test frames:</p>
<ul>
<li><p>A steady load that fits in cache and shows best case.</p>
</li>
<li><p>A bursty load with induced jitter, where your backpressure and queue policy is the real differentiator.</p>
</li>
</ul>
<p>Finally, explicitly call out your environment and settings: Linux vs Windows, container limits, TLS on or off, payload sizes, and number of concurrent connections. Without those, your results are just marketing.</p>
<p>End with a concrete rule:</p>
<p>If you need HTTP and broad compatibility, use Kestrel, enable HTTP/3 where it helps, and focus on the app-level bottlenecks.</p>
<p>If you need predictable low-latency ingest between controlled clients and services, QUIC streams are now practical in .NET, and they remove entire classes of TCP pain while staying fast.</p>
]]></content:encoded></item><item><title><![CDATA[Why & When to use the Volatile Keyword in .Net]]></title><description><![CDATA[You have probably seen the volatile keyword at least once if you’ve been writing .NET for a while. You may also have noticed that it appears far less often than lock, Interlocked, or ConcurrentDictionary. That’s not accidental. volatile exists to solv...]]></description><link>https://dotnetdigest.com/why-and-when-to-use-the-volatile-keyword-in-net</link><guid isPermaLink="true">https://dotnetdigest.com/why-and-when-to-use-the-volatile-keyword-in-net</guid><category><![CDATA[low level programming]]></category><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Microsoft]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Mon, 22 Dec 2025 22:44:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766443396560/6c50013e-0eec-456c-888b-8fef9ccaa052.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You have probably seen the <code>volatile</code> keyword at least once if you’ve been writing .NET for a while. You may also have noticed that it appears far less often than <code>lock</code>, <code>Interlocked</code>, or <code>ConcurrentDictionary</code>. That’s not accidental. <code>volatile</code> exists to solve a very specific class of problems, and using it without fully understanding the memory model can make your code worse rather than better.</p>
<p>Below we’ll look into what <code>volatile</code> actually does in .NET, how it interacts with the CLR memory model, why it exists at all in a managed runtime, and when you should and should not use it. By the end, you should be able to read any use of <code>volatile</code> in code and immediately judge whether it is correct, unnecessary, or just dangerous.</p>
<h2 id="heading-why-volatile-exists-in-a-managed-runtime">Why <code>volatile</code> Exists in a Managed Runtime</h2>
<p>At first glance, <code>volatile</code> feels like a low-level construct that should not exist in a high-level managed language. After all, C# runs on a virtual machine, uses garbage collection, and abstracts away most hardware details. Multithreading, however, breaks many assumptions about abstraction.</p>
<p>Modern CPUs aggressively reorder instructions, cache values in registers, and delay writes to main memory. The CLR and JIT compiler are free to reorder instructions as long as single-threaded semantics are preserved. None of this is a problem until multiple threads start reading and writing shared memory.</p>
<p>The key issue is visibility. One thread may update a field, but another thread may continue to see an old value indefinitely if no synchronisation happens. The CPU is allowed to keep a cached copy of a value and never re-read it unless something forces it to do so.</p>
<p><code>volatile</code> exists to provide visibility and ordering guarantees without introducing full mutual exclusion.</p>
<h2 id="heading-a-simple-problem-that-looks-safe-but-isnt">A Simple Problem That Looks Safe But Isn’t</h2>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">Worker</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">bool</span> _stop;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Run</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">while</span> (!_stop)
        {
            DoWork();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Stop</span>(<span class="hljs-params"></span>)</span>
    {
        _stop = <span class="hljs-literal">true</span>;
    }
}
</code></pre>
<p>This looks correct. One thread calls <code>Run</code>, another thread calls <code>Stop</code>, and eventually the loop should exit. The problem is that the CLR and CPU are allowed to cache <code>_stop</code> in a register. The JIT may even hoist the read out of the loop entirely.</p>
<p>From the point of view of the runtime, there is nothing in this method that forces <code>_stop</code> to be reloaded from memory on each iteration. The loop may never terminate.</p>
<p>This pattern causes real production bugs all the time.</p>
<h2 id="heading-what-volatile-actually-guarantees-in-net">What <code>volatile</code> Actually Guarantees in .NET</h2>
<p>When you mark a field as <code>volatile</code>, you are asking the CLR to enforce specific memory ordering rules around reads and writes of that field.</p>
<p>In .NET, <code>volatile</code> provides the following guarantees:</p>
<ul>
<li><p>A read of a volatile field cannot be satisfied from a value the JIT has cached in a register; every read observes the most recent value made visible by other threads.</p>
</li>
<li><p>A write to a volatile field becomes visible to other threads that subsequently read that field.</p>
</li>
<li><p>Reads and writes of a volatile field act as half memory barriers, with acquire and release semantics respectively.</p>
</li>
</ul>
<p>What <code>volatile</code> does not guarantee is equally important. It does not make compound operations atomic. It does not provide mutual exclusion. It does not prevent race conditions involving multiple fields.</p>
<h2 id="heading-fixing-the-stop-flag-example-correctly">Fixing the Stop Flag Example Correctly</h2>
<p>If we update the earlier example to use <code>volatile</code>, it becomes:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">Worker</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">volatile</span> <span class="hljs-keyword">bool</span> _stop;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Run</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">while</span> (!_stop)
        {
            DoWork();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Stop</span>(<span class="hljs-params"></span>)</span>
    {
        _stop = <span class="hljs-literal">true</span>;
    }
}
</code></pre>
<p>This version is now correct. Each iteration of the loop is required to reread <code>_stop</code> from memory. When <code>Stop</code> sets <code>_stop</code> to true, the write is immediately visible to the running thread.</p>
<p>This is one of the few classic scenarios where <code>volatile</code> is exactly the right tool.</p>
<h2 id="heading-the-net-memory-model-and-reordering">The .NET Memory Model and Reordering</h2>
<p>To understand why <code>volatile</code> works, you need to understand instruction reordering.</p>
<p>Both the JIT compiler and the CPU are allowed to reorder instructions as long as the observable behaviour of a single thread does not change. This includes moving reads earlier, delaying writes, or combining operations.</p>
<pre><code class="lang-csharp">_value = <span class="hljs-number">42</span>;
_ready = <span class="hljs-literal">true</span>;
</code></pre>
<p>Without synchronisation, another thread might observe <code>_ready</code> as true while still seeing an old value of <code>_value</code>. Nothing requires the write to <code>_value</code> to have become visible first.</p>
<p>This is where memory barriers come into play. A volatile write introduces a release barrier. A volatile read introduces an acquire barrier. Together, they ensure that writes before the volatile write become visible before the volatile value itself becomes visible.</p>
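<p>The same acquire and release semantics are also available per call site through the <code>Volatile</code> class, which is often clearer than marking the field, because each ordered access is explicit at the point of use:</p>

```csharp
using System;
using System.Threading;

class Worker
{
    private bool _stop; // deliberately not marked volatile

    public void Run()
    {
        // Volatile.Read has acquire semantics, so the JIT cannot hoist
        // this read out of the loop into a register.
        while (!Volatile.Read(ref _stop))
        {
            // DoWork();
        }
    }

    // Volatile.Write has release semantics: writes before it become
    // visible no later than the flag itself.
    public void Stop() => Volatile.Write(ref _stop, true);
}
```

<p>This form also avoids the trap where a field is marked volatile but some accesses to it never needed the ordering at all.</p>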
<h2 id="heading-using-volatile-for-safe-publication">Using <code>volatile</code> for Safe Publication</h2>
<p>One legitimate use of <code>volatile</code> is safe publication of immutable state.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">class</span> <span class="hljs-title">ConfigHolder</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">volatile</span> Config? _config;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Initialize</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> config = LoadConfig();
        _config = config;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> Config <span class="hljs-title">Get</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">while</span> (_config == <span class="hljs-literal">null</span>)
        {
            Thread.Yield();
        }

        <span class="hljs-keyword">return</span> _config;
    }
}
</code></pre>
<p>Here, <code>_config</code> is written once and then read many times. The <code>volatile</code> keyword ensures that once a thread sees <code>_config</code> as non-null, it also sees the fully constructed <code>Config</code> object.</p>
<p>This only works because <code>Config</code> is immutable after construction. If the object were mutated after publication, <code>volatile</code> would not protect you.</p>
<h2 id="heading-why-volatile-is-not-a-lock-replacement">Why <code>volatile</code> Is Not a Lock Replacement</h2>
<p>A common mistake is to treat <code>volatile</code> as a lightweight lock. This is incorrect.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">volatile</span> <span class="hljs-keyword">int</span> _count;

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Increment</span>(<span class="hljs-params"></span>)</span>
{
    _count++;
}
</code></pre>
<p>This code is broken. The increment operation is a read-modify-write sequence. <code>volatile</code> ensures visibility, but it does not make the operation atomic. Two threads can read the same value and both write back the same incremented result.</p>
<p>If you need atomicity, you need <code>Interlocked</code> or a lock:</p>
<pre><code class="lang-csharp">Interlocked.Increment(<span class="hljs-keyword">ref</span> _count);
</code></pre>
<p>or</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">lock</span> (_sync)
{
    _count++;
}
</code></pre>
<h2 id="heading-volatile-vs-interlocked"><code>volatile</code> vs <code>Interlocked</code></h2>
<p><code>Interlocked</code> operations provide both atomicity and memory ordering guarantees. Every <code>Interlocked</code> method acts as a full memory barrier. This makes them strictly stronger than <code>volatile</code>.</p>
<p>If you are already using <code>Interlocked</code>, adding <code>volatile</code> is unnecessary and misleading. The presence of <code>volatile</code> suggests to the reader that visibility is the only concern, when in fact atomicity is also involved.</p>
<p>As a rule, if you need read-modify-write semantics, <code>volatile</code> is the wrong tool.</p>
<h2 id="heading-volatile-and-reference-types"><code>volatile</code> and Reference Types</h2>
<p>When a reference is marked as volatile, the volatility applies to the reference itself, not to the object it points to.</p>
<p>This is subtle but critical.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">volatile</span> MyState _state;
</code></pre>
<p>This ensures that reads and writes of <code>_state</code> are visible across threads. It does not ensure that mutations of fields inside <code>MyState</code> are synchronised.</p>
<p>This pattern is safe only if <code>MyState</code> is immutable, or if all mutations are otherwise synchronised.</p>
<h2 id="heading-performance-characteristics-of-volatile">Performance Characteristics of <code>volatile</code></h2>
<p>A volatile read or write is more expensive than a normal read or write. It prevents certain compiler and CPU optimisations and may introduce memory fences.</p>
<p>That said, volatile operations are still far cheaper than locks. In hot paths where contention is low and the access pattern is simple, <code>volatile</code> can be the correct performance trade-off.</p>
<p>Performance, however, should never be the first reason to choose <code>volatile</code>. Correctness must come first.</p>
<h2 id="heading-when-volatile-is-the-wrong-abstraction">When <code>volatile</code> Is the Wrong Abstraction</h2>
<p>If you find yourself needing multiple volatile fields that must be updated together, you are already in trouble. <code>volatile</code> provides no way to coordinate state changes across multiple variables. If you need invariants, ordering, or compound updates, you need higher-level synchronisation. This might be a lock, a concurrent collection, or a lock-free algorithm using <code>Interlocked</code>. Using <code>volatile</code> in these situations often results in code that works in tests and fails under real load.</p>
<h2 id="heading-volatile-in-modern-net-codebases"><code>volatile</code> in Modern .NET Codebases</h2>
<p>In modern .NET, <code>volatile</code> is most commonly seen in low-level infrastructure code, runtime components, or carefully designed lock-free algorithms. It is rare in business logic, and that is a good thing. If you encounter <code>volatile</code> in application code, treat it as a signal. Either the developer deeply understood the memory model, or they were guessing. There is rarely a middle ground. When reviewing such code, always ask a single question. What specific visibility problem is this solving, and why is a stronger abstraction not used?</p>
<p>If you cannot answer that confidently, the code is likely wrong.</p>
<h2 id="heading-a-mental-model-that-actually-works">A Mental Model That Actually Works</h2>
<p>The safest way to think about <code>volatile</code> is this. It is not about thread safety. It is about communication.</p>
<p>A volatile field is a communication channel between threads that ensures messages are seen in the correct order. It does not protect data. It does not serialise access. It only ensures that when one thread says something, another thread hears it. Used sparingly and precisely, <code>volatile</code> is a sharp and effective tool. Used casually, it is a foot-gun.</p>
<p>Most developers will go their entire careers without needing <code>volatile</code>. That is not a failure. It is a sign that higher-level constructs exist and should be preferred. When you do need it, you need to understand it fully. Partial understanding is worse than none at all. If you ever find yourself adding <code>volatile</code> to “fix” a threading bug without being able to explain exactly why it works, stop. Step back. Choose a safer abstraction. Concurrency bugs are some of the hardest bugs you will ever debug. <code>volatile</code> can help prevent them, but only if you respect just how narrow its guarantees really are.</p>
]]></content:encoded></item><item><title><![CDATA[Building a QUIC service in .NET with MsQuic]]></title><description><![CDATA[If you only ever touch QUIC through HTTP/3, it is easy to miss how different the transport really is. HTTP keeps you thinking in requests, responses, headers, and status codes. MsQuic forces you to think in connections, streams, flow control, and lif...]]></description><link>https://dotnetdigest.com/building-a-quic-service-in-net-with-msquic</link><guid isPermaLink="true">https://dotnetdigest.com/building-a-quic-service-in-net-with-msquic</guid><category><![CDATA[Microsoft]]></category><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[networking]]></category><category><![CDATA[transport_layer]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sun, 21 Dec 2025 11:12:19 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766315926411/727e5372-3958-4254-90e0-c99b3e4df15e.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you only ever touch QUIC through HTTP/3, it is easy to miss how different the transport really is. HTTP keeps you thinking in requests, responses, headers, and status codes. MsQuic forces you to think in connections, streams, flow control, and lifetimes. Once you accept that shift, you can build systems that simply do not map cleanly onto HTTP at all.</p>
<p>I wrote about MsQuic a few months ago <a target="_blank" href="https://dotnetdigest.com/msquic-the-transport-shift-that-will-redefine-distributed-net-systems">here</a>. This time we are going to build a real QUIC-based service in .NET using MsQuic directly. The service is stateful, supports multiple concurrent bidirectional streams per client, and allows clients to reconnect and resume logical sessions. Along the way, we will look at concrete code patterns that actually work under load, not just demos that compile.</p>
<p>This assumes you already understand async and await, memory ownership, TLS, and basic networking. We will focus on how MsQuic changes the shape of the code you write.</p>
<h3 id="heading-starting-with-the-right-mental-model">Starting with the right mental model</h3>
<p>The most important thing to understand before writing any code is that MsQuic is callback driven and aggressively concurrent. You do not call ReadAsync and wait. MsQuic calls you and tells you that something happened. Your job is to translate those events into a form that your application logic can consume safely.</p>
<p>If you try to fight this and pretend MsQuic is just another stream abstraction, you will lose. You should treat MsQuic as an event source and build a thin translation layer that feeds async friendly primitives such as Channels or Pipes.</p>
<p>Once you adopt that model, the rest of the design starts to fall into place.</p>
<h3 id="heading-initialising-msquic-correctly-in-a-net-host">Initialising MsQuic correctly in a .NET host</h3>
<p>MsQuic has two global concepts you must get right from the beginning, the registration and the configuration. These are not cheap objects and they are not request scoped. They belong at the same level as your host itself.</p>
<p>In a typical .NET application, you would initialise these during startup and keep them alive for the lifetime of the process.</p>
<p>A simplified example looks like this.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> Microsoft.Quic;
<span class="hljs-keyword">using</span> System.Net.Security;

<span class="hljs-keyword">var</span> registration = <span class="hljs-keyword">new</span> QuicRegistration(
    <span class="hljs-keyword">new</span> QuicRegistrationOptions
    {
        AppName = <span class="hljs-string">"SuperQuicService"</span>,
        ExecutionProfile = QuicExecutionProfile.LowLatency
    });

<span class="hljs-keyword">var</span> serverCertificate = LoadCertificate();

<span class="hljs-keyword">var</span> configuration = <span class="hljs-keyword">new</span> QuicConfiguration(
    registration,
    <span class="hljs-keyword">new</span> QuicConfigurationOptions
    {
        AlpnProtocols = <span class="hljs-keyword">new</span>[] { <span class="hljs-string">"super-quic"</span> },
        MaxInboundBidirectionalStreams = <span class="hljs-number">100</span>,
        MaxInboundUnidirectionalStreams = <span class="hljs-number">0</span>,
        IdleTimeout = TimeSpan.FromMinutes(<span class="hljs-number">2</span>),
        ServerAuthenticationOptions = <span class="hljs-keyword">new</span> SslServerAuthenticationOptions
        {
            ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
            {
                <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"super-quic"</span>)
            },
            ServerCertificate = serverCertificate
        }
    });
</code></pre>
<p>The important detail here is the intent. You are explicitly choosing stream limits. You are explicitly choosing an idle timeout. You are explicitly controlling ALPN. These decisions affect how your service behaves under stress.</p>
<p>Once this configuration is in use, every connection created from it inherits these characteristics. You cannot patch this later without restarting the process.</p>
<h3 id="heading-listening-for-incoming-connections">Listening for incoming connections</h3>
<p>With configuration in place, you can create a listener. This listener accepts incoming QUIC connections and hands them to you via callbacks.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> listener = <span class="hljs-keyword">new</span> QuicListener(
    registration,
    configuration,
    <span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Any, <span class="hljs-number">5555</span>));

listener.Start();
</code></pre>
<p>At this point, nothing interesting has happened yet. The real work begins when a client connects.</p>
<p>MsQuic will surface an incoming connection as a QuicConnection instance. You should immediately associate it with application state.</p>
<h3 id="heading-attaching-application-state-to-a-connection">Attaching application state to a connection</h3>
<p>When a new connection arrives, you should create a connection state object that represents everything your application knows about that peer. This is where many designs go wrong by putting too much logic into the state object itself.</p>
<p>A good connection state is mostly data.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ConnectionState</span>
{
    <span class="hljs-keyword">public</span> Guid ConnectionId { <span class="hljs-keyword">get</span>; } = Guid.NewGuid();
    <span class="hljs-keyword">public</span> ConcurrentDictionary&lt;<span class="hljs-keyword">long</span>, StreamState&gt; Streams { <span class="hljs-keyword">get</span>; } = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">public</span> CancellationTokenSource Lifetime { <span class="hljs-keyword">get</span>; } = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">public</span> SessionState? Session { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}
</code></pre>
<p>When you accept a connection, you create one of these and associate it with the QuicConnection using a GCHandle or a dictionary keyed by the connection handle.</p>
<p>From this point on, every stream event for this connection can find its owning state.</p>
<h3 id="heading-accepting-and-handling-streams">Accepting and handling streams</h3>
<p>In QUIC, streams are the unit of work. A client can open many of them concurrently, and they are independent of one another.</p>
<p>When MsQuic notifies you of a new incoming stream, you create a stream handler. This handler owns the lifetime of that stream and is responsible for reading and writing data.</p>
<p>Here is a simplified pattern that works well in practice.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">StreamState</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">long</span> StreamId { <span class="hljs-keyword">get</span>; }
    <span class="hljs-keyword">public</span> QuicStream Stream { <span class="hljs-keyword">get</span>; }
    <span class="hljs-keyword">public</span> ConnectionState Connection { <span class="hljs-keyword">get</span>; }
    <span class="hljs-keyword">public</span> Pipe Pipe { <span class="hljs-keyword">get</span>; } = <span class="hljs-keyword">new</span>();

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">StreamState</span>(<span class="hljs-params"><span class="hljs-keyword">long</span> streamId, QuicStream stream, ConnectionState connection</span>)</span>
    {
        StreamId = streamId;
        Stream = stream;
        Connection = connection;
    }
}
</code></pre>
<p>The key idea here is the Pipe. MsQuic delivers buffers via callbacks. Pipes give you an async reader and writer pair that integrate naturally with modern .NET code.</p>
<p>When MsQuic tells you data has arrived, you write it into the PipeWriter. Your application logic reads from the PipeReader at its own pace.</p>
<h3 id="heading-translating-msquic-receive-callbacks-into-async-reads">Translating MsQuic receive callbacks into async reads</h3>
<p>When data arrives on a stream, MsQuic invokes a callback with one or more buffers. You must copy or reference that data and then tell MsQuic when you are done with it.</p>
<p>A typical receive handler might look like this.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">OnStreamDataReceived</span>(<span class="hljs-params">StreamState state, ReadOnlySpan&lt;<span class="hljs-keyword">byte</span>&gt; data</span>)</span>
{
    <span class="hljs-keyword">var</span> writer = state.Pipe.Writer;

    writer.Write(data);
    <span class="hljs-keyword">var</span> result = writer.FlushAsync().GetAwaiter().GetResult();

    <span class="hljs-keyword">if</span> (result.IsCompleted)
    {
        state.Stream.ShutdownRead();
    }
}
</code></pre>
<p>This code looks simple, but it hides an important behaviour. If the reader is slow, FlushAsync will eventually apply backpressure. That backpressure propagates all the way back to MsQuic, which slows the sender down. You are no longer lying to the transport.</p>
<p>On the reading side, your application logic can now be written as straightforward async code.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">async</span> Task <span class="hljs-title">ProcessStreamAsync</span>(<span class="hljs-params">StreamState state</span>)</span>
{
    <span class="hljs-keyword">var</span> reader = state.Pipe.Reader;

    <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>)
    {
        <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> reader.ReadAsync();
        <span class="hljs-keyword">var</span> buffer = result.Buffer;

        <span class="hljs-keyword">if</span> (buffer.Length &gt; <span class="hljs-number">0</span>)
        {
            <span class="hljs-keyword">await</span> HandleApplicationMessage(buffer);
        }

        reader.AdvanceTo(buffer.End);

        <span class="hljs-keyword">if</span> (result.IsCompleted)
            <span class="hljs-keyword">break</span>;
    }
}
</code></pre>
<p>This is the point where MsQuic stops feeling alien. You have turned callbacks into something that fits naturally into the async model you already understand.</p>
<h3 id="heading-defining-an-application-protocol-on-top-of-streams">Defining an application protocol on top of streams</h3>
<p>At this point, you have raw byte streams. You still need a protocol.</p>
<p>In our service, the first stream a client opens is a control stream. The client sends a session token. The server responds with either a resume acknowledgement or a new session id.</p>
<p>A simple framing format might be length prefixed JSON messages. That is not flash, but it is effective.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">record</span> <span class="hljs-title">ControlMessage</span>(<span class="hljs-keyword">string</span> Type, <span class="hljs-keyword">string</span>? Token);

<span class="hljs-function"><span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleControlStream</span>(<span class="hljs-params">StreamState state</span>)</span>
{
    <span class="hljs-keyword">var</span> reader = state.Pipe.Reader;

    <span class="hljs-keyword">var</span> message = <span class="hljs-keyword">await</span> ReadJsonMessage&lt;ControlMessage&gt;(reader);

    <span class="hljs-keyword">if</span> (message.Type == <span class="hljs-string">"resume"</span>)
    {
        state.Connection.Session = ResumeSession(message.Token);
    }
    <span class="hljs-keyword">else</span>
    {
        state.Connection.Session = CreateNewSession();
    }

    <span class="hljs-keyword">await</span> SendJsonMessage(state.Stream, state.Connection.Session);
}
</code></pre>
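<p><code>ReadJsonMessage</code> and <code>SendJsonMessage</code> are left undefined above. One hypothetical shape for the read side, assuming a 4 byte little endian length prefix followed by UTF-8 JSON:</p>

```csharp
using System;
using System.Buffers;
using System.Buffers.Binary;
using System.IO.Pipelines;
using System.Text.Json;
using System.Threading.Tasks;

static class Framing
{
    // Reads one length-prefixed JSON message from a PipeReader.
    public static async Task<T?> ReadJsonMessage<T>(PipeReader reader)
    {
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            if (buffer.Length >= 4)
            {
                byte[] lenBytes = new byte[4];
                buffer.Slice(0, 4).CopyTo(lenBytes);
                int length = BinaryPrimitives.ReadInt32LittleEndian(lenBytes);

                if (buffer.Length >= 4 + length)
                {
                    T? message = JsonSerializer.Deserialize<T>(buffer.Slice(4, length).ToArray());
                    reader.AdvanceTo(buffer.GetPosition(4 + length));
                    return message;
                }
            }

            // Not enough buffered yet: mark everything examined so the next
            // ReadAsync waits for more data instead of spinning.
            reader.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted)
                throw new InvalidOperationException("Stream ended before a full message arrived.");
        }
    }
}
```

<p>The write side mirrors this: serialise to UTF-8 bytes, write the 4 byte length, then the payload, and flush.</p>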
<p>Once the session is established, subsequent streams implicitly belong to it. The transport does not know or care. This is purely application logic layered on top of QUIC.</p>
<h3 id="heading-supporting-reconnection-and-resumability">Supporting reconnection and resumability</h3>
<p>QUIC can survive IP changes, but not process restarts. If you want resumability across reconnects, you must build it yourself.</p>
<p>The pattern that works is simple. Session state lives independently of connections. Connections attach to sessions.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">SessionState</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Token { <span class="hljs-keyword">get</span>; }
    <span class="hljs-keyword">public</span> ConcurrentDictionary&lt;<span class="hljs-keyword">long</span>, <span class="hljs-keyword">object</span>&gt; LogicalState { <span class="hljs-keyword">get</span>; } = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">public</span> DateTime LastSeen { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">SessionState</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> token</span>)</span>
    {
        Token = token;
        LastSeen = DateTime.UtcNow;
    }
}
</code></pre>
<p>When a connection drops, you do not immediately destroy the session. You mark it as detached. If a client reconnects and presents the same token within a timeout window, you reattach.</p>
<p>Reconnecting and reopening streams is cheap. You are not fighting TCP TIME_WAIT or connection storms.</p>
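<p>A sketch of the detach and reattach flow, reusing the <code>SessionState</code> type above. The <code>SessionStore</code> class and the two minute timeout are assumptions for illustration.</p>
<pre><code class="lang-csharp">sealed class SessionStore
{
    private readonly ConcurrentDictionary&lt;string, SessionState&gt; _sessions = new();
    private static readonly TimeSpan DetachTimeout = TimeSpan.FromMinutes(2); // window is an assumption

    public SessionState ResumeOrCreate(string? token)
    {
        if (token is not null
            &amp;&amp; _sessions.TryGetValue(token, out var existing)
            &amp;&amp; DateTime.UtcNow - existing.LastSeen &lt; DetachTimeout)
        {
            existing.LastSeen = DateTime.UtcNow;
            return existing; // reattach to a detached session
        }

        var created = new SessionState(Guid.NewGuid().ToString("N"));
        _sessions[created.Token] = created;
        return created;
    }

    public void Sweep()
    {
        // Run periodically: drop sessions whose timeout window has expired.
        foreach (var (token, session) in _sessions)
            if (DateTime.UtcNow - session.LastSeen &gt; DetachTimeout)
                _sessions.TryRemove(token, out _);
    }
}
</code></pre>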
<h3 id="heading-writing-back-to-the-client-without-blocking-everything-else">Writing back to the client without blocking everything else</h3>
<p>Sending data in MsQuic is also asynchronous and stream scoped. Each stream has its own flow control. Writing too much on one stream does not block others.</p>
<p>A typical send method.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">async</span> Task <span class="hljs-title">SendAsync</span>(<span class="hljs-params">QuicStream stream, ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; data</span>)</span>
{
    <span class="hljs-keyword">await</span> stream.WriteAsync(data, completeWrites: <span class="hljs-literal">false</span>);
}
</code></pre>
<p>Because streams are independent, you can fire off writes on multiple streams concurrently without fear of head of line blocking. This is one of the core advantages of QUIC that you only feel when you work at this level.</p>
<h3 id="heading-handling-shutdowns-cleanly">Handling shutdowns cleanly</h3>
<p>One of the hardest parts of any transport code is shutdown. Streams may still be active. Connections may be half closed. Callbacks may still be in flight.</p>
<p>The rule with MsQuic is simple. Never assume callbacks have stopped until the connection is fully closed. Use cancellation tokens to signal intent, but always code defensively.</p>
<p>When shutting down a connection, cancel its lifetime token, stop accepting new streams, and let existing streams drain if possible. Then close the connection.</p>
<p>If you get this wrong, you will see use after free bugs and random crashes. This is where discipline matters.</p>
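<p>The shutdown order described above can be sketched like this. The <code>ConnectionState</code> members used here, <code>Lifetime</code>, <code>ActiveStreamTasks</code>, and <code>Quic</code>, are hypothetical names for state you would track yourself.</p>
<pre><code class="lang-csharp">async Task ShutdownConnectionAsync(ConnectionState connection)
{
    // 1. Signal intent: stop accepting new streams, cancel pending work.
    connection.Lifetime.Cancel();

    // 2. Let existing streams drain, but never wait forever.
    using var drainWindow = new CancellationTokenSource(TimeSpan.FromSeconds(5)); // window is an assumption
    try
    {
        await Task.WhenAll(connection.ActiveStreamTasks).WaitAsync(drainWindow.Token);
    }
    catch (OperationCanceledException)
    {
        // Drain window expired; abandon whatever is left.
    }

    // 3. Only now close the connection. Callbacks can still fire until this completes.
    await connection.Quic.CloseAsync(errorCode: 0);
}
</code></pre>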
<h3 id="heading-observability-with-real-identifiers">Observability with real identifiers</h3>
<p>Because you are not using HTTP, you must create your own observability story.</p>
<p>A simple but effective approach is to generate a connection id and a stream id and include them in every log entry.</p>
<pre><code class="lang-csharp">logger.LogInformation(
    <span class="hljs-string">"Received data on connection {ConnectionId}, stream {StreamId}"</span>,
    connectionState.ConnectionId,
    streamState.StreamId);
</code></pre>
<p>When something goes wrong under load, these identifiers give you a narrative. Without them, you are blind.</p>
<p>Metrics matter just as much. Active connections, active streams, and backpressure events tell you far more about system health than CPU usage ever will.</p>
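<p>Those three metrics are cheap to expose with <code>System.Diagnostics.Metrics</code>. The meter and instrument names below are assumptions.</p>
<pre><code class="lang-csharp">using System.Diagnostics.Metrics;

static class TransportMetrics
{
    private static readonly Meter Meter = new("MyTransport"); // meter name is an assumption

    public static readonly UpDownCounter&lt;long&gt; ActiveConnections =
        Meter.CreateUpDownCounter&lt;long&gt;("transport.connections.active");

    public static readonly UpDownCounter&lt;long&gt; ActiveStreams =
        Meter.CreateUpDownCounter&lt;long&gt;("transport.streams.active");

    public static readonly Counter&lt;long&gt; BackpressureEvents =
        Meter.CreateCounter&lt;long&gt;("transport.backpressure.events");
}

// On accept: TransportMetrics.ActiveConnections.Add(1);
// On close:  TransportMetrics.ActiveConnections.Add(-1);
</code></pre>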
<h3 id="heading-testing-with-real-quic-traffic">Testing with real QUIC traffic</h3>
<p>The only tests that really matter here are integration tests that use real MsQuic connections.</p>
<p>You can spin up the server in process and connect using a QuicConnection from a test project. Because everything is async and UDP based, these tests are fast enough to run in CI.</p>
<p>What you should not do is mock QuicStream or QuicConnection. That gives you confidence in code paths that will never behave the same way under real conditions.</p>
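<p>A sketch of such a test, assuming xUnit and a hypothetical <code>TestServer</code> helper that hosts the MsQuic listener in process. The ALPN value is an assumption and must match whatever your server advertises.</p>
<pre><code class="lang-csharp">[Fact]
public async Task Server_accepts_a_control_stream()
{
    await using var server = await TestServer.StartAsync(); // hypothetical in-process host

    await using var connection = await QuicConnection.ConnectAsync(new QuicClientConnectionOptions
    {
        RemoteEndPoint = server.EndPoint,
        DefaultStreamErrorCode = 0,
        DefaultCloseErrorCode = 0,
        ClientAuthenticationOptions = new SslClientAuthenticationOptions
        {
            ApplicationProtocols = new List&lt;SslApplicationProtocol&gt; { new("my-proto") }, // assumption
            // Test only: accept the server's self signed certificate.
            RemoteCertificateValidationCallback = (_, _, _, _) =&gt; true
        }
    });

    await using var stream = await connection.OpenOutboundStreamAsync(QuicStreamType.Bidirectional);
    // Send a control message here and assert on the server's response.
}
</code></pre>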
<p>If you choose MsQuic, you choose realism.</p>
<h3 id="heading-when-this-level-of-control-is-worth-it">When this level of control is worth it</h3>
<p>Direct MsQuic usage is not for every service. It is worth it when you need fine grained control over concurrency, latency, and state. It is worth it when HTTP semantics get in your way. It is worth it when you own both ends of the connection. It is not worth it for simple CRUD APIs or public facing endpoints where ecosystem compatibility matters more than raw capability.</p>
<p>The key is intentionality. MsQuic is a powerful tool. Used deliberately, it lets you build systems that were previously awkward or impossible. Used casually, it will punish you.</p>
<p>Working directly with MsQuic in .NET forces you to think like a systems programmer again, but with modern language tools at your disposal. You deal with lifetimes, concurrency, and flow control explicitly, but you also get async, memory safety, and rich diagnostics.</p>
<p>If you are building serious distributed systems and you are willing to meet the transport layer on its own terms, this approach opens doors that HTTP keeps closed. That is the real payoff.</p>
]]></content:encoded></item><item><title><![CDATA[Building a Zero Trust Container Platform in Azure]]></title><description><![CDATA[Zero trust changes the way you design container platforms. You stop assuming anything is safe. Every identity must prove itself. Every resource must be isolated. Every boundary must be explicit. A modern Azure platform gives you the pieces, but it do...]]></description><link>https://dotnetdigest.com/building-a-zero-trust-container-platform-in-azure</link><guid isPermaLink="true">https://dotnetdigest.com/building-a-zero-trust-container-platform-in-azure</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure-devops]]></category><category><![CDATA[Security]]></category><category><![CDATA[securityawareness]]></category><category><![CDATA[.NET]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Fri, 05 Dec 2025 14:25:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1765020145518/07d4b914-555e-48eb-88ff-44bceee5c5ce.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Zero trust changes the way you design container platforms. You stop assuming anything is safe. Every identity must prove itself. Every resource must be isolated. Every boundary must be explicit. A modern Azure platform gives you the pieces, but it does not give you a safe system by default. You must assemble the environment layer by layer until nothing passes without proof.</p>
<p>A mature zero trust platform in Azure uses two complementary environments. Azure Container Apps provides a managed control plane that handles scaling, ingress, certificates, identity, and workload isolation without the operational drag of Kubernetes. AKS provides granular control for workloads that need custom networking, privileged sidecars, message brokers, private ingress controllers, or latency sensitive services. The two environments serve different purposes. When you design them together, you give each workload the environment that matches its shape. The platform becomes flexible without sacrificing discipline.</p>
<p>The foundation begins with identity. Zero trust starts with the principle that no workload should use secrets. A container should never store a connection string, API key, or password. Instead it should authenticate using workload identities that Azure can verify cryptographically. This means every Container App uses a managed identity. Every AKS workload uses workload identity federation. Every pipeline that pushes an image authenticates to Azure Container Registry using OIDC with no stored credentials. When everything authenticates with signed identity tokens, you close the first major attack path. A compromised container cannot steal a secret that does not exist.</p>
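<p>As an illustration, a .NET service picks up its managed identity through <code>DefaultAzureCredential</code> from the Azure SDK; the storage account URL below is an assumption.</p>
<pre><code class="lang-csharp">using Azure.Identity;
using Azure.Storage.Blobs;

// Resolves to the managed identity when running in Azure and to your
// developer login locally. No connection string or key is stored anywhere.
var credential = new DefaultAzureCredential();

var blobs = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"), // account name is an assumption
    credential);
</code></pre>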
<p>Once identity is in place, you tighten the supply chain. Containers are only trustworthy if you control the images. Azure Container Registry supports content trust. Your CI pipeline signs images using Notation. When an image reaches the registry, it arrives with a signature bound to the digest. The signature becomes the source of truth. Container Apps and AKS then enforce signature verification. Any unsigned image is rejected. Any image with mismatched metadata is blocked. The orchestrator no longer trusts the registry alone. It trusts the cryptographic identity that your pipeline produced. This step removes entire classes of supply chain attacks. A poisoned dependency cannot overwrite an image digest. A malicious actor cannot replace an image in the registry without breaking the signature.</p>
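<p>The signing step itself is small. A sketch with the Notation CLI, where the registry name, image name, and key name are all assumptions:</p>
<pre><code class="lang-bash"># Resolve the image digest in ACR, then sign that exact digest.
DIGEST=$(az acr repository show --name myregistry \
  --image myapp:1.0.0 --query digest -o tsv)

notation sign "myregistry.azurecr.io/myapp@${DIGEST}" --key my-signing-key
</code></pre>
<p>Signing the digest rather than the tag is what binds the signature to the bytes, not to a mutable name.</p>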
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764942536878/092e9f4c-3218-415c-9d2f-339f5edad340.png" alt class="image--center mx-auto" /></p>
<p>You also need to remove uncertainty from the build itself. A secure image must be reproducible. You achieve this by locking base images by digest. You avoid floating tags. You build .NET services with deterministic restore and locked NuGet dependencies. You use multi stage builds that isolate build tools from runtime. You trim the application. You produce a small, predictable executable in a distroless runtime image. Each image has a clear lineage. When you audit the container, you can explain why every file exists. Predictability becomes a security guarantee because unpredictable containers hide the unknown.</p>
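<p>A sketch of such a build. In a real pipeline both <code>FROM</code> lines would be pinned by digest; the project name and tags here are assumptions.</p>
<pre><code class="lang-dockerfile"># Pin these by digest (image@sha256:...) in real builds.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["MyApp.csproj", "packages.lock.json", "./"]
# Locked restore fails the build if the dependency graph drifts from the lock file.
RUN dotnet restore --locked-mode
COPY . .
RUN dotnet publish -c Release -o /app --no-restore

# Chiseled runtime image: no shell, no package manager, non-root by default.
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy-chiseled
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
</code></pre>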
<p>Networking forms the next layer. You do not allow workloads to talk freely. Zero trust networking begins with private environments. Container Apps runs inside an internal environment contained by a locked down VNET. It has no public ingress. Ingress only flows through an Application Gateway with WAF enabled or through an Azure Front Door instance with strict routing rules. AKS sits inside a sibling network segment. It uses Azure CNI with dedicated subnets. Pods receive IP addresses that Azure understands. Network security groups restrict movement between subnets. Private endpoints control access to databases, service buses, and internal APIs. No workload can reach the public internet unless you explicitly allow it. Outbound traffic either flows through Azure Firewall or gets blocked. This turns the network into a verification layer. Traffic paths become predictable. Unauthorised access attempts become visible.</p>
<p>Two orchestrators require one policy engine. Azure Policy sits above both environments. It enforces that images must be signed. It enforces that container registries must be private. It enforces HTTPS ingress. It denies hostPath mounts. It blocks privileged containers in AKS. It blocks sidecars that run with elevated permissions. Policy becomes the first guard. Anything that violates the platform rules never reaches production. Developers gain clarity because the platform no longer accepts unsafe workloads. Security becomes a constraint baked into the environment.</p>
<p>A zero trust architecture also needs runtime controls. Containers fail. Some failures indicate ordinary bugs. Others indicate an attack. You cannot tell the difference if the environment lacks visibility. Container Apps provides request telemetry, environment logs, and revision history. AKS provides node-level and pod-level audit events. You push these logs into a central workspace. From there you create behavioural baselines. A healthy container starts within a known window. It opens a predictable set of outbound ports. It accesses a small range of internal dependencies. Anything outside these patterns triggers an inspection. This method works because a hardened environment has fewer legitimate behaviours. When you reduce the normal path, anomalies become clearer.</p>
<p>The relationship between AKS and Container Apps becomes important here. Container Apps handles most services because it reduces the operational load. AKS carries the specialised workloads. You do not run general web APIs in AKS unless you need capabilities the managed platform cannot provide. When you shift workloads into Container Apps by default, the blast radius becomes smaller. The majority of your applications run inside a managed sandbox with sealed boundaries. AKS becomes a controlled environment instead of a dependency for everything. Zero trust benefits from this split because you reduce complexity where you can and apply granular control where you must.</p>
<p>The next boundary is the connection between developer machines and the platform. Zero trust assumes developers should never authenticate directly to production clusters. They authenticate to Azure. Azure issues tokens. Tokens allow controlled, short-lived access. You use role assignments to limit what a human can do. You log every action. You avoid kubectl wherever possible by using GitOps for deployment. When AKS receives changes from a Git repository rather than from direct human commands, unapproved modifications cannot sneak in quietly.</p>
<p>To make the architecture clearer, it helps to map the trust flow.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1764942414287/a63e3e57-5042-4f22-9a73-f3f11bc38c41.png" alt class="image--center mx-auto" /></p>
<p>The diagram shows every step requires proof. Nothing runs without identity. Nothing runs without signature. Nothing reaches infrastructure without policy.</p>
<p>At this point, you introduce defence in depth. You assume a container can still be compromised. You assume an attacker may reach a running process. The question becomes how far they can move. In Container Apps they face an immutable runtime with no shell. They cannot write to the root filesystem. They cannot mount disk. They cannot issue outbound calls unless you allowed them. Capabilities are limited because the managed platform does not expose them. In AKS you set your seccomp profiles. You drop capabilities. You run each pod with a read only root filesystem. You enforce non root users. You use the pod security admission controller to reject workloads that ask for dangerous privileges. By controlling the runtime, you reduce the space an attacker can explore.</p>
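<p>In AKS those runtime controls reduce to a few lines of pod spec. A sketch, with the names and image reference as assumptions:</p>
<pre><code class="lang-yaml">apiVersion: v1
kind: Pod
metadata:
  name: myapp            # name is an assumption
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: myapp
      image: myregistry.azurecr.io/myapp@sha256:...   # pinned by digest
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
</code></pre>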
<p>You also protect data access paths. Databases, queues, caches, and internal APIs insist on workload identity. They do not accept keys. They do not trust internal networks alone. They evaluate tokens that Azure issues. This blocks lateral movement through credential theft. Even if a container is compromised, the attacker cannot extract secrets because none exist. They cannot reuse the identity token because it expires quickly. They cannot impersonate the service across long time windows because identity is bound to the workload that Azure created. A .NET application inside this environment becomes simpler to defend. It starts quickly. It reads configuration from Azure. It connects using managed identity. It logs to the central workspace. It runs inside a runtime image that contains nothing except the application and the libraries it needs. It cannot spawn a shell. It cannot load native tooling. It cannot reach the internet. The container’s behaviour becomes predictable. That predictability is the security boundary. The architecture becomes stronger when you integrate continuous verification. You scan images in the registry to detect new vulnerabilities. You monitor supply chain sources. You rebuild images regularly. You track dependency changes through SBOM diffs. When a vulnerability appears, you know which artefacts are affected. You eliminate guesswork. Your platform remains ahead of supply chain attacks because each layer is small, signed, and auditable.</p>
<p>With zero trust you build a chain where each link reinforces the others. Identity reinforces networking. Networking reinforces policy. Policy reinforces the build pipeline. The build pipeline reinforces the registry. The registry reinforces the orchestrator. The orchestrator reinforces runtime boundaries. If one link weakens, the others slow the attacker down. Designing a zero trust container platform in Azure becomes easier when you accept that most workloads belong in Container Apps and only specialised workloads belong in AKS. This reduces the complexity of your attack surface. It also forces clarity into your architecture. You give developers a safe platform by default. You give infrastructure engineers the tools to handle advanced cases. You give the organisation a trustworthy environment built on clear boundaries instead of hope.</p>
<p>Once the platform reaches this stage, operations become quieter. Incidents become easier to diagnose because the unknowns shrink. Random behaviour stands out. Compromised workloads expose themselves through abnormal patterns.</p>
<p>Containers become predictable.</p>
<p>Predictability becomes security.</p>
<p>This is how you build a zero trust container platform in Azure. Not with one tool. Not with one setting. With a chain of small, verifiable steps that transform the platform into something attackers cannot navigate.</p>
]]></content:encoded></item><item><title><![CDATA[Async deadlocks in C#]]></title><description><![CDATA[I recently wrote about async locking and thought deadlocks swhould probably have its own article. Async deadlocks are some of the most misunderstood failures in modern .NET systems. They rarely look like traditional deadlocks. There is no thread froz...]]></description><link>https://dotnetdigest.com/async-deadlocks-in-c</link><guid isPermaLink="true">https://dotnetdigest.com/async-deadlocks-in-c</guid><category><![CDATA[deadlocks]]></category><category><![CDATA[asynchronous]]></category><category><![CDATA[C#]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sun, 30 Nov 2025 11:14:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764501324160/f81758e8-97e5-4b32-9aa1-6254871590aa.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I recently wrote about <a target="_blank" href="https://dotnetdigest.com/async-locking-in-c">async locking</a> and thought deadlocks swhould probably have its own article. Async deadlocks are some of the most misunderstood failures in modern .NET systems. They rarely look like traditional deadlocks. There is no thread frozen in a debugger. No obvious circular wait. CPU normally looks fine. Memory is stable. Requests just… stop completing.</p>
<p>By the time you realise what is happening, it has already become a production incident.</p>
<h2 id="heading-async-deadlocks-are-logical-not-mechanical">Async deadlocks are logical, not mechanical</h2>
<p>Traditional deadlocks are mechanical.</p>
<p>A thread holds lock A and waits for lock B.<br />Another thread holds lock B and waits for lock A.<br />Nothing moves.</p>
<p>Async deadlocks are <em>logical</em>.</p>
<p>A logical unit of work suspends, awaiting progress that cannot happen because the rest of the system is waiting on that suspended work to finish.</p>
<p>No single thread is blocked forever. The system is blocked <em>as a whole</em>.</p>
<p>This is why they are harder to see.</p>
<h2 id="heading-the-historical-trap-synchronizationcontext">The historical trap - SynchronizationContext</h2>
<p>The most infamous async deadlock pattern in C# comes from UI and server frameworks that install a <code>SynchronizationContext</code>.</p>
<p>WPF.<br />WinForms.<br />Classic <a target="_blank" href="http://ASP.NET">ASP.NET</a>.</p>
<p>The rule these environments enforce is simple.</p>
<p>Only one logical operation at a time is allowed to run on the main context.</p>
<p>Look at this.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> <span class="hljs-title">GetValue</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">return</span> GetValueAsync().Result;
}

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">GetValueAsync</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">100</span>);
    <span class="hljs-keyword">return</span> <span class="hljs-string">"done"</span>;
}
</code></pre>
<p>This looks innocent. It’s not.</p>
<p>The async method captures the current synchronisation context, because that is the default behaviour. After <code>await</code>, it will try to resume on the original context.</p>
<p>The calling thread blocks on <code>.Result</code>.</p>
<p>That thread <em>is the synchronisation context</em>.</p>
<p>So the continuation waits for a context that is blocked waiting for the continuation.</p>
<p>That is a deadlock.</p>
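<p>The standard escape is to stay asynchronous end to end. Where a synchronous boundary truly cannot be removed, offloading to the thread pool avoids the captured context, at the cost of still blocking one thread. A sketch; the method names here are illustrative, not part of any API.</p>
<pre><code class="lang-csharp">// Preferred: keep the call chain async all the way up.
public async Task&lt;string&gt; GetValueSafelyAsync()
{
    return await GetValueAsync();
}

// Common, imperfect workaround at an unavoidable sync boundary.
// Task.Run schedules the work without the captured context, so the
// continuation resumes on a pool thread instead of the blocked one.
public string GetValueFromSyncBoundary()
{
    return Task.Run(() =&gt; GetValueAsync()).GetAwaiter().GetResult();
}
</code></pre>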
<h2 id="heading-why-you-still-see-this-in-senior-codebases">Why you still see this in senior codebases</h2>
<p>The dangerous thing about this class of deadlock is that it does not always appear in development.</p>
<p><a target="_blank" href="http://ASP.NET">ASP.NET</a> Core does not install a classic synchronisation context. Many console apps do not. Unit tests often run without one.</p>
<p>So the code “works”.</p>
<p>Then it gets copied to a UI application, a background service with a context, or an old <a target="_blank" href="http://ASP.NET">ASP.NET</a> app.</p>
<p>And it deadlocks instantly.</p>
<p>Async deadlocks caused by context capture are environment sensitive, which makes them particularly nasty.</p>
<h2 id="heading-configureawaitfalse-is-not-magic-dust"><code>ConfigureAwait(false)</code> is not magic dust</h2>
<p>You already know the advice. “Just add <code>ConfigureAwait(false)</code>”.</p>
<p>That advice is incomplete and sometimes wrong. <code>ConfigureAwait(false)</code> solves <strong>one</strong> kind of async deadlock, context capture inversion. It does not solve async deadlocks in general. If you do not understand <em>why</em> it helps, you will apply it inconsistently and still get burned.</p>
<p>What <code>ConfigureAwait(false)</code> actually says is, “Do not attempt to resume on the captured synchronization context”. That is all. It does nothing about locks, ordering, resource contention, or logical dependency cycles. In library code, it is usually correct. In application code, it is situational.</p>
<p>Blind usage creates other classes of bugs, especially involving thread affine APIs.</p>
<h2 id="heading-async-deadlocks-that-have-nothing-to-do-with-synchronisation-contexts">Async deadlocks that have nothing to do with synchronisation contexts</h2>
<p>This is where most experienced developers get caught.</p>
<p>Look at this pattern.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleAsync</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">await</span> _lock.WaitAsync();
    <span class="hljs-keyword">try</span>
    {
        <span class="hljs-keyword">await</span> OperationAAsync();
    }
    <span class="hljs-keyword">finally</span>
    {
        _lock.Release();
    }
}

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">OperationAAsync</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">await</span> OperationBAsync();
}

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">OperationBAsync</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">await</span> _lock.WaitAsync();
    <span class="hljs-keyword">try</span>
    {
        <span class="hljs-comment">// work away</span>
    }
    <span class="hljs-keyword">finally</span>
    {
        _lock.Release();
    }
}
</code></pre>
<p>No <code>.Result</code>.<br />No <code>.Wait()</code>.<br />No synchronisation context.</p>
<p>And yet, under the right call path, this deadlocks <em>logically</em>.</p>
<p><code>HandleAsync</code> holds the lock and awaits a call that tries to reacquire the same lock. The continuation cannot proceed until the outer method releases the lock. The outer method cannot release the lock until the inner method completes.</p>
<p>Nothing blocks a thread, but nothing can make progress.</p>
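<p>One structural fix, sketched here with renamed methods, is to take the lock only at the public boundary and keep the inner call path lock free.</p>
<pre><code class="lang-csharp">// Take the lock once at the public boundary; the core never touches it.
public async Task HandleAsync()
{
    await _lock.WaitAsync();
    try
    {
        await OperationACoreAsync();
    }
    finally
    {
        _lock.Release();
    }
}

private Task OperationACoreAsync() =&gt; OperationBCoreAsync();

private Task OperationBCoreAsync()
{
    // work away, with no lock acquisition on this path
    return Task.CompletedTask;
}
</code></pre>
<p>The convention generalises: public methods own locking, private cores assume the lock is already held.</p>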
<h2 id="heading-assumptions-are-deadly-in-async-code">Assumptions are deadly in async code</h2>
<p>Many people unconsciously assume that async code behaves like synchronous code.</p>
<p>It does not.</p>
<p>Async methods yield control explicitly. When they resume, they do so independently of their callers’ logical execution. If you acquire an async lock, <em>any awaited call inside that section must be treated as potentially reentrant</em>. If that awaited call tries to acquire the same lock, directly or indirectly, you have created a self deadlock.</p>
<p>And when it happens, there is no obvious smoking gun.</p>
<h2 id="heading-bounded-resources-amplify-async-deadlocks">Bounded resources amplify async deadlocks</h2>
<p>Most async deadlocks become visible only under load.</p>
<p>Why?</p>
<p>Because async operations still rely on bounded resources.</p>
<p>Thread pool workers.<br />Database connections.<br />HTTP sockets.</p>
<p>A subtle deadlock can turn into a full system stall when all workers are waiting on continuations that cannot be scheduled because dependents cannot release resources.</p>
<p>For example, blocking on async work inside a limited connection pool is effectively a deadlock amplifier.</p>
<h2 id="heading-timeouts-are-not-a-fix-they-are-a-symptom-masker">Timeouts are not a fix, they are a symptom masker</h2>
<p>Adding timeouts often makes async deadlocks worse, not better. Timeouts break dependency cycles non deterministically. One operation fails, another proceeds, state becomes inconsistent. Now you have partial execution and recovery problems layered on top of concurrency bugs. Timeouts belong at <em>system boundaries</em>, not inside coordination logic.</p>
<p>If an async operation needs a timeout to avoid deadlocking, the structure is wrong.</p>
<h2 id="heading-diagnosing-async-deadlocks-in-production">Diagnosing async deadlocks in production</h2>
<p>This is hard. There is no sugar coating it.</p>
<p>Here are some practical signals.</p>
<p>Requests pile up, but CPU is low.<br />Thread pool usage is stable, not saturated.<br />Awaited tasks are pending for unusually long durations.<br />Logs show method entry but not exit.</p>
<p>At this point, stack traces lie. The logical call chain is split across continuations.</p>
<p>Good telemetry is essential.</p>
<p>You need to log <em>when locks are acquired</em>, <em>when they are released</em>, and <em>how long they are held</em>.</p>
<p>Most systems don’t do this.</p>
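<p>That telemetry is cheap to retrofit with a small wrapper. A sketch; <code>InstrumentedLock</code> is a hypothetical type, using <code>Microsoft.Extensions.Logging</code> and <code>Stopwatch</code>.</p>
<pre><code class="lang-csharp">sealed class InstrumentedLock
{
    private readonly SemaphoreSlim _semaphore = new(1, 1);
    private readonly ILogger _logger;
    private readonly string _name;

    public InstrumentedLock(string name, ILogger logger) =&gt; (_name, _logger) = (name, logger);

    public async Task&lt;IDisposable&gt; AcquireAsync()
    {
        var waitTimer = Stopwatch.StartNew();
        await _semaphore.WaitAsync();
        _logger.LogInformation("Lock {Lock} acquired after {WaitMs}ms", _name, waitTimer.ElapsedMilliseconds);

        var holdTimer = Stopwatch.StartNew();
        return new Releaser(() =&gt;
        {
            _semaphore.Release();
            _logger.LogInformation("Lock {Lock} released after {HeldMs}ms", _name, holdTimer.ElapsedMilliseconds);
        });
    }

    private sealed class Releaser(Action onDispose) : IDisposable
    {
        public void Dispose() =&gt; onDispose();
    }
}
</code></pre>
<p>Long wait times point at contention. Long hold times point at awaits inside the critical section.</p>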
<h2 id="heading-the-most-reliable-prevention-technique">The most reliable prevention technique</h2>
<p>The most reliable way to avoid async deadlocks is structural.</p>
<p>Do not hold locks across awaits unless you are only protecting in memory state.<br />Do not call back into components that can reenter your locking boundaries.<br />Define strict ownership and acquisition order for async locks.<br />Push ordering concerns into queues or channels when sequence matters.</p>
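<p>The last point deserves a sketch. With <code>System.Threading.Channels</code>, a single consumer gives you strict ordering with no lock at all.</p>
<pre><code class="lang-csharp">var channel = Channel.CreateUnbounded&lt;Func&lt;Task&gt;&gt;();

// One reader drains the channel, so work executes strictly in enqueue order.
_ = Task.Run(async () =&gt;
{
    await foreach (var work in channel.Reader.ReadAllAsync())
    {
        await work();
    }
});

// Producers enqueue instead of locking:
await channel.Writer.WriteAsync(() =&gt; OperationAAsync());
</code></pre>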
<p>The safest async systems aren’t flash.</p>
<p>They are explicit about ownership.</p>
<p>They minimise shared state.</p>
<p>They avoid cycles.</p>
]]></content:encoded></item><item><title><![CDATA[Async locking in C#]]></title><description><![CDATA[If you write C# and you use async and await seriously, you will eventually run into locking problems that feel unfamiliar. The rules you learned around lock, Monitor, and critical sections start to fall apart once continuations, thread hopping, and c...]]></description><link>https://dotnetdigest.com/async-locking-in-c</link><guid isPermaLink="true">https://dotnetdigest.com/async-locking-in-c</guid><category><![CDATA[asynchronous]]></category><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[software development]]></category><category><![CDATA[Microsoft]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Sat, 29 Nov 2025 11:08:55 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764414446701/328d1e3f-b1df-4029-aaf0-c6a586eea117.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you write C# and you use <code>async</code> and <code>await</code> seriously, you will eventually run into locking problems that feel unfamiliar. The rules you learned around <code>lock</code>, <code>Monitor</code>, and critical sections start to fall apart once continuations, thread hopping, and cooperative scheduling enter the scene. For mid-level engineers and above its important to have a good understanding <em>why</em> classical locking does not translate cleanly to asynchronous code, what the real failures look like in production systems, and how to apply the correct async-safe patterns without turning your codebase into a coordination nightmare.</p>
<p>I’m not going to repeat the usual advice like “don’t use <code>lock</code> with async”. Instead, we’ll break down <em>why</em>, then work through practical patterns that hold up under load.</p>
<h2 id="heading-the-mental-model-shift-async-requires">The mental model shift async requires</h2>
<p>Traditional locking in C# assumes a simple invariant.</p>
<p>A thread enters a critical section.<br />That thread leaves the critical section.<br />The runtime enforces mutual exclusion in between.</p>
<p>Async code violates the first assumption.</p>
<p>When execution hits an <code>await</code>, you are no longer in control of <em>which</em> thread executes the remainder of the method. The continuation might resume on a different worker thread, later, or not at all if the operation is cancelled or faults.</p>
<p>This immediately breaks the idea of thread ownership.</p>
<p>A <code>lock</code> does not protect <em>logical execution</em>. It protects a thread bound region. Async code is not thread bound.</p>
<p>This is the root issue everything else builds on.</p>
<h2 id="heading-why-lock-await-is-fundamentally-broken">Why <code>lock</code> + <code>await</code> is fundamentally broken</h2>
<p>Take the naïve example everyone eventually tries.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">lock</span> (_sync)
{
    <span class="hljs-keyword">await</span> DoWorkAsync();
}
</code></pre>
<p>The compiler rejects this, which is good. But understanding <em>why</em> matters.</p>
<p>A <code>lock</code> expands into <code>Monitor.Enter</code> and <code>Monitor.Exit</code>. The runtime <em>must</em> see a well defined pairing between those calls on the same thread.</p>
<p>When the method suspends at <code>await</code>, control returns to the caller and the thread is released back to the pool <em>while the monitor is still held</em>. When the continuation resumes, it may resume on a different thread.</p>
<p>There is no legal way for the runtime to re-associate the original lock with a different thread. That is why the compiler blocks this construct entirely.</p>
<p>If this were allowed, you would deadlock entire thread pools all the time.</p>
<p>So the rule is deeper than “you aren’t allowed to do this”. The rule is that <strong>thread-based mutual exclusion and asynchronous execution live at different layers of the abstraction stack</strong>.</p>
<h2 id="heading-the-real-problem-you-are-actually-trying-to-solve">The real problem you are actually trying to solve</h2>
<p>Most people think they need a lock.</p>
<p>In reality, they usually need one of three things:</p>
<ol>
<li><p>Exclusive access to a resource across async boundaries</p>
</li>
<li><p>Ordering guarantees between asynchronous operations</p>
</li>
<li><p>Protection against concurrent mutation of shared state</p>
</li>
</ol>
<p><code>lock</code> only solves #3, and only in synchronous code.</p>
<p>Async locking is about solving these problems <em>without</em> blocking threads.</p>
<p>Blocking is expensive. Thread pool starvation is real. Async scalability depends on allowing threads to return to the pool whenever work is waiting on I/O.</p>
<p>So any solution that blocks threads to maintain exclusivity defeats the reason you used async in the first place.</p>
<h2 id="heading-semaphoreslim-the-async-primitive"><code>SemaphoreSlim</code> - the async primitive</h2>
<p>The most widely usable async-compatible locking primitive in the BCL is <code>SemaphoreSlim</code>.</p>
<p>Not because it is perfect, but because it meets two critical requirements.</p>
<p>It can be awaited asynchronously.<br />It does not depend on thread identity.</p>
<p>At its simplest, a semaphore with an initial and maximum count of 1 behaves like a mutex.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> SemaphoreSlim _mutex = <span class="hljs-keyword">new</span>(<span class="hljs-number">1</span>, <span class="hljs-number">1</span>);

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">UpdateAsync</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">await</span> _mutex.WaitAsync();
    <span class="hljs-keyword">try</span>
    {
        <span class="hljs-keyword">await</span> DoWorkAsync();
    }
    <span class="hljs-keyword">finally</span>
    {
        _mutex.Release();
    }
}
</code></pre>
<p>This works because <code>WaitAsync</code> does not block a thread. It returns a <code>Task</code> that completes when the semaphore becomes available.</p>
<p>Ownership is logical, not thread affine.</p>
<h2 id="heading-why-semaphoreslim-is-still-easy-to-misuse">Why <code>SemaphoreSlim</code> is still easy to misuse</h2>
<p>Although <code>SemaphoreSlim</code> is async-safe, that does not mean it is foolproof.</p>
<h3 id="heading-forgotten-release">Forgotten <code>Release</code></h3>
<p>Unlike <code>lock</code>, the compiler cannot protect you here. If control flow exits early or an exception escapes without hitting <code>Release</code>, you have a permanent leak.</p>
<p>This is semantically closer to manually managing file handles than using a <code>lock</code> block.</p>
<h3 id="heading-cancellation-edge-cases">Cancellation edge cases</h3>
<p>If you pass a <code>CancellationToken</code> to <code>WaitAsync</code>, and the wait is cancelled after the semaphore has been acquired but before you reach your <code>try</code> block, you can leak the semaphore without realising it.</p>
<p>This is rare, but in high throughput or fault heavy systems, it happens.</p>
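<p>A defensive sketch of cancellation-aware acquisition (reusing the hypothetical <code>DoWorkAsync</code> from the earlier examples): track whether the wait actually completed, and only release if it did.</p>
<pre><code class="lang-csharp">private readonly SemaphoreSlim _mutex = new(1, 1);

public async Task UpdateAsync(CancellationToken ct)
{
    // WaitAsync throws OperationCanceledException when cancelled before
    // acquisition, so the flag is only set once we genuinely own the semaphore.
    var acquired = false;
    try
    {
        await _mutex.WaitAsync(ct);
        acquired = true;
        await DoWorkAsync();
    }
    finally
    {
        if (acquired)
        {
            _mutex.Release();
        }
    }
}
</code></pre>
<p>The flag closes the window where an exception between acquisition and the <code>try</code> block would otherwise leak the semaphore.</p>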
<h3 id="heading-over-serialisation">Over serialisation</h3>
<p>Using a single semaphore for a logically partitionable resource can crater your throughput without showing any obvious bug symptoms.</p>
<p>If your protected state can be sharded, it should be.</p>
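<p>As a rough sketch of sharding, a fixed pool of semaphores indexed by a stable hash of the key spreads contention without unbounded growth. The shard count of 16 is an arbitrary assumption to tune, and <code>DoWorkAsync</code> is again hypothetical.</p>
<pre><code class="lang-csharp">private readonly SemaphoreSlim[] _shards =
    Enumerable.Range(0, 16).Select(_ =&gt; new SemaphoreSlim(1, 1)).ToArray();

private SemaphoreSlim ShardFor(string key) =&gt;
    // Mask off the sign bit so the modulo result is always a valid index.
    _shards[(key.GetHashCode() &amp; 0x7FFFFFFF) % _shards.Length];

public async Task UpdateAsync(string key)
{
    var shard = ShardFor(key);
    await shard.WaitAsync();
    try
    {
        await DoWorkAsync(key);
    }
    finally
    {
        shard.Release();
    }
}
</code></pre>
<p>Keys that hash to the same shard still serialise against each other, but contention drops roughly in proportion to the shard count.</p>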
<h2 id="heading-the-disposable-lock-pattern">The disposable lock pattern</h2>
<p>To reduce the surface area for mistakes, many people encapsulate async locking in a disposable abstraction.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">AsyncLock</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> SemaphoreSlim _semaphore = <span class="hljs-keyword">new</span>(<span class="hljs-number">1</span>, <span class="hljs-number">1</span>);

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;IDisposable&gt; <span class="hljs-title">LockAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">await</span> _semaphore.WaitAsync();
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Releaser(_semaphore);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Releaser</span> : <span class="hljs-title">IDisposable</span>
    {
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> SemaphoreSlim _semaphore;

        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">Releaser</span>(<span class="hljs-params">SemaphoreSlim semaphore</span>)</span>
        {
            _semaphore = semaphore;
        }

        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Dispose</span>(<span class="hljs-params"></span>)</span>
        {
            _semaphore.Release();
        }
    }
}
</code></pre>
<p>The usage becomes structurally similar to <code>lock</code>.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> (<span class="hljs-keyword">await</span> _asyncLock.LockAsync())
{
    <span class="hljs-keyword">await</span> DoWorkAsync();
}
</code></pre>
<p>This pattern improves correctness dramatically by making the release deterministic via <code>IDisposable</code>.</p>
<p>One subtle point though - <code>Dispose</code> is synchronous. That is acceptable because releasing a semaphore is not an async operation.</p>
<h2 id="heading-why-async-locks-should-protect-as-little-as-possible">Why async locks should protect <em>as little as possible</em></h2>
<p>Async locks are cheaper than blocking locks, but they are not free.</p>
<p>Every awaited wait adds allocation pressure and scheduling overhead. Worse still, long critical sections increase tail latency non-linearly.</p>
<p>A good async locking rule of thumb is this:</p>
<p>Protect <strong>state mutation</strong>, not <strong>work</strong>.</p>
<p>This pattern is bad:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> _mutex.WaitAsync();
<span class="hljs-keyword">try</span>
{
    <span class="hljs-keyword">await</span> CallExternalApiAsync();
    UpdateSharedState();
}
<span class="hljs-keyword">finally</span>
{
    _mutex.Release();
}
</code></pre>
<p>This is better:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> CallExternalApiAsync();

<span class="hljs-keyword">await</span> _mutex.WaitAsync();
<span class="hljs-keyword">try</span>
{
    UpdateSharedState(result);
}
<span class="hljs-keyword">finally</span>
{
    _mutex.Release();
}
</code></pre>
<p>Hold the lock only while touching shared state. Everything else should happen outside.</p>
<h2 id="heading-async-locks-are-about-coordination-not-safety">Async locks are about coordination, not safety</h2>
<p>This distinction is important.</p>
<p>Async locks do not protect you from unsafe code, torn writes, or low-level memory visibility issues. The C# memory model still applies.</p>
<p>Async locks coordinate <em>logical concurrency</em>, not instruction-level execution.</p>
<p>If you are working with low-level mutable structures, you still need to think in terms of memory barriers and atomic operations.</p>
<p>Most application code does not need this. Infrastructure code often does.</p>
<h2 id="heading-concurrentdictionary-does-not-eliminate-the-need-for-async-locking">ConcurrentDictionary does not eliminate the need for async locking</h2>
<p>A common misconception is that thread-safe collections remove the need for async coordination.</p>
<p>They don’t.</p>
<p>They prevent corruption of the data structure itself.</p>
<p>This is not atomic:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">if</span> (!_dict.ContainsKey(key))
{
    <span class="hljs-keyword">var</span> <span class="hljs-keyword">value</span> = <span class="hljs-keyword">await</span> BuildValueAsync();
    _dict[key] = <span class="hljs-keyword">value</span>;
}
</code></pre>
<p>Two concurrent callers can both observe the key as missing and both build the value.</p>
<p>Async locking is often about preventing duplicated work, not just preventing corruption.</p>
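<p>One well-known way to close that gap without an explicit lock is to store a <code>Lazy&lt;Task&lt;T&gt;&gt;</code>, so that <code>GetOrAdd</code> publishes a single shared computation. A sketch, where <code>Value</code> is a placeholder type and <code>BuildValueAsync</code> is the hypothetical factory from the snippet above:</p>
<pre><code class="lang-csharp">private readonly ConcurrentDictionary&lt;string, Lazy&lt;Task&lt;Value&gt;&gt;&gt; _cache = new();

public Task&lt;Value&gt; GetOrBuildAsync(string key) =&gt;
    // GetOrAdd may invoke the factory twice under a race, but Lazy ensures
    // only one of the competing wrappers ever starts BuildValueAsync.
    _cache.GetOrAdd(
        key,
        k =&gt; new Lazy&lt;Task&lt;Value&gt;&gt;(() =&gt; BuildValueAsync(k))).Value;
</code></pre>
<p>One caveat: a faulted task stays cached, so production code usually evicts the entry when the task fails.</p>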
<h2 id="heading-the-async-lock-per-key-pattern">The “async lock per key” pattern</h2>
<p>In high throughput systems, global locks are scalability killers.</p>
<p>A common refinement is per key locking.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ConcurrentDictionary&lt;<span class="hljs-keyword">string</span>, AsyncLock&gt; _locks = <span class="hljs-keyword">new</span>();

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ProcessAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> key</span>)</span>
{
    <span class="hljs-keyword">var</span> asyncLock = _locks.GetOrAdd(key, _ =&gt; <span class="hljs-keyword">new</span> AsyncLock());

    <span class="hljs-keyword">using</span> (<span class="hljs-keyword">await</span> asyncLock.LockAsync())
    {
        <span class="hljs-keyword">await</span> DoWorkAsync(key);
    }
}
</code></pre>
<p>You still need an eviction strategy, otherwise the dictionary grows forever. That is a design problem, not a syntax problem.</p>
<h2 id="heading-async-locking-and-database-code">Async locking and database code</h2>
<p>One of the most common async locking mistakes is trying to serialise database operations in memory.</p>
<p>This is often a smell.</p>
<p>Databases already implement concurrency control. If you find yourself protecting database writes with in process async locks, ask yourself why.</p>
<p>Valid reasons exist, such as enforcing application level invariants or throttling external side effects. But locking to “avoid race conditions” usually means the invariant belongs in the database via constraints or transactions.</p>
<p>Async locks should sit above persistence, not try to re-implement it.</p>
<h2 id="heading-when-async-locking-is-the-wrong-solution-entirely">When async locking is the wrong solution entirely</h2>
<p>There are problems async locks cannot solve cleanly.</p>
<h3 id="heading-ordering-problems">Ordering problems</h3>
<p>If operations must occur in a strict sequence, queues or channels are a better fit.</p>
<h3 id="heading-backpressure">Backpressure</h3>
<p>A lock does not apply backpressure. It just queues waiters. If load spikes, waiters accumulate, latency explodes, and you still process everything.</p>
<p>Bounded channels or rate limiters are usually better here.</p>
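<p>A minimal sketch with <code>System.Threading.Channels</code>: a bounded channel makes producers wait when the queue is full, while a single consumer preserves ordering. <code>ProcessAsync</code> is a hypothetical handler.</p>
<pre><code class="lang-csharp">using System.Threading.Channels;

var channel = Channel.CreateBounded&lt;string&gt;(new BoundedChannelOptions(100)
{
    // Producers asynchronously wait instead of growing an unbounded queue.
    FullMode = BoundedChannelFullMode.Wait
});

// Producer side: WriteAsync applies backpressure when the channel is full.
await channel.Writer.WriteAsync("work-item");

// Single consumer: processes items strictly in arrival order.
await foreach (var item in channel.Reader.ReadAllAsync())
{
    await ProcessAsync(item);
}
</code></pre>
<p>This gives you ordering and backpressure in one primitive, which a lock cannot provide.</p>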
<h3 id="heading-cross-process-coordination">Cross process coordination</h3>
<p>Async locks are in-process only. They do not coordinate across instances, containers, or machines.</p>
<p>If you need distributed locking, you are in a different design space altogether.</p>
<h2 id="heading-testing-async-locking-behaviour">Testing async locking behaviour</h2>
<p>Unit tests rarely surface locking bugs. You need stress.</p>
<p>The simplest test is not clever.</p>
<p>Spawn many concurrent tasks.<br />Hammer the code.<br />Assert invariants hold.</p>
<p>Example.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> Task.WhenAll(
    Enumerable.Range(<span class="hljs-number">0</span>, <span class="hljs-number">1000</span>)
        .Select(_ =&gt; UpdateAsync())
);
</code></pre>
<p>Then run it repeatedly.</p>
<p>Async locking bugs are statistical. They fail under load, not under logic inspection.</p>
<h2 id="heading-a-practical-decision-checklist">A practical decision checklist</h2>
<p>When you feel the urge to add a lock to async code, pause and ask yourself:</p>
<p>Is this protecting shared mutable state, or work?<br />Can the state be isolated or partitioned instead?<br />Can the invariant live in the database or downstream system?<br />Does this need ordering, or just mutual exclusion?<br />What happens to throughput if contention increases tenfold?</p>
<p>If you cannot answer these, adding a lock will only move the problem around.</p>
<p>Async locking is not a replacement for <code>lock</code>. It is a different tool with different trade-offs.</p>
<p>Used correctly, async locks allow high-throughput, scalable coordination without blocking threads.</p>
<p>Used lazily, they hide architectural problems until load turns them into outages.</p>
]]></content:encoded></item><item><title><![CDATA[How TLS Works in .NET]]></title><description><![CDATA[TLS sits underneath almost every network call you make in .NET, whether it goes through HttpClient, SslStream, Kestrel, gRPC or QUIC. Most people treat TLS as a black box. Once you understand how the handshake works, how .NET validates certificates, ...]]></description><link>https://dotnetdigest.com/how-tls-works-in-net</link><guid isPermaLink="true">https://dotnetdigest.com/how-tls-works-in-net</guid><category><![CDATA[Security]]></category><category><![CDATA[https]]></category><category><![CDATA[TLS]]></category><category><![CDATA[.NET]]></category><category><![CDATA[networking]]></category><category><![CDATA[engineering]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Tue, 25 Nov 2025 22:30:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1764109761311/2e6cdd73-9c7c-4909-95eb-28931e3e3a0b.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>TLS sits underneath almost every network call you make in .NET, whether it goes through <code>HttpClient</code>, <code>SslStream</code>, Kestrel, gRPC or QUIC. Most people treat TLS as a black box. Once you understand how the handshake works, how .NET validates certificates, how ALPN decides the protocol, and how session resumption speeds up repeat connections, you stop guessing and start diagnosing the right problems.</p>
<p>In .NET the core TLS behaviour does not come from managed code. The runtime wraps the operating system’s native crypto stack. Windows relies on SChannel, Linux typically uses OpenSSL and macOS uses SecureTransport. .NET orchestrates the handshake and encryption process but it delegates the actual cryptography to the platform. For TCP connections this all flows through <code>SslStream</code>; for QUIC and HTTP/3 it goes through MsQuic. The abstractions look high level but underneath them is a strict record protocol and a carefully structured handshake.</p>
<p>A TLS 1.3 handshake always begins with the client. The first message it sends, ClientHello, carries the supported TLS versions, the cipher suites it is willing to use, various random values, the key share needed for establishing the shared secret, and two pieces of metadata that matter deeply in .NET systems, ALPN and SNI. ALPN tells the server which application protocols the client is willing to speak. SNI tells the server which hostname the client is targeting. Without SNI the server would not know which certificate to present. This matters when you host multiple domains behind one IP address, without SNI the handshake cannot pick the right certificate.</p>
<p>The server responds with ServerHello and includes its own key share, the chosen cipher suite, the certificate chain, evidence that it owns the private key, and the final messages needed to prove the handshake is authentic. The client verifies the certificate chain against the trusted roots on the local machine. It also verifies that the server certificate matches the hostname inside the SAN fields. .NET is strict here and will fail immediately if the certificate chain is incomplete, the dates do not align, the clock on the machine is wrong or the hostname does not appear in the SAN list. After validation succeeds, the client confirms the handshake by sending its Finished message and both sides switch to encrypted traffic.</p>
<p>ALPN has a direct impact on real application behaviour. When a client offers a list of protocols such as HTTP/3, HTTP/2 and HTTP/1.1, the server selects one based on what it supports. If HTTP/3 is enabled and QUIC is available on the host, the connection will run over QUIC. If not, the selection drops down to HTTP/2. If the server only supports HTTP/1.1, that becomes the negotiated protocol regardless of your expectations. Many engineers assume their connections are using HTTP/2 when they are not, and ALPN is usually the reason. The choice is made during the handshake and it dictates how the rest of the connection behaves.</p>
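<p>You can observe this negotiation directly with <code>SslStream</code>. A sketch (the hostname is a placeholder) that offers HTTP/2 and HTTP/1.1 via ALPN and prints what the server picked:</p>
<pre><code class="lang-csharp">using System.Net.Security;
using System.Net.Sockets;

var host = "example.com"; // placeholder hostname

using var tcp = new TcpClient();
await tcp.ConnectAsync(host, 443);

await using var ssl = new SslStream(tcp.GetStream());
await ssl.AuthenticateAsClientAsync(new SslClientAuthenticationOptions
{
    // TargetHost drives both SNI in the ClientHello and hostname validation.
    TargetHost = host,
    // ALPN: the protocols we are willing to speak, in preference order.
    ApplicationProtocols = new List&lt;SslApplicationProtocol&gt;
    {
        SslApplicationProtocol.Http2,
        SslApplicationProtocol.Http11
    }
});

Console.WriteLine($"Negotiated: {ssl.NegotiatedApplicationProtocol}");
Console.WriteLine($"Protocol: {ssl.SslProtocol}");
</code></pre>
<p>If the output shows <code>http/1.1</code> when you expected <code>h2</code>, the server declined the offer during the handshake, exactly the mis-negotiation described above.</p>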
<p>Once the handshake is complete, the cost of establishing a secure connection becomes more noticeable in systems that open many short lived connections. This is where TLS session resumption matters. Under TLS 1.3 the server can send session tickets after the handshake. The client stores these tickets and presents them the next time it connects. If the server recognises the ticket, the handshake collapses into a much shorter exchange. The shared secret can be re-established without sending the full certificate chain. In practical terms this means the first HTTPS request to an endpoint is always slower than subsequent ones. HttpClient automatically benefits from resumption through its connection pooling. SslStream can benefit too if the server issues tickets and the client reconnects soon enough.</p>
<p>Certificate handling is one of the most common sources of TLS failures in .NET systems. Validation is strict by design. The chain must build to a trusted root. The SAN list must contain the exact hostname the client used. The validity period must be correct and the system clock must be in sync. If the server forgets to include intermediate certificates, the client fails even if the leaf certificate is valid. In internal environments developers sometimes try to bypass validation using custom callbacks, but this simply moves the risk instead of removing it. It is always better to fix the certificate chain than to disable validation.</p>
<p>On the server side, Kestrel uses SNI to select the correct certificate. When the ClientHello indicates the hostname, Kestrel matches it against the registered certificates and picks the right one. If the hostname does not match any certificate, the handshake fails. This behaviour is essential for multi tenant environments and for modern hosting setups where multiple domains share the same IP and port. SNI is the only signal the server receives to choose the certificate; without it only a single certificate would be possible.</p>
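<p>In Kestrel this certificate-per-hostname selection can also be wired explicitly. A sketch, where the certificate variables and hostnames are assumptions for illustration:</p>
<pre><code class="lang-csharp">builder.WebHost.ConfigureKestrel(kestrel =&gt;
{
    kestrel.ListenAnyIP(443, listen =&gt;
    {
        listen.UseHttps(https =&gt;
        {
            // Invoked per handshake with the SNI hostname from the ClientHello.
            https.ServerCertificateSelector = (context, sniHostName) =&gt;
                sniHostName switch
                {
                    "api.example.com" =&gt; apiCertificate,
                    "www.example.com" =&gt; webCertificate,
                    _ =&gt; defaultCertificate // returning null fails the handshake
                };
        });
    });
});
</code></pre>
<p>Most deployments rely on Kestrel’s built-in SNI configuration instead, but the selector makes the mechanism visible.</p>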
<p>Once the handshake is complete, the TLS record layer takes over. TLS is not a raw stream protocol. It breaks data into encrypted records. Each record contains a header and encrypted payload. SslStream hides this complexity and gives you a continuous logical stream, but internally it deals with partial records, fragmentation, buffering, and decryption boundaries. If you build a custom protocol on top of SslStream, the record layer is invisible but always present. QUIC changes this by integrating TLS into its own frame system rather than layering it over a separate transport.</p>
<p>TLS in .NET behaves differently when the underlying transport is QUIC. With HTTP/3, TLS 1.3 is embedded directly in the QUIC handshake. After the initial hello messages, everything is encrypted and sent as QUIC frames. There is no TLS record layer on top of TCP because there is no TCP. QUIC handles reliability, congestion control and multiplexing itself. The TLS handshake simply provides the cryptographic foundation. This shifts the performance profile of secure connections significantly because QUIC avoids head-of-line blocking and supports independent bidirectional streams, even when packets are lost.</p>
<p>When you understand how TLS works inside .NET, connection problems become diagnosable instead of mysterious. Mis-negotiated ALPN explains unexpected HTTP versions. Missing SAN entries explain handshake failures. Repeated handshakes explain poor performance. SNI logic explains why a certificate mismatch happened. Session resumption explains why repeat requests become faster. None of this requires memorising every message in the wire protocol. You just need a clear model of what .NET is doing and why.</p>
]]></content:encoded></item><item><title><![CDATA[MsQuic: The Transport Shift That Will Redefine Distributed .NET Systems]]></title><description><![CDATA[For the last decade, most .NET architects have treated Kestrel, ASP.NET Core’s middleware pipeline and HttpClient as the reliable foundation of any internal or external API. Whether it was a modern microservice, a BFF that shapes data for frontend ap...]]></description><link>https://dotnetdigest.com/msquic-the-transport-shift-that-will-redefine-distributed-net-systems</link><guid isPermaLink="true">https://dotnetdigest.com/msquic-the-transport-shift-that-will-redefine-distributed-net-systems</guid><category><![CDATA[msquic]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[.NET]]></category><category><![CDATA[networking]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[software development]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Mon, 27 Oct 2025 11:46:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761565466190/f48f5621-eafe-4286-b03d-f8fdc57ce5d3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For the last decade, most .NET architects have treated Kestrel, <a target="_blank" href="http://ASP.NET">ASP.NET</a> Core’s middleware pipeline and HttpClient as the reliable foundation of any internal or external API. Whether it was a modern microservice, a BFF that shapes data for frontend apps or a low latency command handler protecting a transactional system, the entire stack assumed TCP beneath the surface. That assumption is about to end.</p>
<p>MsQuic is Microsoft’s high performance implementation of QUIC (Quick UDP Internet Connections), a new encrypted transport protocol built on top of UDP rather than TCP. It is already running inside Windows, Azure App Service, Azure SQL, Azure Service Bus, Edge, Xbox Game Streaming and soon, invisibly inside .NET Aspire. It removes the last legacy constraint from distributed applications - the idea that every connection requires a heavy TLS handshake, a three way TCP SYN exchange and resetting if the client’s IP changes. QUIC was designed to kill that latency. It merges encryption and transport into a single protocol, removing the need for TLS to sit separately on top. It supports 1-RTT or even 0-RTT connection setup, meaning connections can effectively be reused with almost no cost, even after a network hop, mobile handover or failover across regions. In a world where synchronous APIs, streaming ingestion, edge computing and AI inference pipelines are becoming latency constrained, this is not a theoretical optimisation. It is an architecture unlock.</p>
<h2 id="heading-why-tcp-is-the-wrong-default-for-modern-distributed-systems">Why TCP Is The Wrong Default for Modern Distributed Systems</h2>
<p>Before we look at MsQuic implementations, we need to understand why TCP has become a liability rather than an asset.</p>
<p>Look at a typical call in a .NET application:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// Traditional HTTP/2 over TCP</span>
<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> HttpClient();
<span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> client.GetAsync(<span class="hljs-string">"https://api.internal/orders/12345"</span>);
</code></pre>
<p>Behind this innocent-looking line, your application is paying a hidden price:</p>
<ol>
<li><p><strong>TCP three way handshake</strong>: SYN, SYN-ACK, ACK (1 RTT minimum)</p>
</li>
<li><p><strong>TLS 1.3 handshake</strong>: ClientHello, ServerHello, finished (1 additional RTT)</p>
</li>
<li><p><strong>Head-of-line blocking</strong>: One lost packet blocks the entire connection</p>
</li>
<li><p><strong>Connection migration failure</strong>: IP change = connection reset</p>
</li>
<li><p><strong>Slow start penalty</strong>: Every new connection ramps up slowly</p>
</li>
</ol>
<p>In a single datacenter with sub millisecond latency, this might cost 2-4ms per request. But in distributed edge scenarios, multi cloud architectures, or mobile clients, this compounds brutally. A 50ms RTT means each new connection costs 100ms before a single byte of application data flows.</p>
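<p>By contrast, the same HttpClient call can opt into HTTP/3 (where server and platform support it), handing the transport work to QUIC with graceful fallback. A sketch, reusing the hypothetical endpoint from above:</p>
<pre><code class="lang-csharp">using System.Net;

var client = new HttpClient
{
    // Ask for HTTP/3 but allow fallback to HTTP/2 or HTTP/1.1 via ALPN.
    DefaultRequestVersion = HttpVersion.Version30,
    DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrLower
};

var response = await client.GetAsync("https://api.internal/orders/12345");
Console.WriteLine($"Negotiated HTTP version: {response.Version}");
</code></pre>
<p>Inspecting <code>response.Version</code> is the quickest way to confirm which protocol the handshake actually produced.</p>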
<h2 id="heading-understanding-quics-architecture">Understanding QUIC's Architecture</h2>
<p>QUIC fundamentally restructures the transport layer by collapsing multiple protocol layers into one:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761564527898/ccbfbacd-735f-4cf3-8c9d-f503cfb84977.png" alt class="image--center mx-auto" /></p>
<p>The implications are these:</p>
<ul>
<li><p><strong>Encryption is mandatory</strong>: Every QUIC connection is encrypted by default</p>
</li>
<li><p><strong>Connection IDs replace 5-tuple</strong>: Connections survive IP changes</p>
</li>
<li><p><strong>Multiplexed streams</strong>: Independent streams prevent head-of-line blocking</p>
</li>
<li><p><strong>0-RTT resumption</strong>: Previous connection parameters can be reused</p>
</li>
</ul>
<h2 id="heading-setting-up-msquic-in-net">Setting Up MsQuic in .NET</h2>
<p>Let's start with a practical example. MsQuic is available through the <code>System.Net.Quic</code> namespace in .NET 7+, but it requires explicit configuration.</p>
<h3 id="heading-installing-prerequisites">Installing Prerequisites</h3>
<p>First, ensure you have the MsQuic native library installed.</p>
<p>On Windows, it's included in .NET 7+.</p>
<p>On Linux:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Ubuntu/Debian</span>
sudo apt-get install libmsquic

<span class="hljs-comment"># Or build from source</span>
git <span class="hljs-built_in">clone</span> --recursive https://github.com/microsoft/msquic.git
<span class="hljs-built_in">cd</span> msquic
mkdir build &amp;&amp; <span class="hljs-built_in">cd</span> build
cmake -G <span class="hljs-string">'Unix Makefiles'</span> ..
cmake --build .
</code></pre>
<h3 id="heading-your-first-quic-server">Your First QUIC Server</h3>
<p>Here's a minimal QUIC server that accepts connections and handles bidirectional streams:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> System.Net;
<span class="hljs-keyword">using</span> System.Net.Quic;
<span class="hljs-keyword">using</span> System.Net.Security;
<span class="hljs-keyword">using</span> System.Security.Cryptography.X509Certificates;
<span class="hljs-keyword">using</span> System.Text;

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicEchoServer</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicListener _listener;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> CancellationTokenSource _cts = <span class="hljs-keyword">new</span>();

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QuicEchoServer</span>(<span class="hljs-params">IPEndPoint endpoint, X509Certificate2 certificate</span>)</span>
    {
        <span class="hljs-keyword">var</span> listenerOptions = <span class="hljs-keyword">new</span> QuicListenerOptions
        {
            ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt; 
            { 
                <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"echo-proto"</span>) 
            },
            ConnectionOptionsCallback = (connection, ssl, token) =&gt;
            {
                <span class="hljs-keyword">var</span> serverOptions = <span class="hljs-keyword">new</span> QuicServerConnectionOptions
                {
                    DefaultStreamErrorCode = <span class="hljs-number">0</span>,
                    DefaultCloseErrorCode = <span class="hljs-number">0</span>,
                    ServerAuthenticationOptions = <span class="hljs-keyword">new</span> SslServerAuthenticationOptions
                    {
                        ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
                        {
                            <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"echo-proto"</span>)
                        },
                        ServerCertificate = certificate
                    }
                };
                <span class="hljs-keyword">return</span> ValueTask.FromResult(serverOptions);
            },
            ListenEndPoint = endpoint
        };

        <span class="hljs-comment">// Blocking here keeps the sample short; production code should prefer an async factory method.</span>
        _listener = QuicListener.ListenAsync(listenerOptions).GetAwaiter().GetResult();
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">StartAsync</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">$"QUIC server listening on <span class="hljs-subst">{_listener.LocalEndPoint}</span>"</span>);

        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">while</span> (!_cts.Token.IsCancellationRequested)
            {
                <span class="hljs-keyword">var</span> connection = <span class="hljs-keyword">await</span> _listener.AcceptConnectionAsync(_cts.Token);
                _ = HandleConnectionAsync(connection);
            }
        }
        <span class="hljs-keyword">catch</span> (OperationCanceledException)
        {
            <span class="hljs-comment">// Expected when StopAsync cancels the accept loop</span>
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleConnectionAsync</span>(<span class="hljs-params">QuicConnection connection</span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">$"Connection established from <span class="hljs-subst">{connection.RemoteEndPoint}</span>"</span>);

        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">while</span> (!_cts.Token.IsCancellationRequested)
            {
                <span class="hljs-keyword">var</span> stream = <span class="hljs-keyword">await</span> connection.AcceptInboundStreamAsync(_cts.Token);
                _ = HandleStreamAsync(stream);
            }
        }
        <span class="hljs-keyword">catch</span> (QuicException ex) <span class="hljs-keyword">when</span> (ex.QuicError == QuicError.ConnectionAborted)
        {
            Console.WriteLine(<span class="hljs-string">"Connection closed by client"</span>);
        }
        <span class="hljs-keyword">finally</span>
        {
            <span class="hljs-keyword">await</span> connection.CloseAsync(<span class="hljs-number">0</span>);
            <span class="hljs-keyword">await</span> connection.DisposeAsync();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleStreamAsync</span>(<span class="hljs-params">QuicStream stream</span>)</span>
    {
        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">var</span> buffer = <span class="hljs-keyword">new</span> <span class="hljs-keyword">byte</span>[<span class="hljs-number">4096</span>];
            <span class="hljs-keyword">int</span> bytesRead;

            <span class="hljs-keyword">while</span> ((bytesRead = <span class="hljs-keyword">await</span> stream.ReadAsync(buffer, _cts.Token)) &gt; <span class="hljs-number">0</span>)
            {
                <span class="hljs-keyword">var</span> message = Encoding.UTF8.GetString(buffer, <span class="hljs-number">0</span>, bytesRead);
                Console.WriteLine(<span class="hljs-string">$"Received: <span class="hljs-subst">{message}</span>"</span>);

                <span class="hljs-comment">// Echo back</span>
                <span class="hljs-keyword">await</span> stream.WriteAsync(buffer.AsMemory(<span class="hljs-number">0</span>, bytesRead), _cts.Token);
                <span class="hljs-keyword">await</span> stream.FlushAsync(_cts.Token);
            }

            <span class="hljs-comment">// Complete the stream gracefully</span>
            stream.CompleteWrites();
        }
        <span class="hljs-keyword">catch</span> (Exception ex)
        {
            Console.WriteLine(<span class="hljs-string">$"Stream error: <span class="hljs-subst">{ex.Message}</span>"</span>);
        }
        <span class="hljs-keyword">finally</span>
        {
            <span class="hljs-keyword">await</span> stream.DisposeAsync();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">StopAsync</span>(<span class="hljs-params"></span>)</span>
    {
        _cts.Cancel();
        <span class="hljs-keyword">await</span> _listener.DisposeAsync();
    }
}
</code></pre>
<h3 id="heading-creating-a-quic-client">Creating a QUIC Client</h3>
<p>The client side is equally straightforward:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicEchoClient</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicConnection _connection;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task&lt;QuicEchoClient&gt; <span class="hljs-title">ConnectAsync</span>(<span class="hljs-params">
        <span class="hljs-keyword">string</span> hostname, 
        <span class="hljs-keyword">int</span> port,
        CancellationToken cancellationToken = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> clientOptions = <span class="hljs-keyword">new</span> QuicClientConnectionOptions
        {
            DefaultStreamErrorCode = <span class="hljs-number">0</span>,
            DefaultCloseErrorCode = <span class="hljs-number">0</span>,
            RemoteEndPoint = <span class="hljs-keyword">new</span> DnsEndPoint(hostname, port),
            ClientAuthenticationOptions = <span class="hljs-keyword">new</span> SslClientAuthenticationOptions
            {
                ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
                {
                    <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"echo-proto"</span>)
                },
                <span class="hljs-comment">// For testing only - don't use in production</span>
                RemoteCertificateValidationCallback = (sender, cert, chain, errors) =&gt; <span class="hljs-literal">true</span>
            }
        };

        <span class="hljs-keyword">var</span> connection = <span class="hljs-keyword">await</span> QuicConnection.ConnectAsync(clientOptions, cancellationToken);
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> QuicEchoClient(connection);
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-title">QuicEchoClient</span>(<span class="hljs-params">QuicConnection connection</span>)</span>
    {
        _connection = connection;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">string</span>&gt; <span class="hljs-title">SendMessageAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> message</span>)</span>
    {
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> stream = <span class="hljs-keyword">await</span> _connection.OpenOutboundStreamAsync(QuicStreamType.Bidirectional);

        <span class="hljs-keyword">var</span> messageBytes = Encoding.UTF8.GetBytes(message);
        <span class="hljs-keyword">await</span> stream.WriteAsync(messageBytes);
        stream.CompleteWrites();

        <span class="hljs-comment">// The echo may arrive in more than one chunk, so read until the</span>
        <span class="hljs-comment">// server completes its side of the stream</span>
        <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> ms = <span class="hljs-keyword">new</span> MemoryStream();
        <span class="hljs-keyword">var</span> buffer = <span class="hljs-keyword">new</span> <span class="hljs-keyword">byte</span>[<span class="hljs-number">4096</span>];
        <span class="hljs-keyword">int</span> bytesRead;
        <span class="hljs-keyword">while</span> ((bytesRead = <span class="hljs-keyword">await</span> stream.ReadAsync(buffer)) &gt; <span class="hljs-number">0</span>)
        {
            ms.Write(buffer, <span class="hljs-number">0</span>, bytesRead);
        }

        <span class="hljs-keyword">return</span> Encoding.UTF8.GetString(ms.ToArray());
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">CloseAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">await</span> _connection.CloseAsync(<span class="hljs-number">0</span>);
        <span class="hljs-keyword">await</span> _connection.DisposeAsync();
    }
}
</code></pre>
<h3 id="heading-running-the-example">Running the Example</h3>
<p>Here's how to tie it together:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// Generate a self-signed certificate (for testing only)</span>
<span class="hljs-function"><span class="hljs-keyword">static</span> X509Certificate2 <span class="hljs-title">GenerateTestCertificate</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> rsa = RSA.Create(<span class="hljs-number">2048</span>);
    <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> CertificateRequest(
        <span class="hljs-string">"CN=localhost"</span>,
        rsa,
        HashAlgorithmName.SHA256,
        RSASignaturePadding.Pkcs1);

    request.CertificateExtensions.Add(
        <span class="hljs-keyword">new</span> X509KeyUsageExtension(
            X509KeyUsageFlags.DigitalSignature | X509KeyUsageFlags.KeyEncipherment,
            critical: <span class="hljs-literal">true</span>));

    request.CertificateExtensions.Add(
        <span class="hljs-keyword">new</span> X509EnhancedKeyUsageExtension(
            <span class="hljs-keyword">new</span> OidCollection { <span class="hljs-keyword">new</span> Oid(<span class="hljs-string">"1.3.6.1.5.5.7.3.1"</span>) }, <span class="hljs-comment">// Server Authentication</span>
            critical: <span class="hljs-literal">true</span>));

    <span class="hljs-keyword">var</span> sanBuilder = <span class="hljs-keyword">new</span> SubjectAlternativeNameBuilder();
    sanBuilder.AddDnsName(<span class="hljs-string">"localhost"</span>);
    request.CertificateExtensions.Add(sanBuilder.Build());

    <span class="hljs-keyword">var</span> certificate = request.CreateSelfSigned(
        DateTimeOffset.UtcNow.AddDays(<span class="hljs-number">-1</span>),
        DateTimeOffset.UtcNow.AddYears(<span class="hljs-number">1</span>));

    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> X509Certificate2(
        certificate.Export(X509ContentType.Pfx),
        (<span class="hljs-keyword">string</span>?)<span class="hljs-literal">null</span>,
        X509KeyStorageFlags.Exportable);
}

<span class="hljs-comment">// Server</span>
<span class="hljs-keyword">var</span> cert = GenerateTestCertificate();
<span class="hljs-keyword">var</span> server = <span class="hljs-keyword">new</span> QuicEchoServer(<span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Loopback, <span class="hljs-number">5001</span>), cert);
_ = server.StartAsync();

<span class="hljs-comment">// Client</span>
<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">await</span> QuicEchoClient.ConnectAsync(<span class="hljs-string">"localhost"</span>, <span class="hljs-number">5001</span>);
<span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> client.SendMessageAsync(<span class="hljs-string">"Hello, QUIC!"</span>);
Console.WriteLine(<span class="hljs-string">$"Response: <span class="hljs-subst">{response}</span>"</span>);

<span class="hljs-keyword">await</span> client.CloseAsync();
<span class="hljs-keyword">await</span> server.StopAsync();
</code></pre>
<h2 id="heading-building-a-high-performance-rpc-framework">Building a High-Performance RPC Framework</h2>
<p>Now let's build something more realistic: a lightweight RPC framework that leverages QUIC's multiplexing capabilities.</p>
<h3 id="heading-protocol-design">Protocol Design</h3>
<p>We'll design a simple binary protocol:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1761564798646/761df5de-8d34-414e-bc0d-f075485157bb.png" alt class="image--center mx-auto" /></p>
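<p>Concretely, each frame written by the <code>RpcMessage</code> type in the next section is laid out as follows (all integers little-endian, as produced by <code>BitConverter</code> on common platforms):</p>
<pre><code>[8 bytes]  message ID (Int64)
[4 bytes]  method name length (Int32)
[N bytes]  method name (UTF-8)
[4 bytes]  payload length (Int32)
[M bytes]  payload
</code></pre>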
<h3 id="heading-message-framing">Message Framing</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">readonly</span> <span class="hljs-keyword">struct</span> RpcMessage
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">long</span> MessageId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">init</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> MethodName { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">init</span>; }
    <span class="hljs-keyword">public</span> ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; Payload { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">init</span>; }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">WriteToStreamAsync</span>(<span class="hljs-params">QuicStream stream, CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-comment">// Write message ID</span>
        <span class="hljs-keyword">var</span> messageIdBytes = BitConverter.GetBytes(MessageId);
        <span class="hljs-keyword">await</span> stream.WriteAsync(messageIdBytes, ct);

        <span class="hljs-comment">// Write method name</span>
        <span class="hljs-keyword">var</span> methodNameBytes = Encoding.UTF8.GetBytes(MethodName);
        <span class="hljs-keyword">var</span> methodNameLength = BitConverter.GetBytes(methodNameBytes.Length);
        <span class="hljs-keyword">await</span> stream.WriteAsync(methodNameLength, ct);
        <span class="hljs-keyword">await</span> stream.WriteAsync(methodNameBytes, ct);

        <span class="hljs-comment">// Write payload length and payload</span>
        <span class="hljs-keyword">var</span> payloadLength = BitConverter.GetBytes(Payload.Length);
        <span class="hljs-keyword">await</span> stream.WriteAsync(payloadLength, ct);
        <span class="hljs-keyword">await</span> stream.WriteAsync(Payload, ct);

        <span class="hljs-keyword">await</span> stream.FlushAsync(ct);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task&lt;RpcMessage&gt; <span class="hljs-title">ReadFromStreamAsync</span>(<span class="hljs-params">
        QuicStream stream, 
        CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> buffer = <span class="hljs-keyword">new</span> <span class="hljs-keyword">byte</span>[<span class="hljs-number">8</span>];

        <span class="hljs-comment">// Read message ID</span>
        <span class="hljs-keyword">await</span> ReadExactlyAsync(stream, buffer.AsMemory(<span class="hljs-number">0</span>, <span class="hljs-number">8</span>), ct);
        <span class="hljs-keyword">var</span> messageId = BitConverter.ToInt64(buffer, <span class="hljs-number">0</span>);

        <span class="hljs-comment">// Read method name length</span>
        <span class="hljs-keyword">await</span> ReadExactlyAsync(stream, buffer.AsMemory(<span class="hljs-number">0</span>, <span class="hljs-number">4</span>), ct);
        <span class="hljs-keyword">var</span> methodNameLength = BitConverter.ToInt32(buffer, <span class="hljs-number">0</span>);

        <span class="hljs-comment">// Read method name</span>
        <span class="hljs-keyword">var</span> methodNameBytes = <span class="hljs-keyword">new</span> <span class="hljs-keyword">byte</span>[methodNameLength];
        <span class="hljs-keyword">await</span> ReadExactlyAsync(stream, methodNameBytes, ct);
        <span class="hljs-keyword">var</span> methodName = Encoding.UTF8.GetString(methodNameBytes);

        <span class="hljs-comment">// Read payload length, rejecting absurd values so a malformed or</span>
        <span class="hljs-comment">// hostile peer cannot trigger an arbitrarily large allocation</span>
        <span class="hljs-keyword">await</span> ReadExactlyAsync(stream, buffer.AsMemory(<span class="hljs-number">0</span>, <span class="hljs-number">4</span>), ct);
        <span class="hljs-keyword">var</span> payloadLength = BitConverter.ToInt32(buffer, <span class="hljs-number">0</span>);
        <span class="hljs-keyword">if</span> (payloadLength &lt; <span class="hljs-number">0</span> || payloadLength &gt; <span class="hljs-number">16</span> * <span class="hljs-number">1024</span> * <span class="hljs-number">1024</span>)
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidDataException(<span class="hljs-string">$"Payload length <span class="hljs-subst">{payloadLength}</span> is out of range"</span>);

        <span class="hljs-comment">// Read payload</span>
        <span class="hljs-keyword">var</span> payload = <span class="hljs-keyword">new</span> <span class="hljs-keyword">byte</span>[payloadLength];
        <span class="hljs-keyword">await</span> ReadExactlyAsync(stream, payload, ct);

        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> RpcMessage
        {
            MessageId = messageId,
            MethodName = methodName,
            Payload = payload
        };
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ReadExactlyAsync</span>(<span class="hljs-params">
        QuicStream stream, 
        Memory&lt;<span class="hljs-keyword">byte</span>&gt; buffer, 
        CancellationToken ct</span>)</span>
    {
        <span class="hljs-keyword">int</span> totalRead = <span class="hljs-number">0</span>;
        <span class="hljs-keyword">while</span> (totalRead &lt; buffer.Length)
        {
            <span class="hljs-keyword">var</span> bytesRead = <span class="hljs-keyword">await</span> stream.ReadAsync(buffer.Slice(totalRead), ct);
            <span class="hljs-keyword">if</span> (bytesRead == <span class="hljs-number">0</span>)
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> EndOfStreamException(<span class="hljs-string">"Stream ended unexpectedly"</span>);
            totalRead += bytesRead;
        }
    }
}
</code></pre>
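<p>Because <code>WriteToStreamAsync</code> and <code>ReadFromStreamAsync</code> are symmetric, the framing logic can be sanity-checked without a live QUIC connection. Here is a sketch that exercises the same layout over a <code>MemoryStream</code>; the <code>WriteFrameAsync</code> and <code>ReadFrameAsync</code> helpers are illustrative stand-ins, not part of the framework above:</p>
<pre><code class="lang-csharp">using System.Text;

// Illustrative Stream-based mirror of the RpcMessage wire format, so the
// framing can be round-tripped through a MemoryStream instead of a QuicStream
static async Task WriteFrameAsync(Stream s, long id, string method, byte[] payload)
{
    await s.WriteAsync(BitConverter.GetBytes(id));                   // 8-byte message ID
    var methodBytes = Encoding.UTF8.GetBytes(method);
    await s.WriteAsync(BitConverter.GetBytes(methodBytes.Length));   // 4-byte name length
    await s.WriteAsync(methodBytes);
    await s.WriteAsync(BitConverter.GetBytes(payload.Length));       // 4-byte payload length
    await s.WriteAsync(payload);
}

static async Task&lt;(long Id, string Method, byte[] Payload)&gt; ReadFrameAsync(Stream s)
{
    var header = new byte[8];
    await s.ReadExactlyAsync(header.AsMemory(0, 8));                 // built into Stream since .NET 7
    var id = BitConverter.ToInt64(header);
    await s.ReadExactlyAsync(header.AsMemory(0, 4));
    var methodBytes = new byte[BitConverter.ToInt32(header)];
    await s.ReadExactlyAsync(methodBytes);
    await s.ReadExactlyAsync(header.AsMemory(0, 4));
    var payload = new byte[BitConverter.ToInt32(header)];
    await s.ReadExactlyAsync(payload);
    return (id, Encoding.UTF8.GetString(methodBytes), payload);
}

var ms = new MemoryStream();
await WriteFrameAsync(ms, 42, "Orders.CreateOrder", Encoding.UTF8.GetBytes("{}"));
ms.Position = 0;
var (id, method, payload) = await ReadFrameAsync(ms);
Console.WriteLine($"{id} {method} ({payload.Length} bytes)");        // 42 Orders.CreateOrder (2 bytes)
</code></pre>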
<h3 id="heading-rpc-server-infrastructure">RPC Server Infrastructure</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IRpcService</span>
{
    Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; HandleAsync(
        <span class="hljs-keyword">string</span> methodName, 
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken cancellationToken);
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicRpcServer</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicListener _listener;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Dictionary&lt;<span class="hljs-keyword">string</span>, IRpcService&gt; _services = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> CancellationTokenSource _cts = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">long</span> _messageIdCounter = <span class="hljs-number">0</span>;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QuicRpcServer</span>(<span class="hljs-params">IPEndPoint endpoint, X509Certificate2 certificate</span>)</span>
    {
        <span class="hljs-keyword">var</span> listenerOptions = <span class="hljs-keyword">new</span> QuicListenerOptions
        {
            ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt; 
            { 
                <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"quic-rpc"</span>) 
            },
            ConnectionOptionsCallback = (connection, ssl, token) =&gt;
            {
                <span class="hljs-keyword">var</span> serverOptions = <span class="hljs-keyword">new</span> QuicServerConnectionOptions
                {
                    DefaultStreamErrorCode = <span class="hljs-number">0</span>,
                    DefaultCloseErrorCode = <span class="hljs-number">0</span>,
                    ServerAuthenticationOptions = <span class="hljs-keyword">new</span> SslServerAuthenticationOptions
                    {
                        ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
                        {
                            <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"quic-rpc"</span>)
                        },
                        ServerCertificate = certificate
                    }
                };
                <span class="hljs-keyword">return</span> ValueTask.FromResult(serverOptions);
            },
            ListenEndPoint = endpoint
        };

        _listener = QuicListener.ListenAsync(listenerOptions).GetAwaiter().GetResult();
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">RegisterService</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> serviceName, IRpcService service</span>)</span>
    {
        _services[serviceName] = service;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">StartAsync</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">$"QUIC RPC server listening on <span class="hljs-subst">{_listener.LocalEndPoint}</span>"</span>);

        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">while</span> (!_cts.Token.IsCancellationRequested)
            {
                <span class="hljs-keyword">var</span> connection = <span class="hljs-keyword">await</span> _listener.AcceptConnectionAsync(_cts.Token);
                _ = HandleConnectionAsync(connection);
            }
        }
        <span class="hljs-keyword">catch</span> (OperationCanceledException)
        {
            <span class="hljs-comment">// Expected when StopAsync cancels the accept loop</span>
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleConnectionAsync</span>(<span class="hljs-params">QuicConnection connection</span>)</span>
    {
        <span class="hljs-keyword">var</span> connectionId = Interlocked.Increment(<span class="hljs-keyword">ref</span> _messageIdCounter);
        Console.WriteLine(<span class="hljs-string">$"[Connection <span class="hljs-subst">{connectionId}</span>] Established from <span class="hljs-subst">{connection.RemoteEndPoint}</span>"</span>);

        <span class="hljs-keyword">var</span> tasks = <span class="hljs-keyword">new</span> List&lt;Task&gt;();

        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">while</span> (!_cts.Token.IsCancellationRequested)
            {
                <span class="hljs-keyword">var</span> stream = <span class="hljs-keyword">await</span> connection.AcceptInboundStreamAsync(_cts.Token);
                <span class="hljs-keyword">var</span> task = HandleStreamAsync(stream, connectionId);
                tasks.Add(task);
            }
        }
        <span class="hljs-keyword">catch</span> (QuicException ex) <span class="hljs-keyword">when</span> (ex.QuicError == QuicError.ConnectionAborted)
        {
            Console.WriteLine(<span class="hljs-string">$"[Connection <span class="hljs-subst">{connectionId}</span>] Closed by client"</span>);
        }
        <span class="hljs-keyword">finally</span>
        {
            <span class="hljs-keyword">await</span> Task.WhenAll(tasks);
            <span class="hljs-keyword">await</span> connection.CloseAsync(<span class="hljs-number">0</span>);
            <span class="hljs-keyword">await</span> connection.DisposeAsync();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleStreamAsync</span>(<span class="hljs-params">QuicStream stream, <span class="hljs-keyword">long</span> connectionId</span>)</span>
    {
        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">await</span> RpcMessage.ReadFromStreamAsync(stream, _cts.Token);

            Console.WriteLine(<span class="hljs-string">$"[Connection <span class="hljs-subst">{connectionId}</span>] Method: <span class="hljs-subst">{request.MethodName}</span>, "</span> +
                            <span class="hljs-string">$"MessageId: <span class="hljs-subst">{request.MessageId}</span>, "</span> +
                            <span class="hljs-string">$"Payload: <span class="hljs-subst">{request.Payload.Length}</span> bytes"</span>);

            <span class="hljs-comment">// Parse service name (format: ServiceName.MethodName)</span>
            <span class="hljs-keyword">var</span> parts = request.MethodName.Split(<span class="hljs-string">'.'</span>, <span class="hljs-number">2</span>);
            <span class="hljs-keyword">if</span> (parts.Length != <span class="hljs-number">2</span>)
            {
                <span class="hljs-keyword">await</span> SendErrorAsync(stream, request.MessageId, <span class="hljs-string">"Invalid method name format"</span>);
                <span class="hljs-keyword">return</span>;
            }

            <span class="hljs-keyword">var</span> serviceName = parts[<span class="hljs-number">0</span>];
            <span class="hljs-keyword">var</span> methodName = parts[<span class="hljs-number">1</span>];

            <span class="hljs-keyword">if</span> (!_services.TryGetValue(serviceName, <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> service))
            {
                <span class="hljs-keyword">await</span> SendErrorAsync(stream, request.MessageId, <span class="hljs-string">$"Service '<span class="hljs-subst">{serviceName}</span>' not found"</span>);
                <span class="hljs-keyword">return</span>;
            }

            <span class="hljs-keyword">var</span> responsePayload = <span class="hljs-keyword">await</span> service.HandleAsync(
                methodName, 
                request.Payload, 
                _cts.Token);

            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> RpcMessage
            {
                MessageId = request.MessageId,
                MethodName = request.MethodName,
                Payload = responsePayload
            };

            <span class="hljs-keyword">await</span> response.WriteToStreamAsync(stream, _cts.Token);
            stream.CompleteWrites();
        }
        <span class="hljs-keyword">catch</span> (Exception ex)
        {
            Console.WriteLine(<span class="hljs-string">$"[Connection <span class="hljs-subst">{connectionId}</span>] Stream error: <span class="hljs-subst">{ex.Message}</span>"</span>);
        }
        <span class="hljs-keyword">finally</span>
        {
            <span class="hljs-keyword">await</span> stream.DisposeAsync();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">SendErrorAsync</span>(<span class="hljs-params">QuicStream stream, <span class="hljs-keyword">long</span> messageId, <span class="hljs-keyword">string</span> error</span>)</span>
    {
        <span class="hljs-keyword">var</span> errorBytes = Encoding.UTF8.GetBytes(error);
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> RpcMessage
        {
            MessageId = messageId,
            MethodName = <span class="hljs-string">"error"</span>,
            Payload = errorBytes
        };
        <span class="hljs-keyword">await</span> response.WriteToStreamAsync(stream, _cts.Token);
        stream.CompleteWrites();
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">StopAsync</span>(<span class="hljs-params"></span>)</span>
    {
        _cts.Cancel();
        <span class="hljs-keyword">await</span> _listener.DisposeAsync();
    }
}
</code></pre>
<h3 id="heading-rpc-client-with-connection-pooling">RPC Client with Multiplexed Streams</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicRpcClient</span> : <span class="hljs-title">IAsyncDisposable</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicConnection _connection;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">long</span> _nextMessageId = <span class="hljs-number">0</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> SemaphoreSlim _streamSemaphore;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task&lt;QuicRpcClient&gt; <span class="hljs-title">ConnectAsync</span>(<span class="hljs-params">
        <span class="hljs-keyword">string</span> hostname,
        <span class="hljs-keyword">int</span> port,
        <span class="hljs-keyword">int</span> maxConcurrentStreams = <span class="hljs-number">100</span>,
        CancellationToken cancellationToken = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> clientOptions = <span class="hljs-keyword">new</span> QuicClientConnectionOptions
        {
            DefaultStreamErrorCode = <span class="hljs-number">0</span>,
            DefaultCloseErrorCode = <span class="hljs-number">0</span>,
            <span class="hljs-comment">// Note: these limits govern streams the server may open towards the</span>
            <span class="hljs-comment">// client; our own outbound concurrency is throttled by the semaphore</span>
            MaxInboundBidirectionalStreams = maxConcurrentStreams,
            MaxInboundUnidirectionalStreams = maxConcurrentStreams,
            RemoteEndPoint = <span class="hljs-keyword">new</span> DnsEndPoint(hostname, port),
            ClientAuthenticationOptions = <span class="hljs-keyword">new</span> SslClientAuthenticationOptions
            {
                ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
                {
                    <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"quic-rpc"</span>)
                },
                RemoteCertificateValidationCallback = (sender, cert, chain, errors) =&gt; <span class="hljs-literal">true</span>
            }
        };

        <span class="hljs-keyword">var</span> connection = <span class="hljs-keyword">await</span> QuicConnection.ConnectAsync(clientOptions, cancellationToken);
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> QuicRpcClient(connection, maxConcurrentStreams);
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-title">QuicRpcClient</span>(<span class="hljs-params">QuicConnection connection, <span class="hljs-keyword">int</span> maxConcurrentStreams</span>)</span>
    {
        _connection = connection;
        _streamSemaphore = <span class="hljs-keyword">new</span> SemaphoreSlim(maxConcurrentStreams, maxConcurrentStreams);
    }

    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; CallAsync(
        <span class="hljs-keyword">string</span> serviceName,
        <span class="hljs-keyword">string</span> methodName,
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken cancellationToken = <span class="hljs-keyword">default</span>)
    {
        <span class="hljs-keyword">await</span> _streamSemaphore.WaitAsync(cancellationToken);

        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> stream = <span class="hljs-keyword">await</span> _connection.OpenOutboundStreamAsync(
                QuicStreamType.Bidirectional,
                cancellationToken);

            <span class="hljs-keyword">var</span> messageId = Interlocked.Increment(<span class="hljs-keyword">ref</span> _nextMessageId);
            <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> RpcMessage
            {
                MessageId = messageId,
                MethodName = <span class="hljs-string">$"<span class="hljs-subst">{serviceName}</span>.<span class="hljs-subst">{methodName}</span>"</span>,
                Payload = payload
            };

            <span class="hljs-keyword">await</span> request.WriteToStreamAsync(stream, cancellationToken);
            stream.CompleteWrites();

            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> RpcMessage.ReadFromStreamAsync(stream, cancellationToken);

            <span class="hljs-keyword">if</span> (response.MethodName == <span class="hljs-string">"error"</span>)
            {
                <span class="hljs-keyword">var</span> errorMessage = Encoding.UTF8.GetString(response.Payload.Span);
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(errorMessage);
            }

            <span class="hljs-keyword">return</span> response.Payload;
        }
        <span class="hljs-keyword">finally</span>
        {
            _streamSemaphore.Release();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> ValueTask <span class="hljs-title">DisposeAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">await</span> _connection.CloseAsync(<span class="hljs-number">0</span>);
        <span class="hljs-keyword">await</span> _connection.DisposeAsync();
        _streamSemaphore.Dispose();
    }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">RpcException</span> : <span class="hljs-title">Exception</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">RpcException</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> message</span>) : <span class="hljs-title">base</span>(<span class="hljs-params">message</span>)</span> { }
}
</code></pre>
<h3 id="heading-example-service-implementation">Example Service Implementation</h3>
<p>Let's create a practical service - an order management system:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Order</span>
{
    <span class="hljs-keyword">public</span> Guid OrderId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> CustomerId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    <span class="hljs-keyword">public</span> List&lt;OrderItem&gt; Items { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">decimal</span> TotalAmount { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> OrderStatus Status { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> DateTime CreatedAt { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">OrderItem</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> ProductId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Quantity { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">decimal</span> UnitPrice { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">enum</span> OrderStatus
{
    Pending,
    Processing,
    Shipped,
    Delivered,
    Cancelled
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">OrderService</span> : <span class="hljs-title">IRpcService</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ConcurrentDictionary&lt;Guid, Order&gt; _orders = <span class="hljs-keyword">new</span>();

    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; HandleAsync(
        <span class="hljs-keyword">string</span> methodName,
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken cancellationToken)
    {
        <span class="hljs-keyword">return</span> methodName <span class="hljs-keyword">switch</span>
        {
            <span class="hljs-string">"CreateOrder"</span> =&gt; <span class="hljs-keyword">await</span> CreateOrderAsync(payload, cancellationToken),
            <span class="hljs-string">"GetOrder"</span> =&gt; <span class="hljs-keyword">await</span> GetOrderAsync(payload, cancellationToken),
            <span class="hljs-string">"UpdateOrderStatus"</span> =&gt; <span class="hljs-keyword">await</span> UpdateOrderStatusAsync(payload, cancellationToken),
            <span class="hljs-string">"ListOrders"</span> =&gt; <span class="hljs-keyword">await</span> ListOrdersAsync(payload, cancellationToken),
            _ =&gt; <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">$"Unknown method: <span class="hljs-subst">{methodName}</span>"</span>)
        };
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; CreateOrderAsync(
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken ct)
    {
        <span class="hljs-keyword">var</span> order = JsonSerializer.Deserialize&lt;Order&gt;(payload.Span);
        <span class="hljs-keyword">if</span> (order == <span class="hljs-literal">null</span>)
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid order data"</span>);

        order.OrderId = Guid.NewGuid();
        order.CreatedAt = DateTime.UtcNow;
        order.Status = OrderStatus.Pending;
        order.TotalAmount = order.Items.Sum(item =&gt; item.Quantity * item.UnitPrice);

        _orders[order.OrderId] = order;

        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">10</span>, ct); <span class="hljs-comment">// Simulate some work</span>

        <span class="hljs-keyword">return</span> JsonSerializer.SerializeToUtf8Bytes(order);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; GetOrderAsync(
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken ct)
    {
        <span class="hljs-keyword">var</span> orderId = JsonSerializer.Deserialize&lt;Guid&gt;(payload.Span);

        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">5</span>, ct); <span class="hljs-comment">// Simulate database lookup</span>

        <span class="hljs-keyword">if</span> (!_orders.TryGetValue(orderId, <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> order))
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">$"Order <span class="hljs-subst">{orderId}</span> not found"</span>);

        <span class="hljs-keyword">return</span> JsonSerializer.SerializeToUtf8Bytes(order);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; UpdateOrderStatusAsync(
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken ct)
    {
        <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;UpdateStatusRequest&gt;(payload.Span);
        <span class="hljs-keyword">if</span> (request == <span class="hljs-literal">null</span>)
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid request"</span>);

        <span class="hljs-keyword">if</span> (!_orders.TryGetValue(request.OrderId, <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> order))
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">$"Order <span class="hljs-subst">{request.OrderId}</span> not found"</span>);

        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">15</span>, ct); <span class="hljs-comment">// Simulate status update workflow</span>

        order.Status = request.NewStatus;
        <span class="hljs-keyword">return</span> JsonSerializer.SerializeToUtf8Bytes(order);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; ListOrdersAsync(
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken ct)
    {
        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">20</span>, ct); <span class="hljs-comment">// Simulate query</span>

        <span class="hljs-keyword">var</span> orders = _orders.Values.ToList();
        <span class="hljs-keyword">return</span> JsonSerializer.SerializeToUtf8Bytes(orders);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">UpdateStatusRequest</span>
    {
        <span class="hljs-keyword">public</span> Guid OrderId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
        <span class="hljs-keyword">public</span> OrderStatus NewStatus { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    }
}
</code></pre>
<h3 id="heading-using-the-rpc-framework">Using the RPC Framework</h3>
<pre><code class="lang-csharp"><span class="hljs-comment">// Server setup</span>
<span class="hljs-keyword">var</span> cert = GenerateTestCertificate();
<span class="hljs-keyword">var</span> server = <span class="hljs-keyword">new</span> QuicRpcServer(<span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Loopback, <span class="hljs-number">5002</span>), cert);
server.RegisterService(<span class="hljs-string">"OrderService"</span>, <span class="hljs-keyword">new</span> OrderService());
_ = server.StartAsync();

<span class="hljs-comment">// Client usage</span>
<span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> client = <span class="hljs-keyword">await</span> QuicRpcClient.ConnectAsync(<span class="hljs-string">"localhost"</span>, <span class="hljs-number">5002</span>);

<span class="hljs-comment">// Create an order</span>
<span class="hljs-keyword">var</span> newOrder = <span class="hljs-keyword">new</span> Order
{
    CustomerId = <span class="hljs-string">"CUST-12345"</span>,
    Items = <span class="hljs-keyword">new</span> List&lt;OrderItem&gt;
    {
        <span class="hljs-keyword">new</span>() { ProductId = <span class="hljs-string">"PROD-001"</span>, Quantity = <span class="hljs-number">2</span>, UnitPrice = <span class="hljs-number">29.99</span>m },
        <span class="hljs-keyword">new</span>() { ProductId = <span class="hljs-string">"PROD-002"</span>, Quantity = <span class="hljs-number">1</span>, UnitPrice = <span class="hljs-number">149.99</span>m }
    }
};

<span class="hljs-keyword">var</span> requestPayload = JsonSerializer.SerializeToUtf8Bytes(newOrder);
<span class="hljs-keyword">var</span> responsePayload = <span class="hljs-keyword">await</span> client.CallAsync(<span class="hljs-string">"OrderService"</span>, <span class="hljs-string">"CreateOrder"</span>, requestPayload);
<span class="hljs-keyword">var</span> createdOrder = JsonSerializer.Deserialize&lt;Order&gt;(responsePayload.Span)!;

Console.WriteLine(<span class="hljs-string">$"Created order: <span class="hljs-subst">{createdOrder.OrderId}</span>"</span>);
Console.WriteLine(<span class="hljs-string">$"Total amount: $<span class="hljs-subst">{createdOrder.TotalAmount:F2}</span>"</span>);

<span class="hljs-comment">// Retrieve the order</span>
<span class="hljs-keyword">var</span> orderIdPayload = JsonSerializer.SerializeToUtf8Bytes(createdOrder.OrderId);
<span class="hljs-keyword">var</span> getOrderPayload = <span class="hljs-keyword">await</span> client.CallAsync(<span class="hljs-string">"OrderService"</span>, <span class="hljs-string">"GetOrder"</span>, orderIdPayload);
<span class="hljs-keyword">var</span> retrievedOrder = JsonSerializer.Deserialize&lt;Order&gt;(getOrderPayload.Span)!;

Console.WriteLine(<span class="hljs-string">$"Retrieved order status: <span class="hljs-subst">{retrievedOrder.Status}</span>"</span>);
</code></pre>
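<p>The error path is worth exercising too. A quick sketch, reusing the client from above: when a handler throws <code>RpcException</code> on the server, <code>CallAsync</code> sees the <code>"error"</code> response and rethrows on the caller's side, so a lookup for a non-existent order surfaces as a catchable exception:</p>
<pre><code class="lang-csharp">// Ask for an order ID that was never created.
var missingId = Guid.NewGuid();
var missingPayload = JsonSerializer.SerializeToUtf8Bytes(missingId);

try
{
    await client.CallAsync("OrderService", "GetOrder", missingPayload);
}
catch (RpcException ex)
{
    // The server's failure was serialised into an "error" message,
    // which CallAsync converts back into an RpcException here.
    Console.WriteLine($"RPC failed: {ex.Message}");
}
</code></pre>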
<h2 id="heading-stream-multiplexing-the-game-changer">Stream Multiplexing: The Game Changer</h2>
<p>One of QUIC's most powerful features is independent stream multiplexing. With HTTP/2 over TCP, a single lost packet can stall every stream behind it, because TCP delivers bytes strictly in order (transport-level head-of-line blocking). QUIC streams are independent at the transport layer, so loss on one stream never delays delivery on the others.</p>
<h3 id="heading-demonstrating-stream-independence">Demonstrating Stream Independence</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">StreamMultiplexingDemo</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">DemonstrateAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> cert = GenerateTestCertificate();
        <span class="hljs-keyword">var</span> server = <span class="hljs-keyword">new</span> QuicRpcServer(<span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Loopback, <span class="hljs-number">5003</span>), cert);
        server.RegisterService(<span class="hljs-string">"SlowService"</span>, <span class="hljs-keyword">new</span> SlowService());
        _ = server.StartAsync();

        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> client = <span class="hljs-keyword">await</span> QuicRpcClient.ConnectAsync(<span class="hljs-string">"localhost"</span>, <span class="hljs-number">5003</span>);

        <span class="hljs-keyword">var</span> stopwatch = Stopwatch.StartNew();

        <span class="hljs-comment">// Launch 10 concurrent requests with varying delays</span>
        <span class="hljs-keyword">var</span> tasks = Enumerable.Range(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>).Select(<span class="hljs-keyword">async</span> i =&gt;
        {
            <span class="hljs-keyword">var</span> delay = (i % <span class="hljs-number">3</span>) * <span class="hljs-number">100</span>; <span class="hljs-comment">// 0ms, 100ms, or 200ms delay</span>
            <span class="hljs-keyword">var</span> requestData = JsonSerializer.SerializeToUtf8Bytes(<span class="hljs-keyword">new</span> { RequestId = i, Delay = delay });

            <span class="hljs-keyword">var</span> start = stopwatch.ElapsedMilliseconds;
            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> client.CallAsync(<span class="hljs-string">"SlowService"</span>, <span class="hljs-string">"Process"</span>, requestData);
            <span class="hljs-keyword">var</span> end = stopwatch.ElapsedMilliseconds;

            <span class="hljs-keyword">var</span> result = JsonSerializer.Deserialize&lt;ProcessResult&gt;(response.Span);
            Console.WriteLine(<span class="hljs-string">$"Request <span class="hljs-subst">{i}</span> (delay=<span class="hljs-subst">{delay}</span>ms): "</span> +
                            <span class="hljs-string">$"completed in <span class="hljs-subst">{end - start}</span>ms, "</span> +
                            <span class="hljs-string">$"server processing took <span class="hljs-subst">{result.ActualDelay}</span>ms"</span>);
        }).ToArray();

        <span class="hljs-keyword">await</span> Task.WhenAll(tasks);
        stopwatch.Stop();

        Console.WriteLine(<span class="hljs-string">$"\nTotal time for 10 concurrent requests: <span class="hljs-subst">{stopwatch.ElapsedMilliseconds}</span>ms"</span>);
        Console.WriteLine(<span class="hljs-string">"Notice how requests with 0ms delay completed quickly "</span> +
                         <span class="hljs-string">"despite other requests having 200ms delays."</span>);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">SlowService</span> : <span class="hljs-title">IRpcService</span>
    {
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; HandleAsync(
            <span class="hljs-keyword">string</span> methodName,
            ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
            CancellationToken cancellationToken)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;ProcessRequest&gt;(payload.Span);
            <span class="hljs-keyword">if</span> (request == <span class="hljs-literal">null</span>)
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid request"</span>);

            <span class="hljs-keyword">await</span> Task.Delay(request.Delay, cancellationToken);

            <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">new</span> ProcessResult
            {
                RequestId = request.RequestId,
                ActualDelay = request.Delay,
                CompletedAt = DateTime.UtcNow
            };

            <span class="hljs-keyword">return</span> JsonSerializer.SerializeToUtf8Bytes(result);
        }

        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ProcessRequest</span>
        {
            <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> RequestId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
            <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Delay { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
        }
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ProcessResult</span>
    {
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> RequestId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> ActualDelay { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
        <span class="hljs-keyword">public</span> DateTime CompletedAt { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    }
}
</code></pre>
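<p>The <code>maxConcurrentStreams</code> budget from earlier interacts with this demo: the <code>SemaphoreSlim</code> gate in <code>CallAsync</code> caps how many streams are in flight per connection. A sketch, assuming the <code>ConnectAsync</code> overload exposes the <code>maxConcurrentStreams</code> parameter it forwards to the constructor:</p>
<pre><code class="lang-csharp">// Cap in-flight streams at 2, then fire 4 slow calls at once.
await using var limited = await QuicRpcClient.ConnectAsync(
    "localhost", 5003, maxConcurrentStreams: 2);

var calls = Enumerable.Range(0, 4).Select(i =&gt;
{
    var data = JsonSerializer.SerializeToUtf8Bytes(
        new { RequestId = i, Delay = 200 });
    return limited.CallAsync("SlowService", "Process", data);
}).ToArray();

await Task.WhenAll(calls);
// Only two requests run concurrently; the other two queue on the
// semaphore, so the batch completes in roughly two 200ms waves.
</code></pre>
<p>In production this is a useful backpressure knob: it stops a burst of calls from opening an unbounded number of streams against a single peer.</p>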
<h2 id="heading-0-rtt-connection-resumption">0-RTT Connection Resumption</h2>
<p>QUIC's 0-RTT feature allows clients to send application data in the very first packet of a resumed connection, eliminating connection-establishment latency. One caveat: an attacker can replay 0-RTT data, so it should only carry idempotent requests.</p>
<h3 id="heading-implementing-0-rtt-support">Implementing 0-RTT Support</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ZeroRttClient</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">byte</span>[]? _resumptionTicket;
    <span class="hljs-keyword">private</span> QuicConnection? _connection;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> <span class="hljs-title">Task</span>&lt;<span class="hljs-title">T</span>&gt; <span class="hljs-title">CallWithResumptionAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">
        <span class="hljs-keyword">string</span> hostname,
        <span class="hljs-keyword">int</span> port,
        <span class="hljs-keyword">string</span> serviceName,
        <span class="hljs-keyword">string</span> methodName,
        <span class="hljs-keyword">object</span> request,
        CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> clientOptions = <span class="hljs-keyword">new</span> QuicClientConnectionOptions
        {
            DefaultStreamErrorCode = <span class="hljs-number">0</span>,
            DefaultCloseErrorCode = <span class="hljs-number">0</span>,
            RemoteEndPoint = <span class="hljs-keyword">new</span> DnsEndPoint(hostname, port),
            ClientAuthenticationOptions = <span class="hljs-keyword">new</span> SslClientAuthenticationOptions
            {
                ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
                {
                    <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"quic-rpc"</span>)
                },
                RemoteCertificateValidationCallback = (sender, cert, chain, errors) =&gt; <span class="hljs-literal">true</span>
            }
        };

        <span class="hljs-comment">// If we have a resumption ticket, try 0-RTT</span>
        <span class="hljs-keyword">if</span> (_resumptionTicket != <span class="hljs-literal">null</span>)
        {
            <span class="hljs-comment">// Note: 0-RTT API is still evolving in .NET</span>
            <span class="hljs-comment">// This is conceptual - actual implementation depends on .NET version</span>
            Console.WriteLine(<span class="hljs-string">"Attempting 0-RTT connection resumption..."</span>);
        }

        <span class="hljs-keyword">if</span> (_connection == <span class="hljs-literal">null</span> || _connection.RemoteEndPoint == <span class="hljs-literal">null</span>)
        {
            _connection = <span class="hljs-keyword">await</span> QuicConnection.ConnectAsync(clientOptions, ct);
        }

        <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);

        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> stream = <span class="hljs-keyword">await</span> _connection.OpenOutboundStreamAsync(
            QuicStreamType.Bidirectional, ct);

        <span class="hljs-keyword">var</span> messageId = Random.Shared.NextInt64();
        <span class="hljs-keyword">var</span> rpcMessage = <span class="hljs-keyword">new</span> RpcMessage
        {
            MessageId = messageId,
            MethodName = <span class="hljs-string">$"<span class="hljs-subst">{serviceName}</span>.<span class="hljs-subst">{methodName}</span>"</span>,
            Payload = payload
        };

        <span class="hljs-keyword">await</span> rpcMessage.WriteToStreamAsync(stream, ct);
        stream.CompleteWrites();

        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> RpcMessage.ReadFromStreamAsync(stream, ct);
        <span class="hljs-keyword">return</span> JsonSerializer.Deserialize&lt;T&gt;(response.Payload.Span)!;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> ValueTask <span class="hljs-title">DisposeAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">if</span> (_connection != <span class="hljs-literal">null</span>)
        {
            <span class="hljs-keyword">await</span> _connection.CloseAsync(<span class="hljs-number">0</span>);
            <span class="hljs-keyword">await</span> _connection.DisposeAsync();
        }
    }
}
</code></pre>
<h2 id="heading-connection-migration-in-action">Connection Migration in Action</h2>
<p>QUIC connections survive network changes because they are identified by Connection IDs rather than by the source/destination IP and port 4-tuple. When a client's address changes, say from WiFi to cellular, the connection migrates instead of resetting. This is a major win for mobile clients and edge scenarios.</p>
<h3 id="heading-simulating-connection-migration">Simulating Connection Migration</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ConnectionMigrationDemo</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">DemonstrateAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> cert = GenerateTestCertificate();
        <span class="hljs-keyword">var</span> server = <span class="hljs-keyword">new</span> QuicRpcServer(<span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Any, <span class="hljs-number">5004</span>), cert);
        server.RegisterService(<span class="hljs-string">"CounterService"</span>, <span class="hljs-keyword">new</span> CounterService());
        _ = server.StartAsync();

        <span class="hljs-comment">// Connect from first interface</span>
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> client = <span class="hljs-keyword">await</span> QuicRpcClient.ConnectAsync(<span class="hljs-string">"localhost"</span>, <span class="hljs-number">5004</span>);

        <span class="hljs-keyword">for</span> (<span class="hljs-keyword">int</span> i = <span class="hljs-number">0</span>; i &lt; <span class="hljs-number">5</span>; i++)
        {
            <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(<span class="hljs-keyword">new</span> { Action = <span class="hljs-string">"increment"</span> });
            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> client.CallAsync(<span class="hljs-string">"CounterService"</span>, <span class="hljs-string">"Update"</span>, payload);
            <span class="hljs-keyword">var</span> result = JsonSerializer.Deserialize&lt;CounterResult&gt;(response.Span);

            Console.WriteLine(<span class="hljs-string">$"Request <span class="hljs-subst">{i + <span class="hljs-number">1</span>}</span>: Counter = <span class="hljs-subst">{result.Value}</span>"</span>);

            <span class="hljs-keyword">if</span> (i == <span class="hljs-number">2</span>)
            {
                Console.WriteLine(<span class="hljs-string">"\n&gt;&gt;&gt; Simulating network switch (WiFi -&gt; Cellular) &lt;&lt;&lt;"</span>);
                Console.WriteLine(<span class="hljs-string">"&gt;&gt;&gt; In a real scenario, the connection would migrate &lt;&lt;&lt;\n"</span>);

                <span class="hljs-comment">// In production, QUIC handles this automatically via Connection IDs</span>
                <span class="hljs-comment">// The connection remains valid even as the underlying IP changes</span>
                <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">500</span>);
            }
        }

        Console.WriteLine(<span class="hljs-string">"\nConnection remained stable across network change!"</span>);
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CounterService</span> : <span class="hljs-title">IRpcService</span>
    {
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">int</span> _counter = <span class="hljs-number">0</span>;

        <span class="hljs-keyword">public</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; HandleAsync(
            <span class="hljs-keyword">string</span> methodName,
            ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
            CancellationToken cancellationToken)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;CounterRequest&gt;(payload.Span);

            <span class="hljs-keyword">if</span> (request?.Action == <span class="hljs-string">"increment"</span>)
                Interlocked.Increment(<span class="hljs-keyword">ref</span> _counter);
            <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (request?.Action == <span class="hljs-string">"decrement"</span>)
                Interlocked.Decrement(<span class="hljs-keyword">ref</span> _counter);

            <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">new</span> CounterResult { Value = _counter };
            <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                JsonSerializer.SerializeToUtf8Bytes(result));
        }

        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CounterRequest</span>
        {
            <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Action { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
        }
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CounterResult</span>
    {
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Value { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    }
}
</code></pre>
<h2 id="heading-performance-benchmarking-quic-vs-tcp">Performance Benchmarking: QUIC vs TCP</h2>
<p>Let's create a benchmark comparing our QUIC RPC client against a TCP-based HTTP baseline. (A caveat on the setup: <code>HttpListener</code> speaks HTTP/1.1, so strictly this compares QUIC with HTTP over TCP rather than true HTTP/2, but it still illustrates the transport-level difference.)</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> BenchmarkDotNet.Attributes;
<span class="hljs-keyword">using</span> BenchmarkDotNet.Running;

[<span class="hljs-meta">MemoryDiagnoser</span>]
[<span class="hljs-meta">SimpleJob(warmupCount: 3, iterationCount: 10)</span>]
<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicVsTcpBenchmark</span>
{
    <span class="hljs-keyword">private</span> QuicRpcServer? _quicServer;
    <span class="hljs-keyword">private</span> QuicRpcClient? _quicClient;
    <span class="hljs-keyword">private</span> HttpClient? _httpClient;
    <span class="hljs-keyword">private</span> HttpListener? _httpListener;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> Port = <span class="hljs-number">5005</span>;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">const</span> <span class="hljs-keyword">int</span> HttpPort = <span class="hljs-number">5006</span>;

    [<span class="hljs-meta">GlobalSetup</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Setup</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-comment">// Setup QUIC server</span>
        <span class="hljs-keyword">var</span> cert = GenerateTestCertificate();
        _quicServer = <span class="hljs-keyword">new</span> QuicRpcServer(<span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Loopback, Port), cert);
        _quicServer.RegisterService(<span class="hljs-string">"BenchService"</span>, <span class="hljs-keyword">new</span> BenchmarkService());
        _ = _quicServer.StartAsync();

        _quicClient = <span class="hljs-keyword">await</span> QuicRpcClient.ConnectAsync(<span class="hljs-string">"localhost"</span>, Port);

        <span class="hljs-comment">// Setup TCP-based HTTP server (HttpListener; plaintext, so HTTP/1.1)</span>
        _httpListener = <span class="hljs-keyword">new</span> HttpListener();
        _httpListener.Prefixes.Add(<span class="hljs-string">$"http://localhost:<span class="hljs-subst">{HttpPort}</span>/"</span>);
        _httpListener.Start();
        _ = HandleHttpRequestsAsync();

        _httpClient = <span class="hljs-keyword">new</span> HttpClient
        {
            BaseAddress = <span class="hljs-keyword">new</span> Uri(<span class="hljs-string">$"http://localhost:<span class="hljs-subst">{HttpPort}</span>"</span>)
        };

        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">500</span>); <span class="hljs-comment">// Let servers stabilize</span>
    }

    [<span class="hljs-meta">GlobalCleanup</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Cleanup</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">if</span> (_quicClient != <span class="hljs-literal">null</span>)
            <span class="hljs-keyword">await</span> _quicClient.DisposeAsync();

        <span class="hljs-keyword">if</span> (_quicServer != <span class="hljs-literal">null</span>)
            <span class="hljs-keyword">await</span> _quicServer.StopAsync();

        _httpClient?.Dispose();
        _httpListener?.Stop();
    }

    [<span class="hljs-meta">Benchmark(Baseline = true)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HttpOverTcp_SingleRequest</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Value = <span class="hljs-number">42</span> };
        <span class="hljs-keyword">var</span> content = <span class="hljs-keyword">new</span> StringContent(
            JsonSerializer.Serialize(request),
            Encoding.UTF8,
            <span class="hljs-string">"application/json"</span>);

        <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> _httpClient!.PostAsync(<span class="hljs-string">"/process"</span>, content);
        <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> response.Content.ReadAsStringAsync();
    }

    [<span class="hljs-meta">Benchmark</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">QuicRpc_SingleRequest</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Value = <span class="hljs-number">42</span> };
        <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> _quicClient!.CallAsync(<span class="hljs-string">"BenchService"</span>, <span class="hljs-string">"Process"</span>, payload);
    }

    [<span class="hljs-meta">Benchmark</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HttpOverTcp_ConcurrentRequests</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> tasks = Enumerable.Range(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>).Select(<span class="hljs-keyword">async</span> i =&gt;
        {
            <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Value = i };
            <span class="hljs-keyword">var</span> content = <span class="hljs-keyword">new</span> StringContent(
                JsonSerializer.Serialize(request),
                Encoding.UTF8,
                <span class="hljs-string">"application/json"</span>);

            <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> _httpClient!.PostAsync(<span class="hljs-string">"/process"</span>, content);
            <span class="hljs-keyword">await</span> response.Content.ReadAsStringAsync();
        });

        <span class="hljs-keyword">await</span> Task.WhenAll(tasks);
    }

    [<span class="hljs-meta">Benchmark</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">QuicRpc_ConcurrentRequests</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">var</span> tasks = Enumerable.Range(<span class="hljs-number">0</span>, <span class="hljs-number">10</span>).Select(<span class="hljs-keyword">async</span> i =&gt;
        {
            <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Value = i };
            <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);
            <span class="hljs-keyword">await</span> _quicClient!.CallAsync(<span class="hljs-string">"BenchService"</span>, <span class="hljs-string">"Process"</span>, payload);
        });

        <span class="hljs-keyword">await</span> Task.WhenAll(tasks);
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleHttpRequestsAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">while</span> (_httpListener!.IsListening)
        {
            <span class="hljs-keyword">try</span>
            {
                <span class="hljs-keyword">var</span> context = <span class="hljs-keyword">await</span> _httpListener.GetContextAsync();
                _ = Task.Run(<span class="hljs-keyword">async</span> () =&gt;
                {
                    <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> reader = <span class="hljs-keyword">new</span> StreamReader(context.Request.InputStream);
                    <span class="hljs-keyword">var</span> body = <span class="hljs-keyword">await</span> reader.ReadToEndAsync();
                    <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;BenchRequest&gt;(body);

                    <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">new</span> BenchResult { Result = request!.Value * <span class="hljs-number">2</span> };
                    <span class="hljs-keyword">var</span> responseJson = JsonSerializer.Serialize(result);

                    context.Response.ContentType = <span class="hljs-string">"application/json"</span>;
                    <span class="hljs-keyword">await</span> context.Response.OutputStream.WriteAsync(
                        Encoding.UTF8.GetBytes(responseJson));
                    context.Response.Close();
                });
            }
            <span class="hljs-keyword">catch</span>
            {
                <span class="hljs-keyword">break</span>;
            }
        }
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">BenchmarkService</span> : <span class="hljs-title">IRpcService</span>
    {
        <span class="hljs-keyword">public</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; HandleAsync(
            <span class="hljs-keyword">string</span> methodName,
            ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
            CancellationToken cancellationToken)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;BenchRequest&gt;(payload.Span);
            <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">new</span> BenchResult { Result = request!.Value * <span class="hljs-number">2</span> };
            <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                JsonSerializer.SerializeToUtf8Bytes(result));
        }
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">BenchRequest</span>
    {
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Value { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">BenchResult</span>
    {
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span> Result { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
    }
}
</code></pre>
<p>Run the benchmark with:</p>
<pre><code class="lang-powershell">dotnet run <span class="hljs-literal">-c</span> Release
</code></pre>
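<p>BenchmarkDotNet only produces trustworthy numbers in Release builds, which is why the <code>-c Release</code> flag matters. If your entry point routes command-line arguments through <code>BenchmarkSwitcher</code>, you can also run just this suite with the standard <code>--filter</code> argument (the glob pattern below simply matches the class name above):</p>
<pre><code class="lang-powershell">dotnet run -c Release -- --filter *QuicVsTcpBenchmark*
</code></pre>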
<p>Exact numbers depend heavily on hardware, OS, and network conditions, but across typical runs QUIC tends to offer:</p>
<ul>
<li><p>20-40% lower latency for single requests</p>
</li>
<li><p>30-60% better throughput for concurrent requests</p>
</li>
<li><p>Significantly better performance under packet loss conditions</p>
</li>
</ul>
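<p>The latency edge is largely a function of round trips. A TCP connection needs its own handshake before TLS 1.3 can complete, so a cold connection costs at least two round trips before the first request byte; QUIC folds the transport and TLS 1.3 handshakes into a single round trip, and 0-RTT resumption can eliminate even that on repeat connections. A back-of-envelope sketch (the 50 ms RTT is an assumed figure for illustration, not a measurement):</p>
<pre><code class="lang-csharp">double rttMs = 50;                // assumed network round-trip time

double tcpTlsSetupMs = 2 * rttMs; // TCP handshake (1 RTT) + TLS 1.3 handshake (1 RTT)
double quicSetupMs = 1 * rttMs;   // QUIC combines transport + TLS 1.3 in one RTT

Console.WriteLine(
    $"Cold-connection setup - TCP+TLS 1.3: {tcpTlsSetupMs} ms, QUIC: {quicSetupMs} ms");
</code></pre>
<p>On lossy links the gap widens further, because QUIC retransmits per stream while TCP's single byte stream stalls every multiplexed request behind one lost packet.</p>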
<h2 id="heading-building-a-distributed-cache-with-quic">Building a Distributed Cache with QUIC</h2>
<p>Let's build a distributed cache server and client that leverage QUIC's multiplexing and low latency. The version below is a single-node, in-memory prototype, but the wire protocol and client API carry over unchanged to a clustered deployment.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IDistributedQuicCache</span>
{
    Task&lt;CacheEntry?&gt; GetAsync(<span class="hljs-keyword">string</span> key, CancellationToken ct = <span class="hljs-keyword">default</span>);
    <span class="hljs-function">Task <span class="hljs-title">SetAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> key, <span class="hljs-keyword">byte</span>[] <span class="hljs-keyword">value</span>, TimeSpan? expiration = <span class="hljs-literal">null</span>, CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>;
    <span class="hljs-function">Task&lt;<span class="hljs-keyword">bool</span>&gt; <span class="hljs-title">DeleteAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> key, CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>;
    <span class="hljs-function">Task&lt;<span class="hljs-keyword">bool</span>&gt; <span class="hljs-title">ExistsAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> key, CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>;
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CacheEntry</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Key { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">byte</span>[] Value { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = Array.Empty&lt;<span class="hljs-keyword">byte</span>&gt;();
    <span class="hljs-keyword">public</span> DateTime ExpiresAt { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicCacheServer</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicRpcServer _rpcServer;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> CacheService _cacheService;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QuicCacheServer</span>(<span class="hljs-params">IPEndPoint endpoint, X509Certificate2 certificate</span>)</span>
    {
        _cacheService = <span class="hljs-keyword">new</span> CacheService();
        _rpcServer = <span class="hljs-keyword">new</span> QuicRpcServer(endpoint, certificate);
        _rpcServer.RegisterService(<span class="hljs-string">"Cache"</span>, _cacheService);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> Task <span class="hljs-title">StartAsync</span>(<span class="hljs-params"></span>)</span> =&gt; _rpcServer.StartAsync();
    <span class="hljs-function"><span class="hljs-keyword">public</span> Task <span class="hljs-title">StopAsync</span>(<span class="hljs-params"></span>)</span> =&gt; _rpcServer.StopAsync();

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CacheService</span> : <span class="hljs-title">IRpcService</span>
    {
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ConcurrentDictionary&lt;<span class="hljs-keyword">string</span>, CacheEntry&gt; _cache = <span class="hljs-keyword">new</span>();
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Timer _cleanupTimer;

        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">CacheService</span>(<span class="hljs-params"></span>)</span>
        {
            _cleanupTimer = <span class="hljs-keyword">new</span> Timer(CleanupExpiredEntries, <span class="hljs-literal">null</span>, 
                TimeSpan.FromSeconds(<span class="hljs-number">60</span>), TimeSpan.FromSeconds(<span class="hljs-number">60</span>));
        }

        <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; HandleAsync(
            <span class="hljs-keyword">string</span> methodName,
            ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
            CancellationToken cancellationToken)
        {
            <span class="hljs-keyword">return</span> methodName <span class="hljs-keyword">switch</span>
            {
                <span class="hljs-string">"Get"</span> =&gt; <span class="hljs-keyword">await</span> GetAsync(payload),
                <span class="hljs-string">"Set"</span> =&gt; <span class="hljs-keyword">await</span> SetAsync(payload),
                <span class="hljs-string">"Delete"</span> =&gt; <span class="hljs-keyword">await</span> DeleteAsync(payload),
                <span class="hljs-string">"Exists"</span> =&gt; <span class="hljs-keyword">await</span> ExistsAsync(payload),
                _ =&gt; <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">$"Unknown cache operation: <span class="hljs-subst">{methodName}</span>"</span>)
            };
        }

        <span class="hljs-keyword">private</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; GetAsync(ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;GetRequest&gt;(payload.Span);
            <span class="hljs-keyword">if</span> (request == <span class="hljs-literal">null</span>)
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid get request"</span>);

            <span class="hljs-keyword">if</span> (_cache.TryGetValue(request.Key, <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> entry) &amp;&amp; 
                entry.ExpiresAt &gt; DateTime.UtcNow)
            {
                <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                    JsonSerializer.SerializeToUtf8Bytes(entry));
            }

            <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                JsonSerializer.SerializeToUtf8Bytes&lt;CacheEntry?&gt;(<span class="hljs-literal">null</span>));
        }

        <span class="hljs-keyword">private</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; SetAsync(ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;SetRequest&gt;(payload.Span);
            <span class="hljs-keyword">if</span> (request == <span class="hljs-literal">null</span>)
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid set request"</span>);

            <span class="hljs-keyword">var</span> entry = <span class="hljs-keyword">new</span> CacheEntry
            {
                Key = request.Key,
                Value = request.Value,
                ExpiresAt = request.ExpirationSeconds.HasValue
                    ? DateTime.UtcNow.AddSeconds(request.ExpirationSeconds.Value)
                    : DateTime.MaxValue
            };

            _cache[request.Key] = entry;

            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> SetResponse { Success = <span class="hljs-literal">true</span> };
            <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                JsonSerializer.SerializeToUtf8Bytes(response));
        }

        <span class="hljs-keyword">private</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; DeleteAsync(ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;DeleteRequest&gt;(payload.Span);
            <span class="hljs-keyword">if</span> (request == <span class="hljs-literal">null</span>)
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid delete request"</span>);

            <span class="hljs-keyword">var</span> removed = _cache.TryRemove(request.Key, <span class="hljs-keyword">out</span> _);
            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> DeleteResponse { Success = removed };

            <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                JsonSerializer.SerializeToUtf8Bytes(response));
        }

        <span class="hljs-keyword">private</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; ExistsAsync(ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload)
        {
            <span class="hljs-keyword">var</span> request = JsonSerializer.Deserialize&lt;ExistsRequest&gt;(payload.Span);
            <span class="hljs-keyword">if</span> (request == <span class="hljs-literal">null</span>)
                <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> RpcException(<span class="hljs-string">"Invalid exists request"</span>);

            <span class="hljs-keyword">var</span> exists = _cache.TryGetValue(request.Key, <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> entry) &amp;&amp; 
                        entry.ExpiresAt &gt; DateTime.UtcNow;

            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> ExistsResponse { Exists = exists };
            <span class="hljs-keyword">return</span> Task.FromResult&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt;(
                JsonSerializer.SerializeToUtf8Bytes(response));
        }

        <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">void</span> <span class="hljs-title">CleanupExpiredEntries</span>(<span class="hljs-params"><span class="hljs-keyword">object</span>? state</span>)</span>
        {
            <span class="hljs-keyword">var</span> now = DateTime.UtcNow;
            <span class="hljs-keyword">var</span> expiredKeys = _cache
                .Where(kvp =&gt; kvp.Value.ExpiresAt &lt;= now)
                .Select(kvp =&gt; kvp.Key)
                .ToList();

            <span class="hljs-keyword">foreach</span> (<span class="hljs-keyword">var</span> key <span class="hljs-keyword">in</span> expiredKeys)
            {
                _cache.TryRemove(key, <span class="hljs-keyword">out</span> _);
            }

            <span class="hljs-keyword">if</span> (expiredKeys.Count &gt; <span class="hljs-number">0</span>)
            {
                Console.WriteLine(<span class="hljs-string">$"Cleaned up <span class="hljs-subst">{expiredKeys.Count}</span> expired cache entries"</span>);
            }
        }

        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">GetRequest</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Key { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty; }
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">SetRequest</span> 
        { 
            <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Key { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
            <span class="hljs-keyword">public</span> <span class="hljs-keyword">byte</span>[] Value { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = Array.Empty&lt;<span class="hljs-keyword">byte</span>&gt;();
            <span class="hljs-keyword">public</span> <span class="hljs-keyword">int</span>? ExpirationSeconds { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; }
        }
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">DeleteRequest</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Key { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty; }
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ExistsRequest</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Key { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty; }
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">SetResponse</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> Success { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } }
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">DeleteResponse</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> Success { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } }
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ExistsResponse</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> Exists { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } }
    }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicCacheClient</span> : <span class="hljs-title">IDistributedQuicCache</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicRpcClient _client;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task&lt;QuicCacheClient&gt; <span class="hljs-title">ConnectAsync</span>(<span class="hljs-params">
        <span class="hljs-keyword">string</span> hostname,
        <span class="hljs-keyword">int</span> port,
        CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> client = <span class="hljs-keyword">await</span> QuicRpcClient.ConnectAsync(hostname, port, maxConcurrentStreams: <span class="hljs-number">1000</span>, ct);
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> QuicCacheClient(client);
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-title">QuicCacheClient</span>(<span class="hljs-params">QuicRpcClient client</span>)</span>
    {
        _client = client;
    }

    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;CacheEntry?&gt; GetAsync(<span class="hljs-keyword">string</span> key, CancellationToken ct = <span class="hljs-keyword">default</span>)
    {
        <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Key = key };
        <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> _client.CallAsync(<span class="hljs-string">"Cache"</span>, <span class="hljs-string">"Get"</span>, payload, ct);
        <span class="hljs-keyword">return</span> JsonSerializer.Deserialize&lt;CacheEntry?&gt;(response.Span);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">SetAsync</span>(<span class="hljs-params">
        <span class="hljs-keyword">string</span> key, 
        <span class="hljs-keyword">byte</span>[] <span class="hljs-keyword">value</span>, 
        TimeSpan? expiration = <span class="hljs-literal">null</span>,
        CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> 
        { 
            Key = key, 
            Value = <span class="hljs-keyword">value</span>, 
            ExpirationSeconds = expiration.HasValue ? (<span class="hljs-keyword">int</span>)expiration.Value.TotalSeconds : (<span class="hljs-keyword">int</span>?)<span class="hljs-literal">null</span>
        };
        <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);
        <span class="hljs-keyword">await</span> _client.CallAsync(<span class="hljs-string">"Cache"</span>, <span class="hljs-string">"Set"</span>, payload, ct);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">bool</span>&gt; <span class="hljs-title">DeleteAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> key, CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Key = key };
        <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> _client.CallAsync(<span class="hljs-string">"Cache"</span>, <span class="hljs-string">"Delete"</span>, payload, ct);
        <span class="hljs-keyword">var</span> result = JsonSerializer.Deserialize&lt;DeleteResponse&gt;(response.Span);
        <span class="hljs-keyword">return</span> result?.Success ?? <span class="hljs-literal">false</span>;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;<span class="hljs-keyword">bool</span>&gt; <span class="hljs-title">ExistsAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> key, CancellationToken ct = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> { Key = key };
        <span class="hljs-keyword">var</span> payload = JsonSerializer.SerializeToUtf8Bytes(request);
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> _client.CallAsync(<span class="hljs-string">"Cache"</span>, <span class="hljs-string">"Exists"</span>, payload, ct);
        <span class="hljs-keyword">var</span> result = JsonSerializer.Deserialize&lt;ExistsResponse&gt;(response.Span);
        <span class="hljs-keyword">return</span> result?.Exists ?? <span class="hljs-literal">false</span>;
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">DeleteResponse</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> Success { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } }
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ExistsResponse</span> { <span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> Exists { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } }
}
</code></pre>
<h3 id="heading-cache-usage-example">Cache Usage Example</h3>
<pre><code class="lang-csharp"><span class="hljs-comment">// Start cache server</span>
<span class="hljs-keyword">var</span> cert = GenerateTestCertificate();
<span class="hljs-keyword">var</span> cacheServer = <span class="hljs-keyword">new</span> QuicCacheServer(<span class="hljs-keyword">new</span> IPEndPoint(IPAddress.Loopback, <span class="hljs-number">6000</span>), cert);
_ = cacheServer.StartAsync();

<span class="hljs-comment">// Connect client</span>
<span class="hljs-keyword">var</span> cache = <span class="hljs-keyword">await</span> QuicCacheClient.ConnectAsync(<span class="hljs-string">"localhost"</span>, <span class="hljs-number">6000</span>);

<span class="hljs-comment">// Store data</span>
<span class="hljs-keyword">var</span> userData = JsonSerializer.SerializeToUtf8Bytes(<span class="hljs-keyword">new</span> 
{ 
    UserId = <span class="hljs-string">"user-123"</span>, 
    Name = <span class="hljs-string">"Alice"</span>, 
    Email = <span class="hljs-string">"alice@example.com"</span> 
});

<span class="hljs-keyword">await</span> cache.SetAsync(<span class="hljs-string">"user:123"</span>, userData, TimeSpan.FromMinutes(<span class="hljs-number">5</span>));

<span class="hljs-comment">// Retrieve data</span>
<span class="hljs-keyword">var</span> cachedEntry = <span class="hljs-keyword">await</span> cache.GetAsync(<span class="hljs-string">"user:123"</span>);
<span class="hljs-keyword">if</span> (cachedEntry != <span class="hljs-literal">null</span>)
{
    <span class="hljs-keyword">var</span> user = JsonSerializer.Deserialize&lt;JsonElement&gt;(cachedEntry.Value);
    Console.WriteLine(<span class="hljs-string">$"Cached user: <span class="hljs-subst">{user}</span>"</span>);
}

<span class="hljs-comment">// Check existence</span>
<span class="hljs-keyword">var</span> exists = <span class="hljs-keyword">await</span> cache.ExistsAsync(<span class="hljs-string">"user:123"</span>);
Console.WriteLine(<span class="hljs-string">$"Key exists: <span class="hljs-subst">{exists}</span>"</span>);

<span class="hljs-comment">// Delete</span>
<span class="hljs-keyword">var</span> deleted = <span class="hljs-keyword">await</span> cache.DeleteAsync(<span class="hljs-string">"user:123"</span>);
Console.WriteLine(<span class="hljs-string">$"Deleted: <span class="hljs-subst">{deleted}</span>"</span>);
</code></pre>
<h2 id="heading-integration-with-aspnethttpaspnet-core">Integration with ASP.NET Core</h2>
<p>While full HTTP/3 support in ASP.NET Core is evolving, you can build hybrid applications that use QUIC for internal service-to-service communication.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">Startup</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">ConfigureServices</span>(<span class="hljs-params">IServiceCollection services</span>)</span>
    {
        services.AddControllers();

        <span class="hljs-comment">// Register QUIC cache as a singleton</span>
        services.AddSingleton&lt;IDistributedQuicCache&gt;(sp =&gt;
        {
            <span class="hljs-keyword">var</span> configuration = sp.GetRequiredService&lt;IConfiguration&gt;();
            <span class="hljs-keyword">var</span> cacheHost = configuration[<span class="hljs-string">"QuicCache:Host"</span>] ?? <span class="hljs-string">"localhost"</span>;
            <span class="hljs-keyword">var</span> cachePort = configuration.GetValue&lt;<span class="hljs-keyword">int</span>&gt;(<span class="hljs-string">"QuicCache:Port"</span>, <span class="hljs-number">6000</span>);

            <span class="hljs-comment">// Blocking here keeps registration simple; production code should prefer an async initialisation path</span>
            <span class="hljs-keyword">return</span> QuicCacheClient.ConnectAsync(cacheHost, cachePort).GetAwaiter().GetResult();
        });
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Configure</span>(<span class="hljs-params">IApplicationBuilder app</span>)</span>
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =&gt;
        {
            endpoints.MapControllers();
        });
    }
}

[<span class="hljs-meta">ApiController</span>]
[<span class="hljs-meta">Route(<span class="hljs-meta-string">"api/[controller]"</span>)</span>]
<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">UsersController</span> : <span class="hljs-title">ControllerBase</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> IDistributedQuicCache _cache;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">UsersController</span>(<span class="hljs-params">IDistributedQuicCache cache</span>)</span>
    {
        _cache = cache;
    }

    [<span class="hljs-meta">HttpGet(<span class="hljs-meta-string">"{userId}"</span>)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;IActionResult&gt; <span class="hljs-title">GetUser</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> userId</span>)</span>
    {
        <span class="hljs-keyword">var</span> cacheKey = <span class="hljs-string">$"user:<span class="hljs-subst">{userId}</span>"</span>;
        <span class="hljs-keyword">var</span> cached = <span class="hljs-keyword">await</span> _cache.GetAsync(cacheKey);

        <span class="hljs-keyword">if</span> (cached != <span class="hljs-literal">null</span>)
        {
            <span class="hljs-keyword">var</span> cachedUser = JsonSerializer.Deserialize&lt;User&gt;(cached.Value);
            <span class="hljs-keyword">return</span> Ok(<span class="hljs-keyword">new</span> { Source = <span class="hljs-string">"cache"</span>, Data = cachedUser });
        }

        <span class="hljs-comment">// Simulate database lookup</span>
        <span class="hljs-keyword">var</span> user = <span class="hljs-keyword">await</span> FetchUserFromDatabaseAsync(userId);

        <span class="hljs-comment">// Cache for 5 minutes</span>
        <span class="hljs-keyword">var</span> userData = JsonSerializer.SerializeToUtf8Bytes(user);
        <span class="hljs-keyword">await</span> _cache.SetAsync(cacheKey, userData, TimeSpan.FromMinutes(<span class="hljs-number">5</span>));

        <span class="hljs-keyword">return</span> Ok(<span class="hljs-keyword">new</span> { Source = <span class="hljs-string">"database"</span>, Data = user });
    }

    [<span class="hljs-meta">HttpPut(<span class="hljs-meta-string">"{userId}"</span>)</span>]
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;IActionResult&gt; <span class="hljs-title">UpdateUser</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> userId, [FromBody] User updatedUser</span>)</span>
    {
        <span class="hljs-comment">// Update in database</span>
        <span class="hljs-keyword">await</span> UpdateUserInDatabaseAsync(userId, updatedUser);

        <span class="hljs-comment">// Invalidate cache</span>
        <span class="hljs-keyword">await</span> _cache.DeleteAsync(<span class="hljs-string">$"user:<span class="hljs-subst">{userId}</span>"</span>);

        <span class="hljs-keyword">return</span> NoContent();
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task&lt;User&gt; <span class="hljs-title">FetchUserFromDatabaseAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> userId</span>)</span>
    {
        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">50</span>); <span class="hljs-comment">// Simulate DB latency</span>
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> User 
        { 
            UserId = userId, 
            Name = <span class="hljs-string">"Alice"</span>, 
            Email = <span class="hljs-string">"alice@example.com"</span> 
        };
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">UpdateUserInDatabaseAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> userId, User user</span>)</span>
    {
        <span class="hljs-keyword">await</span> Task.Delay(<span class="hljs-number">50</span>); <span class="hljs-comment">// Simulate DB latency</span>
    }

    <span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">User</span>
    {
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> UserId { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Name { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
        <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span> Email { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">set</span>; } = <span class="hljs-keyword">string</span>.Empty;
    }
}
</code></pre>
<h2 id="heading-monitoring-and-observability">Monitoring and Observability</h2>
<p>Production QUIC services need comprehensive monitoring, so let's add telemetry with System.Diagnostics.Metrics:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> System.Diagnostics;
<span class="hljs-keyword">using</span> System.Diagnostics.Metrics;

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicMetrics</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Meter _meter;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Counter&lt;<span class="hljs-keyword">long</span>&gt; _connectionsAccepted;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Counter&lt;<span class="hljs-keyword">long</span>&gt; _requestsProcessed;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Histogram&lt;<span class="hljs-keyword">double</span>&gt; _requestDuration;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Counter&lt;<span class="hljs-keyword">long</span>&gt; _errorsTotal;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ObservableGauge&lt;<span class="hljs-keyword">int</span>&gt; _activeConnections;

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">int</span> _currentConnections = <span class="hljs-number">0</span>;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QuicMetrics</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> serviceName</span>)</span>
    {
        _meter = <span class="hljs-keyword">new</span> Meter(<span class="hljs-string">$"QuicRpc.<span class="hljs-subst">{serviceName}</span>"</span>, <span class="hljs-string">"1.0.0"</span>);

        _connectionsAccepted = _meter.CreateCounter&lt;<span class="hljs-keyword">long</span>&gt;(
            <span class="hljs-string">"quic.connections.accepted"</span>,
            description: <span class="hljs-string">"Total number of QUIC connections accepted"</span>);

        _requestsProcessed = _meter.CreateCounter&lt;<span class="hljs-keyword">long</span>&gt;(
            <span class="hljs-string">"quic.requests.processed"</span>,
            description: <span class="hljs-string">"Total number of RPC requests processed"</span>);

        _requestDuration = _meter.CreateHistogram&lt;<span class="hljs-keyword">double</span>&gt;(
            <span class="hljs-string">"quic.request.duration"</span>,
            unit: <span class="hljs-string">"ms"</span>,
            description: <span class="hljs-string">"RPC request duration in milliseconds"</span>);

        _errorsTotal = _meter.CreateCounter&lt;<span class="hljs-keyword">long</span>&gt;(
            <span class="hljs-string">"quic.errors.total"</span>,
            description: <span class="hljs-string">"Total number of errors"</span>);

        _activeConnections = _meter.CreateObservableGauge&lt;<span class="hljs-keyword">int</span>&gt;(
            <span class="hljs-string">"quic.connections.active"</span>,
            () =&gt; _currentConnections,
            description: <span class="hljs-string">"Current number of active connections"</span>);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">RecordConnectionAccepted</span>(<span class="hljs-params"></span>)</span> =&gt; _connectionsAccepted.Add(<span class="hljs-number">1</span>);
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">RecordConnectionClosed</span>(<span class="hljs-params"></span>)</span> =&gt; Interlocked.Decrement(<span class="hljs-keyword">ref</span> _currentConnections);
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">IncrementActiveConnections</span>(<span class="hljs-params"></span>)</span> =&gt; Interlocked.Increment(<span class="hljs-keyword">ref</span> _currentConnections);

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">RecordRequest</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> methodName, <span class="hljs-keyword">double</span> durationMs, <span class="hljs-keyword">bool</span> success</span>)</span>
    {
        _requestsProcessed.Add(<span class="hljs-number">1</span>, <span class="hljs-keyword">new</span> KeyValuePair&lt;<span class="hljs-keyword">string</span>, <span class="hljs-keyword">object</span>?&gt;(<span class="hljs-string">"method"</span>, methodName));
        _requestDuration.Record(durationMs, <span class="hljs-keyword">new</span> KeyValuePair&lt;<span class="hljs-keyword">string</span>, <span class="hljs-keyword">object</span>?&gt;(<span class="hljs-string">"method"</span>, methodName));

        <span class="hljs-keyword">if</span> (!success)
        {
            _errorsTotal.Add(<span class="hljs-number">1</span>, <span class="hljs-keyword">new</span> KeyValuePair&lt;<span class="hljs-keyword">string</span>, <span class="hljs-keyword">object</span>?&gt;(<span class="hljs-string">"method"</span>, methodName));
        }
    }
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">InstrumentedQuicRpcServer</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicListener _listener;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> Dictionary&lt;<span class="hljs-keyword">string</span>, IRpcService&gt; _services = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> CancellationTokenSource _cts = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicMetrics _metrics;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">InstrumentedQuicRpcServer</span>(<span class="hljs-params">
        IPEndPoint endpoint, 
        X509Certificate2 certificate,
        <span class="hljs-keyword">string</span> serviceName</span>)</span>
    {
        _metrics = <span class="hljs-keyword">new</span> QuicMetrics(serviceName);

        <span class="hljs-keyword">var</span> listenerOptions = <span class="hljs-keyword">new</span> QuicListenerOptions
        {
            ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt; 
            { 
                <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"quic-rpc"</span>) 
            },
            ConnectionOptionsCallback = (connection, ssl, token) =&gt;
            {
                <span class="hljs-keyword">var</span> serverOptions = <span class="hljs-keyword">new</span> QuicServerConnectionOptions
                {
                    DefaultStreamErrorCode = <span class="hljs-number">0</span>,
                    DefaultCloseErrorCode = <span class="hljs-number">0</span>,
                    ServerAuthenticationOptions = <span class="hljs-keyword">new</span> SslServerAuthenticationOptions
                    {
                        ApplicationProtocols = <span class="hljs-keyword">new</span> List&lt;SslApplicationProtocol&gt;
                        {
                            <span class="hljs-keyword">new</span> SslApplicationProtocol(<span class="hljs-string">"quic-rpc"</span>)
                        },
                        ServerCertificate = certificate
                    }
                };
                <span class="hljs-keyword">return</span> ValueTask.FromResult(serverOptions);
            },
            ListenEndPoint = endpoint
        };

        <span class="hljs-comment">// Blocking in the constructor keeps the example compact; consider a static async factory instead</span>
        _listener = QuicListener.ListenAsync(listenerOptions).GetAwaiter().GetResult();
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">RegisterService</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> serviceName, IRpcService service</span>)</span>
    {
        _services[serviceName] = service;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">StartAsync</span>(<span class="hljs-params"></span>)</span>
    {
        Console.WriteLine(<span class="hljs-string">$"Instrumented QUIC RPC server listening on <span class="hljs-subst">{_listener.LocalEndPoint}</span>"</span>);

        <span class="hljs-keyword">while</span> (!_cts.Token.IsCancellationRequested)
        {
            <span class="hljs-keyword">var</span> connection = <span class="hljs-keyword">await</span> _listener.AcceptConnectionAsync(_cts.Token);
            _metrics.RecordConnectionAccepted();
            _metrics.IncrementActiveConnections();
            _ = HandleConnectionAsync(connection);
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleConnectionAsync</span>(<span class="hljs-params">QuicConnection connection</span>)</span>
    {
        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">while</span> (!_cts.Token.IsCancellationRequested)
            {
                <span class="hljs-keyword">var</span> stream = <span class="hljs-keyword">await</span> connection.AcceptInboundStreamAsync(_cts.Token);
                _ = HandleStreamAsync(stream);
            }
        }
        <span class="hljs-keyword">catch</span> (QuicException ex) <span class="hljs-keyword">when</span> (ex.QuicError == QuicError.ConnectionAborted)
        {
            <span class="hljs-comment">// Normal closure</span>
        }
        <span class="hljs-keyword">finally</span>
        {
            _metrics.RecordConnectionClosed();
            <span class="hljs-keyword">await</span> connection.CloseAsync(<span class="hljs-number">0</span>);
            <span class="hljs-keyword">await</span> connection.DisposeAsync();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">HandleStreamAsync</span>(<span class="hljs-params">QuicStream stream</span>)</span>
    {
        <span class="hljs-keyword">var</span> stopwatch = Stopwatch.StartNew();
        <span class="hljs-keyword">var</span> success = <span class="hljs-literal">false</span>;
        <span class="hljs-keyword">string</span> methodName = <span class="hljs-string">"unknown"</span>;

        <span class="hljs-keyword">try</span>
        {
            <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">await</span> RpcMessage.ReadFromStreamAsync(stream, _cts.Token);
            methodName = request.MethodName;

            <span class="hljs-keyword">var</span> parts = request.MethodName.Split(<span class="hljs-string">'.'</span>, <span class="hljs-number">2</span>);
            <span class="hljs-keyword">if</span> (parts.Length != <span class="hljs-number">2</span> || !_services.TryGetValue(parts[<span class="hljs-number">0</span>], <span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> service))
            {
                <span class="hljs-keyword">await</span> SendErrorAsync(stream, request.MessageId, <span class="hljs-string">"Service not found"</span>);
                <span class="hljs-keyword">return</span>;
            }

            <span class="hljs-keyword">var</span> responsePayload = <span class="hljs-keyword">await</span> service.HandleAsync(parts[<span class="hljs-number">1</span>], request.Payload, _cts.Token);

            <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> RpcMessage
            {
                MessageId = request.MessageId,
                MethodName = request.MethodName,
                Payload = responsePayload
            };

            <span class="hljs-keyword">await</span> response.WriteToStreamAsync(stream, _cts.Token);
            stream.CompleteWrites();
            success = <span class="hljs-literal">true</span>;
        }
        <span class="hljs-keyword">catch</span> (Exception ex)
        {
            Console.WriteLine(<span class="hljs-string">$"Stream error: <span class="hljs-subst">{ex.Message}</span>"</span>);
        }
        <span class="hljs-keyword">finally</span>
        {
            stopwatch.Stop();
            _metrics.RecordRequest(methodName, stopwatch.Elapsed.TotalMilliseconds, success);
            <span class="hljs-keyword">await</span> stream.DisposeAsync();
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">SendErrorAsync</span>(<span class="hljs-params">QuicStream stream, <span class="hljs-keyword">long</span> messageId, <span class="hljs-keyword">string</span> error</span>)</span>
    {
        <span class="hljs-keyword">var</span> errorBytes = Encoding.UTF8.GetBytes(error);
        <span class="hljs-keyword">var</span> response = <span class="hljs-keyword">new</span> RpcMessage
        {
            MessageId = messageId,
            MethodName = <span class="hljs-string">"error"</span>,
            Payload = errorBytes
        };
        <span class="hljs-keyword">await</span> response.WriteToStreamAsync(stream, _cts.Token);
        stream.CompleteWrites();
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">StopAsync</span>(<span class="hljs-params"></span>)</span>
    {
        _cts.Cancel();
        <span class="hljs-keyword">await</span> _listener.DisposeAsync();
    }
}
</code></pre>
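<p>The counters and histograms above are standard System.Diagnostics.Metrics instruments, so you can observe them in-process with a MeterListener before wiring up a full exporter such as OpenTelemetry. The sketch below is a hypothetical standalone check: it recreates a meter using the same QuicRpc.* naming convention rather than reusing the QuicMetrics class directly.</p>
<pre><code class="lang-csharp">using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;

var observed = new List&lt;string&gt;();

using var listener = new MeterListener();
listener.InstrumentPublished = (instrument, l) =&gt;
{
    <span class="hljs-comment">// Subscribe to every meter that follows the QuicRpc.* convention used above</span>
    if (instrument.Meter.Name.StartsWith("QuicRpc."))
        l.EnableMeasurementEvents(instrument);
};
listener.SetMeasurementEventCallback&lt;long&gt;((instrument, value, tags, state) =&gt;
    observed.Add($"{instrument.Name}={value}"));
listener.Start();

// Stand-in for the meter that QuicMetrics("demo") would create
using var meter = new Meter("QuicRpc.demo", "1.0.0");
var accepted = meter.CreateCounter&lt;long&gt;("quic.connections.accepted");
accepted.Add(1);

Console.WriteLine(string.Join(", ", observed)); // quic.connections.accepted=1
</code></pre>
<p>The measurement callback fires synchronously inside Add, so this pattern also works for lightweight in-process health checks.</p>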
<h2 id="heading-production-deployment-considerations">Production Deployment Considerations</h2>
<h3 id="heading-certificate-management">Certificate Management</h3>
<p>In production, use proper certificates from a CA:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">CertificateManager</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> X509Certificate2 <span class="hljs-title">LoadFromFile</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> pfxPath, <span class="hljs-keyword">string</span> password</span>)</span>
    {
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> X509Certificate2(pfxPath, password, 
            X509KeyStorageFlags.Exportable | X509KeyStorageFlags.PersistKeySet);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> X509Certificate2 <span class="hljs-title">LoadFromAzureKeyVault</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> keyVaultUrl, <span class="hljs-keyword">string</span> certName</span>)</span>
    {
        <span class="hljs-comment">// Use Azure.Security.KeyVault.Certificates</span>
        <span class="hljs-comment">// Implementation depends on your Azure setup</span>
        <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> NotImplementedException(<span class="hljs-string">"Integrate with Azure Key Vault"</span>);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> X509Certificate2 <span class="hljs-title">LoadFromStore</span>(<span class="hljs-params">StoreName storeName, StoreLocation storeLocation, <span class="hljs-keyword">string</span> thumbprint</span>)</span>
    {
        <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> store = <span class="hljs-keyword">new</span> X509Store(storeName, storeLocation);
        store.Open(OpenFlags.ReadOnly);

        <span class="hljs-keyword">var</span> certs = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, <span class="hljs-literal">false</span>);
        <span class="hljs-keyword">if</span> (certs.Count == <span class="hljs-number">0</span>)
            <span class="hljs-keyword">throw</span> <span class="hljs-keyword">new</span> InvalidOperationException(<span class="hljs-string">$"Certificate with thumbprint <span class="hljs-subst">{thumbprint}</span> not found"</span>);

        <span class="hljs-keyword">return</span> certs[<span class="hljs-number">0</span>];
    }
}
</code></pre>
<h3 id="heading-load-balancing">Load Balancing</h3>
<p>QUIC's connection IDs let a connection survive network path changes, which keeps long-lived connections stable behind a balancer. A simple client-side round-robin distributor looks like this:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">QuicLoadBalancer</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> List&lt;(<span class="hljs-keyword">string</span> Host, <span class="hljs-keyword">int</span> Port)&gt; _backends;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">int</span> _currentIndex = <span class="hljs-number">0</span>;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">QuicLoadBalancer</span>(<span class="hljs-params">List&lt;(<span class="hljs-keyword">string</span> Host, <span class="hljs-keyword">int</span> Port</span>)&gt; backends)</span>
    {
        _backends = backends;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;QuicRpcClient&gt; <span class="hljs-title">GetClientAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-comment">// Simple round-robin - production would use health checks.</span>
        <span class="hljs-comment">// Casting through uint keeps the index non-negative after the counter wraps.</span>
        <span class="hljs-keyword">var</span> index = (<span class="hljs-keyword">int</span>)((<span class="hljs-keyword">uint</span>)Interlocked.Increment(<span class="hljs-keyword">ref</span> _currentIndex) % (<span class="hljs-keyword">uint</span>)_backends.Count);
        <span class="hljs-keyword">var</span> backend = _backends[index];

        <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> QuicRpcClient.ConnectAsync(backend.Host, backend.Port);
    }
}

<span class="hljs-comment">// Usage</span>
<span class="hljs-keyword">var</span> loadBalancer = <span class="hljs-keyword">new</span> QuicLoadBalancer(<span class="hljs-keyword">new</span> List&lt;(<span class="hljs-keyword">string</span>, <span class="hljs-keyword">int</span>)&gt;
{
    (<span class="hljs-string">"quic-server-1.example.com"</span>, <span class="hljs-number">443</span>),
    (<span class="hljs-string">"quic-server-2.example.com"</span>, <span class="hljs-number">443</span>),
    (<span class="hljs-string">"quic-server-3.example.com"</span>, <span class="hljs-number">443</span>)
});

<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">await</span> loadBalancer.GetClientAsync();
</code></pre>
<h3 id="heading-error-handling-and-retry-logic">Error Handling and Retry Logic</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ResilientQuicClient</span> : <span class="hljs-title">IAsyncDisposable</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> QuicRpcClient _client;
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> RetryPolicy _retryPolicy;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">ResilientQuicClient</span>(<span class="hljs-params">QuicRpcClient client, <span class="hljs-keyword">int</span> maxRetries = <span class="hljs-number">3</span></span>)</span>
    {
        _client = client;
        _retryPolicy = <span class="hljs-keyword">new</span> RetryPolicy(maxRetries);
    }

    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt;&gt; CallWithRetryAsync(
        <span class="hljs-keyword">string</span> serviceName,
        <span class="hljs-keyword">string</span> methodName,
        ReadOnlyMemory&lt;<span class="hljs-keyword">byte</span>&gt; payload,
        CancellationToken ct = <span class="hljs-keyword">default</span>)
    {
        <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> _retryPolicy.ExecuteAsync(<span class="hljs-keyword">async</span> () =&gt;
            <span class="hljs-keyword">await</span> _client.CallAsync(serviceName, methodName, payload, ct));
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> ValueTask <span class="hljs-title">DisposeAsync</span>(<span class="hljs-params"></span>)</span>
    {
        <span class="hljs-keyword">await</span> _client.DisposeAsync();
    }

    <span class="hljs-keyword">private</span> <span class="hljs-keyword">class</span> <span class="hljs-title">RetryPolicy</span>
    {
        <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> <span class="hljs-keyword">int</span> _maxRetries;

        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">RetryPolicy</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> maxRetries</span>)</span>
        {
            _maxRetries = maxRetries;
        }

        <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> <span class="hljs-title">Task</span>&lt;<span class="hljs-title">T</span>&gt; <span class="hljs-title">ExecuteAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">Func&lt;Task&lt;T&gt;&gt; action</span>)</span>
        {
            <span class="hljs-keyword">var</span> retryCount = <span class="hljs-number">0</span>;
            <span class="hljs-keyword">var</span> baseDelay = TimeSpan.FromMilliseconds(<span class="hljs-number">100</span>);

            <span class="hljs-keyword">while</span> (<span class="hljs-literal">true</span>)
            {
                <span class="hljs-keyword">try</span>
                {
                    <span class="hljs-keyword">return</span> <span class="hljs-keyword">await</span> action();
                }
                <span class="hljs-keyword">catch</span> (Exception ex) <span class="hljs-keyword">when</span> (retryCount &lt; _maxRetries &amp;&amp; IsTransient(ex))
                {
                    retryCount++;
                    <span class="hljs-keyword">var</span> delay = TimeSpan.FromMilliseconds(
                        baseDelay.TotalMilliseconds * Math.Pow(<span class="hljs-number">2</span>, retryCount - <span class="hljs-number">1</span>));

                    Console.WriteLine(<span class="hljs-string">$"Retry <span class="hljs-subst">{retryCount}</span>/<span class="hljs-subst">{_maxRetries}</span> after <span class="hljs-subst">{delay.TotalMilliseconds}</span>ms"</span>);
                    <span class="hljs-keyword">await</span> Task.Delay(delay);
                }
            }
        }

        <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">bool</span> <span class="hljs-title">IsTransient</span>(<span class="hljs-params">Exception ex</span>)</span>
        {
            <span class="hljs-keyword">return</span> ex <span class="hljs-keyword">is</span> QuicException qe &amp;&amp; 
                   (qe.QuicError == QuicError.ConnectionTimeout ||
                    qe.QuicError == QuicError.ConnectionRefused);
        }
    }
}
</code></pre>
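<p>The exponential backoff that RetryPolicy computes is easy to sanity-check in isolation. This standalone sketch reproduces the same formula (100 ms base delay, doubling per attempt):</p>
<pre><code class="lang-csharp">using System;

// Same formula as RetryPolicy.ExecuteAsync above: baseDelay * 2^(retryCount - 1)
static TimeSpan BackoffDelay(int retryCount, double baseMs = 100) =&gt;
    TimeSpan.FromMilliseconds(baseMs * Math.Pow(2, retryCount - 1));

for (var retry = 1; retry &lt;= 3; retry++)
    Console.WriteLine($"retry {retry}: {BackoffDelay(retry).TotalMilliseconds} ms");
// retry 1: 100 ms
// retry 2: 200 ms
// retry 3: 400 ms
</code></pre>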
<h2 id="heading-the-future-http3">The Future: HTTP/3</h2>
<p>While we've built custom RPC protocols, the future of QUIC in .NET includes native HTTP/3 support:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// HTTP/3 client (supported out of the box since .NET 7)</span>
<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> HttpClient(<span class="hljs-keyword">new</span> SocketsHttpHandler
{
    EnableMultipleHttp3Connections = <span class="hljs-literal">true</span>
})
{
    DefaultRequestVersion = HttpVersion.Version30,
    DefaultVersionPolicy = HttpVersionPolicy.RequestVersionOrHigher
};

<span class="hljs-keyword">var</span> response = <span class="hljs-keyword">await</span> client.GetAsync(<span class="hljs-string">"https://quic.example.com/api/data"</span>);
<span class="hljs-comment">// Automatically uses QUIC if server supports HTTP/3</span>
</code></pre>
<p>MsQuic represents a shift in how we build distributed .NET systems. By eliminating the latency tax of TCP and TLS handshakes, providing true multiplexed streams, and enabling connection migration, it unlocks architectural patterns that were previously impractical.</p>
<p>Things to remember:</p>
<ol>
<li><p><strong>QUIC removes latency barriers</strong>: 0-1 RTT connection setup vs 2-3 RTT for TCP+TLS</p>
</li>
<li><p><strong>Stream independence</strong>: No head-of-line blocking between streams</p>
</li>
<li><p><strong>Connection resilience</strong>: Survives IP changes through Connection IDs</p>
</li>
<li><p><strong>Production ready</strong>: Already running in Windows, Azure, and Xbox</p>
</li>
</ol>
<p>You can start experimenting with MsQuic today:</p>
<ul>
<li><p>Build internal RPC frameworks for microservices</p>
</li>
<li><p>Replace Redis with QUIC-based caching</p>
</li>
<li><p>Implement real-time data synchronization</p>
</li>
<li><p>Create edge computing pipelines with low latency</p>
</li>
</ul>
<p>The transport layer has been holding back distributed systems for decades. QUIC finally removes that constraint.</p>
<p><strong>All code examples are from Microsoft and are available at</strong> <a target="_blank" href="https://github.com/microsoft/msquic"><strong>https://github.com/microsoft/msquic</strong></a></p>
]]></content:encoded></item><item><title><![CDATA[REPR: The Quiet Evolution of Clean Architecture and CQRS in Modern .NET]]></title><description><![CDATA[If you’ve been building serious .NET systems for a few years, you’ve probably ended up with vertical feature slices, slim controllers, and a handler that orchestrates domain work. You may not have given it a name, but there’s a high chance you ended ...]]></description><link>https://dotnetdigest.com/repr-the-quiet-evolution-of-clean-architecture-and-cqrs-in-modern-net</link><guid isPermaLink="true">https://dotnetdigest.com/repr-the-quiet-evolution-of-clean-architecture-and-cqrs-in-modern-net</guid><category><![CDATA[.NET]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[software development]]></category><category><![CDATA[design patterns]]></category><category><![CDATA[C#]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Fri, 24 Oct 2025 17:06:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1761325677491/bb71a6f3-4426-4ec1-9571-c3471081d5db.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you’ve been building serious .NET systems for a few years, you’ve probably ended up with vertical feature slices, slim controllers, and a handler that orchestrates domain work. You may not have given it a name, but there’s a high chance you ended up with <strong>REPR</strong> - <strong>Request &gt; Entity &gt; Processor &gt; Response</strong>. It’s not a framework or a library, it’s a way of composing features so that your application expresses intentions rather than mechanics. REPR isn’t a rejection of Clean Architecture or CQRS. It’s the next, quieter step those approaches converge toward when you scale.</p>
<p>Across large engineering teams, from bank-backed scale-ups to FAANG, you’ll find structures that look suspiciously like REPR even if nobody uses the term. Netflix’s emphasis on single-responsibility components, GitHub’s product-slice autonomy, and Monzo’s event-first thinking in platform services all repeat the same move: make each feature a self-contained pipeline with clear boundaries. In the .NET world, REPR is how that idea lands in code. Below we’ll look at what REPR looks like, why it’s a natural evolution of Clean Architecture + CQRS, and how to adopt it incrementally without too much hassle.</p>
<h2 id="heading-why-repr-emerges-in-grown-up-codebases">Why REPR emerges in grown up codebases</h2>
<p>Classic MVC puts controllers at the centre, and everything else hangs off them. That’s fine until it isn’t. As features multiply, controllers become routers with opinions, and the real decisions leak into ad hoc services. Clean Architecture pushes the decisions into the domain and forces dependencies inward, which is a huge improvement, but teams still argue about where use cases should live and how to keep handlers from collapsing into thin orchestration.</p>
<p>CQRS helped by splitting read and write flows, but it didn’t dictate <em>how to structure a single feature unit</em>. You still needed a pattern to express the shape of a use case. REPR fills the gap by making each feature a small, discoverable pipeline:</p>
<ul>
<li><p>A <strong>Request</strong> that is the <em>user’s intent</em> (API input, message payload, scheduled job trigger).</p>
</li>
<li><p>An <strong>Entity</strong> that is the <em>domain root(s)</em> we’re going to change or read consistently.</p>
</li>
<li><p>A <strong>Processor</strong> that <em>applies policy</em> and coordinates domain work (your use-case brain).</p>
</li>
<li><p>A <strong>Response</strong> that returns <em>business level results</em> (API DTO, event, or status).</p>
</li>
</ul>
<p>REPR shifts the centre of gravity away from controllers and towards the feature. You no longer ask “which controller?” but “which intent?”. The controller becomes a tiny adapter whose only job is to turn an HTTP exchange into a Request and then hand it to the Processor. The Processor loads and manipulates Entities under invariant rules and emits a Response. That’s it. It is CQRS by posture, Clean Architecture by dependency rule, and domain-first by attitude.</p>
<h2 id="heading-the-shape-of-a-vertical-slice">The shape of a vertical slice</h2>
<p>A typical feature directory contains only what the feature needs, nothing global, nothing clever. Here’s a sketch for “Approve Submission”, but the pattern holds for pricing, quotes, orders, or any other business verb.</p>
<pre><code class="lang-csharp">/Features/Submissions/Approve
  ApproveSubmissionRequest.cs
  ApproveSubmissionResponse.cs
  ApproveSubmissionProcessor.cs
  Submission.cs                <span class="hljs-comment">// Aggregate root for this slice</span>
  SubmissionRepository.cs      <span class="hljs-comment">// Port to persistence</span>
  ApproveSubmissionEndpoint.cs <span class="hljs-comment">// Thin HTTP adapter</span>
</code></pre>
<p>Each file is small. The endpoint delegates, the processor thinks, the entity enforces invariants. The repository is a port that returns or persists the entity in a shape that suits the domain, not the database.</p>
<h2 id="heading-modern-net-minimal-noise">Modern .NET, minimal noise</h2>
<p>You don’t need a library to do REPR. Modern .NET gives you everything out of the box. The samples below use primary constructors, results for error flow, and a validator that sits in front of the Processor. Use any flavours you like; the pattern is independent of frameworks.</p>
<h3 id="heading-the-request-amp-response">The Request &amp; Response</h3>
<pre><code class="lang-csharp"><span class="hljs-comment">// Request → intent, not transport.</span>
<span class="hljs-comment">// Use PascalCase for commands in CQRS/REPR; treat it like a business message.</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> record <span class="hljs-title">ApproveSubmissionRequest</span>(<span class="hljs-params"><span class="hljs-keyword">long</span> SubmissionId, <span class="hljs-keyword">string</span> ApprovedBy</span>)</span>;

<span class="hljs-comment">// Response → business outcome, not entity leak.</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> record <span class="hljs-title">ApproveSubmissionResponse</span>(<span class="hljs-params">
    <span class="hljs-keyword">long</span> SubmissionId,
    DateTime ApprovedAtUtc,
    <span class="hljs-keyword">string</span> ApprovedBy,
    <span class="hljs-keyword">string</span> StatusMessage</span>)</span>;
</code></pre>
<p>Keep them tiny. Requests model <em>intent</em>, not your database. Responses are <em>what happened</em>, not your EF entities. If you need pagination, projections, or hyperlinks, add them deliberately.</p>
<h3 id="heading-the-entity">The Entity</h3>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">Submission</span>(<span class="hljs-params"><span class="hljs-keyword">long</span> id, DateTime createdAtUtc</span>)</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">long</span> Id { <span class="hljs-keyword">get</span>; } = id;
    <span class="hljs-keyword">public</span> DateTime CreatedAtUtc { <span class="hljs-keyword">get</span>; } = createdAtUtc;
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">bool</span> IsApproved { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">private</span> <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> DateTime? ApprovedAtUtc { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">private</span> <span class="hljs-keyword">set</span>; }
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">string</span>? ApprovedBy { <span class="hljs-keyword">get</span>; <span class="hljs-keyword">private</span> <span class="hljs-keyword">set</span>; }

    <span class="hljs-function"><span class="hljs-keyword">public</span> Result <span class="hljs-title">Approve</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> approver, DateTime nowUtc</span>)</span>
    {
        <span class="hljs-keyword">if</span> (IsApproved)
            <span class="hljs-keyword">return</span> Result.Fail(<span class="hljs-string">"Submission is already approved."</span>);

        <span class="hljs-keyword">if</span> (<span class="hljs-keyword">string</span>.IsNullOrWhiteSpace(approver))
            <span class="hljs-keyword">return</span> Result.Fail(<span class="hljs-string">"Approver is required."</span>);

        IsApproved = <span class="hljs-literal">true</span>;
        ApprovedAtUtc = nowUtc;
        ApprovedBy = approver;

        <span class="hljs-keyword">return</span> Result.Ok();
    }
}
</code></pre>
<p>The entity owns the business rules. The Processor doesn’t toggle flags directly; it calls <code>Approve()</code> and handles the result. That one move protects you from a thousand “just set the field” PRs.</p>
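<p>The samples in this article lean on a small <code>Result</code> type that is never shown. Below is a minimal sketch of the shape they assume; this is an illustration only, not a prescribed implementation, and libraries such as FluentResults or ErrorOr fill the same role:</p>
<pre><code class="lang-csharp">// Minimal Result/Result&lt;T&gt; sketch matching the calls used in these examples.
public sealed record ResultError(string Message);

public record Result(bool IsSuccess, string? Error = null)
{
    public bool IsFailure =&gt; !IsSuccess;
    public static Result Ok() =&gt; new(true);
    public static Result Fail(string error) =&gt; new(false, error);
    public static Result&lt;T&gt; Ok&lt;T&gt;(T value) =&gt; new(true, value, null);
    public static Result&lt;T&gt; Fail&lt;T&gt;(string error) =&gt; new(false, default, error);
    // Re-wrap a plain failure as a typed one (used by the Processor).
    public Result&lt;T&gt; Cast&lt;T&gt;() =&gt; new(false, default, Error);
}

public record Result&lt;T&gt;(bool IsSuccess, T? Value, string? Error)
{
    public bool IsFailure =&gt; !IsSuccess;
    public TOut Match&lt;TOut&gt;(Func&lt;T, TOut&gt; onOk, Func&lt;ResultError, TOut&gt; onError)
        =&gt; IsSuccess ? onOk(Value!) : onError(new ResultError(Error ?? "Unknown error"));
}
</code></pre>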
<h3 id="heading-the-repository-port">The Repository port</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">ISubmissionRepository</span>
{
    Task&lt;Submission?&gt; GetAsync(<span class="hljs-keyword">long</span> id, CancellationToken stopToken);
    <span class="hljs-function">Task <span class="hljs-title">SaveAsync</span>(<span class="hljs-params">Submission submission, CancellationToken stopToken</span>)</span>;
}
</code></pre>
<p>Implementation can be EF Core, Dapper, or an API call; the feature doesn’t care. The Processor depends on the interface, not the storage.</p>
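<p>For illustration, an EF Core adapter for that port might look like this. The <code>MyDbContext</code> and its <code>Submissions</code> set are assumptions for the sketch:</p>
<pre><code class="lang-csharp">using Microsoft.EntityFrameworkCore;

// Hypothetical EF Core implementation of the ISubmissionRepository port.
public sealed class EfSubmissionRepository(MyDbContext db) : ISubmissionRepository
{
    public Task&lt;Submission?&gt; GetAsync(long id, CancellationToken stopToken) =&gt;
        db.Submissions.FirstOrDefaultAsync(s =&gt; s.Id == id, stopToken);

    public async Task SaveAsync(Submission submission, CancellationToken stopToken)
    {
        // Attach if EF isn't already tracking the aggregate, then persist.
        if (db.Entry(submission).State == EntityState.Detached)
            db.Submissions.Update(submission);

        await db.SaveChangesAsync(stopToken);
    }
}
</code></pre>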
<h3 id="heading-the-processor">The Processor</h3>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">ApproveSubmissionProcessor</span>(<span class="hljs-params">ISubmissionRepository repo, IClock clock</span>)</span>
{   
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;Result&lt;ApproveSubmissionResponse&gt;&gt; Handle(
        ApproveSubmissionRequest request,
        CancellationToken stopToken)
    {
        <span class="hljs-keyword">var</span> submission = <span class="hljs-keyword">await</span> repo.GetAsync(request.SubmissionId, stopToken);
        <span class="hljs-keyword">if</span> (submission <span class="hljs-keyword">is</span> <span class="hljs-literal">null</span>)
            <span class="hljs-keyword">return</span> Result.Fail&lt;ApproveSubmissionResponse&gt;(<span class="hljs-string">"Submission not found."</span>);

        <span class="hljs-keyword">var</span> approved = submission.Approve(request.ApprovedBy, clock.UtcNow);
        <span class="hljs-keyword">if</span> (approved.IsFailure)
            <span class="hljs-keyword">return</span> approved.Cast&lt;ApproveSubmissionResponse&gt;();

        <span class="hljs-keyword">await</span> repo.SaveAsync(submission, stopToken);

        <span class="hljs-keyword">return</span> Result.Ok(<span class="hljs-keyword">new</span> ApproveSubmissionResponse(
            submission.Id,
            submission.ApprovedAtUtc!.Value,
            submission.ApprovedBy!,
            <span class="hljs-string">"Approved"</span>));
    }
}
</code></pre>
<p>The Processor is the use case script. Short, readable, and nothing infrastructural. Validation sits just in front of it, invariants sit just beneath it.</p>
<h3 id="heading-the-endpoint-thin-adapter">The Endpoint (thin adapter)</h3>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ApproveSubmissionEndpoint</span> : <span class="hljs-title">IEndpoint</span>
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">MapEndpoint</span>(<span class="hljs-params">IEndpointRouteBuilder app</span>)</span>
    {
        app.MapPost(<span class="hljs-string">"/submission/submissions/{id:long}/approve"</span>,
            <span class="hljs-keyword">async</span> (<span class="hljs-keyword">long</span> id, [FromBody] ApproveRequestBody body,
                   ApproveSubmissionProcessor processor, CancellationToken stopToken) =&gt;
            {
                <span class="hljs-keyword">var</span> request = <span class="hljs-keyword">new</span> ApproveSubmissionRequest(id, body.ApprovedBy);
                <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> processor.Handle(request, stopToken);
                <span class="hljs-keyword">return</span> result.Match(
                    ok  =&gt; Results.Ok(ok),
                    err =&gt; Results.Problem(title: <span class="hljs-string">"Approval failed"</span>, detail: err.Message));
            })
           .WithName(<span class="hljs-string">"ApproveSubmission"</span>);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> record <span class="hljs-title">ApproveRequestBody</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> ApprovedBy</span>)</span>;
}
</code></pre>
<p>Controllers aren’t wrong, they’re just no longer the centrepiece. The endpoint does translation and nothing else.</p>
<h2 id="heading-validation-without-noise">Validation without noise</h2>
<p>Place transport level checks (shape, ranges, formats, UTC dates) in a validator that runs before the Processor. Keep invariant enforcement inside the Entity. That separation makes your domain honest and your API polite.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> FluentValidation;

<span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> <span class="hljs-keyword">class</span> <span class="hljs-title">ApproveSubmissionRequestValidator</span> : <span class="hljs-title">AbstractValidator</span>&lt;<span class="hljs-title">ApproveSubmissionRequest</span>&gt;
{
    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">ApproveSubmissionRequestValidator</span>(<span class="hljs-params"></span>)</span>
    {
        RuleLevelCascadeMode = CascadeMode.Stop;

        RuleFor(r =&gt; r.SubmissionId).GreaterThan(<span class="hljs-number">0</span>);
        RuleFor(r =&gt; r.ApprovedBy).NotEmpty().MaximumLength(<span class="hljs-number">200</span>);
    }
}
</code></pre>
<p>Wire it as middleware or as a pipeline behaviour, either way, the Processor sees only valid intent.</p>
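<p>If you take the behaviour route, a sketch might look like this. It assumes the <code>IPipelineBehavior</code> shape defined in the next section; the registration details are up to you:</p>
<pre><code class="lang-csharp">using FluentValidation;

// Hypothetical validation behaviour that short-circuits before the Processor.
public sealed class ValidationBehavior&lt;TRequest, TResponse&gt;(IValidator&lt;TRequest&gt; validator)
    : IPipelineBehavior&lt;TRequest, TResponse&gt;
{
    public async Task&lt;Result&lt;TResponse&gt;&gt; Handle(
        TRequest request, CancellationToken stopToken,
        Func&lt;TRequest, CancellationToken, Task&lt;Result&lt;TResponse&gt;&gt;&gt; next)
    {
        var validation = await validator.ValidateAsync(request, stopToken);
        if (!validation.IsValid)
            return Result.Fail&lt;TResponse&gt;(string.Join("; ",
                validation.Errors.Select(e =&gt; e.ErrorMessage)));

        // Only valid intent reaches the Processor.
        return await next(request, stopToken);
    }
}
</code></pre>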
<h2 id="heading-pipelines-behaviours-amp-middleware">Pipelines, behaviours &amp; middleware</h2>
<p>REPR shines when you add pipeline behaviours around your Processors: logging, correlation IDs, retry policies, idempotency checks, and authorisation guards. Behaviours wrap every Processor’s <code>Handle</code> call, so you get consistent cross-cutting concerns without contaminating business logic.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IRequestHandler</span>&lt;<span class="hljs-title">TRequest</span>, <span class="hljs-title">TResponse</span>&gt;
{
    Task&lt;Result&lt;TResponse&gt;&gt; Handle(TRequest request, CancellationToken stopToken);
}

<span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IPipelineBehavior</span>&lt;<span class="hljs-title">TRequest</span>, <span class="hljs-title">TResponse</span>&gt;
{
    Task&lt;Result&lt;TResponse&gt;&gt; Handle(
        TRequest request,
        CancellationToken stopToken,
        Func&lt;TRequest, CancellationToken, Task&lt;Result&lt;TResponse&gt;&gt;&gt; next);
}

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">LoggingBehavior</span>&lt;<span class="hljs-title">TRequest</span>, <span class="hljs-title">TResponse</span>&gt;(<span class="hljs-params">ILogger&lt;LoggingBehavior&lt;TRequest,TResponse&gt;&gt; log</span>)
    : IPipelineBehavior&lt;TRequest, TResponse&gt;</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;Result&lt;TResponse&gt;&gt; Handle(
        TRequest request, CancellationToken stopToken,
        Func&lt;TRequest, CancellationToken, Task&lt;Result&lt;TResponse&gt;&gt;&gt; next)
    {
        log.LogInformation(<span class="hljs-string">"Handling {RequestType}"</span>, <span class="hljs-keyword">typeof</span>(TRequest).Name);
        <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> next(request, stopToken);
        log.LogInformation(<span class="hljs-string">"Handled {RequestType} -&gt; {Outcome}"</span>,
            <span class="hljs-keyword">typeof</span>(TRequest).Name, result.IsSuccess ? <span class="hljs-string">"Success"</span> : <span class="hljs-string">"Failure"</span>);
        <span class="hljs-keyword">return</span> result;
    }
}
</code></pre>
<p>You can stack resilience behaviours (Polly/Microsoft.Extensions.Resilience), metrics emission, and audit trails. REPR doesn’t fight these concerns, it invites them to the right layer.</p>
<h2 id="heading-reads-belong-to-repr">Reads belong to REPR</h2>
<p>Developers often think REPR only applies to commands. In practice, read models benefit even more. A Request expresses the query intent, the Processor composes the projection, and the Response is the shaped data. Keep it crisp: use <code>.AsNoTracking()</code> and project early to avoid N+1 traps.</p>
<p>(I recently wrote about N+1 <a target="_blank" href="https://fullstackcity.com/n1-the-silent-performance-killer">here</a>)</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> record <span class="hljs-title">GetSubmissionsRequest</span>(<span class="hljs-params"><span class="hljs-keyword">int</span> Page, <span class="hljs-keyword">int</span> PageSize</span>)</span>;

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> record <span class="hljs-title">SubmissionSummary</span>(<span class="hljs-params"><span class="hljs-keyword">long</span> Id, <span class="hljs-keyword">bool</span> IsApproved, DateTime CreatedAtUtc</span>)</span>;

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">GetSubmissionsProcessor</span>(<span class="hljs-params">MyDbContext db</span>)</span>
{
    <span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task&lt;Result&lt;IReadOnlyList&lt;SubmissionSummary&gt;&gt;&gt; Handle(GetSubmissionsRequest req, CancellationToken stopToken)
    {
        <span class="hljs-keyword">if</span> (req.Page &lt;= <span class="hljs-number">0</span> || req.PageSize <span class="hljs-keyword">is</span> &lt;= <span class="hljs-number">0</span> or &gt; <span class="hljs-number">200</span>)
            <span class="hljs-keyword">return</span> Result.Fail&lt;IReadOnlyList&lt;SubmissionSummary&gt;&gt;(<span class="hljs-string">"Invalid paging"</span>);

        <span class="hljs-keyword">var</span> query = db.Submissions
            .AsNoTracking()
            .OrderByDescending(h =&gt; h.CreatedAtUtc)
            .Skip((req.Page - <span class="hljs-number">1</span>) * req.PageSize)
            .Take(req.PageSize)
            .Select(h =&gt; <span class="hljs-keyword">new</span> SubmissionSummary(h.Id, h.IsApproved, h.CreatedAtUtc));

        <span class="hljs-keyword">var</span> data = <span class="hljs-keyword">await</span> query.ToListAsync(stopToken);
        <span class="hljs-keyword">return</span> Result.Ok&lt;IReadOnlyList&lt;SubmissionSummary&gt;&gt;(data);
    }
}
</code></pre>
<p>This keeps the read slice standalone. If you later move submissions to a dedicated read store or cache, only this Processor changes.</p>
<h2 id="heading-how-repr-evolves-clean-architecture-cqrs">How REPR evolves Clean Architecture + CQRS</h2>
<p>Clean Architecture gave us inversion of control and use case intermediation. CQRS sharpened intent by separating reads and writes. REPR tightens things and localises decisions. Instead of huge application services or thin handlers, each feature is a small pipeline with three rules:</p>
<ol>
<li><p><strong>The Request is the truth of intent.</strong><br /> It’s what the user or system wants to do or know. It’s not an EF model or a controller parameter bag.</p>
</li>
<li><p><strong>The Entity owns invariants.</strong><br /> The Processor never sets <code>IsApproved = true</code>, it calls <code>Approve()</code>. That is the thin line between a coherent core and a distributed bag of flags.</p>
</li>
<li><p><strong>The Processor composes the use case and emits a Response.</strong><br /> It loads, asks, commands, persists, and returns a business outcome. It doesn’t know about HTTP, and it doesn’t leak storage concerns upward.</p>
</li>
</ol>
<p>This alignment makes vertical slices genuinely independent. You can ship features without stepping on each other. GitHub’s approach to “ship small, ship often” is totally compatible with REPR because <strong>features are shippable units</strong>. Monzo’s domain led teams repeat the same structure internally, <strong>message in &gt; policy &gt; state change &gt; event out</strong>.</p>
<h2 id="heading-migration-without-drama">Migration without drama</h2>
<p>You don’t have to burn your controllers. Start with one endpoint that’s currently messy. Create a feature folder, write a Request/Response, move the domain object or create a thin entity wrapper, and add a Processor. In the controller action, instantiate the Request and invoke the Processor. Merge. Done.</p>
<p>Repeat for the next mess. Keep your existing DI setup; register Processors as scoped services. Introduce behaviours when you see repetition. Over a sprint or two, the hotspot areas of your codebase will look similar: small, navigable slices where the entry file reads like a story.</p>
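<p>Kept as a thin adapter, the existing controller action shrinks to a translation step. The controller and route names in this sketch are illustrative:</p>
<pre><code class="lang-csharp">// Hypothetical MVC controller kept during migration; it only translates
// HTTP into a Request and hands off to the Processor.
[ApiController]
[Route("api/submissions")]
public sealed class SubmissionsController(ApproveSubmissionProcessor processor)
    : ControllerBase
{
    public sealed record ApproveBody(string ApprovedBy);

    [HttpPost("{id:long}/approve")]
    public async Task&lt;IActionResult&gt; Approve(
        long id, [FromBody] ApproveBody body, CancellationToken stopToken)
    {
        var result = await processor.Handle(
            new ApproveSubmissionRequest(id, body.ApprovedBy), stopToken);

        return result.Match&lt;IActionResult&gt;(
            ok  =&gt; Ok(ok),
            err =&gt; Problem(title: "Approval failed", detail: err.Message));
    }
}
</code></pre>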
<h2 id="heading-testing">Testing</h2>
<p>REPR’s most practical gift is what it does to tests. You can instantiate a Processor in isolation, mock a repository or use an in memory store, and assert on a Response. You can test the Entity, calling <code>Approve()</code> and asserting state and failure messages. You can test a validator without spinning up <a target="_blank" href="http://ASP.NET">ASP.NET</a>. Integration tests can target the endpoint adapter if you wish, but you’re no longer forced to go through HTTP to test business logic.</p>
<p>A typical unit test:</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">Fact</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">Approve_Submission_Sets_State_And_Emits_Response</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">var</span> now = <span class="hljs-keyword">new</span> DateTime(<span class="hljs-number">2025</span>, <span class="hljs-number">10</span>, <span class="hljs-number">23</span>, <span class="hljs-number">12</span>, <span class="hljs-number">0</span>, <span class="hljs-number">0</span>, DateTimeKind.Utc);
    <span class="hljs-keyword">var</span> clock = <span class="hljs-keyword">new</span> FakeClock(now);

    <span class="hljs-keyword">var</span> submission = <span class="hljs-keyword">new</span> Submission(id: <span class="hljs-number">42</span>, createdAtUtc: now.AddDays(<span class="hljs-number">-1</span>));
    <span class="hljs-keyword">var</span> repo = <span class="hljs-keyword">new</span> FakeRepo().With(submission);

    <span class="hljs-keyword">var</span> processor = <span class="hljs-keyword">new</span> ApproveSubmissionProcessor(repo, clock);

    <span class="hljs-keyword">var</span> result = <span class="hljs-keyword">await</span> processor.Handle(<span class="hljs-keyword">new</span> ApproveSubmissionRequest(<span class="hljs-number">42</span>, <span class="hljs-string">"PK"</span>), CancellationToken.None);

    result.IsSuccess.ShouldBeTrue();
    submission.IsApproved.ShouldBeTrue();
    submission.ApprovedAtUtc.ShouldBe(now);
    submission.ApprovedBy.ShouldBe(<span class="hljs-string">"PK"</span>);
}
</code></pre>
<p>No hosted server. No controllers. Pure business code.</p>
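<p>The <code>FakeClock</code> and <code>FakeRepo</code> in that test are trivial to hand-roll. One possible shape, assuming <code>IClock</code> is a one-member abstraction:</p>
<pre><code class="lang-csharp">public interface IClock { DateTime UtcNow { get; } }

public sealed class FakeClock(DateTime fixedUtc) : IClock
{
    public DateTime UtcNow =&gt; fixedUtc;
}

public sealed class FakeRepo : ISubmissionRepository
{
    private readonly Dictionary&lt;long, Submission&gt; _store = new();

    public FakeRepo With(Submission submission)
    {
        _store[submission.Id] = submission;
        return this;
    }

    public Task&lt;Submission?&gt; GetAsync(long id, CancellationToken stopToken) =&gt;
        Task.FromResult(_store.TryGetValue(id, out var found) ? found : null);

    public Task SaveAsync(Submission submission, CancellationToken stopToken)
    {
        _store[submission.Id] = submission;
        return Task.CompletedTask;
    }
}
</code></pre>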
<h2 id="heading-handling-cross-cutting-concerns">Handling cross cutting concerns</h2>
<p>Large organisations tend to accumulate swathes of cross-cutting policy: correlation IDs, authorisation, audit trails, retries, idempotency, data masking, PII redaction, and more. In controller-centric designs these drift into action filters and base controllers. In REPR, you add a behaviour to the Processor pipeline and you’re done.</p>
<ul>
<li><p><strong>Authorisation</strong>: run a guard behaviour that evaluates user roles/claims against the <code>Request</code> shape.</p>
</li>
<li><p><strong>Idempotency</strong>: deduplicate based on a key present in the <code>Request</code> (e.g., commandId) and short circuit the <code>next</code>.</p>
</li>
<li><p><strong>Resilience</strong>: wrap repository calls in a policy, wire via dependency or inside a behaviour if consistent across all Processors.</p>
</li>
</ul>
<p>Because Requests are explicit, guards can be deterministic and auditable. You can even use a Roslyn analyser to ensure every Processor is wrapped by certain behaviours in DI.</p>
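<p>As one concrete sketch, an idempotency guard can replay a cached outcome instead of re-running the use case. The <code>IHasCommandId</code> marker and in-memory cache here are assumptions; a production system would use a durable store:</p>
<pre><code class="lang-csharp">using Microsoft.Extensions.Caching.Memory;

// Hypothetical idempotency behaviour keyed on a CommandId carried by the Request.
public interface IHasCommandId { Guid CommandId { get; } }

public sealed class IdempotencyBehavior&lt;TRequest, TResponse&gt;(IMemoryCache cache)
    : IPipelineBehavior&lt;TRequest, TResponse&gt;
    where TRequest : IHasCommandId
{
    public async Task&lt;Result&lt;TResponse&gt;&gt; Handle(
        TRequest request, CancellationToken stopToken,
        Func&lt;TRequest, CancellationToken, Task&lt;Result&lt;TResponse&gt;&gt;&gt; next)
    {
        // Seen this command before? Short-circuit next and return the cached outcome.
        if (cache.TryGetValue(request.CommandId, out Result&lt;TResponse&gt;? cached))
            return cached!;

        var result = await next(request, stopToken);
        cache.Set(request.CommandId, result, TimeSpan.FromMinutes(10));
        return result;
    }
}
</code></pre>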
<h2 id="heading-performance-and-the-n1-traphttpsfullstackcitycomn1-the-silent-performance-killer">Performance and the <a target="_blank" href="https://fullstackcity.com/n1-the-silent-performance-killer">N+1 trap</a></h2>
<p>REPR doesn’t fix N+1 by itself, but it makes it obvious who owns the fix. Reads should project early and be <code>.AsNoTracking()</code>. Writes should load the minimum shape necessary to enforce invariants. If an invariant demands related data, eager load that relation explicitly in the repository. Because the Processor depends on a repository <em>interface</em>, you can swap to compiled queries, Dapper, or a read cache without touching the feature’s public surface.</p>
<p>The pattern nudges better boundaries, your Repository returns Entities or purpose-built DTOs, not <code>IQueryable&lt;T&gt;</code> that leaks into the Processor. As a result, accidental N+1 caused by deferred LINQ outside the persistence boundary simply doesn’t happen.</p>
<h2 id="heading-events-outboxes-and-repr">Events, outboxes, and REPR</h2>
<p>In evented systems, the Response is often two sided, the API returns a DTO, and the Processor also emits a domain event. Pair REPR with an outbox so that event publishing is transactional with your state change. The Processor writes the entity and the event together, a background dispatcher drains the outbox. In request response scenarios, you can surface the event ID in the Response for traceability without coupling your API to the event schema.</p>
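<p>A minimal sketch of that transactional pairing, assuming EF Core, a hypothetical <code>OutboxMessages</code> table, and a <code>SubmissionApproved</code> event record:</p>
<pre><code class="lang-csharp">using System.Text.Json;

// Inside the Processor (or a unit of work it calls): the state change and the
// event row are saved in the same transaction.
var approvedEvent = new SubmissionApproved(submission.Id, clock.UtcNow);

db.Submissions.Update(submission);
db.OutboxMessages.Add(new OutboxMessage(
    Id: Guid.NewGuid(),
    Type: nameof(SubmissionApproved),
    Payload: JsonSerializer.Serialize(approvedEvent),
    OccurredAtUtc: clock.UtcNow));

await db.SaveChangesAsync(stopToken); // one transaction: entity + event

// A background dispatcher later drains OutboxMessages and publishes each event.
</code></pre>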
<p>This is where Netflix-style “tell, don’t ask” and Monzo’s ledger thinking reinforce the same habit: intent in, state change under invariants, facts out. REPR is simply how that looks in a .NET feature slice.</p>
<h2 id="heading-anti-patterns-to-resist">Anti patterns to resist</h2>
<p>The temptations are predictable. Don’t let the Request become a “god DTO” that mirrors your entire database row; keep it as <em>intent</em>. Don’t put database toggles in the Processor; push them into Entity methods. Don’t return Entities to controllers; turn decisions into a Response designed for the consumer. Don’t turn repositories into a generic “BaseRepository&lt;T&gt;” that leaks EF idioms into your use cases; design ports that speak the language of the Entity.</p>
<p>If you inherit a codebase full of “service services”, carve a thin REPR slice around one use case and let that become the example. Engineers copy what feels better to work with.</p>
<h2 id="heading-foldering-naming-and-the-day-two-ergonomics">Foldering, naming, and the day two ergonomics</h2>
<p>Engineers live inside their editor more than any architecture diagram. The test of a good pattern is whether a newcomer can find the right file in under five seconds. With REPR, you teach them one rule: <strong>open the feature folder named for the verb, and everything you need is there</strong>. The next on-call incident becomes “find the slice, run the tests, add a guard”, not “follow a call graph across six projects”.</p>
<p>Name Processors for the action (ApproveSubmission, CalculatePremium, RenewPolicy). Name Requests for intent (ApproveSubmissionRequest). Keep Response names parallel. Apply the same discipline to your events (“SubmissionApproved”) and your logging contexts. Boring naming is a feature, not a bug.</p>
<h2 id="heading-a-bigger-picture">A bigger picture</h2>
<p>There’s a reason big teams drift toward this shape. It’s easier to review. It’s easier to secure. It produces less surprise in production. When you open a PR at GitHub or Netflix scale, reviewers look for intention first, what is this feature trying to do? REPR encodes that intention into the files themselves. Observability is simpler because your logs can key off <code>RequestType</code>. Incident responders can set up dashboards by Request/Response semantic names rather than raw URLs. Product managers can search the repo for a verb and land on the relevant pipeline.</p>
<p>REPR also supports steady-state autonomy. Teams can own sets of features without having to “own the controllers” or “own the shared service layer”. Ownership maps to verbs, deployment maps to slices. That is how you scale engineering organisations without introducing more “platform” than necessary.</p>
<h2 id="heading-bringing-it-all-together-in-programcs">Bringing it all together in Program.cs</h2>
<p>There’s nothing special to wire up. Register your repositories, validators, and behaviours, expose endpoints that adapt HTTP to Requests. If you prefer, and it isn’t overkill, add a simple “Processor mediator” to centralise behaviour chains, or inject Processors directly. Here’s a thin example:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext&lt;MyDbContext&gt;(...);
builder.Services.AddScoped&lt;ISubmissionRepository, EfSubmissionRepository&gt;();
builder.Services.AddScoped&lt;ApproveSubmissionProcessor&gt;();
builder.Services.AddScoped&lt;GetSubmissionsProcessor&gt;();
builder.Services.AddValidatorsFromAssemblyContaining&lt;ApproveSubmissionRequestValidator&gt;();

<span class="hljs-comment">// Optional: register pipeline mediator and behaviours if you want dynamic chaining</span>
<span class="hljs-comment">// builder.Services.AddProcessorMediator()</span>
<span class="hljs-comment">//                .AddBehavior(typeof(LoggingBehavior&lt;,&gt;))</span>
<span class="hljs-comment">//                .AddBehavior(typeof(AuthorizationBehavior&lt;,&gt;));</span>

<span class="hljs-keyword">var</span> app = builder.Build();

<span class="hljs-keyword">new</span> ApproveSubmissionEndpoint().MapEndpoint(app);
<span class="hljs-comment">// ... map other feature endpoints</span>

app.Run();
</code></pre>
<p>You’re composing an application from <strong>features</strong>, not from controllers or “modules” whose only purpose is to group files. It feels small because it is.</p>
<h2 id="heading-name-the-thing-and-move-on">Name the thing and move on</h2>
<p>Patterns become powerful when you can name them and carry on with the work. REPR is a small name for something you already sense, intent centric slices with honest entities and processors that think. It’s where Clean Architecture and CQRS were pointing all along. It scales in companies with many teams because it maps to how humans reason about software, by verb, not by layer.</p>
<p>The next time you sketch a use case, title the page “Request &gt; Entity &gt; Processor &gt; Response”. Write the Request first so you’re forced to state the intent. Give the Entity one method that enforces the rule. Keep the Processor short. Return a Response that a product manager could read aloud. That’s the pattern. That’s how teams at Monzo, GitHub, and Netflix scale: they keep features shippable without drowning in architecture rules.</p>
<p>This REPR example from <strong>Milan Jovanović</strong> makes use of <a target="_blank" href="https://fast-endpoints.com/">Fast Endpoints</a> which I previously wrote about <a target="_blank" href="https://fullstackcity.com/fastendpoints-in-net">here</a>.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=layTLQJ5xYw">https://www.youtube.com/watch?v=layTLQJ5xYw</a></div>
]]></content:encoded></item><item><title><![CDATA[The EventGrid and Durable Functions Pattern]]></title><description><![CDATA[When you want resilient orchestration in Azure, you usually reach for Azure Service Bus, Logic Apps, or Durable Functions with internal queues. That approach works fine, but it can also feel heavy handed when you just need to coordinate a few microse...]]></description><link>https://dotnetdigest.com/the-eventgrid-and-durable-functions-pattern</link><guid isPermaLink="true">https://dotnetdigest.com/the-eventgrid-and-durable-functions-pattern</guid><category><![CDATA[Azure]]></category><category><![CDATA[Azure Functions]]></category><category><![CDATA[azure event grid]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Fri, 17 Oct 2025 17:35:58 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760722440788/7e2b0606-f63f-4e5a-b1d9-3935a0771117.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When you want resilient orchestration in Azure, you usually reach for Azure Service Bus, Logic Apps, or Durable Functions with internal queues. That approach works fine, but it can also feel heavy handed when you just need to coordinate a few microservices or workflows without the massive footprint of a full broker.</p>
<p>At the risk of sounding like Morpheus: “What if I told you that you could build a fully reliable orchestration system, complete with retries, state tracking, and fan out/fan in behaviour, using just Azure Event Grid and Durable Functions?”</p>
<p>Below we’ll check out the EventGrid &amp; Durable Functions pattern, a lean alternative to queue based orchestration that’s perfect for event driven distributed systems, domain workflows, and API-to-API coordination where you want durable state without centralised messaging infrastructure.</p>
<h2 id="heading-rethinking-orchestration-in-the-cloud">Rethinking Orchestration in the Cloud</h2>
<p>Traditional orchestration often relies on Service Bus or Storage Queues to trigger background processing. Each step in a workflow pushes messages into a queue, which are later processed by downstream functions. This model is proven and reliable, but it comes with extra management, queue dead lettering, message expiry, scaling rules, and hidden coupling between producers and consumers.</p>
<p>Azure Event Grid provides a different approach, a pure <em>publish subscribe event router</em> that can fan out events to multiple handlers without coupling them directly. It’s lightweight, fast, and designed for event driven architectures, but by itself, it doesn’t provide <em>state</em> or <em>reliability guarantees</em>.</p>
<p>That’s where Durable Functions come in. Durable Functions can persist orchestration state in storage and resume execution deterministically after failures. Combining the two gives you the best of both worlds: Event Grid’s reactive decoupling and Durable Functions’ built in resilience.</p>
<h2 id="heading-the-core-idea">The Core Idea</h2>
<p>The pattern looks like this:</p>
<ol>
<li><p>Event producers publish domain events to Event Grid.</p>
</li>
<li><p>Durable Functions act as <em>event driven orchestrators</em> that listen for those events.</p>
</li>
<li><p>Each orchestrator replays its workflow deterministically based on persisted state.</p>
</li>
<li><p>Event Grid handles fan out, retries, and delivery, while Durable Functions handle state and coordination.</p>
</li>
</ol>
<p>No queue management, no manual checkpoints, no external workflow engine.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760684592449/08c94c4a-abaa-48a3-8729-c4faa39118e9.png" alt class="image--center mx-auto" /></p>
<p>This model scales horizontally, supports fan out and compensation, and uses only serverless components that scale to zero when idle.</p>
<h2 id="heading-publishing-an-event">Publishing an Event</h2>
<p>An event can come from anywhere, an API endpoint, a blob upload, or a domain action. Publishing to Event Grid is just a single API call.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> Azure.Messaging.EventGrid;
<span class="hljs-keyword">using</span> Azure;

<span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> EventGridPublisherClient(
    <span class="hljs-keyword">new</span> Uri(<span class="hljs-string">"https://myapp-events.westeurope-1.eventgrid.azure.net/api/events"</span>),
    <span class="hljs-keyword">new</span> AzureKeyCredential(<span class="hljs-string">"&lt;event-grid-key&gt;"</span>)
);

<span class="hljs-keyword">var</span> data = <span class="hljs-keyword">new</span> { OrderId = <span class="hljs-number">1234</span>, Status = <span class="hljs-string">"Placed"</span> };
<span class="hljs-keyword">var</span> evt = <span class="hljs-keyword">new</span> EventGridEvent(
    <span class="hljs-string">"orders/placed"</span>,
    <span class="hljs-string">"OrderCreated"</span>,
    <span class="hljs-string">"1.0"</span>,
    data
);

<span class="hljs-keyword">await</span> client.SendEventAsync(evt);
</code></pre>
<p>Once published, Event Grid guarantees at-least-once delivery to all subscribers, including Durable Functions orchestrators.</p>
<h2 id="heading-creating-an-eventgrid-triggered-function">Creating an EventGrid Triggered Function</h2>
<p>Event Grid triggers are lightweight and cost effective. Each event delivery invokes the function once.</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">Function(<span class="hljs-meta-string">"StartOrderOrchestration"</span>)</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">RunAsync</span>(<span class="hljs-params">[EventGridTrigger] EventGridEvent evt,
                           [DurableClient] DurableClientContext context,
                           ILogger log</span>)</span>
{
    <span class="hljs-keyword">var</span> order = evt.Data.ToObjectFromJson&lt;OrderCreated&gt;();
    log.LogInformation(<span class="hljs-string">"Received order {OrderId}"</span>, order.OrderId);

    <span class="hljs-keyword">string</span> instanceId = <span class="hljs-keyword">await</span> context.Client.StartNewAsync(
        <span class="hljs-keyword">nameof</span>(OrderOrchestrator),
        order
    );

    log.LogInformation(<span class="hljs-string">"Started orchestration {InstanceId}"</span>, instanceId);
}
</code></pre>
<p>Each incoming event spawns a durable workflow with a unique instance ID, perfect for correlating domain entities like orders or users.</p>
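<p>Because Event Grid delivery is at least once, a redelivered event would start a second workflow for the same order. A common mitigation, sketched here against the <code>Microsoft.DurableTask</code> client API (the <code>client</code> variable and option names are assumptions), is to derive the instance ID from the entity so duplicates collapse onto a single orchestration:</p>
<pre><code class="lang-csharp">// Deriving the instance ID from the order means a duplicate event targets
// the same orchestration instead of spawning a new one.
// "client" is assumed to be an injected DurableTaskClient.
string instanceId = await client.ScheduleNewOrchestrationInstanceAsync(
    nameof(OrderOrchestrator),
    order,
    new StartOrchestrationOptions(InstanceId: $"order-{order.OrderId}"));
</code></pre>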
<h2 id="heading-orchestrating-the-workflow">Orchestrating the Workflow</h2>
<p>The orchestrator function expresses the business process declaratively.</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">Function(nameof(OrderOrchestrator))</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">RunOrchestrator</span>(<span class="hljs-params">
    [OrchestrationTrigger] TaskOrchestrationContext ctx</span>)</span>
{
    <span class="hljs-keyword">var</span> order = ctx.GetInput&lt;OrderCreated&gt;();

    <span class="hljs-keyword">await</span> ctx.CallActivityAsync(<span class="hljs-keyword">nameof</span>(ValidateOrderActivity), order);
    <span class="hljs-keyword">await</span> ctx.CallActivityAsync(<span class="hljs-keyword">nameof</span>(ReserveInventoryActivity), order);
    <span class="hljs-keyword">await</span> ctx.CallActivityAsync(<span class="hljs-keyword">nameof</span>(SendConfirmationEmailActivity), order);

    <span class="hljs-comment">// Orchestrator code must stay deterministic, so emit the completion</span>
    <span class="hljs-comment">// event from an activity rather than publishing to Event Grid here</span>
    <span class="hljs-keyword">await</span> ctx.CallActivityAsync(<span class="hljs-keyword">nameof</span>(PublishOrderCompletedEvent), order);
}
</code></pre>
<p>Because Durable Functions checkpoint state after every awaited call, if the function app restarts mid execution, it resumes automatically from the last completed step.</p>
<p>You gain durability without queues, each orchestration is replayed deterministically using the Durable Task Framework’s history log.</p>
<h2 id="heading-handling-fan-out-fan-in">Handling Fan Out / Fan In</h2>
<p>Durable Functions support fan out/fan in patterns natively. You can process multiple activities in parallel and aggregate results.</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">Function(nameof(NotifyVendors))</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">RunAsync</span>(<span class="hljs-params">[OrchestrationTrigger] TaskOrchestrationContext ctx</span>)</span>
{
    <span class="hljs-keyword">var</span> order = ctx.GetInput&lt;OrderCreated&gt;();

    <span class="hljs-keyword">var</span> vendors = <span class="hljs-keyword">await</span> ctx.CallActivityAsync&lt;List&lt;<span class="hljs-keyword">string</span>&gt;&gt;(
        <span class="hljs-keyword">nameof</span>(GetVendorsActivity), order);

    <span class="hljs-keyword">var</span> tasks = vendors.Select(v =&gt;
        ctx.CallActivityAsync(<span class="hljs-keyword">nameof</span>(SendVendorNotificationActivity), v));

    <span class="hljs-keyword">await</span> Task.WhenAll(tasks);

    <span class="hljs-keyword">await</span> ctx.CallActivityAsync(<span class="hljs-keyword">nameof</span>(MarkOrderReadyActivity), order);
}
</code></pre>
<p>Event Grid isn’t doing the fan out here, Durable Functions is. Event Grid merely <em>triggers</em> the orchestrations, and they handle their own internal parallelism.</p>
<h2 id="heading-publishing-completion-events">Publishing Completion Events</h2>
<p>A key advantage of combining these two services is the ability to emit new domain events at each stage. When a durable orchestration completes, you can publish follow up events back to Event Grid, keeping the system reactive and decoupled.</p>
<pre><code class="lang-csharp">[<span class="hljs-meta">Function(nameof(PublishOrderCompletedEvent))</span>]
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">RunAsync</span>(<span class="hljs-params">[ActivityTrigger] OrderCreated order</span>)</span>
{
    <span class="hljs-keyword">var</span> client = <span class="hljs-keyword">new</span> EventGridPublisherClient(
        <span class="hljs-keyword">new</span> Uri(Environment.GetEnvironmentVariable(<span class="hljs-string">"EventGridTopicUrl"</span>)!),
        <span class="hljs-keyword">new</span> AzureKeyCredential(Environment.GetEnvironmentVariable(<span class="hljs-string">"EventGridKey"</span>)!)
    );

    <span class="hljs-keyword">var</span> evt = <span class="hljs-keyword">new</span> EventGridEvent(
        <span class="hljs-string">"orders/completed"</span>,
        <span class="hljs-string">"OrderCompleted"</span>,
        <span class="hljs-string">"1.0"</span>,
        <span class="hljs-keyword">new</span> { order.OrderId }
    );

    <span class="hljs-keyword">await</span> client.SendEventAsync(evt);
}
</code></pre>
<p>Downstream services subscribed to <code>OrderCompleted</code>, analytics, shipping, invoicing, will automatically receive notifications, no queues required.</p>
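<p>A downstream subscriber is just another Event Grid triggered function. As a sketch of the shape (the handler and the <code>OrderCompletedData</code> record are hypothetical):</p>
<pre><code class="lang-csharp">public record OrderCompletedData(int OrderId);

[Function("OnShippingOrderCompleted")]
public void Run([EventGridTrigger] EventGridEvent evt, ILogger log)
{
    // Subscriptions can filter by event type, but a guard keeps the handler honest
    if (evt.EventType != "OrderCompleted") return;

    var payload = evt.Data.ToObjectFromJson&lt;OrderCompletedData&gt;();
    log.LogInformation("Scheduling shipment for order {OrderId}", payload.OrderId);
}
</code></pre>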
<h2 id="heading-handling-failures-and-retries">Handling Failures and Retries</h2>
<p>Event Grid and Durable Functions both offer built-in retry mechanisms.</p>
<ul>
<li><p>Event Grid retries delivery for up to 24 hours with exponential backoff.</p>
</li>
<li><p>Durable Functions replay orchestration state automatically after transient failures.</p>
</li>
</ul>
<p>To make failure handling explicit, you can wrap activities with retry policies:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> retry = <span class="hljs-keyword">new</span> RetryOptions(firstRetryInterval: TimeSpan.FromSeconds(<span class="hljs-number">5</span>), maxNumberOfAttempts: <span class="hljs-number">5</span>)
{
    Handle = ex =&gt; ex <span class="hljs-keyword">is</span> HttpRequestException
};

<span class="hljs-keyword">await</span> ctx.CallActivityWithRetryAsync(<span class="hljs-keyword">nameof</span>(ReserveInventoryActivity), retry, order);
</code></pre>
<p>This pattern ensures that downstream systems can temporarily fail without breaking the orchestration.</p>
<h2 id="heading-observability-and-correlation">Observability and Correlation</h2>
<p>Tracing is straightforward when everything flows through Event Grid and Durable Functions.</p>
<p>You can enrich each event with a correlation ID, for example the orchestration instance ID, so downstream handlers can tie their logs back to the workflow:</p>
<pre><code class="lang-csharp"><span class="hljs-comment">// EventGridEvent has no dedicated correlation field, so carry the instance ID in the payload</span>
<span class="hljs-keyword">var</span> evt = <span class="hljs-keyword">new</span> EventGridEvent(
    <span class="hljs-string">"orders/completed"</span>,
    <span class="hljs-string">"OrderCompleted"</span>,
    <span class="hljs-string">"1.0"</span>,
    <span class="hljs-keyword">new</span> { order.OrderId, CorrelationId = ctx.InstanceId });
</code></pre>
<p>Durable Functions persist instance IDs in Azure Storage, and Event Grid forwards event metadata such as <code>eventType</code>, <code>subject</code>, and <code>traceparent</code> headers.<br />This makes it easy to connect distributed traces across the system using OpenTelemetry or Application Insights.</p>
<p>In Application Insights, you’ll see orchestration executions visualised as hierarchical traces, the orchestrator as a parent span, activities as children, and Event Grid events as external dependencies.</p>
<h2 id="heading-benefits-of-the-eventgrid-and-durable-pattern">Benefits of the EventGrid and Durable Pattern</h2>
<p>This combination gives you several architectural advantages:</p>
<ul>
<li><p><strong>No dedicated broker</strong> - Event Grid handles routing without message queues.</p>
</li>
<li><p><strong>Durable state</strong> - Orchestration checkpoints are persisted in Azure Storage automatically.</p>
</li>
<li><p><strong>True decoupling</strong> - Event producers and consumers never reference each other.</p>
</li>
<li><p><strong>Scale to zero</strong> - Both services are fully serverless, so you pay only when events flow.</p>
</li>
<li><p><strong>Simple retry semantics</strong> - Event Grid’s delivery guarantees plus Durable Functions’ replay logic create end-to-end reliability.</p>
</li>
</ul>
<p>It’s a perfect fit for systems where you need resilience and traceability but don’t want the load of managing queues, topics, or dead letter policies.</p>
<h2 id="heading-example-processing-insurance-claims">Example: Processing Insurance Claims</h2>
<p>Imagine an insurance claim processing system with several steps, claim submission, document validation, fraud scoring, and payout approval.</p>
<p>Traditionally, this would involve a Service Bus topic with multiple subscriptions and queue triggered functions, each one handling part of the process.</p>
<p>With Event Grid and Durable Functions, the same pipeline becomes a self contained orchestration triggered by an event such as <code>ClaimSubmitted</code>. Each step becomes an activity, and when complete, the orchestrator emits a <code>ClaimProcessed</code> event.</p>
<p>Downstream services like notifications, analytics, and auditing all subscribe to the event type they care about. The architecture remains event driven and fully observable, but without Service Bus infrastructure.</p>
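<p>Sketched as an orchestrator (the event, activity names, and fraud threshold here are hypothetical):</p>
<pre><code class="lang-csharp">[Function(nameof(ClaimOrchestrator))]
public static async Task RunAsync([OrchestrationTrigger] TaskOrchestrationContext ctx)
{
    var claim = ctx.GetInput&lt;ClaimSubmitted&gt;();

    await ctx.CallActivityAsync(nameof(ValidateDocumentsActivity), claim);
    var fraudScore = await ctx.CallActivityAsync&lt;double&gt;(nameof(FraudScoreActivity), claim);

    if (fraudScore &lt; 0.8)
        await ctx.CallActivityAsync(nameof(ApprovePayoutActivity), claim);

    // Emit the follow up event from an activity to keep the orchestrator deterministic
    await ctx.CallActivityAsync(nameof(PublishClaimProcessedActivity), claim);
}
</code></pre>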
<h2 id="heading-deployment-and-security-considerations">Deployment and Security Considerations</h2>
<p>You can deploy both Event Grid topics and Function Apps declaratively via Bicep or ARM. Each subscriber (the Function App) uses Event Grid subscriptions with Azure AD authentication.</p>
<p>When securing the pipeline:</p>
<ul>
<li><p>Use Managed Identity for publishing events.</p>
</li>
<li><p>Ensure Event Grid topic filters restrict inbound subjects (e.g., <code>orders/*</code>).</p>
</li>
<li><p>Configure Durable Function storage accounts with private endpoints.</p>
</li>
</ul>
<p>This ensures the pattern works securely even within private VNets or hybrid networks.</p>
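<p>As a rough Bicep sketch of the subscription wiring (resource names and API versions are illustrative, and <code>functionApp</code> is assumed to be declared elsewhere in the template):</p>
<pre><code class="lang-bicep">resource topic 'Microsoft.EventGrid/topics@2022-06-15' existing = {
  name: 'myapp-events'
}

resource orderSubscription 'Microsoft.EventGrid/eventSubscriptions@2022-06-15' = {
  name: 'start-order-orchestration'
  scope: topic
  properties: {
    destination: {
      endpointType: 'AzureFunction'
      properties: {
        resourceId: '${functionApp.id}/functions/StartOrderOrchestration'
      }
    }
    filter: {
      subjectBeginsWith: 'orders/'
      includedEventTypes: ['OrderCreated']
    }
  }
}
</code></pre>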
<h2 id="heading-limitations">Limitations</h2>
<p>While this pattern is powerful, it’s not a universal replacement for Service Bus.</p>
<p>Event Grid is best for event driven communication, not command driven or transactional scenarios. It doesn’t support message sessions or ordered delivery, and while at-least-once is good enough for most workflows, it’s not exactly once.</p>
<p>If your system requires strict FIFO ordering, high message throughput (millions per second), or guaranteed persistence for every event, Service Bus or Event Hubs remain the right tools.</p>
<p>But for most orchestrations that span seconds to minutes, where each event represents a state transition, the EventGrid and Durable pattern is simpler, cheaper, and easier to reason about.</p>
<h2 id="heading-state-machines-over-events">State Machines Over Events</h2>
<p>Think of this architecture as a distributed state machine driven by events.<br />Each Durable Function orchestration holds the state, and each Event Grid event represents a state transition.<br />Instead of queuing messages, you publish facts about what happened, and orchestrators react accordingly.</p>
<h2 id="heading-is-it-for-you">Is it for you?</h2>
<p>This pattern is one of the cleanest ways to build resilient, event driven orchestrations in modern Azure architectures. It eliminates queue complexity, scales automatically, and keeps every component naturally decoupled. By thinking in terms of <em>events</em> and <em>durable state</em> rather than <em>messages</em> and <em>queues</em>, you gain a more declarative, observable, and fault-tolerant architecture, one that’s elegant in concept and in code. Sometimes, reliable orchestration doesn’t need a broker at all, just two Azure services working together.</p>
]]></content:encoded></item><item><title><![CDATA[Designing a Message Bus Using IAsyncEnumerable<T> and Channels]]></title><description><![CDATA[We’re all familiar with message buses as networked systems, RabbitMQ, Azure Service Bus, Kafka, NATS. But inside many applications, a quieter kind of messaging happens constantly, background services hand off work to each other, pipelines stream data...]]></description><link>https://dotnetdigest.com/designing-a-message-bus-using-iasyncenumerablet-and-channels</link><guid isPermaLink="true">https://dotnetdigest.com/designing-a-message-bus-using-iasyncenumerablet-and-channels</guid><category><![CDATA[in memory bus]]></category><category><![CDATA[message bus]]></category><category><![CDATA[C#]]></category><category><![CDATA[.NET]]></category><category><![CDATA[Microsoft]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Patrick Kearns]]></dc:creator><pubDate>Tue, 14 Oct 2025 19:03:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760468537124/b829baef-518b-482c-9108-25e257be93e1.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We’re all familiar with message buses as networked systems, RabbitMQ, Azure Service Bus, Kafka, NATS. But inside many applications, a quieter kind of messaging happens constantly, background services hand off work to each other, pipelines stream data between layers, and modules coordinate without ever crossing a network boundary. For this kind of internal process communication, introducing a full message broker can be a waste of time. The .NET runtime already gives us two powerful tools for building lightweight, resilient pipelines inside a single process,<code>System.Threading.Channels</code> and <code>IAsyncEnumerable&lt;T&gt;</code>.</p>
<p>Together, they form the foundation of a high performance in process message bus, one that supports multiple publishers and consumers, provides backpressure, and integrates cleanly with async/await.</p>
<h2 id="heading-why-bother-with-an-in-process-bus">Why Bother with an In Process Bus?</h2>
<p>In process buses occupy a useful middle ground between events and queues. They are perfect when:</p>
<ul>
<li><p>Components need to communicate asynchronously but still within the same host process.</p>
</li>
<li><p>You want to decouple producers from consumers without introducing infrastructure.</p>
</li>
<li><p>You need streaming behaviour, continuous consumption of messages as they arrive.</p>
</li>
<li><p>You care about flow control and backpressure, preventing runaway memory growth.</p>
</li>
</ul>
<p>Examples include telemetry pipelines, background email dispatchers, integration event routers, or even a miniature “domain event” system within a modular monolith.</p>
<h2 id="heading-channels-and-async-streams">Channels and Async Streams</h2>
<p>Before diving in, it helps to recall what these primitives do.</p>
<ul>
<li><p>Channels provide asynchronous producer consumer queues. Writers call <code>WriteAsync</code>, readers call <code>ReadAsync</code>, and both sides are decoupled yet synchronised through backpressure.</p>
</li>
<li><p><code>IAsyncEnumerable&lt;T&gt;</code> represents asynchronous streams of data, consumable with <code>await foreach</code>. When used with channels, it creates a natural “push pull” flow that fits perfectly with .NET’s async model.</p>
</li>
</ul>
<p>A <code>Channel&lt;T&gt;</code> essentially becomes a bounded, thread safe buffer that can be streamed through an async iterator.</p>
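<p>As a tiny self contained illustration of that push pull flow:</p>
<pre><code class="lang-csharp">using System.Threading.Channels;

var channel = Channel.CreateBounded&lt;int&gt;(capacity: 8);

// Producer: WriteAsync awaits once the 8 slot buffer is full (backpressure)
var producer = Task.Run(async () =&gt;
{
    for (var i = 0; i &lt; 20; i++)
        await channel.Writer.WriteAsync(i);
    channel.Writer.Complete();
});

// Consumer: ReadAllAsync exposes the channel as IAsyncEnumerable&lt;int&gt;
await foreach (var item in channel.Reader.ReadAllAsync())
    Console.WriteLine(item);

await producer;
</code></pre>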
<h2 id="heading-core-abstraction-imessagebus">Core Abstraction: IMessageBus</h2>
<p>We’ll start by defining a minimal interface for our in process message bus.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">namespace</span> <span class="hljs-title">InProcessBus</span>;

<span class="hljs-keyword">public</span> <span class="hljs-keyword">interface</span> <span class="hljs-title">IMessageBus</span>
{
    <span class="hljs-function">ValueTask <span class="hljs-title">PublishAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">T message, CancellationToken token = <span class="hljs-keyword">default</span></span>)</span>;
    <span class="hljs-function"><span class="hljs-title">IAsyncEnumerable</span>&lt;<span class="hljs-title">T</span>&gt; <span class="hljs-title">SubscribeAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">CancellationToken token = <span class="hljs-keyword">default</span></span>)</span>;
}
</code></pre>
<p>Producers <code>PublishAsync</code> messages of any type, while consumers <code>SubscribeAsync</code> to an asynchronous stream of messages. Each message type effectively acts as its own topic.</p>
<h2 id="heading-implementing-the-channel-registry">Implementing the Channel Registry</h2>
<p>We’ll store one <code>Channel</code> per message type. Because different message types may be published concurrently, we’ll use a thread-safe dictionary keyed by <code>Type</code>.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">using</span> System.Collections.Concurrent;
<span class="hljs-keyword">using</span> System.Threading.Channels;

<span class="hljs-keyword">namespace</span> <span class="hljs-title">InProcessBus</span>;

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">MessageBus</span>(<span class="hljs-params">ChannelOptions? options = <span class="hljs-literal">null</span></span>) : IMessageBus</span>
{
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ConcurrentDictionary&lt;Type, Channel&lt;<span class="hljs-keyword">object</span>&gt;&gt; _channels = <span class="hljs-keyword">new</span>();
    <span class="hljs-keyword">private</span> <span class="hljs-keyword">readonly</span> ChannelOptions _opts = options ?? <span class="hljs-keyword">new</span> UnboundedChannelOptions
    {
        SingleReader = <span class="hljs-literal">false</span>,
        SingleWriter = <span class="hljs-literal">false</span>,
        AllowSynchronousContinuations = <span class="hljs-literal">false</span>
    };

    <span class="hljs-function"><span class="hljs-keyword">private</span> Channel&lt;<span class="hljs-keyword">object</span>&gt; <span class="hljs-title">GetOrCreateChannel</span>(<span class="hljs-params">Type type</span>)</span>
        =&gt; _channels.GetOrAdd(type, _ =&gt; _opts <span class="hljs-keyword">switch</span>
        {
            <span class="hljs-comment">// Honour whichever options type was supplied; bounded options give backpressure</span>
            BoundedChannelOptions bounded =&gt; Channel.CreateBounded&lt;<span class="hljs-keyword">object</span>&gt;(bounded),
            UnboundedChannelOptions unbounded =&gt; Channel.CreateUnbounded&lt;<span class="hljs-keyword">object</span>&gt;(unbounded),
            _ =&gt; Channel.CreateUnbounded&lt;<span class="hljs-keyword">object</span>&gt;()
        });

    <span class="hljs-function"><span class="hljs-keyword">public</span> ValueTask <span class="hljs-title">PublishAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">T message, CancellationToken token = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> channel = GetOrCreateChannel(<span class="hljs-keyword">typeof</span>(T));
        <span class="hljs-keyword">return</span> channel.Writer.WriteAsync(message!, token);
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> <span class="hljs-title">IAsyncEnumerable</span>&lt;<span class="hljs-title">T</span>&gt; <span class="hljs-title">SubscribeAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">
        [EnumeratorCancellation] CancellationToken token = <span class="hljs-keyword">default</span></span>)</span>
    {
        <span class="hljs-keyword">var</span> channel = GetOrCreateChannel(<span class="hljs-keyword">typeof</span>(T));

        <span class="hljs-keyword">while</span> (<span class="hljs-keyword">await</span> channel.Reader.WaitToReadAsync(token))
        {
            <span class="hljs-keyword">while</span> (channel.Reader.TryRead(<span class="hljs-keyword">out</span> <span class="hljs-keyword">var</span> item))
                <span class="hljs-function"><span class="hljs-keyword">yield</span> <span class="hljs-title">return</span> (<span class="hljs-params">T</span>)item!</span>;
        }
    }
}
</code></pre>
<p>With this, any component can publish messages of type <code>T</code>, and any number of subscribers can consume that stream asynchronously. One caveat: subscribers to the same message type share a single channel, so they compete for messages, each message reaches exactly one reader, which suits work distribution; true broadcast would need a channel per subscriber. The simplicity is deceptive, this single class can orchestrate hundreds of thousands of in memory messages per second.</p>
<h2 id="heading-publishing-and-consuming-messages">Publishing and Consuming Messages</h2>
<p>Here’s how a background service might consume domain events.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">sealed</span> class <span class="hljs-title">EmailHandler</span>(<span class="hljs-params">MessageBus bus, ILogger&lt;EmailHandler&gt; log</span>) : BackgroundService</span>
{
    <span class="hljs-function"><span class="hljs-keyword">protected</span> <span class="hljs-keyword">override</span> <span class="hljs-keyword">async</span> Task <span class="hljs-title">ExecuteAsync</span>(<span class="hljs-params">CancellationToken stoppingToken</span>)</span>
    {
        <span class="hljs-keyword">await</span> <span class="hljs-keyword">foreach</span> (<span class="hljs-keyword">var</span> evt <span class="hljs-keyword">in</span> bus.SubscribeAsync&lt;UserRegistered&gt;(stoppingToken))
        {
            log.LogInformation(<span class="hljs-string">"Sending welcome email to {Email}"</span>, evt.Email);
            <span class="hljs-keyword">await</span> SendEmailAsync(evt.Email);
        }
    }

    <span class="hljs-function"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> Task <span class="hljs-title">SendEmailAsync</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> to</span>)</span>
    {
        <span class="hljs-comment">// Simulate I/O latency</span>
        <span class="hljs-keyword">return</span> Task.Delay(<span class="hljs-number">100</span>);
    }
}
</code></pre>
<p>And somewhere else in the system:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> bus.PublishAsync(<span class="hljs-keyword">new</span> UserRegistered(<span class="hljs-string">"alice@example.com"</span>));
</code></pre>
<p>That single call enqueues the message. The background consumer processes it in its own time, fully decoupled from the publisher.</p>
<h2 id="heading-using-bounded-channels-for-backpressure">Using Bounded Channels for Backpressure</h2>
<p>The example so far uses <code>UnboundedChannelOptions</code>, which lets the internal queue grow without limit under heavy load. For production scenarios, bounded channels are safer.</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">var</span> bounded = <span class="hljs-keyword">new</span> BoundedChannelOptions(<span class="hljs-number">1000</span>)
{
    FullMode = BoundedChannelFullMode.Wait,
    SingleReader = <span class="hljs-literal">false</span>,
    SingleWriter = <span class="hljs-literal">false</span>
};

<span class="hljs-keyword">var</span> bus = <span class="hljs-keyword">new</span> MessageBus(bounded);
</code></pre>
<p>This gives producers natural backpressure: if consumers fall behind, the <code>PublishAsync</code> call awaits until there’s capacity.</p>
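<p>You can see this behaviour directly with a bounded <code>Channel&lt;T&gt;</code> on its own, no bus class required. A minimal sketch (the tiny capacity of 2 is just to make the blocking observable):</p>

```csharp
using System;
using System.Threading.Channels;

// Capacity of 2 so the third write must wait for a reader to free a slot.
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(2)
{
    FullMode = BoundedChannelFullMode.Wait
});

await channel.Writer.WriteAsync(1);
await channel.Writer.WriteAsync(2);

// The channel is now full: this WriteAsync returns an incomplete ValueTask,
// which is exactly the backpressure producers experience.
var pendingWrite = channel.Writer.WriteAsync(3);
bool writerBlocked = !pendingWrite.IsCompleted;
Console.WriteLine($"Writer blocked while full: {writerBlocked}");

// Reading one item frees capacity and releases the waiting writer.
var firstItem = await channel.Reader.ReadAsync();
await pendingWrite;
Console.WriteLine($"Read {firstItem}; pending write has now completed.");
```

<p>With <code>FullMode = BoundedChannelFullMode.Wait</code> the producer slows to the consumer’s pace; the other modes (<code>DropWrite</code>, <code>DropOldest</code>, <code>DropNewest</code>) trade message loss for never blocking.</p>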
<h2 id="heading-adding-filtering-and-topic-isolation">Adding Filtering and Topic Isolation</h2>
<p>Many systems need to filter messages or route them by topic. Since our bus already differentiates by message type, you can extend it easily with message filters or named topics.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> record <span class="hljs-title">LogEvent</span>(<span class="hljs-params"><span class="hljs-keyword">string</span> Category, <span class="hljs-keyword">string</span> Message</span>)</span>;

<span class="hljs-keyword">await</span> bus.PublishAsync(<span class="hljs-keyword">new</span> LogEvent(<span class="hljs-string">"Audit"</span>, <span class="hljs-string">"User login"</span>));
<span class="hljs-keyword">await</span> bus.PublishAsync(<span class="hljs-keyword">new</span> LogEvent(<span class="hljs-string">"Debug"</span>, <span class="hljs-string">"Cache miss"</span>));

<span class="hljs-keyword">await</span> <span class="hljs-keyword">foreach</span> (<span class="hljs-keyword">var</span> evt <span class="hljs-keyword">in</span> bus.SubscribeAsync&lt;LogEvent&gt;())
{
    <span class="hljs-keyword">if</span> (evt.Category == <span class="hljs-string">"Audit"</span>)
        Console.WriteLine(<span class="hljs-string">$"AUDIT: <span class="hljs-subst">{evt.Message}</span>"</span>);
}
</code></pre>
<p>Filtering is as simple as an <code>if</code> inside the <code>await foreach</code> loop, or a LINQ-style <code>Where</code> applied to the async stream.</p>
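<p>The operator version takes only a few lines. This sketch hand-rolls a filter (named <code>Filter</code> here to avoid colliding with the <code>Where</code> that the System.Linq.Async package, or newer runtimes, add to async streams), and the <code>Events</code> iterator is a stand-in for <code>bus.SubscribeAsync&lt;LogEvent&gt;()</code>:</p>

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Stand-in for bus.SubscribeAsync<LogEvent>(): a small, finite async stream.
static async IAsyncEnumerable<(string Category, string Message)> Events()
{
    await Task.Yield();
    yield return ("Audit", "User login");
    yield return ("Debug", "Cache miss");
    yield return ("Audit", "Password change");
}

// Minimal Where-style operator over IAsyncEnumerable<T>.
static async IAsyncEnumerable<T> Filter<T>(
    IAsyncEnumerable<T> source, Func<T, bool> predicate)
{
    await foreach (var item in source)
        if (predicate(item))
            yield return item;
}

var audits = new List<string>();
await foreach (var evt in Filter(Events(), e => e.Category == "Audit"))
    audits.Add(evt.Message);

Console.WriteLine(string.Join(" | ", audits));
```

<p>Because the operator is itself an async iterator, filters compose: you can chain several of them and the messages still flow lazily, one at a time.</p>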
<h2 id="heading-ensuring-safe-completion">Ensuring Safe Completion</h2>
<p>When shutting down, we want to complete all channels so that consumers can finish gracefully.</p>
<pre><code class="lang-csharp"><span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">Complete</span>(<span class="hljs-params"></span>)</span>
{
    <span class="hljs-keyword">foreach</span> (<span class="hljs-keyword">var</span> kvp <span class="hljs-keyword">in</span> _channels)
        kvp.Value.Writer.TryComplete();
}
</code></pre>
<p>In a hosted application, you’d typically call this during graceful shutdown in <code>IHostApplicationLifetime.ApplicationStopping</code>.</p>
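<p>Why completing the writer matters is easy to demonstrate in isolation: <code>ReadAllAsync</code>’s <code>await foreach</code> exits once <code>TryComplete</code> has been called and the buffered messages drain, so a consumer like <code>EmailHandler</code> finishes naturally rather than being cancelled mid-message. A minimal sketch:</p>

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

var channel = Channel.CreateUnbounded<string>();

// Consumer: the await foreach ends when the writer is completed and the
// buffered messages have been drained.
var consumer = Task.Run(async () =>
{
    var seen = new List<string>();
    await foreach (var msg in channel.Reader.ReadAllAsync())
        seen.Add(msg);
    return seen;
});

await channel.Writer.WriteAsync("first");
await channel.Writer.WriteAsync("second");
channel.Writer.TryComplete(); // what Complete() does for each channel

var drained = await consumer; // finishes gracefully, nothing lost
Console.WriteLine($"Drained {drained.Count} messages before shutdown.");
```

<p>Hooking this into hosting is then one line, something like <code>lifetime.ApplicationStopping.Register(bus.Complete)</code> with an injected <code>IHostApplicationLifetime</code>.</p>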
<h2 id="heading-observability-and-diagnostics">Observability and Diagnostics</h2>
<p>Because everything happens in process, adding observability is straightforward.</p>
<p>You can instrument the bus using <code>ActivitySource</code> or simple metrics counters:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">private</span> <span class="hljs-keyword">static</span> <span class="hljs-keyword">readonly</span> ActivitySource Tracer = <span class="hljs-keyword">new</span>(<span class="hljs-string">"InProcessBus"</span>);

<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">async</span> ValueTask <span class="hljs-title">PublishAsync</span>&lt;<span class="hljs-title">T</span>&gt;(<span class="hljs-params">T message, CancellationToken token = <span class="hljs-keyword">default</span></span>)</span>
{
    <span class="hljs-keyword">using</span> <span class="hljs-keyword">var</span> act = Tracer.StartActivity(<span class="hljs-string">$"publish:<span class="hljs-subst">{<span class="hljs-keyword">typeof</span>(T).Name}</span>"</span>);
    <span class="hljs-keyword">var</span> channel = GetOrCreateChannel(<span class="hljs-keyword">typeof</span>(T));
    <span class="hljs-keyword">await</span> channel.Writer.WriteAsync(message!, token);
}
</code></pre>
<p>You could then export these spans via OpenTelemetry to see how fast messages flow through your system, how many awaiters are pending, and how long subscribers take to process them.</p>
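<p>One wrinkle worth knowing: <code>StartActivity</code> returns <code>null</code> unless something is listening to the source. Outside a full OpenTelemetry setup, a minimal <code>ActivityListener</code> makes the spans observable locally, a sketch:</p>

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

var source = new ActivitySource("InProcessBus");
var started = new List<string>();

// Without a listener, StartActivity returns null and no span is recorded.
var listener = new ActivityListener
{
    ShouldListenTo = s => s.Name == "InProcessBus",
    Sample = (ref ActivityCreationOptions<ActivityContext> _) =>
        ActivitySamplingResult.AllData,
    ActivityStarted = activity => started.Add(activity.OperationName)
};
ActivitySource.AddActivityListener(listener);

using (var act = source.StartActivity("publish:UserRegistered"))
{
    // the publish work would happen here; the span records its duration
}

Console.WriteLine(string.Join(", ", started));
```

<p>The OpenTelemetry SDK registers an equivalent listener for you when you call <code>AddSource("InProcessBus")</code>, so the bus code itself never changes between local debugging and production export.</p>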
<h2 id="heading-comparing-to-external-brokers">Comparing to External Brokers</h2>
<p>This design doesn’t replace distributed message queues; it complements them. An in-process bus is <em>memory-local</em> and doesn’t persist messages across restarts. It’s perfect for high-speed coordination between components within a single service, not for cross-service delivery guarantees. Think of it as the internal plumbing of a service. When you later introduce Azure Service Bus or Kafka, your in-process bus becomes the layer that connects local operations to external integration points.</p>
<h2 id="heading-example-in-a-modular-monolith">Example in a Modular Monolith</h2>
<p>Imagine a modular application with separate assemblies for Users, Orders, and Notifications.<br />Rather than reference each module directly, they communicate via the bus:</p>
<ul>
<li><p>The User module publishes a <code>UserRegistered</code> event.</p>
</li>
<li><p>The Notification module subscribes to that event and sends an email.</p>
</li>
<li><p>The Analytics module subscribes too and updates metrics.</p>
</li>
</ul>
<p>All modules live in the same process, yet remain loosely coupled, testable in isolation and replaceable without touching each other’s code.</p>
<h2 id="heading-a-diagnostic-example">A Diagnostic Example</h2>
<p>The bus can also power internal diagnostics. Suppose you want to stream structured logs to a live dashboard:</p>
<pre><code class="lang-csharp"><span class="hljs-keyword">await</span> <span class="hljs-keyword">foreach</span> (<span class="hljs-keyword">var</span> evt <span class="hljs-keyword">in</span> bus.SubscribeAsync&lt;LogEvent&gt;())
{
    Console.WriteLine(<span class="hljs-string">$"<span class="hljs-subst">{DateTime.UtcNow:O}</span> [<span class="hljs-subst">{evt.Category}</span>] <span class="hljs-subst">{evt.Message}</span>"</span>);
}
</code></pre>
<p>Because <code>SubscribeAsync</code> returns <code>IAsyncEnumerable&lt;T&gt;</code>, this stream can feed directly into SignalR, WebSockets, or even Blazor components. You’ve effectively built a lightweight reactive pipeline without any external dependency.</p>
<h2 id="heading-performance">Performance</h2>
<p>On a standard workstation, an unbounded <code>Channel&lt;T&gt;</code> can easily push over a million messages per second across tasks when consumers keep up. Bounded channels reduce that slightly due to waiting, but maintain predictable latency. The major performance advantage is memory locality: there’s no serialisation, no network I/O, and almost zero GC pressure because messages move as raw object references.</p>
<h2 id="heading-threading-considerations">Threading Considerations</h2>
<p>Channels are already thread-safe. Writers and readers can operate concurrently without locks, but you should avoid blocking inside consumers; always use the async APIs. If multiple consumers subscribe to the same message type, they share the same channel and therefore compete for messages (each message is delivered once). If you need fan-out (each consumer receives all messages), create a small multiplexer that writes to multiple channels per type.</p>
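<p>A condensed fan-out sketch, kept to local functions for brevity (a real multiplexer would live in a class and guard the subscriber list with a lock): every subscriber gets its own channel, and a publish writes the message to all of them, so each consumer sees every message.</p>

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

// One channel per subscriber; publishing writes to all of them (broadcast
// semantics, in contrast to the shared-channel competing-consumer default).
var subscribers = new List<Channel<string>>();

IAsyncEnumerable<string> Subscribe()
{
    var ch = Channel.CreateUnbounded<string>();
    subscribers.Add(ch);
    return ch.Reader.ReadAllAsync();
}

async Task PublishToAllAsync(string message)
{
    foreach (var ch in subscribers)
        await ch.Writer.WriteAsync(message);
}

async Task<List<string>> CollectAsync(IAsyncEnumerable<string> stream)
{
    var items = new List<string>();
    await foreach (var item in stream)
        items.Add(item);
    return items;
}

var first = CollectAsync(Subscribe());
var second = CollectAsync(Subscribe());

await PublishToAllAsync("UserRegistered");
await PublishToAllAsync("OrderPlaced");
subscribers.ForEach(ch => ch.Writer.TryComplete());

Console.WriteLine($"first saw {(await first).Count}, second saw {(await second).Count} messages");
```

<p>Both collectors report two messages each, whereas two readers sharing one channel would have split them between themselves.</p>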
<p>These design decisions are explicit trade-offs: do you want queue semantics or broadcast semantics?<br />Because you control the implementation, you can choose either.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1760468345730/985c12a8-6ef3-4dda-9ba2-735f915af15a.png" alt class="image--center mx-auto" /></p>
<p>An in-process message bus built on <code>Channels</code> and <code>IAsyncEnumerable&lt;T&gt;</code> demonstrates how far modern .NET concurrency has evolved. What once required third-party frameworks or complex TPL Dataflow setups now fits into a few concise, testable classes.</p>
]]></content:encoded></item></channel></rss>