What’s New in .NET 11 Preview 3
Runtime, C# 15, ASP.NET Core, EF Core, and More

.NET 11 is now in preview, with Preview 3 published in April 2026. Microsoft’s current documentation says .NET 11 is still preview software, that the final release is expected in November 2026, and that the feature list was last updated for Preview 3. The .NET release notes also list .NET 11 as a Standard Term Support release, planned for support from November 10, 2026 to November 9, 2028. So treat everything below as production-relevant direction, not a production-ready commitment. APIs can still move, preview language features can still change, and anything experimental should be isolated within your codebase.
The important thing about .NET 11 is that it’s not only a language release. It’s a runtime release, a library release, an ASP.NET Core release, an EF Core release, an SDK release, and a container supply-chain release. For experienced .NET engineers, the theme is clear: .NET 11 is tightening the platform around performance and better developer loops.
The runtime shift: .NET 11 is moving more work out of your code and into the platform
The runtime changes in .NET 11 improve code you already wrote. You don’t need to rewrite a Web API endpoint, a message handler, an Azure Function, or a background service to benefit from better bounds-check elimination, switch folding, uint conversion improvements, interface dispatch improvements, and ReadyToRun devirtualisation. Microsoft’s runtime notes call out JIT work around redundant bounds checks, checked arithmetic, multi-target switch expressions, uint-to-float and uint-to-double casts, generic virtual calls in ReadyToRun images, and new Arm SVE2 intrinsics.
That’s good for engineers because most production systems carry hot code paths that nobody wants to touch. The strategic point is this: one of the strongest arguments for keeping services on current .NET versions is not new syntax. It’s that the platform keeps finding performance in code you already own.
Look at this simple endpoint-style classifier:
// File: Features/Orders/ClassifyOrderStatus.cs
namespace Orders.Features;

public static class ClassifyOrderStatus
{
    public static bool IsSuccessfulHttpStatus(int statusCode)
    {
        return statusCode is 200 or 201 or 202 or 204;
    }

    public static bool ShouldRetry(int statusCode)
    {
        return statusCode is 408 or 429 or 500 or 502 or 503 or 504;
    }
}
In older runtimes, a multi-target pattern like statusCode is 200 or 201 or 202 or 204 could still compile well, but .NET 11’s JIT has specific work to fold small constant switch or pattern sets into simpler branchless checks. The business code stays readable, while the runtime has more freedom to produce better machine code.
The same applies to common span and array patterns:
// File: Infrastructure/Parsing/ChecksumCalculator.cs
namespace Infrastructure.Parsing;

public static class ChecksumCalculator
{
    public static int CalculateWindowedChecksum(ReadOnlySpan<byte> payload)
    {
        var checksum = 0;
        for (var index = 0; index + 3 < payload.Length; index++)
        {
            checksum += payload[index];
            checksum += payload[index + 1];
            checksum += payload[index + 2];
            checksum += payload[index + 3];
        }
        return checksum;
    }
}
That index + 3 < payload.Length pattern is the sort of bounds-check scenario the runtime notes explicitly call out. The point is not that you should micro-optimise every loop. The point is that .NET is continuing to reward ordinary, readable, safe C#.
Runtime Async: the most interesting .NET 11 feature for debugging and diagnostics
Runtime Async is the feature I would watch most closely. It is still preview, and you still opt in with the runtime-async=on feature switch, but Preview 3 removed the need to set an additional opt-in property to true in net11.0 projects. Microsoft describes Runtime Async as a move toward runtime-managed suspension and resumption rather than compiler-generated async state machines, with cleaner live stack traces, better debugger behaviour, and lower overhead. Preview 3 also adds support for NativeAOT and ReadyToRun, plus allocation-related improvements in continuation handling.
That is a big deal because async stack traces are one of the oldest pain points in .NET diagnostics. Exception stack traces are already cleaned up in many normal cases, but live stack traces from profilers, debuggers, new StackTrace(), and diagnostic tools can still show state-machine noise. Runtime Async attacks that problem closer to the runtime.
<!-- File: Directory.Build.props -->
<Project>
  <PropertyGroup>
    <TargetFramework>net11.0</TargetFramework>
    <LangVersion>preview</LangVersion>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <Features>runtime-async=on</Features>
  </PropertyGroup>
</Project>
Now imagine an async call chain in an order processing service:
// File: Features/Orders/SubmitOrder/SubmitOrderHandler.cs
using System.Diagnostics;
using Microsoft.Extensions.Logging;

namespace Orders.Features.SubmitOrder;

public sealed class SubmitOrderHandler(
    IPaymentGateway paymentGateway,
    IInventoryClient inventoryClient,
    ILogger<SubmitOrderHandler> logger)
{
    public async Task<Result> Handle(
        SubmitOrderCommand command,
        CancellationToken stopToken)
    {
        await ValidateInventory(command, stopToken);
        await AuthorisePayment(command, stopToken);

        logger.LogInformation(
            "Live async stack for order {OrderId}: {StackTrace}",
            command.OrderId,
            new StackTrace(fNeedFileInfo: true).ToString());

        return Result.Success(new OrderSubmissionReceipt(command.OrderId));
    }

    private async Task ValidateInventory(
        SubmitOrderCommand command,
        CancellationToken stopToken)
    {
        await inventoryClient.ReserveAsync(command.Items, stopToken);
    }

    private async Task AuthorisePayment(
        SubmitOrderCommand command,
        CancellationToken stopToken)
    {
        await paymentGateway.AuthoriseAsync(command.Payment, stopToken);
    }
}
Without Runtime Async, live stacks tend to include more compiler-generated async infrastructure. With Runtime Async, the stack is intended to look closer to the logical call chain you wrote. That is important for production support. When a distributed system is under pressure, you do not want to mentally reverse engineer generated async frames. You want to see the business operation, the activity, the handler, the client call, and the failing boundary.
The right way to think about Runtime Async today is as a diagnostic and performance investment, not something to blindly enable across production. Use it in a playground, then in internal tools, then in a service where you can compare call stacks, profiler output, allocations, and debugger behaviour. Do not enable it across regulated or revenue-critical workloads just because the syntax looks cool. Preview features should earn trust.
The hardware baseline change is boring until it breaks a server
.NET 11 updates minimum hardware requirements. On x86 and x64, the baseline moves from x86-64-v1 to x86-64-v2, which means the runtime can assume instructions such as SSE3, SSSE3, SSE4.1, SSE4.2, POPCNT, and CX16. ReadyToRun targets for Windows and Linux move to x86-64-v3, which adds AVX, AVX2, BMI1, BMI2, F16C, FMA, LZCNT, and MOVBE. Microsoft says .NET 11 can fail to run on older hardware with a message about missing baseline instruction sets.
This is the sort of change engineers should not ignore. It probably will not affect most cloud-hosted workloads, but it can affect older on-prem servers, industrial machines, lab environments, self-hosted build agents, forgotten VMs, or small edge devices. In a normal enterprise estate, these are exactly the machines nobody has inventoried properly.
The practical migration task is simple. Before you plan a .NET 11 rollout, check the hardware under your build agents, self-hosted runners, old IIS boxes, on-prem Windows services, and container hosts. For cloud workloads, check the VM SKUs and base images. For legacy production environments, do not assume the operating system support matrix tells the whole story. The runtime now cares more directly about CPU capability.
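If you want a quick probe before that rollout conversation, the hardware intrinsics APIs can report instruction set support directly. This is a minimal sketch, assuming a throwaway console project; it checks the v2 sets that expose IsSupported properties and is not an official compatibility test.
// File: tools/CpuBaselineProbe/Program.cs
// Minimal sketch: probe the current machine for x86-64-v2 instruction
// sets that .NET 11 assumes. Illustrative only, not an official check.
using System.Runtime.Intrinsics.X86;

var checks = new (string Name, bool Supported)[]
{
    ("SSE3", Sse3.IsSupported),
    ("SSSE3", Ssse3.IsSupported),
    ("SSE4.1", Sse41.IsSupported),
    ("SSE4.2", Sse42.IsSupported),
    ("POPCNT", Popcnt.IsSupported),
};

foreach (var (name, supported) in checks)
{
    Console.WriteLine($"{name}: {(supported ? "ok" : "MISSING")}");
}

// If anything prints MISSING, this machine will not meet the .NET 11
// x86-64-v2 baseline. (CX16 has no dedicated intrinsic class to probe.)
Run it under your current runtime on the suspect boxes before the upgrade, not after.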
C# 15 collection expression arguments
C# 15 currently lists collection expression arguments and union types as the main features. Collection expression arguments let you pass constructor or factory arguments to the target collection by putting with(...) as the first element in a collection expression. Microsoft’s examples show passing a capacity to List<T> and a comparer to HashSet<T>.
This is not a huge feature, but it removes friction in the exact places where collection expressions were previously slightly too limited. In real code, that means capacity hints, case-insensitive sets, custom comparers, and eventually richer dictionary-like creation scenarios.
// File: Features/Roles/RoleNormalizer.cs
namespace Security.Features.Roles;

public static class RoleNormalizer
{
    public static HashSet<string> BuildRoleSet(IEnumerable<string> roles)
    {
        return [with(StringComparer.OrdinalIgnoreCase), .. roles];
    }
}
That example is small, but important. If you are dealing with Entra app roles, API scopes, vendor codes, country codes, or externally supplied identifiers, the comparer is not incidental. It is part of correctness. The old version was still fine:
var set = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
foreach (var role in roles) { set.Add(role); }
The new version is more compact, but still explicit about the comparer:
HashSet<string> set = [with(StringComparer.OrdinalIgnoreCase), .. roles];
Capacity is the other obvious case:
// File: Features/Submissions/FieldResolution/FieldResolutionResultBuilder.cs
namespace Submissions.Features.FieldResolution;

public static class FieldResolutionResultBuilder
{
    public static List<ResolvedField> Build(
        IReadOnlyCollection<IncomingField> incomingFields)
    {
        List<ResolvedField> resolved =
            [with(capacity: incomingFields.Count), .. incomingFields.Select(Map)];
        return resolved;
    }

    private static ResolvedField Map(IncomingField field)
    {
        return new ResolvedField(
            field.Name,
            field.Value,
            Confidence: field.Source == FieldSource.Deterministic ? 1.0m : 0.7m);
    }
}
As a style rule, I would use this feature when the constructor argument is semantically important. A comparer is important. Capacity can be important in a hot path. But do not use this syntax just to show you know it exists. Clever collection syntax will annoy reviewers if it hides intent.
C# 15 union types: the feature to watch for domain modelling
C# 15 union types are much more interesting. A union represents a value that can be one of several case types. The docs show the union keyword, implicit conversion from each case type, and exhaustive switch expressions across all case types. Microsoft also notes that this is still preview territory, and some parts are not implemented yet in early .NET 11 previews.
The value for enterprise systems is obvious. We often model outcomes that are not exceptions, not nullable values, and not inheritance hierarchies. A submission can be accepted, rejected, or held for manual review. A payment can be authorised, declined, or pending 3DS. A policy can be quoted, referred, or blocked. Today, we often reach for generic Result, marker interfaces, abstract records, discriminated-union NuGet packages, or error codes. Native union types could give C# a first-class way to model these branches.
// File: Domain/Submissions/SubmissionDecision.cs
namespace Submissions.Domain;

public sealed record AcceptedSubmission(
    long SubmissionId,
    string DraftReference);

public sealed record RejectedSubmission(
    long SubmissionId,
    string ReasonCode,
    string Message);

public sealed record NeedsManualReviewSubmission(
    long SubmissionId,
    IReadOnlyList<string> MissingFields,
    IReadOnlyList<string> AmbiguousFields);

public union SubmissionDecision(
    AcceptedSubmission,
    RejectedSubmission,
    NeedsManualReviewSubmission);
You can then switch on the domain outcome:
// File: Features/Submissions/CreateDraft/CreateDraftResponseMapper.cs
namespace Submissions.Features.CreateDraft;

public static class CreateDraftResponseMapper
{
    public static IResult ToHttpResult(SubmissionDecision decision)
    {
        return decision switch
        {
            AcceptedSubmission accepted =>
                Results.Ok(new { accepted.SubmissionId, accepted.DraftReference }),
            RejectedSubmission rejected =>
                Results.BadRequest(new
                {
                    rejected.SubmissionId,
                    rejected.ReasonCode,
                    rejected.Message
                }),
            NeedsManualReviewSubmission review =>
                Results.Accepted($"/submissions/{review.SubmissionId}/review", new
                {
                    review.SubmissionId,
                    review.MissingFields,
                    review.AmbiguousFields
                })
        };
    }
}
The benefit is not syntax. The benefit is exhaustiveness. If you add a new FraudHoldSubmission case later, the compiler should force you back to the mapping code. That is the right kind of friction. It prevents a silent default branch from hiding a new business state.
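To make that concrete, here is the hypothetical later change, assuming the preview union syntax shown above. Adding the case should make the earlier switch expression non-exhaustive until you handle it.
// Hypothetical later change: a new business state is added to the union.
// The switch in CreateDraftResponseMapper.ToHttpResult should now produce
// a compiler diagnostic until FraudHoldSubmission is handled there too.
public sealed record FraudHoldSubmission(long SubmissionId, string CaseReference);

public union SubmissionDecision(
    AcceptedSubmission,
    RejectedSubmission,
    NeedsManualReviewSubmission,
    FraudHoldSubmission);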
My advice is to trial union types first in application-layer boundaries, not deep persistence models. Use them for command outcomes, service responses, domain decisions, parser results, and workflow state transitions. Avoid storing them directly until the serialisation and tooling story is stable.
System.Text.Json: the serialiser keeps moving toward contract-level control
The .NET 11 library updates include several System.Text.Json improvements. The ones that matter most are generic type metadata retrieval, JsonNamingPolicy.PascalCase, per-member naming policy overrides, and type-level ignore conditions. The generic metadata APIs help source generation, NativeAOT, and polymorphic serialisation scenarios because you can retrieve strongly typed JsonTypeInfo without manual downcasting.
This is cool because modern .NET services increasingly treat JSON contracts as strict boundaries. You are not just serialising POCOs anymore. You are versioning messages, emitting integration events, passing evidence envelopes, writing audit records, and trimming apps for AOT.
// File: Contracts/Events/SubmissionCreatedEvent.cs
using System.Text.Json.Serialization;

namespace Contracts.Events;

[JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
public sealed class SubmissionCreatedEvent
{
    [JsonNamingPolicy(JsonKnownNamingPolicy.CamelCase)]
    public string EventName { get; init; } = "submission.created";

    public long SubmissionId { get; init; }
    public string? ExternalReference { get; init; }
    public string? Notes { get; init; }
}
// File: Infrastructure/Json/EventJsonSerializer.cs
using System.Text.Json;
using System.Text.Json.Serialization.Metadata;
using Contracts.Events;

namespace Infrastructure.Json;

public static class EventJsonSerializer
{
    private static readonly JsonSerializerOptions Options =
        new(JsonSerializerDefaults.Web)
        {
            PropertyNamingPolicy = JsonNamingPolicy.PascalCase
        };

    static EventJsonSerializer()
    {
        Options.MakeReadOnly();
    }

    public static string Serialize(SubmissionCreatedEvent integrationEvent)
    {
        JsonTypeInfo<SubmissionCreatedEvent> typeInfo =
            Options.GetTypeInfo<SubmissionCreatedEvent>();
        return JsonSerializer.Serialize(integrationEvent, typeInfo);
    }
}
The type-level ignore condition removes noisy repetition. Before this, you often decorated every nullable property with [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)], or pushed that behaviour globally into JsonSerializerOptions. The type-level form lets a contract own its default omission behaviour. That is useful when one payload should omit nulls, but another payload must emit explicit nulls because a downstream API distinguishes between "missing" and "clear this value".
The per-member naming policy is also useful in ugly integration work. A global policy might be PascalCase because a legacy API expects it, but one member might need camelCase or snake_case because the receiving system has inconsistent field rules. You should still prefer clean contracts, but .NET 11 gives you more precise tools when reality is messy.
Unicode and Rune APIs
.NET 11 adds Rune-based operations across string APIs. The String class gains overloads for operations such as Contains, StartsWith, EndsWith, IndexOf, LastIndexOf, Replace, Split, and trimming with Rune. TextInfo also gains Rune-aware casing APIs.
This matters because char is a UTF-16 code unit, not a Unicode scalar value. If your system only sees ASCII identifiers, you may not care. But if you process names, addresses, document text, OCR output, imported email content, user-entered comments, emoji, multilingual submissions, or text from external systems, Unicode correctness becomes real.
// File: Infrastructure/Text/RuneTextOperations.cs
using System.Globalization;
using System.Text;

namespace Infrastructure.Text;

public static class RuneTextOperations
{
    // Assumed example runes: a bullet character and its replacement.
    private static readonly Rune Bullet = new(0x2022);            // •
    private static readonly Rune ReplacementBullet = new(0x2043); // ⁃

    public static string NormalizeBullets(string input)
    {
        return input.Replace(Bullet, ReplacementBullet);
    }

    public static bool StartsWithWarningSymbol(string input)
    {
        var warning = new Rune(0x26A0);
        return input.StartsWith(warning, StringComparison.Ordinal);
    }

    public static string UppercaseFirstRune(string input, CultureInfo culture)
    {
        if (string.IsNullOrEmpty(input))
        {
            return input;
        }
        var enumerator = input.EnumerateRunes();
        if (!enumerator.MoveNext())
        {
            return input;
        }
        var first = enumerator.Current;
        var upper = culture.TextInfo.ToUpper(first);
        return upper.ToString() + input[first.Utf16SequenceLength..];
    }
}
This is exactly the sort of API that prevents subtle bugs. Nobody wants a business system where a validation rule corrupts someone’s name or splits a string inside a surrogate pair. Rune-aware APIs make the correct thing easier.
Base64, compression, ZIP, and tar
.NET 11 adds new Base64 APIs and overloads to the System.Buffers.Text.Base64 type, including high-level convenience methods and lower-level span-based methods. The documentation calls out encoding to chars, encoding to UTF-8, decoding from chars, and decoding from UTF-8.
That’s a big thing for service code because Base64 is everywhere: JWT segments, binary payloads in JSON, API keys, encrypted blobs, email attachments, document processing, and protocol adapters. The performance-sensitive version should avoid unnecessary string and byte array churn.
// File: Infrastructure/Encoding/Base64PayloadCodec.cs
using System.Buffers.Text;
using System.Text;

namespace Infrastructure.Encoding;

public static class Base64PayloadCodec
{
    public static string EncodePayload(ReadOnlySpan<byte> payload)
    {
        return Base64.EncodeToString(payload);
    }

    public static byte[] DecodePayload(string encoded)
    {
        return Base64.DecodeFromChars(encoded);
    }

    public static string EncodeUtf8Text(string text)
    {
        ReadOnlySpan<byte> utf8 = Encoding.UTF8.GetBytes(text);
        return Base64.EncodeToString(utf8);
    }
}
On compression, .NET 11 moves Zstandard APIs into System.IO.Compression, alongside DeflateStream, GZipStream, and BrotliStream. ZIP handling also improves: ZipArchiveEntry gets access-mode overloads, CompressionMethod exposes the entry compression method, and Preview 3 adds CRC32 validation when reading ZIP entries. Corrupted or truncated archives that previously slipped through can now throw InvalidDataException.
That CRC32 change is a good example of a small feature that matters in real systems. If you ingest documents from email, Blob Storage, SFTP, partner APIs, or customer uploads, you want corruption detected early. Silent acceptance of a damaged archive is worse than a hard failure.
// File: Infrastructure/Archives/UploadedZipReader.cs
using System.IO.Compression;

namespace Infrastructure.Archives;

public sealed class UploadedZipReader
{
    public async Task<IReadOnlyList<UploadedArchiveEntry>> ReadAsync(
        Stream zipStream,
        CancellationToken stopToken)
    {
        using var archive = new ZipArchive(zipStream, ZipArchiveMode.Read);
        var entries = new List<UploadedArchiveEntry>(archive.Entries.Count);
        foreach (var entry in archive.Entries)
        {
            await using var entryStream = await entry.OpenAsync(
                FileAccess.Read,
                stopToken);
            using var memory = new MemoryStream();
            await entryStream.CopyToAsync(memory, stopToken);
            entries.Add(new UploadedArchiveEntry(
                entry.FullName,
                entry.CompressionMethod.ToString(),
                memory.ToArray()));
        }
        return entries;
    }
}

public sealed record UploadedArchiveEntry(
    string Name,
    string CompressionMethod,
    byte[] Content);
Tar archive creation also gains format selection. Previously, CreateFromDirectory always produced Pax archives. .NET 11 adds overloads that allow Pax, Ustar, GNU, and V7, which is useful when you need compatibility with specific Linux tooling or deployment environments.
// File: Infrastructure/Artifacts/ArtifactPackageWriter.cs
using System.Formats.Tar;

namespace Infrastructure.Artifacts;

public sealed class ArtifactPackageWriter
{
    public async Task WriteLinuxCompatiblePackageAsync(
        string sourceDirectory,
        string outputPath,
        CancellationToken stopToken)
    {
        await TarFile.CreateFromDirectoryAsync(
            sourceDirectory,
            outputPath,
            includeBaseDirectory: true,
            format: TarEntryFormat.Gnu,
            cancellationToken: stopToken);
    }
}
Low-level I/O pipes become easier to reason about
Preview 3 adds low-level I/O improvements around SafeFileHandle and RandomAccess. SafeFileHandle.Type can report whether a handle is a file, pipe, socket, directory, or other OS object. SafeFileHandle.CreateAnonymousPipe creates pipe pairs with independent async behaviour for each end. RandomAccess.Read and RandomAccess.Write now work with non-seekable handles such as pipes. On Windows, Process uses overlapped I/O for redirected stdout and stderr, reducing thread-pool blocking in process-heavy applications.
Most application developers will not touch these APIs directly. But platform engineers, library authors, CLI tool authors, test harness builders, and teams that wrap external processes should care.
// File: Infrastructure/Processes/ProcessOutputCapture.cs
using Microsoft.Win32.SafeHandles;

namespace Infrastructure.Processes;

public static class ProcessOutputCapture
{
    public static void CreatePipeForProcessOutput()
    {
        SafeFileHandle.CreateAnonymousPipe(
            out SafeFileHandle readEnd,
            out SafeFileHandle writeEnd,
            asyncRead: true,
            asyncWrite: false);
        using (readEnd)
        using (writeEnd)
        {
            Console.WriteLine($"Read handle type: {readEnd.Type}");
            Console.WriteLine($"Write handle type: {writeEnd.Type}");
        }
    }
}
The big picture is that .NET keeps making lower-level system programming less awkward without forcing normal application code to become unsafe or platform-specific.
ASP.NET Core native OpenTelemetry support reduces instrumentation friction
ASP.NET Core in .NET 11 now natively adds OpenTelemetry semantic convention attributes to the HTTP server activity. The docs say the framework now includes required attributes by default, matching metadata previously available through OpenTelemetry.Instrumentation.AspNetCore. To collect the data, you subscribe to the Microsoft.AspNetCore activity source.
This is a good platform direction. OpenTelemetry should feel like part of the framework, not a bolt-on package you hope is configured correctly in every service.
// File: Api/Program.cs
using System.Diagnostics;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddSource("Microsoft.AspNetCore")
        .AddSource("Orders.Api")
        .AddOtlpExporter());

var app = builder.Build();

app.MapPost("/orders", async (
    SubmitOrderRequest request,
    SubmitOrderHandler handler,
    CancellationToken stopToken) =>
{
    using var activity = Diagnostics.ActivitySource.StartActivity("Submit order");
    var result = await handler.Handle(request.ToCommand(), stopToken);
    return result.IsSuccess
        ? Results.Accepted($"/orders/{result.Value.OrderId}", result.Value)
        : Results.BadRequest(result.Error);
});

app.Run();

internal static class Diagnostics
{
    public static readonly ActivitySource ActivitySource = new("Orders.Api");
}
The architecture impact is straightforward:
If you already use OpenTelemetry, this reduces package and configuration noise. If you do not, .NET 11 removes one excuse. Observability is no longer optional in serious distributed systems.
ASP.NET Core compression: Zstandard comes to request and response middleware
ASP.NET Core now supports Zstandard for both response compression and request decompression. The documentation says zstd support is added to existing response-compression and request-decompression middleware and is enabled by default. You can configure ZstandardCompressionProviderOptions to set quality, where higher quality means better compression but more CPU work.
// File: Api/Program.cs
using Microsoft.AspNetCore.ResponseCompression;
using System.IO.Compression;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddResponseCompression(options =>
{
    options.EnableForHttps = true;
});
builder.Services.AddRequestDecompression();
builder.Services.Configure<ZstandardCompressionProviderOptions>(options =>
{
    options.CompressionOptions = new ZstandardCompressionOptions { Quality = 6 };
});

var app = builder.Build();

app.UseResponseCompression();
app.UseRequestDecompression();

app.MapPost("/submissions/import", async (
    HttpRequest request,
    SubmissionImportHandler handler,
    CancellationToken stopToken) =>
{
    var result = await handler.ImportAsync(request.Body, stopToken);
    return result.IsSuccess
        ? Results.Accepted()
        : Results.BadRequest(result.Error);
});

app.Run();
The senior engineering decision is not "turn quality to 22 because smaller is better." That is amateur thinking. Compression is a trade-off between network, CPU, latency, and payload shape. For APIs inside the same region or VNet, the extra CPU may not be worth it. For large JSON responses over public networks, it might be. For document ingestion, request decompression may be more valuable than response compression.
ASP.NET Core OpenAPI
ASP.NET Core 11 introduces support for generating OpenAPI descriptions for binary file responses. FileContentResult maps to an OpenAPI schema with type: string and format: binary. The OpenAPI package also supports OpenAPI 3.2.0 through an updated Microsoft.OpenApi dependency, with breaking changes from the underlying library.
This is useful for real APIs because file endpoints are common and often poorly described. Think generated PDFs, Excel exports, evidence bundles, signed documents, claim documents, invoice attachments, and report downloads.
// File: Features/Reports/DownloadReportEndpoint.cs
using System.Net.Mime;
using Microsoft.AspNetCore.Mvc;

namespace Reports.Features.DownloadReport;

public static class DownloadReportEndpoint
{
    public static IEndpointRouteBuilder MapDownloadReport(
        this IEndpointRouteBuilder app)
    {
        app.MapGet("/reports/{reportId:long}/pdf", async (
                long reportId,
                ReportPdfService pdfService,
                CancellationToken stopToken) =>
            {
                byte[] content = await pdfService.BuildPdfAsync(reportId, stopToken);
                return TypedResults.File(
                    content,
                    MediaTypeNames.Application.Pdf,
                    fileDownloadName: $"report-{reportId}.pdf");
            })
            .Produces<FileContentResult>(
                StatusCodes.Status200OK,
                MediaTypeNames.Application.Pdf)
            .ProducesProblem(StatusCodes.Status404NotFound);
        return app;
    }
}
// File: Api/Program.cs
builder.Services.AddOpenApi(options =>
{
    options.OpenApiVersion = Microsoft.OpenApi.OpenApiSpecVersion.OpenApi3_2;
});
Good OpenAPI metadata is not decoration. It affects client generation, test automation, developer portal quality, contract reviews, and API governance.
ASP.NET Core Identity
ASP.NET Core Identity now uses TimeProvider instead of direct DateTime and DateTimeOffset access for time-related operations. Microsoft calls out deterministic testing for token expiration, lockout durations, and security stamp validation.
That sounds minor until you have flaky tests around lockout windows, email confirmation tokens, password reset expiry, or security stamp refresh.
// File: Tests/Auth/IdentityTokenExpiryTests.cs
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Time.Testing;
using Xunit;

namespace Auth.Tests;

public sealed class IdentityTokenExpiryTests
{
    [Fact]
    public async Task PasswordResetToken_ShouldExpire_WhenClockMovesBeyondConfiguredWindow()
    {
        var fakeTime = new FakeTimeProvider(
            new DateTimeOffset(2026, 04, 25, 10, 0, 0, TimeSpan.Zero));
        var services = new ServiceCollection();
        services.AddSingleton<TimeProvider>(fakeTime);
        services.AddIdentity<IdentityUser, IdentityRole>();
        using var provider = services.BuildServiceProvider();
        fakeTime.Advance(TimeSpan.FromHours(3));
        await Task.CompletedTask;
    }
}
The code above is intentionally skeletal because full Identity tests require stores and token providers. The point is the seam. Time is now injectable. That is how it should be.
Blazor: .NET 11 keeps closing server-side rendering gaps
Blazor gets several practical updates in .NET 11. The DisplayName component can render names from [Display] and [DisplayName] metadata. NavigateTo and NavLink support relative navigation using RelativeToCurrentUri. Static SSR gets TempData support for POST-Redirect-GET flows and one-time notifications. A new Blazor Web Worker template provides infrastructure for running .NET code in a Web Worker so heavy client-side work does not block the UI thread. Virtualize now adapts to variable-height items at runtime, with the default overscan count changing from 3 to 15 in .NET 11.
For line-of-business systems, TempData is probably the sleeper feature. If you have classic MVC flows or Razor Pages flows, TempData is familiar. Blazor static SSR needed a cleaner answer for flash messages and redirect state.
@* File: Components/Pages/CreateProgram.razor *@
@page "/programs/create"
@using Microsoft.AspNetCore.Components.Forms

<label>
    <DisplayName For="@(() => Model.Name)" />
    <InputText @bind-Value="Model.Name" />
</label>
<button type="submit">Create</button>

@if (TempData?.TryGetValue("SuccessMessage", out var message) == true)
{
    @message
}

@code {
    [CascadingParameter]
    public ITempData? TempData { get; set; }

    [Inject]
    public NavigationManager Navigation { get; set; } = null!;

    public CreateProgramModel Model { get; set; } = new();

    private Task SubmitAsync()
    {
        TempData?["SuccessMessage"] = "Program created.";
        Navigation.NavigateTo("details", new NavigationOptions
        {
            RelativeToCurrentUri = true
        });
        return Task.CompletedTask;
    }
}
Virtualisation improvements matter for dashboards, document result screens, audit logs, and admin grids where rows are not uniform height. Previously, virtualised lists could get spacing and scroll behaviour wrong when content varied. .NET 11’s runtime measurement improves that.
Output cache policy provider
ASP.NET Core in .NET 11 adds IOutputCachePolicyProvider, which lets applications determine base policies, resolve named policies, and support dynamic policy selection. Microsoft explicitly calls out examples such as policies from external configuration, databases, or tenant-specific caching rules.
This is useful in SaaS systems. You may want different cache rules by tenant, plan, route, data sensitivity, or deployment ring. Hardcoding all of that in startup code is brittle.
// File: Infrastructure/Caching/TenantOutputCachePolicyProvider.cs
using Microsoft.AspNetCore.OutputCaching;
using Microsoft.Extensions.Options;

namespace Infrastructure.Caching;

public sealed class TenantOutputCachePolicyProvider(
    IOptionsMonitor<TenantCacheOptions> options) : IOutputCachePolicyProvider
{
    public IReadOnlyList<IOutputCachePolicy> GetBasePolicies()
    {
        return [];
    }

    public ValueTask<IOutputCachePolicy?> GetPolicyAsync(string policyName)
    {
        TenantCachePolicy? configuredPolicy =
            options.CurrentValue.Policies.GetValueOrDefault(policyName);
        if (configuredPolicy is null)
        {
            return ValueTask.FromResult<IOutputCachePolicy?>(null);
        }
        IOutputCachePolicy policy = new TenantOutputCachePolicy(configuredPolicy);
        return ValueTask.FromResult<IOutputCachePolicy?>(policy);
    }
}
The important part is not the exact implementation. The important part is the boundary. Framework-level caching becomes something you can wire into configuration and tenancy rather than treating it as static middleware setup.
Kestrel performance
Kestrel’s HTTP/1.1 request parser now uses a non-throwing code path for malformed requests. Instead of throwing BadHttpRequestException on every parse failure, it returns a result struct indicating success, incomplete, or error states. Microsoft says this can improve throughput by 20 to 40 percent in scenarios with many malformed requests, such as port scanning, malicious traffic, or misconfigured clients, with no impact on valid request processing. HTTP logging also pools response-buffering streams, and HTTP/3 starts processing requests earlier instead of waiting for the control stream and SETTINGS frame first.
This is a good example of mature framework work. Your application code may never see these changes, but your service is exposed to the internet, internal scanners, broken clients, load balancers, and security tools. Bad input should be cheap to reject.
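Microsoft has not published the parser internals in the release notes, so the sketch below is purely conceptual, with hypothetical type names. It shows the shape of the change: malformed or incomplete input becomes a cheap status value instead of a thrown exception.
// Conceptual sketch only: the shape of a non-throwing parser result,
// not Kestrel's actual types. Bad input becomes an enum value instead
// of an exception that must be constructed, thrown, and caught.
internal enum ParseStatus { Success, Incomplete, Error }

internal readonly struct ParseResult
{
    public ParseStatus Status { get; init; }
    public int BytesConsumed { get; init; }
}

internal static class RequestLineParser
{
    public static ParseResult TryParse(ReadOnlySpan<byte> buffer)
    {
        if (buffer.IsEmpty)
        {
            return new ParseResult { Status = ParseStatus.Incomplete };
        }
        // Real parsing would validate the method, target, and version here.
        var newline = buffer.IndexOf((byte)'\n');
        if (newline < 0)
        {
            return new ParseResult { Status = ParseStatus.Incomplete };
        }
        return new ParseResult
        {
            Status = ParseStatus.Success,
            BytesConsumed = newline + 1
        };
    }
}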
EF Core 11
EF Core 11 requires the .NET 11 SDK to build and the .NET 11 runtime to run. It does not run on earlier .NET versions or .NET Framework. That is an important baseline point for migration planning.
The EF Core 11 changes are substantial. They include complex types and JSON columns on TPT and TPC inheritance, better SQL for to-one joins, MaxBy and MinBy translation, SQL Server vector search support, SQL Server JSON APIs, full-text search improvements, Cosmos DB complex types, transactional batches, bulk execution, session token management, and migration workflow improvements.
EF Core MaxBy and MinBy
EF Core 11 translates LINQ MaxByAsync and MinByAsync, plus sync counterparts. These methods return the element with the maximum or minimum key, not just the key value. Microsoft shows a query for the blog with the most posts translating to SELECT TOP(1) with an ORDER BY count subquery.
This is one of those features that makes query code read like business intent.
// File: Features/Programs/GetMostActiveProgram/GetMostActiveProgramHandler.cs
using Microsoft.EntityFrameworkCore;
namespace Programs.Features.GetMostActiveProgram;
public sealed class GetMostActiveProgramHandler(UnderwritingDbContext dbContext)
{
    public async Task<ProgramSummary?> Handle(CancellationToken stopToken)
    {
        Program? program = await dbContext.Programs
            .AsNoTracking()
            .MaxByAsync(candidate => candidate.Policies.Count, stopToken);

        return program is null
            ? null
            : new ProgramSummary(program.Id, program.Name, program.Policies.Count);
    }
}
Before this, many teams wrote OrderByDescending(...).FirstOrDefaultAsync(). That still works. But MaxByAsync communicates the intent more directly. For code review and maintenance, that matters.
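For reference, the older shape of the same query, which still translates fine, looks like this:
// Pre-MaxByAsync equivalent: order by the key, take the first row.
Program? program = await dbContext.Programs
    .AsNoTracking()
    .OrderByDescending(candidate => candidate.Policies.Count)
    .FirstOrDefaultAsync(stopToken);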
EF Core better SQL for to-one joins: fewer pointless joins, less database work
EF Core 11 improves SQL generation for reference navigation includes. In split queries, EF previously added unnecessary joins to reference navigations in SQL generated for collection queries. EF Core 11 prunes those joins. It also removes redundant keys from ORDER BY clauses where a reference navigation key is already functionally determined by the parent key. Microsoft cites benchmark scenarios with 29 percent improvement for a common split query case and 22 percent improvement in a single-query case, with the usual warning that actual performance depends on schema and data.
This is exactly the kind of EF improvement senior engineers should care about. It reduces the tax of using a higher-level ORM without asking developers to rewrite every query.
// File: Features/Blogs/GetBlogDashboard/GetBlogDashboardHandler.cs
using Microsoft.EntityFrameworkCore;
namespace Blogs.Features.GetBlogDashboard;
public sealed class GetBlogDashboardHandler(BloggingDbContext dbContext)
{
    public async Task<IReadOnlyList<Blog>> Handle(CancellationToken stopToken)
    {
        return await dbContext.Blogs
            .AsNoTracking()
            .Include(blog => blog.Owner)
            .Include(blog => blog.Posts)
            .AsSplitQuery()
            .ToListAsync(stopToken);
    }
}
The code looks ordinary. That is the point. Better SQL should not require heroic application code.
EF Core vector search: RAG-style workloads enter normal data access
EF Core 11 supports SQL Server vector indexes and VECTOR_SEARCH() for approximate search. Microsoft describes these as experimental SQL Server features, subject to change, and says EF APIs for them are also subject to change. EF 11 can create vector indexes through migrations and exposes a VectorSearch() extension method that translates to SQL Server’s VECTOR_SEARCH() table-valued function.
This is important because vector search is moving from specialist AI systems into ordinary line-of-business applications. Search over support tickets, underwriting documents, product descriptions, claims notes, emails, policy wording, and knowledge bases is becoming normal. EF support means teams can start integrating those workloads into familiar data access patterns, while still being careful about performance and architecture.
// File: Infrastructure/Persistence/SubmissionDocumentConfiguration.cs
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
namespace Infrastructure.Persistence;
public sealed class SubmissionDocumentConfiguration
    : IEntityTypeConfiguration<SubmissionDocument>
{
    public void Configure(EntityTypeBuilder<SubmissionDocument> builder)
    {
        builder.HasKey(document => document.Id);
        builder.Property(document => document.Title)
            .HasMaxLength(300);
        builder.HasVectorIndex(document => document.Embedding, "cosine");
    }
}
// File: Features/Search/SearchSimilarDocuments/SearchSimilarDocumentsHandler.cs
using Microsoft.EntityFrameworkCore;
namespace Search.Features.SearchSimilarDocuments;
public sealed class SearchSimilarDocumentsHandler(
    UnderwritingDbContext dbContext,
    IEmbeddingGenerator embeddingGenerator)
{
    public async Task<IReadOnlyList<SearchResult>> Handle(
        SearchSimilarDocumentsQuery query,
        CancellationToken stopToken)
    {
        var embedding = await embeddingGenerator.GenerateAsync(
            query.SearchText,
            stopToken);
        var results = await dbContext.SubmissionDocuments
            .VectorSearch(
                document => document.Embedding,
                embedding,
                "cosine",
                topN: 10)
            .Select(result => new SearchResult(
                result.Value.Id,
                result.Value.Title,
                result.Distance))
            .ToListAsync(stopToken);
        return results;
    }
}
The architectural caveat is serious. Do not mistake EF support for a full RAG architecture. Vector search is one component. You still need chunking, embedding generation, metadata design, authorisation filtering, result ranking, prompt construction, observability, and safety controls.
EF Core vector properties are no longer loaded by default
EF Core 11 changes how vector properties are loaded. SqlVector columns are no longer included in SELECT statements when materialising entities because vectors can contain hundreds or thousands of floating-point values. Microsoft cites a minimal benchmark with almost 9x performance improvement locally and around 22x against remote Azure SQL, while noting results depend on entity shape, vector properties, and latency.
This is the correct default. Most applications ingest embeddings and search with them, but do not need to display the raw vector values.
// File: Features/Documents/GetDocuments/GetDocumentsHandler.cs
using Microsoft.EntityFrameworkCore;
namespace Documents.Features.GetDocuments;
public sealed class GetDocumentsHandler(UnderwritingDbContext dbContext)
{
    public async Task<IReadOnlyList<DocumentRow>> Handle(CancellationToken stopToken)
    {
        return await dbContext.SubmissionDocuments
            .AsNoTracking()
            .OrderBy(document => document.Title)
            .Select(document => new DocumentRow(
                document.Id,
                document.Title,
                document.CreatedAt))
            .ToListAsync(stopToken);
    }
}
The rule is simple. Do not load vectors unless you need vectors. If you need them for diagnostics, exports, or re-indexing, explicitly project them.
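When a job genuinely needs the raw embedding, re-indexing or exports for example, project it explicitly. A minimal sketch against the same assumed model:
// Explicitly project the vector when it is actually needed; it is no
// longer fetched as part of normal entity materialisation.
var vectors = await dbContext.SubmissionDocuments
    .AsNoTracking()
    .Select(document => new { document.Id, document.Embedding })
    .ToListAsync(stopToken);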
EF Core JSON support: SQL Server JSON becomes more usable
EF Core 11 introduces EF.Functions.JsonPathExists(), translating to SQL Server’s JSON_PATH_EXISTS, available since SQL Server 2022. It also introduces EF.Functions.JsonContains() and can translate certain LINQ Contains queries over primitive collections stored as JSON to SQL Server 2025’s JSON_CONTAINS, replacing older OPENJSON-based translation when compatibility level is set appropriately.
// File: Features/Programs/SearchPrograms/SearchProgramsHandler.cs
using Microsoft.EntityFrameworkCore;
namespace Programs.Features.SearchPrograms;
public sealed class SearchProgramsHandler(UnderwritingDbContext dbContext)
{
    public async Task<IReadOnlyList<ProgramSearchRow>> Handle(
        SearchProgramsQuery query,
        CancellationToken stopToken)
    {
        return await dbContext.Programs
            .AsNoTracking()
            .Where(program => EF.Functions.JsonPathExists(
                program.ConfigurationJson,
                "$.referralRules"))
            .Where(program => EF.Functions.JsonContains(
                program.ConfigurationJson,
                query.RequiredTag,
                "$.tags"))
            .Select(program => new ProgramSearchRow(
                program.Id,
                program.Name))
            .ToListAsync(stopToken);
    }
}
This is useful, but be disciplined. JSON columns in SQL Server are not a free pass to avoid modelling. Use them when the shape is genuinely variable, externally owned, or document-like. Do not use them because you do not want to design a relational schema. EF making JSON easier is good. Teams abusing JSON as a junk drawer is not.
EF Core full-text search
EF Core 11 can configure SQL Server full-text catalogs and indexes in the model, allowing migrations to create and manage them. It also supports table-valued full-text functions such as FREETEXTTABLE() and CONTAINSTABLE(), returning result objects with both entity and ranking value.
// File: Infrastructure/Persistence/BloggingDbContext.cs
using Microsoft.EntityFrameworkCore;

namespace Infrastructure.Persistence;

public sealed class BloggingDbContext(DbContextOptions<BloggingDbContext> options)
    : DbContext(options)
{
    public DbSet<Blog> Blogs => Set<Blog>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.HasFullTextCatalog("ftCatalog");
        modelBuilder.Entity<Blog>()
            .HasFullTextIndex(blog => blog.FullName)
            .HasKeyIndex("PK_Blogs")
            .OnCatalog("ftCatalog");
    }
}

// File: Features/Blogs/SearchBlogs/SearchBlogsHandler.cs
using Microsoft.EntityFrameworkCore;

namespace Blogs.Features.SearchBlogs;

public sealed class SearchBlogsHandler(BloggingDbContext dbContext)
{
    public async Task<IReadOnlyList<BlogSearchResult>> Handle(
        string searchTerm,
        CancellationToken stopToken)
    {
        return await dbContext.Blogs
            .FreeTextTable(blog => blog.FullName, searchTerm)
            .Select(result => new BlogSearchResult(
                result.Value.Id,
                result.Value.FullName,
                result.Rank))
            .OrderByDescending(result => result.Rank)
            .ToListAsync(stopToken);
    }
}
This matters for teams that want repeatable database deployments. Hand-written SQL in migrations is sometimes necessary, but the more the model can express, the easier it is to reason about drift.
EF Core Cosmos DB provider
EF Core 11 improves the Azure Cosmos DB provider. Complex types are fully supported and embedded as nested JSON objects or arrays. Transactional batches are used by default for best-effort atomicity and improved performance when saving changes within a single partition. Bulk execution can be enabled for high-throughput writes. Session token APIs allow read-your-writes consistency across contexts and app instances.
// File: Infrastructure/Persistence/OrdersDbContext.cs
using Microsoft.EntityFrameworkCore;

namespace Infrastructure.Persistence;

public sealed class OrdersDbContext(DbContextOptions<OrdersDbContext> options)
    : DbContext(options)
{
    public DbSet<OrderDocument> Orders => Set<OrderDocument>();

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
    {
        optionsBuilder.UseCosmos(
            "<connection string>",
            databaseName: "OrdersDB",
            cosmosOptions =>
            {
                cosmosOptions.BulkExecutionEnabled();
                cosmosOptions.SessionTokenManagementMode(
                    SessionTokenManagementMode.SemiAutomatic);
            });
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<OrderDocument>()
            .ComplexProperty(order => order.ShippingAddress);
    }
}

// File: Features/Orders/CreateOrder/CreateOrderHandler.cs
using Microsoft.EntityFrameworkCore;

namespace Orders.Features.CreateOrder;

public sealed class CreateOrderHandler(OrdersDbContext dbContext)
{
    public async Task<string?> Handle(
        OrderDocument order,
        CancellationToken stopToken)
    {
        dbContext.Orders.Add(order);
        await dbContext.SaveChangesAsync(stopToken);
        return dbContext.Database.GetSessionToken();
    }
}
For distributed systems, the session token feature is the most interesting. If one request writes a document and another request lands on a different instance, you may need to carry the session token to guarantee the read sees the write. That is the sort of consistency detail that separates toy demos from production systems.
EF Core migrations
EF Core 11 adds the ability to exclude foreign-key constraints from migrations while keeping the relationship in the EF model. This is useful for legacy databases, sync scenarios, or schemas where the application model has a relationship but the database intentionally does not enforce it. The model snapshot also records the latest migration ID, so divergent migration trees are detected earlier through source-control conflicts. The dotnet ef database update command also supports creating and applying a migration in one step with --add.
// File: Infrastructure/Persistence/ProgramConfiguration.cs
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Metadata.Builders;
namespace Infrastructure.Persistence;
public sealed class ProgramConfiguration : IEntityTypeConfiguration<Program>
{
    public void Configure(EntityTypeBuilder<Program> builder)
    {
        builder.HasMany(program => program.Policies)
            .WithOne(policy => policy.Program)
            .HasForeignKey(policy => policy.ProgramId)
            .ExcludeForeignKeyFromMigrations();
    }
}
The migration snapshot change is not flashy, but it is useful. In teams, two developers creating migrations on different branches is common. The earlier you discover divergence, the less painful it is.
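The one-step create-and-apply workflow is a small quality-of-life win on top of that. I am assuming --add takes the new migration's name; the name here is a placeholder:
dotnet ef database update --add AddProgramReferralRules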
SDK and CLI
The .NET 11 SDK gets several practical updates. Linux and macOS installer sizes are reduced through assembly deduplication with symbolic links. Microsoft says analysis found 35 percent of the SDK directory consisted of duplicate files, and lists reductions such as the Linux x64 tarball dropping from 230 MB to 189 MB, the deb from 164 MB to 122 MB, and the rpm from 165 MB to 122 MB. Windows deduplication is planned for a future preview.
The CLI also gets solution filter support, file-based app includes, dotnet run -e for environment variables, and dotnet watch improvements such as Aspire integration, crash recovery, and better Ctrl+C handling for Windows desktop apps.
dotnet new slnf --name Underwriting.WorkingSet.slnf
dotnet sln Underwriting.WorkingSet.slnf add src/UWPrograms/UWPrograms.csproj
dotnet sln Underwriting.WorkingSet.slnf add src/UWNotes/UWNotes.csproj
dotnet sln Underwriting.WorkingSet.slnf list
For large modular monoliths, solution filters are not a toy. They let you load or build a subset of projects without hacking the main solution.
The new dotnet run -e option is also welcome:
dotnet run -e ASPNETCORE_ENVIRONMENT=Development -e LOG_LEVEL=Debug -e Features__RuntimeAsyncDiagnostics=true
That is cleaner than mutating shell state or editing launch settings for one-off local runs.
File-based apps now support #:include, which makes them less disposable:
// File: scripts/import-programs.cs
#:include helpers/csv.cs
#:include models/program-import-row.cs
using static ImportHelpers.Csv;
var rows = ReadRows("programs.csv");
foreach (var row in rows)
{
Console.WriteLine($"{row.Code}: {row.Name}");
}
For serious application code, use projects. For scripts, probes, repros, migration helpers, and small operational tools, file-based apps are becoming more useful.
Analyser improvements: fewer noisy warnings, better signal
.NET 11 improves CA1873, the analyser for potentially expensive logging arguments. The docs say property accesses, GetType(), GetHashCode(), and GetTimestamp() are no longer flagged, diagnostics apply only to Information level and below by default, and messages now explain the reason an argument was flagged, such as method invocation, object creation, boxing, string interpolation, collection expression, await expression, or with expression.
That is good analyser design. A noisy analyser is worse than no analyser because teams learn to ignore it. A precise analyser teaches better habits.
// File: Features/Orders/ProcessOrderHandler.cs
using Microsoft.Extensions.Logging;

namespace Orders.Features.ProcessOrder;

public sealed class ProcessOrderHandler(ILogger<ProcessOrderHandler> logger)
{
    public void Handle(Order order)
    {
        logger.LogInformation(
            "Processing order {OrderId} with total {Total}",
            order.Id,
            order.Total);

        if (logger.IsEnabled(LogLevel.Debug))
        {
            logger.LogDebug(
                "Order detail: {OrderDetail}",
                BuildExpensiveDebugView(order));
        }
    }

    private static object BuildExpensiveDebugView(Order order)
    {
        return new
        {
            order.Id,
            Lines = order.Lines.Select(line => new
            {
                line.Sku,
                line.Quantity,
                line.Price
            })
        };
    }
}
The point is not that every debug log needs a guard. The point is that analysers should push you toward guarding genuinely expensive work without nagging you about harmless property access.
Container images
In Preview 3, all .NET container images are cryptographically signed by Microsoft according to the Notary Project specification. The release notes show verification through Notation CLI or ORAS CLI.
notation inspect mcr.microsoft.com/dotnet/sdk:11.0.100-preview.3
oras discover mcr.microsoft.com/dotnet/sdk:11.0.100-preview.3
This is good because container security is moving from "scan this image after the fact" toward "prove the thing you pulled is the thing the publisher signed". For enterprise teams, especially in finance, insurance, healthcare, and government, signed base images should become part of the pipeline conversation.
The practical recommendation is to start with audit mode. Verify signatures in CI, collect failures, understand registry behaviour, then move toward enforcement. Do not turn on hard blocking in a mature pipeline until you know how your private registries, mirrors, and build agents behave.
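In practice, an audit step can run verification without failing the build. A sketch, assuming a Notation trust policy is already configured for mcr.microsoft.com:
notation verify mcr.microsoft.com/dotnet/sdk:11.0.100-preview.3 || echo "signature verification failed (audit only)"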
WebAssembly and browser workloads
.NET 11 improves WebAssembly support with WebCIL payload loading, better debugging symbols, and more direct marshalling for float[], Span<float>, and ArraySegment<float> across JavaScript boundaries.
The improved float marshalling is especially relevant for graphics, charts, audio, signal processing, or ML-ish browser workloads.
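As a sketch of where that lands in application code: the JSImport/JSExport interop has existed since .NET 7, and a float[] crossing the boundary is exactly the path .NET 11 improves. The class and method here are illustrative, assuming the float[] marshalling support described above.
// File: BrowserApp/AudioInterop.cs (illustrative)
using System.Runtime.InteropServices.JavaScript;

public static partial class AudioInterop
{
    // Exposed to JavaScript; the float[] parameter crosses the JS boundary,
    // which is the path .NET 11 marshals more directly.
    [JSExport]
    public static float ComputePeak(float[] samples)
    {
        var peak = 0f;
        foreach (var sample in samples)
        {
            var magnitude = Math.Abs(sample);
            if (magnitude > peak)
            {
                peak = magnitude;
            }
        }
        return peak;
    }
}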
The general trend is that .NET is not just a server runtime anymore. It is a runtime that spans server, desktop, mobile, browser, cloud, containers, and edge. You may not use all of those targets, but the shared runtime investment feeds back into the parts you do use.
Breaking changes worth watching
The .NET 11 breaking changes page is still a work in progress, but it already lists library behaviour changes such as CRC32 validation when reading ZIP entries, DeflateStream and GZipStream writing headers and footers for empty payloads, DateOnly and TimeOnly parsing behaviour changes, Environment.TickCount consistency changes, and MemoryStream capacity and exception behaviour updates.
EF Core 11 also has notable breaking changes. The Cosmos DB provider fully removes sync I/O support, Microsoft.Data.SqlClient moves to 7.0, EF throws by default when no migrations are found, EFOptimizeContext is removed, EF tools packages no longer directly reference Microsoft.EntityFrameworkCore.Design, vector properties are no longer loaded by default, and empty owned collections in Cosmos now return an empty collection rather than null.
For a serious migration, do not skim these. Breaking changes are rarely evenly distributed. One team sees nothing. Another team has a production ingestion pipeline that depends on old ZIP behaviour. Another has Cosmos code still using sync calls. Another has a build pipeline relying on old EF tooling package references. Review the changes against your actual code paths.
A realistic .NET 11 adoption strategy
For production systems, especially systems with external integrations, regulated data, or meaningful uptime requirements, the right adoption pattern is controlled exploration.
Start with developer machines and non-critical projects. Upgrade a small internal service, benchmark it, and look for build warnings. Then test libraries and shared packages. After that, trial runtime features such as Runtime Async in services where diagnostics matter and rollback is easy. For ASP.NET Core, test OpenTelemetry output, compression negotiation, OpenAPI generation, and caching behaviour. For EF Core, compare generated SQL, migration output, and query performance before touching production.
Your goal during preview is not to "migrate early". Your goal is to reduce uncertainty. By the time .NET 11 reaches GA, you should already know which features you want, which ones you will avoid, which dependencies block you, which code needs changes, and which services benefit enough to justify early movement.
What I would use early, and what I would wait on
I would trial SDK improvements immediately. Solution filters, dotnet run -e, file-based app includes, and analyser improvements are low-risk developer experience wins.
I would also trial OpenTelemetry changes early because observability configuration is easier to validate outside production. If you already use OpenTelemetry, compare spans and attributes. If you do not, use .NET 11 as a trigger to fix that.
I would test EF Core query improvements early but deploy carefully. EF upgrades can change generated SQL, migrations, and provider behaviour. That does not mean avoid them. It means inspect them.
I would experiment with union types, but I would not build core domain persistence around them yet. Use them in sample branches, internal tools, or application-layer outcomes. The feature is promising, but it is still preview.
I would be cautious with Runtime Async. It may become one of the most important .NET 11 features, but because it changes async lowering and runtime behaviour, I would test it hard before betting production services on it.
And I would treat vector search as architecture work, not a convenience API. EF support is welcome, but retrieval systems need design, not just LINQ.
.NET 11 Preview is not a flashy release in the way some developers expect. It is better than that. It is a platform-hardening release. Runtime Async attacks async diagnostics. The JIT keeps removing overhead from normal C#. C# 15 starts bringing native union modelling into reach. The BCL fills gaps around Unicode, JSON, compression, archive handling, and low-level I/O. ASP.NET Core tightens observability, compression, OpenAPI, Identity testability, and Blazor SSR. EF Core moves further into JSON, vector search, full-text search, Cosmos DB, and better SQL. The SDK improves daily loops. Container images get signed.
For .NET engineers, the takeaway is direct: .NET 11 is worth tracking now, not because you should deploy preview bits everywhere, but because the platform direction is clear. The winning teams will not wait until GA week to discover what changed. They will already have tested the runtime, inspected the generated SQL, validated their observability, checked their hardware baseline, and decided which features belong in their architecture.
SOURCES:
https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-11/overview





