Advanced Dependency Injection in .NET

Dependency injection in .NET looks simple at first. You register a service, inject an interface, and move on.
builder.Services.AddScoped<IOrderService, OrderService>();
That is the easy part.
The hard part starts when the application grows. Services become long-lived. Background workers need scoped dependencies. Multiple implementations appear. Configuration has to be validated. Factories creep in. HTTP clients need different policies. EF Core contexts start leaking into singletons. Someone injects IServiceProvider everywhere and calls it flexibility.
At that point, dependency injection stops being a framework feature and becomes an architecture concern.
Modern .NET gives you a capable built-in container. It supports the common lifetimes, constructor injection, open generics, keyed services, options, hosted services, logging, configuration, and integration with ASP.NET Core. C# 12 also gives us primary constructors, which are a good fit for dependency injection because they keep service dependencies visible without the boilerplate of field assignment constructors.
But a good DI setup is not about using every feature. It is about making dependencies honest.
A dependency graph should tell the truth about your application. It should show what a class needs, how long those dependencies live, where configuration enters the system, and where infrastructure decisions are made.
When DI is used well, your code becomes easier to reason about. When it is used badly, it becomes a service locator with nicer syntax.
This post goes deep into the parts of .NET dependency injection that usually cause real production problems: lifetimes, factories, options, keyed services, decorators, hosted services, EF Core, HttpClient, validation, and the hidden footguns that appear in large systems.
All examples use modern C# primary constructors where they make the code clearer.
The container is not your architecture
The built-in .NET container is intentionally simple. That is a strength.
It is not trying to be Autofac or a full composition framework. It does not push you toward complex registration conventions. It gives you enough structure to wire a modern application without turning service registration into a second programming language.
That does not mean you should place all design decisions inside Program.cs.
This is where many .NET applications begin to rot. The service collection becomes a dumping ground.
builder.Services.AddScoped<IUserService, UserService>();
builder.Services.AddScoped<IOrderService, OrderService>();
builder.Services.AddScoped<IInvoiceService, InvoiceService>();
builder.Services.AddScoped<IEmailService, EmailService>();
builder.Services.AddScoped<INotificationService, NotificationService>();
builder.Services.AddScoped<IPaymentService, PaymentService>();
builder.Services.AddScoped<IReportService, ReportService>();
This works, but it does not scale as a design.
A senior-level .NET application should treat DI registration as composition. Each module, feature area, or infrastructure concern should own its own registration boundary.
var builder = WebApplication.CreateBuilder(args);
builder.Services
.AddApiDefaults()
.AddOrdersModule(builder.Configuration)
.AddBillingModule(builder.Configuration)
.AddNotifications(builder.Configuration)
.AddPersistence(builder.Configuration)
.AddObservability(builder.Configuration);
var app = builder.Build();
app.MapOrdersEndpoints();
app.MapBillingEndpoints();
app.Run();
That is not just cleaner. It creates ownership.
The Orders module decides how Orders are composed. The Billing module decides how Billing is composed. Infrastructure is registered in one place. Cross-cutting concerns are obvious.
A good extension method should not hide magic. It should group related registrations.
public static class OrdersModuleRegistration
{
public static IServiceCollection AddOrdersModule(
this IServiceCollection services,
IConfiguration configuration)
{
services.AddScoped<IOrderRepository, SqlOrderRepository>();
services.AddScoped<IOrderNumberGenerator, OrderNumberGenerator>();
services.AddScoped<IPlaceOrderHandler, PlaceOrderHandler>();
services.AddOptions<OrderOptions>()
.Bind(configuration.GetSection(OrderOptions.SectionName))
.ValidateDataAnnotations()
.ValidateOnStart();
return services;
}
}
That is the right level of abstraction. It hides noise, not behaviour.
The important thing is that DI should compose your architecture. It should not become your architecture.
Primary constructors make DI cleaner, but they do not fix bad design
Primary constructors reduce ceremony.
Instead of writing fields, constructor parameters, assignments, and braces for every dependency, you can put dependencies directly on the type declaration.
public sealed class PlaceOrderHandler(
IOrderRepository orders,
IPaymentGateway payments,
IClock clock,
ILogger<PlaceOrderHandler> logger)
{
public async Task HandleAsync(
PlaceOrderCommand command,
CancellationToken stopToken)
{
logger.LogInformation(
"Placing order for customer {CustomerId}.",
command.CustomerId);
var order = Order.Place(
command.CustomerId,
command.Lines,
clock.UtcNow);
await payments.AuthoriseAsync(command.Payment, stopToken);
await orders.SaveAsync(order, stopToken);
}
}
The dependencies are still explicit. The class still tells the truth. You have just removed the constructor boilerplate.
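For comparison, here is a sketch of what the same handler looks like without primary constructors. Every dependency appears three times before the class does any work: as a parameter, as a field, and as an assignment.

```csharp
// The pre-C# 12 equivalent of the handler above (body omitted for brevity).
public sealed class PlaceOrderHandler
{
    private readonly IOrderRepository _orders;
    private readonly IPaymentGateway _payments;
    private readonly IClock _clock;
    private readonly ILogger<PlaceOrderHandler> _logger;

    public PlaceOrderHandler(
        IOrderRepository orders,
        IPaymentGateway payments,
        IClock clock,
        ILogger<PlaceOrderHandler> logger)
    {
        _orders = orders;
        _payments = payments;
        _clock = clock;
        _logger = logger;
    }
}
```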
This is a good fit for application handlers, endpoint services, validators, background processors, infrastructure clients, and decorators.
But primary constructors do not make a bad dependency graph good. If a class has twelve injected services, moving them into the class declaration does not solve the problem.
public sealed class CustomerApplicationService(
ICustomerRepository customers,
IOrderRepository orders,
IInvoiceRepository invoices,
IPaymentGateway payments,
IEmailSender emails,
ISmsSender sms,
IPdfGenerator pdfs,
IBlobStorage blobs,
IAuditWriter audit,
IUserContext userContext,
IClock clock,
ILogger<CustomerApplicationService> logger)
{
public Task DoEverythingAsync(CancellationToken stopToken)
{
throw new NotImplementedException();
}
}
This still smells. The problem was never the old constructor syntax. The problem is that the class is doing too much.
A cleaner design splits behaviours by use case.
public sealed class RegisterCustomerHandler(
ICustomerRepository customers,
IAuditWriter audit,
IClock clock)
{
public async Task HandleAsync(
RegisterCustomerCommand command,
CancellationToken stopToken)
{
var customer = Customer.Register(
command.Email,
command.Name,
clock.UtcNow);
await customers.AddAsync(customer, stopToken);
await audit.WriteAsync(
"CustomerRegistered",
customer.Id,
stopToken);
}
}
public sealed class CreateCustomerInvoiceHandler(
IInvoiceRepository invoices,
IPdfGenerator pdfs,
IBlobStorage blobs)
{
public async Task HandleAsync(
CreateCustomerInvoiceCommand command,
CancellationToken stopToken)
{
var invoice = await invoices.GetAsync(
command.InvoiceId,
stopToken);
var pdf = await pdfs.GenerateAsync(invoice, stopToken);
await blobs.SaveAsync(
$"invoices/{invoice.Id}.pdf",
pdf,
stopToken);
}
}
Primary constructors make good DI less noisy. They also make bloated classes look obviously bloated, which is useful.
Lifetimes are design decisions
Most DI mistakes are lifetime mistakes.
.NET gives you three core lifetimes: transient, scoped, and singleton. A transient service is created each time it is requested. A scoped service is created once per scope. In ASP.NET Core, that usually means once per HTTP request. A singleton is created once for the application lifetime.
The dangerous part is not choosing the wrong lifetime in isolation. The dangerous part is mixing lifetimes incorrectly.
Longer-lived services must not depend on shorter-lived services.
A singleton should not depend on a scoped service. A scoped service should be careful depending on transient services that hold disposable or expensive state. A transient service should not pretend to be stateless if it secretly caches request-specific data.
This is broken:
public sealed class OrderCache(OrdersDbContext dbContext)
{
public Task<Order?> GetAsync(
int orderId,
CancellationToken stopToken)
{
return dbContext.Orders
.FindAsync([orderId], stopToken)
.AsTask();
}
}
And then:
builder.Services.AddDbContext<OrdersDbContext>(options =>
{
options.UseSqlServer(connectionString);
});
builder.Services.AddSingleton<OrderCache>();
The singleton OrderCache captures a scoped OrdersDbContext. That is a broken object graph.
Even where scope validation does not catch it immediately, it is conceptually wrong. A singleton lives for the whole application. A DbContext is designed to represent a unit of work. It is not thread-safe, and it should not be shared across requests.
The fix is not to make the DbContext a singleton. The fix is to change the design.
For a singleton service that genuinely needs to run scoped work, inject IServiceScopeFactory, create a scope for the operation, resolve the scoped dependency inside that scope, then dispose the scope.
public sealed class OrderCache(
IMemoryCache cache,
IServiceScopeFactory scopeFactory)
{
public async Task<OrderSummary?> GetAsync(
int orderId,
CancellationToken stopToken)
{
var cacheKey = $"orders:summary:{orderId}";
if (cache.TryGetValue(cacheKey, out OrderSummary? cached))
{
return cached;
}
using var scope = scopeFactory.CreateScope();
var dbContext = scope.ServiceProvider
.GetRequiredService<OrdersDbContext>();
var order = await dbContext.Orders
.Where(x => x.Id == orderId)
.Select(x => new OrderSummary(
x.Id,
x.OrderNumber,
x.Status,
x.Total))
.SingleOrDefaultAsync(stopToken);
if (order is not null)
{
cache.Set(cacheKey, order, TimeSpan.FromMinutes(5));
}
return order;
}
}
This is acceptable when you genuinely need a singleton orchestration object to resolve scoped work. But do not reach for this pattern too quickly. If the cache is used only inside request handling, a scoped service is usually simpler.
A cleaner version is often this:
public interface IOrderSummaryReader
{
Task<OrderSummary?> GetAsync(
int orderId,
CancellationToken stopToken);
}
public sealed class SqlOrderSummaryReader(OrdersDbContext dbContext)
: IOrderSummaryReader
{
public Task<OrderSummary?> GetAsync(
int orderId,
CancellationToken stopToken)
{
return dbContext.Orders
.Where(x => x.Id == orderId)
.Select(x => new OrderSummary(
x.Id,
x.OrderNumber,
x.Status,
x.Total))
.SingleOrDefaultAsync(stopToken);
}
}
Then decorate or wrap that reader with caching.
public sealed class CachedOrderSummaryReader(
IOrderSummaryReader inner,
IMemoryCache cache)
: IOrderSummaryReader
{
public async Task<OrderSummary?> GetAsync(
int orderId,
CancellationToken stopToken)
{
var cacheKey = $"orders:summary:{orderId}";
if (cache.TryGetValue(cacheKey, out OrderSummary? cached))
{
return cached;
}
var order = await inner.GetAsync(orderId, stopToken);
if (order is not null)
{
cache.Set(cacheKey, order, TimeSpan.FromMinutes(5));
}
return order;
}
}
The registration can compose the concrete reader and the decorator.
builder.Services.AddMemoryCache();
builder.Services.AddScoped<SqlOrderSummaryReader>();
builder.Services.AddScoped<IOrderSummaryReader>(sp =>
{
var inner = sp.GetRequiredService<SqlOrderSummaryReader>();
var cache = sp.GetRequiredService<IMemoryCache>();
return new CachedOrderSummaryReader(inner, cache);
});
This version keeps database access scoped. Cache storage stays singleton. The class using both is scoped, which is safe.
This is the point many teams miss. DI lifetimes are not just container settings. They describe how your application state moves through time.
Validate scopes before production does it for you
Scope validation catches some of the most expensive DI mistakes early.
In development, ASP.NET Core usually gives you sensible validation defaults. But you should still be deliberate, especially in worker services, integration tests, custom hosts, and CI pipelines.
var builder = WebApplication.CreateBuilder(args);
builder.Host.UseDefaultServiceProvider((context, options) =>
{
var isDevelopment = context.HostingEnvironment.IsDevelopment();
options.ValidateScopes = isDevelopment;
options.ValidateOnBuild = isDevelopment;
});
ValidateScopes catches scoped services being resolved from the root provider. ValidateOnBuild checks that services can be constructed when the provider is built.
Do not switch these on blindly in every production environment without thinking. Some graphs use factories or runtime-only registrations that can make build-time validation awkward. But in development and CI, validation is a gift. It finds lifetime mistakes before users do.
The worst version of this problem is not the exception. The worst version is no exception.
A scoped service captured by a singleton may not fail immediately. It may behave strangely under load, leak state between requests, or create concurrency bugs that only happen on a busy day.
That is why scope validation matters.
Avoid IServiceProvider unless you are at a boundary
IServiceProvider is not evil. But injecting it into normal application services is usually a mistake.
This is a service locator:
public sealed class PlaceOrderHandler(IServiceProvider serviceProvider)
{
public async Task HandleAsync(
PlaceOrderCommand command,
CancellationToken stopToken)
{
var repository = serviceProvider
.GetRequiredService<IOrderRepository>();
var paymentGateway = serviceProvider
.GetRequiredService<IPaymentGateway>();
await paymentGateway.TakePaymentAsync(command.Payment, stopToken);
await repository.SaveAsync(command.Order, stopToken);
}
}
This hides the real dependencies. The class looks like it needs only IServiceProvider, but it actually needs an order repository and a payment gateway.
The correct version is direct.
public sealed class PlaceOrderHandler(
IOrderRepository orders,
IPaymentGateway payments)
{
public async Task HandleAsync(
PlaceOrderCommand command,
CancellationToken stopToken)
{
await payments.TakePaymentAsync(command.Payment, stopToken);
await orders.SaveAsync(command.Order, stopToken);
}
}
There are valid places for IServiceProvider. Composition roots can use it. Factories can use it. Background services can use IServiceScopeFactory. Framework integration points sometimes need it. But domain services and application handlers usually should not.
If IServiceProvider is injected because the class genuinely creates scopes or bridges a framework boundary, it may be fine. If it is injected to avoid listing dependencies in the constructor, it is hiding design debt.
Factories are for runtime decisions, not laziness
Factories are often abused.
A good factory handles a runtime decision that constructor injection cannot express cleanly.
A bad factory is just a service locator with a nicer name.
Suppose you have multiple exporters.
public interface IReportExporter
{
string Format { get; }
Task ExportAsync(
Report report,
Stream output,
CancellationToken stopToken);
}
public sealed class PdfReportExporter : IReportExporter
{
public string Format => "pdf";
public Task ExportAsync(
Report report,
Stream output,
CancellationToken stopToken)
{
return Task.CompletedTask;
}
}
public sealed class CsvReportExporter : IReportExporter
{
public string Format => "csv";
public Task ExportAsync(
Report report,
Stream output,
CancellationToken stopToken)
{
return Task.CompletedTask;
}
}
You can inject IEnumerable<IReportExporter> and choose the implementation.
public sealed class ReportExporterFactory(
IEnumerable<IReportExporter> exporters)
{
private readonly IReadOnlyDictionary<string, IReportExporter> _exporters =
exporters.ToDictionary(
x => x.Format,
StringComparer.OrdinalIgnoreCase);
public IReportExporter GetRequired(string format)
{
if (_exporters.TryGetValue(format, out var exporter))
{
return exporter;
}
throw new NotSupportedException(
$"Report format '{format}' is not supported.");
}
}
Registration is simple.
builder.Services.AddScoped<IReportExporter, PdfReportExporter>();
builder.Services.AddScoped<IReportExporter, CsvReportExporter>();
builder.Services.AddScoped<ReportExporterFactory>();
The consuming service stays clean.
public sealed class ExportReportHandler(ReportExporterFactory exporters)
{
public async Task HandleAsync(
ExportReportCommand command,
Stream output,
CancellationToken stopToken)
{
var exporter = exporters.GetRequired(command.Format);
await exporter.ExportAsync(command.Report, output, stopToken);
}
}
That is a valid factory. The runtime input is the report format. The factory hides lookup mechanics, not dependencies.
That is different from this:
public sealed class LazyEverythingFactory(IServiceProvider serviceProvider)
{
public T Create<T>() where T : notnull
{
return serviceProvider.GetRequiredService<T>();
}
}
That factory adds no domain meaning. It just moves service location somewhere else.
Factories should represent meaningful creation logic. They should not exist merely because constructor injection made a dependency graph uncomfortable.
Keyed services are useful, but do not turn them into stringly typed architecture
Keyed services let you register multiple implementations of the same service type under different keys, then resolve the specific one you need.
This is useful when the distinction is infrastructural and stable.
For example, you might have two file stores.
public interface IFileStore
{
Task SaveAsync(
string path,
Stream content,
CancellationToken stopToken);
}
public sealed class PublicFileStore : IFileStore
{
public Task SaveAsync(
string path,
Stream content,
CancellationToken stopToken)
{
return Task.CompletedTask;
}
}
public sealed class PrivateFileStore : IFileStore
{
public Task SaveAsync(
string path,
Stream content,
CancellationToken stopToken)
{
return Task.CompletedTask;
}
}
Register them with keys.
builder.Services.AddKeyedScoped<IFileStore, PublicFileStore>("public");
builder.Services.AddKeyedScoped<IFileStore, PrivateFileStore>("private");
Then inject a keyed service where the dependency is known at compile time.
public sealed class UploadPublicAssetHandler(
[FromKeyedServices("public")] IFileStore fileStore)
{
public Task HandleAsync(
Stream content,
CancellationToken stopToken)
{
return fileStore.SaveAsync(
"assets/logo.png",
content,
stopToken);
}
}
This is clear enough. The handler specifically needs the public file store.
But be careful. Keyed services can become a string-based decision engine.
public sealed class FileStoreRouter(IServiceProvider serviceProvider)
{
public IFileStore Get(string key)
{
return serviceProvider.GetRequiredKeyedService<IFileStore>(key);
}
}
This may be okay at an infrastructure boundary. But if the key comes from user input, database values, or loosely controlled configuration, you now have runtime service selection hidden behind strings.
A safer approach is to use a domain enum and centralise the mapping.
public enum FileVisibility
{
Public = 1,
Private = 2
}
public sealed class FileStoreSelector(
[FromKeyedServices("public")] IFileStore publicStore,
[FromKeyedServices("private")] IFileStore privateStore)
{
public IFileStore Select(FileVisibility visibility)
{
return visibility switch
{
FileVisibility.Public => publicStore,
FileVisibility.Private => privateStore,
_ => throw new ArgumentOutOfRangeException(nameof(visibility))
};
}
}
This keeps the keys near the composition layer and gives the rest of your application a type-safe model.
Use keyed services for stable infrastructure variation. Do not use them as a substitute for proper domain modelling.
Options should be validated, not trusted
Configuration is one of the most common sources of production failure.
A missing API key. A malformed URL. A timeout set to zero. A feature toggle accidentally left blank. These are not rare events. They happen constantly.
The options pattern gives strongly typed access to related configuration values.
public sealed class PaymentGatewayOptions
{
public const string SectionName = "PaymentGateway";
public required string BaseUrl { get; init; }
public required string ApiKey { get; init; }
public int TimeoutSeconds { get; init; } = 30;
}
Do not inject IConfiguration deep into your application and read random keys.
public sealed class PaymentGateway(IConfiguration configuration)
{
public Task ChargeAsync(
PaymentRequest request,
CancellationToken stopToken)
{
var apiKey = configuration["PaymentGateway:ApiKey"];
return Task.CompletedTask;
}
}
That is weak. The key is stringly typed. The value might be missing. The failure happens too late.
Bind and validate options during startup.
builder.Services.AddOptions<PaymentGatewayOptions>()
.Bind(builder.Configuration.GetSection(PaymentGatewayOptions.SectionName))
.Validate(options => Uri.TryCreate(
options.BaseUrl,
UriKind.Absolute,
out _),
"PaymentGateway:BaseUrl must be an absolute URL.")
.Validate(options => !string.IsNullOrWhiteSpace(options.ApiKey),
"PaymentGateway:ApiKey is required.")
.Validate(options => options.TimeoutSeconds is >= 1 and <= 300,
"PaymentGateway:TimeoutSeconds must be between 1 and 300.")
.ValidateOnStart();
If configuration is invalid, fail the application at startup. Do not wait until the first customer tries to pay.
Now inject options properly.
public sealed class PaymentGateway(
IOptions<PaymentGatewayOptions> options,
HttpClient httpClient)
{
private readonly PaymentGatewayOptions _options = options.Value;
public Task ChargeAsync(
PaymentRequest request,
CancellationToken stopToken)
{
httpClient.BaseAddress ??= new Uri(_options.BaseUrl);
return Task.CompletedTask;
}
}
For normal application services, IOptions<T> is usually fine. For per-request reloadable configuration in ASP.NET Core, IOptionsSnapshot<T> may be useful. For services that need change notifications or named options, IOptionsMonitor<T> may fit better.
But do not default to IOptionsMonitor<T> everywhere. Most services do not need live reload semantics. They just need valid configuration.
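When a service genuinely does need live reload, the difference shows up in which member you read. A minimal sketch, assuming the `PaymentGatewayOptions` type from earlier and a hypothetical `GatewayStatusReporter`:

```csharp
public sealed class GatewayStatusReporter(
    IOptionsMonitor<PaymentGatewayOptions> options)
{
    public string Describe()
    {
        // CurrentValue reflects the latest reloaded configuration each time
        // it is read, unlike IOptions<T>.Value, which is computed once.
        var current = options.CurrentValue;
        return $"{current.BaseUrl} (timeout {current.TimeoutSeconds}s)";
    }
}
```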
The senior-level move is to make invalid configuration impossible to ignore.
Do not inject raw primitive configuration everywhere
Options are better than raw configuration, but you can go further.
Sometimes a service does not need an entire options object. It needs a concept.
public sealed record TokenIssuerSettings(
string Issuer,
string Audience,
TimeSpan Lifetime);
public sealed class TokenIssuer(TokenIssuerSettings settings)
{
public SecurityToken CreateToken(UserIdentity user)
{
throw new NotImplementedException();
}
}
Compose that concept at the boundary.
builder.Services.AddSingleton(sp =>
{
var options = sp.GetRequiredService<IOptions<AuthOptions>>().Value;
return new TokenIssuerSettings(
options.Issuer,
options.Audience,
TimeSpan.FromMinutes(options.TokenLifetimeMinutes));
});
builder.Services.AddSingleton<TokenIssuer>();
This is especially useful when your option class mirrors configuration, but your domain or infrastructure service needs a cleaner value object.
Configuration classes are external input models. They are not always the best internal model.
HttpClient belongs in DI, but not as a singleton you create yourself
HttpClient is another common DI footgun.
This is weak:
builder.Services.AddSingleton(new HttpClient());
This is worse:
public sealed class PaymentGateway
{
public async Task ChargeAsync(
PaymentRequest request,
CancellationToken stopToken)
{
using var httpClient = new HttpClient();
await httpClient.PostAsJsonAsync(
"/payments",
request,
stopToken);
}
}
The modern .NET approach is to use IHttpClientFactory, typed clients, or keyed clients depending on the use case.
A typed client is often the cleanest option.
public sealed class PaymentGatewayClient(HttpClient httpClient)
{
public async Task<PaymentResult> ChargeAsync(
PaymentRequest request,
CancellationToken stopToken)
{
using var response = await httpClient.PostAsJsonAsync(
"payments",
request,
stopToken);
response.EnsureSuccessStatusCode();
var result = await response.Content
.ReadFromJsonAsync<PaymentResult>(
cancellationToken: stopToken);
return result ?? throw new InvalidOperationException(
"Payment gateway returned an empty response.");
}
}
Register it like this:
builder.Services.AddHttpClient<PaymentGatewayClient>((sp, client) =>
{
var options = sp
.GetRequiredService<IOptions<PaymentGatewayOptions>>()
.Value;
client.BaseAddress = new Uri(options.BaseUrl);
client.Timeout = TimeSpan.FromSeconds(options.TimeoutSeconds);
client.DefaultRequestHeaders.Add(
"X-Api-Key",
options.ApiKey);
});
Then inject the typed client.
public sealed class TakePaymentHandler(
PaymentGatewayClient paymentGateway)
{
public Task<PaymentResult> HandleAsync(
PaymentRequest request,
CancellationToken stopToken)
{
return paymentGateway.ChargeAsync(request, stopToken);
}
}
This keeps HTTP configuration in composition, not scattered through application code.
Typed clients also make tests clearer. Your handler depends on a payment gateway client, not on a random HttpClient with unknown configuration.
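One common way to exploit that in tests is to construct the typed client over a stub HttpMessageHandler, so no network is involved. This is a sketch; the stub class and the usage comments are illustrative, not part of any framework API.

```csharp
// A stub handler returns a canned response for every request, which lets a
// test construct PaymentGatewayClient directly without IHttpClientFactory.
public sealed class StubHandler(HttpResponseMessage response) : HttpMessageHandler
{
    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request,
        CancellationToken cancellationToken)
        => Task.FromResult(response);
}

// Usage in a test (sketch):
// var httpClient = new HttpClient(new StubHandler(cannedResponse))
// {
//     BaseAddress = new Uri("https://gateway.test/")
// };
// var client = new PaymentGatewayClient(httpClient);
```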
BackgroundService is singleton, so scoped dependencies need scopes
Hosted services and background workers are another lifetime trap.
When you register a hosted service, it is effectively long-lived. You cannot safely inject scoped services directly into it and treat them as if they belong to each iteration.
This is wrong:
public sealed class InvoiceWorker(InvoicesDbContext dbContext)
: BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken stopToken)
{
while (!stopToken.IsCancellationRequested)
{
var invoices = await dbContext.Invoices
.Where(x => x.Status == InvoiceStatus.Pending)
.ToListAsync(stopToken);
await Task.Delay(TimeSpan.FromMinutes(1), stopToken);
}
}
}
The worker is long-lived. The DbContext is scoped. Bad match.
This is better:
public sealed class InvoiceWorker(
IServiceScopeFactory scopeFactory,
ILogger<InvoiceWorker> logger)
: BackgroundService
{
protected override async Task ExecuteAsync(CancellationToken stopToken)
{
while (!stopToken.IsCancellationRequested)
{
try
{
using var scope = scopeFactory.CreateScope();
var processor = scope.ServiceProvider
.GetRequiredService<IInvoiceBatchProcessor>();
await processor.ProcessPendingAsync(stopToken);
}
catch (OperationCanceledException)
when (stopToken.IsCancellationRequested)
{
break;
}
catch (Exception ex)
{
logger.LogError(
ex,
"Invoice worker failed while processing pending invoices.");
}
await Task.Delay(TimeSpan.FromMinutes(1), stopToken);
}
}
}
Then put the scoped logic in a scoped service.
public interface IInvoiceBatchProcessor
{
Task ProcessPendingAsync(CancellationToken stopToken);
}
public sealed class InvoiceBatchProcessor(
InvoicesDbContext dbContext,
ILogger<InvoiceBatchProcessor> logger)
: IInvoiceBatchProcessor
{
public async Task ProcessPendingAsync(CancellationToken stopToken)
{
var invoices = await dbContext.Invoices
.Where(x => x.Status == InvoiceStatus.Pending)
.Take(100)
.ToListAsync(stopToken);
foreach (var invoice in invoices)
{
invoice.MarkProcessing();
}
await dbContext.SaveChangesAsync(stopToken);
logger.LogInformation(
"Marked {InvoiceCount} invoices as processing.",
invoices.Count);
}
}
Registration:
builder.Services.AddHostedService<InvoiceWorker>();
builder.Services.AddScoped<IInvoiceBatchProcessor, InvoiceBatchProcessor>();
The worker controls scheduling. The scoped processor controls unit-of-work behaviour. The DbContext lives and dies inside the scope.
That separation prevents a whole class of production bugs.
Decorators are better than spreading cross-cutting code everywhere
The built-in container does not have first-class decorator registration like some third-party containers. But you can still apply the decorator pattern manually, or use a library if your team accepts that dependency.
The goal is simple. Keep cross-cutting behaviour out of business logic.
Suppose you have this handler contract:
public interface ICommandHandler<TCommand>
{
Task HandleAsync(
TCommand command,
CancellationToken stopToken);
}
A real handler should focus on the use case.
public sealed class PlaceOrderHandler(OrdersDbContext dbContext)
: ICommandHandler<PlaceOrderCommand>
{
public async Task HandleAsync(
PlaceOrderCommand command,
CancellationToken stopToken)
{
var order = Order.Place(
command.CustomerId,
command.Lines);
dbContext.Orders.Add(order);
await dbContext.SaveChangesAsync(stopToken);
}
}
Now add logging without polluting the handler.
public sealed class LoggingCommandHandler<TCommand>(
ICommandHandler<TCommand> inner,
ILogger<LoggingCommandHandler<TCommand>> logger)
: ICommandHandler<TCommand>
{
public async Task HandleAsync(
TCommand command,
CancellationToken stopToken)
{
var commandName = typeof(TCommand).Name;
logger.LogInformation(
"Handling command {CommandName}.",
commandName);
try
{
await inner.HandleAsync(command, stopToken);
logger.LogInformation(
"Handled command {CommandName}.",
commandName);
}
catch (Exception ex)
{
logger.LogError(
ex,
"Command {CommandName} failed.",
commandName);
throw;
}
}
}
Manual registration for one command can look like this:
builder.Services.AddScoped<PlaceOrderHandler>();
builder.Services.AddScoped<ICommandHandler<PlaceOrderCommand>>(sp =>
{
var inner = sp.GetRequiredService<PlaceOrderHandler>();
var logger = sp.GetRequiredService<
ILogger<LoggingCommandHandler<PlaceOrderCommand>>>();
return new LoggingCommandHandler<PlaceOrderCommand>(
inner,
logger);
});
That is fine for a small number of handlers. If you have many handlers and many decorators, manual registration becomes painful. At that point, either introduce a scanning and decorator library carefully or use a pattern that fits your architecture.
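If your team does accept a library for this, Scrutor is a common choice. As a sketch, assuming the Scrutor package is referenced, its `Decorate` extension can wrap an open generic in one registration instead of one lambda per handler:

```csharp
// Register the concrete handlers first, then apply the decorator to every
// ICommandHandler<T> registration in one pass (Scrutor's Decorate extension).
builder.Services.AddScoped<ICommandHandler<PlaceOrderCommand>, PlaceOrderHandler>();
builder.Services.Decorate(typeof(ICommandHandler<>), typeof(LoggingCommandHandler<>));
```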
The key point is that decorators should preserve the dependency graph. They should make behaviour explicit at the boundary, not hide it inside random base classes or global static helpers.
Interceptors are powerful, but they are not a dumping ground
Interceptors sit lower than decorators. They are useful when you need to hook into infrastructure behaviour.
EF Core interceptors are a good example. You can use a SaveChangesInterceptor to add audit fields, publish outbox messages, or enforce persistence rules.
public sealed class AuditSaveChangesInterceptor(
IUserContext userContext,
IClock clock)
: SaveChangesInterceptor
{
public override InterceptionResult<int> SavingChanges(
DbContextEventData eventData,
InterceptionResult<int> result)
{
ApplyAuditValues(eventData.Context);
return base.SavingChanges(eventData, result);
}
public override ValueTask<InterceptionResult<int>> SavingChangesAsync(
DbContextEventData eventData,
InterceptionResult<int> result,
CancellationToken stopToken = default)
{
ApplyAuditValues(eventData.Context);
return base.SavingChangesAsync(eventData, result, stopToken);
}
private void ApplyAuditValues(DbContext? dbContext)
{
if (dbContext is null)
{
return;
}
var now = clock.UtcNow;
var userId = userContext.UserId;
foreach (var entry in dbContext.ChangeTracker
.Entries<IAuditableEntity>())
{
if (entry.State == EntityState.Added)
{
entry.Entity.CreatedAtUtc = now;
entry.Entity.CreatedBy = userId;
}
if (entry.State == EntityState.Modified)
{
entry.Entity.UpdatedAtUtc = now;
entry.Entity.UpdatedBy = userId;
}
}
}
}
Register the interceptor and add it to the context.
builder.Services.AddScoped<AuditSaveChangesInterceptor>();
builder.Services.AddDbContext<OrdersDbContext>((sp, options) =>
{
var connectionString = builder.Configuration
.GetConnectionString("Orders");
var auditInterceptor = sp
.GetRequiredService<AuditSaveChangesInterceptor>();
options.UseSqlServer(connectionString);
options.AddInterceptors(auditInterceptor);
});
This is a good use of DI. The interceptor has dependencies. The DbContext registration composes those dependencies.
But interceptors can become dangerous when teams use them to hide business workflows.
Auditing in an interceptor is reasonable. Updating denormalised projections may be reasonable. Writing an outbox message can be reasonable if the design is clear.
Calling external APIs from a SaveChangesInterceptor is usually a bad idea. Sending emails from an interceptor is usually a bad idea. Making domain decisions in an interceptor is usually a bad idea.
The lower the abstraction, the less business meaning it should contain.
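As a sketch of the reasonable end of that spectrum, an outbox interceptor only records facts inside the same transaction; it performs no external I/O. The `OutboxMessage` entity and `IDomainEventHolder` interface here are hypothetical, and the async override is omitted for brevity:

```csharp
// Hypothetical sketch: entities expose raised domain events through
// IDomainEventHolder, and the interceptor copies them into an outbox
// table as part of the same SaveChanges call. Nothing leaves the process.
public sealed class OutboxSaveChangesInterceptor(IClock clock)
    : SaveChangesInterceptor
{
    public override InterceptionResult<int> SavingChanges(
        DbContextEventData eventData,
        InterceptionResult<int> result)
    {
        if (eventData.Context is { } dbContext)
        {
            foreach (var entry in dbContext.ChangeTracker
                .Entries<IDomainEventHolder>())
            {
                foreach (var domainEvent in entry.Entity.DequeueEvents())
                {
                    dbContext.Set<OutboxMessage>().Add(new OutboxMessage
                    {
                        OccurredAtUtc = clock.UtcNow,
                        Type = domainEvent.GetType().Name,
                        Payload = JsonSerializer.Serialize(
                            domainEvent, domainEvent.GetType())
                    });
                }
            }
        }

        return base.SavingChanges(eventData, result);
    }
}
```

A separate background worker can then publish the outbox rows. The interceptor stays dumb: it records, it does not decide.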
Open generics can remove noise
Open generic registrations are useful when the same implementation shape applies to many closed types.
For example:
public interface IRepository<TEntity>
where TEntity : class
{
Task<TEntity?> GetByIdAsync(
int id,
CancellationToken stopToken);
Task AddAsync(
TEntity entity,
CancellationToken stopToken);
}
public sealed class EfRepository<TEntity>(DbContext dbContext)
: IRepository<TEntity>
where TEntity : class
{
public async Task<TEntity?> GetByIdAsync(
int id,
CancellationToken stopToken)
{
return await dbContext.Set<TEntity>()
.FindAsync([id], stopToken);
}
public async Task AddAsync(
TEntity entity,
CancellationToken stopToken)
{
await dbContext.Set<TEntity>()
.AddAsync(entity, stopToken);
}
}
Registration:
builder.Services.AddScoped(typeof(IRepository<>), typeof(EfRepository<>));
This can be useful, but it can also be overused.
Generic repositories often become leaky abstractions over EF Core. If every query needs custom includes, projections, filters, sorting, pagination, and aggregate-specific rules, a generic repository may add little value.
Open generics are better for genuinely generic infrastructure patterns, such as validators, pipeline behaviours, serialisers, mappers, and decorators.
public interface IValidator<T>
{
ValidationResult Validate(T instance);
}
public sealed class DataAnnotationsValidator<T> : IValidator<T>
{
public ValidationResult Validate(T instance)
{
throw new NotImplementedException();
}
}
builder.Services.AddScoped(
typeof(IValidator<>),
typeof(DataAnnotationsValidator<>));
That kind of registration removes repetition without pretending all domain persistence is the same.
Use open generics when the abstraction is genuinely generic. Do not use them to force a generic design over non-generic business behaviour.
TryAdd is for defaults, not application decisions
TryAdd is useful when you are writing reusable libraries or module registrations that should provide defaults without overriding application choices.
services.TryAddSingleton<IClock, SystemClock>();
This says: if the application has not already registered an IClock, use SystemClock.
That is good library behaviour.
But inside an application, overusing TryAdd can hide registration mistakes.
services.TryAddScoped<IPaymentGateway, FakePaymentGateway>();
That is dangerous if someone expected the real payment gateway to be registered.
For application code, prefer explicit registrations. For library code, module defaults, and test overrides, TryAdd has a clear purpose.
A reusable package might do this:
public static class NotificationsRegistration
{
public static IServiceCollection AddNotifications(
this IServiceCollection services,
IConfiguration configuration)
{
services.AddOptions<NotificationOptions>()
.Bind(configuration.GetSection(NotificationOptions.SectionName))
.ValidateDataAnnotations()
.ValidateOnStart();
services.TryAddSingleton<IClock, SystemClock>();
services.TryAddScoped<IEmailRenderer, DefaultEmailRenderer>();
services.TryAddScoped<INotificationSender, SmtpNotificationSender>();
return services;
}
}
If you support overriding, document it and test it. Silent registration order bugs are painful.
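Overriding one of those defaults then looks like this in the consuming application. Because TryAdd only registers when nothing is present, the cleanest pattern is to register the override before calling the module extension, so the module's TryAdd becomes a no-op. The `BrandedEmailRenderer` type here is hypothetical:

```csharp
// The application supplies its own renderer first, so the module's
// TryAddScoped<IEmailRenderer, DefaultEmailRenderer>() does nothing.
builder.Services.AddScoped<IEmailRenderer, BrandedEmailRenderer>();
builder.Services.AddNotifications(builder.Configuration);
```

Registering the override after the module call also works for single resolution, since the last registration wins, but it leaves two registrations in the collection and surprises anyone resolving `IEnumerable<IEmailRenderer>`.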
Service registration order
The built-in container preserves registration order in important ways.
When resolving a single service, the last registration usually wins.
builder.Services.AddScoped<INotificationSender, SmtpNotificationSender>();
builder.Services.AddScoped<INotificationSender, SendGridNotificationSender>();
Injecting INotificationSender gives you SendGridNotificationSender.
When resolving IEnumerable<INotificationSender>, you get all registrations in order.
public sealed class CompositeNotificationSender(
IEnumerable<INotificationSender> senders)
{
private readonly IReadOnlyList<INotificationSender> _senders =
senders.ToList();
public async Task SendAsync(
Notification notification,
CancellationToken stopToken)
{
foreach (var sender in _senders)
{
await sender.SendAsync(notification, stopToken);
}
}
}
That behaviour is useful, but relying on registration order too heavily can make your app fragile.
If order is business-critical, model it explicitly.
public interface INotificationChannel
{
int Priority { get; }
Task SendAsync(
Notification notification,
CancellationToken stopToken);
}
public sealed class NotificationDispatcher(
IEnumerable<INotificationChannel> channels)
{
private readonly IReadOnlyList<INotificationChannel> _channels =
channels
.OrderBy(x => x.Priority)
.ToList();
public async Task DispatchAsync(
Notification notification,
CancellationToken stopToken)
{
foreach (var channel in _channels)
{
await channel.SendAsync(notification, stopToken);
}
}
}
Registration order is fine for composition mechanics. It should not be the only place where business order exists.
Avoid static service access
Static service access is one of the fastest ways to ruin a clean dependency graph.
public static class ServiceLocator
{
public static IServiceProvider Services { get; set; } = default!;
}
Then:
public sealed class Order
{
public void Place()
{
var clock = ServiceLocator.Services
.GetRequiredService<IClock>();
CreatedAtUtc = clock.UtcNow;
}
public DateTimeOffset CreatedAtUtc { get; private set; }
}
This creates hidden dependencies, makes tests awkward, and couples your domain model to the container.
Domain entities should not resolve services. They should receive values or collaborate with domain services outside the entity.
public sealed class Order
{
public int CustomerId { get; private init; }
public List<OrderLine> Lines { get; private init; } = [];
public DateTimeOffset CreatedAtUtc { get; private init; }
public static Order Place(
int customerId,
IReadOnlyCollection<OrderLine> lines,
DateTimeOffset now)
{
return new Order
{
CustomerId = customerId,
Lines = lines.ToList(),
CreatedAtUtc = now
};
}
}
The handler supplies the time.
public sealed class PlaceOrderHandler(
OrdersDbContext dbContext,
IClock clock)
{
public async Task HandleAsync(
PlaceOrderCommand command,
CancellationToken stopToken)
{
var order = Order.Place(
command.CustomerId,
command.Lines,
clock.UtcNow);
dbContext.Orders.Add(order);
await dbContext.SaveChangesAsync(stopToken);
}
}
This keeps the domain model clean. The entity does not know where time came from. It just receives the value it needs.
Be careful with disposable transients
The .NET container disposes services it creates when the owning scope is disposed. That sounds helpful, but it can surprise people.
If you register a disposable transient and resolve many instances from the same scope, the container tracks every one of them and holds a reference until the scope is disposed. Resolve a disposable transient from the root provider and it is held until the application shuts down, which is effectively a memory leak.
That can be a problem if the transient owns scarce resources.
public sealed class TemporaryFileWriter : IDisposable
{
private readonly FileStream _stream;
public TemporaryFileWriter(string path)
{
_stream = File.OpenWrite(path);
}
public void Dispose()
{
_stream.Dispose();
}
}
Do not register and resolve this casually as a transient if you need precise disposal timing.
A factory is clearer.
public interface ITemporaryFileWriterFactory
{
TemporaryFileWriter Create(string path);
}
public sealed class TemporaryFileWriterFactory
: ITemporaryFileWriterFactory
{
public TemporaryFileWriter Create(string path)
{
return new TemporaryFileWriter(path);
}
}
Usage:
public sealed class ExportFileHandler(
ITemporaryFileWriterFactory factory)
{
public Task HandleAsync(
string path,
CancellationToken stopToken)
{
using var writer = factory.Create(path);
// Write the export content here. Disposal happens deterministically
// when the method returns, not when some scope ends.
return Task.CompletedTask;
}
}
Register the factory.
builder.Services.AddSingleton<
ITemporaryFileWriterFactory,
TemporaryFileWriterFactory>();
The point is ownership. If the caller must control disposal, a factory often communicates that better than container-managed transients.
Use DI to protect module boundaries
In a modular monolith, DI can either preserve boundaries or destroy them.
The bad version is where every module registers every implementation publicly and any feature can inject anything.
public sealed class BillingService(
OrdersDbContext ordersDbContext,
UsersDbContext usersDbContext,
BillingDbContext billingDbContext)
{
public Task CreateInvoiceAsync(CancellationToken stopToken)
{
throw new NotImplementedException();
}
}
This is how a modular monolith becomes a distributed ball of mud without the network.
A better design exposes module contracts and hides internals.
public interface IOrdersReader
{
Task<OrderBillingSnapshot?> GetBillingSnapshotAsync(
int orderId,
CancellationToken stopToken);
}
Billing depends on the Orders contract, not the Orders database.
public sealed class BillingInvoiceCreator(
IOrdersReader ordersReader,
BillingDbContext billingDbContext)
{
public async Task CreateAsync(
int orderId,
CancellationToken stopToken)
{
var order = await ordersReader.GetBillingSnapshotAsync(
orderId,
stopToken);
if (order is null)
{
throw new InvalidOperationException(
$"Order {orderId} was not found.");
}
var invoice = Invoice.Create(
order.OrderId,
order.CustomerId,
order.Total,
order.Currency);
billingDbContext.Invoices.Add(invoice);
await billingDbContext.SaveChangesAsync(stopToken);
}
}
The Orders module owns its implementation.
internal sealed class OrdersReader(OrdersDbContext dbContext)
: IOrdersReader
{
public Task<OrderBillingSnapshot?> GetBillingSnapshotAsync(
int orderId,
CancellationToken stopToken)
{
return dbContext.Orders
.Where(x => x.Id == orderId)
.Select(x => new OrderBillingSnapshot(
x.Id,
x.CustomerId,
x.Total,
x.Currency))
.SingleOrDefaultAsync(stopToken);
}
}
Registration can expose only the interface.
public static class OrdersModuleRegistration
{
public static IServiceCollection AddOrdersModule(
this IServiceCollection services,
IConfiguration configuration)
{
services.AddDbContext<OrdersDbContext>(options =>
{
var connectionString = configuration
.GetConnectionString("Orders");
options.UseSqlServer(connectionString);
});
services.AddScoped<IOrdersReader, OrdersReader>();
return services;
}
}
This is where DI becomes architecture enforcement. The module can keep its concrete types internal. Other modules depend on contracts.
That does not make boundaries perfect, but it makes violations more obvious.
Do not inject your way around bad boundaries
A dependency is not harmless just because it is injected.
This is still coupling:
public sealed class UsersController(
OrdersDbContext orders,
BillingDbContext billing,
ShippingDbContext shipping)
{
public Task<IActionResult> GetUserSummaryAsync(
int userId,
CancellationToken stopToken)
{
throw new NotImplementedException();
}
}
DI did not make this clean. It just made the coupling compile.
When you see dependencies crossing feature or module boundaries, ask what the consuming code actually needs.
It probably does not need another module’s DbContext. It needs a query, a command, a policy decision, or a snapshot.
Replace infrastructure dependencies with application contracts.
public interface IUserAccountSummaryReader
{
Task<UserAccountSummary?> GetAsync(
int userId,
CancellationToken stopToken);
}
That interface can compose data internally without leaking every persistence detail to the caller.
DI should make boundaries visible. It should not be used to tunnel through them.
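One way to implement that contract, sketched here with hypothetical per-module reader contracts and methods (`GetOrderCountAsync`, `GetBalanceAsync`), is to compose other modules' public readers rather than their DbContexts:

```csharp
// Hypothetical composition: the summary reader depends only on module
// contracts, never on another module's DbContext. Each module keeps
// its persistence details internal.
public sealed class UserAccountSummaryReader(
    IOrdersReader ordersReader,
    IBillingReader billingReader)
    : IUserAccountSummaryReader
{
    public async Task<UserAccountSummary?> GetAsync(
        int userId,
        CancellationToken stopToken)
    {
        var orderCount = await ordersReader.GetOrderCountAsync(
            userId, stopToken);
        var balance = await billingReader.GetBalanceAsync(
            userId, stopToken);

        return new UserAccountSummary(userId, orderCount, balance);
    }
}
```

The caller sees one cohesive summary. The composition across modules is a visible, reviewable decision instead of an accidental join across three DbContexts.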
A practical registration structure for real applications
For a medium-to-large .NET application, I like this shape:
var builder = WebApplication.CreateBuilder(args);
builder.Services
.AddPresentation()
.AddApplication()
.AddInfrastructure(builder.Configuration)
.AddModules(builder.Configuration);
var app = builder.Build();
app.UseExceptionHandler();
app.UseAuthentication();
app.UseAuthorization();
app.MapApiEndpoints();
app.Run();
Presentation registration contains controllers, minimal API endpoint helpers, filters, API behaviour, Swagger, authentication, and authorization.
public static class PresentationRegistration
{
public static IServiceCollection AddPresentation(
this IServiceCollection services)
{
services.AddProblemDetails();
services.AddEndpointsApiExplorer();
services.AddSwaggerGen();
services.AddAuthentication();
services.AddAuthorization();
return services;
}
}
Application registration contains handlers, validators, policies, domain services, and use-case orchestration.
public static class ApplicationRegistration
{
public static IServiceCollection AddApplication(
this IServiceCollection services)
{
services.AddSingleton<IClock, SystemClock>();
services.AddScoped<IPlaceOrderHandler, PlaceOrderHandler>();
services.AddScoped<ICancelOrderHandler, CancelOrderHandler>();
services.AddScoped<
IValidator<PlaceOrderCommand>,
PlaceOrderCommandValidator>();
return services;
}
}
Infrastructure registration contains databases, message brokers, HTTP clients, blob storage, email providers, options, and interceptors.
public static class InfrastructureRegistration
{
public static IServiceCollection AddInfrastructure(
this IServiceCollection services,
IConfiguration configuration)
{
services.AddOptions<PaymentGatewayOptions>()
.Bind(configuration.GetSection(
PaymentGatewayOptions.SectionName))
.ValidateDataAnnotations()
.ValidateOnStart();
services.AddDbContext<OrdersDbContext>((sp, options) =>
{
var connectionString = configuration
.GetConnectionString("Orders");
options.UseSqlServer(connectionString);
});
services.AddHttpClient<PaymentGatewayClient>((sp, client) =>
{
var options = sp
.GetRequiredService<IOptions<PaymentGatewayOptions>>()
.Value;
client.BaseAddress = new Uri(options.BaseUrl);
client.Timeout = TimeSpan.FromSeconds(
options.TimeoutSeconds);
});
return services;
}
}
Module registration composes feature areas.
public static class ModuleRegistration
{
public static IServiceCollection AddModules(
this IServiceCollection services,
IConfiguration configuration)
{
services.AddOrdersModule(configuration);
services.AddBillingModule(configuration);
services.AddNotificationsModule(configuration);
return services;
}
}
This is not the only valid structure. But it has one big advantage: when something is registered in the wrong place, it feels wrong.
That is what you want from architecture.
The hidden footguns
The first hidden footgun is injecting scoped services into singletons. This is the classic lifetime bug. It usually appears with DbContext, user context, request context, tenant context, or anything based on IHttpContextAccessor.
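The standard fix for that first footgun is to let the long-lived service create a scope per unit of work instead of capturing scoped dependencies in its constructor. A sketch, with a hypothetical `IOrderProcessor` scoped service:

```csharp
// A singleton background service must not inject scoped services
// directly. It injects IServiceScopeFactory and creates a scope
// per iteration, so each unit of work gets fresh scoped instances.
public sealed class OrderPollingService(
    IServiceScopeFactory scopeFactory)
    : BackgroundService
{
    protected override async Task ExecuteAsync(
        CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using var scope = scopeFactory.CreateScope();
            var processor = scope.ServiceProvider
                .GetRequiredService<IOrderProcessor>();

            await processor.ProcessPendingAsync(stoppingToken);
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```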
The second hidden footgun is injecting IServiceProvider into normal services. That hides dependencies and moves errors from startup to runtime.
The third hidden footgun is reading configuration directly from IConfiguration deep inside application code. That delays validation and spreads magic strings across the system.
The fourth hidden footgun is turning factories into service locators. A factory should model runtime creation or selection. It should not be a generic wrapper around GetRequiredService.
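The difference shows up in the factory's surface. A sketch, with hypothetical exporter types that are assumed to be registered as concrete services:

```csharp
// Bad: a generic wrapper around GetRequiredService. Callers can ask
// for anything, so this is a service locator wearing a factory name.
public sealed class AnythingFactory(IServiceProvider services)
{
    public T Create<T>() where T : notnull
        => services.GetRequiredService<T>();
}

// Better: the factory models exactly one runtime decision and
// exposes nothing else.
public sealed class ExporterFactory(IServiceProvider services)
{
    public IExporter Create(ExportFormat format) => format switch
    {
        ExportFormat.Csv => services.GetRequiredService<CsvExporter>(),
        ExportFormat.Json => services.GetRequiredService<JsonExporter>(),
        _ => throw new ArgumentOutOfRangeException(nameof(format))
    };
}
```

The second version still touches `IServiceProvider`, but its public contract is narrow: one input, one kind of output, one decision.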
The fifth hidden footgun is using singleton services to hold request-specific state. If a value differs by user, tenant, request, culture, or correlation ID, it probably does not belong in a singleton field.
The sixth hidden footgun is using DI to share mutable objects. A singleton cache, queue, or connection manager can be fine. A singleton List<T>, mutable options object, or stateful workflow object is usually asking for concurrency bugs.
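The safe version of shared singleton state is explicit about concurrency. A minimal sketch:

```csharp
// A singleton cache built on a concurrent collection is fine.
// A singleton List<T> with no synchronisation is not.
public sealed class ExchangeRateCache
{
    private readonly ConcurrentDictionary<string, decimal> _rates = new();

    public void Set(string currencyCode, decimal rate)
        => _rates[currencyCode] = rate;

    public bool TryGet(string currencyCode, out decimal rate)
        => _rates.TryGetValue(currencyCode, out rate);
}
```

If you cannot say in one sentence why a singleton's mutable state is thread-safe, it probably is not.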
The seventh hidden footgun is letting every module inject every other module’s internals. The container will allow it. Your architecture should not.
The eighth hidden footgun is over-abstracting everything. Not every class needs an interface. Interfaces are useful when you need substitution, boundaries, testing seams, or multiple implementations. Creating IFoo for every Foo is often just noise.
Primary constructors do not remove these problems. They make them more visible.
When should you replace the built-in container?
Most applications should not.
The built-in .NET container is good enough for the majority of ASP.NET Core apps, worker services, APIs, modular monoliths, and cloud services.
Consider a third-party container only when you have a real need for features the built-in container does not provide cleanly, such as advanced convention scanning, richer decorators, child containers, property injection for legacy code, or complex conditional registrations.
Even then, be honest. Sometimes the need for a more powerful container is a sign that your composition model has become too clever.
A boring DI setup is usually a good DI setup.
A senior engineer’s checklist for DI reviews
When reviewing a .NET dependency graph, do not start by asking whether the code uses DI. That bar is too low.
Ask whether the lifetimes match the behaviour. Ask whether singleton services are truly stateless or thread-safe. Ask whether scoped dependencies are contained within request scopes or manually created scopes. Ask whether options are validated at startup. Ask whether factories represent real runtime decisions. Ask whether modules expose contracts instead of internals. Ask whether constructor dependencies reveal a class that is doing too much.
Most importantly, ask whether the dependency graph tells the truth.
That is the real value of dependency injection.
Not testability by itself. Not interfaces everywhere. Not cleaner constructors. Not fashionable architecture.
A well-designed dependency graph shows what the system needs to do its work. A badly designed one hides decisions until runtime.
In small applications, you can get away with that. In serious systems, you eventually pay for every hidden dependency.
Sources
https://learn.microsoft.com/en-us/dotnet/csharp/whats-new/tutorials/primary-constructors
https://learn.microsoft.com/en-us/dotnet/core/extensions/dependency-injection/overview
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection
https://learn.microsoft.com/en-us/dotnet/core/extensions/httpclient-factory-keyed-di
https://learn.microsoft.com/en-us/dotnet/core/extensions/options
https://learn.microsoft.com/en-us/dotnet/core/extensions/scoped-service