Add Idempotency to a Distributed .NET System Without MediatR

Idempotency is not a MediatR feature. It's not a pipeline behaviour. It's not a middleware trick. In a distributed system, idempotency is a consistency guarantee around a side effect. That guarantee belongs close to the boundary where the side effect is created, backed by a durable store, protected by a unique constraint, and tied to the same transaction as the business change.
MediatR can be a convenient place to hang cross-cutting behaviour, but it should never be the reason idempotency works. The real protection comes from the database, the message broker contract, and the application service that owns the operation.
For a modern .NET system, the best design is usually this: use an Idempotency-Key at the HTTP edge, persist an idempotency record with a unique constraint, execute the business change and outbox write in the same transaction, return the stored response for safe retries, and use a separate inbox/processed-message table on message consumers. The Idempotency-Key header is useful for retrying unsafe HTTP methods such as POST and PATCH, but the header itself is only a protocol convention. MDN still marks it as experimental, and the durable guarantee comes from your server-side design, not from the header existing on the request.
The problem idempotency is actually solving
A distributed system does not fail cleanly. A client can send a request, your API can commit the database transaction, and the TCP connection can drop before the client receives the response. From the client’s point of view, the operation is unknown. From your system’s point of view, the operation already happened.
The dangerous retry is not the one that fails before doing anything. The dangerous retry is the one that succeeds twice.
HTTP gives you some natural idempotency for methods such as PUT and DELETE when they are designed properly. POST is different. POST /orders means "create a new thing". If the client repeats that request, the server has no way to know whether the client means "retry the same order creation" or "create another order" unless the client sends a stable operation identity.
That is the job of the idempotency key.
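To make the client side concrete, here is a minimal sketch of how a caller might attach the key. The endpoint URL and request shape are illustrative, not part of any real API; the important property is that one logical operation gets one key, generated once and reused on every retry.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;

public static class OrderClient
{
    // Build one attempt of a logical operation. The SAME idempotency key must
    // be attached to the first attempt and to every retry of that operation.
    public static HttpRequestMessage BuildCreateOrderRequest(
        Uri baseAddress, string idempotencyKey, object body)
    {
        var request = new HttpRequestMessage(
            HttpMethod.Post, new Uri(baseAddress, "/api/orders"))
        {
            Content = JsonContent.Create(body)
        };
        request.Headers.Add("Idempotency-Key", idempotencyKey);
        return request;
    }
}
```

The caller generates the key once, for example with `Guid.NewGuid().ToString("N")`, and reuses it whenever a timeout or dropped connection leaves the outcome unknown.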
The architecture
The architecture I would use in a serious .NET system looks like this.
The API accepts the key. The application service owns the use case. The idempotency table records whether this logical operation has already completed. The domain tables hold the actual business state. The outbox table records integration messages that must be published after the transaction commits. Consumers use an inbox table, also called a processed-message table, to make message handling safe under redelivery.
API idempotency and message idempotency are related, but they are not the same thing. API idempotency protects the command entering your system. Consumer idempotency protects each downstream side effect when messages are redelivered, duplicated, delayed, or replayed.
Azure Service Bus duplicate detection can help by dropping duplicate broker messages with the same MessageId during a configured detection window, but it should be treated as a useful broker feature, not a replacement for consumer-side idempotency. Microsoft’s own documentation describes duplicate detection as a time-windowed broker behaviour, and also notes that messages should still be designed to be safely reprocessed.
Don't put the whole thing in middleware
A common mistake is to build this as ASP.NET Core middleware that reads the request, checks Redis, runs the endpoint, captures the response, and stores it. That looks cool because it is generic. It is also often wrong.
Middleware doesn't understand the business operation. It doesn't know which database transaction matters. It does not know whether the endpoint created an order, scheduled a payment, sent an email, or published an event. It can cache HTTP responses, but it cannot safely guarantee that the side effect happened exactly once.
ASP.NET Core endpoint filters are useful for validation, request inspection, and cross-cutting endpoint logic. Microsoft’s documentation explicitly gives Minimal API filters as a way to run code before and after handlers, inspect parameters, and intercept response behaviour. That makes them a decent place to require an idempotency key, but not the best place to own the transaction.
The core idempotency decision should live in the application service that performs the use case.
The database table
Start with a proper idempotency table. Do not rely on a distributed cache as the source of truth. A cache can improve performance later, but the guarantee should be in the same durable store as the business write.
Here's a SQL Server version.
CREATE TABLE dbo.ApiIdempotencyRecords
(
Id BIGINT IDENTITY(1,1) NOT NULL CONSTRAINT PK_ApiIdempotencyRecords PRIMARY KEY,
Scope NVARCHAR(200) NOT NULL,
[Key] NVARCHAR(200) NOT NULL,
RequestHash CHAR(64) NOT NULL,
Status TINYINT NOT NULL,
ResponseStatusCode INT NULL,
ResponseContentType NVARCHAR(100) NULL,
ResponseBody NVARCHAR(MAX) NULL,
ResourceType NVARCHAR(100) NULL,
ResourceId NVARCHAR(100) NULL,
CreatedUtc DATETIME2 NOT NULL,
CompletedUtc DATETIME2 NULL,
ExpiresUtc DATETIME2 NOT NULL,
RowVersion ROWVERSION NOT NULL,
CONSTRAINT UQ_ApiIdempotencyRecords_Scope_Key UNIQUE (Scope, [Key])
);
CREATE INDEX IX_ApiIdempotencyRecords_ExpiresUtc ON dbo.ApiIdempotencyRecords (ExpiresUtc);
The unique constraint is the most important line in the whole design.
CONSTRAINT UQ_ApiIdempotencyRecords_Scope_Key UNIQUE (Scope, [Key])
Without that constraint, you have a convention. With that constraint, you have a guarantee.
The Scope prevents accidental key collision across different operations. A key for POST /orders should not collide with a key for POST /payments. In a multi-tenant system, include the tenant in the scope. In a user-scoped system, include the authenticated subject where appropriate.
The RequestHash prevents key reuse with a different payload. If the same key is used again with the same request, it is a retry. If the same key is used with a different request body, that is a client bug or abuse, and the API should return 409 Conflict.
The stored response lets you return the original result when the client retries after a lost response.
EF Core model
public enum IdempotencyStatus : byte { InProgress = 1, Completed = 2, Failed = 3 }
public sealed class ApiIdempotencyRecord
{
private ApiIdempotencyRecord() { }
public long Id { get; private set; }
public string Scope { get; private set; } = string.Empty;
public string Key { get; private set; } = string.Empty;
public string RequestHash { get; private set; } = string.Empty;
public IdempotencyStatus Status { get; private set; }
public int? ResponseStatusCode { get; private set; }
public string? ResponseContentType { get; private set; }
public string? ResponseBody { get; private set; }
public string? ResourceType { get; private set; }
public string? ResourceId { get; private set; }
public DateTimeOffset CreatedUtc { get; private set; }
public DateTimeOffset? CompletedUtc { get; private set; }
public DateTimeOffset ExpiresUtc { get; private set; }
public byte[] RowVersion { get; private set; } = [];
public static ApiIdempotencyRecord Start(
string scope,
string key,
string requestHash,
DateTimeOffset now,
TimeSpan ttl)
{
return new ApiIdempotencyRecord
{
Scope = scope,
Key = key,
RequestHash = requestHash,
Status = IdempotencyStatus.InProgress,
CreatedUtc = now,
ExpiresUtc = now.Add(ttl)
};
}
public void Complete(
int statusCode,
string contentType,
string responseBody,
string resourceType,
string resourceId,
DateTimeOffset now)
{
Status = IdempotencyStatus.Completed;
ResponseStatusCode = statusCode;
ResponseContentType = contentType;
ResponseBody = responseBody;
ResourceType = resourceType;
ResourceId = resourceId;
CompletedUtc = now;
}
}
internal sealed class ApiIdempotencyRecordConfiguration : IEntityTypeConfiguration<ApiIdempotencyRecord>
{
public void Configure(EntityTypeBuilder<ApiIdempotencyRecord> builder)
{
builder.ToTable("ApiIdempotencyRecords", "dbo");
builder.HasKey(x => x.Id);
builder.Property(x => x.Scope)
.HasMaxLength(200)
.IsRequired();
builder.Property(x => x.Key)
.HasMaxLength(200)
.IsRequired();
builder.Property(x => x.RequestHash)
.HasMaxLength(64)
.IsRequired()
.IsFixedLength();
builder.Property(x => x.Status)
.HasConversion<byte>()
.IsRequired();
builder.Property(x => x.ResponseContentType)
.HasMaxLength(100);
builder.Property(x => x.ResourceType)
.HasMaxLength(100);
builder.Property(x => x.ResourceId)
.HasMaxLength(100);
builder.Property(x => x.RowVersion)
.IsRowVersion();
builder.HasIndex(x => new { x.Scope, x.Key })
.IsUnique();
builder.HasIndex(x => x.ExpiresUtc);
}
}
EF Core supports optimistic concurrency through concurrency tokens, and SQL Server rowversion is the usual fit for this kind of record. EF Core also uses transactions for SaveChanges, and when an explicit transaction is already active it creates savepoints before saving, which matters when you are composing application-service logic with several persistence steps.
Request fingerprinting
The idempotency key alone is not enough. The same key must only be valid for the same logical request.
public static class RequestFingerprint {
private static readonly JsonSerializerOptions JsonOptions = new(JsonSerializerDefaults.Web) { WriteIndented = false };
public static string Create<TRequest>(
string method,
string route,
string tenantId,
TRequest request)
{
var canonical = JsonSerializer.Serialize(new
{
method = method.ToUpperInvariant(),
route,
tenantId,
body = request
}, JsonOptions);
var bytes = Encoding.UTF8.GetBytes(canonical);
var hash = SHA256.HashData(bytes);
return Convert.ToHexString(hash);
}
}
For high-value APIs, do not hash random raw JSON text. Hash a normalised command model. Two JSON payloads can be semantically identical but textually different because of whitespace or property order. If the API has already bound the request to a C# record, hashing the command representation is usually good enough.
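To illustrate why hashing the bound command model is more robust than hashing raw text, this sketch uses an assumed CreateOrderCommand record (illustrative, not part of the API above) and shows that two textually different JSON payloads produce the same fingerprint once they are bound:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

// Assumed command model for illustration only.
public sealed record CreateOrderCommand(string CustomerReference, int Quantity);

public static class CommandFingerprint
{
    private static readonly JsonSerializerOptions JsonOptions =
        new(JsonSerializerDefaults.Web) { WriteIndented = false };

    // Hash the bound model, not the raw request text. Whitespace and property
    // order on the wire disappear during binding, so semantically equivalent
    // payloads produce the same fingerprint.
    public static string Hash(CreateOrderCommand command)
    {
        var canonical = JsonSerializer.Serialize(command, JsonOptions);
        return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(canonical)));
    }
}
```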
Minimal API endpoint without MediatR
This is deliberately plain. The endpoint validates the protocol-level input and delegates to an application service.
app.MapPost("/api/orders", async (
CreateOrderRequest request,
HttpContext httpContext,
CreateOrderService service,
CancellationToken stopToken) =>
{
var idempotencyKey = httpContext.Request.Headers["Idempotency-Key"].ToString();
if (string.IsNullOrWhiteSpace(idempotencyKey))
{
return Results.Problem(
title: "Missing idempotency key",
detail: "Send an Idempotency-Key header for this operation.",
statusCode: StatusCodes.Status400BadRequest);
}
var tenantId = httpContext.User.FindFirst("tenant_id")?.Value;
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.Problem(
title: "Missing tenant",
statusCode: StatusCodes.Status403Forbidden);
}
var outcome = await service.CreateAsync(
tenantId,
idempotencyKey,
request,
stopToken);
return outcome.ToResult();
})
.WithName("CreateOrder");
A thin endpoint is fine. You do not need MediatR to keep this clean. You need a use-case class with a clear public method.
public sealed record CreateOrderRequest(
string CustomerReference,
IReadOnlyList<CreateOrderLineRequest> Lines);
public sealed record CreateOrderLineRequest(
string Sku,
int Quantity);
public sealed record CreateOrderResponse(
int OrderId,
string OrderNumber);
The application service
The service performs four jobs in one transaction. It reserves the idempotency key, creates the order, writes the outbox message, and stores the response snapshot.
public sealed class CreateOrderService {
private static readonly TimeSpan IdempotencyTtl = TimeSpan.FromHours(24);
private readonly OrdersDbContext _db;
private readonly TimeProvider _timeProvider;
public CreateOrderService(
OrdersDbContext db,
TimeProvider timeProvider)
{
_db = db;
_timeProvider = timeProvider;
}
public async Task<CreateOrderOutcome> CreateAsync(
string tenantId,
string idempotencyKey,
CreateOrderRequest request,
CancellationToken stopToken)
{
var scope = $"tenant:{tenantId}:orders:create";
var requestHash = RequestFingerprint.Create(
method: "POST",
route: "/api/orders",
tenantId: tenantId,
request: request);
await using var transaction = await _db.Database.BeginTransactionAsync(stopToken);
var now = _timeProvider.GetUtcNow();
var idempotencyRecord = ApiIdempotencyRecord.Start(
scope,
idempotencyKey,
requestHash,
now,
IdempotencyTtl);
_db.ApiIdempotencyRecords.Add(idempotencyRecord);
try
{
await _db.SaveChangesAsync(stopToken);
}
catch (DbUpdateException ex) when (ex.IsUniqueConstraintViolation())
{
await transaction.RollbackAsync(stopToken);
return await ReplayOrRejectAsync(
scope,
idempotencyKey,
requestHash,
stopToken);
}
var order = Order.Create(
tenantId: tenantId,
customerReference: request.CustomerReference,
lines: request.Lines.Select(x => new OrderLineInput(x.Sku, x.Quantity)).ToList());
_db.Orders.Add(order);
// Save the order before building the response so a database-generated
// order Id is available for the snapshot and the outbox payload.
await _db.SaveChangesAsync(stopToken);
var response = new CreateOrderResponse(
OrderId: order.Id,
OrderNumber: order.OrderNumber);
var responseBody = JsonSerializer.Serialize(response, JsonSerializerOptions.Web);
idempotencyRecord.Complete(
statusCode: StatusCodes.Status201Created,
contentType: "application/json",
responseBody: responseBody,
resourceType: "order",
resourceId: order.Id.ToString(CultureInfo.InvariantCulture),
now: _timeProvider.GetUtcNow());
_db.OutboxMessages.Add(OutboxMessage.From(
messageId: $"order-created:{order.Id}",
type: "OrderCreated",
payload: JsonSerializer.Serialize(new OrderCreatedIntegrationEvent(
order.Id,
order.OrderNumber,
tenantId)
))
);
await _db.SaveChangesAsync(stopToken);
await transaction.CommitAsync(stopToken);
return CreateOrderOutcome.Created(response);
}
private async Task<CreateOrderOutcome> ReplayOrRejectAsync(
string scope,
string idempotencyKey,
string requestHash,
CancellationToken stopToken)
{
var existing = await _db.ApiIdempotencyRecords
.AsNoTracking()
.SingleAsync(x => x.Scope == scope && x.Key == idempotencyKey, stopToken);
if (!StringComparer.Ordinal.Equals(existing.RequestHash, requestHash))
{
return CreateOrderOutcome.Conflict(
"The supplied idempotency key has already been used with a different request payload.");
}
if (existing.Status == IdempotencyStatus.Completed &&
existing.ResponseStatusCode is not null &&
existing.ResponseBody is not null)
{
return CreateOrderOutcome.Replayed(
existing.ResponseStatusCode.Value,
existing.ResponseContentType ?? "application/json",
existing.ResponseBody);
}
return CreateOrderOutcome.InProgress(
"A request with the same idempotency key is already being processed.");
}
}
The unique constraint turns concurrent duplicate requests into one winner and one replay. If two API instances receive the same request at the same time, both try to insert the same (Scope, Key). One succeeds. The other hits the database constraint and must inspect the existing record.
public static class DbUpdateExceptionExtensions
{
public static bool IsUniqueConstraintViolation(this DbUpdateException exception)
{
return exception.InnerException is SqlException sqlException &&
sqlException.Number is 2601 or 2627;
}
}
SQL Server error 2601 means a duplicate key row cannot be inserted into a unique index. Error 2627 means a unique constraint violation. For PostgreSQL, you would check for SQL state 23505 instead.
The result type
You do not need a framework result abstraction. A simple discriminated result style is enough.
public abstract record CreateOrderOutcome
{
public sealed record CreatedOutcome(CreateOrderResponse Response) : CreateOrderOutcome;
public sealed record ReplayedOutcome(
int StatusCode,
string ContentType,
string Body) : CreateOrderOutcome;
public sealed record ConflictOutcome(string Message) : CreateOrderOutcome;
public sealed record InProgressOutcome(string Message) : CreateOrderOutcome;
public static CreateOrderOutcome Created(CreateOrderResponse response)
{
return new CreatedOutcome(response);
}
public static CreateOrderOutcome Replayed(
int statusCode,
string contentType,
string body)
{
return new ReplayedOutcome(statusCode, contentType, body);
}
public static CreateOrderOutcome Conflict(string message)
{
return new ConflictOutcome(message);
}
public static CreateOrderOutcome InProgress(string message)
{
return new InProgressOutcome(message);
}
}
public static class CreateOrderOutcomeExtensions
{
public static IResult ToResult(this CreateOrderOutcome outcome)
{
return outcome switch
{
CreateOrderOutcome.CreatedOutcome created =>
Results.Created(
$"/api/orders/{created.Response.OrderId}",
created.Response),
CreateOrderOutcome.ReplayedOutcome replayed =>
Results.Text(
replayed.Body,
replayed.ContentType,
Encoding.UTF8,
replayed.StatusCode),
CreateOrderOutcome.ConflictOutcome conflict =>
Results.Problem(
title: "Idempotency key conflict",
detail: conflict.Message,
statusCode: StatusCodes.Status409Conflict),
CreateOrderOutcome.InProgressOutcome inProgress =>
Results.Problem(
title: "Request already in progress",
detail: inProgress.Message,
statusCode: StatusCodes.Status409Conflict),
_ => Results.Problem(statusCode: StatusCodes.Status500InternalServerError)
};
}
}
You can return 409 Conflict for an in-progress duplicate. Some APIs use 425 Too Early or 202 Accepted with a polling resource. I prefer 409 for synchronous command endpoints unless the API has an operation-status resource.
Why the outbox belongs in the same transaction
The outbox is not optional in a distributed system where the command creates state and publishes an event. This is the failure you are avoiding:
The order exists, but the event does not. Retrying the API request must not create a second order just to get another chance at publishing the event.
The fix is to write the integration event to an outbox table in the same database transaction as the order. A background publisher later sends it to the broker.
That means the API retry logic and the event publication recovery logic are separate. The API idempotency key prevents duplicate commands. The outbox prevents lost events.
When to use libraries for the outbox implementation
You do not have to hand-roll the outbox. In fact, if your system is already message-heavy, a library is often the better choice.
The important distinction is this:
The outbox pattern is the architectural guarantee.
The library is only the implementation.
The guarantee you need is simple to state: when your application changes business state and needs to publish a message, both facts must be recorded durably together. Either the business change and the outgoing message are both committed, or neither is committed. The actual publishing to the broker can happen afterwards.
A hand-rolled outbox gives you control and transparency. A library gives you tested infrastructure, retries, batching, duplicate detection, message storage, cleanup, and usually better operational tooling. The trade-off is dependency weight, framework coupling, and less control over the exact persistence model.
The main .NET options
For modern .NET systems, the serious options are usually MassTransit, NServiceBus, Wolverine, CAP, Brighter, or a small custom outbox.
MassTransit
MassTransit is a strong default if you are already using it for consumers, sagas, retries, RabbitMQ, Azure Service Bus, or broker abstraction. Its Entity Framework Core outbox adds inbox and outbox storage tables to your DbContext. The documented EF Core implementation uses InboxState, OutboxMessage, and OutboxState tables, and includes a hosted delivery service for bus outbox messages. It also supports both a bus outbox for messages published outside consumers and a consumer outbox for messages published while handling an incoming message.
Use MassTransit when your application already thinks in terms of consumers, messages, sagas, retries, and broker-backed workflows. Don't add it just to avoid writing a 100-line outbox table and publisher.
NServiceBus
NServiceBus is the enterprise-grade option. It is commercial, mature, and very strong when you are building a serious message-driven system with long-running workflows, retries, monitoring, operational tooling, and multiple endpoints. Its outbox is designed to keep business data and outgoing messages consistent without relying on distributed transactions. The docs are very explicit that the outbox stores outgoing messages in the same database transaction as business data, then dispatches them afterwards.
The big advantage is reliability and operational maturity. The downside is cost, conceptual weight, and platform commitment. If the system is genuinely message-driven, it can be worth it. If you only need to publish OrderCreated after saving an order, it is probably too much.
Wolverine
Wolverine is a good fit if you like a code-first, low-ceremony .NET messaging model and want tight integration with EF Core. Its EF Core support can apply transactional inbox/outbox mechanics inside message handlers or HTTP endpoints, which is interesting because it can cover both command handling and message handling paths. The docs note that Wolverine can use EF Core transactional middleware with HTTP endpoints and message handlers, and can persist outgoing messages in the same transaction as normal EF Core changes.
Use Wolverine when you want an integrated application framework for handlers, messaging, local queues, durable execution, and EF Core-backed reliability. Be more cautious if your team prefers very explicit ASP.NET Core services and does not want another application model.
CAP
CAP is a lighter event bus and outbox option. It uses a local message table with the application database to avoid losing event messages when services call each other. Its docs describe it as implementing the outbox pattern and providing a simpler publishing and subscription model without requiring your handlers to inherit from framework interfaces.
CAP can be a practical middle ground when you want an outbox-backed event bus but do not want the heavier mental model of NServiceBus or MassTransit. I would consider it for straightforward microservice integration where the team wants simple publish/subscribe semantics.
Brighter
Brighter is another option if you like command processor and pipeline-based architecture. Its documentation describes outbox and inbox support, and its SQL Server outbox package is positioned around reliable publishing with transactional consistency and guaranteed delivery.
Use Brighter when the command processor model fits your codebase. Do not pick it only because it has an outbox. The surrounding programming model matters.
Hand-rolled outbox
A custom outbox is still a good choice when your requirements are simple.
For example, if your API saves an aggregate and needs to publish one or two integration events afterwards, a hand-rolled table can be cleaner than adding a full messaging framework.
A simple version usually needs:
CREATE TABLE dbo.OutboxMessages
(
Id BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
MessageId NVARCHAR(200) NOT NULL,
Type NVARCHAR(300) NOT NULL,
Payload NVARCHAR(MAX) NOT NULL,
CreatedUtc DATETIME2 NOT NULL,
PublishedUtc DATETIME2 NULL,
PublishAttempts INT NOT NULL DEFAULT 0,
LastError NVARCHAR(MAX) NULL,
CONSTRAINT UQ_OutboxMessages_MessageId UNIQUE (MessageId)
);
Then the application service writes the business change and the outbox message in the same EF Core transaction:
await using var transaction = await db.Database.BeginTransactionAsync(stopToken);
db.Orders.Add(order);
db.OutboxMessages.Add(new OutboxMessage(
messageId: $"order-created:{order.Id}",
type: "OrderCreated",
payload: JsonSerializer.Serialize(orderCreated)));
await db.SaveChangesAsync(stopToken);
await transaction.CommitAsync(stopToken);
A background worker then polls unpublished messages, publishes them to the broker, and marks them as published.
That is not glamorous, but it is easy to understand and easy to debug.
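A single polling pass can be sketched roughly as follows. The PendingOutboxMessage shape mirrors the table above, but the in-memory list and the publish delegate are stand-ins for the real EF Core query and broker client:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public sealed class PendingOutboxMessage
{
    public required string MessageId { get; init; }
    public required string Type { get; init; }
    public required string Payload { get; init; }
    public DateTimeOffset? PublishedUtc { get; set; }
    public int PublishAttempts { get; set; }
}

public static class OutboxPublisher
{
    // One polling pass: load a batch of unpublished rows, publish each one,
    // and mark it published only after the broker send succeeds. A crash
    // between send and mark means the message is sent again on the next
    // pass, which is exactly why consumers must be idempotent.
    public static async Task<int> PublishPendingAsync(
        List<PendingOutboxMessage> store,
        Func<PendingOutboxMessage, Task> publishToBroker,
        int batchSize,
        CancellationToken ct)
    {
        var batch = store
            .Where(m => m.PublishedUtc is null)
            .Take(batchSize)
            .ToList();

        foreach (var message in batch)
        {
            ct.ThrowIfCancellationRequested();
            message.PublishAttempts++;
            await publishToBroker(message);
            message.PublishedUtc = DateTimeOffset.UtcNow;
        }

        return batch.Count;
    }
}
```

A real worker would run this on a timer inside a hosted service and persist the state changes back to dbo.OutboxMessages in its own transaction.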
How to choose
Use a library when messaging is central to the system. If you have many consumers, retries, delayed messages, sagas, workflows, dead-letter handling, broker abstraction, and operational monitoring, use MassTransit, NServiceBus, Wolverine, CAP, or Brighter. You will get more value from the library than from maintaining your own infrastructure.
Use a hand-rolled outbox when messaging is secondary. If you mainly have an ASP.NET Core API with EF Core and only need to publish a few integration events after successful commits, a custom outbox table and publisher is often the better fit.
The decision is not about whether libraries are better than custom code. The decision is about where your complexity lives.
If your complexity is in business workflows and message handling, use a library.
If your complexity is low and you want full control over persistence, diagnostics, and deployment, hand-roll the outbox.
What you should not do is skip the outbox entirely because the broker has retries. Broker retries do not solve the dual-write problem. The dual-write problem exists between your database commit and your message publish. That boundary needs an outbox, whether the implementation is a library or your own table.
Consumer idempotency
Your consumer must assume the same message can arrive more than once. This is true even when your broker usually behaves well. Network failures, lock-loss, redelivery, manual replay, dead-letter reprocessing, and operational repairs all produce duplicates.
Use a processed-message table.
CREATE TABLE dbo.ProcessedMessages (
Id BIGINT IDENTITY(1,1) NOT NULL CONSTRAINT PK_ProcessedMessages PRIMARY KEY,
ConsumerName NVARCHAR(200) NOT NULL,
MessageId NVARCHAR(200) NOT NULL,
ProcessedUtc DATETIME2 NOT NULL,
CONSTRAINT UQ_ProcessedMessages_ConsumerName_MessageId
UNIQUE (ConsumerName, MessageId)
);
Then make the insert part of the same transaction as the consumer side effect.
public sealed class OrderCreatedConsumer
{
private const string ConsumerName = "billing.order-created";
private readonly BillingDbContext _db;
private readonly TimeProvider _timeProvider;
public OrderCreatedConsumer(
BillingDbContext db,
TimeProvider timeProvider)
{
_db = db;
_timeProvider = timeProvider;
}
public async Task HandleAsync(
OrderCreatedIntegrationEvent message,
string messageId,
CancellationToken stopToken)
{
await using var transaction = await _db.Database.BeginTransactionAsync(stopToken);
_db.ProcessedMessages.Add(new ProcessedMessage(
ConsumerName,
messageId,
_timeProvider.GetUtcNow()));
try
{
await _db.SaveChangesAsync(stopToken);
}
catch (DbUpdateException ex) when (ex.IsUniqueConstraintViolation())
{
await transaction.RollbackAsync(stopToken);
return;
}
var invoice = Invoice.CreateForOrder(
orderId: message.OrderId,
tenantId: message.TenantId,
orderNumber: message.OrderNumber);
_db.Invoices.Add(invoice);
await _db.SaveChangesAsync(stopToken);
await transaction.CommitAsync(stopToken);
}
}
The key detail is that the processed-message insert and the side effect are committed together. If the consumer crashes before commit, the message can be retried and processed. If it crashes after commit but before acknowledging the broker message, the retry hits the unique constraint and exits safely.
Handling external APIs
Do not call external systems inside the same request transaction and pretend it is safe. You cannot include Stripe, SendGrid, a legacy SOAP API, or a third-party underwriting platform in your SQL transaction.
For external calls, prefer this pattern: accept the command idempotently, store your local state and outbox message transactionally, then let a worker perform the external call. The worker should also use an idempotency key if the external API supports one. If the external API does not support one, store your own attempt state and make the operation naturally convergent where possible.
For example, instead of "send email now inside the order endpoint", store OrderCreated, publish it via the outbox, and let a notification worker process it using its own ProcessedMessages table. If the email provider supports an idempotency key or custom message ID, use a stable value such as order-confirmation:{orderId}.
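A small helper can make the external key derivation explicit. The deterministic GUID variant is an assumption for providers that only accept UUID-shaped keys; it is not something the schema above mandates:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class ExternalIdempotency
{
    // A stable, human-readable key: every retry for the same order maps to
    // the same value, so the provider can deduplicate on its side.
    public static string ConfirmationKey(long orderId) =>
        $"order-confirmation:{orderId}";

    // Some providers only accept UUID-shaped keys. Deriving the GUID from
    // the stable string keeps retries colliding on the provider side.
    public static Guid ConfirmationKeyAsGuid(long orderId)
    {
        var hash = SHA256.HashData(Encoding.UTF8.GetBytes(ConfirmationKey(orderId)));
        return new Guid(hash.AsSpan(0, 16));
    }
}
```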
Expiry and cleanup
Idempotency records do not need to live forever. They need to live longer than the client’s retry window. For payment-like operations, keep them longer. For ordinary create commands, 24 hours or 7 days is often enough, depending on your clients and queues.
Cleanup should only remove completed or failed records that are past ExpiresUtc.
DELETE TOP (1000) FROM dbo.ApiIdempotencyRecords WHERE ExpiresUtc < SYSUTCDATETIME() AND Status IN (2, 3);
Run that as a scheduled job. Do not delete InProgress records too aggressively. If you support recovery from crashed in-progress operations, add a LockedUntilUtc or LastSeenUtc column and a clear operational policy.
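The scheduled job can loop over that statement in fixed-size batches. In this sketch the deleteBatch delegate stands in for executing the DELETE above against the database; batching keeps lock duration bounded on a large table:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class IdempotencyCleanup
{
    // Delete in fixed-size batches until a pass removes fewer rows than the
    // batch size. Small batches keep lock duration and log growth bounded
    // on a large ApiIdempotencyRecords table.
    public static async Task<int> RunAsync(
        Func<int, Task<int>> deleteBatch,
        int batchSize,
        CancellationToken ct)
    {
        var totalDeleted = 0;
        while (!ct.IsCancellationRequested)
        {
            var deleted = await deleteBatch(batchSize);
            totalDeleted += deleted;
            if (deleted < batchSize)
            {
                break; // the table is drained for this run
            }
        }
        return totalDeleted;
    }
}
```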
When Redis is acceptable
Redis is acceptable as an optimisation, not as the primary guarantee for business-critical writes.
A good Redis use case is caching completed idempotency responses after the database transaction commits. A poor Redis use case is using SETNX as the only thing preventing duplicate payments, duplicate orders, or duplicate policy issuance. Redis can be part of a serious design, but then you must be very clear about persistence, failover, eviction, backup, and what happens during Redis unavailability.
For most .NET business systems, the boring SQL unique constraint is the better default.
What not to do
Do not generate the idempotency key on the server for a POST request. The whole point is that the client can retry the same logical operation with the same key after an unknown result.
Do not use a timestamp as the key. Use a UUID, ULID, or another high-entropy unique value generated per logical operation.
Do not allow the same key to be used with a different request body.
Do not store only the key. Store the request hash, status, response snapshot, timestamps, and scope.
Do not rely only on Azure Service Bus duplicate detection, Kafka compaction, RabbitMQ deduplication plugins, or any broker feature. Broker deduplication and consumer idempotency solve different parts of the problem.
Do not put every idempotency decision in a generic middleware layer. Middleware can enforce the presence of a key. It should not pretend to own the business transaction.
The clean .NET shape
The clean version is not complicated.
This is the important design point: the use case owns the transaction. The endpoint owns HTTP concerns. The database owns uniqueness. The outbox owns reliable publication. The consumer inbox owns redelivery safety.
That is a better design than hiding the whole thing inside MediatR.
MediatR would only give you a convenient interception point. It would not give you the idempotency guarantee. If your transaction boundary, unique key, outbox, and consumer inbox are wrong, a MediatR behaviour will not save you. If those pieces are right, you do not need MediatR at all.
For a modern .NET distributed system, build idempotency as a first-class application boundary.
Use Idempotency-Key on unsafe HTTP operations. Validate it at the API edge. Compute a request fingerprint. Insert an idempotency record with a unique (Scope, Key) constraint. Execute the domain write, outbox insert, and response snapshot in the same EF Core transaction. On retry, return the stored response if the payload matches, reject the request if the payload differs, and report a safe conflict if the first request is still in progress.
Then apply the same thinking to messaging. Every consumer that performs a side effect should have a processed-message table with a unique (ConsumerName, MessageId) constraint. Broker duplicate detection is useful, but the durable consumer guarantee belongs in your data model.
Sources
https://wolverinefx.net/guide/durability/efcore/outbox-and-inbox.html
https://docs.particular.net/nservicebus/outbox/
https://masstransit.massient.com/configuration/middleware/outbox