
Async locking in C#

what actually works & what quietly breaks


If you write C# and use async and await seriously, you will eventually run into locking problems that feel unfamiliar. The rules you learned around lock, Monitor, and critical sections start to fall apart once continuations, thread hopping, and cooperative scheduling enter the scene. For mid-level engineers and above, it's important to understand why classical locking does not translate cleanly to asynchronous code, what the real failures look like in production systems, and how to apply the correct async-safe patterns without turning your codebase into a coordination nightmare.

I'm not going to repeat the usual advice like “don't use lock with async”. Instead, we'll break down why, then work through practical patterns that hold up under load.

The mental model shift async requires

Traditional locking in C# assumes a simple invariant.

A thread enters a critical section.
That thread leaves the critical section.
The runtime enforces mutual exclusion in between.

Async code violates the first assumption.

When execution hits an await, you are no longer in control of which thread executes the remainder of the method. The continuation might resume on a different worker thread, later, or not at all if the operation is cancelled or faults.

This immediately breaks the idea of thread ownership.

A lock does not protect logical execution. It protects a thread-bound region. Async code is not thread-bound.

This is the root issue everything else builds on.

Why lock + await is fundamentally broken

Take the naïve example everyone eventually tries.

lock (_sync)
{
    await DoWorkAsync(); // error CS1996: cannot await in the body of a lock statement
}

The compiler rejects this, which is good. But understanding why matters.

A lock statement expands into Monitor.Enter and Monitor.Exit, and the runtime requires that pairing to happen on the same thread.

If the method could suspend at an await inside the lock, control would return to the caller and the thread would go back to the pool while the monitor was still held. The continuation might then resume on a different thread.

There is no legal way for the runtime to re-associate the original lock with a different thread. That is why the compiler blocks this construct entirely.

If this were allowed, you would deadlock entire thread pools all the time.

So the rule is deeper than “you aren’t allowed to do this”. The rule is that thread-based mutual exclusion and asynchronous execution live at different layers of the abstraction stack.

The real problem you are actually trying to solve

Most people think they need a lock.

In reality, they usually need one of three things:

  1. Exclusive access to a resource across async boundaries

  2. Ordering guarantees between asynchronous operations

  3. Protection against concurrent mutation of shared state

lock only solves #3, and only in synchronous code.

Async locking is about solving these problems without blocking threads.

Blocking is expensive. Thread pool starvation is real. Async scalability depends on allowing threads to return to the pool whenever work is waiting on I/O.

So any solution that blocks threads to maintain exclusivity defeats the reason you used async in the first place.

SemaphoreSlim - the async primitive

The most widely usable async-compatible locking primitive in the BCL is SemaphoreSlim.

Not because it is perfect, but because it meets two critical requirements.

It can be awaited asynchronously.
It does not depend on thread identity.

At its simplest, a semaphore with an initial and maximum count of 1 behaves like a mutex. One caveat: unlike lock, it is not reentrant, so a method that re-acquires it while already holding it deadlocks itself.

private readonly SemaphoreSlim _mutex = new(1, 1);

public async Task UpdateAsync()
{
    await _mutex.WaitAsync();
    try
    {
        await DoWorkAsync();
    }
    finally
    {
        _mutex.Release();
    }
}

This works because WaitAsync does not block a thread. It returns a Task that completes when the semaphore becomes available.

Ownership is logical, not thread affine.
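That logical ownership is easy to demonstrate. In this minimal sketch (top-level statements, .NET 6+), the semaphore is acquired on one thread and released from a Task.Run callback that typically runs on a different pool thread, something Monitor.Exit would reject with a SynchronizationLockException:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var sem = new SemaphoreSlim(1, 1);

await sem.WaitAsync();                 // acquired on the current thread
await Task.Run(() => sem.Release());   // released from a pool thread — perfectly legal

Console.WriteLine(sem.CurrentCount);   // 1: the slot has been returned
```

No thread identity is recorded anywhere; whoever holds the logical slot may return it.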

Why SemaphoreSlim is still easy to misuse

Although SemaphoreSlim is async safe, that does not mean it is idiot-proof.

Forgotten Release

Unlike lock, the compiler cannot protect you here. If control flow exits early or an exception escapes without hitting Release, you have a permanent leak.

This is semantically closer to manually managing file handles than using a lock block.

Cancellation edge cases

If you pass a CancellationToken to WaitAsync and cancellation wins the race, WaitAsync throws OperationCanceledException and the semaphore is not acquired. If your code then releases anyway, typically because the wait sits inside the try block whose finally always calls Release, you over-release: the count is corrupted, or Release throws SemaphoreFullException.

This is rare, but in high-throughput or fault-heavy systems, it happens.
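Cancellation interacts badly with the usual try/finally shape. In this sketch (method names are mine), the broken version releases a slot it never acquired when the wait is cancelled; the correct version only enters the try/finally once it owns the slot:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var mutex = new SemaphoreSlim(1, 1);

// WRONG: if WaitAsync is cancelled, the finally still runs
// and releases a slot that was never acquired.
async Task BrokenAsync(CancellationToken ct)
{
    try
    {
        await mutex.WaitAsync(ct);   // may throw OperationCanceledException
        // ... critical section ...
    }
    finally
    {
        mutex.Release();             // over-release on cancellation
    }
}

// RIGHT: acquire first; cancellation before acquisition leaves the count intact.
async Task CorrectAsync(CancellationToken ct)
{
    await mutex.WaitAsync(ct);
    try
    {
        // ... critical section ...
    }
    finally
    {
        mutex.Release();
    }
}
```

The only difference is whether WaitAsync sits inside or outside the try, which is exactly why this bug survives code review.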

Over-serialisation

Using a single semaphore for a logically partitionable resource can crater your throughput without showing any obvious bug symptoms.

If your protected state can be sharded, it should be.
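One hedged sketch of sharding: a fixed stripe of semaphores indexed by a key's hash. The type name and stripe count of 16 are my own choices, not a library API:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Keys that hash to different stripes never contend with each other.
// Memory is bounded by the stripe count, unlike per-key dictionaries.
public sealed class StripedAsyncLock
{
    private readonly SemaphoreSlim[] _stripes;

    public StripedAsyncLock(int stripeCount = 16)
    {
        _stripes = new SemaphoreSlim[stripeCount];
        for (var i = 0; i < stripeCount; i++)
            _stripes[i] = new SemaphoreSlim(1, 1);
    }

    private SemaphoreSlim For(string key) =>
        _stripes[(key.GetHashCode() & int.MaxValue) % _stripes.Length];

    public Task WaitAsync(string key) => For(key).WaitAsync();
    public void Release(string key) => For(key).Release();
}
```

The trade-off is false contention: two unrelated keys in the same stripe serialise against each other, which is usually acceptable when the stripe count comfortably exceeds typical concurrency.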

The disposable lock pattern

To reduce the surface area for mistakes, many people encapsulate async locking in a disposable abstraction.

public sealed class AsyncLock
{
    private readonly SemaphoreSlim _semaphore = new(1, 1);

    public async Task<IDisposable> LockAsync()
    {
        await _semaphore.WaitAsync();
        return new Releaser(_semaphore);
    }

    private sealed class Releaser : IDisposable
    {
        private readonly SemaphoreSlim _semaphore;

        public Releaser(SemaphoreSlim semaphore)
        {
            _semaphore = semaphore;
        }

        public void Dispose()
        {
            _semaphore.Release();
        }
    }
}

The usage becomes structurally similar to lock.

using (await _asyncLock.LockAsync())
{
    await DoWorkAsync();
}

This pattern improves correctness dramatically by making the release deterministic via IDisposable.

One subtle point: Dispose here is synchronous. That is acceptable because releasing a semaphore is a synchronous, non-blocking operation.

Why async locks should protect as little as possible

Async locks are cheaper than blocking locks, but they are not free.

Every awaited wait adds allocation pressure and scheduling overhead. Worse still, long critical sections increase tail latency non-linearly.

A good async locking rule of thumb is this:

Protect state mutation, not work.

This pattern is bad:

await _mutex.WaitAsync();
try
{
    await CallExternalApiAsync();
    UpdateSharedState();
}
finally
{
    _mutex.Release();
}

This is better:

var result = await CallExternalApiAsync();

await _mutex.WaitAsync();
try
{
    UpdateSharedState(result);
}
finally
{
    _mutex.Release();
}

Hold the lock only while touching shared state. Everything else should happen outside.

Async locks are about coordination, not safety

This distinction is important.

Async locks do not protect you from unsafe code, torn writes, or low-level memory visibility issues. The C# memory model still applies.

Async locks coordinate logical concurrency, not instruction-level execution.

If you are working with low-level mutable structures, you still need to think in terms of memory barriers and atomic operations.

Most application code does not need this. Infrastructure code often does.

ConcurrentDictionary does not eliminate the need for async locking

A common misconception is that thread-safe collections remove the need for async coordination.

They don't.

They prevent corruption of the data structure itself.

This is not atomic:

if (!_dict.ContainsKey(key))
{
    var value = await BuildValueAsync();
    _dict[key] = value;
}

Two concurrent callers can both observe the key as missing and both build the value.

Async locking is often about preventing duplicated work, not just preventing corruption.
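One common way to close that gap without a global lock is to cache Lazy&lt;Task&lt;TValue&gt;&gt; instead of TValue. GetOrAdd may race and build several Lazy wrappers, but only the one that wins publication ever has its factory started, so all callers await the same task. A hedged sketch (the AsyncCache name is mine):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class AsyncCache<TKey, TValue> where TKey : notnull
{
    private readonly ConcurrentDictionary<TKey, Lazy<Task<TValue>>> _cache = new();

    public Task<TValue> GetOrAddAsync(TKey key, Func<TKey, Task<TValue>> factory)
    {
        // GetOrAdd may construct more than one Lazy under a race,
        // but losers are discarded before .Value is ever touched,
        // so the factory runs at most once per key.
        var lazy = _cache.GetOrAdd(key, k => new Lazy<Task<TValue>>(() => factory(k)));
        return lazy.Value;
    }
}
```

One caveat: a faulted task stays cached, so if the factory can fail transiently you should evict the entry on failure rather than serving the cached exception forever.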

The “async lock per key” pattern

In high throughput systems, global locks are scalability killers.

A common refinement is per key locking.

private readonly ConcurrentDictionary<string, AsyncLock> _locks = new();

public async Task ProcessAsync(string key)
{
    var asyncLock = _locks.GetOrAdd(key, _ => new AsyncLock());

    using (await asyncLock.LockAsync())
    {
        await DoWorkAsync(key);
    }
}

You still need an eviction strategy, otherwise the dictionary grows forever. That is a design problem, not a syntax problem.
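One hedged sketch of such an eviction strategy: reference-count each per-key entry and remove it from the dictionary when the last holder releases. All type and member names here are mine, and this is a design sketch rather than production code:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public sealed class KeyedAsyncLock
{
    private sealed class Entry
    {
        public readonly SemaphoreSlim Semaphore = new(1, 1);
        public int RefCount;                    // -1 means "being evicted"
    }

    private readonly ConcurrentDictionary<string, Entry> _entries = new();

    public int ActiveEntryCount => _entries.Count;

    public async Task<IDisposable> LockAsync(string key)
    {
        while (true)
        {
            var entry = _entries.GetOrAdd(key, _ => new Entry());
            lock (entry)
            {
                if (entry.RefCount < 0) continue;   // lost a race with eviction; retry
                entry.RefCount++;                   // pin before waiting
            }
            await entry.Semaphore.WaitAsync();
            return new Releaser(this, key, entry);
        }
    }

    private void Release(string key, Entry entry)
    {
        entry.Semaphore.Release();
        lock (entry)
        {
            if (--entry.RefCount == 0)
            {
                entry.RefCount = -1;                // block late joiners
                _entries.TryRemove(key, out _);
            }
        }
    }

    private sealed class Releaser : IDisposable
    {
        private readonly KeyedAsyncLock _owner;
        private readonly string _key;
        private readonly Entry _entry;
        public Releaser(KeyedAsyncLock owner, string key, Entry entry)
            => (_owner, _key, _entry) = (owner, key, entry);
        public void Dispose() => _owner.Release(_key, _entry);
    }
}
```

Pinning the entry before waiting is the important move: it prevents a concurrent releaser from evicting the entry while a new caller is still queued on its semaphore.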

Async locking and database code

One of the most common async locking mistakes is trying to serialise database operations in memory.

This is often a smell.

Databases already implement concurrency control. If you find yourself protecting database writes with in process async locks, ask yourself why.

Valid reasons exist, such as enforcing application level invariants or throttling external side effects. But locking to “avoid race conditions” usually means the invariant belongs in the database via constraints or transactions.

Async locks should sit above persistence, not try to re-implement it.

When async locking is the wrong solution entirely

There are problems async locks cannot solve cleanly.

Ordering problems

If operations must occur in a strict sequence, queues or channels are a better fit.

Backpressure

A lock does not apply backpressure. It just queues waiters. If load spikes, waiters accumulate, latency explodes, and you still process everything.

Bounded channels or rate limiters are usually better here.
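For both ordering and backpressure, System.Threading.Channels gives you a bounded producer/consumer pipe: writers asynchronously wait when the buffer is full, and a single reader loop processes items strictly in arrival order with no lock at all. A minimal sketch (capacity and counts are arbitrary):

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(capacity: 100)
{
    FullMode = BoundedChannelFullMode.Wait   // producers await instead of piling up
});

long sum = 0;

// Single consumer: strict ordering, one item at a time, no lock required.
var consumer = Task.Run(async () =>
{
    await foreach (var item in channel.Reader.ReadAllAsync())
        sum += item;
});

// Producers feel backpressure whenever the 100-item buffer is full.
for (var i = 0; i < 1000; i++)
    await channel.Writer.WriteAsync(i);

channel.Writer.Complete();
await consumer;

Console.WriteLine(sum);                      // 499500
```

Unlike a lock, the bounded capacity makes overload visible at the producer, which is where you can actually shed or defer work.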

Cross process coordination

Async locks are in-process only. They do not coordinate across instances, containers, or machines.

If you need distributed locking, you are in a different design space altogether.

Testing async locking behaviour

Unit tests rarely surface locking bugs. You need stress.

The simplest test is not clever.

Spawn many concurrent tasks.
Hammer the code.
Assert invariants hold.

Example.

await Task.WhenAll(
    Enumerable.Range(0, 1000)
        .Select(_ => UpdateAsync())
);

Then run it repeatedly.

Async locking bugs are statistical. They fail under load, not under logic inspection.
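A concrete version of that loop, with the invariant spelled out (the counter and iteration count are arbitrary): each task reads, yields mid-critical-section to force a continuation, then writes back. If the semaphore holds, the final count is exact; remove the lock and the lost updates appear under load:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var mutex = new SemaphoreSlim(1, 1);
var counter = 0;   // deliberately not Interlocked: the semaphore is the only protection

async Task UpdateAsync()
{
    await mutex.WaitAsync();
    try
    {
        var read = counter;
        await Task.Yield();    // force a continuation inside the critical section
        counter = read + 1;
    }
    finally
    {
        mutex.Release();
    }
}

await Task.WhenAll(Enumerable.Range(0, 1000).Select(_ => UpdateAsync()));

Console.WriteLine(counter);    // 1000 on every run if the lock holds
```

The Task.Yield is what makes the test honest: it guarantees the read and the write are separated by a real suspension point, which is where unprotected async code loses updates.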

A practical decision checklist

When you feel the urge to add a lock to async code, pause and ask yourself:

Is this protecting shared mutable state, or work?
Can the state be isolated or partitioned instead?
Can the invariant live in the database or downstream system?
Does this need ordering, or just mutual exclusion?
What happens to throughput if contention increases tenfold?

If you cannot answer these, adding a lock will only move the problem around.

Async locking is not a replacement for lock. It is a different tool with different trade-offs.

Used correctly, async locks allow high-throughput, scalable coordination without blocking threads.

Used lazily, they hide architectural problems until load turns them into outages.