AsyncLock Class
A pooled async mutual exclusion lock for coordinating access to shared resources.
Namespace
CryptoHives.Foundation.Threading.Async.Pooled
Syntax
public sealed class AsyncLock : IResettable
Overview
AsyncLock provides async mutual exclusion, similar to SemaphoreSlim(1,1) but optimized for the common async locking pattern. It returns a small value-type releaser that implements IDisposable/IAsyncDisposable so the lock can be released with a using pattern. The implementation uses pooled IValueTaskSource instances to minimize allocations in high-throughput scenarios and a local reusable waiter to avoid allocations for the first queued waiter.
Benefits
- Zero-allocation fast path: When the lock is uncontended the operation completes synchronously without heap allocations.
- Pooled Task Sources: Reuses `IValueTaskSource<Releaser>` instances from an object pool when waiters are queued.
- ValueTask-Based: Returns `ValueTask<Releaser>` for minimal allocation when the lock is available.
- RAII Pattern: Uses disposable lock handles for automatic release.
- Cancellation Support (optimized): Supports `CancellationToken` for queued waiters; on .NET 6+ registration uses `UnsafeRegister` with a static delegate to reduce execution-context capture and per-registration overhead.
- High Performance: Optimized for both uncontended and contended scenarios while keeping allocations low.
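The benefits above combine in the typical usage pattern: acquire with `LockAsync`, release by disposing the returned releaser. A minimal sketch (the surrounding `Counter` class and its `_counter` field are illustrative, not part of the library):

```csharp
public sealed class Counter
{
    private readonly AsyncLock _lock = new AsyncLock();
    private int _counter;

    public async Task<int> IncrementAsync()
    {
        // Uncontended: LockAsync completes synchronously with no allocation.
        // Disposing the releaser at the end of the using block hands the
        // lock to the next queued waiter.
        using (await _lock.LockAsync())
        {
            return ++_counter; // critical section
        }
    }
}
```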
Constructor
public AsyncLock(
IGetPooledManualResetValueTaskSource<Releaser>? pool = null)
| Parameter | Description |
|---|---|
| `pool` | Optional custom pool for `ValueTaskSource` instances. |

Note: Unlike other primitives in this library, `AsyncLock` always runs continuations asynchronously (hardcoded to `true`). This prevents potential deadlocks in common lock usage patterns.
Properties
| Property | Type | Description |
|---|---|---|
| `IsTaken` | `bool` | Gets whether the lock is currently held by a caller or a queued handoff. |
Methods
LockAsync
public ValueTask<Releaser> LockAsync(CancellationToken cancellationToken = default)
Asynchronously acquires the lock. Returns a disposable that releases the lock when disposed.
Parameters:
`cancellationToken` - Optional cancellation token; only observed if the lock cannot be acquired immediately.
Returns: A ValueTask<Releaser> that completes when the lock is acquired. Dispose the result to release the lock.
Notes on allocations and cancellation:
- The fast path (uncontended) completes synchronously and performs no heap allocations.
- The implementation maintains a local waiter instance that serves the first queued waiter without allocating. Subsequent waiters use instances obtained from the configured object pool; if the pool is exhausted a new instance is allocated.
- Passing a `CancellationToken` will register a callback when the waiter is queued. On .NET 6+ the code uses `UnsafeRegister` together with a static delegate and a small struct context to minimize capture and reduce allocation/ExecutionContext overhead. Even so, cancellation registrations and creating `Task` objects for pre-cancelled tokens may allocate; prefer avoiding cancellation tokens unless the scenario requires them.
Throws:
`OperationCanceledException` - If the operation is canceled via the cancellation token.
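A hedged sketch of the cancellation behavior described above (the `_lock` field and `DoUpdate` helper are illustrative assumptions): a waiter that is still queued when the token fires observes `OperationCanceledException`, while a caller that already holds the lock is unaffected.

```csharp
public async Task<bool> TryUpdateAsync(CancellationToken cancellationToken)
{
    try
    {
        // The token is only observed if the lock cannot be taken immediately.
        using (await _lock.LockAsync(cancellationToken))
        {
            DoUpdate(); // critical section
            return true;
        }
    }
    catch (OperationCanceledException)
    {
        // The waiter was canceled before the lock was handed over.
        return false;
    }
}
```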
TryReset
public bool TryReset()
Implements IResettable to allow returning this instance to a DefaultObjectPool<AsyncLock>.
Behavior:
- Attempts to acquire the internal spin lock. If the spin lock is already held by a concurrent operation, the method returns `false` immediately and the pool discards the instance.
- If the spin lock is acquired but the logical lock is currently held (`IsTaken == true`) or waiters are queued, the method returns `false`; the instance is still in active use and must not be recycled.
- Otherwise the local waiter is reset and the method returns `true`.
Thread Safety: TryReset() is safe to call concurrently with other operations. It will simply return false if the instance is in use.
Example:
// Using AsyncLock with an object pool
var pool = new DefaultObjectPool<AsyncLock>(
new DefaultPooledObjectPolicy<AsyncLock>());
var lk = pool.Get();
try
{
using (await lk.LockAsync())
{
// critical section
}
}
finally
{
pool.Return(lk); // calls TryReset() internally
}
Thread Safety
Thread-safe. All public methods are thread-safe and can be called concurrently from multiple threads.
Performance Characteristics
- Uncontended Lock: O(1), synchronous completion (no allocation)
- Contended Lock: O(1) to enqueue waiter; waiter instances are reused from the object pool (allocation only if pool is exhausted)
- Lock Release: O(1) to signal next waiter
- Memory: Minimal allocations due to pooled task sources and local waiter reuse
Benchmark Results
The benchmarks compare various AsyncLock implementations:
- PooledAsyncLock: The pooled implementation from this library
- ProtoPromiseAsyncLock: The implementation from the Proto.Promises.Threading library
- RefImplAsyncLock: The reference implementation from Stephen Toub's blog, which does not support cancellation tokens
- NitoAsyncLock: The implementation from the Nito.AsyncEx library
- NeoSmartAsyncLock: The implementation from the NeoSmart.AsyncLock library
- AsyncNonKeyedLocker: An implementation from the AsyncKeyedLock.AsyncNonKeyedLocker library which uses SemaphoreSlim internally
- SemaphoreSlim: The .NET built-in synchronization primitive
- VS.Threading AsyncSemaphore: The Microsoft.VisualStudio.Threading semaphore used as a lock-compatible baseline
Single Lock Benchmark
This benchmark measures the performance of acquiring and releasing a single lock in an uncontended scenario.
To understand the impact of moving from a `lock` or `Interlocked`-based implementation to an async lock, `Interlocked.Increment`, `lock`, and the .NET 9 `Lock` with `EnterScope()` are also measured, with an integer increment as the workload.
The benchmark shows both throughput (operations per second) and allocations per operation. ProtoPromise is currently a strong uncontended competitor and can beat the pooled implementation on raw throughput, while the pooled implementation stays allocation-free and keeps the same API shape and cancellation behavior used throughout this library. VS.Threading is also included as a semaphore-based comparison point, but in the published uncontended results it trails both ProtoPromise and the pooled implementation.
The new .NET 9 `Lock` primitive shows slightly better performance than the well-known `lock` on an object, but AsyncLock remains competitive due to its fast path built on `Interlocked`-based state.
| Description | Mean | Ratio | Allocated |
|---|---|---|---|
| Lock · Increment · System | 0.0010 ns | 0.000 | - |
| Lock · Interlocked.Add · System | 0.1764 ns | 0.021 | - |
| Lock · Interlocked.Inc · System | 0.1925 ns | 0.023 | - |
| Lock · Interlocked.Exchange · System | 0.5166 ns | 0.062 | - |
| Lock · Interlocked.CmpX · System | 0.8476 ns | 0.102 | - |
| Lock · Lock · System | 3.0566 ns | 0.368 | - |
| Lock · Lock.EnterScope · System | 3.1459 ns | 0.379 | - |
| SpinLock · SpinLock · CryptoHives | 3.2782 ns | 0.395 | - |
| Lock · lock() · System | 4.0113 ns | 0.483 | - |
| LockAsync · AsyncLock · ProtoPromise | 7.0620 ns | 0.851 | - |
| LockAsync · AsyncLock · Pooled | 8.3011 ns | 1.000 | - |
| LockAsync · AsyncSemaphore · VS.Threading | 15.7214 ns | 1.894 | - |
| LockAsync · SemaphoreSlim · System | 16.4273 ns | 1.979 | - |
| LockAsync · AsyncLock · RefImpl | 17.8237 ns | 2.147 | - |
| LockAsync · AsyncLock · NonKeyed | 19.4962 ns | 2.349 | - |
| LockAsync · AsyncLock · Nito.AsyncEx | 37.5934 ns | 4.529 | 320 B |
| SpinWait · SpinOnce · System | 41.3498 ns | 4.981 | - |
| SpinLock · SpinLock · System | 44.9287 ns | 5.412 | - |
| LockAsync · AsyncLock · NeoSmart | 55.3936 ns | 6.673 | 208 B |
Multiple Concurrent Lock Benchmark
This benchmark measures performance under contention with multiple concurrent lock requests (iterations).
The benchmark shows both throughput (operations per second) and allocations per operation. Zero iterations duplicates the uncontended scenario.
Notably, all implementations except the pooled one and ProtoPromise allocate memory under contention, as long as the `ValueTask` is not converted to a `Task`.
ProtoPromise is particularly competitive here and can outperform the pooled AsyncLock in several low- and mid-contention cases, especially when comparing pure throughput. The pooled implementation still distinguishes itself by combining allocation-free ValueTask usage with built-in cancellation support and predictable behavior when integrated with the rest of this library. VS.Threading is included as another real-world baseline, but its semaphore-based path is slower and allocates under contention in the published results.
| Description | Iterations | cancellationType | Mean | Ratio | Allocated |
|---|---|---|---|---|---|
| Multiple · AsyncLock · Pooled (ValueTask) | 0 | None | 9.944 ns | 1.00 | - |
| Multiple · AsyncLock · Pooled (Task) | 0 | None | 10.279 ns | 1.03 | - |
| Multiple · AsyncLock · ProtoPromise | 0 | None | 11.144 ns | 1.12 | - |
| Multiple · SemaphoreSlim · System | 0 | None | 17.640 ns | 1.77 | - |
| Multiple · AsyncSemaphore · VS.Threading | 0 | None | 18.594 ns | 1.87 | - |
| Multiple · AsyncLock · RefImpl | 0 | None | 19.394 ns | 1.95 | - |
| Multiple · AsyncLock · NonKeyed | 0 | None | 20.600 ns | 2.07 | - |
| Multiple · AsyncLock · Nito | 0 | None | 39.078 ns | 3.93 | 320 B |
| Multiple · AsyncLock · NeoSmart | 0 | None | 58.457 ns | 5.88 | 208 B |
| Multiple · AsyncLock · Pooled (ValueTask) | 0 | NotCancelled | 10.323 ns | 1.00 | - |
| Multiple · AsyncLock · Pooled (Task) | 0 | NotCancelled | 10.642 ns | 1.03 | - |
| Multiple · AsyncLock · ProtoPromise | 0 | NotCancelled | 11.886 ns | 1.15 | - |
| Multiple · SemaphoreSlim · System | 0 | NotCancelled | 17.517 ns | 1.70 | - |
| Multiple · AsyncSemaphore · VS.Threading | 0 | NotCancelled | 19.447 ns | 1.88 | - |
| Multiple · AsyncLock · NonKeyed | 0 | NotCancelled | 21.326 ns | 2.07 | - |
| Multiple · AsyncLock · Nito | 0 | NotCancelled | 38.806 ns | 3.76 | 320 B |
| Multiple · AsyncLock · NeoSmart | 0 | NotCancelled | 56.755 ns | 5.50 | 208 B |
| Multiple · AsyncLock · Pooled (ValueTask) | 1 | None | 28.165 ns | 1.00 | - |
| Multiple · AsyncLock · ProtoPromise | 1 | None | 36.486 ns | 1.30 | - |
| Multiple · SemaphoreSlim · System | 1 | None | 42.075 ns | 1.49 | 88 B |
| Multiple · AsyncSemaphore · VS.Threading | 1 | None | 64.412 ns | 2.29 | 168 B |
| Multiple · AsyncLock · RefImpl | 1 | None | 76.101 ns | 2.70 | 216 B |
| Multiple · AsyncLock · Nito | 1 | None | 91.453 ns | 3.25 | 728 B |
| Multiple · AsyncLock · NeoSmart | 1 | None | 116.605 ns | 4.14 | 416 B |
| Multiple · AsyncLock · Pooled (Task) | 1 | None | 480.060 ns | 17.05 | 271 B |
| Multiple · AsyncLock · NonKeyed | 1 | None | 543.285 ns | 19.29 | 352 B |
| Multiple · AsyncLock · Pooled (ValueTask) | 1 | NotCancelled | 46.467 ns | 1.00 | - |
| Multiple · AsyncLock · ProtoPromise | 1 | NotCancelled | 78.408 ns | 1.69 | - |
| Multiple · AsyncSemaphore · VS.Threading | 1 | NotCancelled | 79.121 ns | 1.70 | 168 B |
| Multiple · AsyncLock · NeoSmart | 1 | NotCancelled | 119.281 ns | 2.57 | 416 B |
| Multiple · AsyncLock · Nito | 1 | NotCancelled | 394.059 ns | 8.48 | 968 B |
| Multiple · AsyncLock · Pooled (Task) | 1 | NotCancelled | 516.424 ns | 11.12 | 272 B |
| Multiple · SemaphoreSlim · System | 1 | NotCancelled | 565.700 ns | 12.18 | 504 B |
| Multiple · AsyncLock · NonKeyed | 1 | NotCancelled | 660.724 ns | 14.22 | 640 B |
| Multiple · AsyncLock · ProtoPromise | 10 | None | 261.829 ns | 0.84 | - |
| Multiple · SemaphoreSlim · System | 10 | None | 270.938 ns | 0.87 | 880 B |
| Multiple · AsyncLock · Pooled (ValueTask) | 10 | None | 309.915 ns | 1.00 | - |
| Multiple · AsyncSemaphore · VS.Threading | 10 | None | 511.846 ns | 1.65 | 1680 B |
| Multiple · AsyncLock · Nito | 10 | None | 542.904 ns | 1.75 | 4400 B |
| Multiple · AsyncLock · RefImpl | 10 | None | 619.827 ns | 2.00 | 2160 B |
| Multiple · AsyncLock · NeoSmart | 10 | None | 634.247 ns | 2.05 | 2288 B |
| Multiple · AsyncLock · Pooled (Task) | 10 | None | 3,191.047 ns | 10.30 | 1352 B |
| Multiple · AsyncLock · NonKeyed | 10 | None | 3,509.004 ns | 11.32 | 2296 B |
| Multiple · AsyncLock · ProtoPromise | 10 | NotCancelled | 466.087 ns | 0.91 | - |
| Multiple · AsyncLock · Pooled (ValueTask) | 10 | NotCancelled | 515.058 ns | 1.00 | - |
| Multiple · AsyncLock · NeoSmart | 10 | NotCancelled | 628.808 ns | 1.22 | 2288 B |
| Multiple · AsyncSemaphore · VS.Threading | 10 | NotCancelled | 700.242 ns | 1.36 | 1680 B |
| Multiple · AsyncLock · Nito | 10 | NotCancelled | 3,204.949 ns | 6.22 | 6800 B |
| Multiple · AsyncLock · Pooled (Task) | 10 | NotCancelled | 3,474.221 ns | 6.75 | 1352 B |
| Multiple · SemaphoreSlim · System | 10 | NotCancelled | 4,349.064 ns | 8.45 | 3888 B |
| Multiple · AsyncLock · NonKeyed | 10 | NotCancelled | 4,972.770 ns | 9.66 | 5176 B |
| Multiple · AsyncLock · ProtoPromise | 100 | None | 2,582.434 ns | 0.81 | - |
| Multiple · SemaphoreSlim · System | 100 | None | 2,587.874 ns | 0.82 | 8800 B |
| Multiple · AsyncLock · Pooled (ValueTask) | 100 | None | 3,169.397 ns | 1.00 | - |
| Multiple · AsyncSemaphore · VS.Threading | 100 | None | 4,740.432 ns | 1.50 | 21120 B |
| Multiple · AsyncLock · Nito | 100 | None | 5,232.310 ns | 1.65 | 41120 B |
| Multiple · AsyncLock · RefImpl | 100 | None | 6,004.082 ns | 1.89 | 21600 B |
| Multiple · AsyncLock · NeoSmart | 100 | None | 6,037.141 ns | 1.91 | 21008 B |
| Multiple · AsyncLock · Pooled (Task) | 100 | None | 34,158.735 ns | 10.78 | 12215 B |
| Multiple · AsyncLock · NonKeyed | 100 | None | 35,626.170 ns | 11.24 | 21800 B |
| Multiple · AsyncLock · ProtoPromise | 100 | NotCancelled | 4,536.588 ns | 0.91 | - |
| Multiple · AsyncLock · Pooled (ValueTask) | 100 | NotCancelled | 4,987.139 ns | 1.00 | - |
| Multiple · AsyncLock · NeoSmart | 100 | NotCancelled | 5,916.785 ns | 1.19 | 21008 B |
| Multiple · AsyncSemaphore · VS.Threading | 100 | NotCancelled | 6,643.265 ns | 1.33 | 21120 B |
| Multiple · AsyncLock · Pooled (Task) | 100 | NotCancelled | 33,360.289 ns | 6.69 | 12216 B |
| Multiple · AsyncLock · Nito | 100 | NotCancelled | 33,478.485 ns | 6.71 | 65120 B |
| Multiple · SemaphoreSlim · System | 100 | NotCancelled | 43,340.455 ns | 8.69 | 37792 B |
| Multiple · AsyncLock · NonKeyed | 100 | NotCancelled | 51,865.969 ns | 10.40 | 50600 B |
Benchmark Analysis
Key Findings:
- Uncontended Performance: `AsyncLock` performs comparably to or better than `SemaphoreSlim` in uncontended scenarios due to the optimized fast path that avoids allocations entirely.
- Memory Efficiency: The pooled `IValueTaskSource` approach significantly reduces allocations compared to `TaskCompletionSource`-based implementations. This is especially beneficial in high-throughput scenarios.
- Contended Scenarios: Under contention, the local waiter optimization ensures the first queued waiter incurs no allocation, while subsequent waiters benefit from pool reuse. ProtoPromise can outperform the pooled implementation in several published throughput measurements, and `SemaphoreSlim` is also competitive in some cases, but always at the cost of allocations.
- ValueTask Advantage: Returning `ValueTask<Releaser>` instead of `Task` keeps completion allocation-free in all cases.
When to Choose AsyncLock:
- High-throughput scenarios where lock acquisition is frequent
- Memory-sensitive applications where allocation pressure matters
- Scenarios where locks are typically contended or allocation-free cancellation support is needed
Best Practices
DO: Use the using pattern to ensure lock release and await the result directly
// Good: Prepare data outside the lock
public async Task UpdateAsync(Data newData)
{
// Prepare outside lock
var processed = await PrepareDataAsync(newData);
// using ensures lock is released
using (await _lock.LockAsync())
{
_data = processed;
}
}
DO: Keep critical sections short
// Good: Minimal time holding lock
using (await _lock.LockAsync())
{
_data = processed;
}
DO: Prefer avoiding CancellationToken for hot-path locks
Cancellation registrations allocate a small control structure. For hot-path code, omit the token when possible, or perform an early cancellationToken.IsCancellationRequested check before calling LockAsync to avoid allocations from Task.FromCanceled.
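One way to apply this guidance, sketched under the assumption of `_lock` and `_buffer` fields (both illustrative): check the token up front, then acquire without it so the fast path stays allocation-free.

```csharp
public async Task WriteAsync(Data item, CancellationToken cancellationToken)
{
    // Early, allocation-free check instead of passing the token to LockAsync.
    cancellationToken.ThrowIfCancellationRequested();

    // No token passed: no cancellation registration is created.
    using (await _lock.LockAsync())
    {
        _buffer.Add(item); // critical section
    }
}
```

The trade-off: without the token, a waiter that ends up queued cannot be canceled while it waits, so reserve this pattern for locks that are held briefly.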
DO: Configure a larger pool under high contention
If you expect many concurrent waiters, provide a custom object pool with a larger retention size so allocations are avoided when the pool can satisfy requests.
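For illustration only, a high-contention setup could look like the sketch below. `SizedValueTaskSourcePool` and its `maxRetained` parameter are hypothetical names standing in for whatever `IGetPooledManualResetValueTaskSource<Releaser>` implementation your build provides; consult the library's pool types for the real constructor.

```csharp
// Hypothetical pool type; substitute the pool implementation your build provides.
var waiterPool = new SizedValueTaskSourcePool<Releaser>(maxRetained: 256);

// With a larger retention size, bursts of queued waiters can be served
// from the pool instead of allocating new IValueTaskSource instances.
var asyncLock = new AsyncLock(waiterPool);
```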
DON'T: Create new locks repeatedly
// Bad: Creating a new lock each time
public async Task OperationAsync()
{
    var asyncLock = new AsyncLock(); // Don't do this!
    await Task.WhenAll(
        Work(asyncLock),
        Work(asyncLock));
}
public async Task Work(AsyncLock asyncLock)
{
    using (await asyncLock.LockAsync())
    {
        // Work...
    }
}
DON'T: Hold the lock during long-running operations
// Bad: Holding lock during slow operation
using (await _lock.LockAsync())
{
await SlowDatabaseQueryAsync(); // Don't hold lock!
}
DON'T: Nest locks (may deadlock)
// Bad: Risk of deadlock
using (await _lock1.LockAsync())
{
using (await _lock2.LockAsync()) // Deadlock risk!
{
// Work...
}
}
See Also
- Threading Package Overview
- AsyncAutoResetEvent - Auto-reset event variant
- AsyncManualResetEvent - Manual-reset event variant
- AsyncReaderWriterLock - Async reader-writer lock
- AsyncCountdownEvent - Async countdown event
- AsyncBarrier - Async barrier synchronization primitive
- AsyncSemaphore - Async semaphore primitive
- Benchmarks - Benchmark description
© 2026 The Keepers of the CryptoHives