Lock keyword gets an upgrade in .NET 9

Marek Sirkovský
Nov 4, 2024


Have you ever thought about why the object type is commonly used as the default type for locking in C#? I mean this notorious code:

private static readonly object _lock = new object();
...
lock (_lock)
{
    // critical section
}

The answer is simple: an instance of any reference type is guaranteed to be unique in memory and has special features that can be used for locking, so why not use the simplest type in .NET — object?
Moreover, there are extra arguments supporting this decision. Microsoft aimed to keep the language and its threading model simple without requiring developers to learn additional, special-purpose types. Since objects can be anything, developers have the flexibility to use existing instances rather than having to create dedicated lock instances.


But it seems that the days of using an object as the default lock type are over. With .NET 9, we’ve got a new type designed specifically for locking — System.Threading.Lock. While it’s not a groundbreaking feature, it’s still a nice enhancement.

Let’s quickly see how locks have evolved. If you’re not interested in the history, feel free to skip ahead to the next section.

Locking in .NET

The lock statement, introduced in .NET 1.0 (2002), is syntactic sugar for the Monitor class, which uses Monitor.Enter and Monitor.Exit to manage thread access to critical sections.

In .NET 2.0 (2005), ReaderWriterLock was added for scenarios where multiple threads could read, but only one could write, though it faced performance issues and deadlocks.

.NET 3.5 (2007) addressed this with ReaderWriterLockSlim, offering better performance and scalability.

.NET 4.0 (2010) introduced new concurrency primitives like SemaphoreSlim, ManualResetEventSlim, and SpinLock, allowing more efficient thread synchronization.

In glorious .NET 4.5 (2012), async/await transformed concurrency by reducing the need for explicit locks (at least in some scenarios). This pattern inspired other languages, such as JavaScript, Python, C++, and many more, though it originated from F# asynchronous workflows (2007).

With .NET Core (2016+), the focus shifted to lightweight locking mechanisms, such as the AsyncLock pattern using SemaphoreSlim for asynchronous tasks. Concurrent collections like ConcurrentDictionary and ConcurrentQueue further minimized the need for explicit locks.
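To make that AsyncLock pattern concrete, here is a minimal sketch built on SemaphoreSlim. The class and method names (AsyncLockExample, RunExclusiveAsync) are just illustrative, not a BCL API:

public sealed class AsyncLockExample
{
    // A semaphore with a single slot acts as an async-friendly mutual-exclusion lock.
    private readonly SemaphoreSlim _semaphore = new(1, 1);

    public async Task RunExclusiveAsync(Func<Task> criticalSection)
    {
        await _semaphore.WaitAsync(); // the asynchronous "Enter"
        try
        {
            await criticalSection();  // only one caller at a time gets here
        }
        finally
        {
            _semaphore.Release();     // always release, even if an exception is thrown
        }
    }
}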

C# 8.0 (2019) introduced async streams (IAsyncEnumerable<T>), and around the same time .NET gained System.Threading.Channels for more advanced concurrency scenarios, along with ValueTask, a more efficient alternative to Task for synchronously completing operations.

How does a lock work?

After this brief historical overview, let’s take a deep breath and delve into the intricacies of locking. As I mentioned in the blog post’s introduction, there are “special features that can be used for locking.” Let’s explore what these special features are.

SyncBlock vs ThinLock in .NET

Internal locking mechanisms in .NET use two distinct concepts: SyncBlock and ThinLock. While these terms aren’t officially documented in detail, they refer to internal mechanisms used by the Common Language Runtime (CLR) to manage object-level synchronization in C#.

Contested vs. uncontested locks

Let’s start with the simpler one. A ThinLock is used when a lock is only ever taken in a simple, uncontested manner. Uncontested means that only one thread at a time tries to acquire the lock, so no thread ever has to wait for another to leave the critical section.

If you have the following method:

private static readonly object _lock = new object();

// ParameterizedThreadStart requires an object parameter, so we cast it back to a string.
static void ThreadMethod(object? state)
{
    var threadName = (string)state!;
    Console.WriteLine($"{threadName} is attempting to enter the critical section.");

    // Lock the critical section to prevent other threads from entering
    lock (_lock)
    {
        Console.WriteLine($"{threadName} has entered the critical section.");
        Thread.Sleep(2000); // Simulate work being done in the critical section
        Console.WriteLine($"{threadName} is leaving the critical section.");
    }

    Console.WriteLine($"{threadName} has exited the critical section.");
}

Then the example of uncontested locking (ThinLock) looks like this:

// Create two threads that will try to enter the critical section.
var thread1 = new Thread(ThreadMethod);
var thread2 = new Thread(ThreadMethod);

// Start the first thread
thread1.Start("Thread 1");
// We wait for the first thread before starting a new one.
thread1.Join();

// Start the second thread.
thread2.Start("Thread 2");
thread2.Join();

Threads sequentially enter the critical section. But what happens when two threads attempt to access it simultaneously?

// Create two threads that will try to enter the critical section
var thread1 = new Thread(ThreadMethod);
var thread2 = new Thread(ThreadMethod);

// Start both threads - they compete to get the lock first.
thread1.Start("Thread 1");
thread2.Start("Thread 2");

// Wait for both threads to finish
thread1.Join();
thread2.Join();

That is a classic example of a contested lock. In this case, SyncBlock is used instead of the simpler ThinLock.

Now that we understand the roles of ThinLock and SyncBlock, let’s examine how they work.

ThinLock

Before the CLR allocates a SyncBlock, it tries to manage locks using a more lightweight mechanism directly in the object’s header. ThinLocks rely on the object header in .NET, which includes a lock word (or sync block index) field.

When an object is locked, the CLR first attempts to store the lock information (such as the thread ID of the owner and the lock state) in the object’s header itself.

If the lock is not contested, the ThinLock is sufficient, and no SyncBlock is needed. If contention arises, the CLR escalates the ThinLock into a SyncBlock.
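The escalation itself happens inside the runtime and isn’t observable through a public API, but we can at least make “contention” visible from the caller’s side. Here is a small sketch using Monitor.TryEnter; the gate and holder names and the timing values are just for illustration:

var gate = new object();

// One thread grabs the lock and holds it for a while...
var holder = new Thread(() =>
{
    lock (gate)
    {
        Thread.Sleep(1000); // keep the lock so the main thread contends
    }
});
holder.Start();
Thread.Sleep(100); // give the holder time to acquire the lock

// ...and the main thread can immediately tell whether the lock is contested.
if (Monitor.TryEnter(gate, millisecondsTimeout: 0))
{
    try
    {
        Console.WriteLine("Uncontested: acquired immediately.");
    }
    finally
    {
        Monitor.Exit(gate);
    }
}
else
{
    Console.WriteLine("Contested: another thread currently holds the lock.");
}

holder.Join();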

SyncBlock (Fat Lock)

A SyncBlock (short for synchronization block) is a data structure used internally by the .NET CLR to store synchronization-related information about an object. SyncBlock is part of an internal table known as the SyncBlock table that the CLR maintains.

When a ThinLock is escalated to a SyncBlock, an entry in this table is allocated and the object’s header is updated to reference it. Each SyncBlock contains detailed information about the lock, the owning thread, recursion counts, and information about waiting threads.

The CLR automatically uses ThinLocks to improve performance and reduce memory overhead. Developers don’t have direct control over when ThinLocks or SyncBlocks are used.

Do we need a new Lock type?

As you can see, the current ThinLock and SyncBlock are well-tested and operate quite reliably. So, what are the reasons for introducing the new Lock type?

Locking with an object has always been weird

Early in my career, I found using the object type for locking confusing. With so many specialized types available for different scenarios — such as streaming, threading, and globalization — it felt strange that there wasn’t a dedicated type for locking. The new Lock type resolves this by providing a clear, semantically meaningful way to manage locks. Let’s see how to use it.

// old way
private static readonly object _lock = new();
lock (_lock)
{
    // critical section
}

// new way
private static readonly Lock _lock = new Lock();
lock (_lock)
{
    // critical section
}

That’s it. The only difference is that you use the Lock type instead of object.
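The lock statement is the idiomatic way to use it, but Lock also exposes explicit members (Enter, TryEnter, Exit, EnterScope, and an IsHeldByCurrentThread property) for manual control. A quick sketch of manual use; the TryDoWork method name is just illustrative:

private static readonly Lock _lock = new();

static void TryDoWork()
{
    // TryEnter lets us give up instead of blocking when the lock is busy.
    if (_lock.TryEnter(TimeSpan.FromMilliseconds(50)))
    {
        try
        {
            // critical section
        }
        finally
        {
            _lock.Exit();
        }
    }
    else
    {
        Console.WriteLine("Lock is busy, skipping this round.");
    }
}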

As I mentioned, the C# compiler rewrites the code to use the Monitor class when using the lock with an object.

private static readonly object _lock = new();
lock (_lock)
{
    // critical section
}

is rewritten (lowered) to:

object @lock = _lock;
bool lockTaken = false;
try
{
    Monitor.Enter(@lock, ref lockTaken);
    // critical section
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(@lock);
    }
}

The new Lock type behaves a bit differently. The lock statement:

private static readonly Lock _lock = new();
lock (_lock)
{
    // critical section
}

is lowered to the following code:

var scope = _lock.EnterScope();
try
{
    // critical section
}
finally
{
    scope.Dispose();
}

Instead of the Monitor class, the Lock type uses a new nested type called Lock.Scope. An instance of this type is responsible for holding the lock until it is disposed.

  • _lock.EnterScope(): This method acquires the lock and returns a disposable Lock.Scope value (a ref struct). Once it returns, the critical section is protected.
  • scope.Dispose(): Once the critical section finishes, the Dispose method is called in the finally block. This ensures that the lock is always released, even if an exception occurs in the critical section. Releasing the lock allows other threads to acquire it and proceed.

The Dispose Pattern vs the Lock Pattern?

Have you noticed that this lowering is basically the Dispose Pattern?

private static readonly Lock _lock = new();

// The C# compiler generates the same lowered code for both snippets:

// 1.
using (_lock.EnterScope())
{
    // critical section
}

// 2.
lock (_lock)
{
    // critical section
}

The C# team used the Dispose Pattern because there are hidden similarities between the lock and using keywords. There are differences, of course, but both patterns control the lifecycle of something (a resource in using and a critical section in lock), and both ensure cleanup or release occurs, whether the block completes successfully or throws an exception.

So what should we use? Lock or Using?

A lock is better from a semantic point of view; exposing a scope feels like a leaky abstraction. Another good argument for the lock keyword is pattern matching: there is a proposal to support multiple lock types in the future.

private static readonly SpinLock _spinLock = new();
lock (_spinLock) // This doesn't work today, but it might in the future.
{
}

However, working with the scope directly is beneficial if you need more granular control over when the lock is released.

var scope = _lock.EnterScope();
try
{
    // do some work

    // release the lock prematurely
    scope.Dispose();

    // do some more work
}
finally
{
    scope.Dispose();
}

Pattern matching

Since we’re talking about pattern matching, there are (of course) some edge cases. As we’ve seen, the C# compiler converts the lock keyword into two different forms depending on the object’s type. Take a look at the code snippet below and try to guess whether the compiler generates a traditional monitor lock or the new scope-based lock.

// compile-time type => object
// runtime type => Lock
object l = new System.Threading.Lock();
lock (l)
{
}

In this case, the compiler uses the old method — Monitor.Enter. The pattern matching relies on the compile-time type rather than the runtime type. I agree this may be somewhat confusing, but the C# compiler fortunately issues a warning in these cases:

A value of type 'System.Threading.Lock' converted to a different type 
will use likely unintended monitor-based locking in 'lock' statement.

I’m also aware that my previous code:

object l = new System.Threading.Lock();

might seem a bit contrived. However, the warning is handy in situations like the following, where the conversion to object is not explicit.

private static readonly Lock _lock = new(); // the new Lock type

private static void Run()
{
    // lockParam is "silently" cast to the object type.
    ThreadPool.QueueUserWorkItem(lockParam =>
    {
        lock (lockParam) // The old way (Monitor) is used!
        {
        }
    }, _lock); // The compiler shows a warning for this line.
}

Performance of the new Lock type

The new Lock type aims to improve performance. As you saw, the simplest way to do that is to avoid promoting to a SyncBlock as much as possible. But supporting only ThinLock wouldn’t be super useful. That means System.Threading.Lock can still be promoted to a heavier state under contention, but its implementation is (at least according to Microsoft) lighter than that of the traditional Monitor. The .NET runtime tries to avoid the costs associated with SyncBlocks whenever possible, promoting only when contention levels justify it.
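If you want to see the difference on your own machine, a micro-benchmark is the easiest way. Below is a minimal sketch using BenchmarkDotNet (assuming the BenchmarkDotNet NuGet package is referenced; the class and method names are mine, and it only measures the uncontested acquire/release path):

using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class LockBenchmarks
{
    private readonly object _objectLock = new();
    private readonly Lock _newLock = new();
    private int _counter;

    [Benchmark(Baseline = true)]
    public void MonitorLock()
    {
        lock (_objectLock) { _counter++; }
    }

    [Benchmark]
    public void SystemThreadingLock()
    {
        lock (_newLock) { _counter++; }
    }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<LockBenchmarks>();
}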

Still missing async

Unfortunately, support for asynchronous locking is lacking because the mechanisms for synchronous and asynchronous locking are fundamentally different. On a positive note, the C# compiler’s ability to translate the lock keyword into the Dispose Pattern is encouraging. Perhaps in the future, we will be able to write something like this:

// There is no type such as LockAsync in standard .NET
private static readonly LockAsync _lockAsync = new();
await using (_lockAsync.EnterScope()) // not working now
{
}

// or
private static readonly LockAsync _lockAsync = new();
await lock (_lockAsync) // not working now
{
}

Until then, we can use libraries that support asynchronous locking, such as Stephen Cleary’s AsyncEx.
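For completeness, here is roughly how that looks with AsyncEx (a sketch assuming the Nito.AsyncEx NuGet package; the class, method, and field names are mine):

using Nito.AsyncEx;

public class Worker
{
    private static readonly AsyncLock _mutex = new AsyncLock();

    public static async Task DoWorkAsync()
    {
        // LockAsync returns a disposable that releases the lock when disposed,
        // so awaiting inside the critical section works naturally.
        using (await _mutex.LockAsync())
        {
            await Task.Delay(100); // simulate asynchronous work while holding the lock
        }
    }
}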
