SQL Server: Details of Locking
- SQL Server Locking Mechanism
- Transaction Isolation Levels
- Lock Types and Compatibility
- Summary
In the first article on transactions and locking, we introduced one of SQL Server's advanced features: handling transactions and locking. This article delves into the details of locking, which has a major effect on SQL Server performance.
SQL Server Locking Mechanism
SQL Server consumes a certain amount of memory for each lock it acquires and releases. Servers with more memory can support more concurrent users because they can hold more locks at once. With SQL Server 7.0 and later, memory is managed dynamically, although you can set the maximum amount of memory allotted to SQL Server. The amount of memory available to the server plays a major role in determining the lock granularity that SQL Server chooses for each transaction.
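As a quick sketch, here is how you might cap the memory available to SQL Server using sp_configure (the 2048 MB figure is purely illustrative):

-- 'max server memory (MB)' is an advanced option, so expose it first.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- Cap SQL Server's dynamically managed memory at 2048 MB (illustrative value).
EXEC sp_configure 'max server memory (MB)', 2048
RECONFIGURE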
To understand lock granularity, it's best to think about how SQL Server stores data. The data is stored on 8KB data pages. A set of eight contiguous pages makes up an extent. When you create a database, you allocate a certain amount of disk space for that database. As you populate the database, the data pages start to fill up, and SQL Server allocates more of that disk space by creating new pages and extents. The data storage structure being locked (row, page, extent, or table) defines the lock granularity.
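If you want to see this allocation for yourself, the system procedure sp_spaceused reports how much space a table's data occupies (the table name below is just an example):

-- Report space used by a single table; 'Customers' is a hypothetical name.
EXEC sp_spaceused 'Customers'
-- The data column is reported in KB; dividing it by 8 gives a rough count
-- of the 8KB data pages the table occupies.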
In SQL Server 2000, the lowest level of granularity is a row. SQL Server can lock individual rows, data pages, extents, or entire tables. Suppose that you execute an UPDATE statement affecting a single row. SQL Server will lock that row so that no other user can modify it until your transaction completes. Meanwhile, the rest of the users are free to update rows not affected by your UPDATE. Row-level locking is therefore best for concurrency.
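To see a row lock in action, you can run a single-row UPDATE inside an open transaction and inspect the locks held by your session with sp_lock (the table and column names are assumptions for illustration):

BEGIN TRANSACTION

-- Affects one row, so SQL Server can take a single row-level lock.
UPDATE Customers
SET    Phone = '555-0100'
WHERE  CustomerID = 42

-- While the transaction is open, sp_lock lists the locks held by this session:
-- expect a RID or KEY lock on the row, plus intent locks on its page and table.
EXEC sp_lock @@SPID

COMMIT TRANSACTION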
Now, suppose that you execute an UPDATE statement affecting 1,000 rows. Should SQL Server lock one row at a time? That would mean acquiring 1,000 locks, which might or might not be an option, depending on how much memory is available. Depending on whether these rows are located on contiguous data pages, SQL Server might acquire several page or extent locks, or lock the entire table until your transaction completes. If SQL Server locks several pages, the rest of the database users can still modify rows located on other pages. If SQL Server locks the whole table, however, the rest of the users will have to wait until your transaction is committed. The bottom line is that locking is a tradeoff between concurrency and overhead.
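You can observe the coarser granularity the same way: with a WHERE clause that qualifies many rows, sp_lock may list PAG (page) or TAB (table) locks instead of hundreds of individual row locks (again, the table and column names are hypothetical):

BEGIN TRANSACTION

-- Touches many rows, so SQL Server may choose page or table locks instead.
UPDATE Orders
SET    Status = 'Archived'
WHERE  OrderDate < '19990101'

EXEC sp_lock @@SPID

ROLLBACK TRANSACTION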
It's equally important to know how SQL Server chooses the level of locking for each transaction. The query optimizer is aware of the memory requirements for each lock. If locking an entire page would require less memory than locking each of its rows individually, and SQL Server is low on memory, a page lock will be acquired instead of row locks. Likewise, if it is cheaper to acquire a table lock than to lock a number of extents or pages, SQL Server will opt for a table-level lock.
The above paragraph might make it sound like you have no power over the locking decisions that SQL Server makes. This is not entirely true; you do have locking hints that let you override SQL Server's default decisions. For instance, you can specify the ROWLOCK hint with your UPDATE statement to convince SQL Server to lock each row affected by that data modification. Whether it's prudent to do so is another story: what happens if your UPDATE affects 95% of the rows in the table? If the table contains 1,000 rows, SQL Server will have to acquire 950 individual locks, which is likely to cost far more memory than acquiring a single table lock. So think twice before you bombard your code with ROWLOCK hints. We'll discuss locking hints in greater detail later in the article.
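As a preview, a locking hint is simply appended to the table reference; the sketch below forces row locks on a hypothetical Customers table:

-- WITH (ROWLOCK) asks SQL Server to take row-level locks for this statement.
UPDATE Customers WITH (ROWLOCK)
SET    Region = 'NW'
WHERE  City = 'Portland'
-- If the WHERE clause matches most of the table, forcing row locks this way
-- can cost far more memory than letting SQL Server take a single table lock.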