Nielsen c66.tex V4 - 07/21/2009 4:13pm Page 1382

Part IX  Performance Tuning and Optimization

DECLARE @retry INT;
SET @retry = 1;

WHILE @retry = 1
BEGIN
  BEGIN TRY
    SET @retry = 0;

    BEGIN TRANSACTION;

    UPDATE HumanResources.Department
      SET Name = 'qq'
      WHERE DepartmentID = 2;

    UPDATE HumanResources.Department
      SET Name = 'x'
      WHERE DepartmentID = 1;

    COMMIT TRANSACTION;
  END TRY
  BEGIN CATCH
    IF ERROR_NUMBER() = 1205
    BEGIN
      PRINT ERROR_MESSAGE();
      SET @retry = 1;
    END

    ROLLBACK TRANSACTION;
  END CATCH
END

Instead of letting SQL Server decide which transaction will be the "deadlock victim," a transaction can "volunteer" to serve as the deadlock victim. That is, the transaction with the lowest deadlock priority will be rolled back first. If the deadlock priorities are the same, SQL Server falls back on the rollback cost to determine which transaction to roll back. The following code inside a transaction will inform SQL Server that the transaction should be rolled back in case of a deadlock:

SET DEADLOCK_PRIORITY LOW;

The setting actually allows for a range of values from -10 to 10, or normal (0), low (-5), and high (5).

Minimizing deadlocks

Even though deadlocks can be detected and handled, it's better to avoid them altogether. The following practices will help prevent deadlocks:

■ Set the server setting for maximum degree of parallelism (maxdop) to 1.

■ Keep a transaction short and to the point. Any code that doesn't have to be in the transaction should be left out of it.

■ Never code a transaction to depend on user input.

■ Try to write batches and procedures so that they obtain locks in the same order — for example, TableA, then TableB, then TableC. This way, one procedure will wait for the next, avoiding a deadlock.
■ Plan the physical schema to keep data that might be selected simultaneously close on the data page by normalizing the schema and carefully selecting the clustered indexes. Reducing the spread of the locks will help prevent lock escalation. Smaller locks help prevent lock contention.

■ Ensure that locking is done at the lowest level. This includes locks held at the following levels: database, object, page, and key. The lower the lock level, the more locks can be held without contention.

■ Don't increase the isolation level unless it's necessary. A stricter isolation level increases the duration of the locks and the types of locks held during the transaction.

Understanding SQL Server Locking

SQL Server implements the ACID isolation property with locks that protect a transaction's rows from being affected by another transaction. SQL Server locks are not just a "page lock on" and "page lock off" scheme, but rather a series of lock levels. Before they can be controlled, they must be understood.

If you've written manual locking schemes for other database engines to overcome their locking deficiencies (as I have), you may feel as though you still need to control the locks. Let me assure you that the SQL Server lock manager can be trusted. Nevertheless, SQL Server exposes several methods for controlling locks.

Within SQL Server, you can informally picture two processes: a query processor and a lock manager. The goal of the lock manager is to maintain transactional integrity as efficiently as possible by creating and dropping locks. Every lock has the following three properties:

■ Granularity: The size of the lock

■ Mode: The type of lock

■ Duration: The isolation mode of the lock

Locks are not impossible to view, but some tricks make viewing the current set of locks easier. In addition, lock contention, or the compatibility of various locks to exist or block other locks, can adversely affect performance if it's not understood and controlled.
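One such trick is the sys.dm_tran_locks dynamic management view (available in SQL Server 2005 and later), which returns one row for every lock currently held or requested. The following is a minimal sketch; the column list is abridged, and the ordering is an arbitrary choice for readability:

```sql
-- List current lock holders and waiters, one row per lock request.
SELECT request_session_id,   -- session holding or requesting the lock
       resource_type,        -- DATABASE, OBJECT, PAGE, KEY, EXTENT, etc.
       request_mode,         -- S, U, X, IS, IX, SIX, Sch-S, Sch-M, ...
       request_status        -- GRANT, WAIT, or CONVERT
  FROM sys.dm_tran_locks
 ORDER BY request_session_id;
```

Filtering on request_status = 'WAIT' quickly surfaces the sessions that are blocked at this moment.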
Lock granularity

The portion of the data controlled by a lock can vary from only one row to the entire database, as shown in Table 66-1. Several combinations of locks, depending on the lock granularity, could satisfy a locking requirement.

TABLE 66-1  Lock Granularity

Lock Size       Description
Row Lock        Locks a single row. This is the smallest lock available. SQL Server does not lock columns.
Page Lock       Locks a page, or 8 KB. One or more rows may exist on a single page.
Extent Lock     Locks eight pages, or 64 KB.
Table Lock      Locks the entire table.
Database Lock   Locks the entire database. This lock is used primarily during schema changes.
Key Lock        Locks nodes on an index.

For best performance, the SQL Server lock manager tries to balance the size of the lock against the number of locks. The struggle is between concurrency (smaller locks allow more transactions to access the data) and performance (fewer locks are faster, as each lock requires memory in the system to hold the information about the lock). SQL Server automatically manages the granularity of locks by trying to keep the lock size small and only escalating to a higher level when it detects memory pressure.

Lock mode

Locks not only have granularity, or size, but also a mode that determines their purpose. SQL Server has a rich set of lock modes (such as shared, update, and exclusive). Failing to understand lock modes will almost guarantee that you develop a poorly performing database.

Lock contention

The interaction and compatibility of the locks plays a vital role in SQL Server's transactional integrity and performance. Certain lock modes block other lock modes, as detailed in Table 66-2. For example, if transaction 1 has a shared lock (S) and transaction 2 requests an exclusive lock (X), then the request is denied, because a shared lock blocks an exclusive lock.
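This shared-versus-exclusive contention can be reproduced with two connections. The sketch below reuses the AdventureWorks table from the earlier examples; repeatable read is used only so that the shared lock is held until the commit rather than being released as soon as the row is read:

```sql
-- Connection 1: hold a shared (S) lock on the row until commit.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
SELECT Name
  FROM HumanResources.Department
 WHERE DepartmentID = 1;

-- Connection 2: this update requests an exclusive (X) lock and blocks
-- until connection 1 commits or rolls back.
UPDATE HumanResources.Department
   SET Name = 'Blocked Update'
 WHERE DepartmentID = 1;

-- Connection 1: releasing the shared lock unblocks connection 2.
COMMIT TRANSACTION;
```

While connection 2 is waiting, its request appears with a status of WAIT in the lock-viewing queries mentioned earlier.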
Keep in mind that exclusive locks are ignored unless the page in memory has been updated, i.e., is dirty.

TABLE 66-2  Lock Compatibility

                                      T2 requests:
T1 has:                               IS    S     U     IX    SIX   X
Intent shared (IS)                    Yes   Yes   Yes   Yes   Yes   No
Shared (S)                            Yes   Yes   Yes   No    No    No
Update (U)                            Yes   Yes   No    No    No    No
Intent exclusive (IX)                 Yes   No    No    Yes   No    No
Shared with intent exclusive (SIX)    Yes   No    No    No    No    No
Exclusive (X)                         No    No    No    No    No    No

Shared lock (S)

By far the most common and most abused lock, a shared lock (listed as an "S" in SQL Server) is a simple "read lock." If a transaction has a shared lock, it's saying, "I'm looking at this data." Multiple transactions are allowed to view the same data, as long as no one else already has an incompatible lock.

Exclusive lock (X)

An exclusive lock means that the transaction is performing a write to the data. As the name implies, an exclusive lock means that only one transaction may hold an exclusive lock at one time, and that no transactions may view the data during the exclusive lock.

Update lock (U)

An update lock can be confusing. It's not applied while a transaction is performing an update — that's an exclusive lock. Instead, the update lock means that the transaction is getting ready to perform an exclusive lock and is currently scanning the data to determine the row(s) it wants for that lock. Think of the update lock as a shared lock that's about to morph into an exclusive lock. To help prevent deadlocks, only one transaction may hold an update lock at any given time.

Intent locks (various)

An intent lock is a yellow flag, or a warning lock, that alerts other transactions to the fact that something more is going on. The primary purpose of an intent lock is to improve performance. Because an intent lock is used for all types of locks and for all lock granularities, SQL Server has many types of intent locks.
The following is a sampling of the intent locks:

■ Intent Shared Lock (IS)

■ Shared with Intent Exclusive Lock (SIX)

■ Intent Exclusive Lock (IX)

Intent locks serve to stake a claim for a shared or exclusive lock without actually being a shared or exclusive lock. In doing so, they solve two performance problems: hierarchical locking and permanent lock block.

Without intent locks, if transaction 1 holds a shared lock on a row and transaction 2 wants to grab an exclusive lock on the table, then transaction 2 needs to check for table locks, extent locks, page locks, row locks, and key locks. Instead, SQL Server uses intent locks to propagate a lock to higher levels of the data's hierarchy. When transaction 1 gains a row lock, it also places an intent lock on the row's page and table.

The intent locks move the overhead of locking from the transaction needing to check for a lock to the transaction placing the lock. The transaction placing the lock needs to place three or four locks, i.e., key, page, object, or database. The transaction checking only needs to check for locks that contend with the three or four locks it needs to place. That one-time write of three locks potentially saves hundreds of searches later as other transactions check for locks. Jim Gray (memorialized in Chapter 1) was the brains behind this optimization.

The intent locks also prevent a serious shared-lock contention problem — what I call "permanent lock block." As long as a transaction has a shared lock, another transaction can't gain an exclusive lock. What would happen if someone grabbed a shared lock every five seconds and held it for 10 seconds while a transaction was waiting for an exclusive lock? The UPDATE transaction could theoretically wait forever. However, once the transaction has an intent exclusive lock (IX), no other transaction can grab a shared lock.
The intent exclusive lock isn't a full exclusive lock, but it lays claim to gaining an exclusive lock in the future.

Schema lock (Sch-M, Sch-S)

Schema locks protect the database schema. SQL Server will apply a schema stability (Sch-S) lock during any query to prevent data definition language (DDL) commands. A schema modification lock (Sch-M) is applied only when SQL Server is adjusting the physical schema. If SQL Server is in the middle of adding a column to a table, then the schema lock will prevent any other transactions from viewing or modifying the data during the schema-modification operation.

Controlling lock timeouts

If a transaction is waiting for a lock, it will continue to wait until the lock is available. By default, no timeout exists — it can theoretically wait forever. Fortunately, you can set the lock timeout using the SET LOCK_TIMEOUT connection option. Set the option to a number of milliseconds, or set it to infinity (the default) by setting it to -1. Setting the lock timeout to 0 means that the transaction will instantly give up if any lock contention occurs at all. The application will be very fast, and very ineffective. The following query sets the lock timeout to two seconds (2,000 milliseconds):

SET LOCK_TIMEOUT 2000;

When a transaction does time out while waiting to gain a lock, a 1222 error is raised.

Best Practice

I recommend setting a lock timeout in the connection. The length of the wait you should specify depends on the typical performance of the database. I usually set a five-second timeout.

Lock duration

The third lock property, lock duration, is determined by the transaction isolation level of the transactions involved — the more stringent the isolation, the longer the locks will be held. SQL Server implements four transaction isolation levels (transaction isolation levels are detailed in the next section).
Index-level locking restrictions

Isolation levels and locking hints are applied from the connection and query perspective. The only way to control locks from the table perspective is to restrict the granularity of locks on a per-index basis. Using the ALTER INDEX command, row locks and/or page locks may be disabled for a particular index, as follows:

ALTER INDEX AK_Department_Name
  ON HumanResources.Department
  SET (ALLOW_PAGE_LOCKS = OFF);

This is useful for a couple of specific purposes. If a table frequently causes waiting because of page locks, setting ALLOW_PAGE_LOCKS to OFF will force row locks. The decreased scope of the lock will improve concurrency. In addition, if a table is seldom updated but frequently read, then row-level and page-level locks are inappropriate. Allowing only table locks is suitable during the majority of table accesses. For the infrequent update, a table-exclusive lock is not a big issue.

This index option is for fine-tuning the data schema; that's why it's set at the index level. To restrict the locks on a table's primary key, use sp_help tablename to find the specific name for the primary key index. The following commands configure the ProductCategory table as an infrequently updated lookup table. First, sp_help reports the name of the primary key index:

sp_help ProductCategory

Result (abridged):

index name                      index description                  index keys
PK__ProductCategory__79A81403   nonclustered, unique, primary      ProductCategoryID
                                key located on PRIMARY

Having identified the actual name of the primary key, the ALTER INDEX command can be set as shown previously.

Transaction Isolation Levels

Any study of how transactions affect performance must include transactional integrity, which refers to the quality, or fidelity, of the transaction. Three types of problems violate transactional integrity: dirty reads, nonrepeatable reads, and phantom rows.
The level of isolation, or the height of the fence between transactions, can be adjusted to control which transactional faults are permitted. The ANSI SQL-92 committee specifies four isolation levels: read uncommitted, read committed, repeatable read, and serializable. SQL Server 2005 introduced two additional row-versioning levels, which enable two levels of optimistic transaction isolation: snapshot and read committed snapshot. All six transaction isolation levels are listed in Table 66-3 and detailed in this section.

TABLE 66-3  ANSI-92 Isolation Levels

The transaction isolation level is set for the connection; a table hint overrides the connection's transaction isolation level on a per-table basis. A dirty read is seeing another transaction's noncommitted changes; a non-repeatable read is seeing another transaction's committed changes; a phantom row is an additional row selected by the WHERE clause as a result of another transaction. Reader/writer blocking means a write transaction blocks a read transaction.

Isolation Level                Table Hint        Dirty Read  Non-Repeatable  Phantom    Reader/Writer
                                                             Read            Row        Blocking
Read Uncommitted               NoLock,           Possible    Possible        Possible   Yes
  (least restrictive)          ReadUncommitted
Read Committed                 ReadCommitted     Prevented   Possible        Possible   Yes
  (SQL Server default;
  moderately restrictive)
Repeatable Read                RepeatableRead    Prevented   Prevented       Possible   Yes
Serializable                   Serializable      Prevented   Prevented       Prevented  Yes
  (most restrictive)
Snapshot                                         Prevented   Prevented       Possible   No
Read Committed Snapshot                          Prevented   Possible        Possible   No

Internally, SQL Server uses locks for isolation (except for the snapshot isolations), and the transaction isolation level determines the duration of the share lock or exclusive lock for the transaction, as listed in Table 66-4.
TABLE 66-4  Isolation Levels and Lock Duration

Isolation Level     Share-Lock Duration                Exclusive-Lock Duration
Read Uncommitted    None                               Held only long enough to prevent physical
                                                       corruption; otherwise, exclusive locks are
                                                       neither applied nor honored
Read Committed      Held while the transaction is      Held until transaction commit
                    reading the data
Repeatable Read     Held until transaction commit      Held until transaction commit
Serializable        Held until transaction commit      Held until transaction commit. The exclusive
                                                       lock also uses a key lock (also called a
                                                       range lock) to prevent inserts.
Snapshot Isolation  n/a                                n/a

Setting the transaction isolation level

The transaction isolation level can be set at the connection level using the SET command. Setting the transaction isolation level affects all statements for the duration of the connection, or until the transaction isolation level is changed again (you can't change the isolation level once in a transaction):

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

To view the current connection's transaction isolation level, use DBCC UserOptions, or query sys.dm_exec_sessions:

SELECT TIL.Description
  FROM sys.dm_exec_sessions dmv
  JOIN (VALUES(1, 'Read Uncommitted'),
              (2, 'Read Committed'),
              (3, 'Repeatable Read'),
              (4, 'Serializable')) AS TIL(ID, Description)
    ON dmv.transaction_isolation_level = TIL.ID
 WHERE session_id = @@SPID;

Result:

Read Committed

Alternately, the transaction isolation level for a single DML statement can be set by using table-lock hints in the FROM clause (WITH is optional). These will override the current connection transaction isolation level and apply the hint on a per-table basis.
For example, in the next code sample, the Department table is actually accessed using a read uncommitted transaction isolation level, not the connection's repeatable read transaction isolation level:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

SELECT Name
  FROM HumanResources.Department WITH (NOLOCK)
 WHERE DepartmentID = 1;

Level 1 – Read uncommitted and the dirty read

The lowest level of isolation, read uncommitted, is nothing more than a line drawn in the sand. It doesn't really provide any isolation between transactions, and it allows all three transactional faults. A dirty read, when one transaction can read uncommitted changes made by another transaction, is possibly the most egregious transaction isolation fault. It is illustrated in Figure 66-7.

FIGURE 66-7

A dirty read occurs when transaction 2 can see transaction 1's uncommitted changes.

[Figure: a timeline in which transaction 1 updates and later commits, while transaction 2's select reads the change before the commit.]

To demonstrate the read uncommitted transaction isolation level and the dirty read it allows, the following code uses two connections — creating two transactions: transaction 1 is on the left, and transaction 2 is on the right. The second transaction will see the first transaction's update before that update is committed:

Transaction 1

USE AdventureWorks2008;

BEGIN TRANSACTION;

UPDATE HumanResources.Department
   SET Name = 'Transaction Fault'
 WHERE DepartmentID = 1;

In a separate Query Editor window (refer to Figure 66-1), execute another transaction in its own connection window. (Use the Query tab context menu and New Vertical Tab Group to split the windows.) This transaction will set its transaction isolation level to permit dirty reads.
Only the second transaction needs to be set to read uncommitted for transaction 2 to experience a dirty read:

Transaction 2

USE AdventureWorks2008;

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT Name
  FROM HumanResources.Department
 WHERE DepartmentID = 1;

Result:

Name
-----------------
Transaction Fault

Transaction 1 hasn't yet committed the transaction, but transaction 2 was able to read "Transaction Fault." That's a dirty read violation of transactional integrity. To finish the task, the first transaction will roll back that transaction:

Transaction 1

ROLLBACK TRANSACTION;

Best Practice

Never use read uncommitted or the WITH (NOLOCK) table hint. It's often argued that read uncommitted is OK for a reporting database, the rationale being that dirty reads won't matter because there's little updating and/or the data is not changing. If that's the case, then the reporting locks are only share locks, which won't block anyway. Another argument is that users don't mind seeing inconsistent data. However, it's often the case that users don't understand what "seeing inconsistent data" means. There are other issues related to how the SQL engine optimizes a read of uncommitted data that can result in your query reading the same data more than once.
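For readers tempted by NOLOCK, the read committed snapshot level listed in Table 66-3 offers a safer alternative: readers see the last committed version of each row without blocking writers and without dirty reads. The following is a minimal sketch of enabling it at the database level; the WITH ROLLBACK IMMEDIATE clause is one possible choice for dealing with the other active sessions that would otherwise block the change:

```sql
-- Once this option is on, ordinary READ COMMITTED reads use row
-- versions from tempdb instead of taking shared locks.
ALTER DATABASE AdventureWorks2008
  SET READ_COMMITTED_SNAPSHOT ON
  WITH ROLLBACK IMMEDIATE;  -- roll back open transactions blocking the change
```

Because the option changes the behavior of every read committed query in the database, test the tempdb version-store overhead before enabling it in production.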