Guide to learning Microsoft SQL Server 2008, part 145

Part IX: Performance Tuning and Optimization
Chapter 66: Managing Transactions, Locking, and Blocking

If you were to open a third or fourth transaction, they would all still see the original value, The Bald Knight. Even after the second transaction committed the change, the first transaction would still see the original value, The Bald Knight. This is the same behavior as serializable isolation, but without the blocking that serializable isolation causes. Any new transactions would see the updated value, Rocking with Snapshots.

Using read committed snapshot isolation

Read committed snapshot isolation is enabled using a similar syntax:

ALTER DATABASE Aesop
  SET READ_COMMITTED_SNAPSHOT ON;

Like snapshot isolation, read committed snapshot isolation uses row versioning to stave off locking and blocking issues. In the previous example, transaction 1 would see transaction 2's update once it was committed. The difference from snapshot isolation is that you don't specify a new isolation level. This setting simply changes the behavior of the standard read committed isolation level, which means you shouldn't have to change your application to benefit from it.

Handling write conflicts

Transactions that write to data within snapshot isolation can be blocked by a previous uncommitted write transaction. This blocking won't cause the new transaction to wait; instead, it generates an error. Be sure to use try-catch to handle these errors, wait a split second, and try again.
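The book doesn't show the retry code itself, so the following is only a minimal sketch of the try-catch-and-retry pattern it recommends. Error number 3960 is the snapshot update-conflict error; the Fable table, Title column, and key value are assumed from the chapter's earlier Aesop example rather than taken from this page.

USE Aesop;   -- snapshot isolation was enabled for Aesop earlier in the chapter

DECLARE @Retry INT = 3;

WHILE @Retry > 0
BEGIN
  BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;

    UPDATE Fable                       -- assumed table, column, and key value
      SET Title = 'Rocking with Snapshots'
      WHERE FableID = 2;

    COMMIT TRANSACTION;
    SET @Retry = 0;                    -- success, so stop looping
  END TRY
  BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

    IF ERROR_NUMBER() = 3960 AND @Retry > 1
      BEGIN
        SET @Retry = @Retry - 1;
        WAITFOR DELAY '00:00:00.100';  -- wait a split second, then try again
      END
    ELSE
      BEGIN
        SET @Retry = 0;
        RAISERROR ('The update failed after a snapshot write conflict.', 16, 1);
      END
  END CATCH
END;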
Using locking hints

Locking hints enable you to make minute adjustments to the locking strategy. Whereas the isolation level affects the entire connection, locking hints are specific to one table within one query (see Table 66-5). The WITH (locking hint) option is placed after the table in the FROM clause of the query. You can specify multiple locking hints by separating them with commas.

The following query uses a locking hint in the FROM clause of an UPDATE query to prevent the lock manager from escalating the granularity of the locks:

USE OBXKites;

UPDATE Product
  SET ProductName = ProductName + ' Updated'
  FROM Product WITH (RowLock);

If a query includes subqueries, don't forget that each query's table references will generate locks and can be controlled by a locking hint.

TABLE 66-5: Locking Hints

ReadUnCommitted: Isolation level. Doesn't apply or honor locks. Same as NoLock.
ReadCommitted: Isolation level. Uses the default transaction-isolation level.
RepeatableRead: Isolation level. Holds share and exclusive locks until COMMIT TRANSACTION.
Serializable: Isolation level. Applies the serializable transaction-isolation-level durations to the table, which holds the shared lock until the transaction is complete.
ReadPast: Skips locked rows instead of waiting.
RowLock: Forces row-level locks instead of page, extent, or table locks.
PagLock: Forces the use of page locks instead of a table lock.
TabLock: Automatically escalates row, page, or extent locks to the table-lock granularity.
NoLock: Doesn't apply or honor locks. Same as ReadUnCommitted.
TablockX: Forces an exclusive lock on the table. This prevents any other transaction from working with the table.
HoldLock: Holds the share lock until COMMIT TRANSACTION (same as Serializable).
UpdLock: Uses an update lock instead of a shared lock and holds the lock. This blocks any other reads or writes of the data between the initial read and a write operation. It can be used to keep the locks taken by a SELECT statement within a serializable-isolation transaction from causing deadlocks.
XLock: Holds an exclusive lock on the data until the transaction is committed.
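As noted above, hints can be combined by separating them with commas. The sketch below is not from the book; it shows a hypothetical work-queue read that combines UpdLock, RowLock, and ReadPast so that each worker holds an update lock on only the single row it reads and skips rows already locked by other workers. The WorkQueue table and its columns are illustrative assumptions.

BEGIN TRANSACTION;   -- UpdLock is held until the transaction ends

SELECT TOP (1) WorkID, Payload
  FROM WorkQueue WITH (UpdLock, RowLock, ReadPast)  -- hypothetical table
  WHERE Processed = 0
  ORDER BY WorkID;

-- process the row here, mark it as done, and then commit

COMMIT TRANSACTION;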
Application Locks

SQL Server uses a very sophisticated locking scheme. Sometimes a process or a resource other than data requires locking. For example, a procedure might need to run that would be ill affected if another user started another instance of the same procedure.

Several years ago, I wrote a program that routed cables for nuclear power plant designs. After the geometry of the plant (what's where) was entered and tested, the design engineers entered the cable-source equipment, destination equipment, and type of cable to be used. Once several cables were entered, a procedure wormed each cable through the cable trays so that cables were as short as possible. The procedure also considered cable fail-safe routes and separated incompatible cables. While I enjoyed writing that database, if multiple instances of the worm procedure ran simultaneously, each instance attempted to route the cables, and the data became fouled. An application lock is the perfect solution to that type of problem.

Application locks open up the whole world of SQL Server locks for custom uses within applications. Instead of using data as a locked resource, application locks use any named user resource declared in the sp_GetAppLock stored procedure.

Application locks must be obtained within a transaction. As with the locks the engine puts on the database resources, you can specify the lock mode (Shared, Update, Exclusive, IntentExclusive, or IntentShared). The return code indicates whether or not the procedure was successful in obtaining the lock, as follows:

■ 0: Lock was obtained normally
■ 1: Lock was obtained after another procedure released it
■ -1: Lock request failed (timeout)
■ -2: Lock request failed (canceled)
■ -3: Lock request failed (deadlock)
■ -999: Lock request failed (other error)

The sp_ReleaseAppLock stored procedure releases the lock. The following code shows how the application lock can be used in a batch or procedure:

DECLARE @ShareOK INT;

BEGIN TRANSACTION;

EXEC @ShareOK = sp_GetAppLock
  @Resource = 'CableWorm',
  @LockMode = 'Exclusive';

IF @ShareOK < 0
  PRINT 'Error handling code goes here';   -- the lock request failed

EXEC sp_ReleaseAppLock @Resource = 'CableWorm';

COMMIT TRANSACTION;
GO

When the application locks are viewed using Enterprise Manager or sp_Lock, the lock appears as an "APP"-type lock. The following is an abbreviated listing of sp_Lock executed at the same time as the previous code:

Sp_Lock Result:

spid  dbid  ObjId  IndId  Type  Resource      Mode  Status
57    8     0      0      APP   Cabl1f94c136  X     GRANT

Note two minor differences in the way SQL Server handles application locks:

■ Deadlocks are not automatically detected.
■ If a transaction gets a lock several times, it has to release that lock the same number of times.

Application Locking Design

Aside from SQL Server locks, another locking issue deserves to be addressed. How the client application deals with multi-user contention is important to both the user's experience and the integrity of the data.

Implementing optimistic locking

The two basic means of dealing with multi-user access are optimistic locking and pessimistic locking. The one you use determines the coding methods of the application. Optimistic locking assumes that no one else will attempt to change the data while a user is working on the data in a form. Therefore, you can read the data and then later go back and update the data based on what you originally read. Optimistic locking does not apply locks while a user is working with data in the front-end application. The disadvantage of optimistic locking is that multiple users can read and write the data because they aren't blocked from doing so by locks, which can result in lost updates.

The pessimistic (or "Murphy") method takes a different approach: if anything can go wrong, it will. When a user is working on some data, a pessimistic locking scheme locks that data until the user is finished with it. While pessimistic locking may work in small workgroup applications on desktop databases, large client/server applications require higher levels of concurrency. If SQL Server locks are held while a user is viewing the data in an application, performance will be unreasonably slow.

The accepted best practice is to implement an optimistic locking scheme using minimal SQL Server locks, as well as a method for preventing lost updates.

Lost updates

A lost update occurs when two users edit the same row, complete their edits, and save the data, and the second user's update overwrites the first user's update. For example:

1. Joe opens Product 1001, a 21-inch box kite, in the front-end application. SQL Server applies a shared lock for a split second while retrieving the data.
2. Sue also opens Product 1001 using the front-end application.
3. Joe and Sue both make edits to the box-kite data. Joe rephrases the product description, and Sue fixes the product category.
4. Joe saves the data in the application, which sends an update to SQL Server. The UPDATE command replaces the old product description with Joe's new description.
5. Sue presses the "save and close" button, and her data is sent to SQL Server in another UPDATE statement. The product category is now fixed, but the old description was in Sue's form, so Joe's new description was overwritten with the old description.
6. Joe discovers the error and complains to the IT vice president during the next round of golf about the unreliability of that new SQL Server-based database.

Because lost updates only occur when two users edit the same row at the same time, the problem might not occur for months. Nonetheless, it's a flaw in the transactional integrity of the database and should be prevented.

Minimizing lost updates

If the application is going to use an optimistic locking scheme, you can minimize the chance that a lost update can occur, as well as minimize the effects of a lost update, using the following methods:

■ Normalize the database so that it has many long, narrow tables. With fewer columns in a row, the chance of a lost update is reduced. For example, the OBXKites database has a separate table for prices. A user can work on product pricing and not interfere with another user working on other product data.
■ If the UPDATE statement is being constructed by the front-end application, have it check the controls and send an update for only those columns that are actually changed by the user. This technique alone would prevent the lost update described in the previous example of Joe's and Sue's updates, and most lost updates in the real world. As an added benefit, it reduces client/server traffic and the workload on SQL Server.
■ If an optimistic locking scheme is not preventing lost updates, the application is using a "he who writes last, writes best" scheme. Although lost updates may occur, a data-audit trail can minimize the effect by exposing updates to the same row within minutes and tracking the data changes.

Preventing lost updates

A stronger solution to the lost update problem than just minimizing the effect is to block lost updates where the data has changed since it was read. This can be done in two ways. The more complicated version checks the current value of each column against the value that was originally read. Although it can be very complicated, it offers you very fine-grained control over doing partial updates.
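The book doesn't show the column-comparison approach in code; the following is a minimal sketch of it. The variables stand in for values the front-end application captured, and the column name and data type are assumptions about the OBXKites Product table.

DECLARE
  @OriginalProductName NVARCHAR(50) = 'Basic Box Kite 21 inch',  -- value read into the form
  @NewProductName      NVARCHAR(50) = 'Joe''s Update';           -- the user's edit

UPDATE Product
  SET ProductName = @NewProductName
  WHERE ProductCode = '1001'
    AND ProductName = @OriginalProductName;  -- update only if the column is unchanged

IF @@ROWCOUNT = 0
  PRINT 'The row was changed by another user; no update was made.';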
The second way is to use the RowVersion method. The rowversion data type, previously known as timestamp in earlier versions of SQL Server, automatically provides a new value every time the row is updated. By comparing the RowVersion value retrieved during the row select with the RowVersion value at the time of update, it's trivial for code to detect whether the row has been changed and a lost update would occur.

The RowVersion method can be used in SELECT and UPDATE statements by adding the RowVersion value to the WHERE clause of the UPDATE statement. The following sequence demonstrates the RowVersion technique using two user updates. Both users begin by opening the 21-inch box kite in the front-end application. Both SELECT statements retrieve the RowVersion column and ProductName:

SELECT RowVersion, ProductName
  FROM Product
  WHERE ProductCode = '1001';

Result:

RowVersion          ProductName
0x0000000000000077  Basic Box Kite 21 inch

Both front-end applications can grab the data and populate the form. Joe edits the ProductName to "Joe's Update." When Joe is ready to update the database, the "save and close" button executes the following SQL statement:

UPDATE Product
  SET ProductName = 'Joe''s Update'
  WHERE ProductCode = '1001'
    AND RowVersion = 0x0000000000000077;

Once SQL Server has processed Joe's update, it automatically updates the RowVersion value as well. Checking the row again, Joe sees that his edit took effect:

SELECT RowVersion, ProductName
  FROM Product
  WHERE ProductCode = '1001';

Result:

RowVersion          ProductName
0x00000000000000B9  Joe's Update

If the update procedure checks to see whether any rows were affected, it can detect that Joe's edit was accepted:

SELECT @@ROWCOUNT;

Result:

1

Although the RowVersion column's value was changed, Sue's front-end application isn't aware of the new value. When Sue attempts to save her edit, the UPDATE statement won't find any rows meeting that criterion:

UPDATE Product
  SET ProductName = 'Sue''s Update'
  WHERE ProductCode = '1001'
    AND RowVersion = 0x0000000000000077;

If the update procedure checks to see whether any rows were affected, it can detect that Sue's edit was ignored:

SELECT @@ROWCOUNT;

Result:

0

This method can also be incorporated into applications driven by stored procedures. The FETCH or GET stored procedure returns the RowVersion along with the rest of the data for the row. When the application is ready to update and calls the UPDATE stored procedure, it includes the RowVersion as one of the required parameters. The UPDATE stored procedure can then check the RowVersion and raise an error if the two don't match. If the method is sophisticated, the stored procedure or the front-end application can check the audit trail to see whether or not the columns updated would cause a lost update and report the changes to the last user in the error dialog.
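The chapter describes that update procedure but doesn't list it, so here is a minimal sketch of what it might look like. The procedure name, parameter names, and data types are assumptions; a rowversion value compares as binary(8).

CREATE PROCEDURE pProduct_Update
  @ProductCode     VARCHAR(15),
  @NewProductName  NVARCHAR(50),
  @RowVersion      BINARY(8)       -- the value returned by the fetch procedure
AS
BEGIN
  SET NOCOUNT ON;

  UPDATE Product
    SET ProductName = @NewProductName
    WHERE ProductCode = @ProductCode
      AND RowVersion = @RowVersion;   -- no match if the row has changed

  IF @@ROWCOUNT = 0
    RAISERROR ('The row has been changed since it was read; the update was not applied.', 16, 1);
END;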
Transaction-Log Architecture

SQL Server's design meets the transactional-integrity ACID properties, largely because of its write-ahead transaction log. The write-ahead transaction log ensures the durability of every transaction.

Transaction log sequence

Every data-modification operation goes through the same sequence, in which it writes first to the transaction log and then to the data file. The following sections describe the 12 steps in a transaction.

Database beginning state

Before the transaction begins, the database is in a consistent state. All indexes are complete and point to the correct row. The data meets all the enforced rules for data integrity. Every foreign key points to a valid primary key. Some data pages are likely already cached in memory. Additional data pages or index pages are copied into memory as needed.

Here are the steps of a transaction:

1. The database is in a consistent state.

Data-modification command

The transaction is initiated by a submitted query, batch, or stored procedure, as shown in Figure 66-10.

2. The code issues a BEGIN TRANSACTION command. Even when the DML command is a stand-alone command without a BEGIN TRANSACTION and a COMMIT TRANSACTION, it is still handled as a transaction.

3. The code issues a single DML INSERT, UPDATE, or DELETE command, or a series of them.

To give you an example of the transaction log in action, the following code initiates a transaction and then submits two UPDATE commands:

USE OBXKites;

BEGIN TRANSACTION;

UPDATE Product
  SET ProductDescription = 'Transaction Log Test A',
      DiscontinueDate = '12/31/2003'
  WHERE Code = '1001';

UPDATE Product
  SET ProductDescription = 'Transaction Log Test B',
      DiscontinueDate = '4/1/2003'
  WHERE Code = '1002';

Notice that the transaction has not yet been committed.

FIGURE 66-10: The SQL DML commands are performed in memory as part of a transaction.

4. The query optimization plan is either generated or pulled from memory. Any required locks are applied, and the data modifications, including index updates, page splits, and any other required system operations, are performed in memory. At this point the data pages in memory are different from those that are stored in the data file.

The following section continues the chronological walk through the process.

Transaction log recorded

The most important aspect of the transaction log is that all data modifications are written to it and confirmed prior to being written to the data file (refer to Figure 66-10).

Best Practice: The write-ahead nature of the transaction log is what makes it critical that the transaction log be stored on a different disk subsystem from the data file. If they are stored separately and either disk subsystem fails, then the database will still be intact, and you will be able to recover it to the split second before the failure. Conversely, if they are on the same drive, a drive failure will require you to restore from the last backup. If the transaction log fails, it can't be recovered from the data file, so it's a best practice to invest in redundancy for the T-Log files along with regular T-Log backups.

5. The data modifications are written to the transaction log.

6. The transaction log DML entries are confirmed. This ensures that the log entries are in fact written to the transaction log.

Transaction commit

When the sequence of tasks is complete, the COMMIT TRANSACTION closes the transaction. Even this task is written to the transaction log, as shown in Figure 66-11.

FIGURE 66-11: The COMMIT TRANSACTION command launches another insert into the transaction log.

7. The following code closes the transaction:

COMMIT TRANSACTION;

To watch transactions post to the transaction log, watch the Transaction screencast on www.sqlserverbible.com.

8. The COMMIT entry is written to the transaction log.

9. The transaction-log COMMIT entry is confirmed (see Figure 66-12).

If you're interested in digging deeper into the transaction log, you might want to research ::fn_dblog(startlsn, endlsn), an undocumented system function that reads the log. Also, Change Data Capture leverages the transaction log, so there are some new functions to work with the transaction log and LSNs, as described in Chapter 60, "Change Data Capture."
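As an illustration only (fn_dblog is undocumented and unsupported, so its behavior and column names may differ between builds), passing NULL for both LSN arguments returns the active portion of the log, where the begin, row-modification, and commit entries from the example above can typically be seen:

-- fn_dblog is undocumented; use it only for exploration.
SELECT [Current LSN], Operation, [Transaction ID], AllocUnitName
  FROM ::fn_dblog(NULL, NULL)   -- NULL, NULL reads the active log
  WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_MODIFY_ROW', 'LOP_COMMIT_XACT');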
Data-file update

With the transaction safely stored in the transaction log, the last operation is to write the data modification to the data file, as shown in Figure 66-13.

FIGURE 66-12: Viewing committed transactions in the transaction log using ApexSQL Log, a third-party product.

FIGURE 66-13: As one of the last steps, the data modification is written to the data file. The figure labels the remaining steps of the sequence: 10) the data page is written to the data file when time permits and the entry is tagged in the T-Log; 11) the transaction is marked as written to the data file; 12) the database finishes in a consistent state.
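Step 10 in the figure notes that SQL Server writes the data page "when it gets time." Although it isn't part of the book's example, the CHECKPOINT command can be used to force the dirty pages of the current database to be written to the data file immediately rather than waiting for that background write:

USE OBXKites;
CHECKPOINT;   -- flush dirty data pages for the current database to the data file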
