SQL Server MVP Deep Dives - P16


CHAPTER 42 Tracing the deadlock

frame procname=WF.dbo.ViewThread line=20 stmtstart=1090
➥ stmtend=1362
➥ sqlhandle=0x03000600b15244168cf9db002b9b00000100000000000000

This section lists the full three-part name of the procedure that the process was running. If the call was ad hoc SQL rather than a stored procedure, the procname will read adhoc. The line number indicates on which line of the procedure the specific SQL statement starts. If the line number is 1, it's a strong indication that the statement is a piece of dynamic SQL. The statement start and statement end values specify the offsets within the procedure where the query starts and ends.

The sql_handle can be used with the sys.dm_exec_sql_text DMF to get the SQL statement from the server's procedure cache. This usually isn't necessary, as most of the time the statement is reproduced in full in the deadlock graph right below this line.

The input buffer lists either the entire query (for ad hoc SQL) or the database ID and object ID for a stored procedure:

inputbuf Proc [Database Id = Object Id = 373576369]

The object ID can be translated back to an object name using the OBJECT_NAME function:

SELECT OBJECT_NAME(373576369, 6)

In this case it returns ViewThread, matching what was shown for the process name earlier in the deadlock graph.

NOTE: The OBJECT_NAME function took only one parameter, the object ID, prior to SQL Server 2005 SP2. From SP2 onward, it accepts an optional second parameter, the database ID.

The second process listed in the deadlock graph contains the same information and can be read in much the same way. I won't go through it all in detail, as many of the explanations given for the first process apply to the second as well. The second process has a different waitresource than the first one did. In the case of the key lock, it was trivial to identify the table involved. The second process was waiting on a page lock:

process id=process809f8748 waitresource=PAGE: 6:1:351 spid=55

The numbers listed for the page indicate the database ID (6), the file ID (1), and the page ID (351). The object that owns the page can be identified using DBCC PAGE, but in this case it's not necessary, as the name of the table is given later in the deadlock graph in the list of resources. If DBCC PAGE were used, it would show that page 6:1:351 belongs to a table with the ID 85575343; using the OBJECT_NAME function reveals that the table's name is Threads. The procedure that the second process was running is WF.dbo.ViewForum, and the graph likewise gives the line on which the statement began.

DBCC PAGE
DBCC PAGE is an undocumented but well-known command that shows the contents of database pages. The command takes four parameters, the last being optional. The first three are the database ID or database name, the file ID, and the page ID. The last parameter indicates the print options. Among the information that can be retrieved from the page header is the object and index that the page belongs to (if it's a data or index page). To return the results to a query window, trace flag 3604 has to be enabled. An example use of DBCC PAGE would be

DBCC TRACEON (3604)
DBCC PAGE (1,1,215,0)
DBCC TRACEOFF (3604)

By this point, we have a fairly clear idea of what was happening when the deadlock occurred. The process with a session ID of 53 requested a shared lock on the index key 6:72057594038845440 (1900f638aaf3) in the Users table so that it could run a select that starts on line 20 of the procedure ViewThread. The second process, with a session ID of 55, requested a shared lock on the page 6:1:351 belonging to the Threads table so that it could run a select from the procedure ViewForum.

The resource list

What we don't know yet is what locks these two processes were holding when the deadlock occurred. That's where the third part of the deadlock graph, the resource list, comes in. The resource list shows all of the locks involved in the deadlock, along with which process had them and which process wanted them.

The first entry in the resource list refers to the key lock on the primary key of the Users table. We know from the first section of the deadlock graph that this is what session ID 53 was waiting for:

keylock hobtid=72057594038845440 dbid=6
➥ objectname=WF.dbo.Users indexname=PK_Users
➥ id=lock800c0f00 mode=X associatedObjectId=72057594038845440
owner-list
owner id=process809f8748 mode=X
waiter-list
waiter id=process803294c8 mode=S requestType=wait

Process process809f8748, which is session ID 55, owned that lock in exclusive mode. Process process803294c8 requested the lock in shared mode and had to wait.

The second entry in the resource list refers to the page lock on the Threads table. We know from the first section that this is what session ID 55 was waiting for:

pagelock fileid=1 pageid=351 dbid=6 objectname=WF.dbo.Threads
➥ id=lock83255f80 mode=IX associatedObjectId=72057594038910976
owner-list
owner id=process803294c8 mode=IX
waiter-list
waiter id=process809f8748 mode=S requestType=wait

The process process803294c8, which is session ID 53, owned that resource in intent-exclusive mode, and hence process process809f8748 (session ID 55) had to wait.

NOTE: An intent-exclusive (IX) lock is taken by a process before an exclusive lock, at a lower level of lock granularity, to signal the intention to take an exclusive lock; hence the name.

The two most common values for requestType are wait and convert. A value of wait indicates that a new lock has been requested; convert indicates that the process already holds a lock on the resource in one mode and has requested to convert it to a different mode. A typical example is a process that holds a shared lock and wishes to convert it to exclusive. The conversion can only occur if there are no locks on the resource that are incompatible with the new lock mode. This happens, for example, in the repeatable read isolation level, where a row is selected and then updated within a transaction.

The big picture

Now the full picture of the events that led up to and resulted in the deadlock is clear. The process with a session ID of 53, while running the procedure ViewThread, began an explicit transaction and did a data modification on the Threads table. Later in the same transaction, it tried to do a select on the Users table, which was blocked. The second process, session ID 55, ran the procedure ViewForum. Within that procedure, it began an explicit transaction and did a data modification on the Users table. Following that, it attempted to run a select on the Threads table, which was blocked.

This deadlock turns out to be a case of objects accessed in different orders from different places. Although fixing the code is beyond the scope of this chapter, it turns out that reordering the queries in one of those procedures, so that the objects are accessed in the same order in both, prevents the deadlock completely. Changing the isolation level to Snapshot or Read Committed Snapshot would also prevent the deadlock, because in the optimistic concurrency model, writers don't block readers. Again, a discussion of snapshot isolation is beyond the scope of this chapter.

Summary

The deadlock graph is the key to understanding and resolving deadlocks. The list of resources locked shows the state of events before the deadlock. The information provided about each process's state at the time of the deadlock, including the locks requested and the process's input buffer, shows the cause of the deadlock clearly.

About the author

Gail is a database consultant from Johannesburg, South Africa, specializing in performance tuning and database optimization. Before moving to consulting, she worked at a large South African investment bank and was responsible for the performance of the major systems there. She was awarded MVP for SQL Server in July 2008 and spoke at both TechEd South Africa and the PASS Community Summit in Seattle in the same year. She's a frequent poster on the SQLServerCentral forums and has written a number of articles for the same site.

CHAPTER 43 How to optimize tempdb performance

Brad M. McGehee

Although most DBAs know about the tempdb database, many seem to think of it as a black box that takes care of itself with no involvement required from the DBA. Although this may be true on smaller, less active SQL Server instances, tempdb can significantly affect SQL Server's performance. DBAs can act to ensure that tempdb performance is optimized, and in doing so optimize the overall performance of SQL Server.

What is tempdb used for?

The tempdb database is one of SQL Server's included system databases and is used as a shared temporary workspace for many different kinds of activities, such as the following:

- Storing user objects, such as temporary local and global tables and indexes, temporary stored procedures, table variables, and cursors
- Storing temporary work tables used for hash joins, aggregations, cursors, and spool operations, and temporarily storing large objects
- Storing intermediate sort results from many different internal operations, such as creating or rebuilding indexes, in addition to some GROUP BY, ORDER BY, and UNION queries
- Storing objects used by AFTER triggers and INSTEAD OF triggers
- Storing large XML objects
- Storing the SQL Server version store (SQL Server 2005/2008), which includes the common version store and the online-index-build version store (Enterprise Edition)
- Storing intermediate results and sorting activity used during DBCC CHECKDB operations
- Storing temporary objects for Service Broker
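The tempdb consumers in the list above can be observed directly on a live instance. Here is a minimal sketch, assuming SQL Server 2005 or later, using the sys.dm_db_file_space_usage DMV (which, in 2005/2008, reports on tempdb only) to break allocated space down by consumer type:

```sql
-- Page counts are 8 KB pages; multiply by 8 to get KB.
SELECT
    SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,     -- temp tables, table variables
    SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb, -- sorts, hashes, spools
    SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,    -- row-versioning activity
    SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM sys.dm_db_file_space_usage;
```

Sampling this query over time shows which of the activities above dominates tempdb on a given instance.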
If your SQL Server instance is not busy, and it doesn't employ many of the activities described previously, then tempdb performance may not be a problem for you. On the other hand, if any of your SQL Server instances are busy and heavily involved in many, if not most, of the previously described activities, you may find that tempdb becomes a significant bottleneck for the entire SQL Server instance.

Tempdb internals

Unlike other SQL Server databases, the tempdb database is dropped and re-created every time the SQL Server service is stopped and restarted. Here's what happens. When the SQL Server service is started, by default SQL Server makes a copy of the model database to create a new tempdb database, inheriting any customizations made to the model database; a transaction log file is created in addition. For both the MDF and the LDF files, autogrowth is set to grow by 10 percent with unrestricted growth.

Each SQL Server instance may have only one tempdb database. In addition, tempdb exhibits many behaviors that don't occur with other SQL Server databases. For example, tempdb is configured to run using the simple recovery model, and this setting cannot be changed. Many database options, such as Online, Read Write, Auto Close, Auto Shrink, and others, are preset and cannot be modified. The tempdb database has many other restrictions, including the following: it can't be dropped; it can't be captured in a database snapshot; it can't participate in mirroring; and it doesn't allow DBCC CHECKDB to be run on it. And as you would expect, neither the tempdb database nor its transaction log file can be backed up. This makes sense, as tempdb is designed for temporary objects only and is re-created each time SQL Server restarts.

After tempdb has been created, DBAs can create objects in it just as in other databases. As user-created or internal objects are added to tempdb, it automatically grows as necessary to whatever size is required to hold them. On servers with heavy tempdb activity, tempdb can grow considerably.

As activity transpires in tempdb, transaction logging occurs, but somewhat differently than with other SQL Server databases. Operations performed within tempdb are minimally logged, which means that only enough information is logged so that temporary objects can be rolled back if necessary. Minimal logging helps to reduce the overhead put on the SQL Server instance. Because the database is set to the simple recovery model, the transaction log is truncated constantly.

How DBAs can help optimize tempdb

As mentioned previously, if your SQL Server instance doesn't use tempdb much, then tempdb performance may not be an issue for you. On the other hand, keep one important thing in mind about tempdb: there is only one tempdb database per SQL Server instance, and it can become a major bottleneck that affects the performance of your entire SQL Server. Even if most of your applications and databases behave well, a single misbehaving application and database can affect the performance of all the other databases running on the same server.

When building a new SQL Server instance, it is often difficult to determine how busy the tempdb database will be in production. Because of this, you may want to consider implementing many of the following suggestions when you build a new instance; in many cases it is much easier to implement these recommendations when the server is first built than to retrofit them after a problem has developed. Consider this an ounce of prevention against potential and unforeseen problems in the future.

In the following sections, we take a look at many different practices you can employ to help optimize the performance of tempdb. Because each SQL Server instance is different, I am not suggesting that you employ every one of these recommendations on each of your SQL Servers. Instead, evaluate the kinds of problems you are having (or may have, if you are building a new SQL Server), consider the available options (for example, you may not have the option to reconfigure your storage array or purchase new hardware), and implement only those recommendations that best meet your needs.

Minimizing the use of tempdb

As we discussed earlier, a lot of activity can occur in tempdb. In some cases, you can take steps to reduce SQL Server's use of tempdb, helping to boost overall SQL Server performance. Although this is not a comprehensive list, here are some practices you may want to avoid:

- Using user-created temp tables unnecessarily. Often, I have seen T-SQL code that creates temp tables when the code could have been written to perform the same task without them. If you do have to use a temp table, and the table has several thousand rows and is accessed frequently, consider adding an index to it to boost the performance of queries run against it. You may have to experiment with indexing to see whether it helps or hurts overall performance.
- Scheduling jobs that use tempdb heavily, such as DBCC CHECKDB, at times of the day when the SQL Server instance is busy.
- Using static and keyset-driven cursors. In many cases, cursors can be avoided by rewriting the code.
- Using recursive common table expression queries. If the execution plan for such a query shows a spool operator, then you know that tempdb is being used to execute it.
- Using the SORT_IN_TEMPDB option when creating or rebuilding an index. If you do decide to use this option, schedule the job to run during a less busy time of the day.
- Using online index rebuilding (Enterprise Edition), which uses row versioning and, in turn, tempdb.
- Using large object data types.
- Using table variables.
- Returning and sorting excessive amounts of data. Smaller sorts can be done in memory, but larger sorts spill over into tempdb.
- Returning and aggregating excessive amounts of data.
- Using joins that show a hash-type join in the query's execution plan.
- Using AFTER and INSTEAD OF triggers.
- Using row-versioning-based transaction isolation levels.

You may know that you can't avoid using the features in this list. If that's the case, then use them, but be aware that each of these choices directly affects tempdb's performance. If you are having tempdb bottleneck issues and can't reduce the activity occurring in tempdb, you will have to resort to some of the other tempdb optimization suggestions in this chapter instead.

Preallocating tempdb space and avoiding use of autogrowth

Every time SQL Server restarts using its default settings, a new small copy of tempdb is created (regardless of the model database's size), and autogrowth is set to 10 percent with unrestricted growth. For all but the smallest SQL Server instances, the default size is rarely large enough, and because of this, tempdb must grow on demand to whatever size is necessary for your SQL Server instance to continue functioning.

As you might expect, allowing autogrowth to size tempdb can often be problematic. When SQL Server is restarted, tempdb starts at its small initial size; soon thereafter, autogrowth kicks in and grows it 10 percent at a time, over and over again, until it reaches its normal operating size. This can result in autogrowth kicking in at busy times of the day, using up valuable server resources and delaying transactions from completing, because the autogrow operation has to complete before the transaction can. In addition, the use of autogrowth contributes to physical file fragmentation on disk, which can put extra stress on your disk I/O subsystem.

Instead of using autogrowth to manage tempdb sizing, use ALTER DATABASE to set tempdb's MDF and LDF files to their optimal size. This way, when the SQL Server service is restarted, tempdb will be sized correctly right away. Additional gains can be achieved if you are using instant file initialization for your SQL Server instance, as this feature can dramatically decrease the time it takes for tempdb to be created when SQL Server is restarted, or when autogrowth events occur.

The difficult part is determining the optimal size of tempdb for a particular SQL Server instance. Although Books Online offers a way to estimate the normal size of tempdb, in practice this is difficult to do. Personally, I start with an initial guess at the size (based on similar servers I am managing) and then watch the tempdb size over time. At some point, you should notice that tempdb is no longer growing, or that the amount of space you have allocated is not being fully used. I then round the maximum amount of tempdb space actually used up to the nearest gigabyte and use that figure as the optimal size for my tempdb database. This applies to both the MDF and the LDF files. I still leave autogrowth on, as something unexpected may come up, and I want tempdb to be able to grow if it needs to. Once a particular SQL Server instance's tempdb size has been set, every time SQL Server is restarted, an optimally sized tempdb will be created in one fell swoop, and you won't have to worry about the potential downsides of using autogrowth to size tempdb.

Don't shrink tempdb if you don't need to

After you have established the optimal size of tempdb, you may notice that the actual space used within it varies considerably. Sometimes it may be less than half used, and at other times it may be virtually full. For example, sometimes only 250 MB of the allocated space may be in use, whereas at other times 950 MB may be in use. This is normal behavior, as the amount of space used in tempdb varies depending on what is happening on the SQL Server instance.

Don't let this varying amount of space trick you into shrinking tempdb when space usage drops. If you do, you will be using SQL Server resources unnecessarily and contributing to worse overall performance. For example, let's say that on a Friday you notice that tempdb is using only 250 MB of its allocated space, so you decide to shrink it to 250 MB to recover disk space. But on Saturday a series of weekly jobs runs that causes the now-250-MB tempdb to grow back to its former size using the autogrowth feature. In this case, all the resources required to shrink the tempdb database, and then to autogrow it again, will have been wasted.

In some cases, a wild-running query will cause tempdb to grow much larger than its normal size. If you are short on disk space and want to reclaim this empty space, and you are fairly confident that the wild-running query won't repeat itself, you have two choices:

- Shrink the database and log files manually using DBCC SHRINKFILE. This option may not work as expected, as internal objects and the version store aren't moved during this process, often resulting in less shrinkage than expected.
- Restart the SQL Server instance. This re-creates tempdb at the size specified by the ALTER DATABASE statement. This option is more effective, but you must have downtime available for it to work.

Think of tempdb's optimal size as its maximum size. If you keep this in mind, you won't be tempted to shrink it when it is less than full.

Dividing tempdb among multiple physical files

Although there can be only one tempdb database per SQL Server instance, the tempdb database can be split into multiple physical files. If your tempdb is not active, splitting it into multiple files may not buy you much additional performance. But if your tempdb is active, splitting it into multiple files can potentially boost your server's overall performance.

Unfortunately, selecting the ideal number of physical files needed to optimize tempdb's performance is not an exact science. As a starting point, consider creating as many physical files as there are CPU cores available to your SQL Server instance.

NOTE: If you decide to use multiple files for tempdb, it is important that each of them be exactly the same size (tempdb's normal size divided evenly among the files), because SQL Server uses a proportional fill strategy to fill the physical files. In addition, the autogrowth settings should be identical for each physical file in order to ensure that each file grows identically.

Multiple files can boost disk I/O performance and reduce contention by spreading I/O activity over multiple files. This is beneficial even if the multiple files reside on a single disk volume, although locating each physical tempdb file on its own disk volume would generally provide an even greater benefit (at a much greater cost). On the other hand, using more physical files can increase switching costs and file-management overhead, because each object created in tempdb has to have IAM pages created in each of the physical files. This, and other unknowns about your SQL Server instance, makes it hard to give an exact recommendation for the optimal number of physical files for your tempdb. I recommend that you perform tests in your own environment to determine the number of physical files that optimizes tempdb for your particular needs. After testing, you may find that more or fewer tempdb physical files are needed for optimal performance.

Although the tempdb MDF file should generally be split into multiple physical files, this is not the case with the tempdb LDF file. Because tempdb uses the simple recovery model, and because it uses its own optimized logging method, dividing the LDF file into multiple physical files rarely provides any benefit.

Moving tempdb to a disk separate from your other databases

By default, when installing SQL Server, tempdb is stored in the same location as the rest of your databases. Although this may work for smaller, less busy SQL Servers, it can cause a lot of I/O contention problems on busy ones. Ideally, tempdb should be located on its own disk volume(s), separate from other disk activity.

Locating tempdb on a fast I/O subsystem

No matter where tempdb is located, it should be located on the fastest I/O subsystem available to your SQL Server instance. Your hardware will limit what you can do, but you might want to consider the following options, assuming your hardware (and budget) permits:

... Microsoft SQL Server MVP with over 14 years of experience with SQL Server and many years of training experience. Brad is a frequent speaker at SQL PASS, European PASS, SQL Connections, SQLTeach, SQLBits, ...

... additional RAM to your SQL Server instance, so that these activities can be completed in the buffer cache without using tempdb.

Using SQL Server 2008 transparent data encryption

In SQL Server 2008, if
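The preallocation and file-splitting recommendations above can be sketched with ALTER DATABASE. The sizes, the second data file, and the F:\tempdb path below are illustrative assumptions only; tempdev and templog are SQL Server's default logical file names for tempdb:

```sql
-- Presize the existing tempdb files so they come up at full size on restart,
-- instead of growing 10 percent at a time during busy hours.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1GB, FILEGROWTH = 10%);
ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 256MB, FILEGROWTH = 10%);

-- Add equally sized data files (the chapter's starting point is one file
-- per CPU core); identical sizes and growth settings keep the
-- proportional fill even across the files.
ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2,
              FILENAME = 'F:\tempdb\tempdev2.ndf',
              SIZE = 1GB,
              FILEGROWTH = 10%);
```

Because tempdb is rebuilt from these settings each time the service starts, the sizes persist across restarts; leaving autogrowth enabled remains the safety net for unexpected load.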

Posted: 15/12/2013, 13:15
