PART VI
Advanced Topics
LEARN TO:
• Use locking
• Monitor and optimize SQL Server 2000
• Use replication
• Use Analysis Services
• Use Microsoft English Query
• Troubleshoot
CHAPTER 25
Locking
FEATURING:
Why Locking?
Isolation Levels
Locking Mechanics
Viewing Current Locks
Deadlocks
Customizing Locking Behavior
Application Locks
Summary
One of the key features of SQL Server 2000 is that it's been designed from the start to support many users of the same database at the same time. It's this support that leads to the need for locking. Locking refers to the ability of the database server to reserve resources such as rows of data or pages of an index for the use of one particular user at a time. In this chapter, we'll explore the reasons why locking is necessary in multiuser databases and see the details of SQL Server's locking implementation.
Why Locking?
It may seem counterintuitive that a multiuser database would require the ability to lock users out of their data. Wouldn't it make more sense to just let everyone get to the data, so they can get their business done as fast as possible and let the next person use the data? Unfortunately, this doesn't work, because working with data often takes many operations that require everything to stay consistent. In this section, we'll discuss the specific problems that locking solves:
• Lost updates
• Uncommitted dependencies
• Inconsistent analysis
• Phantom reads
We'll also take a look at concurrency, and explain the difference between optimistic and pessimistic concurrency.
Lost Updates
One of the classic database problems is the lost update. Suppose Joe is on the phone with the Accounting Department of XYZ Corporation, and Mary, who is entering changes of address for customers, happens to find a change of address card for XYZ Corporation at roughly the same time. Both Joe and Mary display the record for XYZ from the Customers table on their computers at the same time. Joe comes to an agreement to raise XYZ's credit limit, makes the change on his computer, and saves the change back to the SQL Server database. A few minutes later, Mary finishes updating XYZ's address and saves her changes. Unfortunately, her computer didn't know about the new credit limit (it had read the original credit limit before Joe raised it), so Joe's change is overwritten without a trace.
A lost update can happen anytime two independent transactions select the same row in a table and then update it based on the data that they originally selected.
One way to solve this problem is to lock out the second update. In the example above, if Mary was unable to save changes without first retrieving the changes that Joe made, both the new credit limit and the new address would end up in the Customers table.
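In T-SQL terms, the sequence looks roughly like the sketch below. The Customers table, its columns, and the values here are hypothetical (they are not part of a SQL Server sample database); the point is simply that each connection writes back every column it originally read:

-- Connection 1 (Joe) reads the row, then saves a new credit limit
-- together with the address he originally read:
UPDATE Customers
SET CreditLimit = 50000, Address = '100 Main St.'   -- address as Joe first saw it
WHERE CustomerID = 42

-- Connection 2 (Mary) read the same row before Joe saved, and now saves
-- the new address together with the credit limit she originally read:
UPDATE Customers
SET CreditLimit = 20000, Address = '700 Oak Ave.'   -- credit limit as Mary first saw it
WHERE CustomerID = 42
-- Joe's new credit limit (50000) has been silently overwritten.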
Uncommitted Dependencies
Uncommitted dependencies are sometimes called dirty reads. This problem happens
when a record is read while it’s still being updated, but before the updates are final.
For example, suppose Mary is entering a change of address for XYZ Corporation
through a program that saves each changed field as it’s entered. She enters a wrong
street address, then catches herself and goes back to correct it. However, before she
can enter the correct address, Mark prints out an address label for the company. Even
though Mary puts the correct data in before leaving the company’s record, Mark has
read the wrong data from the table.
One way to avoid the problem of dirty reads is to lock data while it’s being written,
so no one else can read it before the changes are final.
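On a test server you can reproduce a dirty read with two connections, as in the hedged sketch below. It reuses the hypothetical Customers table from the previous example; the second connection reads at the Read Uncommitted isolation level (covered later in this chapter), so it does not wait for the first connection's locks:

-- Connection 1 (Mary): the wrong address is written but not yet committed
BEGIN TRANSACTION
UPDATE Customers SET Address = '123 Wrong St.' WHERE CustomerID = 42

-- Connection 2 (Mark): reads the uncommitted value
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT Address FROM Customers WHERE CustomerID = 42   -- returns '123 Wrong St.'

-- Connection 1: fixes the mistake, so the value Mark read never became permanent
UPDATE Customers SET Address = '123 Right St.' WHERE CustomerID = 42
COMMIT TRANSACTION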
Inconsistent Analysis
The inconsistent analysis problem is related to the uncommitted dependencies problem. Inconsistent analysis is caused by nonrepeatable reads, which can happen when data is being read by one process while the data's being written by another process.
Suppose Betty is updating the monthly sales figures for each of the company's divisions by entering new numbers into a row of the Sales table. Even though she puts all the changes on her screen to be saved at once, it takes SQL Server a little time to write the changes to the database. If Roger runs a query to total the monthly sales for the entire company while this data is being saved, the total will include some old data and some new data. If he runs the query again a moment later, it will include all new data and give a different answer. Thus, the original read was nonrepeatable.
Inconsistent analysis can be avoided if reads are not allowed while data is being written.
Phantom Reads
The final major problem that locking can help solve is the problem of phantom reads. These occur when an application thinks it has a stable set of data, but other applications are inserting rows into the data. Suppose Roger runs a query that retrieves all of the sales for March. If he asks for sales for March 15 twice in a row, he should get the same answer. However, if Mildred was inserting data for March 15, and Roger's
application read the new data, he might get a different answer the second time. The
new data is called phantom data, because it appeared mysteriously even though it
wasn’t originally present in the data that was retrieved.
Phantom reads can be avoided if some processes are locked out of inserting data
into a set of data that another process is using.
Optimistic and Pessimistic Concurrency
There are two broad strategies for locking in the world of databases. These are referred
to as concurrency control methods, because they control when users can work with
resources that other users are also manipulating.
With optimistic concurrency control, the server makes the assumption that resource
conflicts are unlikely. In this case, resources (for example, a row in a table) are locked
only while a change is about to be saved. This minimizes the amount of time that
resources are locked. However, it increases the chance that another user will make a
change in a resource before you can. For example, you might discover when trying to
save a change that the data in the table is not the data that you originally read, and
need to read the new data and make your change again.
With pessimistic concurrency control, resources are locked when they are required
and are kept locked throughout a transaction. This avoids many of the problems of
optimistic concurrency control, but raises the possibility of deadlocks between
processes. We’ll discuss deadlocks later in the chapter.
In almost all situations, SQL Server uses pessimistic concurrency control. It's possible to use optimistic concurrency control by opening tables with a cursor instead of a query. Chapter 8 covers the use of cursors in T-SQL.
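As a rough sketch of what that looks like, T-SQL lets you declare a cursor with the OPTIMISTIC option: SQL Server takes no lock on the row while you hold it, but a positioned update fails if someone else has changed the row since you fetched it. The cursor name and phone number below are only example values, using the pubs sample database:

USE pubs
DECLARE au_cursor CURSOR SCROLL OPTIMISTIC
FOR SELECT au_id, phone FROM authors
OPEN au_cursor
FETCH NEXT FROM au_cursor
-- If another connection changed this row after the FETCH, the update below
-- is rejected instead of silently overwriting that change.
UPDATE authors SET phone = '408 555-0123' WHERE CURRENT OF au_cursor
CLOSE au_cursor
DEALLOCATE au_cursor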
Isolation Levels
The ANSI SQL standard defines four different isolation levels for transactions. These
levels specify how tolerant a transaction is of incorrect data. From lowest to highest,
the four isolation levels are as follows:
• Read Uncommitted
• Read Committed
• Repeatable Read
• Serializable
A lower isolation level increases concurrency and decreases waiting for other transactions, but increases the chance of reading incorrect data. A higher isolation level
decreases concurrency and increases waiting for other transactions, but decreases the
chance of reading incorrect data.
With the highest level of isolation, transactions are completely serialized, which means that they are completely independent of one another. If a set of transactions is serialized, the result is the same as if the transactions had been executed one at a time with no overlap, so concurrent transactions cannot interfere with one another's data.
The default isolation level for SQL Server transactions is Read Committed, but as you'll see later in this chapter, you can adjust this default for particular transactions.
NOTE For a discussion of the properties that define transactions and the T-SQL statements that manage transactions, see Chapter 8.
Table 25.1 shows which database problems can still occur with each isolation level.
TABLE 25.1: ISOLATION LEVELS AND DATABASE PROBLEMS
Isolation Level     Lost Updates   Dirty Reads   Nonrepeatable Reads   Phantom Reads
Read Uncommitted    Yes            Yes           Yes                   Yes
Read Committed      Yes            No            Yes                   Yes
Repeatable Read     No             No            No                    Yes
Serializable        No             No            No                    No
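The isolation level is set per connection with the SET TRANSACTION ISOLATION LEVEL statement and stays in effect until you change it. Here is a minimal sketch against the pubs sample database (the columns chosen are just for illustration):

USE pubs
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRANSACTION
-- Shared locks taken by the first read are held until COMMIT, so the rows
-- already read cannot change between the two SELECTs (new rows can still appear).
SELECT au_lname, phone FROM authors WHERE state = 'CA'
SELECT au_lname, phone FROM authors WHERE state = 'CA'
COMMIT TRANSACTION
-- Put the connection back on the default level
SET TRANSACTION ISOLATION LEVEL READ COMMITTED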
Locking Mechanics
To understand the way that SQL Server manages locks and properly interpret the display of locking information in SQL Server Enterprise Manager, you need to understand a few technical concepts. In this section, we'll cover the basics of these concepts, including locking granularity, locking modes, lock escalation, and dynamic locking.
Locking Granularity
Locking granularity refers to the size of the resources being locked at any given time.
For example, if a user is going to make a change to a single row in a table, it might
make sense to lock just that row. However, if that same user were to make changes to
multiple rows in a single transaction, it could make more sense for SQL Server to lock the entire table. A table lock has a higher (coarser) granularity than a row lock.
SQL Server 2000 can provide locks on six levels of granularity:
RID: RID stands for row ID. A RID lock applies a lock to a single row in a
table.
Key: Sometimes locks are applied to indexes rather than directly to tables. A
key lock locks a single row within an index.
Page: A single data page or index page contains 8KB of data.
Extent: Internally, SQL Server organizes pages into groups of eight similar pages (either data pages or index pages) called extents. An extent lock thus locks 64KB of data.
Table: A table lock locks an entire table.
DB: Under exceptional circumstances, SQL Server may lock an entire database. For example, when a database is placed into single-user mode for maintenance, a DB lock may be used to prevent other users from entering the database.
The smaller the lock granularity, the higher the concurrency in the database. For
example, if you lock a single row rather than an entire table, other users can work
with other rows in the same table. The trade-off is that smaller lock granularity generally means more system resources are devoted to tracking locks and lock conflicts.
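Normally you let SQL Server pick the granularity, but table hints (covered later in this chapter) let you request a specific level when you know the access pattern. A hedged sketch against the pubs sample database; ROWLOCK and TABLOCK ask for row-level and table-level locks respectively, and the author ID and phone number are just example values:

USE pubs
BEGIN TRANSACTION
-- Request row-level locks while changing a single author
UPDATE authors WITH (ROWLOCK)
SET phone = '408 555-0199'
WHERE au_id = '172-32-1176'
-- Request one table-level shared lock for a whole-table scan
SELECT COUNT(*) FROM titles WITH (TABLOCK)
ROLLBACK TRANSACTION   -- discard the change; this is only a demonstration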
Locking Modes
All locks are not created equal. SQL Server recognizes that some operations need complete and absolute access to data, while others merely want to signal that they might change the data. To provide more flexible locking behavior and lower the overall resource use of locking, SQL Server provides the following types of locks (each type has an abbreviation that is used in SQL Server Enterprise Manager):
Shared (S): Shared locks are used to ensure that a resource can be read. No
transaction can modify the data in a resource while a shared lock is being held
on that resource by any other transaction.
Update (U): Update locks signal that a transaction intends to modify a
resource. An update lock must be upgraded to an exclusive lock before the
transaction actually makes the modification. Only one transaction at a time
can hold an update lock on a particular resource. This limit helps prevent deadlocking (discussed in more detail later in the chapter).
Exclusive (X): If a transaction has an exclusive lock on a resource, no other
transaction can read or modify the data in that resource. This makes it safe for
the transaction holding the lock to modify the data itself.
Intent shared (IS): A transaction can place an intent shared lock on a
resource to indicate that the transaction intends to place shared locks on
resources at a lower level of granularity within the first resource. For example, a
transaction that intends to read a row in a table can place a shared lock on the
RID and an intent shared lock on the table itself. Intent shared locks help
improve SQL Server performance by making it easier for SQL Server to determine whether a transaction can be granted update or exclusive locks. If SQL Server finds an intent shared lock on the table, SQL Server doesn't need to examine every RID looking for shared locks on a row-by-row basis.
Intent exclusive (IX): A transaction can place an intent exclusive lock on a
resource to indicate that the transaction intends to place exclusive locks on
resources at a lower level of granularity within the first resource.
Shared with intent exclusive (SIX): A transaction can place a shared
with intent exclusive lock on a resource to indicate that the transaction intends
to read all of the resources at a lower level of granularity within the first
resource and modify some of those lower-level resources.
Schema modification (Sch-M): SQL Server places schema modification
locks on a table when DDL operations such as adding or dropping a column are
being performed on that table. Schema modification locks prevent any other
use of the table.
Schema stability (Sch-S): SQL Server places schema stability locks on a
table when compiling a query that is based at least in part on that table.
Schema stability locks do not prevent operations on the data in the table, but
they do prevent modifications to the structure of the table.
Bulk update (BU): SQL Server places bulk update locks on a table when bulk copying data into the table, if the TABLOCK hint is specified as part of the bulk copy operation or the table lock on bulk load option is set with sp_tableoption. Bulk update locks allow any process to bulk copy data into the table, but do not allow any other processes to use the data in the table.
Later in the chapter, you’ll see how you can use locking hints in T-SQL to specify
the exact lock mode that should be used for a particular operation.
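As a preview of those hints, the sketch below (again using the pubs sample database) reads a row with UPDLOCK so that no other transaction can slip an exclusive lock in between the read and the subsequent write, and uses HOLDLOCK to keep shared locks until the end of the transaction; the title ID and royalty change are only example values:

USE pubs
BEGIN TRANSACTION
-- Take an update lock on the rows read, so the later UPDATE cannot be lost
SELECT royalty FROM roysched WITH (UPDLOCK) WHERE title_id = 'BU1032'
UPDATE roysched SET royalty = royalty + 2 WHERE title_id = 'BU1032'
-- Hold shared locks on these rows until COMMIT, as the Serializable level would
SELECT qty FROM sales WITH (HOLDLOCK) WHERE title_id = 'BU1032'
COMMIT TRANSACTION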
One of the factors that determines whether a lock can be granted on a resource is
whether another lock already exists on the resource. Here are the rules that SQL
Server applies to determine whether a lock can be granted:
• If an X lock exists on a resource, no other lock can be granted on that resource.
• If an SIX lock exists on a resource, an IS lock can be granted on that resource.
• If an IX lock exists on a resource, an IS or IX lock can be granted on that
resource.
[...]

Viewing Current Locks

[...] The result set from sp_lock includes these columns:
spid: The SQL Server process ID. SQL Server assigns a unique number to each active process.
dbid: The SQL Server database ID for the database containing the lock. To see the database IDs on your server matched to database names, you can execute SELECT * FROM master..sysdatabases.
ObjId: The SQL Server object ID for the object being locked. You can retrieve [...]

FIGURE 25.2: Displaying lock information in SQL Server Enterprise Manager

The Process Info node displays the following information for each process currently running on the server:
spid: The process ID assigned to the process by SQL Server. This column also displays an icon that indicates the current status of the process.
User: The SQL Server user who owns the process.
[...] kilobytes of memory in use by the process.
Login Time: The date and time that the process connected to SQL Server.
Last Batch: The date and time that the process last sent a command to SQL Server.
Host: The server where the process is running.
Network Library: The network library being used for connection to SQL Server by the process.
Network Address: The physical network [...]
Blocked By: [...]
Blocking: [...]

Lock Escalation

SQL Server continuously monitors lock usage to strike a balance between granularity of locks and resources devoted to locking. If a large number of locks on a resource with lesser granularity is acquired by a single transaction, SQL Server might escalate these locks to fewer locks with higher granularity. For example, suppose a process begins requesting rows from a table to read. SQL Server will [...]

Deadlocks

[...] applications are mutually deadlocked. SQL Server is designed to detect and eliminate deadlocks automatically. The server periodically scans all processes to see which ones are waiting for lock requests to be fulfilled. If a single process is waiting during two successive scans, SQL Server starts a more detailed search for deadlock chains. If it finds that a deadlock situation exists, SQL Server automatically resolves [...]

Application Locks

SQL Server 2000 adds a new type of lock to those supported in previous versions, the application lock. An application lock is a lock created by client code (for example, a T-SQL batch or a Visual Basic application) rather than by SQL Server itself. Application locks allow you to use SQL Server to manage resource contention issues between multiple [...] T-SQL:

USE pubs
Sp_releaseapplock @Resource = 'authors.txt'

Summary

In this chapter, you learned about SQL Server locking. You saw why locking is necessary to preserve data integrity and learned about the mechanics of SQL Server locking. You learned how to view the current locks on a SQL Server, how to prevent deadlocks, and how to customize SQL Server's locking behavior. You also saw how you can use SQL [...]
improve SQL Server performance by making it easier for SQL Server to deter-
mine whether a transaction can be granted update or exclusive locks. If SQL
Server. includes these columns:
spid: The SQL Server process ID. SQL Server assigns a unique number to each
active process.
dbid: The SQL Server database ID for the database