a lock on that inventory record and is trying to access the customer record that the other program already has locked. The two programs wait indefinitely to access the record that the other program has locked.

You solve the problem by modifying the warranty registration program so that it locks customer records before it locks the corresponding inventory record.

FORCES

- Multiple objects need to access a set of resources. The operations they perform on these resources require that some or all other objects be prevented from concurrently accessing the resources.
- Dynamically determining at runtime whether granting an object access to a resource will result in a deadlock can be a very expensive operation.
- Some transaction management mechanisms automatically detect deadlocks among the transactions that they manage. It will generally take them a while to detect a deadlock after it occurs. The way that most of these transaction managers handle a deadlock is to cause some or all of the transactions involved to fail. From the viewpoint of an application, such failures appear to be intermittent failures of the application. If it is important that a transaction behave in a reliable and predictable way, then it is important for it to avoid such deadlocks.
- Objects access a set of resources that either is static or always fills a static set of roles.
- If resources can fill multiple roles, then it may take a prohibitively long amount of time to determine, in advance, whether a particular pattern of accessing resources can result in a deadlock.

SOLUTION

If objects lock multiple shared resources, then ensure that the resources are always locked in the same relative order. For example, if there are four resources, A, B, C, and D, then you could require all objects to lock them in that order. So one object may lock B, C, and D, in that order. Another object may lock A and C, in that order. However, no object may lock C and B, in that order.
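The fixed-order rule can be sketched in Java. The Resource class and helper names below are illustrative assumptions, not taken from the book's example; the point is only that every thread sorts the resources on the same key before locking any of them, so no two threads can hold locks in conflicting orders:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of the Static Locking Order idea: acquire locks in one
// global order (here, by resource name), so a deadlock cycle is
// impossible. Names are illustrative, not from the book's code.
public class OrderedLocking {
    public static class Resource {
        public final String name;          // key defining the global lock order
        public final Object lock = new Object();
        public Resource(String name) { this.name = name; }
    }

    // Lock the given resources in name order, run the action while
    // holding all of them, then release in reverse order.
    public static void withLocks(Resource[] resources, Runnable action) {
        Resource[] ordered = resources.clone();
        Arrays.sort(ordered, Comparator.comparing((Resource r) -> r.name));
        acquire(ordered, 0, action);
    }

    // Nested synchronized blocks acquire the locks in sorted order.
    private static void acquire(Resource[] ordered, int i, Runnable action) {
        if (i == ordered.length) {
            action.run();
            return;
        }
        synchronized (ordered[i].lock) {
            acquire(ordered, i + 1, action);
        }
    }
}
```

Note that the caller may pass the resources in any order; the sort inside withLocks is what enforces the pattern.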
The same strategy applies to situations where the specific resources that objects use vary, but the objects always fill the same roles. In this sort of situation, you apply the relative ordering to the roles rather than the resources. In the example under the "Context" heading, the specific database records that the programs use vary with the transaction.

CONSEQUENCES

- Use of the Static Locking Order pattern allows you to ensure that objects will be able to lock resources without deadlocking.
- Forcing all objects to lock resources in a predetermined order can sometimes increase the amount of time it takes some objects to perform an operation. For an example, consider the case of the warranty registration program discussed under the "Context" heading. In its original implementation, it only needed to fetch an inventory record once. Forcing it to lock a customer record before locking an inventory record requires it to fetch the inventory record twice. The first time it fetches an inventory record, it may discover that there is a warranty to register and which customer is involved. It then locks the appropriate customer record. It must then fetch the inventory record a second time after locking it.

IMPLEMENTATION

There are no special implementation considerations related to the Static Locking Order pattern.

KNOWN USES

The author has seen the Static Locking Order pattern used in a number of proprietary applications.

CODE EXAMPLE

The code example for the Static Locking Order pattern is an extension to the example for the Lock File pattern. It is an additional method whose arguments are an array of file names and an array of file mode strings ("r" or "rw"). It opens the files in sorted order. It returns an array of ExclusiveRandomAccessFile objects that correspond to the given file names. If there is a problem opening any of the files, any files opened up to that point are closed and an exception is thrown.
    public static ExclusiveRandomAccessFile[]
    openExclusive(String[] fileNames, String[] modes) throws IOException {
        int[] ndx = new int[fileNames.length];
        InsertionSort.sort(fileNames, ndx);
        ExclusiveRandomAccessFile[] opened
          = new ExclusiveRandomAccessFile[fileNames.length];
        try {
            for (int i = 0; i < fileNames.length; i++) {
                opened[ndx[i]]
                  = openExclusive(fileNames[ndx[i]], modes[ndx[i]]);
            } // for
        } catch (IOException e) {
            // close any opened files
            for (int i = 0; i < opened.length; i++) {
                if (opened[i] != null) {
                    try {
                        opened[i].close();
                    } catch (IOException ee) {
                    } // try
                } // if
            } // for
            throw e;
        } // try
        return opened;
    } // openExclusive(String[])

Here is the sort method that the openExclusive method calls:

    /**
     * Fill the given <code>int</code> array with indices that can
     * be used to put the array of <code>Comparable</code> objects in
     * sorted order. If the array is
     *     { "gh", "ab", "zz", "mm" }
     * then the <code>indices</code> array will be
     *     { 1, 0, 3, 2 }
     * @exception IllegalArgumentException
     *            If the two arrays are not the same length.
     */
    public static void sort(Comparable[] a, int[] indices) {
        if (a.length != indices.length) {
            String msg = "Different length arrays";
            throw new IllegalArgumentException(msg);
        } // if
        for (int i = 0; i < indices.length; i++) {
            indices[i] = i;
        } // for
        for (int i = 1; i < a.length; i++) {
            Comparable temp = a[i];
            int j = i - 1;
            while (j >= 0 && a[indices[j]].compareTo(temp) > 0) {
                indices[j + 1] = indices[j];
                j--;
            } // while
            indices[j + 1] = i;
        } // for i
    } // sort(Comparable[])

RELATED PATTERNS

ACID Transaction. The Static Locking Order pattern can be used in the design of ACID transactions.
Lock File. The Static Locking Order pattern can be used with the Lock File pattern to avoid deadlocks.

Optimistic Concurrency

SYNOPSIS

Improve throughput of transaction processing by not waiting for locks that a transaction may need.
Instead, be optimistic and perform the transaction logic on private copies of the records or objects involved. Do insist that the locks be granted before the updates can be committed, but abort the transaction if conflicting updates were made prior to the granting of the locks.

CONTEXT

You are designing a student records system. One characteristic of a student records system is that most of the time it has a relatively low number of transactions to process. However, at certain times, such as the beginning of a semester, there is a very high level of activity. Your design must accommodate peak levels of activity while keeping the cost of the infrastructure low.

When there is a need to keep costs down, there are usually compromises to make. After analyzing the requirements, you decide that the most important guarantee to make is the level of throughput that the system will provide. It must be able to process a certain number of transactions per hour. Since the transactions that drive the peak periods will be submitted directly by students, the throughput requirement translates into a requirement to guarantee a maximum average response time. It will be acceptable if a small percentage of the transactions take noticeably longer than the average.

With these goals in mind, you begin examining the problem at hand to see if it has any attributes that you can exploit. You notice that it will be very unusual for two concurrent transactions to update information about the same student. Another thing you notice is that although the database manager you are using can handle concurrent transactions, its mechanism for granting locks is single-threaded. This means it is possible for lock management to become a bottleneck.

You decide that you can lessen the impact of single-threaded lock management by processing transactions in a way that does not require a transaction to obtain locks on records until the transaction is ready to commit changes to the records.
Delays in granting locks will not have an impact on the completion of a transaction unless the delays are longer than the transaction takes to get to the point of committing its results. If the transaction is delayed in committing its results, the commitment of the results is all that will be delayed. The rest of the transaction will already be done.

FORCES

- Concurrent attempts to modify the state of an object or record are very rare. This is often the case when there are few concurrent transactions. It is also often the case when there are a very large number of records or objects and transactions only modify a small number of records or objects.
- Locks are granted centrally by a single-threaded mechanism and it is possible to update the contents of objects or records while waiting to find out if a requested lock will be granted.
- The available locking mechanism is coarse-grained. Its locks apply to an entire file or table or to a large set of objects. Such coarse-grained locks can cause multiple transactions to wait for a lock when the changes that they will make will not conflict with each other.
- Aborting a transaction because it could not obtain a lock and then starting the transaction over again can take a significant amount of time. It may take a lot more time than getting locks beforehand to ensure that the transaction has exclusive access to the resources that it will modify.

SOLUTION

Coordinate changes that transactions make to the state of records or objects by assuming that concurrent updates to the same record or object are very unlikely. Based on this assumption, proceed optimistically without first obtaining any locks. Instead, rely on a field of the records or an attribute of the objects to recognize when a conflicting update has occurred. This field or attribute will contain a version number or timestamp that contains a different value after each time the record or object is updated.
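A minimal in-memory sketch of this version-number check follows. The Record and Snapshot classes and their field names are illustrative assumptions, not the book's code; the commit method only stores new values if the version read at the start is still current:

```java
// Sketch of optimistic concurrency on an in-memory record. A commit
// succeeds only if the record's version number is unchanged since the
// snapshot was taken. All names are illustrative assumptions.
public class OptimisticDemo {
    public static class Record {
        public long version = 0;   // bumped on every successful update
        public int balance = 100;
    }

    // Read phase: take a consistent private copy of the record.
    public static class Snapshot {
        public final long version;
        public final int balance;
        public Snapshot(Record r) {
            synchronized (r) {
                this.version = r.version;
                this.balance = r.balance;
            }
        }
    }

    // Commit phase: under the record's lock, verify the version before
    // storing the new value; a mismatch means a conflicting update.
    public static boolean commit(Record shared, Snapshot read, int newBalance) {
        synchronized (shared) {
            if (shared.version != read.version) {
                return false;              // conflicting update: abort
            }
            shared.balance = newBalance;
            shared.version++;
            return true;
        }
    }

    public static void main(String[] args) {
        Record x = new Record();
        Snapshot s1 = new Snapshot(x);     // transaction 1 reads
        Snapshot s2 = new Snapshot(x);     // transaction 2 reads
        boolean first = commit(x, s2, s2.balance + 10);  // transaction 2 commits
        boolean second = commit(x, s1, s1.balance - 10); // transaction 1 aborts
        System.out.println(first + " " + second); // true false
    }
}
```

In a SQL database the same check is often expressed as an UPDATE whose WHERE clause tests both the key and the version, treating an update count of zero as a conflict.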
Organize the transaction processing into three phases:

1. Read/fetch. Make a private copy of the state of each record or object that the transaction will update.
2. Perform transaction logic. Have the transaction work with its private copy of the records or states, using them as its source of data and updating them.
3. Commit the updates. After obtaining locks on all of the records or objects that the transaction has updated, verify that no other transactions have modified them. This is usually done by comparing their version number or timestamp with the private copies. If any records or objects have been modified, abort the transaction. Otherwise, store the values in the private copies into the records or objects.

When implementing this pattern, it is crucial that no updates occur until after all of the locks that a transaction will need have been obtained.

CONSEQUENCES

- The Optimistic Concurrency pattern allows transactions to be more effectively multithreaded under heavy loads than more pessimistic ways of coordinating concurrent updates.
- When there are concurrent transactions that will modify the same records or objects, there is a bigger performance penalty with optimistic concurrency than with more pessimistic policies. Pessimistic policies can cause otherwise concurrent transactions to be performed serially. Optimistic concurrency can result in transactions having to be aborted and restarted. It is possible for a transaction to be aborted multiple times before it is finally able to finish.

IMPLEMENTATION

Sometimes you may want to use optimistic concurrency with records or objects that do not have version numbers or timestamps. There are some strategies to work around this deficiency.

One strategy is to use the timestamp or version number of one record or object to control updates to another. This requires the cooperation of all transactions.
If the record or object with the version number or timestamp is not naturally part of a transaction, then including it in a transaction adds overhead.

Another strategy is to compare the contents of a record or object with its original contents. This avoids the overhead of adding extraneous records or objects to a transaction. However, in some cases this can be at the expense of transactions losing their guarantees of consistency and durability. Consider the following sequence of events:

1. Transaction 1 reads record X.
2. Transaction 2 reads record X.
3. Transaction 2 commits changes to record X.
4. Transaction 3 reads records X and Y.
5. Transaction 3 commits changes to records X and Y that cause record X to contain what it contained before.
6. Transaction 1 sees that record X contains the same as it did before, so it commits its changes to record X.

In this sequence of events, a lengthy transaction begins by reading record X. While that transaction is processing, another transaction changes the contents of record X. A third transaction comes along and sets the contents of record X back to what they were when the first transaction started. Because the lengthy transaction relies on the contents of record X to determine whether another transaction has modified it, it modifies the record, since it cannot tell that there have been intervening transactions.

KNOWN USES

SQL Server and Sybase allow optimistic concurrency to be specified as the means of concurrency control.

Some groupware applications that allow people to collaborate on tasks use optimistic concurrency. In such applications, response time is improved by not having to wait for locks. Because these types of applications have user interfaces that are designed around the principle of direct manipulation, it is generally obvious to all users when there is a conflict between what users are doing. This usually causes users to avoid conflicting changes.
When conflicts result in pauses in actions and actions being aborted, the results are generally understood and acceptable.

CODE EXAMPLE

The code example updates a row in a database table using optimistic concurrency.

    class Updater {
        private boolean gotLock = false;

After this example fetches the row to be updated, it asynchronously attempts to get a lock on the row. After the thread that gets the lock is finished, the value of the gotLock variable is true if it was successful in getting a lock on the row.

        void update(Connection conn, String id) throws SQLException {
            try {

Here is where this example gets the row to be updated without locking the row.

                Statement myStatement = conn.createStatement();
                String query;
                query = "SELECT tot_a, tot_b, version, ROWID"
                        + " FROM summary_tb"
                        + " WHERE unit_id=" + id;
                ResultSet result;
                result = myStatement.executeQuery(query);
                result.next();
                BigDecimal totA = result.getBigDecimal(1);
                BigDecimal totB = result.getBigDecimal(2);
                long version = result.getLong(3);
                String rowID = result.getString(4);
                result.close();

At this point, the values from the row in question have been fetched, including the values for a lengthy computation and the row's version number. The call to getLock returns immediately while it asynchronously gets a lock on the row in question. While getLock is getting the lock, a call to the doIt method performs a lengthy computation to produce a value that will be used to update the row.

                Thread locker;
                locker = getLock(myStatement, rowID, version);
                totB = doIt(totA, totB);
                locker.join();

The call to getLock returns the thread that is responsible for asynchronously getting the lock on the row in question. After getLock returns, a call to doIt computes a value that will be used to update the row in question. After the value is computed, a call to the thread's join method ensures that the update will not proceed until after the attempt to lock the row is complete.
The value of the gotLock variable will be true if the attempt to lock the row in question succeeded. If the lock attempt succeeded, the update proceeds and the transaction is committed.

                if (gotLock) {
                    String update;
                    update = "UPDATE summary_tb"
                             + " SET tot_b='" + totB + "'"
                             + " WHERE ROWID='" + rowID + "'";
                    myStatement.executeUpdate(update);
                    conn.commit();
                } else {
                    conn.rollback();
                } // if
                myStatement.close();
            } catch (InterruptedException e) {
                conn.rollback();
                return;

[...]

... terminate itself. In this situation, there are no good options. The simplest option is to do nothing. If the resources the task is using need to be recycled, then doing nothing is unsatisfactory. A thread pool can attempt to force the termination of a task by calling the Thread object's stop method. The stop method will succeed in terminating a task in many cases when an interrupt fails. In order for interrupt [...]

... computer is trying to send a file to the server at the same time, then all but one computer will be waiting for their turn. In most situations, there is a limited window of time in which all backups must be done. Because the amount of time for finishing all backups is limited, designing the server to only receive one backup file at a time may prevent the backups from finishing in time. You need a design that [...]

... object in the cache. After an object's time-to-live has elapsed, the object is removed from the cache. By assigning and enforcing the time-to-live, you guarantee that the states of the objects in the cache are current within the predetermined amount of time. Figure 7.7 shows the roles that classes play in the Ephemeral Cache Item pattern. Here are descriptions of the roles shown in Figure 7.7: Client. Instances [...]

... existing thread managed by the thread pool becomes idle. Figure 7.6 shows the roles that classes and interfaces play in the Thread Pool pattern. Here are descriptions of these roles:

Executor. An interface in this role defines a method that can be passed a Runnable object for the purpose of executing it. [Figure 7.6 shows the Executor interface with its execute(:Runnable) method and a ThreadPool class that holds tasks waiting for a thread.] [...]

... know in advance how many threads the task will need.

IMPLEMENTATION

Some JVMs internally pool threads. When a Java program is running on such a JVM, the Thread Pool pattern may not reduce the amount of time spent creating threads. Use of the Thread Pool pattern may even increase the amount of time spent on thread creation by making the JVM's internal thread pooling less effective. [...]

... same as infinity.

    public static final int DEFAULT_MAXIMUMPOOLSIZE = Integer.MAX_VALUE;

This constant is the default value for the normal pool size. For most applications, the normal pool size should be set to a value greater than one.

    public static final int DEFAULT_NORMALPOOLSIZE = 1;

This constant is the default maximum time to keep worker threads alive while waiting for [...]

... processed. One way to manage waiting tasks is to put them in a queue. Putting waiting tasks in a queue ensures that they are run in the order in which they arrive. You can choose other scheduling policies by choosing another data structure, such as a priority queue. If it is possible for tasks to arrive at a faster rate than they are processed, it is possible for a queue to grow indefinitely. If a queue gets too [...]

... check for interruption after each task executed by their thread finishes. They terminate the thread if it has been interrupted. If the logic of the task checks for its thread's being interrupted, then the thread will terminate sooner. When new tasks are presented to the thread pool, new threads are created to replace the terminated threads, if needed. Unfinished tasks are never dropped upon interruption [...]
                Thread t = (Thread)(it.next());
                t.interrupt();
            } // for
        } // synchronized
    } // interruptAll()

Normally, before shutting down a pool by a call to the interruptAll method, you should be sure that all clients of the pool are themselves terminated, in order to avoid hanging or losing commands. Additionally, you may wish to call the drain method to remove (and return) unprocessed tasks from the queue after shutting down the [...]

... task in the list returned by the drain method. The drain method removes all unprocessed tasks from the pool's queue and returns them in a java.util.List. This should be used only when the pool has no active clients; otherwise, it is possible that the method will loop, removing tasks as clients put them in. This method can be useful after shutting down a thread pool (by a call to interruptAll) to determine [...]
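Readers on J2SE 5.0 or later can get this interruptAll-plus-drain behavior from the standard library: ExecutorService.shutdownNow() interrupts the pool's worker threads and returns the queued tasks that never started. A runnable sketch (the pool sizes and task bodies are illustrative):

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// shutdownNow() plays the roles of interruptAll and drain: it
// interrupts active workers and returns the tasks still waiting in
// the queue, so the caller can decide what to do with them.
public class ShutdownDemo {
    public static int demo() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountDownLatch started = new CountDownLatch(1);
        pool.submit(() -> {                    // occupies the single worker
            started.countDown();
            try { Thread.sleep(60_000); } catch (InterruptedException e) { }
        });
        for (int i = 0; i < 3; i++) {          // these wait in the queue
            pool.submit(() -> { });
        }
        started.await();                       // worker is now busy
        List<Runnable> unstarted = pool.shutdownNow();
        return unstarted.size();               // the three queued tasks
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

As with interruptAll, shutdownNow makes no attempt to finish or preserve the interrupted running task; only the never-started tasks are handed back.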