Building Secure and Reliable Network Applications (Part 7)


[...] membership, and hence prevented from making progress. Other researchers (including the author) have pinned down precise conditions (in various models) under which dynamic membership consensus protocols are guaranteed to make progress [BDM95, FKMBD95, GS96, Nei96], and the good news is that for most practical settings such protocols make progress with overwhelmingly high probability if the probabilities of failure and message loss are uniform and independent over the processes and messages sent in the system. In effect, only partitioning failures or a very intelligent adversary (one that in practice could never be implemented) can prevent these systems from making progress.

Thus, we know that all of these models face conditions under which progress is not possible. Research is still underway on pinning down the precise conditions under which progress is possible in each approach: the maximum rates of failures that dynamic systems can sustain. As a practical matter, however, the evidence is that all of these models are perfectly reasonable for building reliable distributed systems. The theoretical impossibility results do not appear to represent practical impediments to implementing reliable distributed software; they simply tell us that there will be conditions that these reliability approaches cannot overcome. The choice, in a practical sense, is to match the performance and consistency properties of the solution to the performance and consistency requirements of the application. The weaker the requirements, the better the performance we can achieve.

Our study also revealed two other issues that deserve comment: the need, or lack thereof, for a primary component in a partitioned membership model, and the broader but related question of how consistency is tied to ordering properties in distributed environments.

The question of a primary component is readily understood in terms of the air-traffic control example we looked at earlier. In that example, there was a need to take "authoritative action" within a service on behalf of the system as a whole. In effect, a representative of a service needed to be sure that it could safely allow an air traffic controller to take a certain action, meaning that it ran no risk of being contradicted by any other process (or, in the case of a possible partitioning failure, that before any other process could start taking potentially conflicting actions, a timeout would elapse and the air traffic controller would be warned that this representative of the service was now out of touch with the primary partition).

In the static system model, there is only a single notion of the system as a whole, and actions are taken upon the authority of the full system membership. Naturally, it can take time to obtain majority acquiescence in an action [KD95], hence this is a model in which some actions may be delayed for a considerable period of time. However, when an action is actually taken, it is taken on behalf of the full system. In the dynamic model we lose this guarantee and face the prospect that our notion of consistency can become trivial because of system partitioning failures. In the limit, a dynamic system could partition arbitrarily, with each component having its own notion of authoritative action.
For purely internal purposes, such a notion of consistency may be adequate, in the sense that it still permits work to be shared among the processes that compose the system and, as noted above, is sufficient to avoid the risk that the states of processes will be directly inconsistent in a way that is readily detectable. The state merge problem [Mal94, BBD96], which arises when two components of a partitioned system reestablish communication connectivity and must reconcile their states, is where such problems are normally resolved (and the normal resolution is simply to take the state of one partition as being the official system state, abandoning the other). As noted in Chapter 13, this challenge has led researchers working on the Relacs system in Bologna to propose a set of tools, combined with a set of guarantees that relate to view installation, which simplify the development of applications that can operate in this manner [BBD96].

The weakness of allowing simultaneous progress in multiple components of a partitioned dynamic system, however, is that there is no meaningful form of consistency that can be guaranteed between the components, unless one is prepared to pay the high cost of using only dynamically uniform message delivery protocols. In particular, the impossibility of guaranteeing progress among the participants in a consensus protocol implies that when a system partitions, there will be situations in which we can define the membership of both components but cannot decide how to terminate protocols that were underway at the time of the partitioning event. One consequence of this observation is that when non-uniform protocols are employed, it will be impossible to ensure that the components have consistent histories (in terms of the events that occurred and the ordering of events) for their past prior to the partitioning event. In practice, one component, or both, may be irreconcilably inconsistent with the other! There is no obvious way to "merge" states in such a situation: the only real option is to arbitrarily pick one component's state as the official one and to replace the other component's state with this state, perhaps reapplying any updates that occurred in the "unofficial" partition. Such an approach, however, can be understood as one in which the primary component is simply selected when the network partition is corrected rather than when it forms. If there is a reasonable basis on which to make the decision, why delay it?

As we saw in the previous chapter, there are two broad ways to deal with this problem. The one favored in the author's own work is to define a notion of primary component of a partitioned system, and to track primaryness when the partitioning event first occurs. The system can then enforce the rule that non-primary components must not trust their own histories of the past state of the system and certainly should not undertake authoritative actions on behalf of the system as a whole. A non-primary component may, for example, continue to operate a device that it "owns", but it is not safe for it to instruct an air traffic controller about the status of air space sectors or other global forms of state-sensitive data unless they were updated using dynamically uniform protocols.
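As a rough illustration of how primaryness can be tracked at each view change, the sketch below applies a common majority-inheritance rule: a new view may claim primary status only if it contains a strict majority of the members of the most recent primary view. The rule, the class names, and the callback are illustrative assumptions, not the protocol actually used in the author's systems.

```python
# Illustrative sketch of primary-component tracking (assumed rule, not the
# book's protocol): a new view is primary only if it inherits a strict
# majority of the previous primary view, so two disjoint partitions can
# never both believe they are primary.

def has_majority_of(new_view, old_view):
    """True if new_view contains a strict majority of old_view's members."""
    return len(set(new_view) & set(old_view)) > len(old_view) // 2

class GroupMember:
    def __init__(self, initial_view):
        self.last_primary_view = set(initial_view)
        self.in_primary = True                 # the founding view is primary

    def on_view_change(self, new_view):
        """Called by the membership service whenever a new view is installed."""
        new_view = set(new_view)
        self.in_primary = has_majority_of(new_view, self.last_primary_view)
        if self.in_primary:
            self.last_primary_view = new_view  # only the primary advances the baseline

    def may_take_authoritative_action(self):
        # e.g. advising an air traffic controller about sector ownership
        return self.in_primary

# m = GroupMember({"a", "b", "c", "d", "e"})
# m.on_view_change({"a", "b", "c"})   # 3 of 5 survive: still primary
# m.on_view_change({"c"})             # 1 of 3: this component loses primaryness
```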
Of course, a dynamic distributed system can lose its primary component, and, making matters still more difficult, there may be patterns of partial communication connectivity within which a static distributed system model can make progress but no primary partition can be formed, and hence a dynamic model must block! For example, suppose that a system partitions so that all of its members are disconnected from one another. Now we can selectively reenable connections so that, over time, a majority of a static system membership set are able to vote in favor of some action. Such a pattern of communication could allow progress. For example, there is the protocol of Keidar and Dolev, cited several times above, in which an action can be terminated entirely on the basis of point-to-point connections [KD95]. However, as we commented, this protocol delays actions until a majority of the processes in the whole system knows about them, which will often be a very long time.

The author's work has not needed to directly engage these issues because of the underlying assumption that rates of failure are relatively low and that partitioning failures are infrequent and rapidly repaired. Such assumptions let us conclude that these types of partitioning scenarios just don't arise in typical local-area networks and typical distributed systems. On the other hand, frequent periods of partitioned operation could arise in very mobile situations, such as when units are active on a battlefield. They are simply less likely to arise in applications like air traffic control systems or other "conventional" distributed environments. Thus, there are probably systems that should use a static model with partial communications connectivity as their basic model, systems that should use a primary component consistency model, and perhaps still other systems for which a virtual synchrony model that doesn't track primaryness would suffice. These represent successively higher levels of availability, and even the weakest retains a meaningful notion of distributed consistency. At the same time, they represent diminishing notions of consistency in any absolute sense. This suggests that there are unavoidable tradeoffs in the design of reliable distributed systems for critical applications.

The two-tiered architecture of the previous section can be recognized as a response to this impossibility result. Such an approach explicitly trades higher availability for weaker consistency in the LAN subsystems while favoring strong consistency at the expense of reduced availability in the WAN layer (which might run a protocol based on the Chandra-Toueg consensus algorithm). For example, the LAN level of a system might use non-uniform protocols for speed, while the WAN level uses tools and protocols similar to the ones proposed by the Transis effort, or by Babaoglu's group in their work on Relacs [BBD96].

We alluded briefly to the connection between consistency and order. This topic is perhaps an appropriate one on which to end our review of the models. Starting with Lamport's earliest work on distributed computing systems, it was already clear that consistency and the ordering of distributed events are closely linked. Over time, it has become apparent that distributed systems contain what are essentially two forms of knowledge or information. Static knowledge is that information which is well known to all of the processes in the system at the outset.
For example, the membership of a static system is a form of static knowledge. Being well known, it can be exploited in a decentralized but consistent manner. Other forms of static knowledge can include knowledge of the protocol that processes use, knowledge that some processes are more important than others, or knowledge that certain classes of events can only occur in certain places within the system as a whole. Dynamic knowledge is that which stems from unpredicted events that arise within the system either as a consequence of non-determinism of the members, failures or event orderings that are determined by external physical processes, or inputs from external users of the system. The events that occur within a distributed system are frequently associated with the need to update the system state in response to dynamic events. To the degree that system state is replicated, or is reflected in the states of multiple system processes, these dynamic updates of the state will need to occur at multiple places.

Figure 16-3: Conceptual options for the distributed systems designer. (The diagram plots membership models, ranging from static through dynamic with a primary partition to dynamic with no primary partition, against consistency properties, from non-uniform to dynamically uniform; costs increase and availability decreases toward the stronger combinations.) Even when one seeks "consistency" there are choices concerning how strong the consistency desired should be, and which membership model to use. The least costly and highest availability solution for replicating data, for example, looks only for internal consistency within dynamically defined partitions of a system, and does not limit progress to the primary partition. This model, we have suggested, may be too weak for practical purposes. A slightly less available approach that maintains the same high level of performance allows progress only in the primary partition. As one introduces further constraints, such as dynamic uniformity or a static system model, costs rise and availability falls, but the system model becomes simpler and simpler to understand. The most costly and restrictive model sacrifices nearly three orders of magnitude of performance in some studies relative to the least costly one. Within any given model, the degree of ordering required for multicasts introduces further fine-grained cost/benefit tradeoffs.

In the work we presented above, process groups are the places where such state resides, and multicasts are used to update such state. Viewed from this perspective, it becomes apparent that consistency is order, in the sense that the distributed aspects of the system state are entirely defined by process groups and multicasts to those groups, and these abstractions, in turn, are defined entirely in terms of ordering and atomicity. Moreover, to the degree that the system membership is self-defined, as in the dynamic models, atomicity is also an order-based abstraction! This reasoning leads to the conclusion that the deepest of the properties in a distributed system concerned with consistency may be the ordering in which distributed events are scheduled to occur. As we have seen, there are many ways to order events, but the schemes all depend upon either explicit participation by a majority of the system processes, or upon dynamically changing membership, managed by a group membership protocol. These protocols, in turn, depend upon majority action (by a dynamically defined majority).
Moreover, when examined closely, all the dynamic protocols depend upon some notion of token or special permission that enables the process holding that permission to take actions on behalf of the system as a whole. One is strongly inclined to speculate that in this observation lies the grain of a general theory of distributed computing, in which all forms of consistency and all forms of progress could be related to membership, and in which dynamic membership could be related to the liveness of token passing or "leader election" protocols. At the time of this writing, the author is not aware of any clear presentation of this theory of all possible behaviors for asynchronous distributed systems, but perhaps it will emerge in the not-too-distant future.

Our goals in this textbook remain practical, however, and we now have powerful practical tools to bring to bear on the problems of reliability and robustness in critical applications. Even knowing that our solutions will not be able to guarantee progress under all possible asynchronous conditions, we have seen enough to know how to guarantee that when progress is made, consistency will be preserved. There are promising signs of emerging understanding of the conditions under which progress can be made, and the evidence is that the prognosis is really quite good: if a system rarely loses messages and rarely experiences real failures (or mistakenly detects failures), the system will be able to reconfigure itself dynamically and make progress while maintaining consistency.

As to the tradeoffs between the static and dynamic model, it may be that real applications should employ mixtures of the two. The static model is more costly in most settings (perhaps not in heavily partitioned ones), and may be drastically more expensive if the goal is merely to update the state of a distributed server or a set of web pages managed on a collection of web proxies. The dynamic primary component model, while overcoming these problems, lacks external safety guarantees that may sometimes be needed. And the non-primary component model lacks consistency and the ability to initiate authoritative actions at all, but perhaps this ability is not always needed. Complex distributed systems of the future may well incorporate multiple levels of consistency, using the cheapest one that suffices for a given purpose.

16.2 General Remarks Concerning Causal and Total Ordering

The entire notion of providing ordered message delivery has been a source of considerable controversy within the community that develops distributed software [Ren93]. Causal ordering has been especially controversial, but even total ordering is opposed by some researchers [CS93], although others have been critical of the arguments advanced in this area [Bir94, Coo94, Ren94]. The CATOCS controversy came to a head in 1993, and although it seems no longer to interest the research community, it would also be hard to claim that there is a generally accepted resolution of the question.

Underlying the debate are tradeoffs between consistency, ordering, and cost. As we have seen, ordering is an important form of "consistency". In the next chapter we will develop a variety of powerful tools for exploiting ordering, especially to implement replicated data efficiently. Thus, since the first work on consistency and replication with process groups, there has been an emphasis on ordering.
Some systems, like the Isis Toolkit developed by this author in the mid-1980s, made extensive use of causal ordering because of its relatively high performance and low latency. Isis, in fact, enforces causally delivered ordering as a system-wide default, although as we saw in Chapter 14, such a design point is in some ways risky. The Isis approach makes certain types of asynchronous algorithm very easy to implement, but it has important cost implications; developers of sophisticated Isis applications sometimes need to disable the causal ordering mechanism to avoid these costs. Other systems, such as Amoeba, looked at the same issues but concluded that causal ordering is rarely needed if total ordering can be made fast enough. Writing this text today, this author tends to agree with the Amoeba project except in certain special cases.

Above, we have seen a sampling of the sorts of uses to which ordered group communication can be put. Moreover, earlier sections of this book have established the potential value of these sorts of solutions in settings such as the Web, financial trading systems, and highly available database or file servers. Nonetheless, there is a third community of researchers (Cheriton and Skeen are best known within this group) who have concluded that ordered communication is almost never matched with the needs of the application [CS93]. These researchers cite their success in developing distributed support for equity trading in financial settings and work in factory automation, both settings in which developers have reported good results using distributed message-bus technologies (TIB is the one used by Cheriton and Skeen) that offer little in the way of distributed consistency or fault-tolerance guarantees. To the degree that the need arises for consistency within these applications, Cheriton and Skeen have found ways to reduce the consistency requirements of the application rather than providing stronger consistency within a system to respond to a strong application-level consistency requirement (the NFS example from Section 7.3 comes to mind). Broadly, this leads them to a mindset that favors the use of stateless architectures, non-replicated data, and simple fault-tolerance solutions in which one restarts a failed server and leaves it to the clients to reconnect. Cheriton and Skeen suggest that such a point of view is the logical extension of the end-to-end argument [SRC84], which they interpret as an argument that each application must take direct responsibility for guaranteeing its own behavior.

Cheriton and Skeen also make some very specific points. They are critical of system-level support for causal or total ordering guarantees. They argue that communication ordering properties are better left to customized application-level protocols, which can also incorporate other sorts of application-specific properties. In support of this view, they present applications that need stronger ordering guarantees and applications that need weaker ones, arguing that in the former case, causal or total ordering will be inadequate, and in the latter that it will be overkill (we won't repeat these examples here). Their analysis leads them to conclude that in almost all cases, causal order is more than the application needs (and more costly), or less than the application needs (in which case the application must add some higher-level ordering protocol of its own in any case), and similarly for total ordering [CS93].
Unfortunately, while making some good points, this paper also includes a number of questionable claims, including some outright errors that were refuted in other papers, among them one written by the author of this text [Bir94, Coo94, Ren94]. For example, they claim that causal ordering algorithms impose an overhead on messages that grows as n², where n is the number of processes in the system as a whole. Yet we have seen that causal ordering for group multicasts, the case Cheriton and Skeen claim to be discussing, can easily be provided with a vector clock whose length is linear in the number of active senders in a group (rarely more than two or three processes), and that in more complex settings, compression techniques can often be used to bound the vector timestamp to a small size (a sketch of the vector-timestamp approach appears below). This particular claim is thus incorrect. The example is just one of several specific points on which Cheriton and Skeen make statements that could be disputed purely on technical grounds.

Also curious is the entire approach to causal ordering adopted by Cheriton and Skeen. In this chapter, we have seen that causal order is often needed when one seeks to optimize an algorithm expressed originally in terms of totally ordered communication, and that total ordering is useful because, in a state-machine style of distributed system, by presenting the same inputs to the various processes in a group in the same order, their states can be kept consistent. Cheriton and Skeen never address this use of ordering, focusing instead on causal and total order in the context of a publish-subscribe architecture in which a small number of data publishers send data that a large number of consumers receive and process, and in which there are no consistency requirements that span the consumer processes. This example somewhat misses the point of the preceding chapters, where we made extensive use of total ordering primarily for consistent replication of data, and of causal ordering as a relaxation of total ordering where the sender has some form of mutual exclusion within the group.

To this author, Cheriton and Skeen's most effective argument is one based on the end-to-end philosophy. They suggest, in effect, that although many applications will benefit from properties such as fault-tolerance, ordering, or other communication guarantees, no single primitive is capable of capturing all possible properties without imposing absurdly high costs for the applications that require weaker guarantees. Our observation about the cost of dynamically uniform strong ordering bears this out: here we see a very strong property, but one that is also thousands of times more costly than a rather similar but weaker property! If one makes the weaker version of a primitive the default, the application programmer will need to be careful not to be surprised by its non-uniform behavior; the stronger version may just be too costly for many applications. Cheriton and Skeen generalize from similar observations based on their own examples and conclude that the application should implement its own ordering protocols. Yet we have seen that these protocols are not trivial, and implementing them would not be an easy undertaking. It also seems unreasonable to expect the average application designer to implement a special-purpose, hand-crafted protocol for each specific need.
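To make the size argument above concrete, the following minimal sketch (in Python, and not drawn from the Isis or Horus code) shows the standard vector-timestamp delivery rule for causally ordered group multicast. Each message carries one counter per sender, so the overhead grows with the number of active senders in the group rather than as n²; the transport, class names, and delivery callback are assumptions made for the example.

```python
# Sketch of causal delivery for group multicast using per-sender vector
# timestamps (illustrative; assumes a reliable transport underneath that
# delivers each multicast to the other members).

class CausalMulticast:
    def __init__(self, members, my_id):
        self.vt = {m: 0 for m in members}   # messages delivered, per sender
        self.my_id = my_id
        self.pending = []                   # received but not yet deliverable

    def send(self, payload):
        """Create a timestamped message; sending counts as self-delivery."""
        self.vt[self.my_id] += 1
        return (self.my_id, dict(self.vt), payload)

    def receive(self, message):
        """Called when a multicast from another member arrives."""
        self.pending.append(message)
        self._try_deliver()

    def _deliverable(self, sender, ts):
        # Deliver the next message from this sender only after every message
        # the sender had delivered when it sent this one is delivered here too.
        if ts[sender] != self.vt[sender] + 1:
            return False
        return all(ts[k] <= self.vt[k] for k in ts if k != sender)

    def _try_deliver(self):
        progress = True
        while progress:
            progress = False
            for message in list(self.pending):
                sender, ts, payload = message
                if self._deliverable(sender, ts):
                    self.pending.remove(message)
                    self.vt[sender] += 1
                    self.deliver(payload)
                    progress = True

    def deliver(self, payload):
        print("delivered:", payload)        # application-level callback
```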
In practice, if ordering and atomicity properties are not provided by the computing system, it seems unlikely that applications will be able to make any use of these concepts at all. Thus, even if one agrees with the end-to-end philosophy, one might disagree that it implies that each application programmer should implement nearly identical and rather complex ordering and consistency protocols, because no single protocol will suffice for all uses.

Current systems, including the Horus system which was developed by the author and his colleagues at Cornell, usually adopt a middle ground, in which the ordering and atomicity properties of the communication system are viewed as options that can be selectively enabled (Chapter 18). The designer can in this way match the ordering property of a communication primitive to the intended use. If Cheriton and Skeen were using Horus, their arguments would warn us not to enable such-and-such a property for a particular application because the application doesn't need the property and the property is costly. Other parts of their work would be seen to argue in favor of additional properties beyond the ones normally provided by Horus. As it happens, Horus is easily extended to accommodate such special needs. Thus the reasoning of Cheriton and Skeen can be seen as critical of systems that adopt a single all-or-nothing approach to ordering or atomicity, but perhaps not of systems such as Horus that seek to be more general and flexible.

The benefits of providing stronger communication tools in a "system", in the eyes of the author, are that the resulting protocols can be highly optimized and refined, giving much better performance than could be achieved by a typical application developer working over a very general but very "weak" communications infrastructure. To the degree that Cheriton and Skeen are correct and application developers will need to implement special-purpose ordering properties, such a system can also provide powerful support for the necessary protocol development tasks. In either case, the effort required from the developer is reduced and the reliability and performance of the resulting applications improved.

We mentioned that the community has been particularly uncomfortable with the causal ordering property. Within a system such as Horus, causal order is normally used as an optimization of total order, in settings where the algorithm was designed to use a totally ordered communication primitive but exhibits a pattern of communication for which the causal order is also a total one. We will return to this point below, but we mention it now simply to stress that the "explicit" use of causally ordered communication, much criticized by Cheriton and Skeen, is actually quite uncommon. More typical is a process of refinement whereby an application is gradually extended to use less and less costly communication primitives in order to optimize performance. The enforcement of causal ordering, system-wide, is not likely to become standard in future distributed systems. When cbcast is substituted for abcast, communication may cease to be totally ordered, but any situation in which messages arrive in different orders at different members will be due to events that commute. Thus their effect on the group state will be as if the messages had been received in a total order, even if the actual sequence of events is different.
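A tiny example (not from the book) makes the commutativity point concrete: if the concurrent updates that cbcast may deliver in different orders at different members commute, every replica still ends up in the state it would have reached under a total order.

```python
# Commuting updates applied in different orders yield the same group state
# (illustrative only; the key names and update form are assumptions).

def apply_updates(initial, updates):
    state = dict(initial)
    for key, delta in updates:
        state[key] = state.get(key, 0) + delta   # increments commute
    return state

u1 = ("sector_17", +1)   # concurrent update from one sender
u2 = ("sector_42", -1)   # concurrent update from another sender

replica_a = apply_updates({}, [u1, u2])   # one member's delivery order
replica_b = apply_updates({}, [u2, u1])   # another member's delivery order
assert replica_a == replica_b             # identical state either way
```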
In contrast, much of the discussion and controversy surrounding causal order arises when causal order is considered not as an optimization, but rather as an ordering property that one might employ by default, just as a stream provides FIFO ordering by default. Indeed, the analogy is a very good one, because causal ordering is an extension of FIFO ordering. Additionally, much of the argument over causal order uses examples in which point-to-point messages are sent asynchronously, with system-wide causal order used to ensure that "later" messages arrive after "earlier" ones. There is some merit in this view of things, because the assumption of system-wide causal ordering permits some very asynchronous algorithms to be expressed extremely elegantly and simply. It would be a shame to lose the option of exploiting such algorithms. However, system-wide causal order is not really the main use of causal order, and one could easily live without such a guarantee. Point-to-point messages can also be sent using a fast RPC protocol, and saving a few hundred microseconds at the cost of a substantial system-wide overhead seems like a very questionable design choice; systems like Horus obtain system-wide causality, if desired, by waiting for asynchronously transmitted messages to become stable in many situations. On the other hand, when causal order is used as an optimization of atomic or total order, the performance benefits can be huge. So we face a performance argument, in fact, in which the rejection of causal order involves an acceptance of higher-than-necessary latencies, particularly for replicated data.

Notice that if asynchronous cbcast is only used to replace abcast in settings where the resulting delivery order will be unchanged, the associated process group can still be programmed under the assumption that all group members will see the same events in the same order. As it turns out, there are cases in which the handling of messages commutes and the members may not even need to see messages in identical order in order to behave as if they did. There are major advantages to exploiting these cases: doing so potentially reduces idle time (because the latency to message delivery is lower, hence a member can start work on a request sooner, if the cbcast encodes a request that will cause the recipient to perform a computation). Moreover, the risk that a Heisenbug will cause all group members to fail simultaneously is reduced, because the members do not process the requests in identical orders, and Heisenbugs are likely to be very sensitive to the detailed ordering of events within a process. Yet one still presents the algorithm in the group, and thinks of the group, as if all the communication within it were totally ordered.

16.3 Summary and Conclusion

There has been a great deal of debate over the notions of consistency and reliability in distributed systems (which are sometimes seen as violating end-to-end principles), and of causal or total ordering (which are sometimes too weak or too strong for the needs of a specific application that does need ordering). Finally, although we have not focused on this here, there is the criticism that technologies such as the ones we have reviewed do not "fit" with standard styles of distributed systems development.
As to the first concern, the best argument for consistency and reliability is simply to exhibit classes of critical distributed computing systems that will not be sufficiently available unless data is replicated, and will not be trustworthy unless the data is replicated consistently. We have done so throughout this textbook; if the reader is unconvinced, there is little that will convince him or her. On the other hand, one would not want to conclude that most distributed applications need these properties: today, the ones that do remain a fairly small subset of the total. However, this subset is rapidly growing. Moreover, even if one believed that consistency and reliability are extremely important in a great many applications, one would not want to impose potentially costly communication properties system-wide, especially in applications with very large numbers of overlapping process groups. To do so is to invite poor performance, although there may be specific situations where the enforcement of strong properties within small sets of groups is desirable or necessary.

Turning to the second issue, it is clearly true that different applications have different ordering needs. The best solution to this problem is to offer systems that permit the ordering and consistency properties of a communications primitive or process group to be tailored to their need. If the designer is concerned about paying the minimum price for the properties an application really requires, such a system can then be configured to offer only the properties desired. Below, we will see that the Horus system implements just such an approach.

Finally, as to the last issue, it is true that we have presented a distributed computing model that, so far, may not seem very closely tied to the software engineering tools normally used to implement distributed systems. In the next chapter we study this practical issue, looking at how group communication tools and virtual synchrony can be applied to real systems that may have been implemented using other technologies.

16.4 Related Reading

On notions of consistency in distributed systems: [BR94, BR96]; in the case of partitionable systems, [Mal94, KD95, MMABL96, Ami95]. On the Causal Controversy, [Ren93]. The dispute over CATOCS: [CS93], with responses in [Bir94, Coo94, Ren94]. The end-to-end argument was first put forward in [SRC84]. Regarding recent theoretical work on tradeoffs between consistency and availability: [FLP85, CHTC96, BDM95, FKMBD95, CS96].

17. Retrofitting Reliability into Complex Systems

This chapter is concerned with options for presenting group computing tools to the application developer. Two broad approaches are considered: those involving wrappers that encapsulate an existing piece of software in an environment that transparently extends its properties, for example by introducing fault-tolerance through replication or security, and those based upon toolkits which provide explicit procedure-call interfaces. We will not examine specific examples of such systems now, but instead focus on the advantages and disadvantages of each approach, and on their limitations. In the next chapter and beyond, however, we turn to a real system on which the author has worked and present substantial detail, and in Chapter 26 we review a number of other systems in the same area.

17.1 Wrappers and Toolkits

The introduction of reliability technologies into a complex application raises two sorts of issues.
One is that many applications contain substantial amounts of preexisting software, or make use of off-the-shelf components (the military and government favor the acronym COTS for this, meaning "components off the shelf"; presumably because OTSC is hard to pronounce!). In these cases, the developer is extremely limited in terms of the ways that the old technology can be modified. A wrapper is a technology that overcomes this problem by intercepting events at some interface between the unmodifiable technology and the external environment [Jon93], replacing the original behavior of that interface with an extended behavior that confers a desired property on the wrapped component, extends the interface itself with new functionality, or otherwise offers a virtualized environment within which the old component executes. Wrapping is a powerful technical option for hardening existing software, although it also has some practical limitations that we will need to understand. In this section, we'll review a number of approaches to performing the wrapping operation itself, as well as a number of types of interventions that wrappers can enable.

An alternative to wrapping is to explicitly develop a new application program that is designed from the outset with the reliability technology in mind. For example, we might set out to build an authentication service for a distributed environment that implements a particular encryption technology, and that uses replication to avoid denial of service when some of its server processes fail. Such a program would be said to use a toolkit style of distributed computing, in which the sorts of algorithms developed in the previous chapter are explicitly invoked to accomplish a desired task. A toolkit approach packages potentially complex mechanisms, such as replicated data with locking, behind simple-to-use interfaces (in the case of replicated data, LOCK, READ and UPDATE operations). The disadvantage of such an approach is that it can be hard to glue a reliability tool into an arbitrary piece of code, and the tools themselves will often reflect design tradeoffs that limit generality. Thus, toolkits can be very powerful but are in some sense inflexible: they adopt a programming paradigm, and having done so, it is potentially difficult to use the functionality encapsulated within the toolkit in a setting other than the one envisioned by the tool designer.

Toolkits can also take other forms. For example, one could view a firewall, which filters messages entering and exiting a distributed application, as a tool for enforcing a limited security policy. When one uses this broader interpretation of the term, toolkits include quite a variety of presentations of reliability technologies. In addition to the case of firewalls, a toolkit could package a reliable communication technology as a message bus, a system monitoring and management technology, a fault-tolerant file system or database system, a wide-area name service, or in some other form (Figure 17-1). Moreover, one can view a programming language that offers primitives for reliable computing as a form of toolkit.

In practice, many realistic distributed applications require a mixture of toolkit solutions and wrappers.
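As a rough illustration of the wrapper idea (this is not a real toolkit API; the class, the failure model, and the server names are assumptions made for the example), the sketch below intercepts every call made through an existing, unmodifiable client stub and transparently fails over to a backup replica when the primary is unreachable. A real wrapper would also have to deal with state transfer and with requests that were in progress at the time of the failure.

```python
# Illustrative wrapper: intercepts calls on a legacy client stub and adds
# fail-over to a backup server without modifying the legacy code.

class FaultTolerantWrapper:
    def __init__(self, make_stub, servers):
        self.make_stub = make_stub            # factory for the legacy stub
        self.servers = list(servers)          # replica addresses (assumed)
        self.stub = make_stub(self.servers[0])

    def __getattr__(self, name):
        # Every method of the wrapped interface is intercepted here.
        def call(*args, **kwargs):
            for attempt, server in enumerate(self.servers):
                if attempt > 0:
                    self.stub = self.make_stub(server)   # rebind to next replica
                try:
                    return getattr(self.stub, name)(*args, **kwargs)
                except ConnectionError:
                    if attempt + 1 == len(self.servers):
                        raise                            # all replicas unreachable
        return call

# service = FaultTolerantWrapper(LegacyStub, ["primary:9000", "backup:9000"])
# service.lookup("flight 117")    # old calling code is unchanged
```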
To the degree that a system has new functionality which can be developed with a reliability technology in mind, the designer is afforded a great deal of flexibility and power through the execution model supported (for example, transactional serializability or virtual synchrony), and may be able to provide sophisticated functionality that would not otherwise be feasible. On the other hand, in any system that reuses large amounts of old code, wrappers can be invaluable by shielding the previously developed functionality from the programming model and assumptions of the toolkit.

Figure 17-1 lists some types of toolkits that might be useful in building or hardening distributed systems:

- Server replication: tools and techniques for replicating data to achieve high availability, load-balancing, scalable parallelism, very large memory-mapped caches, etc.; cluster APIs for management and exploitation of clusters.
- Video server: technologies for striping video data across multiple servers, isochronous replay, and single replay when multiple clients request the same data.
- WAN replication: technologies for data diffusion among the servers that make up a corporate network.
- Client groupware: integration of group conferencing and cooperative work tools into Java agents, Tcl/Tk, or other GUI-builders and client-side applications.
- Client reliability: mechanisms for transparently fault-tolerant RPC to servers, consistent data subscription for sets of clients that monitor the same data source, etc.
- System management: tools for instrumenting a distributed system and performing reactive control. Different solutions might be needed when instrumenting the network itself, cluster-style servers, and user-developed applications.
- Firewalls and containment tools: tools for restricting the behavior of an application or for protecting it against a potentially hostile environment. For example, such a toolkit might provide a bank with a way to install a "partially trusted" client-server application so as to permit its normal operations while preventing unauthorized ones.

Each toolkit would address a set of application-specific problems, presenting an API specialized to the programming language or environment within which the toolkit will be used, and to the task at hand. While it is also possible to develop extremely general toolkits that seek to address a great variety of possible types of users, doing so can result in a presentation of the technology that is architecturally weak and hence doesn't guide the user to the best system structure for solving their problems. In contrast, application-oriented toolkits often reflect strong structural assumptions that are known to result in solutions that perform well and achieve high reliability.

[...]

17.1.1.4 Wrapping With Interposition Agents and Buddy Processes

Up to now, we have focused on wrappers that operate directly upon the application process and that live in its address space. However, wrappers need not be so intrusive. Interposition involves placing some sort of object or process in between an existing object or process and its...
[...] integrating it with Horus. The resulting system is a powerful prototyping tool, but in fact could actually support "production" applications as well; Brian Smith at Cornell University is using this infrastructure in support of a new video conferencing system, and it could also be employed as a groupware and computer-supported cooperative ...

[...] generation of interactive multiparticipant network games or simulations, and could support the sorts of cooperation needed in commercial or financial transactions that require simultaneous actions in multiple markets or multiple countries. The potential seems nearly unlimited. Moreover, all of these are applications that would appear ...

[...] protocols and applications, a very transparent, very general solution is achievable.

17.5.2 An Unbreakable Stream That Mimics TCP

To address this issue, we will need to assume that there is a version of the stream protocol that has been "isolated" in the form of a protocol module with a well-defined interface. To simplify the discussion, ...

[...] largely one-way communication channel, and to the degree that the protocol and the application are deterministic, the replication method will have minimal impact on system performance. As we move away from this simple case into more complex ones, the protocol becomes much more complex and imposes increasingly visible overheads ...

[...] (Fragment of a table from this chapter contrasting application domains with uses of process groups. Domains listed: server replication, data dissemination, system management, security applications. Uses visible for server replication: high availability and fault-tolerance; state transfer to a restarted process; scalable parallelism and automatic load balancing; coherent caching for local data access; database replication for high ...)

[...] the machine itself, and is "locked" into the TCP protocol of the client system. As a consequence, IP packets sent by the client are only received at one site, which represents a single point of failure for the protocol. We need a way to shift the address to a different location to enable a backup to take over after a crash. This ...
[...] replica will be the same without ensuring that the replica sees the same time values and receives timer interrupts at the same point in its execution. The UNIX select system call is a source of non-determinism, as are interactions with devices. Any time an application uses ftell to measure the amount of data available in an incoming ...
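The passage above only names the problem; as an illustration of the kind of intervention a wrapper can make, the sketch below (all names and the record-and-replay mechanism are assumptions, not the book's design) shows a primary replica logging each clock reading it observes so that a backup can replay exactly the same values instead of consulting its own clock.

```python
# Illustrative record-and-replay wrapper for one source of non-determinism,
# the system clock: the primary logs what it saw, the replica replays it.

import time

class DeterministicClock:
    def __init__(self, role, log=None):
        self.role = role                   # "primary" or "replica"
        self.log = log if log is not None else []
        self.cursor = 0

    def now(self):
        if self.role == "primary":
            value = time.time()            # genuine, non-deterministic reading
            self.log.append(value)         # the log is shipped to the replica
            return value
        value = self.log[self.cursor]      # replica sees the primary's value
        self.cursor += 1
        return value

# primary = DeterministicClock("primary")
# t = primary.now()
# replica = DeterministicClock("replica", log=primary.log)
# assert replica.now() == t               # both replicas observe the same time
```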
