
Concepts, Techniques, and Models of Computer Programming - Chapter 11


DOCUMENT INFORMATION

Number of pages: 41
File size: 241.49 KB

Content

Chapter 11 Distributed Programming

A distributed system is a set of computers that are linked together by a network. Distributed systems are ubiquitous in modern society. The canonical example of such a system, the Internet, has been growing exponentially ever since its inception in the late 1970's. The number of host computers that are part of it has been doubling each year since 1980. The question of how to program a distributed system is therefore of major importance.

This chapter shows one approach to programming a distributed system. For the rest of the chapter, we assume that each computer has an operating system that supports the concept of process and provides network communication. Programming a distributed system then means to write a program for each process such that all processes taken together implement the desired application. For the operating system, a process is a unit of concurrency. This means that if we abstract away from the fact that the application is spread over different processes, this is just a case of concurrent programming. Ideally, distributed programming would be just a kind of concurrent programming, and the techniques we have seen earlier in the book would still apply.

Distributed programming is complicated

Unfortunately, things are not so simple. Distributed programming is more complicated than concurrent programming for the following reasons:

• Each process has its own address space. Data cannot be transferred from one process to another without some translation.

• The network has limited performance. Typically, the basic network operations are many orders of magnitude slower than the basic operations inside one process. At the time of publication of this book, network transfer time is measured in milliseconds, whereas computational operations are done in nanoseconds or less. This enormous disparity is not projected to change for the foreseeable future.

• Some resources are localized. There are many resources that can only be used at one particular computer due to physical constraints. Localized resources are typically peripherals such as input/output (display screen, keyboard/mouse, file system, printer). They can be more subtle, such as a commercial application that can only be run on a particular computer because it is licensed there.

• The distributed system can fail partially. The system consists of many components that are only loosely connected. It might be that part of the network stops working or that some of the computers stop working.

• The distributed system is open. Independent users and computations cohabit the system. They share the system's resources and they may compete or collaborate. This gives problems of security (protection against malicious intent) and naming (finding one another).

How do we manage this complexity? Let us attempt to use the principle of separation of concerns. According to this principle, we can divide the problem into an ideal case and a series of non-ideal extensions. We give a solution for the ideal case and we show how to modify the solution to handle the extensions.

The network transparency approach

In the ideal case, the network is fast, resources can be used everywhere, all computers are up and running, and all users trust one another. In this case there is a solution to the complexity problem: network transparency.
That is, we implement the language so that a program will run correctly independently of how it is partitioned across the distributed system. The language has a distributed implementation to guarantee this property. Each language entity is implemented by one or more distribution protocols, which all are carefully designed to respect the language semantics. For example, the language could provide the concept of an object. An object can be implemented as a stationary object, which means that it resides on one process and other processes can invoke it with exactly the same syntax as if it were local. The behavior will be different in the nonlocal case (there will be a round trip of network messages), but this difference is invisible from the programmer's point of view.

Another possible distribution protocol for an object is the cached object. In this protocol, any process invoking the object will first cause the object to become local to it. From then on, all invocations from that process will be local ones (until some other process causes the object to move away). The point is that both stationary and cached objects have exactly the same behavior from the language point of view.

With network transparency, programming a distributed system becomes simple. We can reuse all the techniques of concurrent programming we saw throughout the book. All the complexity is hidden inside the language implementation. This is a real complexity, but given the conditions of the ideal case, it can be realistically implemented. It provides all the distribution protocols. It translates data between the address spaces of the processes. Translating to serial form is called marshaling and translating back is called unmarshaling. The term serialization is also used. It does distributed garbage collection, i.e., not reclaiming a local entity if there is still some remote reference.

The idea of making a distributed language operation similar to a local language operation has a long history. The first implementation was the Remote Procedure Call (RPC), done in the early 1980's [18]. A call to a remote procedure behaves in the same way, under ideal conditions, as a local procedure. Recently, the idea has been extended to object-oriented programming by allowing methods to be invoked remotely. This is called Remote Method Invocation (RMI). This technique has been made popular by the Java programming language [186].
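To make the transparency claim concrete, here is a minimal sketch in Oz (our own illustration, not an example from the chapter; the Counter class and the identifier C are hypothetical). The invocation syntax is the same whether C denotes an object created locally or a reference to a stationary object hosted on another process, for example one obtained through the Connection module introduced later in this chapter; only the network cost differs.

   declare
   class Counter
      attr val
      meth init val:=0 end
      meth inc val:=@val+1 end
      meth get(X) X=@val end
   end
   C={New Counter init}

   % These calls look identical for a local object and for a remote
   % (stationary) object; the round trip of messages is invisible here.
   {C inc}
   local X in {C get(X)} {Browse X} end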
Beyond network transparency

Network transparency solves the problem in the ideal case. The next step is to handle the non-ideal extensions. Handling all of them at the same time while keeping things simple is a research problem that is still unsolved. In this chapter we only show the tip of the iceberg of how it could be done. We give a practical introduction to each of the following extensions:

• Network awareness (i.e., performance). We show how choosing the distribution protocol allows us to tune the performance without changing the correctness of the program.

• Openness. We show how independent computations can connect together. In this we are aided because Oz is a dynamically-typed language: all type information is part of the language entities. This makes connecting independent computations relatively easy.

• Localized resources. We show how to package a computation into a component that knows what localized resources it needs. Installing this component in a process should connect it to these resources automatically. We already have a way to express this, using the concept of functor. A functor has an import declaration that lists what modules it needs. If resources are visible as modules, then we can use this to solve the problem of linking to localized resources (see the functor sketch after this list).

• Failure detection. We show how to detect partial failure in a way usable to the application program. The program can use this information to do fault confinement and possibly to repair the situation and continue working. While failure detection breaks transparency, doing it in the language allows us to build abstractions that hide the faults, e.g., using redundancy to implement fault tolerance. These abstractions, if desired, could be used to regain transparency.
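As a sketch of the functor idea (our own example, not code from the chapter; the functor's export and procedure names are ours), the following component declares in its import that it needs the System module. When the component is installed in a process, the module manager is expected to link that import to the resource of the installing process, so its output appears there.

   functor
   import
      System   % localized resource, supplied by the process that installs the component
   export
      report:Report
   define
      proc {Report Msg}
         {System.showInfo Msg}   % prints Msg on the installing process
      end
   end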
This brief introduction leaves out many issues such as security, naming, resource management, and building fault tolerance abstractions. But it gives a good overview of the general issues in the area of distributed programming.

Structure of the chapter

This chapter consists of the following parts:

• Sections 11.1 and 11.2 set the stage by giving a taxonomy of distributed systems and by explaining our distributed computation model.

• Sections 11.3–11.6 show how to program in this distribution model. We first show how to program with declarative data and then with state. We handle state separately because it involves more sophisticated and expensive distribution protocols. We then explain the concept of network awareness, which is important for performance reasons. Finally, we show some common distributed programming patterns.

• Section 11.7 explains the distributed protocols in more detail. It singles out two particularly interesting protocols, the mobile state protocol and the distributed binding protocol.

• Section 11.8 introduces partial failure. It explains and motivates the two failures we detect, permanent process failure and temporary network inactivity. It gives some simple programming techniques including an abstraction to create resilient server objects.

• Section 11.9 briefly discusses the issue of security and how it affects writing distributed applications.

• Section 11.10 summarizes the chapter by giving a methodology for how to build distributed applications.

11.1 Taxonomy of distributed systems

This chapter is mainly about a quite general kind of distributed system, the open collaborative system. The techniques we give can also be used for other kinds of distributed system, such as cluster computing. To explain why this is so, we give a taxonomy of distributed systems that situates the different models. Figure 11.1 shows four types of distributed system. For each type, there is a simple diagram to illustrate it. In these diagrams, circles are processors or computers, the rectangle is memory, and connecting lines are communication links (a network).

Figure 11.1: A simple taxonomy of distributed systems. (The figure's progression: a shared-memory multiprocessor; add distribution to get a distributed-memory multiprocessor, i.e., high performance, "cluster computing"; add partial failure to get a distributed-memory multiprocessor with partial failure; add openness, i.e., naming and security, to get an open distributed system for collaboration, "Internet computing".)

The figure starts with a shared-memory multiprocessor, which is a computer that consists of several processors attached to a memory that is shared between all of them. Communication between processors is extremely fast; it suffices for one processor to write a memory cell and another to read it. Coordinating the processors, so that, e.g., they all agree to do the same operation at the same time, is efficient.

Small shared-memory multiprocessors with one to eight processors are commodity items. Larger scalable shared-memory cache-coherent multiprocessors are also available but are relatively expensive. A more popular solution is to connect a set of independent computers through their I/O channels. Another popular solution is to connect off-the-shelf computers with a high-speed network. The network can be implemented as a shared bus (similar to Ethernet) or be point-to-point (separately connecting pairs of processors). It can be custom or use standard LAN (local-area network) technology. All such machines are usually called clusters or distributed-memory multiprocessors. They usually can have partial failure, i.e., where one processor fails while the others continue. In the figure, a failed computer is a circle crossed with a large X. With appropriate hardware and software the cluster can keep running, albeit with degraded performance, even if some processors are failed. That is, the probability that the cluster continues to provide its service is close to 1 even if part of the cluster is failed. This property is called high availability. A cluster with the proper hardware and software combines high performance with high availability.

In the last step, the computers are connected through a wide-area network (WAN) such as the Internet. This adds openness, in which independent computations or computers can find each other, connect, and collaborate meaningfully. Openness is the crucial difference between the world of high-performance computing and the world of collaborative computing. In addition to partial failure, openness introduces two new issues: naming and security. Naming is how computations or computers find each other. Naming is usually supported by a special part of the system called the name server. Security is how computations or computers protect themselves from each other.

Figure 11.2: The distributed computation model. (Threads, ports, cells, and variables are localized to home processes; values are not localized.)

11.2 The distribution model

We consider a computation model with both ports and cells, combining the models of Chapters 5 and 8. We refine this model to make the distribution model, which defines the network operations done for language entities when they are shared between Oz processes [71, 197, 72, 201, 73]. If distribution is disregarded (i.e., we do not care how the computation is spread over processes) and there are no failures, then the computation model of the language is the same as if it executes in one process. We assume that any process can hold a reference to a language entity on any other process.
Conceptually, there is a single global computation model that encompasses all running Mozart processes and Mozart data world-wide (even those programs that are not connected together!). The global store is the union of all the local stores. In the current implementation, connected Mozart processes primarily use TCP to communicate. To a first approximation, all data and messages sent between processes travel through TCP.

Figure 11.2 shows the computation model. To add distribution to this global view, the idea is that each language entity has a distribution behavior, which defines how distributed references to the entity interact. In the model, we annotate each language entity with a process, which is the "home process" of that entity. It is the process that coordinates the distribution behavior of the entity. Typically, it will be the process at which the entity was first created. (In Mozart, the coordination of an entity can be explicitly moved from one process to another; this issue is not discussed in this introductory chapter.) We will sometimes use the phrase consistency protocol to describe the distribution behavior of an entity. The distribution behavior is implemented by exchanging messages between Mozart processes.

What kinds of distribution behavior are important? To see this, we first distinguish between stateful, stateless, and single-assignment language entities. Each of them has a different distribution behavior:

• Stateful entities (threads, cells, ports, objects) have an internal state. The distribution behavior has to be careful to maintain a globally coherent view of the state. This puts major constraints on the kinds of efficient behavior that are possible. The simplest kind of behavior is to make them stationary. An operation on a stationary entity will traverse the network from the invoking process and be performed on the entity's home process. Other kinds of behavior are possible.

• Single-assignment entities (dataflow variables, streams) have one essential operation, namely binding. Binding a dataflow variable will bind all its distributed references to the same value. This operation is coordinated from the process on which the variable is created.

• Stateless entities, i.e., values (procedures, functions, records, classes, functors), do not need a process annotation because they are constants. They can be copied between processes.

Figure 11.3 shows a set of processes with localized threads, cells, and unbound dataflow variables. In the stateful concurrent model, the other entities can be defined in terms of these and procedure values. These basic entities have a default distributed behavior. But this behavior can be changed without changing the language semantics. For example, a remote operation on a cell could force the cell to migrate to the calling process, and thereafter perform the operation locally.

For all derived entities except ports, the distributed behavior of the defined entities can be seen as derived from the distributed behavior of their parts. In this respect ports are different. Their default distribution behavior is asynchronous (see Section 5.1). This distributed behavior does not follow from the definition of ports in terms of cells and cannot be derived from that of a cell. This means that ports are basic entities in the distribution model, just like cells.
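As a rough illustration of the three categories (our own sketch; the identifiers are hypothetical and the comments only restate the default behaviors described above), the following creates one entity of each kind:

   declare C P S X F in
   C={NewCell 0}          % stateful: stationary by default, so an update made
                          % from a remote process is performed at C's home process
   P={NewPort S}          % stateful: sends to P from any process are asynchronous;
                          % the elements appear on the stream S at P's home process
   F=fun {$ N} N+1 end    % stateless value: simply copied to any process that uses it
   % X is a single-assignment dataflow variable: binding it on any process
   % (e.g., X=42) eventually binds all of its distributed references.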
The model of this section is sufficient to express useful distributed programs, but it has one limitation: partial failures are not taken into account. In Section 11.8 we will extend the basic model to overcome this limitation.

Depending on the application's needs, entities may be given different distributed behaviors. For example, "mobile" objects (also known as "cached" objects) move to the process that is using them. These objects have the same language semantics but a different distributed behavior. This is important for tuning network performance.

Figure 11.3: Process-oriented view of the distribution model

11.3 Distribution of declarative data

Let us show how to program with the distribution model. In this section we show how distribution works for the declarative subset of the stateful concurrent model. We start by explaining how to get different processes to talk to each other.

11.3.1 Open distribution and global naming

We say a distributed computation is open if a process can connect independently with other processes running a distributed computation at run time, without necessarily knowing beforehand which process it may connect with nor the type of information it may exchange. A distributed computation is closed if it is arranged so that a single process starts and then spawns other processes on various computers it has access to. We will talk about closed distribution later.

An important issue in open distributed computing is naming. How do independent computations avoid confusion when communicating with each other? They do so by using globally-unique names for things. For example, instead of using print representations (character strings) to name procedures, ports, or objects, we use globally-unique names instead. The uniqueness should be guaranteed by the system. There are many possible ways to name entities:

• References. A reference is an unforgeable means to access any language entity. To programs, a reference is transparent, i.e., it is dereferenced when needed to access the entity. References can be local, to an entity on the current process, or remote, to an entity on a remote process. For example, a thread can reference an entity that is localized on another process. The language does not distinguish local from remote references.

• Names. A name is an unforgeable constant that is used to implement abstract data types. Names can be used for different kinds of identity and authentication abilities (see Sections 3.7.5 and 6.4). All language entities with token equality, e.g., objects, classes, procedures, functors, etc., implement their identity by means of a name embedded inside them (see Chapter 13).

• Tickets. A ticket, in the terminology of this chapter, is a global means to access any language entity. A ticket is similar to a reference, except that it is valid anywhere including outside a Mozart process. It is represented by an ASCII string, it is explicitly created and dereferenced, and it is forgeable. A computation can get a reference to an independent computation by getting a ticket from that computation.
The ticket is communicated using any communication protocol between the processes (e.g., TCP, IP, SMTP, etc.) or between the users of these processes (e.g., sneakernet, telephone, PostIt notes, etc.). Usually, these protocols can only pass simple datatypes, not arbitrary language references. But in almost all cases they support passing information coded in ASCII form. If they do, then they can pass a ticket.

• URLs (Uniform Resource Locators). A URL is a global reference to a file. The file must be accessible by a World-Wide Web server. A URL encodes the hostname of a machine that has a Web server and a file name on that machine. URLs are used to exchange persistent information between processes. A common technique is to store a ticket in a file addressed by a URL.

Within a distributed computation, all these four kinds of names can be passed between processes. References and names are pure names, i.e., they do not explicitly encode any information other than being unique. They can be used only inside a distributed computation. Tickets and URLs are impure names since they explicitly encode the information needed to dereference them: they are ASCII strings and can be read as such. Since they are encoded in ASCII, they can be used both inside and outside a distributed computation. In our case we will connect different processes together using tickets.

The Connection module

Tickets are created and used with the Connection module. This module has three basic operations:

• {Connection.offer X ?T} creates a ticket T for any reference X. The ticket can be taken just once. Attempting to take a ticket more than once will raise an exception.

• {Connection.offerUnlimited X ?T} creates a ticket T for any reference X. The ticket can be taken any number of times.

• {Connection.take T ?X} creates a reference X when given a valid ticket in T. The X refers to exactly the same language entity as the original reference that was offered when the ticket was created. A ticket can be taken at any process. If taken at a different process than where the ticket was offered, then network communication is initiated between the two processes.

With Connection, connecting computations in different processes is extremely simple. The system does a great deal of work to give this simple view. It implements the connection protocol, transparent marshaling and unmarshaling, distributed garbage collection, and a carefully-designed distribution protocol for each language entity.

11.3.2 Sharing declarative data

Sharing records

We start with a simple example. The first process has a big data structure, a record, that it wants to share. It first creates the ticket:

   X=the_novel(text:"It was a dark and stormy night. "
               author:"E.G.E. Bulwer-Lytton"
               year:1803)
   {Show {Connection.offerUnlimited X}}

(Here, as in the subsequent examples, we leave out declare for brevity, but we keep declare in for clarity.) This example creates the ticket with Connection.offerUnlimited and displays it in the Mozart emulator window (with Show). Any other process that wants to get a reference to X just has to know the ticket. Here is what the other process does:

   X2={Connection.take 'ticket comes here'}

(To make this work, you have to replace the text 'ticket comes here' by what was displayed by the first process.) That's it. The operation Connection.take takes the ticket and returns a language reference, which we put in X2. Because of network transparency, both X and X2 behave identically.
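To illustrate the URL technique mentioned earlier, the following sketch stores a ticket in a file that a web server makes available. This is our own example, not code from the chapter: the file path and URL are hypothetical, and we assume the standard Mozart Pickle module (Pickle.save and Pickle.load) for writing and reading the ticket.

   % First process: offer X (e.g., the record above) and save its ticket
   % in a file that is served by a web server.
   {Pickle.save {Connection.offerUnlimited X} "/home/user/public_html/novel.ticket"}

   % Any other process: fetch the ticket from the corresponding URL and take it.
   declare
   X3={Connection.take {Pickle.load "http://www.example.org/~user/novel.ticket"}}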
Sharing functions

This works for other data types as well. Assume the first process has a function instead of a record:

   fun {MyEncoder X} (X*4449+1234) mod 33667 end
   {Show {Connection.offerUnlimited MyEncoder}}

The second process can get the function easily:

   E2={Connection.take 'MyEncoders ticket'}
   {Show {E2 10000}}   % Call the function

[...]

11.7 Distribution protocols

Figure 11.6: Graph notation for a distributed cell

Figure 11.7: Moving the state pointer

[...]

11.8 Partial failure

Let us now extend the distribution model with support for partial failure. We first explain the kinds of failures we detect and how we detect them. Then we show some simple ways to use this detection in applications to handle partial failure.

11.8.1 Fault model

The fault model defines the kinds of [...]

11.8.2 Simple cases of failure handling

We show how to handle two cases, namely disconnected operation and failure detection. We show how to use the Fault module in either case.

Disconnected operation

Assume that you are running part of an application locally on your machine through [...]

• The third scenario is best of all. Here the asynchronous calls are initiated before we need them. When we need them, their calculation is already in progress. This can take much less than one round-trip message delay. The first scenario is standard sequential object-oriented [...]

11.6 Common distributed programming patterns

[...]

Date posted: 14/08/2014, 10:22
