
11 Condor and the Grid

Douglas Thain, Todd Tannenbaum, and Miron Livny
University of Wisconsin-Madison, Madison, Wisconsin, United States

From Grid Computing – Making the Global Infrastructure a Reality, edited by F. Berman, A. Hey and G. Fox. © 2003 John Wiley & Sons, Ltd. ISBN: 0-470-85319-0.

11.1 INTRODUCTION

Since the early days of mankind the primary motivation for the establishment of communities has been the idea that by being part of an organized group the capabilities of an individual are improved. The great progress in the area of intercomputer communication led to the development of means by which stand-alone processing subsystems can be integrated into multicomputer communities.
– Miron Livny, Study of Load Balancing Algorithms for Decentralized Distributed Processing Systems, Ph.D. thesis, July 1983.

Ready access to large amounts of computing power has been a persistent goal of computer scientists for decades. Since the 1960s, visions of computing utilities as pervasive and as simple as the telephone have motivated system designers [1]. It was recognized in the 1970s that such power could be achieved inexpensively with collections of small devices rather than expensive single supercomputers. Interest in schemes for managing distributed processors [2, 3, 4] became so popular that there was even once a minor controversy over the meaning of the word 'distributed' [5].

As this early work made it clear that distributed computing was feasible, theoretical researchers began to notice that distributed computing would be difficult. When messages may be lost, corrupted, or delayed, precise algorithms must be used in order to build an understandable (if not controllable) system [6, 7, 8, 9]. Such lessons were not lost on the system designers of the early 1980s. Production systems such as Locus [10] and Grapevine [11] recognized the fundamental tension between consistency and availability in the face of failures.

In this environment, the Condor project was born. At the University of Wisconsin, Miron Livny combined his 1983 doctoral thesis on cooperative processing [12] with the powerful Crystal Multicomputer [13] designed by DeWitt, Finkel, and Solomon and the novel Remote UNIX [14] software designed by Litzkow. The result was Condor, a new system for distributed computing. In contrast to the dominant centralized control model of the day, Condor was unique in its insistence that every participant in the system remain free to contribute as much or as little as it cared to.

Modern processing environments that consist of large collections of workstations interconnected by high capacity network raise the following challenging question: can we satisfy the needs of users who need extra capacity without lowering the quality of service experienced by the owners of under utilized workstations? The Condor scheduling system is our answer to this question.
– Michael Litzkow, Miron Livny, and Matt Mutka, Condor: A Hunter of Idle Workstations, IEEE 8th Intl. Conf. on Dist. Comp. Sys., June 1988.

The Condor system soon became a staple of the production-computing environment at the University of Wisconsin, partially because of its concern for protecting individual interests [15]. A production setting can be both a curse and a blessing: the Condor project learned hard lessons as it gained real users.
It was soon discovered that inconvenienced machine owners would quickly withdraw from the community, so it was decreed that owners must maintain control of their machines at any cost. A fixed schema for representing users and machines was in constant flux and so led to the development of a schema-free resource allocation language called ClassAds [16, 17, 18]. It has been observed [19] that most complex systems struggle through an adolescence of five to seven years. Condor was no exception.

The most critical support task is responding to those owners of machines who feel that Condor is in some way interfering with their own use of their machine. Such complaints must be answered both promptly and diplomatically. Workstation owners are not used to the concept of somebody else using their machine while they are away and are in general suspicious of any new software installed on their system.
– Michael Litzkow and Miron Livny, Experience With The Condor Distributed Batch System, IEEE Workshop on Experimental Dist. Sys., October 1990.

The 1990s saw tremendous growth in the field of distributed computing. Scientific interests began to recognize that coupled commodity machines were significantly less expensive than supercomputers of equivalent power [20]. A wide variety of powerful batch execution systems such as LoadLeveler [21] (a descendant of Condor), LSF [22], Maui [23], NQE [24], and PBS [25] spread throughout academia and business. Several high-profile distributed computing efforts such as SETI@Home and Napster raised the public consciousness about the power of distributed computing, generating not a little moral and legal controversy along the way [26, 27]. A vision called grid computing began to build the case for resource sharing across organizational boundaries [28].

Throughout this period, the Condor project immersed itself in the problems of production users. As new programming environments such as PVM [29], MPI [30], and Java [31] became popular, the project added system support and contributed to standards development. As scientists grouped themselves into international computing efforts such as the Grid Physics Network [32] and the Particle Physics Data Grid (PPDG) [33], the Condor project took part from initial design to end-user support. As new protocols such as Grid Resource Access and Management (GRAM) [34], Grid Security Infrastructure (GSI) [35], and GridFTP [36] developed, the project applied them to production systems and suggested changes based on the experience. Through the years, the Condor project adapted computing structures to fit changing human communities.

Many previous publications about Condor have described the features of the system in fine detail. In this chapter, we will lay out a broad history of the Condor project and its design philosophy. We will describe how this philosophy has led to an organic growth of computing communities and discuss the planning and scheduling techniques needed in such an uncontrolled system. Our insistence on dividing responsibility has led to a unique model of cooperative computing called split execution. We will conclude by describing how real users have put Condor to work.

11.2 THE PHILOSOPHY OF FLEXIBILITY

As distributed systems scale to ever-larger sizes, they become more and more difficult to control or even to describe.
International distributed systems are heterogeneous in every way: they are composed of many types and brands of hardware, they run various operating systems and applications, they are connected by unreliable networks, and they change configuration constantly as old components become obsolete and new components are powered on. Most importantly, they have many owners, each with private policies and requirements that control their participation in the community.

Flexibility is the key to surviving in such a hostile environment. Five admonitions outline our philosophy of flexibility.

Let communities grow naturally: Humanity has a natural desire to work together on common problems. Given tools of sufficient power, people will organize the computing structures that they need. However, human relationships are complex. People invest their time and resources into many communities to varying degrees. Trust is rarely complete or symmetric. Communities and contracts are never formalized with the same level of precision as computer code. Relationships and requirements change over time. Thus, we aim to build structures that permit but do not require cooperation. Relationships, obligations, and schemata will develop according to user necessity.

Plan without being picky: Progress requires optimism. In a community of sufficient size, there will always be idle resources available to do work. But there will also always be resources that are slow, misconfigured, disconnected, or broken. An overdependence on the correct operation of any remote device is a recipe for disaster. As we design software, we must spend more time contemplating the consequences of failure than the potential benefits of success. When failures come our way, we must be prepared to retry or reassign work as the situation permits.

Leave the owner in control: To attract the maximum number of participants in a community, the barriers to participation must be low. Users will not donate their property to the common good unless they maintain some control over how it is used. Therefore, we must be careful to provide tools for the owner of a resource to set use policies and even instantly retract it for private use. (A configuration sketch illustrating such policies follows this list.)

Lend and borrow: The Condor project has developed a large body of expertise in distributed resource management. Countless other practitioners are experts in related fields such as networking, databases, programming languages, and security. The Condor project aims to give the research community the benefits of our expertise while accepting and integrating knowledge and software from other sources.

Understand previous research: We must always be vigilant to understand and apply previous research in computer science. Our field has developed over many decades and is known by many overlapping names such as operating systems, distributed computing, metacomputing, peer-to-peer computing, and grid computing. Each of these emphasizes a particular aspect of the discipline, but all are united by fundamental concepts. If we fail to understand and apply previous research, we will at best rediscover well-charted shores. At worst, we will wreck ourselves on well-charted rocks.
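To make the 'leave the owner in control' admonition concrete, the sketch below shows the kind of policy expressions a machine owner can place in a Condor configuration file. The expression names (START, SUSPEND, PREEMPT) and machine attributes (KeyboardIdle, LoadAvg, Activity) follow Condor's documented configuration language; the specific thresholds are illustrative assumptions, not recommended values.

    # Start foreign jobs only when the keyboard has been idle for 15 minutes
    # and the machine's own load is low.
    START   = (KeyboardIdle > 15 * 60) && (LoadAvg < 0.3)

    # Suspend the job the moment the owner returns to the keyboard.
    SUSPEND = (KeyboardIdle < 60)

    # Evict the job if it has been suspended for more than 10 minutes,
    # returning the machine entirely to its owner.
    PREEMPT = (Activity == "Suspended") && ((CurrentTime - EnteredCurrentActivity) > 600)

An owner who wants to withdraw a machine from the pool altogether can simply set START to False; Condor will then never place work there.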
11.3 THE CONDOR PROJECT TODAY

At present, the Condor project consists of over 30 faculty, full-time staff, and graduate and undergraduate students working at the University of Wisconsin-Madison. Together the group has over a century of experience in distributed computing concepts and practices, systems programming and design, and software engineering.

Condor is a multifaceted project engaged in five primary activities.

Research in distributed computing: Our research focus areas, and the tools we have produced (several of which will be explored below), are as follows:

1. Harnessing the power of opportunistic and dedicated resources (Condor).
2. Job management services for grid applications (Condor-G, DaPSched).
3. Fabric management services for grid resources (Condor, Glide-In, NeST).
4. Resource discovery, monitoring, and management (ClassAds, Hawkeye).
5. Problem-solving environments (MW, DAGMan).
6. Distributed I/O technology (Bypass, PFS, Kangaroo, NeST).

Participation in the scientific community: Condor participates in national and international grid research, development, and deployment efforts. The actual development and deployment activities of the Condor project are a critical ingredient of its success. Condor is actively involved in efforts such as the Grid Physics Network (GriPhyN) [32], the International Virtual Data Grid Laboratory (iVDGL) [37], the Particle Physics Data Grid (PPDG) [33], the NSF Middleware Initiative (NMI) [38], the TeraGrid [39], and the NASA Information Power Grid (IPG) [40]. Further, Condor is a founding member of the National Computational Science Alliance (NCSA) [41] and a close collaborator of the Globus project [42].

Engineering of complex software: Although a research project, Condor has a significant software production component. Our software is routinely used in mission-critical settings by industry, government, and academia. As a result, a portion of the project resembles a software company. Condor is built every day on multiple platforms, and an automated regression test suite containing over 200 tests stresses the current release candidate each night. The project's code base contains nearly a half-million lines, and significant pieces are closely tied to the underlying operating system. Two versions of the software, a stable version and a development version, are simultaneously developed in a multiplatform (Unix and Windows) environment. Within a given stable version, only bug fixes to the code base are permitted – new functionality must first mature and prove itself within the development series. Our release procedure makes use of multiple test beds. Early development releases run on test pools consisting of about a dozen machines; later in the development cycle, release candidates run on the production UW-Madison pool with over 1000 machines and dozens of real users. Final release candidates are installed at collaborator sites and carefully monitored. The goal is that each stable version release of Condor should be proven to operate in the field before being made available to the public.

Maintenance of production environments: The Condor project is also responsible for the Condor installation in the Computer Science Department at the University of Wisconsin-Madison, which consists of over 1000 CPUs. This installation is also a major compute resource for the Alliance Partners for Advanced Computational Servers (PACS) [43]. As such, it delivers compute cycles to scientists across the nation who have been granted computational resources by the National Science Foundation. In addition, the project provides consulting and support for other Condor installations at the University and around the world.
Best-effort support from the Condor software developers is available at no charge via ticket-tracked e-mail. Institutions using Condor can also opt for contracted support – for a fee, the Condor project will provide priority e-mail and telephone support with guaranteed turnaround times.

Education of students: Last but not least, the Condor project trains students to become computer scientists. Part of this education is immersion in a production system. Students graduate with the rare experience of having nurtured software from the chalkboard all the way to the end user. In addition, students participate in the academic community by designing, performing, writing, and presenting original research. At the time of this writing, the project employs 20 graduate students including 7 Ph.D. candidates.

11.3.1 The Condor software: Condor and Condor-G

When most people hear the word 'Condor', they do not think of the research group and all of its surrounding activities. Instead, what usually comes to mind is strictly the software produced by the Condor project: the Condor High Throughput Computing System, often referred to simply as Condor.

11.3.1.1 Condor: a system for high-throughput computing

Condor is a specialized job and resource management system (RMS) [44] for compute-intensive jobs. Like other full-featured systems, Condor provides a job management mechanism, scheduling policy, priority scheme, resource monitoring, and resource management [45, 46]. Users submit their jobs to Condor, and Condor subsequently chooses when and where to run them based upon a policy, monitors their progress, and ultimately informs the user upon completion.

While providing functionality similar to that of a more traditional batch queueing system, Condor's novel architecture and unique mechanisms allow it to perform well in environments in which a traditional RMS is weak – areas such as sustained high-throughput computing and opportunistic computing. The goal of a high-throughput computing environment [47] is to provide large amounts of fault-tolerant computational power over prolonged periods of time by effectively utilizing all resources available to the network. The goal of opportunistic computing is the ability to utilize resources whenever they are available, without requiring 100% availability. The two goals are naturally coupled: high-throughput computing is most easily achieved through opportunistic means.

Some of the enabling mechanisms of Condor include the following:

• ClassAds: The ClassAd mechanism in Condor provides an extremely flexible and expressive framework for matching resource requests (e.g. jobs) with resource offers (e.g. machines). ClassAds allow Condor to adapt to nearly any desired resource utilization policy and to adopt a planning approach when incorporating Grid resources. We will discuss this approach further in a section below. (An illustrative pair of ClassAds appears after this list.)

• Job checkpoint and migration: With certain types of jobs, Condor can transparently record a checkpoint and subsequently resume the application from the checkpoint file. A periodic checkpoint provides a form of fault tolerance and safeguards the accumulated computation time of a job. A checkpoint also permits a job to migrate from one machine to another, enabling Condor to perform low-penalty preemptive-resume scheduling [48].

• Remote system calls: When running jobs on remote machines, Condor can often preserve the local execution environment via remote system calls. Remote system calls are one of Condor's mobile sandbox mechanisms: they redirect all of a job's I/O-related system calls back to the machine that submitted the job. Therefore, users do not need to make data files available on remote workstations before Condor executes their programs there, even in the absence of a shared file system.
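As a small illustration of matchmaking, the sketch below shows a job ClassAd (a resource request) and a machine ClassAd (a resource offer) of the kind matched by the Condor matchmaker. A match requires the Requirements expression of each ad to evaluate to true in the context of the other; Rank expresses a preference among acceptable matches. The attribute names follow the published ClassAd examples, but the specific values, user, and host are illustrative assumptions.

    # A job ClassAd, produced from a user's submission.
    MyType       = "Job"
    TargetType   = "Machine"
    Owner        = "alice"                       # hypothetical user
    Cmd          = "/home/alice/bin/sim"         # hypothetical executable
    ImageSize    = 28000                         # memory footprint, in KB
    Requirements = (Arch == "INTEL") && (OpSys == "LINUX") && (Memory * 1024 >= ImageSize)
    Rank         = KFlops                        # prefer faster machines

    # A machine ClassAd, advertised by a workstation.
    MyType       = "Machine"
    TargetType   = "Job"
    Name         = "ws01.example.edu"            # hypothetical host
    Arch         = "INTEL"
    OpSys        = "LINUX"
    Memory       = 256                           # MB
    KFlops       = 21893
    Requirements = (LoadAvg < 0.3) && (KeyboardIdle > 15 * 60)
    Rank         = (Owner == "alice")            # prefer jobs from this (hypothetical) user

Either party can still refuse the match later, since each side re-evaluates its own Requirements when the agent contacts the resource to claim it.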
With these mechanisms, Condor can do more than effectively manage dedicated compute clusters [45, 46]. Condor can also scavenge and manage wasted CPU power from otherwise idle desktop workstations across an entire organization with minimal effort. For example, Condor can be configured to run jobs on desktop workstations only when the keyboard and CPU are idle. If a job is running on a workstation when the user returns and hits a key, Condor can migrate the job to a different workstation and resume it right where it left off. Figure 11.1 shows the large amount of computing capacity available from idle workstations.

Figure 11.1 The available capacity of the UW-Madison Condor pool in May 2001. Notice that a significant fraction of the machines were available for batch use, even during the middle of the work day. This figure was produced with CondorView, an interactive tool for visualizing Condor-managed resources.

Moreover, these same mechanisms enable preemptive-resume scheduling of dedicated compute cluster resources. This allows Condor to cleanly support priority-based scheduling on clusters. When any node in a dedicated cluster is not scheduled to run a job, Condor can utilize that node in an opportunistic manner – but when a schedule reservation requires that node again in the future, Condor can preempt any opportunistic computing job that may have been placed there in the meantime [30]. The end result is that Condor is used to seamlessly combine all of an organization's computational power into one resource.

The first version of Condor was installed as a production system in the UW-Madison Department of Computer Sciences in 1987 [14]. Today, in our department alone, Condor manages more than 1000 desktop workstation and compute cluster CPUs. It has become a critical tool for UW researchers. Hundreds of organizations in industry, government, and academia are successfully using Condor to establish compute environments ranging in size from a handful to thousands of workstations.
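To give a flavor of how work enters the system, here is a minimal sketch of a submit description file of the kind handed to condor_submit. The command names follow Condor's documented submit language; the executable, file names, and resource constraints are illustrative assumptions. The standard universe is the mode that enables the checkpointing and remote system calls described above (the program must be relinked with condor_compile); ordinary binaries would use the vanilla universe instead.

    # Submit 50 instances of a (hypothetical) simulation, each with its own input.
    universe     = standard
    executable   = sim
    arguments    = -seed $(Process)
    input        = input.$(Process)
    output       = output.$(Process)
    error        = error.$(Process)
    log          = sim.log
    requirements = (Arch == "INTEL") && (OpSys == "LINUX") && (Memory >= 128)
    rank         = KFlops
    queue 50

The requirements and rank lines become the Requirements and Rank attributes of the resulting job ClassAds, so submission policy and matchmaking use the same language.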
11.3.1.2 Condor-G: a computation management agent for Grid computing

Condor-G [49] represents the marriage of technologies from the Globus and the Condor projects. From Globus [50] comes the use of protocols for secure interdomain communications and standardized access to a variety of remote batch systems. From Condor comes the user concerns of job submission, job allocation, error recovery, and creation of a friendly execution environment. The result is very beneficial for the end user, who is now enabled to utilize large collections of resources that span multiple domains as if they all belonged to the user's personal domain.

Condor technology can exist at both the frontends and backends of a middleware environment, as depicted in Figure 11.2. Condor-G can be used as the reliable submission and job management service for one or more sites, the Condor High Throughput Computing system can be used as the fabric management service (a grid 'generator') for one or more sites, and finally Globus Toolkit services can be used as the bridge between them. In fact, Figure 11.2 can serve as a simplified diagram for many emerging grids, such as the USCMS Test bed Grid [51], established for the purpose of high-energy physics event reconstruction. Another example is the European Union DataGrid [52] project's Grid Resource Broker, which utilizes Condor-G as its job submission service [53].

Figure 11.2 Condor technologies in Grid middleware. Grid middleware consisting of technologies from both Condor and Globus sits between the user's environment and the actual fabric (resources). (The figure's layers, from user to fabric: application or problem solver; Condor (Condor-G); Globus Toolkit; Condor; processing, storage, and communication resources.)
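For comparison with the earlier submit file, a Condor-G submission of that era differed mainly in its universe: instead of matching against a local pool, the job is routed through Globus GRAM to a remote batch system. The sketch below follows the submit syntax used by Condor-G around the time of this chapter (the globus universe and the globusscheduler command); the host name and job manager are hypothetical, and later versions of Condor express the same thing with the grid universe.

    # Route the job through Globus GRAM to a (hypothetical) remote gatekeeper,
    # whose jobmanager hands it to that site's local batch system.
    universe        = globus
    globusscheduler = gatekeeper.example.edu/jobmanager-condor
    executable      = sim
    output          = sim.out
    error           = sim.err
    log             = sim.log
    queue

Condor-G supplies the persistence and failure handling of a Condor agent, while GSI credentials and GRAM carry the job across the organizational boundary.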
11.4 A HISTORY OF COMPUTING COMMUNITIES

Over the history of the Condor project, the fundamental structure of the system has remained constant while its power and functionality have steadily grown. The core components, known as the kernel, are shown in Figure 11.3. In this section, we will examine how a wide variety of computing communities may be constructed with small variations to the kernel.

Figure 11.3 The Condor kernel. This figure shows the major processes in a Condor system. The common generic name for each process is given in large print; in parentheses are the technical Condor-specific names used in some publications: user, problem solver (DAGMan, Master-Worker), agent (schedd), matchmaker (central manager), shadow (shadow), job, resource (startd), and sandbox (starter).

Briefly, the kernel works as follows. The user submits jobs to an agent. The agent is responsible for remembering jobs in persistent storage while finding resources willing to run them. Agents and resources advertise themselves to a matchmaker, which is responsible for introducing potentially compatible agents and resources. Once introduced, an agent is responsible for contacting a resource and verifying that the match is still valid. To actually execute a job, each side must start a new process. At the agent, a shadow is responsible for providing all of the details necessary to execute a job. At the resource, a sandbox is responsible for creating a safe execution environment for the job and protecting the resource from any mischief.

Let us begin by examining how agents, resources, and matchmakers come together to form Condor pools. Later in this chapter, we will return to examine the other components of the kernel. The initial conception of Condor is shown in Figure 11.4. Agents and resources independently report information about themselves to a well-known matchmaker, which then makes the same information available to the community. A single machine typically runs both an agent and a resource daemon and is capable of submitting and executing jobs. However, agents and resources are logically distinct. A single machine may run either or both, reflecting the needs of its owner. Furthermore, a machine may run more than one instance of an agent. Each user sharing a single machine could, for instance, run a personal agent. This functionality is enabled by the agent implementation, which does not use any fixed IP port numbers or require any superuser privileges.

Figure 11.4 A Condor pool ca. 1988. An agent (A) is shown executing a job on a resource (R) with the help of a matchmaker (M). Step 1: The agent and the resource advertise themselves to the matchmaker. Step 2: The matchmaker informs the two parties that they are potentially compatible. Step 3: The agent contacts the resource and executes a job.

Each of the three parties – agents, resources, and matchmakers – is independent and individually responsible for enforcing its owner's policies. The agent enforces the submitting user's policies on what resources are trusted and suitable for running jobs. The resource enforces the machine owner's policies on what users are to be trusted and serviced. The matchmaker is responsible for enforcing community policies such as admission control. It may choose to admit or reject participants entirely on the basis of their names or addresses and may also set global limits, such as the fraction of the pool allocable to any one agent. Each participant is autonomous, but the community as a single entity is defined by the common selection of a matchmaker.
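As an example of the community policies a matchmaker's owner can enforce, the sketch below shows the kind of host-based admission control supported by classic Condor configuration on the central manager. The macro names follow the documented configuration language of that era; the domain patterns are illustrative assumptions, and finer-grained limits such as per-agent shares of the pool are handled by the matchmaker's user-priority mechanism rather than by these macros.

    # Admission control on the matchmaker (central manager): only machines in
    # these (hypothetical) domains may advertise themselves or submit jobs.
    HOSTALLOW_WRITE = *.cs.wisc.edu, *.example.edu
    # Anyone may query the pool's status.
    HOSTALLOW_READ  = *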
As the Condor software developed, pools began to sprout up around the world. In the original design, it was very easy to accomplish resource sharing in the context of one community. A participant merely had to get in touch with a single matchmaker to consume or provide resources. However, a user could only participate in one community: that defined by a matchmaker. Users began to express their need to share across organizational boundaries.

This observation led to the development of gateway flocking in 1994 [54]. At that time, there were several hundred workstations at Wisconsin, while tens of workstations were scattered across several organizations in Europe. Combining all of the machines into one Condor pool was not a possibility because each organization wished to retain existing community policies enforced by established matchmakers. Even at the University of Wisconsin, researchers were unable to share resources between the separate engineering and computer science pools.

The concept of gateway flocking is shown in Figure 11.5. Here, the structure of two existing pools is preserved, while two gateway nodes pass information about participants [...]

[...] Direct flocking, although less powerful, was much simpler to build and much easier for users to understand and deploy. About 1998, a vision of a worldwide computational Grid began to grow [28]. A significant early piece in the Grid computing vision was a uniform interface for batch execution. The Globus Project [...]


REFERENCES

1. Organick, E. I. (1972) The MULTICS System: An Examination of Its Structure. Cambridge, MA: The MIT Press.
2. Stone, H. S. (1977) Multiprocessor scheduling with the aid of network flow algorithms. IEEE Transactions on Software Engineering, SE-3(1), 85–93.
3. Chow, Y. C. and Kohler, W. H. (1977) Dynamic load balancing in homogeneous two-processor distributed systems. Proceedings of the International Symposium on Computer Performance, Modeling, Measurement and Evaluation, Yorktown Heights, New York, August, 1977, pp. 39–52.
5. Enslow, P. H. (1978) What is a distributed processing system? Computer, 11(1), 13–21.
6. Lamport, L. (1978) Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7), 558–565.
7. Lamport, L., Shostak, R. and Pease, M. (1982) The Byzantine generals problem. ACM Transactions on Programming Languages and Systems, 4(3), 382–402.
8. Chandy, K. and Lamport, L. (1985) Distributed snapshots: determining global states of distributed systems. ACM Transactions on Computer Systems, 3(1), 63–75.
9. Needham, R. M. (1979) Systems aspects of the Cambridge Ring. Proceedings of the Seventh Symposium on Operating Systems Principles, Pacific Grove, CA, 1979, pp. 82–85.
10. Walker, B., Popek, G., English, R., Kline, C. and Thiel, G. (1983) The LOCUS distributed operating system. Proceedings of the 9th Symposium on Operating Systems Principles (SOSP), November, 1983, pp. 49–70.
11. Birrell, A. D., Levin, R., Needham, R. M. and Schroeder, M. D. (1982) Grapevine: an exercise in distributed computing. Communications of the ACM, 25(4), 260–274.
12. Livny, M. (1983) The Study of Load Balancing Algorithms for Decentralized Distributed Processing Systems. Ph.D. thesis, Weizmann Institute of Science.
13. DeWitt, D., Finkel, R. and Solomon, M. (1984) The CRYSTAL multicomputer: design and implementation experience. Technical Report 553, UW-Madison Computer Sciences Department, September, 1984.
15. Litzkow, M. and Livny, M. (1990) Experience with the Condor distributed batch system. Proceedings of the IEEE Workshop on Experimental Distributed Systems, October, 1990.
16. Raman, R., Livny, M. and Solomon, M. (1998) Matchmaking: distributed resource management for high throughput computing. Proceedings of the Seventh IEEE International Symposium on High Performance Distributed Computing (HPDC7), July, 1998.
37. The International Virtual Data Grid Laboratory (iVDGL), http://www.ivdgl.org, August, 2002.
41. The National Computational Science Alliance, http://www.ncsa.uiuc.edu/About/Alliance/, August, 2002.
43. Alliance Partners for Advanced Computational Services (PACS), http://www.ncsa.uiuc.edu/About/Alliance/Teams, 2002.
65. Condor Manual, Condor Team, 2001, available from http://www.cs.wisc.edu/condor/manual.
80. HawkEye: A Monitoring and Management Tool for Distributed Systems, 2002, http://www.cs.wisc.edu/condor/hawkeye.
81. Public Key Infrastructure Lab (PKI-Lab), http://www.cs.wisc.edu/pkilab, August, 2002.
