
Building Secure and Reliable Network Applications, part 4


Document information: 51 pages, 364.78 KB

Contents

Chapter 8: Operating System Support for High Performance Communication

U-Net maps the communication segment of a process directly into its address space, pinning the pages into physical memory and disabling the hardware caching mechanisms so that updates to a segment will be applied directly to that segment. The set of communication segments for all the processes using U-Net is mapped to be visible to the device controller over the I/O bus of the processor used; the controller can thus initiate DMA or direct memory transfers in and out of the shared region as needed and without delaying for any sort of setup. A limitation of this approach is that the I/O bus is a scarce resource shared by all devices on a system, and the U-Net mapping excludes any other possible mapping for this region. However, on some machines (for example, the cluster-style multiprocessors discussed in Chapter 24), there are no other devices contending for this mapping unit, and dedicating it to the use of the communications subsystem makes perfect sense.

The communications segment is directly monitored by the device controller. U-Net accomplishes this by reprogramming the device controller, although it is also possible to imagine an implementation in which a kernel driver would provide this functionality. The controller watches for outgoing messages on the send queue; if one is present, it immediately sends the message. The delay between when a message is placed on the send queue and when sending starts is never larger than a few microseconds. Incoming messages are automatically placed on the receive queue unless the pool of memory is exhausted; should that occur, any incoming messages are discarded silently. To accomplish this, U-Net need only look at the first bytes of the incoming message, which give the ATM channel number on which it was transmitted. These are used to index into a table maintained within the device controller that gives the range of addresses within which the communications segment can be found, and the heads of the receive and free queues are then located at a fixed offset from the base of the segment. To minimize latency, the addresses of a few free memory regions are cached in the device controller's memory.

Such an approach may seem complex because of the need to reprogram the device controller. In fact, however, the concept of a programmable device controller is a very old one (IBM's channel architecture for the 370 series of computers already supported a similar "programmable channels" architecture nearly twenty years ago). Programmability such as this remains fairly common, and device drivers that download code into controllers are not unheard of today. Thus, although unconventional, the U-Net approach is not actually "unreasonable". The style of programming required is similar to that used when implementing a device driver for use in a conventional operating system.
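To make the segment and queue handling concrete, the sketch below shows one way a pinned communication segment with send, receive and free queues might be laid out, together with the polling loop a controller could run over the send queue. This is not the actual U-Net data structure: the field names, descriptor format and queue depth are assumptions made purely for illustration.

```c
#include <stdint.h>

#define QUEUE_SLOTS 64          /* assumed queue depth */
#define MAX_PAYLOAD 1024        /* assumed per-buffer payload size */

/* One message descriptor: a length plus an offset into the segment's buffer
 * area.  Offsets rather than pointers keep the layout position-independent,
 * since the controller and the process map the segment at different
 * addresses. */
struct msg_desc {
    uint32_t length;
    uint32_t buf_offset;
};

/* A simple ring of descriptors with head/tail indices. */
struct ring {
    volatile uint32_t head;     /* next slot the consumer will examine */
    volatile uint32_t tail;     /* next slot the producer will fill    */
    struct msg_desc   slot[QUEUE_SLOTS];
};

/* The pinned communication segment: three rings at fixed offsets from the
 * segment base, followed by the message buffers themselves. */
struct comm_segment {
    struct ring send_q;         /* process -> controller            */
    struct ring recv_q;         /* controller -> process            */
    struct ring free_q;         /* buffers the process has released */
    uint8_t     buffers[QUEUE_SLOTS * MAX_PAYLOAD];
};

/* Controller-side polling of one segment's send queue.  On the real device
 * this would run in the controller's firmware; it is ordinary C here simply
 * to show the control flow: whenever a descriptor is present, the message is
 * transmitted immediately, keeping send-side latency in the microsecond
 * range. */
void poll_send_queue(struct comm_segment *seg,
                     void (*transmit)(const uint8_t *data, uint32_t len))
{
    struct ring *sq = &seg->send_q;
    while (sq->head != sq->tail) {
        struct msg_desc *d = &sq->slot[sq->head % QUEUE_SLOTS];
        transmit(&seg->buffers[d->buf_offset], d->length);
        sq->head++;             /* consume the slot */
    }
}
```

On the receive side, the controller would use the incoming ATM channel number to locate the right segment, take a buffer from the free queue, transfer the payload into it by DMA, and append a descriptor to the receive queue; if the free queue is empty, the message is silently dropped, exactly as described above.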
With this architecture, U-Net achieves impressive application-to-application performance. The technology easily saturates an ATM interface operating at the OC3 performance level of 155 Mbits/second, and measured end-to-end latencies through a single ATM switch are as low as 26 microseconds for a small message. These performance levels are also reflected in higher-level protocols: versions of UDP and TCP have been layered over U-Net and shown capable of saturating the ATM for packet sizes as low as 1 kbyte; similar performance is achieved with a standard UDP or TCP technology only for very large packets of 8 kbytes or more. Overall, performance of the approach tends to be an order of magnitude or more better than with a conventional architecture for all metrics not limited by the raw bandwidth of the ATM: throughput for small packets, latency, and computational overhead of communication. Such results emphasize the importance of rethinking standard operating system structures in light of the extremely high performance that modern computing platforms can achieve.

Returning to the point made at the start of this chapter, a technology like U-Net also improves the statistical properties of the communication channel. There are fewer places at which messages can be lost, hence reliability increases and, in well designed applications, may approach perfect reliability. The complexity of the hand-off mechanisms employed as messages pass from application to controller to ATM and back up to the receiver is greatly reduced, hence the measured latencies are much "tighter" than in a conventional environment, where dozens of events could contribute towards variation in latency. Overall, then, U-Net is not just a higher performance communication architecture, but is also one that is more conducive to the support of extremely reliable distributed software.

Figure 8-6 (the diagram shows several user processes, their communication segments, and the ATM controller attached to them over the I/O bus): U-Net shared memory architecture permits the device controller to directly map a communications region shared with each user process. The send, receive and free message queues are at known offsets within the region. The architecture provides strong protection guarantees and yet slashes the latency and CPU overheads associated with communication. In this approach, the kernel assists in setup of the segments but is not interposed on the actual I/O path used for communication once the segments are established.

8.5 Protocol Compilation Techniques

U-Net seeks to provide very high performance by supporting a standard operating system structure in which a non-standard I/O path is provided to the application program. A different direction of research, best known through the results of the SPIN project at the University of Washington [BSPS95], is concerned with building operating systems that are dynamically extensible through application programs coded in a special type-safe language and linked directly into the operating system at runtime. In effect, such a technology compiles the protocols used in the application into a form that can be executed close to the device driver. The approach results in speedups that are impressive by the standards of conventional operating systems, although less dramatic than those achieved by U-Net.

The key idea in SPIN is to exploit dynamically loadable code modules to place the communications protocol very close to the wire. The system is based on Modula-3, a powerful modern programming language similar to C++ or other modular languages, but "type safe". Among other guarantees, type safety implies that a SPIN protocol module can be trusted not to corrupt memory or to leak dynamically allocated memory resources. This is in contrast with, for example, the situation for a streams module, which must be "trusted" to respect such restrictions. SPIN creates a runtime context within which the programmer can establish communication connections, allocate and free messages, and schedule lightweight threads; a rough sketch of what such an extension interface might look like appears below.
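The following sketch is purely illustrative: SPIN extensions are written in Modula-3 and installed through SPIN's own event and handler mechanisms, which are not reproduced here, so every name and signature below is an assumption (and C stands in for Modula-3). It only conveys the flavor of a protocol module that is linked into the kernel and registers a handler with a dispatch layer:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical kernel-side interface that a loadable protocol extension
 * programs against.  Nothing here is the real SPIN API. */
struct message {
    uint8_t *data;
    size_t   len;
};

typedef void (*packet_handler)(struct message *m, void *state);

/* Services the kernel exports to extensions. */
struct kernel_services {
    int  (*register_handler)(const char *protocol_name,
                             packet_handler on_receive, void *state);
    int  (*send)(struct message *m);
    struct message *(*alloc_msg)(size_t len);
    void (*free_msg)(struct message *m);
    int  (*spawn_thread)(void (*entry)(void *), void *arg); /* lightweight threads */
};

/* A trivial extension: acknowledge every packet it receives.  Because the
 * handler runs inside the kernel, the fast path involves no copy across an
 * address-space boundary and no protection-domain crossing. */
static void ack_on_receive(struct message *m, void *state)
{
    (void)m;                             /* payload ignored in this example */
    struct kernel_services *ks = state;
    struct message *ack = ks->alloc_msg(4);
    if (ack == NULL)
        return;                          /* the extension decides what to drop */
    ack->data[0] = 'A'; ack->data[1] = 'C'; ack->data[2] = 'K'; ack->data[3] = 0;
    ks->send(ack);
    ks->free_msg(ack);
}

/* Entry point called by the loader when the module is linked into the
 * running kernel. */
int extension_init(struct kernel_services *ks)
{
    return ks->register_handler("demo-ack", ack_on_receive, ks);
}
```

The point of requiring a type-safe language is that the kernel can accept such a module from an application and still be confident that it will not corrupt kernel memory or leak the buffers it allocates, a guarantee that C, as used in this sketch, cannot itself provide.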
These features are sufficient to support communications protocols such as the ones that implement typical RPC or streams modules, as well as more specialized protocols such as might be used to implement file systems or to maintain cache consistency. The approach yields latency and throughput improvements of as much as a factor of two when compared to a conventional user-space implementation of similar functionality. Most of the benefit is gained by avoiding the need to copy messages across address space boundaries and to cross protection boundaries when executing the short code segments typical of highly optimized protocols. Applications of SPIN include support for streams-style extensibility in protocols, but also less traditional operating system features such as distributed shared memory and file system paging over an ATM between the file system buffer pools of different machines.

Perhaps more significant, a SPIN module has control over the conditions under which messages are dropped because of a lack of resources or time to process them. Such control, lacking in traditional operating systems, permits an intelligent and controlled degradation if necessary, a marked contrast with the more conventional situation in which, as load gradually increases, a point is reached where the operating system essentially collapses, losing a high percentage of incoming and outgoing messages, often without indicating that any error has occurred.

Like U-Net, SPIN illustrates that substantial gains in distributed protocol performance can be achieved by concentrating on the supporting infrastructure. Existing operating systems remain "single-user" centric in the sense of having been conceived and implemented with dedicated applications in mind. Although such systems have evolved successfully into platforms capable of supporting distributed applications, they are far from optimal in terms of overhead imposed on protocols, data loss characteristics, and length of the I/O path followed by a typical message on its way to the wire. As work such as this enters the mainstream, significant reliability benefits will spill over to end-users, who often experience the side-effects of the high latencies and loss rates of current architectures as sources of unreliability and failure.

8.6 Related Readings

For work on kernel and microkernel architectures for high speed communication: Amoeba [MRTR90, RST88, RST89], Chorus [AGHR89, RAAB88a, RAAB88b], Mach [RAS86], QNX [Hil92], Sprite [OCDN88]. Issues associated with the performance of threads are treated in [ABLL91]. Packet filters are discussed in the context of Mach in [MRA87]. The classic paper on RPC cost analysis is [SB89], but see also [CT87]. TCP cost analysis and optimizations are presented in [CJRS89, Jac88, Jac90, KF93, Jen90]. Lightweight RPC is treated in [BALL89]. Fbufs and the xKernel in [DP83, PHMA89, AP93]. Active Messages are covered in [ECGS92, TL93] and U-Net in [EBBV95]. SPIN is treated in [BSPS95].

Part II: The World Wide Web

This second part of the textbook focuses on the technologies that make up the World Wide Web, which we take in a general sense that includes internet email and "news" as well as the Mosaic-style of network document browser that has seized the public attention.
Our treatment seeks to be detailed enough to provide the reader with a good understanding of the key components of the technology base and the manner in which they are implemented, but without going to such an extreme level of detail as to lose track of our broader agenda, which is to understand how reliable distributed computing services and tools can be introduced into the sorts of critical applications that may soon be placed on the Web.

9. The World Wide Web

As recently as 1992 or 1993, it was common to read of a coming revolution in communications and computing technologies. Authors predicted a future information economy, the emergence of digital libraries and newspapers, the prospects of commerce over the network, and so forth. Yet the press was also filled with skeptical articles, suggesting that although there might well be a trend towards an information superhighway, it seemed to lack on-ramps accessible to normal computer users.

In an astonishingly short period of time, this situation has reversed itself. By assembling a relatively simple client-server application using mature, well-understood technologies, a group of researchers at CERN and at the National Center for Supercomputing Applications (NCSA) developed a system for downloading and displaying documents over a network. They employed an object-oriented approach in which their display system could be programmed to display various types of objects: audio, digitized images, text, hypertext documents represented using the hypertext markup language (a standard for representing complex documents), and other data types. They agreed upon a simple resource location scheme, capable of encoding the information needed to locate an object on a server and the protocol with which it should be accessed. Their display interface integrated these concepts with easily used, powerful graphical user interface tools. And suddenly, by pointing and clicking, a completely unsophisticated user could access a rich collection of data and documents over the internet. Moreover, authoring tools for hypertext documents already existed, making it surprisingly easy to create elaborate graphics and sophisticated hypertext materials. By writing simple programs to track network servers, checking for changed content and following hypertext links, substantial databases of web documents were assembled, against which sophisticated information retrieval tools could be applied. Overnight, the long predicted revolution in communications took place.

Two years later, there seems to be no end to the predictions for the potential scope and impact of the information revolution. One is reminded of the early days of the biotechnology revolution, during which dozens of companies were launched, fortunes were earned, and the world briefly overlooked the complexity of the biological world in its unbridled enthusiasm for a new technology. Of course, initial hopes can be unrealistic. A decade or so later, the biotechnology revolution is beginning to deliver on some of its initial promise, but the popular press and the individual in the street have long since become disillusioned. The biotechnology experience highlights the gap that often forms between the expectations of the general populace and the deliverable reality of a technology area. We face a comparable problem in distributed computing today. On the one hand, the public seems increasingly convinced that the information society has arrived.
Popular expectations for this technology are hugely inflated, and it is being deployed at a scale and rate that is surely unprecedented in the history of technology. Yet the fundamental science underlying web applications is in many ways very limited. The vivid graphics and the ease with which hundreds of thousands of data sources can be accessed obscure more basic technical limitations, which may prevent the use of the Web for many of the uses that the popular press currently anticipates.

[Figure: the original diagram shows a web browser whose system only needs to contact its local name service and local web proxy. The name service is structured like an inverted tree (cornell.edu, cs.cornell.edu, cafe.org, sf.cafe.org); a name request climbs this tree in steps 1-3, and the document request then passes through the local web proxy and a Cornell web proxy, each holding cached documents, before reaching the Cornell web server in steps 4-9.]

The web operates like a postal service. Computers have "names" and "addresses," and communication is by the exchange of electronic "letters" (messages) between programs. Individual systems don't need to know how to locate all the resources in the world. Instead, many services, like the name service and web document servers, are structured to pass requests via local representatives, which forward them to more remote ones, until the desired location or a document is reached. For example, to retrieve the web document www.cs.cornell.edu/Info/Projects/HORUS, a browser must first map the name of the web server, www.cs.cornell.edu, to an address. If the address is unknown locally, the request will be forwarded up to a central name server and then down to one at Cornell (steps 1-3 in the figure). The request to get the document itself will often pass through one or more web "proxies" on its way to the web server itself (steps 4-9). These intermediaries save copies of frequently used information in short-term memory. Thus, if many documents are fetched from Cornell, the server address will be remembered by the local name service, and if the same document is fetched more than once, one of the web proxies will respond rapidly using a saved copy. The term caching refers to the hoarding of reused information in this manner.

Our web surfer looks irritated, perhaps because the requested server "is overloaded or not responding." This common error message is actually misleading because it can be provoked by many conditions, some of which don't involve the server at all. For example, the name service may have failed or become overloaded, or this may be true of a web proxy, as opposed to the Cornell web server itself. The Internet addresses for any of these may be incorrect or stale (e.g. if a machine has been moved). The Internet connections themselves may have failed or become overloaded.

Although caching dramatically speeds response times in network applications, the web does not track the locations of cached copies of documents, and offers no guarantees that cached documents will be updated. Thus, a user may sometimes see a stale (outdated) copy of a document. If a document is complex, a user may even be presented with an inconsistent mixture of stale and up-to-date information. With wider use of the web and other distributed computing technologies, critical applications will require stronger guarantees. Such applications depend upon correct, consistent, secure and rapid responses.
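To illustrate why cached copies can go stale, here is a minimal sketch of the kind of expiration-based cache a proxy might keep; it is not the logic of any particular browser or proxy, and the time-to-live, sizes and replacement policy are all assumptions. Until an entry expires it is returned without consulting the origin server, so an update made at the server is simply not seen, and because nothing records where the copies live, the server has no way to invalidate them:

```c
#include <string.h>
#include <time.h>

#define CACHE_SLOTS 128
#define MAX_URL     256
#define MAX_DOC     8192

/* One cached document, stamped with the time it was fetched.  A fixed
 * time-to-live is the only freshness rule. */
struct cache_entry {
    char   url[MAX_URL];
    char   body[MAX_DOC];
    time_t fetched_at;
    int    in_use;
};

static struct cache_entry cache[CACHE_SLOTS];
static const double TTL_SECONDS = 300.0;   /* assumed freshness window */

/* Returns the cached body if present and not yet expired, otherwise NULL,
 * meaning the caller must fetch from the next proxy or the origin server. */
const char *cache_lookup(const char *url)
{
    time_t now = time(NULL);
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].in_use && strcmp(cache[i].url, url) == 0) {
            if (difftime(now, cache[i].fetched_at) < TTL_SECONDS)
                return cache[i].body;      /* possibly stale, served anyway */
            cache[i].in_use = 0;           /* expired: drop and refetch */
            return NULL;
        }
    }
    return NULL;
}

/* Store (or overwrite) a fetched document. */
void cache_store(const char *url, const char *body)
{
    struct cache_entry *slot = NULL;
    for (int i = 0; i < CACHE_SLOTS && slot == NULL; i++)
        if (cache[i].in_use && strcmp(cache[i].url, url) == 0)
            slot = &cache[i];              /* overwrite the existing entry */
    for (int i = 0; i < CACHE_SLOTS && slot == NULL; i++)
        if (!cache[i].in_use)
            slot = &cache[i];              /* otherwise take any free slot */
    if (slot == NULL)
        return;                            /* cache full: a real proxy would evict */
    strncpy(slot->url, url, MAX_URL - 1);   slot->url[MAX_URL - 1] = '\0';
    strncpy(slot->body, body, MAX_DOC - 1); slot->body[MAX_DOC - 1] = '\0';
    slot->fetched_at = time(NULL);
    slot->in_use = 1;
}
```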
If an application relies on rapidly changing information, stale responses may be misleading, incorrect, or even dangerous, as in the context of a medical display in a hospital, or the screen image presented to an air-traffic controller. One way to address such concerns is to arrange for cached copies of vital information such as resource addresses, web documents, and other kinds of data to be maintained consistently and updated promptly. By reliably replicating information, computers can guarantee rapid response to requests, avoid overloading the network, and avoid "single points of failure". The same techniques also offer benefits from scalable parallelism, where incoming requests are handled cooperatively by multiple servers in a way that balances load to give better response times.

As we will see below, the basic functionality of the Web can be understood in terms of a large collection of independently operated servers. A web browser is little more than a graphical interface capable of issuing remote procedure calls to such a server, or of using simple protocols to establish a connection to a server over which a file can be downloaded. The model is stateless: each request is handled as a separate interaction, and if a request times out, a browser will simply display an error message. On the other hand, the simplicity of the underlying model is largely concealed from the user, who has the experience of a "session" and a strong sense of continuity and consistency when all goes well. For example, a user who fills in a graphical form seems to be in a dialog with the remote server, although the server, like an NFS server, would not normally save any meaningful "state" for this dialog.

The reason that this should concern us becomes clear when we consider some of the uses to which web servers are being put. Commerce over the internet is being aggressively pursued by a diverse population of companies. Such commerce will someday take many forms, including direct purchases and sales between companies, and direct sales of products and information to human users. Today, the client of a web server who purchases a product provides credit card billing information, and trusts the security mechanisms of the browser and remote servers to protect this data from intruders. But, unlike a situation in which this information is provided by telephone, the Web is a shared packet forwarding system in which a number of forms of intrusion are possible. For the human user, interacting with a server over the Web may seem comparable to interacting with a human agent over a telephone. The better analogy, however, is to shouting out one's credit card information in a crowded train station. The introduction of encryption technologies will soon eliminate the most extreme deficiencies in this situation. Yet data security alone is just one element of a broader set of requirements. As the reader should recall from the first chapters of this text, RPC-based systems have the limitation that when a timeout occurs, it is often impossible for the user to determine whether a request has been carried out, and if a server sends a critical reply just when the network malfunctions, the contents of that reply may be irretrievably lost. Moreover, there are no standard ways to guarantee that an RPC server will be available when it is needed, or even to be sure that an RPC server purporting to provide a desired service is in fact a valid representative of that service.
For example, when working over the Web, how can a user convince him or herself that a remote server offering to sell jewelry at very competitive prices is not in fact fraudulent? Indeed, how can the user become convinced that the web page for the bank down the street is in fact a legitimate web page presented by a legitimate server, and not some sort of fraudulent version that has been maliciously inserted onto the Web? At the time of this writing, the proposed web security architectures embody at most partial responses to these sorts of concerns.

Full service banking and investment support over the Web is likely to emerge in the near future. Moreover, many banks and brokerages are developing web-based investment tools for internal use, in which remote servers price equities and bonds, provide access to financial strategy information, and maintain information about overall risk and capital exposure in various markets. Such tools also potentially expose these organizations to new forms of criminal activity, insider trading and fraud. Traditionally, banks have kept their money in huge safes, buried deep underground. Here, one faces the prospect that billions of dollars will be protected primarily by the communications protocols and security architecture of the Web. We should ask ourselves if these are understood well enough to be trusted for such a purpose.

Web interfaces are extremely attractive for remote control of devices. How long will it be before such an interface is used to permit a plant supervisor to control a nuclear power plant from a remote location, or to permit a physician to gain access to patient records or current monitoring status from home? Indeed, a hospital could potentially place all of its medical records onto web servers, including everything from online telemetry and patient charts to x-rays, laboratory data, and even billing. But when this development occurs, how will we know that hackers cannot also gain access to these databases, perhaps even manipulating the care plans for patients?

A trend towards critical dependence on information infrastructure and applications is already evident within many corporations. There is an increasing momentum behind the idea of developing "corporate knowledge bases" in which the documentation, strategic reasoning, and even records of key meetings would be archived for consultation and reuse. It is easy to imagine the use of a web model for such purposes, and this author is aware of several efforts directed to developing products based on this concept. Taking the same idea one step further, the military sees the Web as a model for future information-based conflict management systems. Such systems would gather data from diverse sources, integrating it and assisting all levels of the military command hierarchy in making coordinated, intelligent decisions that reflect the rapidly changing battlefield situation and that draw on continuously updated intelligence and analysis. The outcome of battles may someday depend on the reliability and integrity of information assets.

Libraries, newspapers, journals and book publishers are increasingly looking to the Web as a new paradigm for publishing the material they assemble. In this model, a subscriber to a journal or book would read it through some form of web interface, being charged either on a per-access basis or provided with some form of subscription. The list goes on.
What is striking to this author is the extent to which our society is rushing to make the transition, placing its most critical activities and valuable resources on the Web. A perception has been created that to be a viable company in the late 1990's, it will be necessary to make as much use of this new technology as possible. Obviously, such a trend presupposes that web servers and interfaces are reliable enough to safely support the envisioned uses. Many of the applications cited above have extremely demanding security and privacy requirements. Several involve situations in which human lives might be at risk if the envisioned Web application malfunctions by presenting the user with stale or incorrect data; in others, the risk is that great sums of money could be lost, a business might fail, or a battle might be lost. Fault-tolerance and guaranteed availability are likely to matter as much as security: one wants these systems to protect data against unauthorized access, but also to guarantee rapid and correct access by authorized users.

Today, reliability of the Web is often taken as a synonym for data security. When this broader spectrum of potential uses is considered, however, it becomes clear that reliability, consistency, availability and trustworthiness will be at least as important as data security if critical applications are to be safely entrusted to the Web or the Internet. Unfortunately, these considerations rarely receive attention when the decision to move an application to the Web is made. In effect, the enormous enthusiasm for the potential information revolution has triggered a great leap of faith that it has already arrived. And, unfortunately, it already seems to be too late to slow, much less reverse, this trend. Our only option is to understand how web applications can be made sufficiently reliable to be used safely in the ways that society now seems certain to employ them.

Unfortunately, this situation seems very likely to deteriorate before any significant level of awareness that there is even an issue here is achieved. As is traditionally the case in technology areas, reliability considerations are distinctly secondary to performance and user-oriented functionality in the development of web services. If anything, the trend seems to be a form of latter-day gold rush, in which companies are stampeding to be first to introduce the critical servers and services on which web commerce will depend. Digital cash servers, signature authorities, special purpose web search engines, and services that map from universal resource names to the locations providing those services are a few examples of these new dependencies; they add to a list that already included such technologies as the routing and data transport layers of the internet, the domain name service, and the internet address resolution protocol. To a great degree, these new services are promoted to potential users on the basis of functionality, not robustness. Indeed, the trend at the time of this writing seems to be to stamp "highly available" or "fault-tolerant" on more or less any system capable of rebooting itself after a crash. As we have already seen, recovering from a failure can involve much more than simply restarting the failed service. The trends are being exacerbated by the need to provide availability for "hot web sites", which can easily be swamped by huge volumes of requests from thousands or millions of potential users.
To deal with such problems, web servers are turning to a variety of ad-hoc replication and caching schemes, in which the document corresponding to a particular web request may be fetched from a location other than its ostensible "home." The prospect is thus created of a world within which critical data is entrusted to web servers that replicate it for improved availability and performance, but without necessarily providing strong guarantees that the information in question will actually be valid (or detectably stale) at the time it is accessed. Moreover, standards such as HTTP V1.0 remain extremely vague as to the conditions under which it is appropriate to cache documents, and when they should be refreshed if they may have become stale.

Broadly, the picture would seem to reflect two opposing trends. On the one hand, as critical applications are introduced into the Web, users may begin to depend on the correctness and accuracy of web servers and resources, along with other elements of the internet infrastructure such as its routing layers, data transport performance, and so forth. To operate safely, these critical applications will often require a spectrum of behavioral guarantees. On the other hand, the modern internet offers guarantees in none of these areas, and the introduction of new forms of web services, many of which rapidly become indispensable components of the overall infrastructure, is only exacerbating the gap. Recalling our list of potential uses in commerce, banking, medicine, the military, and others, the potential for very serious failures becomes apparent. We are moving towards a world in which the electronic equivalents of the bridges that we traverse may collapse without warning, in which road signs may be out of date or intentionally wrong, and in which the agents with which we interact over the network may sometimes be clever frauds controlled by malicious intruders.

As a researcher, one can always adopt a positive attitude towards such a situation, identifying technical gaps as "research opportunities" or "open questions for future study." Many of the techniques presented in this textbook could be applied to web browsers and servers, and doing so would permit those servers to overcome some (not all!) of the limitations identified above. Yet it seems safe to assume that by the time this actually occurs, many critical applications will already be operational using technologies that are only superficially appropriate. Short of some major societal pressure on the developers and customers of information technologies, it is very unlikely that the critical web applications of the coming decade will achieve a level of reliability commensurate with the requirements of the applications. In particular, we seem to lack both a societal consciousness of the need for a reliable technical base, and a legal infrastructure that assigns responsibility for reliability to the developers and deployers of the technology. Lacking both the pressure to provide reliability and any meaningful notion of accountability, there is very little to motivate developers to focus seriously on reliability issues. Meanwhile, the prospect of earning huge fortunes overnight has created a near hysteria to introduce new Web-based solutions in every imaginable setting. As we noted early in this textbook, society has yet to demand the same level of quality assurance from the developers of software products and systems as it does from bridge builders.
Unfortunately, it seems that the negative consequences of this relaxed attitude will soon become all too apparent.

9.1 Related Readings

On the Web: [BCLF94, BCLF95, BCGP92, GM95a, GM95b]. There is a large amount of online material concerning the Web, for example in the archives maintained by Netscape Corporation [http://www.netscape.com].

Posted: 14/08/2014, 13:20
