The Semantic Web: Crafting Infrastructure for Agency (Bo Leuf, 2006) – Part 4

Payment Models

E-commerce has grown rapidly in the existing Web, despite being hampered by problems ranging from unrealistic business models to inappropriate pricing. In most cases, the relative successes have been the traditional brick-and-mortar businesses offering their products and services online, perhaps largely because they already had a viable business plan to build on, and an established trust relationship with a large customer base willing to take its shopping online.

The advantage of real-world products and services in the Web context has been that transaction costs are drastically lowered, yet item prices remain comparable to offline offerings, albeit somewhat cheaper. It has, however, proven more difficult to construct a viable business model and pricing structure for digital content, a category in which we include information 'products' such as music CDs, software, and books, not just specifically Web-published content.

Bit 3.23 Nobody really objects to paying for content – if the price is right

And, it must be added, if the mechanism for ordering and paying is ubiquitous (that is, globally valid), easy to use, and transaction-secure.

Arguably the greatest barrier to a successful business model for digital content has been the lack of a secure micropayment infrastructure. Existing pricing models for physical products are patently unrealistic in the face of the essentially zero cost to the user of copying and distributing e-content. Credit-card payment over SSL connections comes close to providing a ubiquitous payment model, but suffers from transaction costs that make smaller sums impractical and micropayment transactions impossible.

Unfortunately, few of the large, global content owners and distributors have realized this essential fact of online economics, and are thus still fighting a costly and messy losing battle against what they misleadingly call 'piracy' of digital content, without offering much in the way of realistic alternatives for selling the same content to customers in legal ways. Instead they threaten and sue ordinary people, who like most of their peers feel they have done no wrong. The debate is contentious, and the real losses have yet to be calculated, whether in terms of money or eroded public trust. It will take time to find a new and sustainable balance.

Even fundamental, long-established policies of 'fair use' and academic citation, to name two, are being negated with little thought to the consequences. It can be questioned whose rights are served by in-practice indefinite copyright that renders huge volumes of literature, music, and art inaccessible to the public as a matter of principle. More disturbing is the insinuation by degrees of 'digital rights management' (DRM) into consumer software and hardware, with serious consequences for legitimate use. DRM technology can time-limit or block outright the viewing of any material not centrally registered and approved, even blocking the ability of creators to manage their own files.

Perhaps the worst of this absurdity could have been avoided, if only one or more of the early-advocated technologies for digital online cash had materialized as a ubiquitous component integrated into the desktop-browser environment. For various and obscure reasons, this never happened, though not for lack of trying.
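To make the transaction-cost barrier concrete, consider a small worked example. The fee schedule used below (a $0.30 fixed fee plus 2.9% of the transaction) is an assumed, typical card-processing rate, not a figure from the text; under any schedule of this shape, the fee swamps a micro-priced item:

    # Effective card-processing overhead at different price points.
    # Assumed (hypothetical) fee schedule: $0.30 fixed + 2.9% of the sale.
    FIXED_FEE = 0.30
    RATE = 0.029

    for price in [100.00, 10.00, 1.00, 0.10, 0.01]:
        fee = FIXED_FEE + RATE * price
        print(f"item ${price:7.2f}   fee ${fee:.3f}   overhead {fee / price:8.1%}")

    # item $ 100.00   fee $3.200   overhead     3.2%
    # item $   0.10   fee $0.303   overhead   302.9%   <- fee dwarfs the price

This is why per-item card billing cannot scale down to micropayments: only aggregation of many small charges, or an entirely different settlement mechanism, keeps the overhead sane.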
Instead of such integrated digital cash, online payment is currently based on:

- real-world credit cards (for real-world prices);
- subscription models (to aggregate small charges into sums viable for credit-card transactions);
- other real-world payment means;
- alternate payment channels that leverage these means (such as PayPal).

The only relative success in the field has been the utilities-style rates model applied by ISPs for access and bandwidth charges, and some services with rates similar to phone usage.

Bit 3.24 Free content can complement and promote traditional sales

A brave few have gone against commercial tradition by freely distributing base products – software, music, or books – figuring, correctly, that enough people will pay for added value (features or support) or physical copies (CDs and manuals) to allow actually turning a profit. Baen Free Library (www.baen.com/library/) successfully applies the principle to published fiction.

In the Semantic Web, an entirely new and potentially vast range of products and services arises, whose pricing (if priced at all) must be 'micro' verging on the infinitesimal. Successful business models in this area must be even more innovative than what we have seen to date.

One of the core issues connected with payment mechanisms is that of trust. The Semantic Web can implement workable yet user-transparent trust mechanisms in varying ways. Digital signatures and trust certificates are but the most visible components. Less developed are the different ways to manage reputability and trust portability, as are schemes to integrate mainstream banking with Web transactions.

Some technologies applicable to distributed services have been prototyped as proof-of-concept systems using virtual economies in the p2p arena (see for example Mojo Nation, described in Chapter 8 of Peer to Peer), and derivative technologies based on them are gradually emerging in new forms as e-commerce or management solutions for the enterprise.

Bit 3.25 A Web economy might mainly be for resource management

It might be significant that the virtual economy prototypes turn out to be of greatest value as a way to manage resources automatically according to business rules.

The Semantic Web might even give rise to a global virtual economy, significant in scope but not directly connected to real-world money. The distribution and brokerage of a virtual currency among users and services would serve the main purpose of implementing a workable system to manage Web resources.

It is a known fact that entirely free resources tend to give rise to various forms of abuse – the so-called tragedy of the commons. Perhaps the most serious such abuse is the problem of spam in the free transport model of e-mail. It costs spam perpetrators little or nothing to send ten, or a thousand, or a million junk messages, so not surprisingly, it is millions that are sent by each. If each mail sent 'cost' the sender a small yet non-zero sum, even only in a virtual currency tracking resource usage, most of these mails would not be sent. Various measures of price and value, set relative to user priority and availability of virtual funds, can thus enable automatic allocation of finite resources and realize load balancing.
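A minimal sketch shows how even a tiny virtual charge changes the arithmetic of bulk abuse while leaving ordinary use untouched. The class, the per-message rate, and the balances below are all hypothetical, invented for illustration rather than taken from the text:

    class VirtualLedger:
        """Per-user virtual currency used purely for resource accounting."""

        def __init__(self, send_cost=0.001):   # hypothetical cost per message
            self.send_cost = send_cost
            self.balances = {}

        def deposit(self, user, amount):
            self.balances[user] = self.balances.get(user, 0.0) + amount

        def try_send(self, user, n_messages=1):
            """Debit the sender; refuse the batch if virtual funds run out."""
            cost = self.send_cost * n_messages
            if self.balances.get(user, 0.0) < cost:
                return False   # a million-message spam run exhausts any small balance
            self.balances[user] -= cost
            return True

    ledger = VirtualLedger()
    ledger.deposit("alice", 1.0)                  # enough for ~1000 ordinary mails
    print(ledger.try_send("alice"))               # True  - normal use is unaffected
    print(ledger.try_send("alice", 1_000_000))    # False - bulk spam is priced out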
4 Semantic Web Collaboration and Agency

From information management in principle, we move on to the actors involved in realizing this management, mainly collaborating humans and automatic agents. The Semantic Web is slanted deliberately towards promoting and empowering both these actors by providing a base infrastructure with extensive support for relevant collaborative mechanisms.

For most people, collaboration processes might seem relevant mainly to specialized academic and business goals – writing research papers, coordinating project teams, and so on. But it is about more than this narrow focus. Strictly speaking, most human communication efforts contain collaborative aspects to some degree, so embedded mechanisms to facilitate collaboration are likely to be used in many more contexts than might be expected. Such added functionality is bound to transform the way we use the Web to accomplish any task.

Planning and coordinating activities make up one major application area that involves almost everyone at some time or another. Current planning tools, such as paper agendas and electronic personal information managers (PIMs), started off as organizational aids specific to academic and business environments, yet they are more and more seen as essential personal accessories in daily life. They make daily life easier to manage by facilitating how we interact with others, and they remind us of our appointments and to-do items. With the spread of home computing and networks, PIMs are now often synchronized with both home and work systems.

It is not hard to extrapolate the trend, factoring in ubiquitous Web access and the new collaboration technologies, to get a very interesting picture of a possible future where networked collaboration processes dominate most of what we do.

Chapter 4 at a Glance

This chapter looks at a motivating application area of the Semantic Web, that of collaboration processes.

Back to Basics reminds us that the Internet and the Web were conceived with collaborative purposes in mind.
- The Return of p2p explores the revitalization of p2p technologies applied to the Web and how these affect the implementation of the Semantic Web.
- WebDAV looks at embedding extended functionality for collaboration in the underlying Web protocol, HTTP.

Peer Aspects discusses issues of distributed storage, security, and governance.
- Peering the Data explores the consequences and implications of a distributed model for data storage on the Web.
- Peering the Services looks at Web services implemented as collaborating distributed resources.
- Edge Computing examines how Web functionality moves into smaller and more portable devices, with a focus on pervasive connectivity and ubiquitous computing.

Automation and Agency discusses collaboration mechanisms that are to be endpoint-agnostic, in that it will not matter whether the actor is human or machine.
- Kinds of Agents defines the concept and provides an overview of agent research areas.
- Multi-Agent Systems explores a field highly relevant to Semantic Web implementations, that of independent yet cooperating agents.

Back to Basics

For a long time (in the Web way of reckoning), information flow has been dominated by server-client 'broadcasting' from an actively publishing few to the passively reading many. However, peer collaboration on the Web is once again coming to the fore, this time in new forms. As some would say: it's about time! The original write-openness of the Web was largely obscured in the latter half of the 1990s, as millions surfed into the ever-expanding universe of read-only pages.
Bit 4.1 Not everyone sees themselves as Web content authors

The objection that most people are perfectly content just to read the work of others does, however, beg the question of how you define 'published content'. Much useful information other than written text can be 'published' and made accessible to others.

There were, of course, good reasons why the Web's inherent capability for anyone to 'put' new content on any server was disabled, but a number of interesting potential features of the Web were lost in the process. Reviving some of them, in controlled and new ways, is one of the aims of the Semantic Web initiative.

Content collaboration became needlessly difficult when the Web was locked down. Publishing on the Web required not just access to a hosting server, but also different applications to create, upload, and view a Web page. Free hosting made way for commercial hosting. The large content producers came to dominate, and portal sites tried to lock in ever more users in order to attract advertising revenue. In fact, for a brief period in the late 1990s, it almost appeared as if the Web was turning into just another broadcast medium for the media corporations. Everyone was pushing 'push' technology, where users were expected to subscribe to various 'channels' of predigested information that would automatically end up on the user's desktop. Collaboration became marginalized, relegated to the context of proprietary corporate software suites, and adapted to conferencing or 'net-meeting' connections that depended on special clients, special servers, and closed protocols.

Or so it seemed. Actually, free and open collaboration was alive and well, only out of the spotlight and awareness of the majority of users. It was alive between researchers and between students at the institutions and universities that had originally defined the Internet. It was also alive in a growing community of open-source development, with a main focus on Linux, even as the grip of commercial interests on the PC market – and by extension the Web – was hardening. The concepts were also alive among the many architects of the Internet, and were being refined for introduction at a new level. Developers were using the prototypes to test deployment viability.

Finally, there was the vision of a new Internet, a next-generation Web that would provide a cornucopia not just of content but of functionality as well. The line between 'my local information' and 'out-there Web information' seems to blur in this perspective, because people tend both to want access to their own private data from anywhere, and to share selected portions of it for use by others (for example, calendar data to plan events). The server-centric role of old then loses dominance and relevance.

That is not to say that server publishing by sole content creators is dead – far from it. Ever increasing numbers of people are involved in publishing their own Web sites or writing Web logs (blogs), instead of merely surfing the sites of others. Such activity may never encompass the masses – the condescending comment is usually along the lines that most people simply have nothing to say, and therefore (implied 'we') do not need the tools. But that is not the point.

Bit 4.2 The capability of potential use enables actual use when needed

Providing a good education for the masses does not mean that we expect everyone to become rocket scientists.
However, we can never know in advance who will need that education, or when, in order to make critical advances for society as a whole. We do not know in advance who will have something important to say. Therefore, the best we can do is to provide the means, so that when somebody does have something to say, they can do it effectively. Anyway, with increased interaction between user systems, distributed functionality, and better tools, more users will in fact have more to say – possibly delegated to their agent software, which will say it for them in background negotiations with remote services.

The Return of p2p

In the late 1990s, peer-to-peer technology suddenly caught everyone's fancy, mainly in the form of Internet music swapping. It was a fortuitous joining of several independent developments that complemented each other:

- The explosion in the number of casual users with Internet access – everyone could play.
- The development of lossy yet perceptually acceptable compression technology for near-CD-quality music – leading to file sizes even modem users could manage (4 MB instead of 40).
- The new offering of small clients that could search for and retrieve files from other connected PCs – literally millions of music tracks at one's fingertips.

Legalities about digital content rights (or control) aside (an area that years later is still highly contentious and inflamed in the various camps), here was a 'killer application' for the Internet that rivalled e-mail – it was something everyone wanted to do online. But the surge in p2p interest fostered more than file-swapping.

Bit 4.3 Peer-to-peer is fundamentally an architectural attitude

System architecture always profoundly reflects the mindset of the designers. In this case, p2p and client-server are opposing yet complementary attitudes that determine much more than just implementation detail.

New and varied virtual networks between peer clients were devised, developing and testing components that prove critical to many Semantic Web application areas, including collaboration. The reason they prove critical is that p2p networks faced many of the same design problems, albeit on a smaller scale. These problems are often related to issues of identity and resource management in distributed networks. With client-server architectures, most of these issues are resolved through centralized registry and governance mechanisms depending on some single authority. Assuredly, there are situations where this rigid and controlled approach is appropriate, but it is not conducive to ease of use, nor to the establishment of informal and fluid collaborations over the Web.

Peer-managed Mechanisms

In implementation, we can envision two peer-like classes of entities evolving to meet the new paradigm of distributed functionality (a toy sketch follows the list):

- Controller-viewer devices, rather like a PDA with a screen and a real or projected keyboard, that are the user's usual proximate interface and local storage device. From these, control and views can be handed off to larger systems, such as a notebook, desktop, or room-integrated system with wall-projection screens. You might see such a device as a remote control on steroids.
- Agent-actuator devices, both physical and software, which negotiate data and interact with the user on demand, and function as the intermediaries between the user-local PDA views and the world at large.
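A minimal sketch of how these two classes of entities might cooperate; the class names, the hand-off interface, and the data are invented purely to make the division of roles concrete:

    class AgentActuator:
        """Negotiates data with the wider network on the user's behalf."""

        def __init__(self, preferences):
            self.preferences = preferences    # e.g. calendar, music, toast settings

        def fetch(self, request):
            # Stand-in for negotiation with remote Semantic Web services.
            return f"result for {request!r} given {self.preferences}"

    class ControllerViewer:
        """The user's proximate interface: a PDA-like device holding views."""

        def __init__(self, agent=None):
            self.agent = agent

        def show(self, request):
            print("view:", self.agent.fetch(request))

        def hand_off(self, larger_system):
            # Hand the current view over to a desktop or wall-projection system.
            larger_system.agent = self.agent
            return larger_system

    pda = ControllerViewer(AgentActuator({"morning-cd": "Brubeck"}))
    pda.show("today's schedule")
    wall = pda.hand_off(ControllerViewer())   # same agent, larger display
    wall.show("today's schedule")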
Reviewer Austin David suggested, in the course of the first review of this book's text, the following vision of what this situation might be like for an average user:

My various views would interact with my personal agent, and I would carry the views on my person, or leave them attached to my car or bedroom wall, or in my toaster or radio. The agent would know my calendar, or how I like my toast, or what CD I want to hear in the morning, and would be 'out there' to interact with the rest of the Semantic Web.

Since its foundation, the Web has been eminently decentralized and based on consensus cooperation among its participants. Its standards are recommendations, its ruling bodies advisory, and its overall compliance driven by individual self-interest rather than absolute regulation. Collaboration processes share these characteristics. This is not surprising, since the ongoing development of the Internet as a whole rests on collaborative efforts. Why, therefore, should small-scale collaborative efforts be locked into tools and protocols that reflect a different, server-centric mindset?

New Forms of Collaboration

Many Web resources are repositories for information gathered from diverse, geographically separated sources. The traditional, centrally managed server model has the bottleneck that all updates must pass through one or more 'webmasters' to be published and become accessible to others.

Bit 4.4 In the new Web, sources are responsible for their resources

Although implicit in the original Web, this distributed responsibility became largely replaced by the webmaster-funnel model of information updating on servers.

When primary sources can author and update information directly on the server, without intermediaries, great benefits result. A small-scale, informal version of such co-authoring was explored in The Wiki Way, a technology that used existing Web and client-server technology, but in an innovative way. The new Web seeks to implement comparable features pervasively, as basic functionality built into the infrastructure's defining protocols.

WebDAV

Web Distributed Authoring and Versioning, WebDAV (see www.webdav.org), or often just DAV, is a proposed protocol extension to HTTP/1.1 that has for some time been gathering significant support in the Web community. DAV adds new methods and headers for greater base functionality, while still remaining backwards compatible with existing HTTP. As the full name implies, much of the new functionality has to do with support for distributed authoring and content management – in effect, promoting collaboration at the level of the base Web protocol. In addition, DAV specifies how to use the new extensions, how to format request and response bodies, how existing HTTP behavior may change, and so on. Unlike many API and overlay protocols that have been developed for specific collaboration purposes, DAV intends to implement generic infrastructure support for such functionality, thereby simplifying application development.

Bit 4.5 DAV is a basic transport protocol, not in itself semantically capable

The point of DAV in the context of the Semantic Web, however, is that it transparently enables much functionality that makes it easier to implement semantic features.

In the context of co-authoring, DAV is media-agnostic – the protocol supports the authoring (and versioning) of any content media type, not just HTML or text as some people assume. Thus, it can support any application data format.
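As a concrete illustration of the protocol in use, the sketch below issues DAV requests with Python's requests library against a hypothetical server. The methods (PUT, PROPFIND, LOCK, described further in the next section), the Depth and Lock-Token headers, and the 207 Multi-Status reply are standard WebDAV usage; the URL and credentials are placeholders:

    import requests

    URL = "https://dav.example.org/docs/report.xml"   # hypothetical DAV server
    AUTH = ("alice", "secret")                        # placeholder credentials

    # PUT authors content directly to the server; DAV is media-agnostic,
    # so any Content-Type is stored the same way.
    resp = requests.put(URL, data=b"<report>draft 1</report>",
                        headers={"Content-Type": "application/xml"}, auth=AUTH)
    print(resp.status_code)   # expect 201 Created for a new resource

    # PROPFIND reads the XML-based properties (metadata) of the resource.
    propfind = """<?xml version="1.0" encoding="utf-8"?>
    <D:propfind xmlns:D="DAV:">
      <D:prop><D:getlastmodified/><D:getcontenttype/></D:prop>
    </D:propfind>"""
    resp = requests.request("PROPFIND", URL, data=propfind,
                            headers={"Depth": "0"}, auth=AUTH)
    print(resp.status_code)   # expect 207 Multi-Status with an XML body

    # LOCK takes out an exclusive write lock before editing (concurrency control).
    lockinfo = """<?xml version="1.0" encoding="utf-8"?>
    <D:lockinfo xmlns:D="DAV:">
      <D:lockscope><D:exclusive/></D:lockscope>
      <D:locktype><D:write/></D:locktype>
    </D:lockinfo>"""
    resp = requests.request("LOCK", URL, data=lockinfo, auth=AUTH)
    token = resp.headers.get("Lock-Token")   # present this token to write or UNLOCK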
The Basic Features

DAV provides a network protocol for creating interoperable, collaborative applications. Major features of the protocol include:

- Locking, which implements well-defined concurrency control to manage situations where two or more collaborators write to the same resource without first merging changes.
- Properties, based on XML, which provide storage for arbitrary metadata, such as a list of authors, on Web resources.
- Namespace manipulation, which supports copy and move operations for resources.

DAV implements DASL (the DAV Searching and Locating protocol) to perform searches based on property values to locate Web resources.

DAV Extensions

Several extensions to the base DAV protocol have been developed in the IETF:

- Advanced Collections, which adds support for ordered collections and referential resources (or pointers).
- Versioning and Configuration Management, which adds versioning support similar to that provided by RCS or SCCS. The configuration management layer, based on Versioning, provides support for workspaces and configurations.
- Access Control, which is implemented as the ability to set and clear access control lists.

In a generalized form, access control can be applied to the entire Web. Access control will always be a factor, no matter how 'free' and 'open' the Web is styled. The difference in the open context is that access and governance methods are decentralized and firmly based on semantic models of identity/authority assertion. Strong public-key cryptography, with its associated digital signatures, is the enabling technology that makes such methods feasible. These issues are discussed in Chapter 3. The future trust model is one of overlapping 'webs of trust' that complement existing central trust authorities; an interoperative system that can adequately establish secure identities on demand.

[...] related to the structure are the protocols that define the process of managing the structures. Markup gives an overview of the fundamental methods for tagging structures and meta-information on the Web.

- The Bare Bones (HTML) looks at the cornerstone of the published Web as we know it, the hypertext [...]

[...] specifying formatting semantics. This component expresses the main relevancy of XSL to the Semantic Web. A clarifying note on the terminology used here might be in order. When originally conceived, the whole was simply XSL, comprising both stylesheet and formatting. The effort was later split more formally into two halves: the transforming language (XSLT) and the formatting vocabulary (XSL-FO). The focus on transformation [...]

[...] structural markup in a more consistent and flexible way than HTML. For the traditional Web at large, and importantly for the emerging Mobile Internet initiatives, the derived XHTML reformulation of HTML in the XML context is critical. Foremost, XHTML is the transitional vehicle for the predominantly HTML-tagged content on the Web today. It is the basis for the second incarnation of the wireless application protocol [...]

[...] environments, available as normal infrastructure the same way as electricity. Assuming the infrastructure exists for the devices and software, let us examine what may drive the device functionality.

Automation and Agency

Deploying generic mechanisms on the Web to facilitate collaboration between humans is important in its own right, but the Semantic Web advocates a much broader view [...]
[...] data and metadata over the network, especially in adaptive networks, is the desire to decouple access from a particular storage location and exposed access method on an identified server – the hitherto pervasive URL-based identity model. Therefore, the Semantic Web instead presumes that the location-agnostic URI is the normal access identity for resources – and, for that matter, for the individual. Distributed retrieval [...]

[...] versions, for example, we saw these extensions become a part of the 'browser wars' as different vendors strove to dominate the Web. The deployment of browser-specific features in the form of proprietary tags caused a fragmentation of the Web. Sites were tailored for a particular browser, and in the worst case ended up unreadable in another client.

Bit 5.2 HTML is structural markup used mainly for presentation [...]

[...] can expect the first killer applications to appear.

Agents Interacting

Agents are components in an agent infrastructure on the Web (or Web-like networks) – in the first analysis, interacting with services and each other. Such an agent infrastructure requires a kind of 'information food chain', in which every part of the chain provides information that enables the existence of the next part. The chain environment [...]

[...] attractive. Mobile agents can take tasks into the network and to other hosts for processing – tasks that would overwhelm the capacity of the original hand-held host. Later they would return with results for display on the handheld, or automatically forward more complex displays to more capable systems in proximity to the user.

Bit 4.15 Portable computing is likely to [...]

[...] between them. Various wrapper agents also need to be devised to transform data representations from locally stored structures at the information sources to the structures represented in the ontologies. The Ontobroker project (see ontobroker.semanticweb.org for a W3C overview) was an early attempt to annotate and wrap Web documents and thus provide a generic answering service for individual agents. Figure 4.1 [...]

[...] growth of information. They are designed to manipulate or collate information from many distributed sources, and early implementations have been around on the Web without users even reflecting on the agent technology that powers the functionality.

Bit 4.18 Information agents often function as intermediary interfaces

In this capacity, they remove the requirement that the user finds and learns the specific [...]
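To ground the wrapper-agent idea in these fragments, here is a minimal sketch that re-expresses a locally stored record as ontology-shaped statements. The record layout, vocabulary URIs, and function are invented for illustration and are not taken from Ontobroker:

    # Hypothetical local record, as an information source might store it.
    local_record = {
        "name": "Bo Leuf",
        "role": "author",
        "work": "The Semantic Web: Crafting Infrastructure for Agency",
    }

    # Invented mapping from local field names to ontology property URIs.
    ONTOLOGY = {
        "name": "http://example.org/onto#hasName",
        "role": "http://example.org/onto#hasRole",
        "work": "http://example.org/onto#authored",
    }

    def wrap(subject_uri, record):
        """Wrapper agent: re-express a local structure as (s, p, o) triples."""
        return [(subject_uri, ONTOLOGY[field], value)
                for field, value in record.items() if field in ONTOLOGY]

    for triple in wrap("http://example.org/people#leuf", local_record):
        print(triple)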
