Selected Readings on Database Technologies and Applications. IGI Global, August 2008. ISBN 1605660981. PDF, 563 pages, 13.62 MB.


Selected Readings on Database Technologies and Applications
Terry Halpin, Neumont University, USA

Information Science Reference
Hershey • New York

Director of Editorial Content: Kristin Klinger
Managing Development Editor: Kristin M. Roth
Senior Managing Editor: Jennifer Neidig
Managing Editor: Jamie Snavely
Assistant Managing Editor: Carole Coulson
Typesetter: Carole Coulson
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.

Published in the United States of America by Information Science Reference (an imprint of IGI Global), 701 E. Chocolate Avenue, Suite 200, Hershey, PA 17033. Tel: 717-533-8845; Fax: 717-533-8661; E-mail: cust@igi-global.com; Web site: http://www.igi-global.com

and in the United Kingdom by Information Science Reference (an imprint of IGI Global), Henrietta Street, Covent Garden, London WC2E 8LU. Tel: 44 20 7240 0856; Fax: 44 20 7379 0609; Web site: http://www.eurospanbookstore.com

Copyright © 2009 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Selected readings on database technologies and applications / Terry Halpin, editor. p. cm.
Summary: "This book offers research articles focused on key issues concerning the development, design, and analysis of databases"--Provided by publisher. Includes bibliographical references and index.
ISBN 978-1-60566-098-1 (hbk.); ISBN 978-1-60566-099-8 (ebook)
1. Databases. 2. Database design. I. Halpin, T. A.
QA76.9.D32S45 2009 005.74 dc22 2008020494

British Cataloguing in Publication Data: a Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily of the publisher.

If a library purchased a print copy of this publication, please go to http://www.igi-global.com/agreement for information on activating the library's complimentary electronic access to this publication.

Table of Contents

Prologue ... xviii
About the Editor ... xxvii

Section I. Fundamental Concepts and Theories

Chapter I. Conceptual Modeling Solutions for the Data Warehouse
  Stefano Rizzi, DEIS - University of Bologna, Italy
Chapter II. Databases Modeling of Engineering Information ... 21
  Z. M. Ma, Northeastern University, China
Chapter III. An Overview of Learning Object Repositories ... 44
  Argiris Tzikopoulos, Agricultural University of Athens, Greece
  Nikos Manouselis, Agricultural University of Athens, Greece
  Riina Vuorikari, European Schoolnet, Belgium
Chapter IV. Discovering Quality Knowledge from Relational Databases ... 65
  M. Mehdi Owrang O., American University, USA

Section II. Development and Design Methodologies

Chapter V. Business Data Warehouse: The Case of Wal-Mart ... 85
  Indranil Bose, The University of Hong Kong, Hong Kong
  Lam Albert Kar Chun, The University of Hong Kong, Hong Kong
  Leung Vivien Wai Yue, The University of Hong Kong, Hong Kong
  Li Hoi Wan Ines, The University of Hong Kong, Hong Kong
  Wong Oi Ling Helen, The University of Hong Kong, Hong Kong
Chapter VI. A Database Project in a Small Company (or How the Real World Doesn't Always Follow the Book) ... 95
  Efrem Mallach, University of Massachusetts Dartmouth, USA
Chapter VII. Conceptual Modeling for XML: A Myth or a Reality ... 112
  Sriram Mohan, Indiana University, USA
  Arijit Sengupta, Wright State University, USA
Chapter VIII. Designing Secure Data Warehouses ... 134
  Rodolfo Villarroel, Universidad Católica del Maule, Chile
  Eduardo Fernández-Medina, Universidad de Castilla-La Mancha, Spain
  Juan Trujillo, Universidad de Alicante, Spain
  Mario Piattini, Universidad de Castilla-La Mancha, Spain
Chapter IX. Web Data Warehousing Convergence: From Schematic to Systematic ... 148
  D. Xuan Le, La Trobe University, Australia
  J. Wenny Rahayu, La Trobe University, Australia
  David Taniar, Monash University, Australia

Section III. Tools and Technologies

Chapter X. Visual Query Languages, Representation Techniques, and Data Models ... 174
  Maria Chiara Caschera, IRPPS-CNR, Italy
  Arianna D'Ulizia, IRPPS-CNR, Italy
  Leonardo Tininini, IASI-CNR, Italy
Chapter XI. Application of Decision Tree as a Data Mining Tool in a Manufacturing System ... 190
  S. A. Oke, University of Lagos, Nigeria
Chapter XII. A Scalable Middleware for Web Databases ... 206
  Athman Bouguettaya, Virginia Tech, USA
  Zaki Malik, Virginia Tech, USA
  Abdelmounaam Rezgui, Virginia Tech, USA
  Lori Korff, Virginia Tech, USA
Chapter XIII. A Formal Verification and Validation Approach for Real-Time Databases ... 234
  Pedro Fernandes Ribeiro Neto, Universidade do Estado do Rio Grande do Norte, Brazil
  Maria Lígia Barbosa Perkusich, Universidade Católica de Pernambuco, Brazil
  Hyggo Oliveira de Almeida, Federal University of Campina Grande, Brazil
  Angelo Perkusich, Federal University of Campina Grande, Brazil
Chapter XIV. A Generalized Comparison of Open Source and Commercial Database Management Systems ... 252
  Theodoros Evdoridis, University of the Aegean, Greece
  Theodoros Tzouramanis, University of the Aegean, Greece

Section IV. Application and Utilization

Chapter XV. An Approach to Mining Crime Patterns ... 268
  Sikha Bagui, The University of West Florida, USA
Chapter XVI. Bioinformatics Web Portals ... 296
  Mario Cannataro, Università "Magna Græcia" di Catanzaro, Italy
  Pierangelo Veltri, Università "Magna Græcia" di Catanzaro, Italy
Chapter XVII. An XML-Based Database for Knowledge Discovery: Definition and Implementation ... 305
  Rosa Meo, Università di Torino, Italy
  Giuseppe Psaila, Università di Bergamo, Italy
Chapter XVIII. Enhancing UML Models: A Domain Analysis Approach ... 330
  Iris Reinhartz-Berger, University of Haifa, Israel
  Arnon Sturm, Ben-Gurion University of the Negev, Israel
Chapter XIX. Seismological Data Warehousing and Mining: A Survey ... 352
  Gerasimos Marketos, University of Piraeus, Greece
  Yannis Theodoridis, University of Piraeus, Greece
  Ioannis S. Kalogeras, National Observatory of Athens, Greece

Section V. Critical Issues

Chapter XX. Business Information Integration from XML and Relational Databases Sources ... 369
  Ana María Fermoso Garcia, Pontifical University of Salamanca, Spain
  Roberto Berjón Gallinas, Pontifical University of Salamanca, Spain
Chapter XXI. Security Threats in Web-Powered Databases and Web Portals ... 395
  Theodoros Evdoridis, University of the Aegean, Greece
  Theodoros Tzouramanis, University of the Aegean, Greece
Chapter XXII. Empowering the OLAP Technology to Support Complex Dimension Hierarchies ... 403
  Svetlana Mansmann, University of Konstanz, Germany
  Marc H. Scholl, University of Konstanz, Germany
Chapter XXIII. NetCube: Fast, Approximate Database Queries Using Bayesian Networks ... 424
  Dimitris Margaritis, Iowa State University, USA
  Christos Faloutsos, Carnegie Mellon University, USA
  Sebastian Thrun, Stanford University, USA
Chapter XXIV. Node Partitioned Data Warehouses: Experimental Evidence and Improvements ... 450
  Pedro Furtado, University of Coimbra, Portugal

Section VI. Emerging Trends

Chapter XXV. Rule Discovery from Textual Data ... 471
  Shigeaki Sakurai, Toshiba Corporation, Japan
Chapter XXVI. Action Research with Internet Database Tools ... 490
  Bruce L. Mann, Memorial University, Canada
Chapter XXVII. Database High Availability: An Extended Survey ... 499
  Moh'd A. Radaideh, Abu Dhabi Police - Ministry of Interior, United Arab Emirates
  Hayder Al-Ameed, United Arab Emirates University, United Arab Emirates

Index ... 528

Detailed Table of Contents

Prologue ... xviii
About the Editor ... xxvii

Section I
Fundamental Concepts and Theories

Chapter I. Conceptual Modeling Solutions for the Data Warehouse
Stefano Rizzi, DEIS - University of Bologna, Italy

This opening chapter provides an overview of the fundamental role that conceptual modeling plays in data warehouse design. Specifically, the research focuses on a conceptual model called the DFM (Dimensional Fact Model), which suits the variety of modeling situations that may be encountered in real projects of small to large complexity. The aim of the chapter is to propose a comprehensive set of solutions for conceptual modeling according to the DFM and to give the designer a practical guide for applying them in the context of a design methodology. Other issues discussed include descriptive and cross-dimension attributes; convergences; shared, incomplete, recursive, and dynamic hierarchies; multiple and optional arcs; and additivity.

Chapter II. Databases Modeling of Engineering Information ... 21
Z. M. Ma, Northeastern University, China

As information systems have become the nerve center of current computer-based engineering, the need for engineering information modeling has become imminent. Databases are designed to support data storage, processing, and retrieval activities related to data management, and database systems are the key to implementing engineering information modeling. It should be noted, however, that current mainstream databases are mainly used for business applications, and some new engineering requirements challenge today's database technologies and promote their evolution. Database modeling can be classified into two levels: conceptual data modeling and logical database modeling. In this chapter, the author identifies the requirements for engineering information modeling and then investigates how well current database models satisfy these requirements at both levels: conceptual data models and logical database models.

Chapter III. An Overview of Learning Object Repositories ... 44
Argiris Tzikopoulos, Agricultural University of Athens, Greece
Nikos Manouselis, Agricultural University of Athens, Greece
Riina Vuorikari, European Schoolnet, Belgium

Learning objects are systematically organized and classified in online databases termed learning object repositories (LORs). Currently, a rich variety of LORs is operating online, offering access to wide collections of learning objects. These LORs cover various educational levels and topics, store learning objects and/or their associated metadata descriptions, and offer a range of services that may vary from advanced search and retrieval of learning objects to intellectual property rights (IPR) management. Until now, there has not been a comprehensive study of existing LORs that gives an outline of their overall characteristics. For this purpose, this chapter presents the initial results from a survey of 59 well-known repositories with learning resources. The most important characteristics of the surveyed LORs are examined, and useful conclusions about their current status of development are drawn.

Chapter IV. Discovering Quality Knowledge from Relational Databases ... 65
M. Mehdi Owrang O., American University, USA

Current database technology involves processing a large volume of data in order to discover new knowledge. However, knowledge discovery on just the most detailed and recent data does not reveal long-term trends. Relational databases also create new types of problems for knowledge discovery, since they are normalized to avoid redundancies and update anomalies, which makes them unsuitable for knowledge discovery. A key issue in any discovery system is to ensure the consistency, accuracy, and completeness of the discovered knowledge. This selection describes the aforementioned problems associated with the quality of the discovered knowledge and provides solutions to avoid them.

Section II. Development and Design Methodologies

Chapter V. Business Data Warehouse: The Case of Wal-Mart ... 85
Indranil Bose, The University of Hong Kong, Hong Kong
Lam Albert Kar Chun, The University of Hong Kong, Hong Kong
Leung Vivien Wai Yue, The University of Hong Kong, Hong Kong
Li Hoi Wan Ines, The University of Hong Kong, Hong Kong
Wong Oi Ling Helen, The University of Hong Kong, Hong Kong

The retailing giant Wal-Mart owes its success to the efficient use of information technology in its operations. One of the noteworthy advances made by Wal-Mart is the development of a data warehouse, which gives the company a strategic advantage over its competitors. In this chapter, the planning and implementation of the Wal-Mart data warehouse is described and its integration with the operational systems is discussed. The chapter also highlights some of the problems encountered in the developmental

Database High Availability: An Extended Survey

Another main advantage of the cluster architecture is the inherent fault tolerance provided by multiple nodes. Since the physical nodes run independently, the failure of one or more nodes will not affect other nodes in the cluster. In the extreme case, a cluster system can still be available even when all but one node fails, making a cluster-based system highly available. This architecture also allows a group of nodes to be taken off-line for maintenance while the rest of the cluster continues to provide services online.

When a node in a shared-disk cluster fails, all data remains accessible to the other nodes. In-flight transactions spanning nodes are rolled back, so no data remains locked as a result of the failure. In most database clusters offered with shared disks, recovery after node failure is automatic: after detecting the failure, the cluster is automatically reconfigured and the same roll-forward/roll-back recovery processes that work in the SMP environment are applied. Another benefit of the shared-disk approach is that it provides unmatched levels of fault tolerance, with all data remaining accessible even if there is only one surviving node.
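As a rough sketch of the shared-disk failover behaviour described above, the following toy model (class and method names are hypothetical, not any vendor's API) captures the two properties the chapter emphasizes: work owned by a failed node is redistributed rather than lost, and the cluster stays available while at least one node survives:

```python
class SharedDiskCluster:
    """Toy model of a shared-disk cluster: every node can reach all data,
    so the cluster stays available while at least one node survives."""

    def __init__(self, nodes):
        self.workload = {n: 0 for n in nodes}

    @property
    def available(self):
        return len(self.workload) > 0

    def submit(self, units):
        # Balance incoming work across the surviving nodes.
        for _ in range(units):
            node = min(self.workload, key=self.workload.get)
            self.workload[node] += 1

    def fail(self, node):
        # Work owned by the failed node is redistributed, not lost:
        # in-flight transactions are rolled back and resubmitted.
        orphaned = self.workload.pop(node)
        if self.workload:
            self.submit(orphaned)


cluster = SharedDiskCluster(["n1", "n2", "n3"])
cluster.submit(9)
cluster.fail("n1")
print(cluster.available, sum(cluster.workload.values()))  # True 9
```

A federated system modelled the same way would instead lose access to the failed node's partition, which is the contrast drawn in the next paragraphs.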
If a node in the shared-disk cluster fails, the system dynamically redistributes the workload among all the surviving cluster nodes, ensuring uninterrupted service and balanced cluster-wide resource utilization. On the other hand, data in federated databases is divided across databases, and each database is owned by a different node. The only way to access data owned by a node is to request the data from that node and have it service the request. Thus, when a node fails, the data it owns becomes unavailable, and the entire system becomes unavailable as well. Also, any in-flight distributed transactions controlled by that node might have locked data on other nodes, so recovering from node failures requires additional work to resolve these in-flight transactions.

As instances of recently offered commercial database clustering solutions, Oracle Real Application Clusters has started to address cluster architectures using shared storage systems such as a SAN (Storage Area Network); Sybase offers Adaptive Server Enterprise with efficient auto-failover capability; and the IBM DB2 Integrated Cluster Environment also uses a shared storage network to achieve both fault tolerance and performance scalability. Open-source solutions for database clustering have been database-specific. MySQL replication uses a master-slave mechanism, with limited support for transactions and scalability (Cecchet et al., 2004). PostgreSQL does not have a database clustering option, although some experiments have been reported using partial replication. These extensions to existing database engines often require applications to use additional APIs to benefit from the clustering features.

Database Replication

In many Internet applications, a large number of geographically dispersed users may routinely query and update the same database. In this environment, the location of the data can have a significant impact on application response time and availability.
A centralized approach manages only one copy of the database. This approach is simple, since contradicting views between replicas are not possible, but it suffers from two major drawbacks (Amir, Danilov, & Miskin-Amir, 2002):

• Performance problems due to high server load or high communication latency for remote clients; and
• Availability problems caused by server downtime or lack of connectivity: clients in portions of the network that are temporarily disconnected from the server cannot be serviced.

The server load and server failure problems can be addressed by replicating the database servers to form a cluster of peer servers that coordinate updates. If the primary server fails, applications can switch to the replicated copy of the data and continue operations.

Database replication is different from file replication, which essentially copies files. Database-replication products log selected database transactions to a set of internal replication-management tables. The software then periodically checks these tables for updated data and moves the data from the source to the target systems while guaranteeing data coherency and consistency. Many database-replication products even have built-in tools to allow updating the primary database with any changes that users made to the backup database while the primary database was offline.

[Figure: database replication overall structure. A replicator sits between the primary database server and the secondary database servers; the flows are labelled "Initial Synchronization: Snapshot" and "New Transactions".]

The figure shows the main components of the database replication environment. The database replication process is usually carried out by three primary components:

• Primary database server (Publisher): the source of the data being replicated;
• Secondary database server (Subscriber): the destination of the replicated data; there can be one or more Subscribers; and
• Replicator: handles sending the data from the Publisher to the Subscriber(s).
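A minimal single-process sketch of these three components might look as follows; the classes and the snapshot-then-forward logic are illustrative assumptions, not any replication product's API:

```python
class Server:
    """A database reduced to a key-value store for illustration."""
    def __init__(self):
        self.rows = {}


class Publisher(Server):
    """Primary server: the source of the data being replicated."""
    def __init__(self):
        super().__init__()
        self.log = []  # committed transactions awaiting replication

    def commit(self, key, value):
        self.rows[key] = value
        self.log.append((key, value))


class Replicator:
    """Ships committed changes from the Publisher to every Subscriber."""
    def __init__(self, publisher, subscribers):
        self.publisher = publisher
        self.subscribers = subscribers
        # Initial synchronization: copy a snapshot of the source database.
        for s in subscribers:
            s.rows = dict(publisher.rows)
        self.pos = len(publisher.log)  # everything up to here is replicated

    def run_once(self):
        # Periodically move newly committed transactions to the targets.
        for key, value in self.publisher.log[self.pos:]:
            for s in self.subscribers:
                s.rows[key] = value
        self.pos = len(self.publisher.log)


pub = Publisher()
pub.commit("a", 1)
rep = Replicator(pub, [Server(), Server()])  # snapshot carries "a" over
pub.commit("b", 2)
rep.run_once()                               # ships "b" to both subscribers
```

Promoting a Subscriber to Publisher after a failure is deliberately absent here, mirroring the point made below that transactional replication leaves failover as a manual step.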
Database replication uses a snapshot of the source database to initially synchronize the databases at the publisher and the subscriber. As transactions are committed at the publisher side, they are captured and sent to the subscriber(s). Transactional replication is not primarily designed for high availability: the process of promoting the secondary server to assume the role of the primary server is manual, not automatic, and returning the primary server to its original role after a failure requires a complete database restoration.

Database Log Mirroring

Database mirroring is another option that enables database-level failover against unplanned downtime caused by server or database failures. In the event that the primary database fails, database mirroring enables a second standby database server to be almost instantly available with zero data loss. The secondary database is always updated with the current transaction being processed on the primary database server, and the impact of database mirroring on transaction throughput is minimal.

[Figure: database mirroring overview. Applications connect to the primary database server; its transaction log is mirrored to a secondary (mirror) database server, with an observer monitoring both.]

Unlike clustering services, which work at the server level, database mirroring is implemented at the database level. Database mirroring provides nearly instant failover, taking only a few seconds, while clustering typically has longer failover times. Database mirroring also provides added protection against disk failures, as there is no shared quorum disk as there is in a clustering solution. And unlike clustering, which requires specific hardware configurations, database mirroring works with all standard hardware that supports most of today's DBMS systems. The figure demonstrates how database mirroring works.
Database mirroring is implemented using three systems: the primary server, the secondary server, and the observer. The primary database server usually provides the database services; by default, all incoming client connections are directed to the primary server. The job of the secondary server is to maintain a copy of the primary server's mirrored database. The secondary server is not restricted to providing backup services: other databases on the secondary server can actively support other, unrelated applications. The observer essentially acts as an independent third party with the responsibility of determining which system will assume the role of the primary server.

The strategy applied in database mirroring is usually to send transaction logs between the primary and secondary servers, which makes database mirroring a real-time log-shipping application. When a client system writes a transaction to the primary server, the request is written to the primary server's log file before it is written into the data file. The transaction record then gets sent to the secondary server, where it is written to the secondary server's transaction log. After the secondary server has written the record to its log, it sends an acknowledgement to the primary server. This lets both systems know that the record has been received and that the same data now exists in each server's log file. In the case of a commit operation, the primary server waits until it receives an acknowledgement from the mirroring server before it sends its response back to the client saying that the operation is completed. The secondary server should be in a state of continuous recovery to keep its data files up to date with the incoming transaction log data. To facilitate high availability for client applications, database mirroring works in conjunction with the Transparent Client Redirection (TCR) layer, which enables end-user systems to be automatically redirected to the secondary server in the event that the database on the primary server becomes unavailable.
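The commit sequence described above (log write, ship the record, acknowledge, then confirm to the client) can be sketched as a toy synchronous log-shipping model; all names are illustrative, not any DBMS's API:

```python
class MirroredDatabase:
    """Sketch of synchronous log shipping between a primary and a mirror:
    a commit is acknowledged to the client only after the transaction
    record is hardened in BOTH servers' logs."""

    def __init__(self):
        self.primary_log, self.primary_data = [], {}
        self.mirror_log, self.mirror_data = [], {}

    def commit(self, key, value):
        record = (key, value)
        self.primary_log.append(record)      # log write precedes data write
        ack = self._send_to_mirror(record)   # ship the log record
        if not ack:
            raise RuntimeError("mirror unreachable; commit not acknowledged")
        self.primary_data[key] = value
        return "committed"                   # response to the client

    def _send_to_mirror(self, record):
        self.mirror_log.append(record)       # mirror hardens the record...
        return True                          # ...then acknowledges

    def redo_on_mirror(self):
        # The mirror stays in continuous recovery, replaying its log
        # so its data files track the primary's.
        for key, value in self.mirror_log:
            self.mirror_data[key] = value


db = MirroredDatabase()
db.commit("x", 42)
db.redo_on_mirror()
print(db.mirror_data)  # {'x': 42}
```

The design choice worth noting is that the client-visible commit waits on the mirror's acknowledgement, which is what yields the "zero data loss" failover property at some cost in latency.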
Availability Benchmarking Methodology

In this section, a sample ad-hoc measurement of availability in database management systems is illustrated, following Brown's work on availability benchmarking of software RAID (Redundant Array of Independent Disks) systems (Brown, 2000; Brown & Patterson, 2000). Brown's technique quantifies availability behavior by examining the variations in delivered quality of service as the system is subjected to targeted fault injection. The availability benchmarking methodology consists of four parts: (i) a set of quality-of-service metrics that measure the test system's behavior; (ii) a generator that produces a realistic workload and provides a way to measure the quality of service under that workload; (iii) a fault-injection environment used to compromise the test system's availability; and (iv) a reporting methodology based on a graphical representation of the test system's availability behavior.

The first step is to select appropriate quality-of-service metrics. These metrics must be chosen so that they reflect degradations in system availability in the broadest sense of the term, and the choice depends on properties of the system being benchmarked. For example, performance degradation would be seen as a decrease in availability in most systems, so a performance-based quality-of-service metric is typically an appropriate choice for an availability benchmark. But other metrics can be considered as well, including, for example, the consistency or accuracy of results delivered by the test system.

The second component, the workload generator, typically takes the form of a traditional performance benchmark. Its role is to produce a realistic workload that places the test system under the kind of load conditions it typically experiences in practice. A performance benchmark is used because a great deal of existing work has been carried out on constructing realistic workloads in that context. In addition, the workload generator must be able to measure the desired quality-of-service metrics defined in the first step of the methodology; since quality of service is typically closely tied to performance, a standard performance benchmark often has the desired measurement capability built in.

The methodology specifies that, while the workload generator is running, the test system should be subjected to targeted faults designed to mimic real-world failure cases that may compromise availability. The third component is therefore a fault-injection environment. A key point is that the injected faults must be realistic, either based on a priori knowledge of failure-prone design flaws in some part of the system, or based on historical knowledge of typical failure cases for the system and the kinds of faults that provoke them.

Finally, the last component specifies the way results are collected and presented; essentially, it defines the procedural aspects of carrying out an availability benchmark. First, the system is run under the generated workload with no faults injected. The quality-of-service values collected during this run are statistically processed to produce a 99% confidence interval demarcating the normal quality-of-service behavior of the system. Then the experiments are repeated multiple times with different combinations of faults injected during those runs. The methodology specifies both single-fault micro-benchmarks, in which a single fault is injected and the system is left untouched until it stabilizes or crashes, and multiple-fault macro-benchmarks, in which a series of faults designed to mimic a complex real-world scenario is injected, with human intervention allowed for system maintenance purposes.
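The statistical side of the reporting step can be sketched as follows. The 99% band is computed here with a simple normal approximation, which is an assumption on my part, and the throughput numbers are invented purely for illustration:

```python
import statistics


def normal_band(qos_samples, z=2.576):
    """99% band for 'normal' quality of service, from a fault-free run.
    Assumes roughly normal samples (a simplification of the methodology)."""
    mu = statistics.mean(qos_samples)
    sigma = statistics.stdev(qos_samples)
    return mu - z * sigma, mu + z * sigma


def degraded_intervals(faulty_run, band):
    """Time indices where QoS fell outside the normal band during a faulty run."""
    lo, hi = band
    return [t for t, q in enumerate(faulty_run) if not lo <= q <= hi]


# Throughput (ops/s) from a fault-free run, then from a single-fault run
# where a disk fault is injected around t=3 (all numbers are made up).
normal_run = [100, 101, 99, 100, 102, 98, 100, 101]
faulty_run = [100, 101, 100, 55, 60, 97, 100, 100]

band = normal_band(normal_run)
print(degraded_intervals(faulty_run, band))  # [3, 4]
```

Plotting `faulty_run` against time with the band and the injection point overlaid would reproduce the graphical report the methodology calls for.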
The results of these faulty runs are reported graphically, with quality of service plotted versus time, overlaid with both an indication of when the faults were injected and the 99% confidence interval computed from the normal run.

Conclusion

High availability does not just happen; it is achieved only by strengthening the combination of people, processes, and technology. A plan that focuses purely on technology will never achieve high levels of availability, because many of the significant factors that affect availability stem from the interaction of people and processes. Preparing the proper hardware and software platform is only a starting point; from that point on, high availability is the result of good planning and practices in combination with the appropriate technologies.

Designing a cost-effective high-availability environment for an information system requires understanding the causes of outages, the critical elements for application execution, and the impacts of application outages on the business. With today's technology, there is a range of solutions to support business-critical applications. Although outages may occur, recovery is likely to be quick. If an application outage lasts for more than a few minutes, it will severely impact the business; in such cases, a clustering solution may be necessary. For constant 24-hour availability, or for applications where outages either are life-threatening or will directly affect the survival of the business, high-end, fault-tolerant solutions are required. Good operational procedures can make an enormous difference between the theoretical availability and the actual availability of a solution. Finally, an organization must have an enterprise vision for high availability to gain and sustain its competitive advantage. A strategy must be developed to respond effectively to unanticipated events or disruptions; demands, risks, and opportunities abound, ranging from market fluctuations to employee error and misconduct to earthquakes and terrorism.

References

A framework for system high availability. (2000). CA: Intel Corporation.
Amir, Y., Danilov, C., & Miskin-Amir, M. (2002). Practical wide-area database replication (Tech. Rep. No. CNDS-2002-1). Baltimore: Johns Hopkins University.
Aronoff, E. (2003). Building a 24x7 database. Saint Johns, MI: Quest Software, Inc.
Arora, R. (2005). High availability strategies of an enterprise. Uttar Pradesh, India: Tata Consultancy Services.
Barraza, O. (2002). Achieving 99.9998+% storage uptime and availability. Carlsbad, CA: Dot Hill Systems Corp.
Bauer, M. (2001). Oracle8i Parallel Server concepts release. Redwood City, CA: Oracle Corporation, A76968-01.
Bender, W. J., & Joshi, A. (2004). High availability technical primer. McLean, VA: Project Performance Corporation.
Brien, M. O. (2000). GoAhead Availability Management Service technical brief. Bellevue, WA: GoAhead Software, Inc.
Brown, A. (2000). Availability benchmarking of a database system. Berkeley, CA: University of California at Berkeley, EECS Computer Science Division.
Brown, A., & Patterson, D. A. (2000). Towards availability benchmarks: A case study of software RAID systems. In Proceedings of the 2000 USENIX Annual Technical Conference (pp. 263-276). San Diego, CA.
Buch, V., & Cheevers, S. (2002). Database architecture: Federated vs. clustered. Redwood Shores, CA: Oracle Corporation.
Cai, J., & Leung, S. (2002). Building highly available database servers using Oracle Real Application Clusters. Redwood Shores, CA: Oracle Corporation.
Cecchet, E., Marguerite, J., & Zwaenepoel, W. (2004). C-JDBC: Flexible database clustering middleware. In Proceedings of the USENIX Annual Technical Conference, FREENIX Track, Boston (pp. 9-18).
Chandrasekaran, S., & Kehoe, B. (2003). Technical comparison of Oracle Real Application Clusters vs. IBM DB2 UDB ESE. Redwood Shores, CA: Oracle Corporation.
Choosing the right database: The case for OpenBase SQL. (2004). Concord, NH: OpenBase International, Ltd.
DB2 operation: The challenge to provide 24x365 availability. (2000). Houston, TX: BMC Software, Inc.
Disaster recovery package for Sybase Adaptive Server Enterprise. (2004). Dublin, CA: Sybase Inc.
Gribble, S. D., Brewer, E. A., Hellerstein, J. M., & Culler, D. (2000). Scalable, distributed data structures for Internet service construction. In Proceedings of the 4th Symposium on Operating Systems Design and Implementation (OSDI 2000), San Diego, CA (pp. 319-332).
High availability and more: Achieving a Service Availability™ solution. (2001). Bellevue, WA: GoAhead Software, Inc.
High availability white paper. (2001). San Jose, CA: BlueArc Corporation.
Klein, D. (1988). Architecting and deploying high-availability solutions. USA: Compaq Computer Corporation, Inc.
Kumar, S. (2005). Oracle Database 10g release high availability. Redwood City, CA: Oracle Corporation, Inc.
Lemme, S. (2002). IT managers guide: Maximizing your technology investments in Oracle. Database Trend and Application Magazine.
Lemme, S., & Colby, J. R. (2001). Implementing and managing Oracle databases (1st ed.). New York: PRIMA Publishing.
Low cost high availability clustering for the enterprise. (2004). Burlington, MA: Winchester Systems Inc. and Red Hat Inc.
Lumpkin, G. (2004). Oracle partitioning: A must for data warehouse and OLTP environments. Redwood Shores, CA: Oracle Corporation, Inc.
Otey, M., & Otey, D. (2005). Choosing a database for high availability: An analysis of SQL Server and Oracle. USA: Microsoft Corporation.
Parikh, A. (2004). Trustworthy software. Unpublished master of science dissertation, Stevens Institute of Technology, Castle Point on Hudson, Hoboken.
Providing open architecture high availability solutions. (2001). Bellevue, WA: GoAhead Software, Inc.
Rosenkrantz, B., & Hill, C. (1999). Highly available embedded computer platforms become reality. Chicago: Motorola Computer Group, issue of Embedded Systems Development.
Russom, P. (2001). Strategies and Sybase solutions for database availability. Waltham, MA: Hurwitz Group Inc.
Saito, Y., Bershad, B. N., & Levy, H. M. (2000). Manageability, availability, and performance in Porcupine: A highly scalable, cluster-based mail service. In Proceedings of the 17th Symposium on Operating System Principles (SOSP), ACM Transactions on Computer Systems, 18(3), (pp. 298-332). Kiawah Island, SC.
Sauers, B. (1996). Understanding high availability. USA: Hewlett-Packard Company, Inc.
Service availability: A customer-centric approach to availability. (2000). Bellevue, WA: GoAhead Software, Inc.
Singh, H. (2001). Distributed fault-tolerant/high availability systems. Los Angeles, CA: Trillium Digital Systems, Inc.
Standards for a Service Availability™ solution. (2002). USA: Demac Associates for Service Availability™ Forum.
Tendulkar, V. S. (2005). MySQL database replication and failover clustering. Mumbai, India: Tata Consultancy Services Ltd.

This work was previously published in Architecture of Reliable Web Applications Software, edited by M. A. Radaideh and H. Al-Ameed, pp. 1-33, copyright 2007 by IGI
Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).

Index

Symbols
.NET 114
5-Nines 500

A
absolute validity interval 237, 242
ACID (see atomicity, consistency, isolation, and durability) 238
ACID (see atomicity, consistency, isolation, and durability) 266
additivity 13
admission control 239
ADOM, theoretical foundations of 333
ADOM-UML application layer 335
ADOM-UML domain layer 334
advanced distributed learning (ADL) Co-Laboratory (ADL Co-Lab) 48
advanced modeling
advertising databases 216
AESharenet 60
aggregation 13
Agile Manufacturing (AM) 21
Alexandria 60
Alliance of Remote Instructional Authoring and Distribution Networks for Europe (ARIADNE) 47, 53, 60
aperiodic 236, 237
application-based domain modeling (ADOM) 330, 331, 332, 338
application-based domain modeling (ADOM), approach 333
application development 520
application development standards 504
application protection 503
arrival pattern 237
arrival pattern of transactions 237
artifact 491
Artificial intelligence 24
association rule mining 359
association rules 305
asynchronous 514
atomicity 238
atomicity, consistency, isolation, and durability 238
atomicity, consistency, isolation, and durability (ACID) 266
authentication
authorization 400
automatic failover 516
automating process 504

B
B2B e-commerce 24
Balmoral Group, creation of 96
Bayesian network (BN) 424–449
best-first search 434
bioinformatics Web portal 299
BIOME 60
bitmaps 427
Blackboard(R) 54
Blue Web’n 60
Boolean attributes 427
BPM (see business performance management) 266
business environment 191
Business Information Integration 369
business intelligence 192
business performance management (BPM) 266

C
CAD/CAPP/CAM 22
Campus Alberta Repository for Educational Objects (CAREO) project 47, 60
Canada’s SchoolNet 60
CAPDM Sample Interactive LOS 60
CELEBRATE network 53
Chow-Liu trees 434
CIMOSA 26
CITIDEL 60
classification 306
clustering 512, 523
Co-operative Learning Object Exchange (CLOE) 60
collection and storage of data 193
coloured Petri net (CPN) 247
compatibility function 238
computer-aided design 22
computer-aided manufacturing 22
computer-aided process planning 22
computer-based information technology 22
Computer Integrated Manufacturing (CIM) 21
Computer Science Teaching Center (CSTC) 60
computer terminal network (CTN) 87
conceptual data modeling 33
conceptual data models 22
conceptual design 14, 29
conceptual model 114, 180
conceptual modeling
Concurrent Engineering (CE) 21
Connexions 60
consistency 238
Constraint databases 31
Content Object Repository Discovery and Registration/Resolution Architecture (CORDRA) 53
continuous availability 499, 508
conventional database system 235
convergence
cookies 399
cost avoidance 253
cost sharing 253
course packs 47
  portals 47
Creative Commons 55
cross-dimension attribute

D
Daily production 192
data access type 238
database 266, 500
database design 123
database management system 175, 500
database management system (DBMS) 253
database management systems 235
Database Management Systems (DBMS) 369
database management systems (DBMS) 135
database mirroring 523
database modeling 27
database schema interoperability 212
database standards 504
Database systems 22
data cleaning 67
data clustering 360
DataCube (DC) 426–430, 437
DataCube (DC) transform (DCT) 427
Data mining 26, 193, 194
data mining 67, 425–426, 429
Data mining activities 198
data mining algorithms 191
Data mining applications 193
data mining applications 194
data mining information 192
data mining in manufacturing 194
data mining results 197
Data mining techniques 194
data mining techniques and tools 191
data protection 503
data warehouse 68, 85
data warehouse design
data warehouses (DWs) 1, 134, 135, 450
Data Warehousing 86
DB2 32
DBMS (see database management system) 253
deadline 236, 237, 239, 245, 249
decisional model 15
Decision tree 192
decision tree 191, 192
decision tree as a data mining method 199
decision trees 472
descriptive attribute
DFM (see dimensional fact model)
diagrammatic VQL 177
Digital Scriptorium 61
  Think 47
digital learning object repositories specification 53
  rights management (DRM) 55
Digital Library for Earth System Education 60
digital manipulatives 496
dimensional fact model (DFM)
dimension attribute
dimension categorization, via constraint definition 410
Dirichlet priors 432
disaster recovery 503
discretionary access control (DAC) 141
distributed database 519
Document-centric documents 30
DOM (Document Object Model) 114
downtime 192
DSpace™ (MIT) 61
DTD (Document Type Definition) 115
DTDs 306
Dublin Core metadata standard 53
durability 238
DWs (see data warehouses)
dynamically linking databases and ontologies 212
dynamic hierarchies 12
dynamic information space organization 210

E
e-Learning Research and Assessment Network (eLera) 61
e-manufacturing 36
e-service 515
EDI (Electronic Data Interchange) 113
EducaNext (UNIVERSAL) 61
educational objects economy (EOE) 61
Education Network Australia (EdNA) Online 47, 53, 61
eduSource 53
EER Model 125
Eisenhower National Clearinghouse for Mathematics and Science Education 61
electronic data interchange (EDI) 87
electronic service 500
employee database 192
engineering/product design and manufacturing 25
Engineering information 22
Enhanced and Evaluated Virtual Library 61
Enterprise information systems (EISs) 23
enterprise resource planning (ERP) 23
entity/relationship (E/R) model
Entity Relationship (ER) model 118
Entity Relationship for XML (ERX) 119
entry points
ER-Like Models 126
ESCOT 61
European e-ACCESS repository 53
European Union’s (EU) 135
Expansion of projects 193
expert systems 24
exploratories 61
EXPRESS 28
expressive power of language 176
Extensible Entity Relationship Model (XER) 120
eXtensible Markup Language (XML) 369
extensible markup language (XML) 185
eXtensible Markup Language (XML) data sources 369

F
failover 514, 516
Fathom Knowledge Network Inc 61
fault recognition 502
fault tolerance 507
fault-injection environment 524
fault avoidance 502
fault minimization 502
Filamentality 61
Firebird 256
firm 236, 237
firm deadline 237
floating IP addresses 515
flow measures 13
functional data model 183
fusion places 239, 240
Fuzzy databases 31
fuzzy decision tree 472, 473–476
fuzzy decision tree format 473
fuzzy decision tree inductive learning method 473–475
fuzzy decision tree inference method 475–476
fuzzy inductive learning method 484
fuzzy logic 24
Fuzzy set theory 24
fuzzy set theory 472

G
Gateway to Educational Materials (SM) (GEM) consortium 61
genetic algorithm 434
genomics research network architecture (gRNA) 298
geo-data explorer (GEODE) 364
geographic information science (GIS) 186
Geotechnical, Rock and Water Resources Library 61
Global Education Online Depository and Exchange 61
Global Grid Forum 297
global manufacturing 21
GNU, General Public License (GNU GPL) 266
government portal 395
graphical user interface (GUI) 89, 266
greedy equivalence search (GES) algorithm 434
grid computing 266
grid protein sequence analysis (GPSA) 298
GUI (see graphical user interface) 266

H
hacker 396
hard deadline 236
Harvey Project 62
Health Education Assets Library (HEAL) 62
heartbeat 516
heuristic search algorithms 432
hierarchical coloured Petri net (HCPN) 235, 239, 242, 251
hierarchy
high availability 499, 507, 520
hill-climbing search algorithm 432, 434
historical data 69
Humbul Humanities Hub 62
hybrid VQL 179
hybrid OLAP (HOLAP) 359
HyperModel 127

I
Iconex 62
iconic VQL 178
IDEF1X 28
iLumina 62
Imprecision 24
imprecision 238, 239, 246
IMS Global Learning Consortium Inc 51, 53–54
  Learning Object Metadata (LOM) 51
incorrect knowledge discovery 71
Inductive databases 306
industry analyst, job description 95
information and communication technologies (ICTs) 44
information modeling 22
Information systems 21
information systems (IS) 136
information technology 85
information technology (IT) 503
Institute of Electrical and Electronics Engineers (IEEE)
  Learning Object Metadata (LOM) standard 46, 53
  Learning Technology Standards Committee (LTSC) 46
integration business information problem 369
inter-ontology relationships, creating 213
inter-ontology relationships, deleting 213
Interactive Dialogue with Educators from Across the State (IDEAS) 62
  University (IU) Project 62
internal consistency 238
International Organization for Standardization (ISO) 28
Internet-based applications 515
inventory 87
isolation 238

J
J2EE 114
join-tree algorithm 436
JORUM 62

K
KDD 305, 306
knowledge-based system 22
Knowledge Agora 62
knowledge discovery 65, 67
knowledge discovery from databases (KDD) 26
knowledge discovery in databases (KDD) 194
knowledge discovery process (KDD) 306
knowledge discovery system 306
Knowledge integration 26
Knowledge maintenance 26
Knowledge Management 26
Knowledge modeling 26

L
LabBase 298
Le@rning Federation 62
Learn-Alberta 62
learning management system (LMS) 47
  matrix 62
Learning-Objects.net 63
Learning-Objects.net (Acadia University LOR) 62
LearningLanguages.net 62
Learning Objects for the Arc of Washington 63
  Learning Activities (LoLa) Exchange 63
  Repository, University of Mauritius 62
  Virtual College (Miami Dade) 63
learning objects 45, 45–47
  referatories 47
  repositories (LORs) 45–60
linear regression 427
local clustering 515
local disk mirroring 514
logical 238
logical consistency 235, 237
logical constraint 247
logical database models 22, 29
logical model 180

M
machine learning 472
machinery capacity 192
man-in-the-middle attack 398
mandatory access control (MAC) 141
manufacturing domain 25
Manufacturing enterprises 36
manufacturing flexibility 25
manufacturing organizations 191
mapping 239, 247
Maricopa Learning Exchange 63
Math Forum 63
MDSCL (multidimensional security constraint language) 137
MDX (multidimensional expressions) 137
message sequence chart 241, 248
message sequence chart (MSC) 244
metadata 45, 46, 47, 53, 54
  standards 45
metrics
minimum description length (MDL) 432
mining of knowledge 193
MIT OpenCourseWares 63
MOMT (multilevel object-modeling technique) 136
monitoring 239, 248
Moodle 54
MSAnalyzer 298
MSC (see message sequence chart) 244
MSDNAA 63
multidimensional (MD) models 135
multidimensional expressions (MDX) 137
multidimensional OLAP (MOLAP) 358
multidimensional security constraint language (MDSCL) 137
multilevel object-modeling technique (MOMT) 136
Multimedia Educational Resource for Learning and Online Teaching (MERLOT) system 47, 53, 63
  -CATS: Community of Academic Technology Staff 63
multiple arc
multiple hierarchies, types of 413
multivariate Gaussian 432
myGrid 298
MySQL 256

N
namespaces 305
National Institute of Multimedia Education (NIME) 53
  Learning Network: Materials 63
  Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL) 63
  Security Agency of the Slovak Republic 396
Native XML Sources 372
NEEDS 63
negotiation 239, 246, 247, 248
Nesting 385
NetCube 425–426, 434–445, 444
network interface card (NIC) 518
network management 503
network standard 504
New Media Consortium (NMC) 46
node partitioned data warehouse 450
node partitioned data warehouse (NPDW) 451
Not-XML Data Sources 371

O
Object-Constraint Language (OCL) 138
object-modeling technique (OMT) 136
object-oriented databases 31
object-relational database management system (ORDBMS) 257, 266
occurrence graph (OG) 243
Occurrence Graph tool 243
ODMG 35
OG (see occurrence graph) 243
OLAP (see online analytical processing) 2, 254, 266
OLAP, summarizability and homogeneity 404
OLTP (see online transaction processing)
OMT (object-modeling technique) 136
online analytical processing (OLAP) 2, 90, 137, 254, 266, 358, 425, 451
Online Learning Network 47
online transaction processing (OLTP) 1, 90, 514
open source software (OSS) 266
OpenVES 64
operating system (OS) 504
optional arc
Oracle 32
Oracle Label Security (OLS10g) 134
ORDBMS (see object-relational database management system) 257, 266
organizational model 15

P
patterns 305
PBS TeacherSource 64
PDM (product data management) system 23
performance metrics 239, 247
periodic 236, 248, 249
periodicity 249
Perpetual Inventory (PI) System 91
Petri net 239
physical model 180
PiTR (see point-in-time recovery) 260
planned downtime 509
point-in-time recovery (PiTR) 260
point-of-sales (POS) data 87
polytrees 434
PostgreSQL 257
Primary artifacts 492
probabilistic inference 436
probabilistic relational models 429
probability distribution function (PDF) 428–433
production management 25
product life cycle 23
PROTEUS 299
prototyping 99
publisher 522

Q
QoS management 247
QoS parameters 239
query 175
  -by-browsing (QBB) 182
  -by-example (QBE) 182
  body 178
  by icon (QBI) 178
  graph 183
  head 178
  language 174
  mapping 175
  window 183
Query Statistics 89

R
Radio Frequency Identification (RFID) 92
ragged (or incomplete) hierarchy 11
rapid failover 517
RDBMS (see relational database management system) 253, 266
real-time databases 234, 235, 236, 237
real-time databases (RTDB) 235
redundancy 502, 509
relational database management system (RDBMS) 253, 266
relational databases 29
Relational Databases and XML 371
Relational Databases Sources 369
relational model 181
relational OLAP (ROLAP) 358
relative validity interval 237
remote disk mirroring 514
replenishment 90
replication 515
replicator 522
resiliency 502
resource reservation 239
return on investment (ROI) analysis 88
RFID 85
risk assessment 502
RTDB (see real-time database) 235
rule discovery methods 472
rule discovery methods applications 481–484
rule discovery methods key concept dictionary 476–479
rule discovery methods key phrase pattern dictionary 479–481
rule discovery process 69

S
saturated query 428
scalability 502
schema language 32
SDAI functions 35
SDMMS database 354
SDMMS data warehouse 356
Secondary artifacts 492
secondary event
security 297
  breach 396
security labels 139
seismic data management and mining system (SDMMS) 352, 353
seismo-surfer 364
semantic-lock technique 238
Semantic Modeling Networks 124
semi-planned downtime 510
sendmail program 396
Sensor Networks 245
sensor networks 234, 236, 245
server high availability 515
service-oriented architecture (SOA) 299
serviceability 502
session high-jacking 398
shared hierarchies 10
SilkRoute 374
simulation model 235
single points of failure (SPOF) 507
small computer system interface (SCSI) 512
SMETE 47
SMP (see symmetric multiprocessor) 261
sniffing 397
SOAP (Simple Object Access Protocol) 114
Social artifacts 493
soft deadline 236
software-development life cycle 242
sources of data for mining 192
SpecAlign 298
specification 239, 244, 247, 248
sporadic 236
SQL (see structured query language) 266
  Server 255
SQL injection 400
Standard for the Exchange of Product Model Data (STEP) 28
standby databases 515
star schema
stock measures 13
storage area network (SAN) 521
storage availability 515
structured query language (SQL) 266
substitution transitions 239, 240
supply chain management (SCM) 23
supply chain processes 87
symmetric multi-processor (SMP) 518
symmetric multiprocessor (SMP) 261
synchronous 514
system architecture 502
system management monitoring 503

T
tabular VQL 176
TD (see timing diagram) 244
temporal hierarchy 10
temporal restriction (see also timing constraint) 235
Teradata Corporation 87
textual data 471–485
textual data format 473
timestamp 237, 245
timing constraint 236, 241
timing diagram (TD) 244
Traits 91
transactional integrity 516
Tree-Like Models 128
tree-shaped structures 192

U
UML (see Unified Modeling Language) 235
UML (Unified Modeling Language) 28, 118
UML, as modeling language 330
UML-based methods 118
UML-Based Models 127
unbalanced (or recursive) hierarchy 11
uncertainty 24
Unified Modeling Language (UML) 135, 235
UNIQUE constraints 35
unit measures 13
unplanned downtime 510
unsaturated query 428
user friendly 400

V
validity interval 237, 242, 247
variables
Virtual Enterprise (VE) 21
virus 398
VISIONARY 182
VISUAL 186
visual metaphor 180
Visual Studio .NET 126
Vocabulary Definition Exchange (VDEX) specification 54

W
Wal-Mart 85
Web application 397
Web-based supply chain management 24
WebCT 54
WebFINDIT 206
WebFINDIT, design principles of 210
wide area network (WAN) 512
workload generator 524
World Wide Web (WWW) 45
World Wide Web Consortium (W3C) 114

X
X-Ray 373
XBD 375
XDM 305
XDS 383
XDSQuery 388
XDSSchema 386
XGrammar 125
XML 112, 306, 369
XML (eXtensible Markup Language) 29
XML-Enabled Relational Databases 372
XML-schema 306
XML Authority 128, 129
XML Data Sources 371
XML Designer 126
XML DTD 115
XML Metadata Interchange (XMI) Specification 114
XML Modeling Issues 116
XML Schema 116
XMLSPY 129
XQuery textual language 185
XTABLES 374
