
Advances in Artificial Intelligence. Ahmed Y. Tawfik and Scott D. Goodwin (Eds.), 2004


Document information: 595 pages, 7.95 MB

Contents

Lecture Notes in Artificial Intelligence 3060
Edited by J. G. Carbonell and J. Siekmann
Subseries of Lecture Notes in Computer Science
Berlin Heidelberg New York Hong Kong London Milan Paris Tokyo

Ahmed Y. Tawfik, Scott D. Goodwin (Eds.)

Advances in Artificial Intelligence
17th Conference of the Canadian Society for Computational Studies of Intelligence, Canadian AI 2004
London, Ontario, Canada, May 17-19, 2004
Proceedings

Series Editors: Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA; Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors: Ahmed Y. Tawfik, Scott D. Goodwin, University of Windsor, School of Computer Science, Windsor, Ontario, N9B 3P4, Canada. E-mail: atawfik@cs.uwindsor.ca; sgoodwin@uwindsor.ca

Library of Congress Control Number: 2004104868
CR Subject Classification (1998): I.2
ISSN 0302-9743
ISBN 3-540-22004-6 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law. Springer-Verlag is a part of Springer Science+Business Media (springeronline.com). © Springer-Verlag Berlin Heidelberg 2004. Printed in Germany on acid-free paper. Typesetting: camera-ready by authors; data conversion by PTP-Berlin, Protago-TeX-Production GmbH. SPIN: 11007128.

Preface

Following a long tradition of excellence, the seventeenth edition of the conference of the Canadian Society for the Computational Studies of Intelligence continued the success of its predecessors. This edition reflected the energy and diversity of the Canadian AI community and the many international partnerships that this community has successfully established. AI 2004 attracted high-quality submissions from Canada and around the world. All papers submitted were thoroughly reviewed by the program committee; each paper was assigned to at least three program committee members. Out of 105 submissions to the main conference, 29 papers were included as full papers in this volume, and 22 as short/position papers. Three workshops and a graduate symposium were also associated with AI 2004. In this volume, 14 papers selected from 21 submissions to the graduate symposium have been included. We invited three distinguished researchers to give talks representing their active research in AI: Fahiem Bacchus, Michael Littman, and Manuela Veloso.

It would have been impossible to organize such a successful conference without the help of many individuals. We would like to express our appreciation to the authors of the submitted papers, and to the program committee members and external referees who provided timely and significant reviews. In particular, we would like to thank Luis Rueda for organizing the reviewing of the graduate symposium submissions, and Eric Mulvaney for providing valuable assistance in the preparation of the proceedings. To manage the submission and reviewing process we used CyberChair, developed by Richard van de Stadt. Christine Günther from Springer has patiently attended to many editorial details. We owe special thanks to Bob
Mercer for handling the local arrangements. Last, but not least, we would like to thank the General Chair, Kay Wiese, and all the steering committee members for their tremendous efforts in making AI 2004 a successful conference.

May 2004
Ahmed Y. Tawfik and Scott D. Goodwin

Organization

AI 2004 was organized by the Canadian Society for Computational Studies of Intelligence (Société Canadienne pour l'Étude de l'Intelligence par Ordinateur).

Executive Committee
Conference Chair: Kay Wiese (Simon Fraser University)
Local Organizer: Bob Mercer (University of Western Ontario)
Program Co-chairs: Ahmed Y. Tawfik (University of Windsor) and Scott D. Goodwin (University of Windsor)

Program Committee
Aijun An (York U.), Peter van Beek (U. of Waterloo), Michael Bowling (U. of Alberta), Cory Butz (U. of Regina), Brahim Chaib-draa (U. Laval), Nick Cercone (Dalhousie U.), David Chiu (U. of Guelph), Diane Cook (U. of Texas at Arlington), Douglas D. Dankel (U. of Florida), Jim Delgrande (Simon Fraser U.), Joerg Denzinger (U. of Calgary), Renée Elio (U. of Alberta), Richard Frost (U. of Windsor), Ali Ghorbani (U. of New Brunswick), Gary Grewal (U. of Guelph), Jim Greer (U. of Saskatchewan), Howard Hamilton (U. of Regina), Bill Havens (Simon Fraser U.), Graeme Hirst (U. of Toronto), Michael C. Horsch (U. of Saskatchewan), Nathalie Japkowicz (U. of Ottawa), Froduald Kabanza (U. of Sherbrooke), Stefan C. Kremer (U. of Guelph), Amruth Kumar (Ramapo College), Dekang Lin (U. of Alberta), Charles Ling (U. of Western Ontario), Jim Little (U. of British Columbia), Stan Matwin (U. of Ottawa), Gord McCalla (U. of Saskatchewan), Omid Madani (U. of Alberta), Bob Mercer (U. of Western Ontario), Evangelos Milios (Dalhousie U.), Guy Mineau (U. Laval), Shiv Nagarajan (QNX Systems), Eric Neufeld (U. of Saskatchewan), Alioune Ngom (U. of Windsor), Simon Parsons (Brooklyn College), Jeff Pelletier (U. of Alberta), Petra Perner (ibai Leipzig), David Poole (U. of British Columbia), Fred Popowich (Simon Fraser U.), Gregory Provan (Rockwell), Bob Price (U. of Alberta), Robert Reynolds (Wayne State U.), Luis Rueda (U. of Windsor), Abdul Sattar (Griffith U.), Dale Schuurmans (U. of Alberta), Weiming Shen (NRC), Daniel Silver (Acadia U.), Bruce Spencer (NRC and UNB), Suzanne Stevenson (U. of Toronto), Stan Szpakowicz (U. of Ottawa), Choh Man Teng (U. of West Florida), André Trudel (Acadia U.), Julita Vassileva (U. of Saskatchewan), Shaojun Wang (U. of Alberta), Michael Wong (U. of Regina), Dan Wu (U. of Windsor), Yang Xiang (U. of Guelph), Yiyu Yao (U. of Regina), Jia You (U. of Alberta), Eric Yu (U. of Toronto), Hong Zhang (U. of Alberta), Kaizhong Zhang (U. of Western Ontario), Nur Zincir-Heywood (Dalhousie U.)
Additional Reviewers
Xiangdong An, Mohamed Aoun-Allah, Julia Birke, Scott Buffett, Terry Caelli, Shihyen Chen, Lei Duan, Wael Farag, Alan Fedoruk, Julian Fogel, Song Gao, P. Gburzynski, Ali Ghodsi, Jasmine Hamdan, Malcolm Heywood, Zhihua Hu, Jimmy Huang, Kamran Karimi, Vlado Keselj, Daniel Lemire, Jingping Liu, Wei Liu, Yang Liu, Xiaohu Lu, Xinjun Mao, Sehl Mellouli, Milan Mosny, V. Muthukkumarasamy, Lalita Narupiyakul, Chris Parker, Gerald Penn, M. Shafiei, Baozheng Shan, Yidong Shen, Pascal Soucy, Finnegan Southey, Marius Vilcu, Kimberly Voll, Xiang Wang, Xin Wang, Kun Wu, Qiang Yang, Manuel Zahariev, Hong Zhang, Yan Zhao

Sponsors
National Research Council Canada / Conseil National de Recherches Canada
Canadian Society for Computational Studies of Intelligence / Société Canadienne pour l'Étude de l'Intelligence par Ordinateur

Table of Contents

Agents
A Principled Modular Approach to Construct Flexible Conversation Protocols (Roberto A. Flores, Robert C. Kremer)
Balancing Robotic Teleoperation and Autonomy for Urban Search and Rescue Environments (Ryan Wegner, John Anderson) 16
Emotional Pathfinding (Toby Donaldson, Andrew Park, I-Ling Lin) 31

Natural Language
Combining Evidence in Cognate Identification (Grzegorz Kondrak) 44
Term-Based Clustering and Summarization of Web Page Collections (Yongzheng Zhang, Nur Zincir-Heywood, Evangelos Milios) 60
The Frequency of Hedging Cues in Citation Contexts in Scientific Writing (Robert E. Mercer, Chrysanne Di Marco, Frederick W. Kroon) 75

Learning
Finding Interesting Summaries in GenSpace Graphs Efficiently (Liqiang Geng, Howard J. Hamilton) 89
Naïve Bayes with Higher Order Attributes (Bernard Rosell, Lisa Hellerstein) 105
Preliminary Study of Attention Control Modeling in Complex Skill Training Environments (Heejin Lim, John Yen) 120
The Reconstruction of the Interleaved Sessions from a Server Log (John Zhong Lei, Ali Ghorbani) 133
On Customizing Evolutionary Learning of Agent Behavior (Jörg Denzinger, Alvin Schur) 146

Extending Montague Semantics for Use in Natural-Language Database-Query Processing

Maxim Roy and Richard Frost
University of Windsor, 401 Sunset Ave., Windsor, Ontario N9B 3P4
{royd, Richard}@uwindsor.ca

Montague's semantics has been used in the past for constructing natural-language processors in higher-order functional languages. This paper describes the work done and progress so far in extending a Montague-like compositional semantics, used in constructing natural-language processors, to accommodate n-ary transitive verbs for n > 2.

In the early seventies, Richard Montague [5] developed an approach to the interpretation of natural language in which he claimed that we could precisely define syntax and semantics for natural languages such as English. Montague was one of the first to develop a compositional semantics for a substantial part of English. By compositional semantics we mean that the meaning of a compound sentence is determined by the meanings of its constituents and the way they are put together to form sentences. Montague's approach is suitable for building natural-language query processors as it is highly orthogonal. However, it is necessary to extend Montague Semantics in order to build natural-language interfaces that can accommodate more complex language structures.

Higher-order functional-programming languages have been used in the past for constructing natural-language processors based on Montague Semantics. Frost and Launchbury [2] illustrated the ease with which a set-theoretic version of a small first-order subset of Montague Semantics can be implemented in this way.
Lapalme and Lavier [4] showed how a large part of Montague Semantics could be implemented in a pure higher-order functional programming language. Frost and Boulos [3] implemented a compositional semantics, based on a set-theoretic version of Montague Semantics, for arbitrarily-nested quantified and negative phrases. The approach is based on an extended set theory in which 'negative' phrases that denote infinite sets are represented in complement form.

A comprehensive survey of research on compositional semantics for natural-language database queries has identified little work on compositional semantics for natural-language queries that include 3-or-more-place transitive verbs. Currently, in Frost's [3] approach, 2-place transitive verbs are defined as follows (in a simple semantics that does not accommodate negation):

    verb p = [x | (x, image_x) ← collect verb_rel; p image_x]

Here transitive verbs do not denote relations directly; rather, a transitive verb is implemented as a function which takes a predicate on sets as argument. Applying this function to a particular predicate returns as result a set of entities: an entity is in the result set if the predicate is true of the entity's image under the associated relation. For example:

    orbits mars = [x | (x, image_x) ← collect orbit_rel; mars image_x]

    mars s = True,  if "mars" is_a_member_of s
           = False, otherwise

    orbit_rel = [("luna","earth"), ("phobos","mars"), ("deimos","mars"),
                 ("earth","sol"), ("mars","sol")]

The definition of orbits uses a programming construct called a list comprehension. The general form of a list comprehension is [body | qualifiers], where each qualifier is either a generator, of the form var ← exp, or a filter, i.e. a Boolean expression used to restrict the range of the variables introduced by the generators. The collect function used in the above definition of transitive verbs is as follows:

    collect []        = []
    collect ((x,y):t) = (x, y : [e2 | (e1,e2) ← t; e1 = x]) :
                        collect [(e1,e2) | (e1,e2) ← t; e1 ~= x]

By applying collect to the relation orbit_rel, the following is obtained:

    collect orbit_rel = [("luna",["earth"]), ("phobos",["mars"]),
                         ("deimos",["mars"]), ("earth", ... etc.

So, the final result will be as follows:

    orbits mars = [x | (x, image_x) ← [("luna",["earth"]), ("phobos",["mars"]), etc. ;
                   mars image_x]
                => ["phobos", "deimos"]

In order to extend Montague Semantics to handle n-place transitive verbs (n > 2), we can modify the definition of transitive verbs as follows:

    verb p t = [x | (x, image_1_x, image_2_x) ← new_collect verb;
                p image_1_x; t image_2_x]

This will allow us to accommodate phrases such as "discovered phobos in 1873":

    discover_rel = [("hall","phobos",1873), ("galileo","europa",1820), etc.

Currently we are developing an appropriate definition for new_collect in order to handle n-ary transitive verbs; one possible shape for such a definition is sketched below.
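As an illustration only, here is a small executable Python analogue of collect, together with one possible new_collect for ternary relations. The translation and the grouping chosen for new_collect are our assumptions, not the paper's final definition:

    # Sketch: Python analogues of the Miranda-style collect functions above.
    from collections import defaultdict

    def collect(rel):
        # Group a binary relation [(x, y), ...] into [(x, [y1, y2, ...]), ...].
        images = defaultdict(list)
        for x, y in rel:
            images[x].append(y)
        return list(images.items())

    def new_collect(rel3):
        # One assumed grouping for a ternary relation [(x, y, z), ...]:
        # pair each x with its images under the 2nd and 3rd argument positions.
        images = defaultdict(lambda: ([], []))
        for x, y, z in rel3:
            images[x][0].append(y)
            images[x][1].append(z)
        return [(x, ys, zs) for x, (ys, zs) in images.items()]

    orbit_rel = [("luna", "earth"), ("phobos", "mars"), ("deimos", "mars"),
                 ("earth", "sol"), ("mars", "sol")]

    def orbits(pred):
        # Entities whose image under orbit_rel satisfies the predicate.
        return [x for x, image_x in collect(orbit_rel) if pred(image_x)]

    mars = lambda s: "mars" in s
    print(orbits(mars))  # ['phobos', 'deimos']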
Acknowledgements. The authors acknowledge the support of NSERC.

References
1. Dowty, D. R., Wall, R. E., Peters, S. (1981) Introduction to Montague Semantics. D. Reidel Publishing Company, Dordrecht, Boston, Lancaster, Tokyo.
2. Frost, R. A., Launchbury, E. J. S. (1989) Constructing natural language interpreters in a lazy functional language. The Computer Journal, special edition on lazy functional programming, 32(2), 108-121.
3. Frost, R. A., Boulos, P. (2002) An efficient compositional semantics for natural-language database queries with arbitrarily-nested quantification and negation. 15th Conf. of the Canadian Society for Computational Studies of Intelligence, AI 2002, pp. 252-268.
4. Lapalme, G., Lavier, F. (1990) Using a functional language for parsing and semantic processing. Publication 715a, Département d'informatique et recherche opérationnelle, Université de Montréal.
5. Montague, R. (1974) In: Formal Philosophy: Selected Papers of Richard Montague, edited by R. H. Thomason. Yale University Press, New Haven, CT.

An Investigation of Grammar Design in Natural-Language Speech Recognition

Yue Shi and Richard Frost
School of Computer Science, University of Windsor, 401 Sunset Ave., Windsor, ON N9B 3P4
{shic, richard}@uwindsor.ca

Accuracy and robustness are two competing objectives in natural-language speech recognition, and achieving both is a significant challenge for speech-recognition researchers. Stochastic (statistical) techniques and grammar-based techniques are the two main types of speech-recognition technology. A Statistical Language Model (SLM) is simply a probability distribution over all possible sentences or word combinations, whereas grammar-based techniques use grammars to guide the speech recognizer in a particular application. Statistical language models were popular around 1995, whereas grammar-based language models took the pre-eminent position in commercial products by 2001.

In order to investigate the effect of grammar design on accuracy and robustness, we conducted an experiment using three types of recognition grammar with a commercial off-the-shelf VXML speech browser:

1) A simple word-sequence grammar defining all sequences of words from the dictionary up to length ten. For example:

    <utterance> = <word> | <word> <utterance>;   (to length 10)

2) A syntactic grammar consisting of rules defining all syntactically-correct expressions in the input language. For example:

    <query> = (which) <nounphrase> <verbphrase>;

3) A semantic grammar. Based on the observation that some syntactically correct utterances may be semantically wrong, some semantic constraints can be encoded directly in the syntax to exclude utterances such as "which man orbits mars?". A sample semantic grammar, which requires that an animate (/inanimate) noun phrase be followed by an animate (/inanimate) verb phrase, is as follows:

    <query> = (which) <animate-nounphrase> <animate-verbphrase>
            | (which) <inanimate-nounphrase> <inanimate-verbphrase>;

The following table shows the size and branching factor of the language defined by each of the three grammars used in our experiment:

    Grammar          Language size   Branching factor
    Word sequence    2.40 * 10^27    547
    Syntactic        8.17 * 10^15    267
    Semantic         5.55 * 10^12    96
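As a quick plausibility check on these figures (our own sketch, reading the branching factor of the word-sequence grammar as the dictionary size):

    # Sketch: a word-sequence grammar over a dictionary of b words, allowing
    # utterances up to length 10, defines b + b^2 + ... + b^10 sentences.
    b = 547  # reported branching factor of the word-sequence grammar
    size = sum(b ** n for n in range(1, 11))
    print(f"{size:.2e}")  # ~2.4e+27, consistent with the reported 2.40 * 10^27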
Two users were involved in the experiment: a male native English speaker and a female non-native English speaker. Three sets of testing utterances were used: (1) a semantic set, which included queries that are both semantically and syntactically correct; (2) a syntax set, which included queries that are only syntactically correct but semantically incorrect; and (3) a word-sequence set, which covered only word sequences that were neither semantically nor syntactically correct. Recognition results were recorded as "correct recognition", "incorrect recognition" (misrecognition), and "not recognized" at all. Each of the two users spoke several hundred utterances from the three sets using the three different recognition grammars. The results of the experiment are summarized below:

    Query types, in order: semantically correct; syntactically correct only; neither.
    Grammars within each query type, in order: word sequence, syntactic, semantic.
    Correctly recognized (%):   12, 66, 75, 65, 15, 0
    Incorrectly recognized (%): 60, 14, 44, 22, 56, 30, 10
    Not recognized (%):         28, 20, 21, 48, 30, 78, 29, 70, 90

The results are much as expected. However, there were three unexpected results:

1) The recognition rates for the syntactic and semantic grammars were unexpectedly high given the sizes of the languages, which is a good indication of the value of grammar design in commercial speech-recognition products.
2) The correct-recognition rate did not improve significantly for the semantic grammar with semantic queries compared to the syntactic grammar, but the misrecognition rate went down significantly, which is a good indicator of the value of semantic constraints.
3) The word-sequence grammar was better at spotting words than expected, given its very high branching factor, which is a good indicator for its inclusion in a future "combined grammar" approach.

These experimental results confirm that recognition accuracy can be improved by encoding semantic constraints in the syntax rules of grammar-driven speech recognition. However, the results also indicate that grammar design is a complex process that must take into account application-specific requirements for accuracy and robustness. Also, the results suggest that it is worth investigating the use of combined "weighted" recognition grammars: the most-constrained semantic grammar could be applied first (having the highest weight) and, if it fails to recognize the input, the syntactic grammar is tried, followed by the word-sequence grammar if the syntactic grammar fails; one possible cascade is sketched below.
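A minimal sketch of that cascade, assuming each grammar is wrapped as a recognizer callable that returns None on failure (this interface is our assumption, not the VXML browser's API):

    # Sketch: weighted-cascade recognition, most-constrained grammar first.
    from typing import Callable, Optional

    Recognizer = Callable[[str], Optional[str]]

    def cascade_recognize(utterance: str, graded: list[tuple[Recognizer, float]]):
        # graded: (recognizer, weight) pairs ordered semantic, syntactic, word-sequence.
        for recognize, weight in graded:
            result = recognize(utterance)
            if result is not None:
                return result, weight  # first (most constrained) grammar that succeeds
        return None, 0.0               # not recognized by any grammar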
This is the focus of future research. This work was supported by a discovery grant from the Natural Sciences and Engineering Research Council of Canada.

Genetic Algorithm Based OSPF Network Routing Using LEDA

Lenny Tang, Kay Wiese, and Vive Kumar
InfoNet Media Center, Simon Fraser University, Surrey, Canada, V3T 2W1
lctang@sfu.ca, wiese@sfu.ca, vive@sfu.ca

As a predominant technique for estimating the cause and effect of Quality of Service (QoS) criteria in computer networks, simulations provide insight into how to most efficiently configure protocols to maximize usage and to estimate the criteria for acceptable performance of network applications. This paper investigates the simulation of Genetic Algorithm-based network routing using the Open Shortest Path First (OSPF) protocol. The simulator is implemented using LEDA (Library of Efficient Data types and Algorithms) [1], and the applicability of the library is examined in the context of Genetic Algorithms.

OSPF is the most widely used routing protocol. OSPF's popularity comes from its scalability, efficiency, and self-sufficient nature. In supporting hierarchies and autonomous areas, OSPF allows networks to grow and still be easily maintainable. By allowing administrators to set the cost value for a network link between two routers, OSPF can intelligently route packets along links the administrators deem to be better. By broadcasting Link-State Advertisements (LSAs), routers can be added and removed, and OSPF regenerates the routing tables automatically to reflect the changes in the network.

OSPF was designed specifically to use Dijkstra's Shortest Path First algorithm. Dijkstra's algorithm determines the shortest routes to all of the routers in the area using the LSA database. By adjusting the costs (weights) of the links, one can alter the routes that Dijkstra's algorithm will calculate. In an attempt to optimize the costs associated with the links in OSPF, Fortz and Thorup [2] proposed a local search heuristic, HeurOSPF, to set the weights for a simulated network demand set. The results achieved by HeurOSPF compared quite well to other heuristics (e.g., InvCapOSPF, UnitOSPF, RandomOSPF, and L2OSPF), and came relatively close to the optimal solution. Expanding on HeurOSPF, GAOSPF [4] uses a genetic algorithm to estimate the candidate cost sets. Using genetic algorithms in this domain is a relatively new trend, necessitated by the inherent nature of certain protocols that deal with NP-hard problems; indeed, it has been shown that weight setting for a given demand set is NP-hard [2].

Weight setting is the process of applying a cost value to the links in a network. In the case of OSPF, once these costs are applied, Dijkstra's algorithm can be used at each router to determine the shortest path with the lowest total cost to each of the routers in the OSPF area. In GAOSPF, the chromosome represents the links in a network; in other words, each gene in the chromosome corresponds to a specific link in the network. The fitness for a chromosome is calculated by first applying Dijkstra's algorithm at each router, using the chromosome for the costs, so that a next-hop routing table for each node can be formed. Then, simulated demands on the network are "routed" according to the routing tables just created, and the bandwidth usage is recorded for each link. By adjusting the weights in OSPF we are able to change the routing tables generated by Dijkstra's algorithm. In the simulation, the load demands to be routed are predefined. GAOSPF routes these demands on the network for each member of the population. The fitness function calculates a fitness value for the chromosomes in the population and defines the traits that GAOSPF is trying to groom. The chromosomes are then ranked and mated. This process is repeated for a number of generations in the hope that the chromosomes become fitter; this evaluate-and-evolve pipeline is sketched below.
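An illustrative Python approximation of the chromosome-to-fitness computation; the data model (a chromosome as a dict from directed link to cost) and the load-based penalty are our assumptions, since the paper does not fix them:

    # Sketch: evaluate a GAOSPF chromosome against a set of (src, dst, load) demands.
    import heapq
    from collections import defaultdict

    def shortest_path_preds(nodes, adj, costs, source):
        # Dijkstra from source; returns a predecessor map (cf. LEDA's node array).
        dist = {n: float("inf") for n in nodes}
        pred, dist[source] = {}, 0.0
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v in adj[u]:
                nd = d + costs[(u, v)]
                if nd < dist[v]:
                    dist[v], pred[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        return pred

    def fitness(nodes, adj, chromosome, demands):
        # Route each demand along the chromosome-induced shortest paths and
        # penalize the worst-loaded link (one plausible fitness; others exist).
        usage = defaultdict(float)
        for src, dst, load in demands:
            pred = shortest_path_preds(nodes, adj, chromosome, src)
            v = dst
            while v != src:            # walk the shortest-path tree back to src
                u = pred[v]            # assumes dst is reachable from src
                usage[(u, v)] += load
                v = u
        return -max(usage.values(), default=0.0)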
Network routing can be represented as a weighted graph problem: a network consists of routers that are connected by links. LEDA provides a convenient way to assign values to nodes and edges in a graph, and it has a group of graph algorithms ready for use with its data types. Our implementation of GAOSPF used LEDA data structures and Dijkstra's algorithm to implement a GA-based network routing simulator for the OSPF protocol. The five main data types we used were graphs, nodes, edges, node arrays, and edge arrays. The graphs, nodes, and edges represent the network topology, routers, and links respectively. Using these data structures from LEDA, we implemented GAOSPF by building the required functions, such as the crossover function, the routing-table function, and the fitness function.

Of particular importance to the OSPF protocol is Dijkstra's algorithm. Given an edge array storing the weights for a graph and a source node, Dijkstra's algorithm returns a predecessor node array. The predecessor node array is similar to a routing table in that it contains an entry for each node. Edge arrays are convenient data types to represent chromosomes in GAOSPF, and, as exemplified in the prototype, it is possible to use edge arrays in other GA problem domains.

One of the promising extensions of our research is to examine the merits of applying different crossover and recombination techniques to improve the results produced by GAOSPF. Once a set of crossovers, recombination techniques, and parameters has been found to yield near-optimal solutions, a study can be conducted on the routing-table changes for different demands, to predict changes for a given set of weights in OSPF. This can be used to regulate network routing so that it adapts to different types of demands on the network.

References
1. The Library of Efficient Data types and Algorithms (LEDA). Algorithmic Solutions Software GmbH. Available from: http://www.algorithmic-solutions.com
2. B. Fortz and M. Thorup. "Internet traffic engineering by optimizing OSPF weights", in Proc. IEEE INFOCOM (2000) 519-528.
3. K. G. Coffman and A. M. Odlyzko. "Internet growth: Is there a 'Moore's Law' for data traffic?", in J. Abello, P. M. Pardalos and M. G. C. Resende (Eds.), Kluwer Academic Publishers (2001) 47-93.
4. M. Ericsson, M. G. C. Resende and P. M. Pardalos. "A genetic algorithm for the weight setting problem in OSPF routing". Journal of Combinatorial Optimization, Vol. (3), Kluwer Academic Publishers (2002) 299-333.

A Multi-agent System for Semantic Information Retrieval

Yingge Wang and Elhadi Shakshuki
Jodrey School of Computer Science, Acadia University, Nova Scotia, B4P 2R6
{050244w, elhadi.shakshuki}@acadiau.ca

Abstract. This paper presents an ontology-based multi-agent information retrieval system architecture. The aim of this system is to help Web users gather the right information based on interpretation of the Web at the semantic level. The system consists of different types of agents that work together to find and process information and disseminate it to Web users in semantically encoded form. Interface agents interact with users, receive users' queries, and display results. Semantic agents extract the semantics of information retrieved from Web documents, and provide the knowledge in ontologies to other agents. Service agents manage the ontology-knowledge base. Resource agents locate and retrieve documents from distributed information resources. To demonstrate the facility of the proposed system, a prototype is being developed using Java, the Zeus toolkit [5], and Protégé-2000 [6].

Introduction. The process of finding and searching for desired information in the huge body of the Web has become a more time-consuming task than ever. Today, Web users face the challenge of locating, filtering, and organizing ever-changing and ever-increasing information from distributed heterogeneous resources. A "one-for-all" search engine cannot satisfy millions of people, each with different backgrounds, knowledge, and needs. Software agents exhibit a dynamic and flexible character, including autonomy, proactivity, adaptation, and social ability [1]. In this work, we propose an ontology-based multi-agent information retrieval system that allows users to submit queries and get the desired results through a graphical user interface. The agents of the system communicate with each other using a KQML-like language [2]; an illustrative message is shown after the architecture description below. The system allows users to provide feedback to improve the quality of the search. When the system receives a query, it parses and processes the query to provide the relevant information, utilizing Web resources. The results are based on the semantics of the information and an understanding of the meaning of the user's request and feedback. An ontology-knowledge base is used as a virtual representation of the user's knowledge. The proposed system incorporates several techniques, including semantic extrication, topic extraction, and ontology comparison.
System Architecture. The main objective of this work is to develop an ontology-based multi-agent information retrieval system which can interpret the Web at the semantic level and retrieve information based on the users' interests. Figure 1-a shows the architecture of the proposed system. The interface agent allows the user to interact with the environment through the use of natural language. It provides a graphical user interface (GUI) for the user to submit queries with the desired constraints, to provide feedback, and to display results. The semantic agent's main responsibility is to interpret concepts, relations, and information using a semantic wrapper [3], and to send results to the service agent. The semantic agent ensures a mutual understanding between the user and the Web on the semantic level; information extraction and well-defined concepts are its fundamental components. The service agent acts as a service provider that pairs semantic agents according to users' interests. When the service agent gets results back from the semantic agent, it compares these results in terms of ontology and the relevance of the retrieved document; by doing so, it attempts to find documents that have the potential to answer the user's query. The service agent is the central point for all agents' activities and information processes, because it maintains the ontology-knowledge base, which stores document-relevancy data and website evaluation history. The ontology-knowledge base contains terminological knowledge exploited by the service agent and the semantic agent; this knowledge structure is provided by WordNet® [4], an online lexical reference system. The resource agent acts as a search tool that provides access to a collection of information sources; it traverses the Web to its designated location. When the resource agents identify a document that contains the required topic or synonyms, they pass the URL to the semantic agent to examine whether it fulfills the user's predefined search constraints, such as published date, file format, written language, file size, etc. URLs that do not satisfy the search constraints are removed from the result list, and the rest are passed on to the service agent. An example of the interface is shown in Figure 1-b.

Fig. 1. (a) System architecture and (b) an example of an interaction window.
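For illustration, here is a KQML-style query such as the agents above might exchange, written as a Python dict. The performative, field names, and content syntax are hypothetical, since the paper gives no concrete messages:

    # Hypothetical KQML-like message from an interface agent to the service agent.
    query_message = {
        "performative": "ask-one",
        "sender": "interface-agent-1",
        "receiver": "service-agent",
        "ontology": "wordnet-documents",  # assumed ontology name
        "content": '(retrieve (topic "semantic web") (format "pdf") (before 2004))',
    }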
References
[1] Genesereth, M. R. and Ketchpel, S. P., "Software Agents", Communications of the ACM, Vol. 37(7), pp. 48-53, 1994.
[2] Finin, T., Labrou, Y. and Mayfield, J., "KQML as an Agent Communication Language", in Bradshaw, J. M. (Ed.), Software Agents, Cambridge, MA: AAAI/MIT Press, pp. 291-316, 1997.
[3] Arjona, J. L., Corchuelo, R., Ruiz, A. and Toro, M., "A Knowledge Extrication Process Specification", in Proceedings of the IEEE/WIC International Conference on Web Intelligence, pp. 61-67, 2003.
[4] WordNet®, http://www.cogsci.princeton.edu/~wn/
[5] Zeus toolkit, http://www.btexact.com/projects/agents/zeus/
[6] Protégé-2000, http://protege.stanford.edu/index.html

Decision Mining with User Preference

Hong Yao
Department of Computer Science, University of Regina, SK, Canada, S4S 0A2
yao2hong@cs.uregina.ca

Statement of Problem. We are drowning in information, but starving for knowledge. It is necessary to extract interesting knowledge from large bodies of raw data. Traditional data mining approaches discover knowledge based on statistical significance, such as frequency of occurrence, which leads to a large number of highly frequent rules being generated. It is hard work for users to manually pick out the rules they are really interested in, so it is necessary to prune and summarize discovered rules. Most importantly, different users have different interests in the same knowledge, and it is difficult to measure and explain the significance of discovered knowledge without user preference. For example, the two rules Perfume → Lipstick and Perfume → Diamond may suggest different potential profits to a sales manager, although both are frequent rules. This motivates us to allow users to express their preferences in the knowledge mining process, because only users know exactly what they demand. User preference should be an important utility for measuring knowledge. Furthermore, our proposed approach emphasizes that the discovered knowledge should have the ability to suggest profitable actions to users according to the user preference, because discovered knowledge is useful only if it can be used in the decision-making process of the users to increase utility. Thus, our proposed approach discovers high-utility knowledge with regard to user preference, and decisions are made based on this high-utility knowledge. The decisions that can increase user utility are recommended to users; the quality of the decisions is guaranteed by their utility.

Principle of Decision Mining with User Preference. Our proposed approach emphasizes the significance of user preference to decision making. The proposed decision mining model consists of a probability mining model and a utility mining model. The probability mining model associates each candidate itemset with a probability: the percentage of all transactions that contain the itemset. The probability of an itemset can be mined directly from the transaction table using the traditional Apriori algorithm. The utility mining model quantifies each candidate itemset with a utility: the sum of the utilities of its items. In this paper, the user preference is regarded as a utility and expressed by an associated utility table; the value of a utility reflects the impact of user preference on the items. It can be mined by incorporating the utility table with the transaction table. The possible action consequences are quantified by combining the probability and utility of an itemset; thus, each piece of discovered knowledge is assigned a utility. Decisions are made on the basis of the maximum expected utility of discovered knowledge. In other words, decision knowledge is the knowledge with the highest expected utility based on the probability and utility of an itemset.
The advantage of decision mining with user preference is that the goal of data mining can be guided by the user preference: the user's objectives can be maximized, and the most interesting knowledge, which has the highest utility, will be recommended to users. Our approach consists of four parts. First, the user preference is formally defined as a utility table: if a user prefers item a to item b, the utility of a is higher than that of b. Next, the utility table is pushed inside the mining process: the probability and utility of an itemset can be mined simultaneously by integrating the utility table and the transaction table. In order to reduce the search space and prune uninteresting itemsets, the potential mathematical properties of utility are analyzed, and the utility boundary property and probability boundary property of itemsets are demonstrated; a heuristic approach is proposed to calculate the expected utility of itemsets, and only itemsets with high expected utility are kept. Third, the detailed algorithm is designed according to the theoretical model: by applying these properties, an upper bound on the utility of a k-itemset can be calculated by analyzing the utilities of the already discovered (k-1)-itemsets. Finally, the system is implemented, and the effectiveness of the approach is examined by applying it to large synthetic and real-world databases. A toy version of the expected-utility computation is sketched below.
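As an illustration, here is a minimal sketch of one reading of this model, where the expected utility of an itemset is its support probability times the summed item utilities; the tiny transaction and utility tables are invented for the example:

    # Sketch: expected utility = support(itemset) * sum of item utilities (assumed model).
    from itertools import combinations

    transactions = [{"perfume", "lipstick"}, {"perfume", "diamond"},
                    {"perfume", "lipstick"}, {"lipstick"}]
    utility = {"perfume": 2.0, "lipstick": 1.0, "diamond": 15.0}  # user preference

    def expected_utility(itemset):
        support = sum(itemset <= t for t in transactions) / len(transactions)
        return support * sum(utility[i] for i in itemset)

    # Rank 2-itemsets; the highest expected utility becomes decision knowledge.
    for s in sorted(map(set, combinations(utility, 2)),
                    key=expected_utility, reverse=True):
        print(sorted(s), round(expected_utility(s), 3))
    # {diamond, perfume} outranks the more frequent {lipstick, perfume}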
Experiment Results. We used a variety of synthetic datasets generated by the IBM synthetic data generator. We report our results for a single synthetic dataset of one million records with 23,840 items: 39 pieces of highest-utility knowledge were discovered at threshold α = 0.10. There are about eight million records in our real-world database, whose size is about 651 MB. The transaction table represents the purchase of 2,238 unique items by about 428,665 customers: 18 pieces of highest-utility knowledge were discovered at threshold α = 0.10.

Conclusion and Potential Applications. Discovered knowledge is considered useful or interesting only if it conforms to the user preference, so the usually ignored user preference should play an important role in knowledge discovery. The solution of this problem will contribute to many applications, such as recommender systems, web mining, information retrieval, and collaborative filtering, because utility exists in such applications as well. For example, if the goal of a supermarket manager is to maximize profit, it is not enough to make decisions based only on the transaction database: profit is a utility, and it is necessary to incorporate user preference into the mining process.

Coarsening Classification Rules on Basis of Granular Computing

Yan Zhao
Department of Computer Science, University of Regina, Regina, Saskatchewan, Canada S4S 0A2
yanzhao@cs.uregina.ca

Problem Statement. The task of classification, as a well-studied field of machine learning and data mining, has two main purposes: describing the classification rules in the training dataset and predicting unseen new instances in the testing dataset. Normally, one expects a high accuracy for precisely descriptive use. However, a higher accuracy tends to generate longer rules, which are not easy to understand and may overfit the training instances. This motivates us to propose the approach of coarsening the classification rules, namely, reasonably sacrificing the accuracy of description to a controlled degree in order to improve comprehensibility and predictability. The framework of granular computing provides a formal and systematic methodology for doing this. In this paper, a modified PRISM classification algorithm is presented based on the framework of granular computing.

The Formal Concepts of Granular Computing. The formal general granular computing model is summarized in [2]. Briefly, all the available information and knowledge is stored in an information table. The definable granules are the basic logic units for description and discussion. The refinement, or coarsening, relationship between two definable granules is a partial order, which is reflexive, antisymmetric, and transitive. For the task of classification, we are only interested in conjunctively definable granules, meaning that more than one definable granule is conjunctively connected. Partition and covering are two commonly used granulations of the universe. One can obtain a more refined partition by further dividing the equivalence classes of a partition; similarly, one can obtain a more refined covering by further decomposing a granule of a covering. This naturally defines a refinement (coarsening) order over the partition lattice and the covering lattice. Approaches for coarsening decision trees include pre-pruning methods and post-pruning methods. For pre-pruning methods, the stopping criterion is critical to classification performance: too low a threshold can terminate division too soon, before the benefits of subsequent splits become evident, while too high a threshold results in little simplification. Post-pruning methods engage in a nontrivial post-process after the complete tree has been generated; pre-pruning methods are more time-efficient than post-pruning methods. Approaches for coarsening decision rules include only pre-pruning methods, which have the same advantages and disadvantages as when they are used for decision trees.

The Modified-PRISM Algorithm. The PRISM algorithm was proposed as an algorithm for inducing modular rules [1] under very restrictive assumptions. We modify the PRISM algorithm to get a set of coarsened classification rules by using a pre-pruning method. For each value d of the decision attribute:

1. Let χ = "".
2. Select the attribute-value pair αx for which p(d|αx) is the maximum, and let χ = χ ∧ αx.
3. Create a subset of the training set comprising all the instances which contain the selected αx.
4. Repeat Steps 2 and 3 until p(d|χ) reaches the threshold, or no more subsets can be extracted. If threshold = 1:
   4.1: Remove the d-labeled instances covered by χ from the training set.
   4.2: Repeat Steps 1-4.1 until all d-labeled instances have been removed.
   Else (for coarsened rules):
   4.1': Remove the attribute-value pairs used for χ from consideration.
   4.2': Repeat Steps 1-4.1' until no more attribute-value pairs are left.

In Step 2, if there is only one αx for which p(d|αx) is the maximum, then αx is of course selected. If more than one αx is maximal at the same time, then the one that, conjoined with the previous χ, covers the larger number of instances is selected. When the accuracy threshold is one, the original PRISM algorithm is applied (Steps 4.1 and 4.2). When the accuracy threshold is less than one, we cannot simply remove the d-labeled instances from the training set; instead, we remove the "used" attribute-value pairs from consideration. This greedy method is stated in Steps 4.1' and 4.2'. A compact sketch of growing one such rule appears below.
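For concreteness, a small Python sketch of the rule-growing loop (Steps 1-4); the instance encoding (dicts with a "class" key), the non-empty training set, and the tie-breaking details are our assumptions:

    # Sketch: grow one coarsened PRISM rule for class d with accuracy threshold t.
    def prism_rule(instances, d, t):
        rule, covered = [], list(instances)
        while True:
            p = sum(i["class"] == d for i in covered) / len(covered)
            if p >= t:                       # Step 4: threshold reached
                return rule, p
            pairs = {(a, v) for i in covered for a, v in i.items()
                     if a != "class"} - set(rule)
            if not pairs:                    # no more subsets can be extracted
                return rule, p
            def score(av):                   # p(d | alpha), ties broken by coverage
                a, v = av
                sub = [i for i in covered if i.get(a) == v]
                return (sum(i["class"] == d for i in sub) / len(sub), len(sub))
            best = max(pairs, key=score)     # Step 2: best attribute-value pair
            rule.append(best)
            covered = [i for i in covered if i.get(best[0]) == best[1]]  # Step 3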
Conclusion. The purposes of coarsening classification rules are a better understanding of the classification rules and a higher accuracy of prediction. By putting classification in the granular computing framework, we can study the problem formally and systematically. We modify the existing PRISM algorithm to coarsen the classification rules.

References
1. Cendrowska, J. PRISM: An algorithm for inducing modular rules. International Journal of Man-Machine Studies, 27, 349-370, 1987.
2. Yao, Y. Y. and Yao, J. T. Granular computing as a basis for consistent classification problems. Proceedings of PAKDD'02, 101-106, 2002.
