Advances in Artificial Intelligence, Howard J. Hamilton, 2000


Lecture Notes in Artificial Intelligence 1822
Subseries of Lecture Notes in Computer Science
Edited by J. G. Carbonell and J. Siekmann

Lecture Notes in Computer Science
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen

Springer: Berlin, Heidelberg, New York, Barcelona, Hong Kong, London, Milan, Paris, Singapore, Tokyo

Howard J. Hamilton (Ed.)

Advances in Artificial Intelligence
13th Biennial Conference of the Canadian Society for Computational Studies of Intelligence, AI 2000
Montréal, Quebec, Canada, May 14-17, 2000
Proceedings

Series Editors:
Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA
Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editor:
Howard J. Hamilton
University of Regina, Department of Computer Science
Regina, SK S4S 0A2, Canada
E-mail: hamilton@cs.uregina.ca

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Advances in artificial intelligence : proceedings / AI 2000, Montréal, Quebec, Canada, May 14-17, 2000. Howard J. Hamilton. Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Singapore ; Tokyo : Springer, 2000. (Biennial conference of the Canadian Society for Computational Studies of Intelligence ; 13) (Lecture notes in computer science ; Vol. 1822 : Lecture notes in artificial intelligence)
ISBN 3-540-67557-4

CR Subject Classification (1998): I.2
ISSN 0302-9743
ISBN 3-540-67557-4 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag is a company in the BertelsmannSpringer publishing group.
© Springer-Verlag Berlin Heidelberg 2000
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Boller Mediendesign
Printed on acid-free paper
SPIN 10721080 06/3142 5 4 3 2 1 0

Preface

AI 2000 was the 13th in the series of biennial Artificial Intelligence conferences sponsored by the Canadian Society for Computational Studies of Intelligence / Société canadienne pour l'étude de l'intelligence par ordinateur. For the first time, the conference was held in conjunction with four other conferences. Two of these were the annual conferences of two other Canadian societies, Graphics Interface (GI 2000) and Vision Interface (VI 2000), with which this conference has been associated in recent years. The other two conferences were the International Symposium on Robotics (ISR 2000) and the Institute for Robotics and Intelligent Systems conference (IRIS 2000). It is our hope that the overall experience will be enriched by this conjunction of conferences.

The Canadian AI conference has a 25 year tradition of attracting Canadian and international papers of high quality from a variety of AI research areas. All papers submitted to the conference received three independent reviews. Approximately one third were accepted for plenary presentation at the conference. A journal version of the best paper of the conference will be invited to appear in Computational Intelligence.

The conference attracted submissions from six continents, and
this diversity is represented in these proceedings. The overall framework is similar to that of the last conference, AI'98. Plenary presentations were given for 25 papers, organized into sessions based on topics. Poster presentations were given for an additional 13 papers. A highlight of the conference continues to be the invited speakers. Three speakers, Eugene Charniak, Eric Horvitz, and Jan Żytkow, were our guests this year.

Many people contributed to the success of this conference. The members of the program committee coordinated the refereeing of all submitted papers. They also made several recommendations that contributed to other aspects of the program. The referees provided reviews of the submitted technical papers; their efforts were irreplaceable in ensuring the quality of the accepted papers. Our thanks also go to those who organized various conference events and helped with other conference matters, especially Heather Caldwell, Helene Lamadeleine, and Bob Mercer. We also acknowledge the help we received from Alfred Hofmann and others at Springer-Verlag. Lastly, we are pleased to thank all participants. You are the ones who make all this effort worthwhile!

March 2000
Howard Hamilton and Qiang Yang
Program Co-chairs, AI 2000

Organization

AI 2000 was organized by the Canadian Society for Computational Studies of Intelligence / Société canadienne pour l'étude de l'intelligence par ordinateur.

Program Committee

General Chair (ISR): Paul Johnston, PRECARN Associates Inc.
Program Co-chairs: Howard J. Hamilton, University of Regina, and Qiang Yang, Simon Fraser University
Local Chair: Robert Mercer, University of Western Ontario
Workshops: Zakaria Maamar, Defence Research Establishment Valcartier, and Dr. Weiming Shen, National Research Council of Canada

Committee Members:

Irene Abi-Zeid, Defence Research Establishment Valcartier
Fahiem Bacchus, University of Toronto
Ken Barker, University of Ottawa
Sabine Bergler, Concordia University
Nick Cercone, University of Waterloo
David Chiu, University of Guelph
Jim Delgrande, Simon Fraser University
Chrysanne DiMarco, University of Waterloo
Renee Elio, University of Alberta
Brian Gaines, University of Calgary
Scott Goodwin, University of Regina
Ali Ghorbani, University of New Brunswick
Jim Greer, University of Saskatchewan
Russ Greiner, University of Alberta
Graeme Hirst, University of Toronto
Nathalie Japkowicz, Daltech / Dalhousie University
Luc Lamontagne, Defence Research Establishment Valcartier
Guy LaPalme, Université de Montréal
Dekang Lin, University of Manitoba
Charles Ling, University of Western Ontario
Alan Mackworth, University of British Columbia
Joel Martin, National Research Council
Paul McFetridge, Simon Fraser University
Stan Matwin, University of Ottawa
Gord McCalla, University of Saskatchewan
Robert Mercer, University of Western Ontario
Guy Mineau, University of Laval
Eric Neufeld, University of Saskatchewan
Peter Patel-Schneider, AT&T Research
Fred Popowich, Simon Fraser University
Jonathan Schaeffer, University of Alberta
Dale Schuurmans, University of Waterloo
Bruce Spencer, University of New Brunswick
Ahmed Tawfik, University of Prince Edward Island
André Trudel, Acadia University
Afzal Upal, Daltech / Dalhousie University
Peter van Beek, University of Alberta
Kay Wiese, University of British Columbia
Yang Xiang, University of Regina
Jianna Zhang, Brock University

Referees

Irene Abi-Zeid, Aijun An, Fahiem Bacchus, Ken Barker, Sabine Bergler, Nick Cercone, David Chiu, Jim Delgrande, Chrysanne DiMarco,
Eugene Eberbach, Renee Elio, Brian Gaines, Lev Goldfarb, Scott Goodwin, Ali Ghorbani, Jim Greer, Russ Greiner, Graeme Hirst, Markian Hlynka, Nathalie Japkowicz, Vlado Keselj, Luc Lamontagne, Guy LaPalme, Dekang Lin, Charles Ling, Alan Mackworth, Joel Martin, Paul McFetridge, Stan Matwin, Gord McCalla, Robert Mercer, Guy Mineau, Andrew Mironov, Eric Neufeld, Peter Patel-Schneider, Fred Popowich, Jonathan Schaeffer, Dale Schuurmans, Daniel Silver, Bruce Spencer, Ahmed Tawfik, Janine Toole, André Trudel, Afzal Upal, Peter van Beek, Kay Wiese, Yang Xiang, and Jianna Zhang.

Sponsoring Institutions

University of Regina
Simon Fraser University

Table of Contents

Games / Constraint Satisfaction

Unifying Single-Agent and Two-Player Search. J. Schaeffer and A. Plaat (University of Alberta). 1
Are Bees Better than Fruitflies? J. van Rijswijck (University of Alberta). 13
A Constraint Directed Model for Partial Constraint Satisfaction Problems. S. Nagarajan, S. Goodwin (University of Regina), and A. Sattar (Griffith University). 26

Natural Language I

Using Noun Phrase Heads to Extract Document Keyphrases. K. Barker and N. Cornacchia (University of Ottawa). 40
Expanding the Type Hierarchy with Nonlexical Concepts. C. Barrière (University of Ottawa) and F. Popowich (Simon Fraser University). 53
Using Object Influence Areas to Quantitatively Deal with Neighborhood and Perception in Route Descriptions. B. Moulin (Laval University), D. Kattani (Defense Research Establishment Valcartier and Laval University), B. Gauthier, and W. Chaker (Laval University). 69
An Extendable Natural Language Interface to a Consumer Service Database. P.P. Kubon, F. Popowich, and G. Tisher (Technical University of British Columbia). 82

Knowledge Representation

Identifying and Eliminating Irrelevant Instances Using Information Theory. M. Sebban and R. Nock (Universite des Antilles et de la Guyane). 90
Keep It Simple: A Case-Base Maintenance Policy Based on Clustering and Information Theory. Q. Yang (Simon Fraser University) and J. Wu (University of Waterloo). 102
On the Integration of Recursive ALN-Theories. A. Vitória and M. Mamede (Universidade Nova de Lisboa). 115

Natural Language II

Collocation Discovery for Optimal Bilingual Lexicon Development. S. McDonald (University of Edinburgh), D. Turcato, P. McFetridge, F. Popowich, and J. Toole (Simon Fraser University). 126
The Power of the TSNLP: Lessons from a Diagnostic Evaluation of a Broad-Coverage Parser. E. Scarlett and S. Szpakowicz (University of Ottawa). 138
A Parallel Approach to Unified Cognitive Modeling of Language Processing within a Visual Context. C. Hannon and D. Cook (University of Texas at Arlington). 151

AI Applications

Interact: A Staged Approach to Customer Service Automation. Y. Lallement and M.S. Fox (University of Toronto). 164
Towards Very Large Terminological Knowledge Bases: A Case Study from Medicine. U. Hahn and S. Schulz (Freiburg University). 176
The Use of Ontologies and Meta-knowledge to Facilitate the Sharing of Knowledge in a Multi-agent Personal Communication System. R. Liscano (Mitel Corporation), K. Baker, and J. Meech (National Research Council of Canada). 187

Machine Learning / Data Mining

ASERC – A Genetic Sequencing Operator for Asymmetric Permutation Problems. K.C. Wiese (University of British Columbia), S.D. Goodwin, and S. Nagarajan (University of Regina). 201
CViz: An Interactive Visualization System for Rule Induction. J. Han, A. An, and N. Cercone (University of Waterloo). 214
Learning Pseudo-independent Models: Analytical and Experimental Results. Y. Xiang (University of Massachusetts), X. Hu (Fulcrum Technologies Inc.), N.J.
Cercone (University of Waterloo), and H.J. Hamilton (University of Regina). 227

[...]

(from: The Learnability of Naive Bayes, by Huajie Zhang, Charles X. Ling, and Zhiduo Zhao, pp. 436-437)

Applying Theorem 1 to the m-of-n concept, we have the following corollary.

Corollary 1: The naive Bayes discriminant function satisfies $g_v(x) = g_{m\text{-}n}(x)$ on an arbitrary m-of-n concept if and only if $x \ge \max(m, \frac{a_0}{a})$ or $x < \min(m, \frac{a_0}{a})$, where $1 \le x \le n$ and $a$ and $a_0$ are given by Equation 6.

The corresponding probabilities can now be obtained explicitly (Domingos & Pazzani, 1997):

$$p(C=1) = \frac{\sum_{i=m}^{n} \binom{n}{i}}{2^n}, \qquad p(C=0) = \frac{\sum_{i=0}^{m-1} \binom{n}{i}}{2^n},$$

$$p = p(x_i = 1 \mid C = 1) = \frac{\sum_{i=m-1}^{n-1} \binom{n-1}{i}}{\sum_{i=m}^{n} \binom{n}{i}}, \qquad q = p(x_i = 1 \mid C = 0) = \frac{\sum_{i=0}^{m-2} \binom{n-1}{i}}{\sum_{i=0}^{m-1} \binom{n}{i}}.$$

Corollary 1 presents a necessary and sufficient condition for training examples of the m-of-n concept to be equivalent to the target function. The next theorem tells us exactly what kinds of m-of-n concepts are NBLS; that is, $g_v(x) = g_{m\text{-}n}(x)$ for all $x$.

Theorem 2: An m-of-n concept is NBLS if and only if its corresponding naive Bayes discriminant function satisfies the condition $\frac{a_0}{a} \in (m-1, m]$.

Proof: Let $N = \{1, 2, \ldots, n\}$, let $\Delta$ represent a condition, and let $S_N(\Delta)$ denote the set $\{y \mid y \in N \text{ and } y \text{ satisfies } \Delta\}$.

Suppose $\frac{a_0}{a} \in (m-1, m]$. Then $\max(m, \frac{a_0}{a}) = m$ and $\min(m, \frac{a_0}{a}) = \frac{a_0}{a}$, so

$$S_N\!\left(x \ge \max(m, \tfrac{a_0}{a})\right) = S_N(x \ge m) = \{m, m+1, \ldots, n\},$$
$$S_N\!\left(x < \min(m, \tfrac{a_0}{a})\right) = S_N(x < m) = \{m-1, m-2, \ldots, 1\},$$
$$S_N\!\left(x \ge \max(m, \tfrac{a_0}{a})\right) \cup S_N\!\left(x < \min(m, \tfrac{a_0}{a})\right) = N.$$

According to Corollary 1, for any $x \in N$, $g_v(x) = g_{m\text{-}n}(x)$; that is, the m-of-n concept is NBLS.

Conversely, suppose an m-of-n concept $\Omega$ is NBLS. Then for any $x \in N$, $g_v(x) = g_{m\text{-}n}(x)$. There are three cases, as below: (1) $m - \frac{a_0}{a} > 1$; (2) $m - \frac{a_0}{a} = 1$; [...]
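The closed-form quantities above make the result easy to probe numerically. The following sketch is not from the paper: it is a small Python check that, assuming the naive Bayes discriminant is the usual log-odds of the two classes, computes the parameters from the formulas above and compares the naive Bayes decision against the m-of-n target for every possible count x of 1-valued attributes. The function name and test cases are illustrative.

```python
# Hedged sketch (not from the paper): check whether naive Bayes reproduces
# an m-of-n concept, using the closed-form probabilities given above.
from math import comb, log

def nb_agrees_everywhere(m, n):
    """True iff the NB log-odds and the m-of-n target agree for all x."""
    pos = sum(comb(n, i) for i in range(m, n + 1))       # examples with C = 1
    neg = sum(comb(n, i) for i in range(0, m))           # examples with C = 0
    p_c1, p_c0 = pos / 2**n, neg / 2**n                  # class priors
    p = sum(comb(n - 1, i) for i in range(m - 1, n)) / pos   # p(x_i=1 | C=1)
    q = sum(comb(n - 1, i) for i in range(0, m - 1)) / neg   # p(x_i=1 | C=0)
    for x in range(n + 1):                               # x = number of 1-attributes
        g = (log(p_c1) - log(p_c0)                       # assumed NB log-odds
             + x * (log(p) - log(q))
             + (n - x) * (log(1 - p) - log(1 - q)))
        if (g >= 0) != (x >= m):                         # target positive iff x >= m
            return False
    return True

# Restricting to 2 <= m < n keeps p and q strictly inside (0, 1),
# so all the logarithms above are defined.
for m, n in [(2, 5), (3, 5), (2, 7), (3, 7), (4, 7)]:
    print(f"{m}-of-{n}: NBLS = {nb_agrees_everywhere(m, n)}")
```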
(from: Automated Discovery: A Fusion of Multidisciplinary Principles, by Jan M. Żytkow, pp. 444-448)

... or A2 = a2. An even simpler schema:

(2) if P1(A1) & ... & Pk(Ak) then C = c

covers all rules sought as concept definitions in machine learning. A good fit between knowledge and data is important, but a discoverer should know real-world objects and attributes, not merely data and formal hypotheses:

K1: Seek objective knowledge about the real world, not knowledge about data.

This principle contrasts with a common data mining practice, when researchers focus entirely on data. Sometimes, however, specific knowledge about data is important, for instance about wrong data or data encoding schemas.

Schemas such as (1) or (2) define vast, sometimes infinite, hypothesis spaces, so that hypotheses must be generated, often piece by piece, evaluated, and retained or eliminated.

K2: [Principle of knowledge construction] All elements of each piece of knowledge are constructed and evaluated by a discovery system.

Predictions are essential for hypothesis evaluation. It is doubtful that we would consider a particular statement a piece of knowledge about the external world if it did not enable empirically verifiable predictions:

K3: A common characteristic of knowledge is its empirical contents, that is, empirically verifiable predictions. Knowledge improvement can be measured by the increased empirical contents.

Logical inference is used to draw empirically verifiable conclusions. The premises are typically general statements and some known facts, while the conclusions are statements which predict new facts. Empirical contents occur in regularities (laws, statements, sentences), not in predicates, which do not have truth value. Concepts, understood as predicates, have no empirical contents. We can define huge numbers of concepts, but that does not provide knowledge. The vast majority of knowledge goes beyond concept definitions:

K4: Each concept is an investment; it can be justified by the regularities it allows us to express.

Principles of Search

Discoverers explore the unknown and examine many possibilities which can be seen as dead ends from the perspective of the eventually accepted solutions, because they do not become components of the accepted solutions. This process is called search. We can conclude that:

S1: If you do not search, you do not discover.

A simple search problem in AI can be defined by a set of initial states and a set of goal states in a space of states and moves. The task is to find a trajectory from an initial state to a goal state. In the domain of discovery the goal states are not known in advance, but the basic framework of discovery can be applied (Simon, 1979; Langley et al., 1987):

S2: [Herbert Simon 1] Discovery is problem solving. Each problem is defined by the initial state of knowledge, including data, and by the goals. Solutions are generated by search mechanisms aimed at the goals.

The initial state can be a set of data, while a goal state may be an equation that fits those data (Langley et al., 1987; Zembowicz & Zytkow, 1991; Dzeroski & Todorovski, 1993; Washio & Motoda, 1997). The search proceeds by construction of terms, by their combination into equations, by generation of numerical parameters in equations, and by evaluation of completed equations.

Search spaces should be sufficiently large to provide solutions for many problems. But simply enlarging the search space does not make an agent more creative. It is easy to implement a program that enumerates all strings of characters. If enough time was available, it would produce all books, all data structures, all computer programs. But it produces a negligible proportion of valuable results, and it cannot tell which are those valuable results.

S3: [Herbert Simon 2] A heuristic and data-driven search is an efficient and effective discovery tool. Data are transformed into plausible pieces of solutions. Partial solutions are evaluated and used to guide the search.

Goal states are supposed to exceed the evaluation thresholds. Without that, even the best hypothesis reached in the discovery process can be insufficient. A discovery search may fail or take too much time, and a discoverer should be able to change the goal and continue.

S4: [Recovery from failure] Each discovery step may fail, and cognitive autonomy requires methods that recognize failure and decide on the next goal.

Search states can be generated in many orders. Search control, which handles the search at run-time, is an important discovery tool.

S5: [Simple-first] Order hypotheses by simplicity layers; try simpler hypotheses before more complex ones. The implementation is easy, since simpler hypotheses are constructed before more complex ones. Also, simpler hypotheses are usually more general, so they are tried before more complex, that is, more specific hypotheses. If a simple hypothesis is sufficient, there is no need to make it more complex.

Do not create the same hypothesis twice, but do not miss any:

S6: Make search non-redundant and exhaustive within each simplicity layer.
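Read operationally, S5 and S6 describe an iterative-deepening style loop over hypothesis layers. The sketch below is a hypothetical illustration, not a reconstruction of any system cited here: hypotheses are enumerated in simplicity layers (here, polynomials ordered by degree), duplicates are skipped, the first hypothesis exceeding the evaluation threshold is returned, and None signals failure so the caller can change the goal in the spirit of S4. The toy dataset and all names are assumptions.

```python
# Hedged sketch of S5/S6 (illustrative, not from the paper): simple-first,
# non-redundant, exhaustive-per-layer hypothesis search.
from itertools import product

def layered_search(layers, fitness, threshold):
    """layers: iterable of hypothesis iterables, simplest layer first (S5).
    Returns the first hypothesis whose fitness meets the threshold,
    or None so the caller can recognize failure and re-plan (S4)."""
    seen = set()                          # S6: never try the same hypothesis twice
    for layer in layers:
        for h in layer:                   # S6: exhaustive within each layer
            if h in seen:
                continue
            seen.add(h)
            if fitness(h) >= threshold:   # goal test: exceed evaluation threshold
                return h
    return None

# Toy equation finding: recover y = 1 + x^2 from four data points.
xs, ys = [0, 1, 2, 3], [1, 2, 5, 10]
layers = (product(range(-3, 4), repeat=deg + 1) for deg in range(4))  # by degree

def fitness(coeffs):                      # coeffs = (c0, c1, ...): y = sum c_k x^k
    err = sum((sum(c * x**k for k, c in enumerate(coeffs)) - y) ** 2
              for x, y in zip(xs, ys))
    return 1.0 / (1.0 + err)

print(layered_search(layers, fitness, threshold=1.0))   # -> (1, 0, 1), i.e. 1 + x^2
```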
Beyond Simple-Minded Tools

The vast majority of data mining is performed with the use of single-minded tools. Those tools miss discovery opportunities if results do not belong to a particular hypothesis space. They rarely consider the question whether the best-fit hypothesis is good enough to be accepted, and whether other forms of knowledge are more suitable for a given case. They ignore the following principle (Zembowicz & Zytkow, 1996):

O1: [Open-mindedness] Knowledge should be discovered in the form that reflects real-world relationships, not one or another tool at hand.

Statistics

Equations and other forms of deterministic knowledge can be augmented with statistical distributions, for instance, y = f(x) + N(0, σ(x)). Here N(0, σ(x)) represents a Gaussian distribution of error, with mean value equal to zero and standard deviation σ(x). Most often a particular distribution is assumed rather than derived from data, because traditional statistical data mining operated on small samples and used visualization tools to stimulate human judgement. Currently, when large datasets are abundant and more data can be easily generated in automated experiments, we can argue for the verification of assumptions:

STAT1: Do not make assumptions and do not leave unverified assumptions. For instance, when using the model y = f(x) + N(0, σ(x)), verify the Gaussian distribution of residua, with the use of runs tests and other tests of normality.

Publications in statistics notoriously start from "Let us assume that ...". Either use data to verify the assumptions, or, when this is not possible, ask what is the risk or cost when the assumptions are not met.

Another area which requires revision of traditional statistical thinking is testing hypothesis significance. Statistics asks how many real regularities we are willing to disregard (error of omission) and how many spurious regularities we are willing to accept (error of admission). In a given dataset, weak regularities cannot be distinguished from patterns that come from a random distribution (the significance dilemma for a given regularity can be solved by acquisition of additional data). Automated discovery systems search massive hypothesis spaces with the use of statistical tests, which occasionally mistake a random fluctuation for a genuine regularity:

STAT2: [Significance 1] Choose a significance threshold that enables a middle ground between spurious regularities and weak but real regularities, specific to a given hypothesis space.

While a significance threshold should admit a small percent of spurious regularities, it is sometimes difficult to compute the right threshold for a given search. Each threshold depends on the number of independent hypotheses and independent tests. When those numbers are difficult to estimate, experiments on random data can be helpful. We know that those data contain no regularities, so all detected regularities are spurious and should be rejected by the test of significance. We should set the threshold just about that level:

STAT3: [Significance 2] Use random data to determine the right values of significance thresholds for a given search mechanism.
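A minimal sketch of how STAT3 might look in code, again an illustration rather than anything from the paper: the same search mechanism (here, a toy scan for the strongest pairwise correlation among attributes) is run repeatedly on pure noise, where every detected "regularity" is spurious by construction, and the significance threshold is set at a high quantile of the best scores found there. All function names and parameter values are assumptions.

```python
# Hedged sketch of STAT3 (illustrative): calibrate a significance threshold
# by running the regularity search on random data that contains no regularities.
import random

def strongest_correlation(rows):
    """One search run: largest |Pearson r| over all attribute pairs."""
    def r(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5
    cols = list(zip(*rows))
    return max(abs(r(cols[i], cols[j]))
               for i in range(len(cols)) for j in range(i + 1, len(cols)))

def calibrate_threshold(n_rows=200, n_attrs=10, trials=100, quantile=0.99):
    """Threshold rejecting ~99% of the best hits found in pure noise."""
    best = sorted(
        strongest_correlation([[random.gauss(0, 1) for _ in range(n_attrs)]
                               for _ in range(n_rows)])
        for _ in range(trials))
    return best[int(quantile * trials) - 1]

print("accept a correlation only if |r| >", round(calibrate_threshold(), 3))
```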
References

1995. Working Notes of the AAAI Spring Symposium on Systematic Methods of Scientific Discovery. Stanford, March 27-29.
1998. ECAI Workshop on Scientific Discovery. Brighton, August 24.
1999. AISB Symposium on Scientific Creativity. Edinburgh, April 6-9.
Chaudhuri, S. & Madigan, D., eds. 1999. Proceedings of the Fifth ACM SIGKDD Intern. Conf. on Knowledge Discovery and Data Mining. ACM, New York.
Dzeroski, S. & Todorovski, L. 1993. Discovering Dynamics. Proc. of the 10th International Conference on Machine Learning, 97-103.
Edwards, P., ed. 1993. Working Notes of the MLNet Workshop on Machine Discovery. Blanes.
Kocabas, S. & Langley, P. 1995. Integration of research tasks for modeling discoveries in particle physics. Working Notes of the AAAI Spring Symposium on Systematic Methods of Scientific Discovery, Stanford, CA, AAAI Press, 87-92.
Komorowski, J. & Żytkow, J.M. 1997. Principles of Data Mining and Knowledge Discovery. Springer.
Kulkarni, D. & Simon, H.A. 1987. The Process of Scientific Discovery: The Strategy of Experimentation. Cognitive Science, 12, 139-175.
Langley, P., Simon, H.A., Bradshaw, G., & Żytkow, J.M. 1987. Scientific Discovery: Computational Explorations of the Creative Processes. Boston: MIT Press.
Nordhausen, B. & Langley, P. 1993. An Integrated Framework for Empirical Discovery. Machine Learning, 12, 17-47.
Piatetsky-Shapiro, G. & Frawley, W., eds. 1991. Knowledge Discovery in Databases. Menlo Park, Calif.: AAAI Press.
Shen, W.M. 1993. Discovery as Autonomous Learning from Environment. Machine Learning, 12, 143-165.
Shrager, J. & Langley, P., eds. 1990. Computational Models of Scientific Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann.
Simon, H.A. 1979. Models of Thought. New Haven, Connecticut: Yale Univ. Press.
Simon, H.A., Valdes-Perez, R., & Sleeman, D., eds. 1997. Artificial Intelligence 91, Special Issue: Scientific Discovery.
Valdés-Pérez, R.E. 1993. Conjecturing hidden entities via simplicity and conservation laws: machine discovery in chemistry. Artificial Intelligence, 65, 247-280.
Washio, T. & Motoda, H. 1997. Discovering Admissible Models of Complex Systems Based on Scale-Types and Identity Constraints. Proc. IJCAI'97, 810-817.
Zembowicz, R. & Żytkow, J.M. 1991. Automated Discovery of Empirical Equations from Data. In Ras & Zemankova, eds., Methodologies for Intelligent Systems, Springer, 429-440.
Zembowicz, R. & Żytkow, J.M. 1996. From Contingency Tables to Various Forms of Knowledge in Databases. In Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, eds., Advances in Knowledge Discovery & Data Mining, AAAI Press, 329-349.
Żytkow, J.M., ed. 1992. Proceedings of the ML-92 Workshop on Machine Discovery.
Żytkow, J.M., ed. 1993. Machine Learning, 12.
Żytkow, J.M., ed. 1997. Machine Discovery. Kluwer.

Author Index

Abouelnasr, B.M. 357
An, A. 214
Baillie, J.-C. 316
Baker, K. 187
Barker, K. 40
Barrie, T. 421
Barrière, C. 53
Bayat, L. 293
Bowes, J. 326
Cercone, N.J. 214, 227, 347
Chaker, W. 69
Charniak, E. 442
Cook, D. 151
Cooke, J. 326
Cornacchia, N. 40
Davies, M.S. 305
Delgrande, J. 411
Dumont, G.A. 305
Eavis, T. 280
Elio, R. 240
Fox, M.S. 164
Ganascia, J.-G. 316
Garzone, M. 337
Gauthier, B. 69
Ghorbani, A.A. 293
Goodwin, S.D. 26, 201
Greer, J.E. 326
Hahn, U. 176
Hamilton, H.J. 227
Han, J. 214, 347
Hannon, C. 151
Heckman, N.E. 305
Hu, X. 227
Hussin, M.F. 357
Japkowicz, N. 280
Karimi, K. 369
Kattani, D. 69
Kubon, P.P. 82
Lallement, Y. 164
Ling, C.X. 432
Liscano, R. 187
Mamede, M. 115
Matwin, S. 379
McDonald, S. 126
McFetridge, P. 126
Meech, J. 187
Mercer, R.E. 337
Miyamichi, J. 267
Morin, J. 379
Moulin, B. 69
Nagarajan, S. 26, 201
Najarian, K. 305
Neufeld, E. 326
Nock, R. 90
Plaat, A. 1
Popowich, F. 53, 82, 126
Rijswijck, van, J. 13
Sattar, A. 26
Scarlett, E. 138
Schaeffer, J. 1
Scheutz, M. 389
Schulz, S. 176
Sebban, M. 90
Shoukry, A.A. 357
Situ, Q. 400
Stenz, G. 254
Stroulia, E. 400
Suderman, M. 411
Szpakowicz, S. 138
Tawfik, A.Y. 421
Tisher, G. 82
Tokuda, N. 267
Toole, J. 126
Turcato, D. 126
Upal, M.A. 240
Vitória, A. 115
Wiese, K.C. 201
Wolf, A. 254
Wu, J. 102
Xiang, Y. 227
Yan, J. 267
Yang, Q. 102
Zhang, H. 432
Zhao, Z. 432
Żytkow, J. 443


Contents

  • Lecture Notes in Artificial Intelligence

    • Advances in Artificial Intelligence

    • Preface

    • Organization

      • Program Committee

      • Referees

      • Sponsoring Institutions

      • Table of Contents

      • Unifying Single-Agent and Two-Player Search

        • Introduction

        • Algorithms vs Enhancements

        • Modeling Search

          • Graph Definition

          • Solution Definition

          • Resource Constraints

          • Search Objective

          • Domain Knowledge

          • Search Enhancements

            • State Space Techniques

            • State- and Solution-Space Interaction

            • Successor Ordering Techniques

            • Repeatedly Visiting States

            • Off-Line Computations

            • Search Effort Distribution

            • Conclusion
