
Theory and Novel Applications of Machine Learning


DOCUMENT INFORMATION

Pages: 386
File size: 28.11 MB

Contents

Theory and Novel Applications of Machine Learning
Edited by Meng Joo Er and Yi Zhou

Published by In-Teh. In-Teh is the Croatian branch of I-Tech Education and Publishing KG, Vienna, Austria.

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained inside. After this work has been published by In-Teh, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2009 In-teh
www.in-teh.org
Additional copies can be obtained from: publication@ars-journal.com

First published February 2009
Printed in Croatia
ISBN 978-3-902613-55-4

Preface

Ever since computers were invented many decades ago, researchers have been trying to understand how human beings learn, and many interesting paradigms and approaches towards emulating human learning abilities have been proposed. The ability to learn is one of the central features of human intelligence, which makes it an important ingredient in both traditional Artificial Intelligence (AI) and the emerging field of Cognitive Science. Machine Learning (ML) draws upon ideas from a diverse set of disciplines, including AI, Probability and Statistics, Computational Complexity, Information Theory, Psychology and Neurobiology, Control Theory and Philosophy.
ML encompasses broad topics including Fuzzy Logic, Neural Networks (NNs), Evolutionary Algorithms (EAs), Probability and Statistics, Decision Trees, etc. Real-world applications of ML are widespread, including Pattern Recognition, Data Mining, Gaming, Bio-science, Telecommunications, Control and Robotics. Designing an intelligent machine involves a number of design choices, including the type of training experience, the target performance function to be learned, a representation of this target function, and an algorithm for learning the target function from training data. Depending on the training resources available, ML is typically categorized into Supervised Learning (SL), Unsupervised Learning (UL) and Reinforcement Learning (RL). It is interesting to note that human beings adopt more or less these three learning paradigms in their own learning processes.

This book reports the latest developments and futuristic trends in ML. New theory and novel applications of ML by many excellent researchers have been organized into 23 chapters.

SL is an ML technique for creating a function from training data consisting of pairs of input objects and desired outputs. The task of SL is to predict the value of the function for any valid input object after having seen a number of training examples (i.e. pairs of inputs and desired outputs). Towards this end, the essence of SL is to generalize from the presented data to unseen situations in a "reasonable" way. The key characteristic of SL is the existence of a "teacher" and of training input-output data. The primary objective of SL is to minimize the error between the predicted output of the system and the actual output. New developments in SL paradigms are presented in Chapters 1-3.

UL is an ML methodology whereby a model is fit to observations, typically by treating the input objects as a set of random variables and building a joint density model. It is distinguished from SL by the fact that there is no a priori output required.
Novel clustering and classification approaches are reported in Chapters 4 and 5.

Distinguished from SL, Reinforcement Learning (RL) is a learning process without an explicit teacher providing correct instructions. The RL methodology also differs from UL approaches in that it learns from evaluative feedback from the system. RL has been accepted as a fundamental paradigm for ML, with particular emphasis on the computational aspects of learning. The RL paradigm is a good ML framework for emulating the human way of learning from interactions to achieve a certain goal. The learner is termed an agent which interacts with the environment: the agent selects appropriate actions, the environment responds to these actions and presents new states to the agent, and these interactions continue. In this book, novel algorithms and the latest developments in RL have been included. To be more specific, proposed methodologies for enhancing Q-learning are reported in Chapters 6-11. Evolutionary approaches in ML are presented in Chapters 12-14, and real-world applications of ML are reported in the remaining chapters.

Editors

Meng Joo Er
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore

Yi Zhou
School of Electrical and Electronic Engineering, Singapore Polytechnic, Singapore

Contents

Preface (p. V)

1. A Drawing-Aid System using Supervised Learning (Kei Eguchi), p. 1
2. Supervised Learning with Hybrid Global Optimisation Methods. Case Study: Automated Recognition and Classification of Cork Tiles (Antoniya Georgieva and Ivan Jordanov), p. 11
3. Supervised Rule Learning and Reinforcement Learning in A Multi-Agent System for the Fish Banks Game (Bartłomiej Śnieżyński), p. 33
4. Clustering, Classification and Explanatory Rules from Harmonic Monitoring Data (Ali Asheibi, David Stirling, Danny Sutanto and Duane Robinson), p. 45
5. Discriminative Cluster Analysis (Fernando De la Torre and Takeo Kanade), p. 69
6. Influence Value Q-Learning: A Reinforcement Learning Algorithm for Multi Agent Systems (Dennis Barrios-Aranibar and Luiz M. G. Gonçalves), p. 81
7. Reinforcement Learning in Generating Fuzzy Systems (Yi Zhou and Meng Joo Er), p. 99
8. Incremental-Topological-Preserving-Map-Based Fuzzy Q-Learning (ITPM-FQL) (Meng Joo Er, Linn San and Yi Zhou), p. 117
9. A Q-learning with Selective Generalization Capability and its Application to Layout Planning of Chemical Plants (Yoichi Hirashima), p. 131
10. A FAST-Based Q-Learning Algorithm (Kao-Shing Hwang, Yuan-Pao Hsu and Hsin-Yi Lin), p. 145
11. Constrained Reinforcement Learning from Intrinsic and Extrinsic Rewards (Eiji Uchibe and Kenji Doya), p. 155
12. TempUnit: A Bio-Inspired Spiking Neural Network (Olivier F. L. Manette), p. 167
13. Proposal and Evaluation of the Improved Penalty Avoiding Rational Policy Making Algorithm (Kazuteru Miyazaki, Takuji Namatame and Hiroaki Kobayashi), p. 181
14. A Generic Framework for Soft Subspace Pattern Recognition (Dat Tran, Wanli Ma, Dharmendra Sharma, Len Bui and Trung Le), p. 197
15. Data Mining Applications in Higher Education and Academic Intelligence Management (Vasile Paul Bresfelean), p. 209
16. Solving POMDPs with Automatic Discovery of Subgoals (Le Tien Dung, Takashi Komeda and Motoki Takagi), p. 229
17. Anomaly-based Fault Detection with Interaction Analysis Using State Interface (Byoung Uk Kim), p. 239
18. Machine Learning Approaches for Music Information Retrieval (Tao Li, Mitsunori Ogihara, Bo Shao and Dingding Wang), p. 259
19. LS-Draughts: Using Databases to Treat Endgame Loops in a Hybrid Evolutionary Learning System (Henrique Castro Neto, Rita Maria Silva Julia and Gutierrez Soares Caixeta), p. 279
20. Blur Identification for Content Aware Processing in Images (Jérôme Da Rugna and Hubert Konik), p. 299
21. An Adaptive Markov Game Model for Cyber Threat Intent Inference (Dan Shen, Genshe Chen, Jose B. Cruz, Jr., Erik Blasch and Khanh Pham), p. 317
22. Life-long Learning Through Task Rehearsal and Selective Knowledge Transfer (Daniel L. Silver and Robert E. Mercer), p. 335
23. Machine Learning for Video Repeat Mining (Xianfeng Yang and Qi Tian), p. 357

[...] (WWF, 2006) On the other hand, in the past several years of technological advancement, cork has become one of the most effective and reliable natural materials for floor and wall covering. Some of the advantages of cork tiles are their durability, ability to reduce noise, thermal insulation, and reduction of allergens. Many of the cork floors installed [...]

[...] operators), BP, BPD (BP with deflection), SA, hybridization of BP and SA (BPSA), and GA. They reported poor performance for the SA method, but still promoted the use of GO methods instead of standard BP. The reported results indicated that the population-based methods (GA and DE) were promising and effective, although the winner in their study was their [...]

[...] (garbled preview fragments of Eqs. (4), (8) and (9) from Chapter 1, involving the damping factor α, the threshold parameter, and the switching functions S_x,n(t) and S_y,n(t), are not recoverable from this preview) [...]
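The agent-environment interaction loop and the Q-learning rule that Chapters 6-11 build on can be sketched in a toy setting. This is a hypothetical illustration under assumed parameters (a 5-state corridor, epsilon-greedy exploration), not code from any chapter of the book:

```python
import random

# Hypothetical sketch (not from the book): tabular Q-learning on a toy
# 5-state corridor.  The agent starts at state 0 and receives reward 1
# on reaching the rightmost state; all other transitions give reward 0.

def q_learning(n_states=5, episodes=1000, alpha=0.5, gamma=0.9, eps=0.1):
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # epsilon-greedy action selection (random on ties)
            if random.random() < eps or q[s][0] == q[s][1]:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s_next = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s_next == n_states - 1 else 0.0
            # core update: Q(s,a) += alpha * (r + gamma * max Q(s',.) - Q(s,a))
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

random.seed(0)
q = q_learning()
# After training, moving right should score higher in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(4)))
```

The evaluative feedback here is just the delayed scalar reward; no teacher ever tells the agent which action was correct, which is exactly the distinction from SL drawn in the preface.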

Posted: 26/06/2014, 23:20
