Intelligent systems design and applications


Intelligent Systems Design and Applications

Advances in Soft Computing

Editor-in-chief:
Prof. Janusz Kacprzyk
Systems Research Institute, Polish Academy of Sciences
ul. Newelska 6, 01-447 Warsaw, Poland
E-mail: kacprzyk@ibspan.waw.pl
http://www.springer.de/cgi-bin/search_book.pl?series=4240

Robert John and Ralph Birkenhead (Eds.): Soft Computing Techniques and Applications, 2000, ISBN 3-7908-1257-9
James J. Buckley and Esfandiar Eslami: An Introduction to Fuzzy Logic and Fuzzy Sets, 2002, ISBN 3-7908-1447-4
Mieczyslaw Klopotek, Maciej Michalewicz and Slawomir T. Wierzchon (Eds.): Intelligent Information Systems, 2000, ISBN 3-7908-1309-5
Ajith Abraham and Mario Koppen (Eds.): Hybrid Information Systems, 2002, ISBN 3-7908-1480-6
Peter Sincak, Jan Vascak, Vladimir Kvasnicka and Radko Mesiar (Eds.): The State of the Art in Computational Intelligence, 2000, ISBN 3-7908-1322-2
Przemyslaw Grzegorzewski, Olgierd Hryniewicz, Maria A. Gil (Eds.): Soft Methods in Probability, Statistics and Data Analysis, 2002, ISBN 3-7908-1526-8
Bernd Reusch and Karl-Heinz Temme (Eds.): Computational Intelligence in Theory and Practice, 2000, ISBN 3-7908-1357-5
Lech Polkowski: Rough Sets, 2002, ISBN 3-7908-1510-1
Rainer Hampel, Michael Wagenknecht, Nasredin Chaker (Eds.): Fuzzy Control, 2000, ISBN 3-7908-1327-3
Mieczyslaw Klopotek, Maciej Michalewicz and Slawomir T. Wierzchon (Eds.): Intelligent Information Systems 2002, 2002, ISBN 3-7908-1509-8
Henrik Larsen, Janusz Kacprzyk, Slawomir Zadrozny, Troels Andreasen, Henning Christiansen (Eds.): Flexible Query Answering Systems, 2000, ISBN 3-7908-1347-8
Andrea Bonarini, Francesco Masulli and Gabriella Pasi (Eds.): Soft Computing Applications, 2002, ISBN 3-7908-1544-6
Robert John and Ralph Birkenhead (Eds.): Developments in Soft Computing, 2001, ISBN 3-7908-1361-3
Mieczyslaw Klopotek, Maciej Michalewicz and Slawomir T. Wierzchon (Eds.): Intelligent Information Systems 2001, 2001, ISBN 3-7908-1407-5
Antonio Di Nola and Giangiacomo Gerla (Eds.):
Lectures on Soft Computing and Fuzzy Logic, 2001, ISBN 3-7908-1396-6
Tadeusz Trzaskalik and Jerzy Michnik (Eds.): Multiple Objective and Goal Programming, 2002, ISBN 3-7908-1409-1
Leszek Rutkowski, Janusz Kacprzyk (Eds.): Neural Networks and Soft Computing, 2003, ISBN 3-7908-0005-8
Jürgen Franke, Gholamreza Nakhaeizadeh, Ingrid Renz (Eds.): Text Mining, 2003, ISBN 3-7908-0041-4
Tetsuzo Tanino, Tamaki Tanaka, Masahiro Inuiguchi: Multi-Objective Programming and Goal Programming, 2003, ISBN 3-540-00653-2
Mieczyslaw Klopotek, Slawomir T. Wierzchon, Krzysztof Trojanowski (Eds.): Intelligent Information Processing and Web Mining, 2003, ISBN 3-540-00843-8

Ajith Abraham, Katrin Franke, Mario Koppen (Eds.)
Intelligent Systems Design and Applications
With 219 Figures and 72 Tables
Springer

Editors:
Ajith Abraham, Oklahoma State University, Tulsa, OK, USA. Email: aa@cs.okstate.edu
Katrin Franke, Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany. Email: katrin.franke@ipk.fhg.de
Mario Koppen, Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany. Email: mario.koeppen@ipk.fhg.de

Associated Editors:
Damminda Alahakoon, Monash University, Victoria, Australia
Leon S. L. Wang, New York Institute of Technology, New York, USA
Prithviraj Dasgupta, University of Nebraska, Omaha, USA
Vana Kalogeraki, University of California, Riverside, USA

ISSN 1615-3871
ISBN 978-3-540-40426-2
ISBN 978-3-540-44999-7 (eBook)
DOI 10.1007/978-3-540-44999-7

Cataloging-in-Publication Data applied for. Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet.

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in other ways, and storage in data
banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag Berlin Heidelberg GmbH. Violations are liable to prosecution under German Copyright Law.

http://www.springer.de

© Springer-Verlag Berlin Heidelberg 2003
Originally published by Springer-Verlag Berlin Heidelberg New York in 2003

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: digital data supplied by the authors
Cover design: E. Kirchner, Heidelberg
Printed on acid-free paper

ISDA 2003

Foreword

Soft computing (computational intelligence) is the fusion of methodologies such as fuzzy logic (FL) including rough set theory (RST), neural networks (NN), evolutionary computation (EC), chaos computing (CC), fractal theory (FT), wavelet transformation (WT), cellular automata, percolation models (PM) and artificial immune networks (AIN). It makes it possible to construct intelligent hybrid systems, provides novel additional computing capabilities, and can solve complex system problems that traditional analytic methods have been unable to solve. In addition, it yields rich knowledge representation (symbol and pattern), flexible knowledge acquisition (by machine learning from data and by interviewing experts) and flexible knowledge processing (inference by interfacing between symbolic and pattern knowledge), which enable intelligent systems to be constructed at low cost (cognitive and reactive distributed artificial intelligences).

This conference puts emphasis on the design and application of intelligent systems. The conference proceedings include many papers which deal with the design
and application of intelligent systems for solving real-world problems. These proceedings will help to close the gap between theory and practice. I expect all participants will learn how to apply soft computing (computational intelligence) practically to real-world problems.

Yasuhiko Dote
Muroran, Japan
http://bank.cssc.muroran-it.ac.jp/
May 2003

ISDA'03 General Chair's Message

On behalf of the ISDA'03 organizing committee, I wish to extend a very warm welcome to the conference and to Tulsa in August 2003. The conference program committee has organized an exciting and invigorating program comprising presentations from distinguished experts in the field, and important and wide-ranging contributions on state-of-the-art research that provide new insights into "Current Innovations in Intelligent Systems Design and Applications".

ISDA'03 builds on the success of last year's conference: ISDA'02 was held in Atlanta, USA, August 07-08, 2002, and attracted participants from over 25 countries. ISDA'03, the Third International Conference on Intelligent Systems Design and Applications, held during August 10-13, 2003, in Tulsa, USA, presents a rich and exciting program. The main themes addressed by this conference are:

Architectures of intelligent systems
Image, speech and signal processing
Internet modeling
Data mining
Business and management applications
Control and automation
Software agents
Knowledge management

ISDA'03 is hosted by the College of Arts and Sciences, Oklahoma State University, USA. ISDA'03 is technically sponsored by the IEEE Systems, Man and Cybernetics Society, World Federation on Soft Computing, European Society for Fuzzy Logic and Technology, Springer-Verlag Germany, the Center of Excellence in Information Technology and Telecommunications (COEITT) and Oklahoma State University. ISDA'03 received 117 technical paper submissions from over 28 countries, as well as tutorial, special technical session and workshop proposals. The conference program committee had a very challenging task of choosing high
quality submissions. Each paper was peer reviewed by at least two independent referees of the program committee, and based on the recommendations of the reviewers 60 papers were finally accepted. The papers offer stimulating insights into emerging intelligent technologies and their applications in Internet security, data mining, image processing, scheduling, optimization and so on. I would like to express my sincere thanks to all the authors and members of the program committee who have made this conference a success. Finally, I hope that you will find these proceedings to be a valuable resource in your professional, research, and educational activities, whether you are a student, academic, researcher, or a practicing professional. Enjoy!

Ajith Abraham, Oklahoma State University, USA
ISDA'03 - General Chair
http://ajith.softcomputing.net
May 2003

ISDA'03 Program Chair's Message

We would like to welcome you all to ISDA'03: the Third International Conference on Intelligent Systems Design and Applications, being held in Tulsa, Oklahoma, in August 2003. This conference, the third in a series, once again brings together researchers and practitioners from all over the world to present their newest research on the theory and design of intelligent systems and to share their experience in the actual applications of intelligent systems in various domains. The multitude of high-quality research papers in the conference is testimony to the power of the intelligent systems methodology in problem solving and the superior performance of intelligent-systems-based solutions in diverse real-world applications. We are grateful to the authors, reviewers, session chairs, members of the various committees, other conference staff, and most of all the General Chair, Professor Ajith Abraham, for their invaluable contributions to the program. The high technical and organizational quality of the conference you will enjoy could not possibly have been achieved without their dedication and hard
work. We wish you a most rewarding professional experience, as well as an enjoyable personal one, in attending ISDA'03 in Tulsa.

ISDA'03 Program Chairs:
Andrew Sung, New Mexico Institute of Mining and Technology, USA
Gary Yen, Oklahoma State University, USA
Lakhmi Jain, University of South Australia, Australia

ISDA'03 - Organization

Honorary Chair: Yasuhiko Dote, Muroran Institute of Technology, Japan

General Chair: Ajith Abraham, Oklahoma State University, USA

Steering Committee:
Andrew Sung, New Mexico Institute of Mining and Technology, USA
Antony Satyadas, IBM Corporation, Cambridge, USA
Baikunth Nath, University of Melbourne, Australia
Etienne Kerre, Ghent University, Belgium
Janusz Kacprzyk, Polish Academy of Sciences, Poland
Lakhmi Jain, University of South Australia, Australia
P. Saratchandran, Nanyang Technological University, Singapore
Xindong Wu, University of Vermont, USA

Program Chairs:
Andrew Sung, New Mexico Institute of Mining and Technology, USA
Gary Yen, Oklahoma State University, USA
Lakhmi Jain, University of South Australia, Australia

Stream Chairs:
Architectures of intelligent systems: Clarence W. de Silva, University of British Columbia, Canada
Computational Web intelligence: Yanqing Zhang, Georgia State University, Georgia
Information security: Andrew Sung, New Mexico Institute of Mining and Technology, USA
Image, speech and signal processing: Emma Regentova, University of Nevada, Las Vegas, USA
Control and automation: P. Saratchandran, Nanyang Technological University, Singapore
Data mining: Kate Smith, Monash University, Australia
Software agents: Marcin Paprzycki, Oklahoma State University, USA
Business and management applications: Andrew Flitman, Monash University, Australia
Knowledge management: Xiao Zhi Gao, Helsinki University of Technology, Finland

Special Sessions Chair: Yanqing Zhang, Georgia State University, Georgia

Local Organizing Committee:
Dursun Delen, Oklahoma State University, USA
George Hedrick, Oklahoma State
University, USA
Johnson Thomas, Oklahoma State University, USA
Khanh Vu, Oklahoma State University, USA
Ramesh Sharda, Oklahoma State University, USA
Ron Cooper, COEITT, USA

Finance Coordinator: Hellen Sowell, Oklahoma State University, USA

Web Chairs:
Andy AuYeung, Oklahoma State University, USA
Ninan Sajith Philip, St. Thomas College, India

Publication Chair: Katrin Franke, Fraunhofer IPK-Berlin, Germany

International Technical Committee:
Andrew Flitman, Monash University, Australia
Carlos A. Coello Coello, Laboratorio Nacional de Informática Avanzada, Mexico
Chun-Hsien Chen, Chang Gung University, Taiwan
Clarence W. de Silva, University of British Columbia, Canada
Costa Branco P. J., Instituto Superior Tecnico, Portugal
Damminda Alahakoon, Monash University, Australia
Dharmendra Sharma, University of Canberra, Australia
Dimitris Margaritis, Iowa State University, USA
Douglas Heisterkamp, Oklahoma State University, USA
Emma Regentova, University of Nevada, Las Vegas, USA
Etienne Kerre, Ghent University, Belgium
Francisco Herrera, University of Granada, Spain
Frank Hoffmann, Royal Institute of Technology, Sweden
Frank Klawonn, University of Applied Sciences Braunschweig, Germany
Gabriella Pasi, ITIM - Consiglio Nazionale delle Ricerche, Italy
Greg Huang, MIT, USA
Irina Perfilieva, University of Ostrava, Czech Republic
Janos Abonyi, University of Veszprem, Hungary
Javier Ruiz-del-Solar, Universidad de Chile, Chile
Jihoon Yang, Sogang University, Korea
Jiming Liu, Hong Kong Baptist University, Hong Kong
John Yen, The Pennsylvania State University, USA
Jose Manuel Benitez, University of Granada, Spain
Jose Mira, UNED, Spain
Jose Ramon Alvarez Sanchez, Univ. Nac. de Educacion a Distancia, Spain
Kalyanmoy Deb, Indian Institute of Technology, India
Karthik Balakrishnan, Fireman's Fund Insurance Company, USA
Kate Smith, Monash University, Australia

R. Redpath and B. Srinivasan

In the figure it is extremely difficult to identify the three-dimensional cluster. The cubes are slightly
larger at the known cluster position, but the change in size could easily be attributed to noise. It can be concluded that if the cluster becomes too spread out it cannot be identified.

Figure: Test data set with a three-dimensional cluster with spread (variance = 2) in DBMiner.

Figure: Test data set 6; a three-dimensional cluster with spread (variance = 2) in WinViz.

Criteria for a Comparative Study of Visualization Techniques in Data Mining

In the figure the same test data is fed into WinViz. The cluster can be easily identified. The variance is indicated by the spread of the probability distribution for each dimension. It is also apparent from the spread of the connecting lines that, in spite of this variance, the display still clearly shows a cluster is present. It can be concluded that when the variance of a cluster becomes large it is difficult to identify the cluster in DBMiner: there were only a number of slightly larger cubes in the region of the cluster, against a background of noise of only 10% of the number of instances present. WinViz clearly identified the presence of the cluster under the same conditions.

Conclusion

In order to carry out a comparison of visualization techniques, suitable criteria for comparison need to be established. We have chosen the following criteria as addressing the relevant issues in determining the success or failure of a particular visualization technique. We have divided the criteria for comparing the techniques into (1) interface considerations and (2) characteristics of the data set.

There are four criteria under interface considerations: whether the technique is
perceptually satisfying
intuitive in its use
able to manipulate the display dynamically
easy to use

There are four criteria to consider under the characteristics of the data set:
The number of instances in the data set
The number of dimensions in the data set
The patterns contained within the data set. These patterns, if clusters, have more specific characteristics:
a. The
dimensionality of the clusters present
b. The number of clusters present
c. The variance of the clusters present
The number of noise instances present

The criteria relating to the characteristics of the data set can be used as the basis to generate a standard set of test data. These standard test data sets could then be used to compare the performance of visualization techniques based on the criteria.

References

DBMiner Software (1998) Version 4.0 Beta; DBMiner Technology Inc., British Columbia, Canada
Chambers JM, Cleveland WS, Kleiner B, Tukey PA (1983) Graphical Methods for Data Analysis; Chapman and Hall, New York
Inselberg A, Dimsdale B (1990) Parallel Coordinates: A Tool for Visualizing Multi-Dimensional Geometry; Visualization '90, San Francisco, CA, pp 361-370
Keim DA, Bergeron RD, Pickett RM (1995) Test Data Sets for Evaluating Data Visualization Techniques; Perceptual Issues in Visualization (Eds. Grinstein G and Levkowitz H); Springer-Verlag, Berlin, pp 9-21
Keim DA, Kriegel H (1996) Visualization Techniques for Mining Large Databases: A Comparison; IEEE Transactions on Knowledge and Data Engineering, Special Issue on Data Mining; Vol 8, No 6, December, pp 923-938
Pickett RM, Grinstein GG (2002) Evaluation of Visualization Systems; Information Visualization in Data Mining and Knowledge Discovery (Eds. Fayyad U, Grinstein GG and Wierse A); Morgan Kaufmann, San Francisco, pp 83-85
Redpath R (2000) A Comparative Study of Visualization Techniques for Data Mining; a thesis submitted to the School of Computer Science and Software Engineering of Monash University in fulfillment of the degree Master of Computing, November 2000
Simoudis E (1996) Reality Check for Data Mining; IEEE Expert Vol 11, No 5, October, pp 26-33
WinViz Software (1997) for Excel 7.0, Version 3.0; Information Technology Institute, NCB, Singapore
Wong P, Bergeron RD (1995) A Multidimensional Multivariate Image Evaluation Tool; Perceptual Issues in Visualization (Eds.
Grinstein G and Levkowitz H); Springer-Verlag, Berlin, pp 95-108

Controlling the Spread of Dynamic Self Organising Maps

L. D. Alahakoon
School of Business Systems, Monash University, Clayton, Victoria, Australia
damminda.alahakoon@infotech.monash.edu.au

Summary. The Growing Self Organising Map (GSOM) has recently been proposed as an alternative neural network architecture based on the traditional Self Organising Map (SOM). The GSOM provides the user with the ability to control the map spread by defining a parameter called the Spread Factor (SF), which results in enhanced data mining as well as hierarchical clustering opportunities. In this paper we highlight the effect of the spread factor on the GSOM and contrast this effect with grid size change (increase and decrease) in the SOM. We also present experimental results to support our claims regarding the differences between the GSOM and the SOM.

1 Introduction

The Growing Self Organising Map (GSOM) has been proposed as a dynamically generated neural map which has particular advantages for data mining applications [4], [1]. Several researchers have previously developed incrementally growing SOM models [6], [8], [5]. These models have similarities to, as well as differences from, each other, but all attempt to solve the problem of pre-defined, fixed-structure SOMs. A parameter called the Spread Factor (SF) in the GSOM provides the data analyst with control over the spread of the map. The ability to control the spread of the map is unique to the GSOM and can be manipulated by the data analyst to obtain a progressive clustering of a data set at different levels of detail. The SF can be assigned values in the range 0 to 1, which are independent of the dimensionality of the data set; thus the SF provides a measure of the level of spread across different data sets. The spread of the GSOM can be increased by using a higher SF value. Such spreading out can continue until the analyst is satisfied with the level of clustering achieved, or until each node is
identified as a separate cluster. A data analyst using the SOM experiments with different map sizes by mapping the data set to grids (generally two-dimensional) of different sizes. In this paper we highlight the difference between such grid size change in traditional SOM usage and the controlled map spreading in the GSOM. We describe the SF-controlled growth of the GSOM as a technique for more representative feature map generation, where the inter- and intra-cluster relationships are better visualised.

The fixed-structure SOM forces distortion on its mapping due to its inability to expand when required. The traditional SOM does not provide a measure for identifying the size of a feature map with its level of spread; the data analyst using the SOM can only refer to the length and the width of the grid to relate to map size. We show that the shape or size of the SOM cannot be meaningfully related to the spread of the data without accurate knowledge of the distribution of the data set. The use of the SF in the GSOM provides a single indicator with which to dictate map spread, and also the ability to relate a map to a particular level of spread.

The identification of clusters in a map will always depend on the data set being used as well as the needs of the application. For example, the analyst may only be interested in identifying the most significant clusters for one application; alternatively, the analyst may be interested in more detailed clusters. Therefore the clusters identified will depend on the level of significance required by the analyst at a certain instance in time. In data mining type applications, since the analyst is not aware of the clusters in the data, it is generally necessary to study the clusters generated at different levels of spread (detail) to obtain an understanding of the data. Attempting to obtain different-sized SOMs for this purpose can result in distorted views due to the need to force a data set into maps with
pre-defined structure. Therefore we describe the number of clusters shown in a map as a variable which depends on the level of significance required by the data analyst, and we present the SF as our technique for providing this level of significance as a parameter to the map generation process.

In Section 2 we provide a description of the GSOM and SF techniques. Section 3 explains the workings of the spread-out effect in the GSOM with the SF as a parameter; here we also compare the GSOM and the SOM in terms of the spreading-out effect. In Section 4 we describe experimental results to clarify our claims. Section 5 provides the conclusions to the paper.

2 Controlling the GSOM Spread with the Spread Factor

2.1 Growing Self Organising Map

The GSOM is an unsupervised neural network which is initialised with a small number of starting nodes and grows new nodes to represent the input data [2], [3]. During node growth, the weight values of the nodes are self-organised by a method similar to that of the SOM. The GSOM process is as follows:

Initialisation Phase
a) Initialise the weight vectors of the starting nodes (four) with random numbers.
b) Calculate the Growth Threshold (GT) for the given data set according to the user requirements.

Growing Phase
a) Present input to the network.
b) Determine the weight vector that is closest to the input vector mapped to the current feature map (the winner), using Euclidean distance (as in the SOM). This step can be summarised as: find $q'$ such that $\|v - w_{q'}\| \le \|v - w_q\| \ \forall q \in N$, where $v$, $w$ are the input and weight vectors respectively, $q$ is the position vector for nodes and $N$ is the set of natural numbers.
c) The weight vector adaptation is applied only to the neighbourhood of the winner and the winner itself. The neighbourhood is a set of neurons around the winner, but in the GSOM the starting neighbourhood selected for weight adaptation is smaller than in the SOM (localised weight adaptation). The amount of adaptation (learning
rate) is also reduced exponentially over the iterations. Even within the neighbourhood, weights which are closer to the winner are adapted more than those further away. The weight adaptation can be described by

$$w_j(k+1) = \begin{cases} w_j(k), & j \notin N_{k+1} \\ w_j(k) + LR(k)\,(x_k - w_j(k)), & j \in N_{k+1} \end{cases}$$

where the learning rate $LR(k)$, $k \in \mathbb{N}$, is a sequence of positive parameters converging to 0 as $k \to \infty$; $w_j(k)$ and $w_j(k+1)$ are the weight vectors of node $j$ before and after the adaptation; and $N_{k+1}$ is the neighbourhood of the winning neuron at the $(k+1)$th iteration. The decreasing of $LR(k)$ in the GSOM depends on the number of nodes existing in the network at time $k$.
d) Increase the error value of the winner (the error value is the difference between the input vector and the weight vector).
e) When $TE_i \ge GT$ (where $TE_i$ is the total error of node $i$ and $GT$ is the growth threshold): grow new nodes if $i$ is a boundary node; distribute weights to the neighbours if $i$ is a non-boundary node.
f) Initialise the new node weight vectors to match the neighbouring node weights.
g) Initialise the learning rate ($LR$) to its starting value.
h) Repeat steps b) to g) until all inputs have been presented and node growth is reduced to a minimum level.

Smoothing Phase
a) Reduce the learning rate and fix a small starting neighbourhood.
b) Find the winner and adapt the weights of the winner and its neighbours in the same way as in the growing phase.

Therefore, unlike the original SOM, which adapts only its weights, the GSOM adapts both its weights and its architecture to represent the input data. Thus in the GSOM a node has a weight vector and two-dimensional coordinates which identify its position in the net, whereas in the SOM the weight vector is also called the position vector.

2.2 The Spread Factor

As described in the algorithm, the GSOM uses a threshold value called the Growth Threshold (GT) to decide when to initiate new node growth. The GT determines the amount of spread of the feature map to be generated. Therefore, if we require only a very abstract picture of the data,
a large GT will result in a map with fewer nodes; similarly, a smaller GT will result in the map spreading out more. When using the GSOM for data mining, it can be a good idea first to generate a smaller map, showing only the most significant clustering in the data, which will give the data analyst a summarised picture of the inherent clustering in the total data set.

Node growth in the GSOM is initiated when the error value of a node exceeds the GT. The total error value for node $i$ is calculated as

$$TE_i = \sum_{H_i} \sum_{j=1}^{D} (x_{i,j} - w_j)^2 \qquad (1)$$

where $H_i$ is the number of hits to node $i$, $D$ is the dimension of the data, and $x_{i,j}$ and $w_j$ are the input and weight values of node $i$ respectively. The criterion for new node growth is

$$TE_i \ge GT \qquad (2)$$

The GT value has to be decided experimentally, depending on the requirement for map growth. As can be seen from equation (1), the dimension of the data set makes a significant impact on the accumulated error ($TE$) value, and as such has to be considered when deciding the GT for a given application. The Spread Factor (SF) was introduced to address this limitation, eliminating the need for the data analyst to consider data dimensionality when generating feature maps. Although an exact measurement of the effect of the SF is yet to be carried out, it provides an indicator for identifying the level of spread in a feature map (across different dimensionalities). The derivation of the SF has been described in [4]; the formula used is

$$GT = -D \times \ln(SF)$$

Therefore, instead of having to provide a GT, which would take different values for different data sets, the data analyst provides a value of SF, which is used by the system to calculate the GT depending on the dimension of the data. This allows GSOMs to be identified by their spread factors, which becomes a basis for comparison of different maps.

3 The Spreading-out Effect of the GSOM compared to the traditional SOM

3.1 The
Relationship between the Shape of the SOM and the Input Data Distribution

Fig. 1. The shift of the clusters on a feature map due to the shape and size of the grid.

Figure 1 shows the diagram of a SOM with four clusters A, B, C and D, which can be used to explain the spread of clusters due to a change of grid size in a SOM. As shown in Figure 1(a), the SOM has a grid of length and width X and Y respectively. The intra-cluster distances are x and y, as shown in Figure 1(a). In Figure 1(b) a SOM has been generated on the same data, but the length of the grid has been increased (to Y' > Y) while the width has been maintained at the previous value (X = X'). The intra-cluster distances in Figure 1(b) are x' and y'. It can be seen that the inter-cluster distances in the y direction have changed in such a way that the cluster positions have been forced into maintaining the proportions of the SOM grid. The clusters themselves have been dragged out in the y direction due to the intra-cluster distances also being forced by the grid. Therefore in Figure 1, $X : Y \approx x : y$ and $X' : Y' \approx x' : y'$.

This phenomenon can be considered in an intuitive manner as follows [7]: considering two-dimensional maps, the inter- and intra-cluster distances in the map can be separately identified in the X and Y directions. We simply visualise the spreading-out effect of the SOM as the inter- and intra-cluster distances in the X and Y directions being proportionally adjusted to fit the width and length of the SOM.

The same effect has been described by Kohonen [7] as a limitation of the SOM called oblique orientation. This limitation has been observed and demonstrated experimentally with a two-dimensional grid, and we use the same experiment to indicate the limitations of the SOM for data mining.

Fig. 2. Oblique orientation of a SOM.

Figure 2(a) shows a SOM for a set of artificial data selected from a uniformly distributed two-dimensional square region. The attribute
values x, y in the data are selected to match the grid proportions, and as such the grid in Figure 2(a) is well spread out, providing an optimal map. In Figure 2(b) the input attribute proportions x : y do not match the grid, while the input data demands a grid of corresponding proportions; as such it has resulted in a distorted map with a crushed effect.

Kohonen has described oblique orientation as resulting from significant differences in the variance of the components (attributes) of the input data. Therefore the grid size of the feature map has to be initialised to match the values of the data attributes or dimensions to obtain a properly spread-out map. For example, consider a two-dimensional data set where the attribute values have the proportion x : y. In such an instance a two-dimensional grid can be initialised with n x m nodes, where n : m = x : y. Such a feature map will produce an optimal spread of clusters, maintaining the proportionality in the data. But in many data mining applications the data analyst is not aware of the data attribute proportions. The data is also mostly of very high dimensionality, and as such it becomes impossible to decide on a suitable two-dimensional grid structure and shape. Therefore initialising with an optimal grid for the SOM becomes a non-feasible solution.

Kohonen has suggested a solution to this problem by introducing adaptive tensorial weights in calculating the distance for identifying the winning nodes in the SOM during training. The formula for distance calculation is

$$d^2[x(t), w_i(t)] = \sum_{j=1}^{N} \psi_{i,j}^2 \, [\xi_j(t) - \mu_{i,j}(t)]^2 \qquad (3)$$

where $\xi_j$ are the attributes (dimensions) of the input $x$, the $\mu_{i,j}$ are the attributes of $w_i$, and $\psi_{i,j}$ is the weight of the $j$th attribute associated with node $i$. The values of $\psi_{i,j}$ are estimated recursively during the unsupervised learning process [7]. The resulting adjustment has been demonstrated by Kohonen using artificial data sets [7].
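The growth-threshold mechanism described above can be sketched in a few lines of Python. This is a minimal illustration of equations (1)-(2) and the GT formula, not the implementation used in the paper; the function names and the example dimensionality are our own assumptions.

```python
import math

def growth_threshold(dim: int, spread_factor: float) -> float:
    """GT = -D * ln(SF): a higher spread factor gives a lower threshold,
    and therefore earlier node growth and a more spread-out map."""
    if not 0.0 < spread_factor < 1.0:
        raise ValueError("spread factor must lie strictly between 0 and 1")
    return -dim * math.log(spread_factor)

def accumulate_error(total_error: float, x, w) -> float:
    """Add the squared Euclidean distance between input x and weight w (Eq. 1)."""
    return total_error + sum((xi - wi) ** 2 for xi, wi in zip(x, w))

def should_grow(total_error: float, gt: float) -> bool:
    """Node growth is initiated once the accumulated error reaches GT (Eq. 2)."""
    return total_error >= gt

# The same data set mapped at two spread factors: a low SF yields an
# abstract map (growth is rare), a high SF spreads the map out.
dim = 10
gt_abstract = growth_threshold(dim, 0.1)   # large GT, few nodes
gt_detailed = growth_threshold(dim, 0.9)   # small GT, many nodes
assert gt_abstract > gt_detailed
```

Note that the analyst supplies only the dimension-independent SF; the dimension-dependent GT is derived from it, which is what allows maps of data sets with different dimensionalities to be compared at the same spread factor.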

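Kohonen's tensorial-weight distance in equation (3) can likewise be sketched. This is an illustrative reconstruction rather than code from the paper: the recursive estimation of the weights psi is omitted, and the node structure and function names are our own.

```python
def weighted_distance_sq(x, mu, psi):
    """d^2 = sum_j psi_j^2 * (x_j - mu_j)^2 for one node (Eq. 3)."""
    return sum((p ** 2) * (xj - mj) ** 2 for p, xj, mj in zip(psi, x, mu))

def find_winner(x, nodes):
    """Index of the node minimising the weighted distance to input x."""
    return min(range(len(nodes)),
               key=lambda i: weighted_distance_sq(x, nodes[i]["mu"], nodes[i]["psi"]))

# Two nodes; the second attribute of node 0 is strongly down-weighted,
# mimicking compensation for a high-variance component of the input.
nodes = [{"mu": [0.0, 0.0], "psi": [1.0, 0.1]},
         {"mu": [3.0, 0.0], "psi": [1.0, 1.0]}]
```

Down-weighting a high-variance attribute keeps it from dominating the winner selection, which is how the adaptive weights counteract the oblique orientation described above.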