
Advances in Neuro-Information Processing (docx)


DOCUMENT INFORMATION

Basic information

Format: docx
Pages: 1,258
Size: 31.02 MB

Contents

Lecture Notes in Computer Science 5506

Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board

David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

Mario Köppen, Nikola Kasabov, George Coghill (Eds.)
Advances in Neuro-Information Processing
15th International Conference, ICONIP 2008
Auckland, New Zealand, November 25-28, 2008
Revised Selected Papers, Part I

Volume Editors

Mario Köppen
Network Design and Research Center, Kyushu Institute of Technology
680-4, Kawazu, Iizuka, Fukuoka 820-8502, Japan
E-mail: mkoeppen@ieee.org

Nikola Kasabov
Auckland University of Technology
Knowledge Engineering and Discovery Research Institute (KEDRI)
School of Computing and Mathematical Sciences
350 Queen Street, Auckland 10110, New Zealand
E-mail: nkasabov@aut.ac.nz

George Coghill
Auckland University of Technology, Robotics Laboratory
Department of Electrical and Computer Engineering
38 Princes Street, Auckland 1142, New Zealand
E-mail: g.coghill@auckland.ac.nz

Library of Congress Control Number: 2009929832
CR Subject Classification (1998): F.1, I.2, I.5, G.4, G.3, C.3
LNCS Sublibrary: SL – Theoretical Computer Science and General Issues
ISSN: 0302-9743
ISBN-10: 3-642-02489-0 Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-02489-4 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2009
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 12692460 06/3180 543210

Preface

The two volumes contain the papers presented at the ICONIP 2008 conference of the Asia Pacific
Neural Network Assembly, held in Auckland, New Zealand, November 25–28, 2008. ICONIP 2008 attracted around 400 submissions, with approximately 260 presentations accepted, many of them invited.

ICONIP 2008 covered a large scope of topics in the areas of: methods and techniques of artificial neural networks, neurocomputers, brain modeling, neuroscience, bioinformatics, pattern recognition, intelligent information systems, quantum computation, and their numerous applications in almost all areas of science, engineering, medicine, the environment, and business.

One of the features of the conference was the list of 20 plenary and invited speakers, all internationally established scientists, presenting their recent work. Among them: Professors Shun-ichi Amari, RIKEN Brain Science Institute; Shiro Usui, RIKEN Brain Science Institute, Japan; Andrzej Cichocki, RIKEN Brain Science Institute; Takeshi Yamakawa, Kyushu Institute of Technology; Kenji Doya, Okinawa Institute of Science and Technology; Youki Kadobayashi, National Institute of Information and Communications Technology, Japan; Sung-Bae Cho, Yonsei University, Korea; Alessandro Villa, University of Grenoble, France; Danilo Mandic, Imperial College, UK; Richard Duro, Universidade da Coruña, Spain; Andreas König, Technische Universität Kaiserslautern, Germany; Yaochu Jin, Honda Research Institute Europe, Germany; Bogdan Gabrys, University of Bournemouth, UK; Jun Wang, Chinese University of Hong Kong; Mike Paulin, Otago University, New Zealand; Mika Hirvensalo, University of Turku, Finland; Lei Xu, Chinese University of Hong Kong and Beijing University, China; Wlodzislaw Duch, Nicolaus Copernicus University, Poland; Gary Marcus, New York University, USA.

The organizers would also like to thank all special session organizers for their strong efforts to enrich the scope and program of this conference. The ICONIP 2008 conference covered the following special sessions: "Data Mining Methods for Cybersecurity," organized by Youki
Kadobayashi, Daisuke Inoue, and Tao Ban; "Computational Models and Their Applications to Machine Learning and Pattern Recognition," organized by Kazunori Iwata and Kazushi Ikeda; "Lifelong Incremental Learning for Intelligent Systems," organized by Seiichi Ozawa, Paul Pang, Minho Lee, and Guang-Bin Huang; "Application of Intelligent Methods in Ecological Informatics," organized by Michael J. Watts and Susan P. Worner; "Pattern Recognition from Real-world Information by SVM and Other Sophisticated Techniques," organized by Ikuko Nishikawa and Kazushi Ikeda; "Dynamics of Neural Networks," organized by Zhigang Zeng and Tingwen Huang; "Recent Advances in Brain-Inspired Technologies for Robotics," organized by Kazuo Ishii and Keiichi Horio; and "Neural Information Processing in Cooperative Multi-Robot Systems," organized by Jose A. Becerra, Javier de Lope, and Ivan Villaverde.

Another feature of ICONIP 2008 was that it was preceded by the First Symposium of the International Neural Network Society (INNS) on New Directions in Neural Networks (NNN 2008), held November 25, 2008. This symposium was on the topic "Modeling the Brain and Nervous Systems," with two streams: Development and Learning, and Computational Neurogenetic Modeling. Among the invited speakers were: A. Villa, J. Weng, G. Marcus, C. Abraham, H. Kojima, M. Tsukada, Y. Jin, and L. Benuskova. The papers presented at NNN 2008 are also included in these two volumes.

ICONIP 2008 and NNN 2008 were technically co-sponsored by APNNA, INNS, the IEEE Computational Intelligence Society, the Japanese Neural Network Society (JNNS), the European Neural Network Society (ENNS), the Knowledge Engineering and Discovery Research Institute (KEDRI), Auckland University of Technology, Toyota USA, Auckland Sky City, and the School of Computing and Mathematical Sciences at the Auckland University of Technology. Our sincere thanks to the sponsors!
The ICONIP 2008 and NNN 2008 events were hosted by the Knowledge Engineering and Discovery Research Institute (KEDRI) of the Auckland University of Technology (AUT). We would like to acknowledge the staff of KEDRI, and especially the Local Organizing Chair Joyce D'Mello, the Web manager Peter Hwang, and the publication team comprising Stefan Schliebs, Raphael Hu and Kshitij Dhoble, for their effort to make this conference an exciting event.

March 2009

Nikola Kasabov
Mario Köppen
George Coghill

ICONIP 2008 Organization

ICONIP 2008 was organized by the Knowledge Engineering and Discovery Research Institute (KEDRI) of the Auckland University of Technology (AUT).

Conference Committee

General Chair: Nikola Kasabov
Program Co-chairs: Mario Köppen, George Coghill, Masumi Ishikawa
Publicity Chairs: Shiro Usui, Bill Howel, Ajit Narayanan, Suash Deb
Plenary Chairs: Takeshi Yamakawa, Andreas König, Tom Gedeon
Panels Chairs: Robert Kozma, Mario Fedrizzi, M. Tsukada, Stephen MacDonell
Tutorial Chairs: Sung-Bae Cho, Martin McGinnity, L. Benuskova
Special Session Chairs: Soo-Young Lee, Richard Duro, Shaoning Pang
Poster Session Chairs: Bernadete Ribeiro, Qun Song, Frances Joseph
Workshop Chairs: Mike Paulin, Irwin King, Kaori Yoshida, Napoleon H. Reyes
Demonstrations Chairs: Sue Worner, Russel Pears, Michael Defoin-Platel
Local Organizing Chair: Joyce D'Mello
Technical Support Chair: Peter Hwang

Track Chairs

Neurodynamics: Takeshi Aihara, Tamagawa University, Japan
Cognitive Neuroscience: Alessandro Villa, UJF Grenoble, France
Brain Mapping: Jagath Rajapakse, Nanyang Technological University, Singapore
Neural Network Learning Paradigms: Nik Kasabov, Auckland University of Technology, New Zealand
Kernel Methods and SVM: Bernardete Ribeiro, University of Coimbra, Portugal
Ensemble Methods for Neural Networks: Andre C.P.L.F. de Carvalho, University of Sao Paulo, Brazil
Information Algebra: Andrzej Cichocki, RIKEN, Japan
Neural Networks for Perception: Akira Iwata, Nagoya Institute of Technology,
Japan
Neural Networks for Motoric Control: Minho Lee, Kyungpook National University, Korea
Neural Networks for Pattern Recognition: Paul Pang, Auckland University of Technology, New Zealand
Neural Networks for Robotics: Richard Duro, Universidade da Coruña, Spain
Neuromorphic Hardware: Leslie S. Smith, University of Stirling, UK
Embedded Neural Networks: Andreas Koenig, University of Kaiserslautern, Germany
Neural Network-Based Semantic Web, Data Mining and Knowledge Discovery: Irwin King, The Chinese University of Hong Kong, Hong Kong
Computational Intelligence: Wlodzislaw Duch, Nicolaus Copernicus University, Poland
Bioinformatics: Sung-Bae Cho, Yonsei University, Korea
Neural Paradigms for Real-World Networks: Tom Gedeon, The Australian National University, Australia
Quantum Neural Networks: Mika Hirvensalo, University of Turku, Finland
Neural Network Implementation in Hardware and Software: George Coghill, Auckland University of Technology, New Zealand
Biologically Inspired Neural Networks: Nik Kasabov, Auckland University of Technology, New Zealand

International Technical Committee

Abbass, Hussein; Abe, Shigeo; Aihara, Takeshi; Alippi, Cesare; Ando, Ruo; Andras, Peter; Asoh, Hideki; Ban, Tao; Bapi, Raju; Barczak, Andre Luis Chautard; Barros, Allan Kardec; Becerra Permuy, José Antonio; Behera, Laxmidhar; Behrman, Elizabeth; Beliczynski, Bartlomiej; Carvalho, Andre C.P.L.F. de; Chang, Jyh-Yeong; Cho, Sung-Bae; Choi, Seungjin; Chung, I-Fang; Cichocki, Andrzej; Coghill, George; Cohen, Avis; Dauwels, Justin; de Lope, Javier; de Souto, Marcilio; Dorronsoro, Jose; Dourado, Antonio; Duch, Wlodzislaw; Duro, Richard; Elizondo, David; Erdi, Peter; Fukumura, Naohiro; Fung, Wai-keung; Furukawa, Tetsuo; Fyfe, Colin; Garcez, Artur; Gedeon, Tom; Grana, Manuel; Gruen, Sonja; Guo, Shanqing; Hagiwara, Katsuyuki; Hammer, Barbara; Hartono, Pitoyo; Hayashi, Akira; Hayashi, Hatsuo; Hikawa, Hiroomi; Hirvensalo, Mika; Honkela, Antti; Horio, Keiichi; Huang, Kaizhu; Ikeda, Kazushi; Inoue, Daisuke; Ishida, Fumihiko;
Iwata, Kazunori; Iwata, Akira; Kadone, Hideki; Kanoh, Shin'ichiro; Kasabov, Nikola; Kim, Kyung-Joong; Kimura, Shuhei; King, Irwin; Kitajima, Tatsuo; Koenig, Andreas; Koeppen, Mario; Kondo, Toshiyuki; Kurita, Takio; Kurogi, Shuichi; Lai, Weng Kin; Lee, Minho; Lendasse, Amaury; Lim, CP; Liu, Ju; Liu, Shih-Chii; Lu, Bao-Liang; Ludermir, Teresa; Mandziuk, Jacek; Matsui, Nobuyuki; Mayr, Christian; McKay, Bob; Meier, Karlheinz; Mimura, Kazushi; Miyamoto, Hiroyuki; Molter, Colin; Morie, Takashi; Morita, Kenji; Nakajima, Koji; Nakauchi, Shigeki; Nguyen-Tuong, Duy; Nishii, Jun; Nishikawa, Ikuko; Ohnishi, Noboru; Omori, Toshiaki; Ozawa, Seiichi; Pang, Paul; Patel, Leena; Peters, Jan; Phillips, Steven; Rajapakse, Jagath; Reyes, Napoleon; Ribeiro, Bernardete; Rueckert, Ulrich; Sakai, Ko; Sato, Shigeo; Sato, Naoyuki; Schemmel, Johannes; Setiono, Rudy; Shibata, Tomohiro; Shimazaki, Hideaki; Shouno, Hayaru; Silva, Catarina; Small, Michael; Smith, Leslie S.; Spaanenburg, Lambert; Stafylopatis, Andreas; Suematsu, Nobuo; Suh, Il Hong; Sum, John; Suykens, Johan; Takenouchi, Takashi; Tambouratzis, Tatiana; Tanaka, Yoshiyuki; Tang, Ke; Tateno, Katsumi; van Schaik, Andre; Villa, Alessandro; Villaverde, Ivan; Wada, Yasuhiro; Wagatsuma, Hiroaki; Watanabe, Keigo; Watanabe, Kazuho; Watts, Michael; Wu, Jianhua; Xiao, Qinghan; Yamaguchi, Nobuhiko; Yamauchi, Koichiro; Yi, Zhang; Yoshimoto, Junichiro; Zhang, Zonghua; Zhang, Liming; Zhang, Liqing; Zhang, Byoung-Tak

Additional Referees

Pong Meau Yeong, Hua Nong Ting, Sim Kok Swee, Yap Keem Siah, Shahrel Azmin Suandi, Tomas Henrique Bode Maul, Nor Ashidi Mat Isa, Haidi Ibrahim, Tan Shing Chiang, Dhanesh Ramachand Ram, Mohd Fadzli Mohd Salleh, Khoo Bee Ee

Sponsoring Institutions

Asia Pacific Neural Network Assembly (APNNA)
International Neural Network Society (INNS)
IEEE Computational Intelligence Society
Japanese Neural Network Society (JNNS)
European Neural Network Society (ENNS)
Knowledge Engineering and Discovery Research Institute (KEDRI)
Auckland University of Technology (AUT)
Toyota USA
Auckland Sky City
School
of Computing and Mathematical Sciences at the Auckland University of Technology

INNS NNN 2008 Organization

INNS NNN 2008 was organized by the Knowledge Engineering and Discovery Research Institute (KEDRI) of the Auckland University of Technology (AUT).

Conference Committee

General Chair: Juyang Weng
Program Chair: Nikola Kasabov
Program Co-chairs: Mario Koeppen, John Weng, Lubica Benuskova
Local Organizing Chair: Joyce D'Mello
Technical Support Chair: Peter Hwang
Publishing Committee: Stefan Schliebs, Kshitij Dhoble, Raphael Hu
Symposium Co-chairs: Juyang Weng, Jeffrey L. Krichmar, Hiroaki Wagatsuma
Symposium Co-chairs: Lubica Benuskova, Alessandro E.P. Villa, Nikola Kasabov

Program Committee

Alessandro E.P. Villa; Anil Seth; Charles Unsworth; Chris Trengove; Cliff Abraham; Danil Prokhorov; Frederic Kaplan; Hiroaki Wagatsuma; Igor Farkas; James Wright; Jason Fleischer; Jeffrey L. Krichmar; Jiri Sima; Juyang Weng; Kazuhiko Kawamura; Lubica Benuskova; Michael Defoin Platel; Michal Cernansky; Ming Xie; Peter Jedlicka; Rick Granger; Rolf Pfeifer; Roman Rosipal; Xiangyang Xue; Zhengyou Zhang

Reviewers

Danil Prokhorov; Ming Xie; Frederic Kaplan; Hiroaki Wagatsuma; Kaz Kawamura; Juyang Weng; Lubica Benuskova; Igor Farkas; Chris Trengove; Charles Unsworth; Roman Rosipal; Jiri Sima; Alessandro Villa; Peter Jedlicka; Michal Cernansky; Michael Defoin-Platel; Xiangyang Xue; Wickliffe Abraham; James Wright; Zhengyou Zhang; Anil Seth; Jason Fleischer; Nikola Kasabov

Sponsoring Institutions

Auckland University of Technology (AUT)
Asia Pacific Neural Network Assembly (APNNA)
International Neural Network Society (INNS)
Knowledge Engineering and Discovery Research Institute (KEDRI)
IEEE Computational Intelligence Society

Personalised Modelling for Multiple Time-Series Data Prediction (H. Widiputra, R. Pears, and N. Kasabov)

Acknowledgements. We would like to thank Dr. Antoaneta Serguieva from the Centre for Empirical Finance, Brunel University, and Brunel Business School for her contributions to our previous work on building a global model to extract dynamic interaction between stock markets in the Asia Pacific.

Dynamic Neural Fuzzy Inference System

Yuan-Chun Hwang and Qun Song
Auckland University of Technology, Knowledge Engineering and Discovery Research Institute, Auckland, New Zealand
{peter.hwang,qun.song}@aut.ac.nz

Abstract. This paper proposes an extension to the original offline version of DENFIS. The new algorithm, DyNFIS, replaces the original triangular membership function with a Gaussian membership function and uses
back-propagation to further optimise the model. Fuzzy rules are created for each cluster centre based on the clustering outcome of the evolving clustering method. For each test sample, the output of DyNFIS is calculated through a fuzzy inference system based on the m most activated fuzzy rules, and these rules are updated by back-propagation to minimise the error. DyNFIS shows improvement on multiple benchmark datasets and a satisfactory result in the NN3 forecasting competition.

Introduction

The Dynamic Evolving Neural Fuzzy Inference System, DENFIS [1], is an online local model proposed by Kasabov and Song in 2002. The algorithm creates a Takagi-Sugeno (TS) fuzzy inference system (FIS) [2, 3] in which the fuzzy rules are created based on the clustering outcome of the input vectors. The system evolves by updating its clusters and rules as new input vectors enter the system. The concept of developing a fuzzy inference system based on data clustering is one of the reasons why DENFIS performs well as an online algorithm, since this method captures the different sub-groups of data points that share similar characteristics and develops a rule for each sub-group.

Online algorithms [4, 5] process the input data piece by piece and usually aim for fast processing speed and minimal memory usage, since they are designed to process streams of data. Because of this, the complexity of the algorithm is often minimised and the input data discarded once processed. Offline algorithms [6-8], on the other hand, have the benefit of being able to process the entire dataset as a whole. This often allows for better prediction accuracy, since there are more data for the algorithm to learn from. DENFIS was originally designed as an online algorithm, and because of this the FIS and learning method were simplified to meet those requirements. DENFIS offline was proposed together with the online version of DENFIS; it sacrifices the dynamic evolving aspect of the DENFIS algorithm and replaces it with a more sophisticated learning algorithm that aims to provide higher accuracy. It has shown improvement in prediction accuracy; however, more optimisation can be applied to improve its accuracy further.

In the original DENFIS algorithm, the antecedents are created based on the cluster centres and the consequents are created based on the samples near the cluster centres. No further supervised learning is applied once the fuzzy rules are created, until more input vectors enter the system. A triangular membership function was used to reduce the computational complexity.

M. Köppen et al. (Eds.): ICONIP 2008, Part I, LNCS 5506, pp. 1245–1250, 2009. © Springer-Verlag Berlin Heidelberg 2009

This paper proposes an improved offline version of DENFIS that extends the original algorithm with more in-depth learning. In the proposed algorithm, DyNFIS, a Gaussian MF is used in place of the triangular MF. Since a Gaussian MF extends infinitely, it provides better coverage of the problem space; the gaps that may be introduced by a triangular MF due to its finite range are eliminated. DyNFIS also allows the antecedents and consequents to be optimised using back-propagation to minimise the error in the active rules. With the above improvements, DyNFIS is expected to be more accurate due to back-propagation and more generalised due to its Gaussian MF, and therefore more suitable for real-world applications that are not time critical.

Algorithm Description

The DyNFIS offline learning process is outlined as follows:

1. Cluster the data to find n cluster centres using the Evolving Clustering Method (ECM [1]).
2. Create one fuzzy rule for each cluster. The antecedent is created from the cluster's centre position; the consequent is calculated by deriving a linear function from the samples that belong to the cluster.
3. For each testing sample, derive the output from the m rules closest to it and adjust the fuzzy membership functions and consequents using back-propagation to minimise the error.
4. Repeat step 3 for multiple epochs or until the desired accuracy is reached.
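Steps 1-2 above can be sketched as follows. This is an illustrative sketch, not the authors' code: a simple distance-threshold rule stands in for the full ECM (which also updates centre positions and cluster radii online), the per-cluster linear fit is reduced to a constant term, and all names (`simple_ecm`, `init_rules`, `dthr`) are invustrative inventions.

```python
import math

def simple_ecm(samples, dthr):
    """Step 1 (illustrative): distance-threshold clustering standing in for ECM.
    A sample farther than dthr from every existing centre spawns a new cluster."""
    centres, assign = [], []
    for x in samples:
        dists = [math.dist(x, c) for c in centres]
        if not centres or min(dists) > dthr:
            centres.append(list(x))
            assign.append(len(centres) - 1)
        else:
            assign.append(dists.index(min(dists)))
    return centres, assign

def init_rules(targets, centres, assign, sigma=0.5):
    """Step 2 (illustrative): one fuzzy rule per cluster. The antecedent takes the
    centre as the Gaussian mean m; the consequent's constant term is the mean
    target of the cluster, a crude stand-in for the per-cluster linear fit."""
    rules = []
    for l, c in enumerate(centres):
        ys = [t for t, a in zip(targets, assign) if a == l]
        rules.append({
            "m": list(c),                          # antecedent centres
            "sigma": [sigma] * len(c),             # antecedent widths
            "alpha": [1.0] * len(c),               # antecedent scaling
            "beta": [sum(ys) / len(ys)] + [0.0] * len(c),  # consequent coefficients
        })
    return rules
```

Two well-separated pairs of samples yield two clusters and two rules, each initialised around its own centre.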
The data set is composed of N data pairs with P input variables and one output variable, {[x_{i1}, x_{i2}, ..., x_{iP}], y_i}, i = 1, 2, ..., N. M fuzzy rules are defined initially through the clustering procedure; the l-th rule has the form

R_l: If x_1 is about F_{l1} and x_2 is about F_{l2} and ... and x_P is about F_{lP}, then

    n_l = \beta_{l0} + \beta_{l1} x_1 + \beta_{l2} x_2 + \cdots + \beta_{lP} x_P    (1)

The F_{lj} are fuzzy sets defined by the following Gaussian-type membership function (MF):

    \mathrm{GaussianMF}(x) = \alpha \exp\left( -\frac{(x - m)^2}{2\sigma^2} \right)    (2)

Using the modified centre-average defuzzification procedure, the output value of the system for an input vector x_i = [x_{i1}, x_{i2}, ..., x_{iP}] is calculated as

    f(x_i) = \frac{\sum_{l=1}^{M} n_l \prod_{j=1}^{P} \alpha_{lj} \exp\left( -\frac{(x_{ij} - m_{lj})^2}{2\sigma_{lj}^2} \right)}{\sum_{l=1}^{M} \prod_{j=1}^{P} \alpha_{lj} \exp\left( -\frac{(x_{ij} - m_{lj})^2}{2\sigma_{lj}^2} \right)}    (3)

Suppose DyNFIS is given a training input-output data pair [x_i, t_i]. The system minimises the objective function

    E = \frac{1}{2}\left[ f(x_i) - t_i \right]^2    (4)

The back-propagation (steepest descent) algorithm yields formulas (5)-(11) for the optimisation of the parameters m_{lj}, n_l, \alpha_{lj}, \sigma_{lj} and \beta_l:

    m_{lj}(k+1) = m_{lj}(k) - \eta_m \, \phi_l(x_i) \left[ f^{(k)}(x_i) - t_i \right] \left[ n_l(k) - f^{(k)}(x_i) \right] \frac{x_{ij} - m_{lj}(k)}{\sigma_{lj}^2(k)}    (5)

    n_l(k+1) = n_l(k) - \eta_n \, \phi_l(x_i) \left[ f^{(k)}(x_i) - t_i \right]    (6)

    \alpha_{lj}(k+1) = \alpha_{lj}(k) - \eta_\alpha \, \frac{\phi_l(x_i)}{\alpha_{lj}(k)} \left[ f^{(k)}(x_i) - t_i \right] \left[ n_l(k) - f^{(k)}(x_i) \right]    (7)

    \sigma_{lj}(k+1) = \sigma_{lj}(k) - \eta_\sigma \, \phi_l(x_i) \left[ f^{(k)}(x_i) - t_i \right] \left[ n_l(k) - f^{(k)}(x_i) \right] \frac{\left( x_{ij} - m_{lj}(k) \right)^2}{\sigma_{lj}^3(k)}    (8)

where \phi_l(x_i) is the normalised activation of rule l,

    \phi_l(x_i) = \frac{\prod_{j=1}^{P} \alpha_{lj} \exp\left( -\frac{(x_{ij} - m_{lj})^2}{2\sigma_{lj}^2} \right)}{\sum_{l=1}^{M} \prod_{j=1}^{P} \alpha_{lj} \exp\left( -\frac{(x_{ij} - m_{lj})^2}{2\sigma_{lj}^2} \right)}    (9)

and the consequent coefficients are updated as

    \beta_{l0}(k+1) = \beta_{l0}(k) - \eta_\beta \, \phi_l(x_i) \left[ f^{(k)}(x_i) - t_i \right]    (10)

    \beta_{lj}(k+1) = \beta_{lj}(k) - \eta_\beta \, \phi_l(x_i) \left[ f^{(k)}(x_i) - t_i \right] x_{ij}    (11)

Here \eta_m, \eta_n, \eta_\alpha, \eta_\sigma and \eta_\beta are the learning rates for updating m_{lj}, n_l, \alpha_{lj}, \sigma_{lj} and \beta_l respectively. In the DyNFIS algorithm, the following indices are used:

- training data points: i = 1, 2, ..., N;
- input variables: j = 1, 2, ..., P;
- fuzzy rules: l = 1, 2, ..., M;
- training iterations: k = 1, 2, ...
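The inference of eq. (3) and the consequent updates of eqs. (9)-(11) can be checked with a small numerical sketch. This is illustrative code written from the paper's definitions, not the authors' implementation; the function and variable names are invented.

```python
import math

def activations(ms, sigmas, alphas, x):
    """Unnormalised firing strength of each rule l (the products inside eq. (3)):
    prod_j alpha_lj * exp(-(x_j - m_lj)^2 / (2 sigma_lj^2))."""
    acts = []
    for m, s, a in zip(ms, sigmas, alphas):
        w = 1.0
        for xj, mj, sj, aj in zip(x, m, s, a):
            w *= aj * math.exp(-(xj - mj) ** 2 / (2.0 * sj ** 2))
        acts.append(w)
    return acts

def infer(ns, acts):
    """Eq. (3): centre-average defuzzification, f(x) = sum_l n_l w_l / sum_l w_l."""
    return sum(n * w for n, w in zip(ns, acts)) / sum(acts)

def beta_step(beta, phi_l, err, x, eta):
    """Eqs. (10)-(11): one gradient step on a rule's consequent coefficients,
    where phi_l is the normalised activation of eq. (9) and err = f(x) - t."""
    out = [beta[0] - eta * phi_l * err]                                # eq. (10)
    out += [b - eta * phi_l * err * xj for b, xj in zip(beta[1:], x)]  # eq. (11)
    return out
```

For two one-dimensional rules centred at 0 and 1 with unit widths and consequents n = [0, 1], the input x = [0] fires the first rule fully and the second with weight exp(-0.5), giving f = exp(-0.5) / (1 + exp(-0.5)) ≈ 0.378.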
Case Study – Benchmark

We have applied DyNFIS to the Mackey-Glass dataset, which has been widely used as a benchmark problem in the areas of neural networks, fuzzy systems and hybrid systems for time-series prediction. The dataset was created with the following differential equation:

    \frac{dx(t)}{dt} = \frac{0.2\, x(t - \tau)}{1 + x^{10}(t - \tau)} - 0.1\, x(t)    (12)

The values at integer time points were obtained using the fourth-order Runge-Kutta method. Here we assume a time step of 0.1, x(0) = 1.2, τ = 17, and x(t) = 0 for t < 0 [1]. The goal is to predict the value x(t+6) from the input vector [x(t-18), x(t-12), x(t-6), x(t)] for any value of the time t.

The following experiment was conducted with 1000 data points, from t = 118 to 1117. The first 500 data points were taken as the training data and the remaining 500 as the testing data. A model is created using the training data and then tested on the test data. The results are shown in Table 1 along with other published results for comparison.

Table 1. Prediction accuracy comparison of several offline algorithms on the t+6 Mackey-Glass dataset [1]

Methods          Neurons or Rules  Epochs  Training NDEI  Testing NDEI
MLP-BP           60                50      0.083          0.090
MLP-BP           60                500     0.021          0.022
ANFIS            81                50      0.032          0.033
ANFIS            81                200     0.028          0.029
DENFIS I (TS)    883               2       0.023          0.019
DENFIS II (MLP)  58                100     0.017          0.016
DyNFIS (TS)      55                100     0.017          0.016

As shown in Table 1, DyNFIS performs better than DENFIS I offline and is on par with DENFIS II offline (MLP). The improvements are expected, due to the additional learning and the use of the Gaussian membership function. The similar performance of DyNFIS and DENFIS II (MLP) is likely caused by the additional learning being performed at different levels: DENFIS II (MLP) improves the consequent of each rule without optimising the membership functions, whereas DyNFIS optimises the fuzzy membership functions instead and does less optimisation of each rule's accuracy.
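The benchmark series of eq. (12) can be generated as follows. A sketch only: the delayed term is held fixed across each Runge-Kutta step (a common simplification when integrating delay differential equations), and the function name and sampling choices are illustrative, not taken from the paper.

```python
def mackey_glass(n_units, tau=17.0, dt=0.1, x0=1.2):
    """Generate the Mackey-Glass series of eq. (12) with 4th-order Runge-Kutta:
    dx/dt = 0.2 x(t - tau) / (1 + x(t - tau)^10) - 0.1 x(t),
    using x(t) = 0 for t < 0 and returning x at integer times t = 0..n_units."""
    d = int(round(tau / dt))      # delay expressed in grid steps
    total = int(round(n_units / dt))
    xs = [x0]                     # xs[i] approximates x(i * dt)

    def deriv(x, x_tau):
        return 0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x

    for i in range(total):
        x = xs[i]
        x_tau = xs[i - d] if i - d >= 0 else 0.0
        # The delayed value is held fixed over the step (a common simplification).
        k1 = deriv(x, x_tau)
        k2 = deriv(x + 0.5 * dt * k1, x_tau)
        k3 = deriv(x + 0.5 * dt * k2, x_tau)
        k4 = deriv(x + dt * k3, x_tau)
        xs.append(x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4))

    per_unit = int(round(1.0 / dt))
    return [xs[i] for i in range(0, total + 1, per_unit)]
```

With x(t) = 0 for t < 0, the delayed term vanishes until t = τ, so the series initially decays as 1.2·exp(-0.1t), which gives a quick sanity check on the integrator.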
The prediction error on the t+6 Mackey-Glass dataset is very low across the top-performing algorithms, and it has become difficult to show significant differences. A more demanding test of the DyNFIS algorithm is therefore carried out on the t+85 Mackey-Glass dataset. For predicting the t+85 output value, 3000 data points, from t = 201 to 3200, were extracted as the training data, and 500 data points, from t = 5001 to 5500, were extracted as the testing data.

Table 2. Prediction accuracy comparison of several offline algorithms on the t+85 Mackey-Glass dataset [1]

Methods          Neurons or Rules  Epochs  Training NDEI  Testing NDEI
MLP-BP           60                50      0.083          0.090
ANFIS            81                50      0.032          0.033
ANFIS            81                500     0.024          0.025
DENFIS I (TS)    116               2       0.068          0.068
DENFIS I (TS)    883               2       0.023          0.019
DENFIS II (MLP)  58                60      0.020          0.020
DyNFIS (TS)      91                500     0.017          0.018

As shown in Table 2, DyNFIS has no difficulty in solving the more difficult t+85 Mackey-Glass problem.

Case Study – Real World

The proposed algorithm was entered in the NN3 Neural Network Forecasting competition [9] under the original name "DENFIS" to forecast 11 time series. It achieved 10th place using one set of parameters for all time-series problems. Better performance can be expected should the parameters be optimised for each time series' training data.

Conclusion and Directions for Further Research

This paper presents an improved version of the DENFIS offline algorithm, DyNFIS, which enhances its learning by applying additional learning to the fuzzy inference system as well as using a Gaussian membership function. The algorithm uses the m most highly activated fuzzy rules to dynamically compose an inference system for calculating the output for a given input vector. The proposed system demonstrates superiority when compared with global models, including MLP, ANFIS and the original DENFIS offline version, on benchmark data. In real-life problems, it also demonstrated high performance in the NN3 competition with minimal parameter tuning.

DyNFIS builds an initial fuzzy inference system based on the result of the clustering process. These fuzzy memberships are then optimised using back-propagation to minimise the error in each rule. In the recall process, DyNFIS gives satisfactory results if the recall examples are near the cluster centres. Future research directions include: automated parameter optimisation, alternative fuzzy rule types, and application of the DyNFIS model to classification problems.

References

1. Kasabov, N.K., Song, Q.: DENFIS: Dynamic evolving neural-fuzzy inference system and its application for time-series prediction. IEEE Transactions on Fuzzy Systems 10(2), 144–154 (2002)
2. Wang, L.-X.: Adaptive Fuzzy Systems and Control: Design and Stability Analysis. Prentice-Hall, Englewood Cliffs (1994)
3. Takagi, T., Sugeno, M.: Fuzzy identification of systems and its applications to modeling and control. IEEE Transactions on Systems, Man, and Cybernetics 15(1), 116–132 (1985)
4. Kasabov, N.: Evolving fuzzy neural networks for on-line supervised/unsupervised, knowledge-based learning. IEEE Trans. SMC – Part B: Cybernetics 31(6), 902–918 (2001)
5. Domeniconi, C., Gunopulos, D.: Incremental support vector machine construction. In: ICDM 2001, Proceedings of the IEEE International Conference on Data Mining (2001)
6. Jang, J.-S.R.: ANFIS: Adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics 23, 665–684 (1993)
7. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Networks 2(5), 359–366 (1989)
8. Vapnik, V.N.: Statistical Learning Theory. John Wiley and Sons, New York (1998)
9. Crone, S.F.: NN3 Neural Network Forecasting competition (2006), http://www.neural-forecasting-competition.com/NN3/results.htm (cited 06/11/2008)
II-1038 Kobayakawa, Shunsuke II-679 Kobayashi, Kunikazu II-147 Koga, Takanori II-583, II-1065 Kojima, Hiroshi I-88 Komazawa, Daisuke I-731 Kănig, Andreas II-477, II-827 o Kordos, Miroslaw II-453 Kărner, Edgar I-805, I-813, II-567 o Koshimizu, Hiroshi I-1146 Kuba, Hinata II-361 Kubota, Shigeru I-310 Kugler, Mauricio II-300, II-859 Kuiper, Martin II-615 Kuremoto, Takashi II-147 Kuroda, Shigeru I-72, I-423 Kuroda, Taro I-851 Kurokawa, Hiroaki II-776 Kuroyanagi, Susumu II-300, II-859 Kvi˜era, V´clav I-893 c a Laaksonen, Jorma II-703 Lai, Pei Ling II-935 Lai, Weng Kin II-599 Lan, Shaohua I-571 Latchoumane, Charles II-979 Lau, Phooi Yee II-745 Lauwereyns, Johan I-416 Lee, Chong Ho II-1072 Lee, Hong II-639 Lee, Nung Kion I-478 Lee, Sanghoon I-64 Lee, Sang Hyoung I-64 Lee, Shih-Tseng I-194 Lee, Soo Young I-613, II-80 Lester, David II-1049 1254 Author Index Leung, Chi-sing II-324, II-919 Leung, Chi Sing II-316 Li, Fajie I-1002 Li, Jianming I-428 Li, Ming Hui I-428 Li, Xi I-521 Li, Yongming II-493 Lian, Xiao-Chen II-647 Liao, Ling-Zhi II-994 Libralon, Giampaolo L I-486 Lim, Gi Hyun I-747 Lim, Kian Guan I-428 Lim, Mei Kuan II-599 Lin, Chin-Teng II-1038 Lin, Chunhao II-426 Lin, Lanxin II-426 Lin, Ming-An I-194 Lin, Tzu-Chao II-687 Lin, Yang-Cheng I-647 Liou, Cheng-Yuan II-3, II-378 Liu, Mu-kun II-687 Liu, Qingshan II-1003 Liu, Qingzhong I-723 Liu, Zheng-guang II-695, II-736 Liu, Zhifeng II-527 Loo, Chu Kiong I-1103 Lopez-Guede, Jose Manuel I-1053 Lopez-Pe˜ a, Fernando I-1037 n L´pez, Luis Fernando de Mingo II-123 o L´pez, M II-402, II-410, II-418 o Lorena, Ana C I-486 Lu, Bao-Liang II-647 Lu, Shao-Wei II-1038 Lucena, Fausto I-169 Luo, Ming I-129 Ma, Jun I-707, II-623 Ma, Liying II-711 Ma, Yu II-768 Maia, Jos´ Everardo B I-1180 e Makula, Matej I-671 Mandic, Danilo P I-453 Maravall, Dar´ ıo I-1029 Marinov, Corneliu A II-885 Mariyama, Toshisada I-327 Markoˇov´, M´ria I-111 s a a Marmulla, R II-575 Mart´ ınez, V´ ıctor II-893 Marwala, Tshilidzi II-485, II-517, II-728, II-752 
Masisi, Lesedi Melton II-517 Matsuda, Michiyuki I-469 Matsuda, Nobuo II-703 Matsui, Hideyuki II-155 Matsumoto, Kazuya I-1196 Matsumoto, Masashi I-688 Matsushima, Fumiya I-352 Matsuyama, Yasuo I-352, I-621, II-369 Matsuzaki, Masunori II-583 Matykiewicz, Pawel II-70 Maul, Tom´s II-599 a Maurice, Monique I-224, I-400, II-979 Mayr, Christian I-137 Meechai, Asawin II-260 Megali, Giuseppe II-720 Mendis, B Sumudu U II-655 Menhaj, Mohammad B I-883 Messom, Chris I-587, I-1095 Mishra, Bijan Bihari I-1121 Mistry, Jaisheel II-752 Miyamoto, Daisuke I-539, I-547 Miyamoto, Goh I-851 Miyazawa, Yasuhiro I-384 Miyoshi, Seiji II-195 Mogi, Ken I-377 Monk, Travis I-408 Morabito, Francesco C II-720 Moriguchi, Yoshitaka II-344 Morikawa, Kazuya I-937 Morishige, Ken-ichi I-336 Morita, Masahiko I-384 Morita, Mikio I-763 Morita, Satoru II-784 Mouri, Motoaki II-1021 Mudroch, Martin I-893 Mulder, Wim De II-615 Murata, Yoshitoshi I-851 Murray, Alan I-95 Musial, Katarzyna II-607 Musirin, Ismail II-445 Nagamatu, Masahiro I-787 Nagata, Kenji I-696 Nakajima, Hidekazu I-259 Nakajima, Koji I-875 Nakajima, Yukiko I-986 Nakamura, Takeshi I-469 Nakao, Koji I-579 Nakayama, Hirotaka I-995 Nascimento, Mari´ C.V I-461 a Author Index Nattkemper, Tim W I-513, II-911 Nelwamondo, Fulufhelo II-485, II-517 Nelwamondo, Fulufhelo V II-752 Neoh, Tze Ming I-1103 Neto, Manoel J.R II-45 Neves, Jo˜o Carvalho das I-723 a Neves, Joao Carvalho II-97 Ng, Ee Lee II-599 Nikkilă, Janne II-559 a Nina, Zhou II-551 Nishida, Youichi I-352 Nishida, Yuya I-787 Nishigami, Toru I-208 Nishiguchi, Junya I-995 Nishikawa, Hitoshi I-821 Nishikawa, Ikuko I-986 Nishino, Eisuke I-579 Nishioka, Ryota I-962 Nitta, Katsumi II-220 Nomoto, Atsuo II-671 Nomura, Yoshihiko I-929 Nonaka, Yukihiro I-161 Noor, Rabiatul Adawiah Mat II-469 Nugaev, Ildar II-131 Obayashi, Masanao II-147 Ohashi, Hirodata I-267 Ohkouchi, Kazuya I-579 Ohnishi, Noboru I-169 Okada, Masato I-655, II-195 Okamoto Jr., Jun I-1110 Oliveira, Pedro G de II-276 Omori, Takashi I-293 Ong, 
Chuan Poh I-1103 Onoda, Takashi II-663 Osana, Yuko II-203, II-212 Osman, Hassab Elgawi II-104, II-760 Osman, Yasir Salih II-808 Othman, Muhammad Murtadha II-445 Ozawa, Seiichi I-821, I-1163, I-1196 Pang, Shaoning I-962, I-1163, I-1196 Partzsch, Johannes I-137 Patel, Leena N I-95 Patel, Pretesh B II-728 Pavlou, Athanasios I-40 Pears, Russel I-1237 Pecha˜, Pavel I-893 c Peng, Jianhua I-129 Pestian, John P II-70 1255 Phon-Amnuaisuk, Somnuk I-153, I-232 Pilichowski, Maciej II-88 Plana, Luis II-1049 Pontin, David R I-909 Posp´ ıchal, Jiˇ´ I-200, II-284 rı Prieto, Abraham I-1037 Prom-on, Santitham II-260 Prudˆncio, Ricardo B.C II-45 e Puntonet, C.G II-402, II-410, II-418 Quek, Chai I-1137 Ralescu, Anca L I-453 Ram´ ırez, J II-402, II-410, II-418 Ramanathan, Kiruthika I-428 Rast, Alexander II-1057 Raszkowsky, J II-575 Rebhan, Sven II-960 Ren, Lu II-335 Reyes, Napoleon H I-1071, I-1079 Ribeiro, Bernardete I-723, II-97 Ribeiro, Marcelo N II-45 Richter, Matthias II-816 Roeder, Stefan W II-816 Rosen, Alan I-794 Rosen, David B I-794 Rossi, Andr´ L.D II-252 e Roy, Asim I-821 Rush, Elaine I-1204 Rutkowski, Tomasz M I-453 Saeki, Katsutoshi II-877 Saethang, Thammakorn II-260 Saha, Sriparna II-543 Saito, Toshimichi I-837, I-844, I-1146 Sakai, Ko I-301 Sakakibara, Kazutoshi I-986 Sakamoto, Hirotaka I-986 Sakamoto, Kazuhiro I-259 Sakumura, Yuichi I-469 Salas-Gonzalez, D II-402, II-410, II-418 Salazar, Diogo R.S II-11 Sallis, Philip I-917, I-1211 Samsudin, Mohamad Faizal bin II-631 Samura, Toshikazu I-344 Sarrafzadeh, Abdolhossein I-587 Sato, Hideaki II-703 Sato, Masa-aki I-336 Sato, Masanori I-771 Sato, Masayoshi I-293 Sato, Naoyuki I-186 1256 Author Index Sato, Yasuomi D I-867 Sawada, Koji I-352 Schă ny, Rene I-137 u Scherbart, Alexandra I-513, II-911 Schleif, Frank-Michael II-61 Schliebs, Stefan I-1229, II-615 Schramm, Lisa I-48 Schrauwen, Benjamin I-56 Segovia, F II-402, II-410, II-418 Sekine, Yoshifumi II-877 Sekino, Masashi II-220 Sendhoff, Bernhard I-48, I-216 Shanmuganathan, 
Subana I-917 Sharif, Waqas II-960 Shen, Minfen I-436 Shi, Luping I-428 Shiba, Naoto I-494 Shibata, Katsunari I-755, II-631, II-970 Shibata, Tomohiro I-679, II-1029 Shiblee, Md II-27, II-37 Shiino, Masatoshi I-867 Shimizu, Atsushi II-212 Shimizu, Ryo II-877 Shin, Heesang I-1071 Shouno, Osamu II-228 Silva, Catarina I-723 ˇ ıma, Jiˇ´ II-179 S´ rı Soares, Carlos II-252 Sol´-Casals, Jordi I-224, II-979 e Solgi, Mojtaba I-80 Soltic, Snjezana I-1129 Song, Qun I-1204, I-1221, I-1245 Sonoh, Satoshi II-1065 Sossa, Humberto II-800 Souza, Renata M.C.R de II-11, II-19 Stroobandt, Dirk I-56 Sudo, Tamami I-377 Suemitsu, Atsuo I-384 Suetake, Noriaki II-583 Suetsugu, Ken I-494 Suh, Il Hong I-64, I-747 Sum, John II-324, II-919 Sum, Pui Fai II-316 Sun, Lisha II-426 Sun, Tieli II-501 Sung, Andrew H I-723 Susnjak, Teo I-945 Suzuki, Kenji II-244 Suzuki, Michiyasu I-369 Suzuki, Takeshi I-259 Tabata, Yusuke I-14 Tagawa, Yoshihiko I-494 Tajima, Fumiaki II-703 Takahashi, Haruhisa II-344, II-361, II-671 Takahashi, Hiromu II-1013 Takata, Mika II-369 Takazawa, Kazuhiro I-444 Takeda, Taichi I-851 Takeuchi, Johane II-228 Takeuchi, Jun’ichi I-579 Takumi, Ichi II-1021 Tamada, Hirotaka I-715 Tamei, Tomoya II-1029 Tanaka, Toshihisa I-453 Tanaka, Toshiyuki II-155 Tanaka, Yuji I-867 Taniguchi, Tadahiro I-953 Tanino, Tetsuzo I-970 Tanscheit, Ricardo II-386 Tatsumi, Keiji I-970 Tay, Yong Haur II-745 Then, Siu Jing II-808 Theng, Lau Bee II-509, II-591, II-927 Timm, Wiebke I-513 Ting, K.H I-436 Togashi, Yuki I-293 Tohru, Yagi II-1021 Toledo, Franklina M.B de I-461 Tomita, Jyun-ichiro II-353 Torikai, Hiroyuki I-145, I-208 Tou, Jing Yi II-745 Tovar, Gessyca Maria II-851 Tran, Ha Nguyen I-851 Tripathi, B.K II-867 Trujillo, Marcelo I-1211 Tsai, I-Ling II-1038 Tsuboyama, Manabu I-32 Tsuda, Ichiro I-72, I-423 Tsujino, Hiroshi II-228 Tsukada, Minoru I-72, I-416, I-423 Tsukada, Yuki I-469 Tung, Sau Wai I-1137 Uchibe, Eiji I-22 Uchino, Eiji II-583 Ueda, Michihito II-843 Uezu, Tatsuya II-195 Author Index Uota, 
Shiori I-240 Utsunomiya, Hiroki II-970 Vasilyev, Vladimir II-131 V´zquez, Roberto A II-800 a Vellasco, Marley II-386, II-461 Vellasco, Pedro II-461 Verma, Anju I-1204 Verma, Brijesh II-639 Versaci, Mario II-720 Verstraeten, David I-56 Vialatte, Fran¸ois-Benoit I-177, I-224, c I-318, I-400, I-469, II-979 Vieira, Armando II-97, I-723 Vieira, V.M.M II-575 Vig´rio, Ricardo II-559 a Vijaya Rama Raju, A II-394 Villaverde, Ivan I-1021, I-1045 Villmann, Thomas II-61 Vrahatis, Michael N II-308 Wagatsuma, Hiroaki I-119, I-859, I-1087 Wakita, Toshihiro I-293 Wang, Chen-Cheng I-647 Wang, Dianhui I-478, I-521, II-792 Wang, Fuxiang I-737 Wang, Hongjiang II-316 Wang, Jun II-1003 Wang, Linlu II-535 Wang, Lipo II-551 Wang, Shiyu I-1188 Wang, Xuebing I-779 Wang, Yuanyuan II-535, II-768 Watanabe, Kazuho I-655 Watanabe, Sumio I-688, I-696, II-903 Watts, Michael J I-901, I-909 Weber, Theophane I-177, I-318 Weng, Juyang I-80 Wersing, Heiko I-805, I-813 Widiputra, Harya I-1237 Willert, Volker I-275 Wilson, Peter II-1049 Wimalaratna, Sunil II-979 Wong, Kok Wai I-638 Wong, Wei Kin I-1103 Worner, Sue P I-901, I-909 Wu, Tony I-194 Wu, Yongjun I-129 Xiu, Jie I-1188 Xiu, Yan I-1188 Xu, Qiuliang I-563 Yamada, Kazuhiro I-978 Yamagata, Masaya I-579 Yamaguchi, Yoko I-186, I-859 Yamaguti, Yutaka I-72, I-423 Yamakawa, Takeshi I-369, I-962 Yamakawa, Toshitaka I-369 Yamasaki, Hironobu II-663 Yamauchi, Koichiro I-293, I-1154 Yamazaki, Keisuke I-629 Yang, Fengqin II-501 Yang, Fu-Shu II-1038 Yang, Haiqin II-53 Yano, Masafumi I-259 Yao, Xin I-216 Yeh, Chien-Ting II-687 Yeh, Chung-Hsing I-647 Yi, Chuho I-747 Yokoi, Hirokazu I-240, II-679 Yonekawa, Masato II-776 Yong, Yen San II-808 Yoshikawa, Tomohiro II-1013 Yoshino, Tetsuma I-621 Yoshioka, Katsunari I-579 Yoshioka, Taku I-336 Yu, Yan I-571 Yuan, Zhijian II-987 Yun, Yeboon I-995 Zakos, John II-639 Zender, Paul M II-70 Zeng, Xiaoping II-493 Zhang, Changhai II-501 Zhang, Chen I-275 Zhang, Jun I-737, II-695, II-736 Zhang, Kau I-1163 Zhang, Liming I-251, 
II-535 Zhang, Ping I-529 Zhang, Qi I-392 Zhao, Zhonghua I-563 Zhu, Dingyun II-655 Zorkadis, Vasilios C I-595 Zulueta, Ekaitz I-1053 Zuppicich, Alan I-1129
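The DyNFIS recall process summarised at the head of this section (fuzzy memberships placed around cluster centres, with consequents tuned by back-propagation to minimise per-rule error) can be illustrated with a minimal sketch. This is not the authors' published implementation: the Gaussian membership form, the first-order Takagi–Sugeno consequents, and the plain gradient update on the linear consequents only are simplifying assumptions for illustration, and all function names are hypothetical.

```python
import numpy as np

def ts_recall(x, centers, sigmas, rule_params):
    """First-order Takagi-Sugeno recall (illustrative sketch).

    Each rule j has a Gaussian membership around a cluster centre
    (centers[j], widths sigmas[j]) and a linear consequent
    rule_params[j] = [b_j, a_j1, ..., a_jd].  The output is the
    firing-strength-weighted average of the per-rule linear outputs.
    """
    d2 = ((x - centers) ** 2 / (2.0 * sigmas ** 2)).sum(axis=1)
    w = np.exp(-d2)                                  # firing strengths, shape (n_rules,)
    y_rules = rule_params[:, 0] + rule_params[:, 1:] @ x
    return float(w @ y_rules / w.sum())

def consequent_step(x, t, centers, sigmas, rule_params, lr=0.1):
    """One gradient (back-propagation) step on the linear consequents
    for target t.  Returns the signed output error before the update;
    rule_params is modified in place."""
    d2 = ((x - centers) ** 2 / (2.0 * sigmas ** 2)).sum(axis=1)
    g = np.exp(-d2)
    g = g / g.sum()                                  # normalised firing strengths
    err = g @ (rule_params[:, 0] + rule_params[:, 1:] @ x) - t
    rule_params[:, 0] -= lr * err * g                # update bias terms
    rule_params[:, 1:] -= lr * err * np.outer(g, x)  # update linear terms
    return err
```

The sketch also shows why recall is accurate near cluster centres: there, a single rule's firing strength dominates the weighted average, so the output reduces to that rule's locally fitted linear model. A full DyNFIS/DENFIS system would additionally create and merge clusters on-line and optimise the membership functions themselves, not only the consequents.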
