
Neural Network Design, 2nd Edition - Martin T. Hagan




DOCUMENT INFORMATION

Basic information

Format
Number of pages: 1,013
Size: 11.68 MB

Content

Neural Network Design, 2nd Edition
Hagan, Demuth, Beale, De Jesús

Neural Network Design, 2nd Edition

Martin T. Hagan, Oklahoma State University, Stillwater, Oklahoma
Howard B. Demuth, University of Colorado, Boulder, Colorado
Mark Hudson Beale, MHB Inc., Hayden, Idaho
Orlando De Jesús, Consultant, Frisco, Texas

Copyright by Martin T. Hagan and Howard B. Demuth. All rights reserved. No part of the book may be reproduced, stored in a retrieval system, or transcribed in any form or by any means (electronic, mechanical, photocopying, recording or otherwise) without the prior permission of Hagan and Demuth.

MTH: To Janet, Thomas, Daniel, Mom and Dad
HBD: To Hal, Katherine, Kimberly and Mary
MHB: To Leah, Valerie, Asia, Drake, Coral and Morgan
ODJ: To Marisela, María Victoria, Manuel, Mom and Dad

Neural Network Design, 2nd Edition, eBook. OVERHEADS and DEMONSTRATION PROGRAMS can be found at the following website: hagan.okstate.edu/nnd.html. A somewhat condensed paperback version of this text can be ordered from Amazon.

Contents

Preface

1. Introduction
   Objectives; History; Applications; Biological Inspiration; Further Reading

2. Neuron Model and Network Architectures
   Objectives; Theory and Examples; Notation; Neuron Model; Single-Input Neuron; Transfer Functions; Multiple-Input Neuron; Network Architectures; A Layer of Neurons; Multiple Layers of Neurons; Recurrent Networks; Summary of Results; Solved Problems; Epilogue; Exercises

3. An Illustrative Example
   Objectives; Theory and Examples; Problem Statement; Perceptron; Two-Input Case; Pattern Recognition Example; Hamming Network; Feedforward Layer; Recurrent Layer; Hopfield Network; Epilogue; Exercises

4. Perceptron Learning Rule
   Objectives; Theory and Examples; Learning Rules; Perceptron Architecture; Single-Neuron Perceptron; Multiple-Neuron Perceptron; Perceptron Learning Rule; Test Problem; Constructing Learning Rules; Unified Learning Rule; Training Multiple-Neuron Perceptrons; Proof of Convergence; Notation; Proof; Limitations; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

5. Signal and Weight Vector Spaces
   Objectives; Theory and Examples; Linear Vector Spaces; Linear Independence; Spanning a Space; Inner Product; Norm; Orthogonality; Gram-Schmidt Orthogonalization; Vector Expansions; Reciprocal Basis Vectors; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

6. Linear Transformations for Neural Networks
   Objectives; Theory and Examples; Linear Transformations; Matrix Representations; Change of Basis; Eigenvalues and Eigenvectors; Diagonalization; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

7. Supervised Hebbian Learning
   Objectives; Theory and Examples; Linear Associator; The Hebb Rule; Performance Analysis; Pseudoinverse Rule; Application; Variations of Hebbian Learning; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises
8. Performance Surfaces and Optimum Points
   Objectives; Theory and Examples; Taylor Series; Vector Case; Directional Derivatives; Minima; Necessary Conditions for Optimality; First-Order Conditions; Second-Order Conditions; Quadratic Functions; Eigensystem of the Hessian; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

9. Performance Optimization
   Objectives; Theory and Examples; Steepest Descent; Stable Learning Rates; Minimizing Along a Line; Newton's Method; Conjugate Gradient; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

10. Widrow-Hoff Learning
   Objectives; Theory and Examples; ADALINE Network; Single ADALINE; Mean Square Error; LMS Algorithm; Analysis of Convergence; Adaptive Filtering; Adaptive Noise Cancellation; Echo Cancellation; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

11. Backpropagation
   Objectives; Theory and Examples; Multilayer Perceptrons; Pattern Classification; Function Approximation; The Backpropagation Algorithm; Performance Index; Chain Rule; Backpropagating the Sensitivities; Summary; Example; Batch vs. Incremental Training; Using Backpropagation; Choice of Network Architecture; Convergence; Generalization; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

12. Variations on Backpropagation
   Objectives; Theory and Examples; Drawbacks of Backpropagation; Performance Surface Example; Convergence Example; Heuristic Modifications of Backpropagation; Momentum; Variable Learning Rate; Numerical Optimization Techniques; Conjugate Gradient; Levenberg-Marquardt Algorithm; Summary of Results; Solved Problems; Epilogue; Further Reading; Exercises

B. Notation (excerpt)
   Lyapunov stability: Lyapunov function V(a); zero derivative set Z, largest invariant set L and its closure; bounded Lyapunov function set (the set of points a for which V(a) stays below a bound).
   Hopfield network parameters: circuit parameters T_ij, C, R_i, I_i; amplifier gain.

C. Software

Introduction

We have used MATLAB, a numeric computation and visualization software package, in this text. However, MATLAB is not essential for using this book. The computer exercises can be performed with any available programming language, and the Neural Network Design Demonstrations, while helpful, are not critical to understanding the material covered in this book.

MATLAB is widely available and, because of its matrix/vector notation and graphics, is a convenient environment in which to experiment with neural networks. We use MATLAB in two different ways. First, we have included a number of exercises for the reader to perform in MATLAB. Many of the important features of neural networks become apparent only for large scale problems, which are computationally intensive and not feasible for hand calculations. With MATLAB, neural network algorithms can be quickly implemented, and large scale problems can be tested conveniently. If MATLAB is not available, any other programming language can be used to perform the exercises.

The second way in which we use MATLAB is through the Neural Network Design Demonstrations, which can be downloaded from the website hagan.okstate.edu/nnd.html. These interactive demonstrations illustrate important concepts in each chapter. The icon to the left identifies references to these demonstrations in the text.

MATLAB, or the student edition of MATLAB, version 2010a or later, should be installed on your computer in a folder named MATLAB. To create this directory or folder and complete the MATLAB installation process, follow the instructions given in the MATLAB documentation. Take care to follow the guidelines given for setting the path. After the Neural Network Design Demonstration software has been loaded into the MATLAB directory on your computer (or if the MATLAB path has been set to include the directory containing the demonstration software), the demonstrations can be invoked by typing nnd at the MATLAB prompt. All demonstrations are easily accessible from a master menu.

Overview of Demonstration Files

Running the Demonstrations

You can run the demonstrations directly by typing their names at the MATLAB prompt. Typing help nndesign brings up a list of all the demos you can choose from. Alternatively, you can run the Neural Network Design splash window (nnd) and then click the Contents button. This will take you to a graphical Table of Contents. From there you can select chapters with buttons at the bottom of the window and individual demonstrations with popup menus.

Sound

Many of the demonstrations use sound. In many cases the sound adds to the understanding of a demonstration. In other cases it is there simply for fun. If you need to turn the sound off you can give MATLAB the following command and all demonstrations will run quietly: nnsound off. To turn sound back on: nnsound on. You may note that demonstrations that utilize sound often run faster when sound is off. In addition, on some machines which do not support sound, errors can occur unless the sound is turned off.
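The MATLAB commands mentioned above are collected below as a minimal sketch of a session; it assumes the demonstration software has already been installed and added to the MATLAB path as described.

    % Launch the Neural Network Design demonstrations and browse them.
    nnd            % splash window; click the Contents button for the graphical table of contents
    nndtoc         % open the table of contents directly
    help nndesign  % list all of the demonstrations you can choose from

    % Toggle demonstration sound (useful for speed, or on machines without sound support).
    nnsound off    % all demonstrations run quietly
    nnsound on     % restore sound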
List of Demonstrations

General
nnd - Splash screen
nndtoc - Table of contents
nnsound - Turn Neural Network Design sounds on and off

Chapter 2, Neuron Model and Network Architectures
nnd2n1 - One-input neuron
nnd2n2 - Two-input neuron

Chapter 3, An Illustrative Example
nnd3pc - Perceptron classification
nnd3hamc - Hamming classification
nnd3hopc - Hopfield classification

Chapter 4, Perceptron Learning Rule
nnd4db - Decision boundaries
nnd4pr - Perceptron rule

Chapter 5, Signal and Weight Vector Spaces
nnd5gs - Gram-Schmidt
nnd5rb - Reciprocal basis

Chapter 6, Linear Transformations for Neural Networks
nnd6lt - Linear transformations
nnd6eg - Eigenvector game

Chapter 7, Supervised Hebbian Learning
nnd7sh - Supervised Hebb

Chapter 8, Performance Surfaces and Optimum Points
nnd8ts1 - Taylor series #1
nnd8ts2 - Taylor series #2
nnd8dd - Directional derivatives
nnd8qf - Quadratic function

Chapter 9, Performance Optimization
nnd9sdq - Steepest descent for quadratic function
nnd9mc - Method comparison
nnd9nm - Newton's method
nnd9sd - Steepest descent

Chapter 10, Widrow-Hoff Learning
nnd10nc - Adaptive noise cancellation
nnd10eeg - Electroencephalogram noise cancellation
nnd10lc - Linear pattern classification

Chapter 11, Backpropagation
nnd11nf - Network function
nnd11bc - Backpropagation calculation
nnd11fa - Function approximation
nnd11gn - Generalization

Chapter 12, Variations on Backpropagation
nnd12sd1 - Steepest descent backpropagation #1
nnd12sd2 - Steepest descent backpropagation #2
nnd12mo - Momentum backpropagation
nnd12vl - Variable learning rate backpropagation
nnd12ls - Conjugate gradient line search
nnd12cg - Conjugate gradient backpropagation
nnd12ms - Marquardt step
nnd12m - Marquardt backpropagation

Chapter 13, Generalization
nnd13es - Early stopping
nnd13reg - Regularization
nnd13breg - Bayesian regularization
nnd13esr - Early stopping/regularization

Chapter 14, Dynamic Networks
nnd14fir - Finite impulse response network
nnd14iir - Infinite impulse response network
nnd14dynd - Dynamic derivatives
nnd14rnt - Recurrent network training

Chapter 15, Associative Learning
nnd15uh - Unsupervised Hebb
nnd15edr - Effect of decay rate
nnd15hd - Hebb with decay
nnd15gis - Graphical instar
nnd15is - Instar
nnd15os - Outstar

Chapter 16, Competitive Networks
nnd16cc - Competitive classification
nnd16cl - Competitive learning
nnd16fm1 - 1-D feature map
nnd16fm2 - 2-D feature map
nnd16lv1 - LVQ1
nnd16lv2 - LVQ2

Chapter 17, Radial Basis Networks
nnd17nf - Network function
nnd17pc - Pattern classification
nnd17lls - Linear least squares
nnd17ols - Orthogonal least squares
nnd17no - Nonlinear optimization

Chapter 18, Grossberg Network
nnd18li - Leaky integrator
nnd18sn - Shunting network
nnd18gl1 - Grossberg layer 1
nnd18gl2 - Grossberg layer 2
nnd18aw - Adaptive weights

Chapter 19, Adaptive Resonance Theory
nnd19al1 - ART1 layer 1
nnd19al2 - ART1 layer 2
nnd19os - Orienting subsystem
nnd19a1 - ART1 algorithm

Chapter 20, Stability
nnd20ds - Dynamical system

Chapter 21, Hopfield Network
nnd21hn - Hopfield network

Index

A
Abbreviated notation 2-8 ADALINE network 10-2 decision boundary 10-4 mean square error 10-4 Adaptive filtering 10-13 Adaptive noise cancellation 10-15 Adaptive resonance theory (ART) 19-2 Amacrine cells 18-4 Amari, S 18-2 AND gate 4-7 Anderson, J.A 1-2, 1-3, 15-2, 18-2 Angle 5-7 Apple and Orange Example 3-2 Hamming network solution 3-8 Hopfield solution 3-12 perceptron 3-3 perceptron solution 3-5 problem statement 3-2 Applications of neural networks 1-5 aerospace 1-5 automotive 1-5 banking 1-5 defense 1-6 electronics 1-6 entertainment 1-6 financial 1-6 insurance 1-6 manufacturing 1-6 medical 1-6 oil and gas 1-6 robotics 1-7 securities 1-7 speech 1-7 telecommunications 1-7 transportation 1-7 ART1 fast learning 19-19 Layer 1 19-4 Layer 2 19-10 learning law L1 - L2 19-17 L2 - L1 19-20 orienting subsystem 19-13 resonance 19-17 subset/superset dilemma 19-17 summary 19-21 vigilance 19-15 ART2 19-23 ART3 19-23 ARTMAP 19-23 Associative learning Hebb rule 7-4 instar rule 15-11 Kohonen rule 15-15 outstar rule 15-17 pseudoinverse rule 7-7 unsupervised Hebb rule 15-5 Associative memory 7-3 autoassociative memory 7-10 Hopfield network 21-5 linear associator 7-3 Associative networks 15-3 instar 15-9 outstar 15-16 Attractors 21-11 Autoassociative memory 7-10 Autocorrelation function 22-24

B
Backpropagation 11-7 batching 12-7 CGBP 12-15 choice of network architecture 11-18 conjugate gradient 12-14 convergence 11-20 delta-bar-delta 12-13 drawbacks 12-3, 14-3 example 11-14 generalization 11-22 initial weights 12-6 Jacobian matrix 12-23 Levenberg-Marquardt 12-19, 12-21 Jacobian calculation 12-22 Marquardt sensitivity 12-24 LMBP 12-25 MOBP 12-11 performance index 11-8 performance surface 12-3 Quickprop 12-14 SDBP 12-2 sensitivity 11-10 summary 11-13 SuperSAB 12-14 VLBP 12-12 Backpropagation order 14-4 Backpropagation-through-time (BPTT) 14-2, 14-11, 14-22 Backward elimination 17-18 Basis set 5-5 Batch SOFM learning algorithm 26-6 Batch training 11-17 Batching 12-7 Bayes' rule 13-10 Bayes' Theorem 13-10 Bayesian analysis 13-12 effective number of parameters 13-16 evidence 13-13 Gauss-Newton approximation to Bayesian regularization 13-17 likelihood function 13-13 posterior density 13-13 prior density 13-13 Bayesian regularization 13-12, 23-6 effective number of parameters 23-6 Bayesian analysis most probable 13-14
Biological inspiration of neural networks 1-8 Bipolar cells 18-3 Brightness constancy 18-8

C
Carpenter, G 19-2 Center 17-6 CGBP 12-15 Chain rule 11-9 Change of basis 6-6 similarity transformation 6-8 Chemical vapor deposition 24-2 Choice of network architecture 11-18 Circular hollow 8-16 Clustering 22-9 Coding the targets 22-7 Committee of networks 22-18, 25-10 Competitive learning 16-7 adaptive resonance theory 19-2 ART1 19-4 ART2 19-23 ART3 19-23 ARTMAP 19-23 Fuzzy ARTMAP 19-23 instar rule 16-7 Kohonen rule 16-7 learning rate 16-9 LVQ2 16-21 problems 16-9 Competitive networks 16-5 ART1 19-4 Grossberg 18-13 Hamming network 16-3 Lateral inhibition 16-5 learning vector quantization 16-16 self-organizing feature map 16-12 winner-take-all 16-5 Conditioned stimulus 15-3 Cones 18-3 Confusion matrix 22-21, 25-7 Conjugate directions 9-16 Conjugate gradient 9-15, 12-14 golden section search 12-17 interval location 12-16 interval reduction 12-16 Content-addressable memory 21-16 Contour plot 8-8 Contrast enhance 18-18 control electromagnet 27-2 Correlation matrix 10-6 Cross-correlation function 22-25 Cross-entropy 22-17 Cross-validation 13-6 Curse of dimensionality 17-11

D
Decay rate 15-7 Decision boundary 4-5, 10-4, 11-4 Delay 2-13 Delta rule 7-13, 10-7 Delta-bar-delta 12-13 Descent direction 9-3 Diagonalization 6-13 Directional derivative 8-5 Distortion measure 22-23 Domain 6-2 Dynamic networks 14-2

E
Early stopping 13-7, 13-19, 25-7 Echo cancellation 10-21 EEG 10-15 Effective number of parameters 13-16, 13-23, 23-6 Eigenvalues 6-10 Eigenvectors 6-10 Electrocardiogram 25-2 electromagnet 27-2 Elliptical hollow 8-17 Emergent segmentation 18-6 Equilibrium point 20-4 Euclidean space 5-3 evidence 13-13 Excitatory 18-10 Extended Kalman filter algorithm 22-14 Extrapolation 13-3, 22-21, 22-27

F
Fahlman, A.E 12-14 False negative 22-21 False positive 22-22 Featural filling-in 18-6 Feature extraction 22-6, 25-3 Finite impulse response 14-6 Fitting 22-8 Forest cover 26-2 Forward selection 17-18 Fovea 18-5 Fukushima, K 18-2 Function approximation 11-4, 22-8 Fuzzy ARTMAP 19-23

G
Ganglion cells 18-4 Gauss-Newton algorithm 12-21 Jacobian matrix 12-20 Gauss-Newton approximation to Bayesian regularization 13-17 Generalization 11-22, 13-2 Golden Section search 12-17 Gradient 8-5 Gradient descent 9-2 Gram-Schmidt orthogonalization 5-8 Grossberg competitive network 18-13 choice of transfer function 18-20 Layer 1 18-13 Layer 2 18-17 learning law 18-22 relation to Kohonen law 18-24 Grossberg, S 1-3, 15-2, 18-2, 19-2

H
Hamming network 3-8, 16-3 feedforward layer 3-8, 16-3 recurrent layer 3-9, 16-4 Hebb rule 7-4, 21-18 decay rate 15-7 performance analysis 7-5 supervised 7-4 unsupervised 7-4, 15-5 with decay 7-12 Hebb, D.O 1-3, 7-2 Hebb's postulate 7-2 Hebbian learning 7-2 variations 7-12 Hessian 8-5 eigensystem 8-13 Hidden layer 2-11 High-gain Lyapunov function 21-13 Hinton, G.E 11-2 Histogram of errors 22-21 History of neural networks 1-2 Hoff, M.E 1-3, 10-2, 11-2 Hopfield model 21-3 Hopfield network 3-12, 6-2, 21-5 attractors 21-11 design 21-16 content-addressable memory 21-16 effect of gain 21-12 example 21-7 Hebb rule 21-18 high-gain Lyapunov function 21-13 LaSalle's invariance theorem 21-7 Lyapunov function 21-5 Lyapunov surface 21-22 spurious patterns 21-20 Hopfield, J.J 1-4 Horizontal cells 18-4 Hubel, D.H 16-2, 18-12

I
Illusions 18-4 Incremental training 11-17 Infinite impulse response 14-7 Inhibitory 18-10 Inner product 5-6 Input selection 22-12 Input weight 14-3 Instar 15-9
Instar rule 15-11, 16-7 Integrator 2-13 Interpolation 13-3 Interval location 12-16 Interval reduction 12-16 Invariant set 20-13

J
Jacobian matrix 12-20 Jacobs, R.A 12-13

K
Kohonen rule 15-15, 16-7 graphical representation 16-7 Kohonen, T 1-3, 15-2, 18-2

L
LaSalle's corollary 20-14 LaSalle's invariance theorem 20-13 invariant set 20-13 Set L 20-13 Z 20-12 Lateral inhibition 16-5 Layer 2-9 competitive 16-5 problems 16-9 hidden 2-11 output layer 2-11 superscript 2-11 Layer weight 14-3 Layered Digital Dynamic Network (LDDN) 14-3 Le Cun, Y 11-2 Leaky integrator 18-9 Learning rate 9-3, 10-8 competitive learning 16-9 stable 9-6, 10-10 Learning rules 4-2 ART1 19-21 backpropagation 11-7 competitive learning 16-7 delta rule 7-13 Grossberg competitive network 18-22 Hebb rule 7-4 Hebbian learning 7-2 learning vector quantization 16-16 LMS algorithm 10-7 local learning 15-5 perceptron 4-8, 4-13 proof of convergence 4-15 performance learning 8-2 pseudoinverse rule 7-7 reinforcement learning 4-3 supervised learning 4-3 unsupervised learning 4-3 Widrow-Hoff 7-13 Learning vector quantization (LVQ) 16-16 subclass 16-17 Levenberg-Marquardt algorithm 12-19, 12-21, 22-14, 24-7 Jacobian calculation 12-22 Jacobian matrix 12-20 Likelihood function 13-13 Linear associator 7-3 Linear independence 5-4 Linear initialization 26-6 Linear least squares 17-11 Linear separability 4-19 Linear transformation 6-2 change of basis 6-6 domain 6-2 matrix representation 6-3 change of basis 6-6 range 6-2 Linear vector spaces 5-2 LMBP 12-25 LMS algorithm 10-2, 10-7 adaptive filtering 10-13 adaptive noise cancellation 10-15 analysis of convergence 10-9 learning rate 10-8 stable learning rate 10-10 Local learning 15-5 Long term memory (LTM) 18-12, 18-22 LVQ2 16-21 Lyapunov function 20-12 Lyapunov stability theorem 20-6

M
Mach, E 1-2 magnet 27-2 Marquardt algorithm 12-19 Marquardt sensitivity 12-24 Matrix representation 6-3 change of basis 6-6 diagonalization 6-13 McClelland, J.L 1-4, 11-2 McCulloch, W.S 1-3, 4-2 Mean square error 10-4, 11-8 Mean squared error 22-16 Memory associative 7-3 autoassociative 7-10 Mexican-hat function 16-11 Minima 8-7 first-order conditions 8-10 global minimum 8-7 necessary conditions 8-9 second-order conditions 8-11 strong minimum 8-7 sufficient condition 8-11 weak minimum 8-7 Minkowski error 22-17 Minsky, M 1-3, 4-2 Missing data 22-8 MOBP 12-11 Molecular dynamics 24-3 Momentum 12-9, 15-7 Monte Carlo simulation 25-9 Most probable 13-14 Multilayer perceptron 11-2 Myocardial infarction 25-2

N
NARX network 22-10 Negative definite matrix 8-11 Negative semidefinite 8-11 Neighborhood 16-12 Network architectures 2-9 layer 2-9 multiple layers 2-10 Neural Network Toolbox for MATLAB 1-5 Neuron model 2-2 multiple input neuron 2-7 single input neuron 2-2 transfer functions 2-3 Newton's method 9-10 Nguyen-Widrow weight initialization 22-13, 24-9 Nilsson, N 16-2 Noise cancellation adaptive 10-15 echo cancellation 10-21 Norm 5-7 Normalization 22-5, 25-6, 26-4 Novelty detection 22-28

O
Ockham's razor 13-2 On-center/off-surround 16-11, 18-14 Optic disk 18-5 Optimality first-order conditions 8-10 necessary conditions 8-9 second-order conditions 8-11 sufficient condition 8-11 Optimization conjugate gradient 9-15, 12-14 descent direction 9-3 Gauss-Newton 12-21 Levenberg-Marquardt 12-19, 12-21 Newton's method 9-10 quadratic termination 9-15 steepest descent 9-2 stable learning rates 9-6 Oriented receptive field 18-20 Orienting subsystem 19-13 Orthogonal least squares 17-18
Orthogonality 5-7 Orthonormal 5-9 Outliers 22-19 Outstar 15-16 Outstar rule 15-17 Overfitting 13-3, 22-27

P
Papert, S 1-3, 4-2 Parker, D.B 11-2 Pattern classification 11-3, 22-9, 25-2 Pavlov, I 1-2 Perceptron 3-3 architecture 4-3 constructing learning rules 4-10 decision boundary 4-5 learning rule 4-8, 4-13 proof of convergence 4-15 multilayer 11-2 multiple-neuron 4-8 single-neuron 4-5 test problem 4-9 training multiple-neuron perceptrons 4-13 two-input case 3-4 unified learning rule 4-12 Performance Index 11-8 Performance index 8-2, 22-16 cross-entropy 22-17 mean squared error 22-16 Minkowski error 22-17 quadratic function 8-12 Performance learning 8-2 Pitts, W.H 1-3, 4-2 Positive definite 20-5 Positive definite matrix 8-11 Positive semidefinite 8-11, 20-5 posterior density 13-13 Post-training analysis 22-18 Prediction 22-10, 22-24 Preprocessing 22-5 coding the targets 22-7 feature extraction 22-6 normalization 22-5, 25-6, 26-4 principal component analysis 22-6 Principal component analysis 22-6 Prior density 13-13 Probability estimation 24-2 Projection 5-8 Prototype patterns 21-16 Pseudoinverse rule 7-7

Q
Quadratic function 8-12 circular hollow 8-16 elliptical hollow 8-17 Hessian eigensystem 8-13 saddle point 8-18 stationary valley 8-19 Quadratic termination 9-15 Quantization error 22-23 Quickprop 12-14

R
R value 22-20 Radial basis network 17-2 backpropagation 17-25 center 17-6 pattern classification 17-6 Range 6-2 RBF 17-2 Real-time recurrent learning (RTRL) 14-2, 14-11, 14-12 Receiver operating characteristic (ROC) curve 22-22, 25-8 Reciprocal basis vectors 5-10 Recurrent 14-2 Recurrent network 2-13, 2-14, 20-2 regression 22-8 Regression/scatter plot 22-20 Regularization 13-8, 13-19, 13-21 Reinforcement learning 4-3 Resonance 19-17 Retina 18-3 Rods 18-3 Rosenblatt, F 1-3, 4-2, 10-2, 11-2, 16-2 Rosenfeld, E 1-2 Rumelhart, D.E 1-4, 11-2

S
Saddle point 8-8, 8-18 Scaled conjugate gradient algorithm 22-14 SDBP 12-2 Segmentation 22-9 Self-organizing feature map (SOFM) 16-12, 22-16, 26-2 distortion measure 22-23 neighborhood 16-12 quantization error 22-23 topographic error 22-23 Sensitivity 11-10 backpropagation 11-11 Sensitivity analysis 22-28 Set L 20-13 Z 20-12 Shakespeare, W 1-5 Short term memory (STM) 18-12, 18-17 Shunting model 18-10 Similarity transform 6-8 Simulation order 14-4 Smart sensor 23-2 Softmax 22-7, 24-6 Spanning a space 5-5 Spurious patterns 21-20 Stability asymptotically stable 20-3, 20-5 concepts 20-3 equilibrium point 20-4 in the sense of Lyapunov 20-3, 20-4 LaSalle's corollary 20-14 LaSalle's invariance theorem 20-13 Lyapunov function 20-12 Lyapunov stability theorem 20-6 pendulum example 20-6 Stability-plasticity dilemma 19-2 Stationary point 8-10 minima 8-7 saddle point 8-8 Stationary valley 8-19 Steepest descent 9-2 learning rate 9-3 minimizing along a line 9-8 stable learning rates 9-6 Stimulus-response 15-2 conditioned stimulus 15-3 unconditioned stimulus 15-3 Stopping criteria 22-15 Subclass 16-17 Subset selection 17-18 Subset/superset dilemma 19-17 SuperSAB 12-14 Supervised learning 4-3 Hebb rule 7-4 performance learning 8-2 target 4-3 training set 4-3

T
Tapped delay line 10-13 Target 4-3 Taylor series expansion 8-2 vector case 8-4 Test set 13-6 Thomas Bayes 13-10 Tikhonov 13-8 Time constant 18-9 Tollenaere, T 12-14 Topographic error 22-23 Training process 22-2 Training set 4-3 sequence 15-5 Transfer functions 2-3, 2-6 competitive 2-6 global vs local 17-9 hard limit 2-3, 2-6 hyperbolic tangent sigmoid 2-6 linear 2-4, 2-6
log-sigmoid 2-4, 2-6 positive linear 2-6 saturating linear 2-6 softmax 22-7, 24-6 symmetric saturating linear 2-6 symmetrical hard limit 2-6 table 2-6 Type I error 22-22

U
Unconditioned stimulus 15-3 Unsupervised learning 4-3 Hebb rule 7-4, 15-5

V
Validation set 13-7 Vector expansion 5-9 reciprocal basis vectors 5-10 Vector space 5-2 angle 5-7 basis set 5-5 orthonormal 5-9 projection 5-8 spanning 5-5 vector expansion 5-9 Vigilance 19-15 Vision 18-3 Vision normalization 18-8 Visual cortex 18-4 VLBP 12-12 von der Malsburg, C 16-2, 18-12

W
Weight indices 2-7 Weight initialization 22-13, 24-9 Weight matrix 2-7 Werbos, P.J 11-2 White noise 22-24 Widrow, B 1-3, 10-2, 11-2 Widrow-Hoff algorithm 7-13, 10-7 adaptive filtering 10-13 Wiesel, T 16-2, 18-12 Williams, R.J 11-2 Winner-take-all 16-5

This book provides a clear and detailed coverage of fundamental neural network architectures and learning rules. In it, the authors emphasize a coherent presentation of the principal neural networks, methods for training them, and their applications to practical problems.

Features
· Extensive coverage of training methods for both feedforward networks (including multilayer and radial basis networks) and recurrent networks. In addition to conjugate gradient and Levenberg-Marquardt variations of the backpropagation algorithm, the text also covers Bayesian regularization and early stopping, which ensure the generalization ability of trained networks.
· Associative and competitive networks, including feature maps and learning vector quantization, are explained with simple building blocks.
· A chapter of practical training tips for function approximation, pattern recognition, clustering and prediction, along with five chapters presenting detailed real-world case studies.
· Detailed examples and numerous solved problems. Slides and comprehensive demonstration software can be downloaded from hagan.okstate.edu/nnd.html.

About the Authors

Martin T. Hagan (Ph.D. Electrical Engineering, University of Kansas) has taught and conducted research in the areas of control systems and signal processing for the last 35 years. For the last 25 years his research has focused on the use of neural networks for control, filtering and prediction. He is a Professor in the School of Electrical and Computer Engineering at Oklahoma State University and a co-author of the Neural Network Toolbox for MATLAB.

Howard B. Demuth (Ph.D. Electrical Engineering, Stanford University) has twenty-three years of industrial experience, primarily at Los Alamos National Laboratory, where he helped design and build one of the world's first electronic computers, the "MANIAC."
Demuth has fifteen years of teaching experience as well. He is co-author of the Neural Network Toolbox for MATLAB and currently teaches a Neural Network course for the University of Colorado at Boulder.

Mark Hudson Beale (B.S. Computer Engineering, University of Idaho) is a software engineer with a focus on artificial intelligence algorithms and software development technology. Mark is co-author of the Neural Network Toolbox for MATLAB and provides related consulting through his company, MHB Inc., located in Hayden, Idaho.

Orlando De Jesús (Ph.D. Electrical Engineering, Oklahoma State University) has twenty-four years of industrial experience, with AETI C.A. in Caracas, Venezuela, and Halliburton in Carrollton, Texas, and is currently working as an Engineering Consultant in Frisco, Texas. Orlando's dissertation was a basis for the dynamic network training algorithms in the Neural Network Toolbox for MATLAB.

Excerpts from the text:

"... it can be solved using three different types of neural networks. These three networks are representative of the types of networks that are presented in the remainder of the text. In addition, the ..."

"... with that weight, while the column indices indicate the source of the input for that weight. Thus, the indices in w3 say that this weight represents the connection to the third neuron from the ..."

"... output a(t) is computed from its input u(t) according to a(t) = u(t - 1) (2.7). Thus the output is the input delayed by one time step. (This assumes that time is updated in discrete steps ..."
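The last excerpt describes the delay block, whose output is its input delayed by one time step, a(t) = u(t - 1). A minimal MATLAB sketch of that relation is given below; the input sequence u and the initial output a(1) are illustrative assumptions, not values from the text.

    % Delay block: the output equals the input delayed by one time step,
    % a(t) = u(t-1), with an initial output a(1) supplied at t = 1.
    u = [1 3 -2 5];        % assumed example input sequence
    a = zeros(size(u));
    a(1) = 0;              % assumed initial condition
    for t = 2:length(u)
        a(t) = u(t-1);     % Eq. (2.7): current output is the previous input
    end
    disp(a)                % displays 0 1 3 -2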

Ngày đăng: 16/10/2021, 15:35
