
Evolutionary Algorithm for Training Compact Single Hidden Layer Feedforward Neural Networks (Hieu Trung Huynh and Yonggwan Won, Member, IEEE)


DOCUMENT INFORMATION

Basic information

Format
Number of pages: 20
Size: 377.23 KB

Contents


Advanced Artificial Intelligence
EVOLUTIONARY ALGORITHM FOR TRAINING COMPACT SINGLE HIDDEN LAYER FEEDFORWARD NEURAL NETWORKS
Hieu Trung Huynh and Yonggwan Won, Member, IEEE
Group members: Mr. Nhan, Mr. Dien.Vo, Mr. Tu
IUH University

ABSTRACT
An effective training algorithm called the extreme learning machine (ELM) has recently been proposed for single hidden layer feedforward neural networks (SLFNs). It randomly chooses the input weights and hidden layer biases, and analytically determines the output weights by a simple matrix-inversion operation. This algorithm can achieve good performance at extremely high learning speed. However, it may require a large number of hidden units due to non-optimal input weights and hidden layer biases. In this paper, the authors propose a new approach, the evolutionary least-squares extreme learning machine (ELS-ELM), which determines the input weights and biases of hidden units using the differential evolution algorithm, with the initial generation generated not by random selection but by a least-squares scheme. Experimental results for function approximation show that this approach can obtain good generalization performance with compact networks.

INTRODUCTION
• Feedforward neural networks are frequently used in machine learning because of their ability to approximate complex nonlinear mappings directly from input patterns.
• Traditionally, training neural networks has been based on gradient descent, in which the network weights are tuned by propagating the error from the output layer back to the input layer.
• Gradient-descent-based algorithms may converge very slowly to the solution of a given problem if the learning rate is small.

PROBLEM SOLVING
• These problems have been addressed by many improvements proposed by researchers:
• D. Nguyen and B. Widrow proposed a method for choosing the initial values of weights to improve the learning speed.
• A method based on multidimensional geometry for determining the optimal biases and magnitudes of the initial weight vectors was proposed by Jim Y. F. Yam and Tommy W. S. Chow.
• Using second-order information of the cost function in the training process has also been proposed by some researchers.
• In addition, many methods have been proposed to overcome over-fitting in training.
• However, most training algorithms based on gradient descent are still slow because of the many iterative steps required in the learning process.

PROBLEM SOLVING
• Recently, Huang et al. showed that a single hidden layer feedforward neural network (SLFN) can learn distinct observations with arbitrarily small error if the activation function is chosen properly.
• An effective training algorithm for SLFNs called the extreme learning machine (ELM) was also proposed by Huang et al.
• In ELM, the input weights and biases of hidden units are randomly chosen, and the output weights of the SLFN are determined through the inverse operation of the hidden layer output matrix (a minimal sketch follows this slide).
• This algorithm avoids many problems that occur in gradient-descent-based learning methods, such as local minima and the choice of learning rate and number of epochs. It can obtain better generalization performance at higher learning speed in many applications.
• However, it often requires a large number of hidden units and a long time for responding to new input patterns.
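To make the ELM procedure above concrete, here is a minimal NumPy sketch. It is not the authors' code; the sigmoid activation, the uniform weight ranges, and the function names elm_train/elm_predict are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Minimal ELM sketch: random input weights, analytic output weights.

    X: (n_samples, n_inputs) input patterns
    T: (n_samples, n_outputs) target values
    """
    n_inputs = X.shape[1]
    # Step 1: randomly assign input weights W and hidden layer biases b.
    W = rng.uniform(-1.0, 1.0, size=(n_inputs, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Step 2: compute the hidden layer output matrix H (sigmoid assumed).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Step 3: output weights A via the Moore-Penrose generalized inverse.
    A = np.linalg.pinv(H) @ T
    return W, b, A

def elm_predict(X, W, b, A):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ A
```

The single pseudo-inverse in step 3 is what gives ELM its speed: there is no iterative weight update at all.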
PROBLEM SOLVING
• There are some approaches to overcome this problem. H. T. Huynh and Y. Won proposed an approach that determines the input weights and hidden layer biases by using a linear model, after which the output weights are again calculated by the Moore-Penrose (MP) generalized inverse.
• Training SLFNs by the proposed method consists of two steps: the first step is the initialization of the population based on the proposed linear model; in the second step, the input weights and hidden layer biases are estimated by the DE process, and the output weights are determined through the MP generalized inverse.
• Its performance in regression problems can be improved. In addition, it can also obtain a compact SLFN, as E-ELM and LS-ELM do, which results in a fast response of the trained network to new input patterns.
• However, this approach can take a longer time for the training process in comparison with the original ELM and LS-ELM.

DIFFERENTIAL EVOLUTION
• Mutation: the mutant vector is generated as v_{i,G+1} = θ_{r1,G} + F(θ_{r2,G} − θ_{r3,G}), where r1, r2, r3 ∈ {1, 2, …, NP} are distinct random indices and F ∈ [0, 2] is a constant factor used to control the amplification of the differential variation.
• Crossover: the trial vector u_{i,G+1} is formed component-wise as u_{ji,G+1} = v_{ji,G+1} if randb(j) ≤ CR or j = rnbr(i), and u_{ji,G+1} = θ_{ji,G} otherwise, where randb(j) is the j-th evaluation of a uniform random number generator, CR is the crossover constant, and rnbr(i) is a randomly chosen index which ensures that at least one parameter is taken from v_{i,G+1}.
• Selection: the new generation is determined by θ_{i,G+1} = u_{i,G+1} if u_{i,G+1} has a better fitness value than θ_{i,G}, and θ_{i,G+1} = θ_{i,G} otherwise.

SINGLE HIDDEN LAYER FEEDFORWARD NEURAL NETWORKS
An SLFN with N hidden units and C output units is depicted (network figure in the original slides).

SINGLE HIDDEN LAYER FEEDFORWARD NEURAL NETWORKS
The ELM algorithm can be described as follows:
Step 1: Randomly assign input weights and hidden layer biases.
Step 2: Compute the hidden layer output matrix H.
Step 3: Calculate the output weights A.
This algorithm can obtain good generalization performance at high learning speed. However, it often requires a large number of hidden units and takes a long time to respond to new patterns.

EVOLUTIONARY EXTREME LEARNING MACHINE (E-ELM)
• The E-ELM algorithm was proposed by Q.-Y. Zhu. In E-ELM, the DE process is used to tune the input weights and hidden layer biases, and the MP generalized inverse is used to determine the output weights. First, the population of the initial generation is generated randomly. Each individual in the population is a set of the input weights and hidden layer biases, packed into a single vector θ = [w_1, …, w_N, b_1, …, b_N].
• The output weights corresponding to each individual are determined by the MP generalized inverse. The three steps of the DE process are then applied, and individuals with better fitness values are retained in the next generation. The fitness of each individual is chosen as the root-mean-squared error (RMSE) on the whole training set or the validation set.

EVOLUTIONARY EXTREME LEARNING MACHINE (E-ELM)
We can summarize the E-ELM algorithm as follows (see the sketch after this slide):
1. Initialization: generate a random population for generation G = 0.
2. Training process: for each individual, determine the output weights and evaluate the fitness.
3. Apply mutation, crossover, and selection to form the next generation, and repeat.

EVOLUTIONARY EXTREME LEARNING MACHINE (E-ELM)
• This algorithm can obtain more compact networks than the original ELM.
• However, it is slow in training due to the iteration of the DE process, and it does not obtain small input weights and hidden layer biases.
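The following is a compact sketch of the E-ELM loop summarized above, reusing the DE steps from the earlier slide. The population size, F, CR, the sigmoid activation, and the use of training-set RMSE as the fitness are illustrative assumptions; the published method's additional selection details are omitted here.

```python
import numpy as np

def hidden_output(X, theta, n_hidden):
    """Unpack an individual theta = [W.ravel(), b] and compute H (sigmoid assumed)."""
    n_inputs = X.shape[1]
    W = theta[: n_inputs * n_hidden].reshape(n_inputs, n_hidden)
    b = theta[n_inputs * n_hidden :]
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

def fitness(X, T, theta, n_hidden):
    """RMSE on the training set, with output weights from the MP inverse."""
    H = hidden_output(X, theta, n_hidden)
    A = np.linalg.pinv(H) @ T
    return np.sqrt(np.mean((H @ A - T) ** 2))

def e_elm(X, T, n_hidden, NP=20, F=0.8, CR=0.9, generations=50,
          rng=np.random.default_rng(0)):
    dim = X.shape[1] * n_hidden + n_hidden
    # Initialization: E-ELM starts from a purely random population.
    pop = rng.uniform(-1.0, 1.0, size=(NP, dim))
    fit = np.array([fitness(X, T, p, n_hidden) for p in pop])
    for _ in range(generations):
        for i in range(NP):
            # Mutation: v = theta_r1 + F * (theta_r2 - theta_r3), distinct indices.
            r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            # Crossover: binomial, with one component guaranteed to come from v.
            mask = rng.random(dim) <= CR
            mask[rng.integers(dim)] = True
            u = np.where(mask, v, pop[i])
            # Selection: keep the trial vector if its RMSE is better.
            fu = fitness(X, T, u, n_hidden)
            if fu < fit[i]:
                pop[i], fit[i] = u, fu
    best = pop[np.argmin(fit)]
    H = hidden_output(X, best, n_hidden)
    return best, np.linalg.pinv(H) @ T  # tuned weights and output weights A
```

Note that every fitness evaluation performs a full pseudo-inverse, which is exactly why the slides describe E-ELM as slower to train than plain ELM.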
EVOLUTIONARY LEAST-SQUARES EXTREME LEARNING MACHINE (ELS-ELM)
• This presents an improvement of E-ELM, the evolutionary least-squares extreme learning machine (ELS-ELM).
• It utilizes the authors' linear-model method for generating the initial population.
• Following this initialization, the DE process is applied to find a further optimized set of input weights and hidden layer biases.
• In the proposed ELS-ELM, the DE process, with its initial population obtained by the least-squares scheme, is used for tuning the input weights and hidden layer biases, and the MP generalized inverse operation is used for determining the output weights.

EVOLUTIONARY LEAST-SQUARES EXTREME LEARNING MACHINE (ELS-ELM)
We can summarize the ELS-ELM algorithm as follows (a sketch of the initialization step follows this section):
1. Initialization: randomly assign the values for the matrix B; estimate the input weights w_m and biases b_m of each individual θ; calculate the hidden layer output matrix H; determine the output weights A; and evaluate the fitness for each individual.
2. Training process: apply mutation, crossover, and selection; then, for each individual, compute the hidden layer output matrix H, determine the output weights, and evaluate the fitness.

EXPERIMENTAL RESULTS
[The result tables and figures from the original slides are not reproduced in this extraction.]

CONCLUSION
• In this paper, an improvement of ELM, the evolutionary least-squares extreme learning machine (ELS-ELM), for training single hidden layer feedforward neural networks (SLFNs) was proposed.
• Instead of being randomly assigned as in ELM, the input weights and the hidden layer biases in ELS-ELM are estimated by the differential evolution (DE) process, while the output weights are determined by the MP generalized inverse.
• However, unlike E-ELM, this method initializes the individuals of the initial generation by the least-squares scheme. It can obtain trained networks with a small number of hidden units, as E-ELM and LS-ELM do, while producing better RMSE for regression problems.

Thank you! 👏👏👏
IUH University
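To make the initialization step concrete, here is a minimal sketch of one way to build a generation-0 individual from the least-squares scheme. It assumes, as the summary above suggests, that B is a randomly assigned target matrix and that the input weights and biases are the least-squares solution of the linear model [X 1]Θ ≈ B; the paper's exact construction of B may differ, and the helper name ls_init_individual is hypothetical.

```python
import numpy as np

def ls_init_individual(X, n_hidden, rng):
    """One generation-0 individual via the least-squares scheme (sketch).

    Assumption: B is a random target matrix, and the input weights/biases
    are the least-squares solution of [X 1] @ Theta ~= B, following the
    slide summary ("randomly assign B, then estimate w_m and b_m").
    """
    n_samples, n_inputs = X.shape
    B = rng.uniform(-1.0, 1.0, size=(n_samples, n_hidden))  # random matrix B
    X_aug = np.hstack([X, np.ones((n_samples, 1))])          # append bias column
    # Least-squares estimate: rows 0..n_inputs-1 are weights, last row is biases.
    Theta, *_ = np.linalg.lstsq(X_aug, B, rcond=None)
    W, b = Theta[:-1], Theta[-1]
    return np.concatenate([W.ravel(), b])  # packed like the E-ELM individuals

# After this initialization, the DE loop and the MP-inverse output-weight
# computation proceed exactly as in the E-ELM sketch above.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
theta0 = ls_init_individual(X, n_hidden=10, rng=rng)
```

Seeding DE with least-squares individuals rather than random ones is what lets ELS-ELM start closer to a good region of the weight space, at the cost of the extra solve per individual.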

Posted: 27/11/2022, 00:16
