
Document: Recurrent Neural Networks for Prediction, P9


DOCUMENT INFORMATION

Number of pages: 12
File size: 419.39 KB

Content

Recurrent Neural Networks for Prediction. Authored by Danilo P. Mandic, Jonathon A. Chambers. Copyright © 2001 John Wiley & Sons Ltd. ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic).

9 A Class of Normalised Algorithms for Online Training of Recurrent Neural Networks

9.1 Perspective

A normalised version of the real-time recurrent learning (RTRL) algorithm is introduced. This has been achieved via local linearisation of the RTRL around the current point in the state space of the network. Such an algorithm provides an adaptive learning rate normalised by the L_2 norm of the gradient vector at the output neuron. The analysis is general and also covers the simpler cases of feedforward networks and linear FIR filters.

9.2 Introduction

Gradient-descent-based algorithms for training neural networks, such as backpropagation, backpropagation through time, recurrent backpropagation (RBP) and the real-time recurrent learning (RTRL) algorithm, typically suffer from slow convergence when dealing with statistically nonstationary inputs. In the area of linear adaptive filters, similar problems with the LMS algorithm have been addressed by utilising normalised algorithms, such as NLMS. We therefore introduce a normalised RTRL-based learning algorithm with the idea to impose similar stabilisation and convergence effects on the training of RNNs as normalisation imposes on the LMS algorithm.

In the area of linear FIR adaptive filters, it is shown (Soria-Olivas et al. 1998) that a normalised gradient-descent-based learning algorithm can be derived starting from the Taylor series expansion of the instantaneous output error of an adaptive FIR filter, given by

e(k+1) = e(k) + \sum_{i=1}^{N} \frac{\partial e(k)}{\partial w_i(k)} \Delta w_i(k) + \frac{1}{2!} \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{\partial^2 e(k)}{\partial w_i(k)\,\partial w_j(k)} \Delta w_i(k) \Delta w_j(k) + \cdots.    (9.1)

From the mathematical description of the LMS algorithm from Chapter 2 (Footnote 1), we have

\frac{\partial e(k)}{\partial w_i(k)} = -x(k-i+1), \quad i = 1, 2, \dots, N,    (9.2)

and

\Delta w_i(k) = \mu(k) e(k) x(k-i+1), \quad i = 1, 2, \dots, N.    (9.3)

Due to the linearity of the FIR filter, the second- and higher-order partial derivatives in (9.1) vanish. Combining (9.1)–(9.3) yields

e(k+1) = e(k) - \mu(k) e(k) \|x(k)\|_2^2,    (9.4)

for which the nontrivial solution gives the learning rate of a normalised LMS (NLMS) algorithm

\mu_{\mathrm{NLMS}}(k) = \frac{1}{\|x(k)\|_2^2}.    (9.5)

The stability analysis of adaptive algorithms can be undertaken using contractive operators and fixed point iteration. For a contractive operator T, it follows that

\|T z_1 - T z_2\| \leq \gamma \|z_1 - z_2\|, \quad 0 \leq \gamma < 1, \quad z_1, z_2 \in \mathbb{R}^N.    (9.6)

The convergence analysis of LMS, for instance, can be undertaken starting from the misalignment vector (Footnote 2) v(k) = w(k) - \tilde{w}(k), by setting z_1 = v(k+1), z_2 = v(0) and T = [I - \mu(k) x(k) x^T(k)] (Gholkar 1990). Detailed convergence analysis for a class of gradient-based learning algorithms for recurrent neural networks is given in Chapter 10.

Footnote 1: The two core equations for the adaptation of the LMS algorithm are e(k) = d(k) - x^T(k) w(k) and w(k+1) = w(k) + \mu(k) e(k) x(k).

Footnote 2: The misalignment vector is defined as v(k) = w(k) - \tilde{w}(k), where \tilde{w}(k) is the set of optimal weights of the system.

9.3 Overview

A class of normalised gradient-based algorithms is derived, starting from the LMS algorithm for linear adaptive filters through to a normalised algorithm for training recurrent neural networks. For each case the adaptive learning rate has been derived. Stability of such algorithms is addressed in Chapter 10. The normalised algorithms are shown to outperform standard algorithms with a fixed learning rate.
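As a concrete illustration of the NLMS update implied by (9.2)–(9.5), the sketch below adapts a linear FIR predictor sample by sample. It is only an illustration, not code from the book; the filter length, the constants mu0 and eps (a small regularisation term that prevents division by a vanishing input norm), and the first-order AR test signal are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def nlms_predict(x, N=10, mu0=0.5, eps=1e-3):
    """One-step prediction of x with an FIR filter adapted sample by sample.

    The learning rate follows the NLMS form (9.5), regularised as
    mu0 / (eps + ||x(k)||^2); mu0 and eps are illustrative choices.
    """
    w = np.zeros(N)                     # weight vector w(k)
    e = np.zeros(len(x))                # instantaneous output errors e(k)
    for k in range(N, len(x)):
        xk = x[k - N:k][::-1]           # tap-input vector x(k)
        e[k] = x[k] - xk @ w            # e(k) = d(k) - x^T(k) w(k)
        mu = mu0 / (eps + xk @ xk)      # normalised learning rate
        w += mu * e[k] * xk             # w(k+1) = w(k) + mu(k) e(k) x(k)
    return e

# Illustrative coloured input: white noise through a first-order AR channel.
v = rng.standard_normal(2000)
x = np.zeros_like(v)
for k in range(1, len(v)):
    x[k] = 0.9 * x[k - 1] + v[k]

err = nlms_predict(x)
print("averaged squared prediction error (last 500 samples):",
      np.mean(err[-500:] ** 2))
```

With mu0 = 1 and eps = 0 this step size reduces exactly to (9.5); the regularised form corresponds to the variant used later in the chapter for numerical stability.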
[Figure 9.1: Comparison of convergence of the averaged squared prediction error with the LMS, NLMS, NGD and NNGD algorithms, with logistic activation function, for a coloured input. Axes: number of iteration vs. averaged squared prediction error in dB.]

9.4 Derivation of the Normalised Adaptive Learning Rate for a Simple Feedforward Nonlinear Filter

The equations that define the adaptation for a neural adaptive filter with one neuron (Figure 2.6), trained by a nonlinear gradient descent (NGD) algorithm, are

e(k) = d(k) - \Phi(x^T(k) w(k)),    (9.7)

w(k+1) = w(k) + \eta(k) \Phi'(x^T(k) w(k)) e(k) x(k),    (9.8)

where e(k) is the instantaneous error at the output neuron, d(k) is some training (desired) signal, x(k) = [x_1(k), \dots, x_N(k)]^T is the input vector, w(k) = [w_1(k), \dots, w_N(k)]^T is the weight vector, \Phi(\cdot) is a nonlinear activation function of a neuron and (\cdot)^T denotes the vector transpose. The learning rate \eta is supposed to be a small positive real number.

Following the approach from Mandic (2000a), if the output error (9.7) is expanded using a Taylor series expansion, we have

e(k+1) = e(k) + \sum_{i=1}^{N} \frac{\partial e(k)}{\partial w_i(k)} \Delta w_i(k) + \frac{1}{2!} \sum_{i=1}^{N} \sum_{j=1}^{N} \frac{\partial^2 e(k)}{\partial w_i(k)\,\partial w_j(k)} \Delta w_i(k) \Delta w_j(k) + \cdots.    (9.9)

From (9.7) and (9.8), the elements of (9.9) are

\frac{\partial e(k)}{\partial w_i(k)} = -\Phi'(x^T(k) w(k)) x_i(k), \quad i = 1, 2, \dots, N,    (9.10)

and

\Delta w_i(k) = w_i(k+1) - w_i(k) = \eta(k) \Phi'(x^T(k) w(k)) e(k) x_i(k), \quad i = 1, 2, \dots, N.    (9.11)

[Figure 9.2: Comparison of convergence of the averaged squared prediction error of the LMS, NLMS and NNGD algorithms for a coloured input and tanh activation function with \beta = 1.]

The second partial derivatives are

\frac{\partial^2 e(k)}{\partial w_i(k)\,\partial w_j(k)} = -\Phi''(x^T(k) w(k)) x_i(k) x_j(k), \quad i, j = 1, 2, \dots, N.    (9.12)

Let us denote net(k) = x^T(k) w(k). Combining (9.9)–(9.12) yields

e(k+1) = e(k) - \eta(k) [\Phi'(net(k))]^2 e(k) \sum_{i=1}^{N} x_i^2(k) - \frac{1}{2!} \eta^2(k) e^2(k) [\Phi'(net(k))]^2 \Phi''(net(k)) \sum_{i=1}^{N} \sum_{j=1}^{N} x_i^2(k) x_j^2(k) + \cdots.    (9.13)

A truncated Taylor series expansion of (9.13) gives

e(k+1) = e(k) \left[ 1 - \eta(k) [\Phi'(net(k))]^2 \|x(k)\|_2^2 \right].    (9.14)

[Figure 9.3: Convergence comparison of averaged squared prediction error for feedforward and recurrent structures (IIR LMS, LMS, recurrent perceptron, NNGD, NLMS), tanh activation function with \beta = 4 and coloured input.]

The aim is for the error e(k+1) in (9.14) to vanish, which is the case for the nontrivial solution

\eta_{\mathrm{OPT}}(k) = \frac{1}{[\Phi'(net(k))]^2 \|x(k)\|_2^2},    (9.15)

which is the step size of a normalised nonlinear gradient descent (NNGD) algorithm for a nonlinear FIR filter. Taking into account the bounds on the values of higher derivatives of \Phi (Footnote 3), for a contractive activation function we may adjust the derived learning rate with a positive constant C, as

\eta_{\mathrm{OPT}}(k) = \frac{1}{C + [\Phi'(net(k))]^2 \|x(k)\|_2^2}.    (9.16)

The magnitude of the learning rate varies in time with the tap input power and the first derivative of the activation function, which provides a normalisation of the algorithm. Further discussion on the size and role of the constant C in (9.16) can be found in Mandic and Krcmar (2001) and Krcmar and Mandic (2001).

Footnote 3: For the logistic function, for instance, the second-order term in the Taylor series expansion is positive.
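A minimal sketch of one NNGD update for a single-neuron filter, combining (9.7), (9.8) and (9.16), is given below. The logistic activation with slope beta and the particular values of C and beta are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def logistic(v, beta=1.0):
    """Logistic activation Phi(v) with slope beta."""
    return 1.0 / (1.0 + np.exp(-beta * v))

def logistic_prime(v, beta=1.0):
    """First derivative Phi'(v) of the logistic activation."""
    y = logistic(v, beta)
    return beta * y * (1.0 - y)

def nngd_step(w, x, d, C=1.0, beta=1.0):
    """One NNGD update of a single-neuron adaptive filter.

    C > 0 compensates for the neglected higher-order Taylor terms, as
    discussed around (9.16); its value here is illustrative.
    """
    net = x @ w                                  # net(k) = x^T(k) w(k)
    e = d - logistic(net, beta)                  # output error (9.7)
    phi_p = logistic_prime(net, beta)            # Phi'(net(k))
    eta = 1.0 / (C + phi_p**2 * (x @ x))         # adaptive learning rate (9.16)
    w_new = w + eta * phi_p * e * x              # weight update (9.8)
    return w_new, e
```

For a linear activation, where Phi'(net) is identically 1, and C = 0, the step size (9.16) collapses to the NLMS learning rate (9.5), consistent with the remark at the start of the next passage.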
The adaptive learning rate from (9.15) degenerates into the learning rate of the NLMS algorithm for a linear activation function. A normalised backpropagation algorithm for a general feedforward neural network is given in Mandic and Chambers (2000f). Although the derivation of the normalised algorithm is simple, it assumes statistical independence between the weights, input vector, teaching signal and learning rate, which is often not the case in practical applications. Therefore, the optimal learning rate for practical applications should be chosen to be smaller than the one derived above. This is one of the reasons why there is a need to add a positive constant C to the denominator of (9.15).

[Figure 9.4: Squared instantaneous prediction error for the RTRL and NRTRL algorithms with speech inputs. Panels: (a) the input speech signal, (b) the standard RTRL algorithm, (c) the normalised RTRL algorithm.]

In Mandic (2000a), a simulation was undertaken on speech, a nonlinear and nonstationary signal, for a nonlinear FIR filter with tap length N = 10, with \eta = 0.3, C = 1 and \beta = 4. The quantitative performance measure was the standard prediction gain, a logarithmic ratio between the expected signal and error variances, R_p = 10 \log(\hat{\sigma}_s^2 / \hat{\sigma}_e^2). For this setting, the prediction gain for the LMS was 7.24 dB, 8.26 dB for the NLMS, 7.67 dB for a nonlinear GD and 9.28 dB for the NNGD algorithm, confirming the analysis from the previous section.

We next compare the performances of FIR filters trained by LMS and NLMS, IIR filters trained by LMS, nonlinear FIR filters trained by NGD and NNGD, and a NARMA recurrent perceptron trained by the RTRL. The order of the FIR filters was N = 10. The input was a white noise sequence passed through an AR channel given by

y(k) = 1.79 y(k-1) - 1.85 y(k-2) + 1.27 y(k-3) - 0.41 y(k-4) + \nu(k),    (9.17)

where \nu(k) denotes the white input noise. The resulting input signal was rescaled so as to fit within the range of the logistic and tanh activation functions. A Monte Carlo simulation with 200 trials was undertaken for all the experiments.

Figure 9.1 shows a comparison between the convergence curves for the LMS, NLMS (Footnote 4), NGD (a standard nonlinear gradient descent) and NNGD algorithms for a coloured input from the AR channel (9.17). The slope of the logistic function was \beta = 4, which partly coincides with the linear curve y = x. The NNGD algorithm for a feedforward dynamical neuron clearly outperforms the other employed algorithms. The NGD algorithm also outperformed the LMS and NLMS algorithms. Figure 9.2 shows the convergence curves for a tanh activation function and the input from the same AR channel. The NNGD algorithm has consistently improved convergence performance over the LMS and NLMS algorithms.

Footnote 4: For numerical stability, the learning rate for NLMS was chosen as \mu(k) = \mu_0 / (\varepsilon + \|x\|_2^2), where \mu_0 < 1 is a positive constant and \varepsilon is some small positive constant that prevents divergence for small \|x\|_2. This explains the better performance of NNGD over NLMS for an input coming from a linear AR channel.
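The coloured input (9.17) and the prediction gain measure are straightforward to reproduce. The sketch below is a minimal illustration; the rescaling factor and the random seed are assumptions for the example, since the text only states that the signal was rescaled to fit the activation-function range.

```python
import numpy as np

rng = np.random.default_rng(1)

def coloured_input(n, scale=0.5):
    """White noise passed through the AR channel (9.17), then rescaled.

    The rescaling to a peak magnitude of `scale` is an illustrative choice.
    """
    v = rng.standard_normal(n)              # white input noise nu(k)
    y = np.zeros(n)
    for k in range(4, n):
        y[k] = (1.79 * y[k - 1] - 1.85 * y[k - 2]
                + 1.27 * y[k - 3] - 0.41 * y[k - 4] + v[k])
    return scale * y / np.max(np.abs(y))

def prediction_gain(signal, error):
    """Standard prediction gain Rp = 10 log10(var(s) / var(e)) in dB."""
    return 10.0 * np.log10(np.var(signal) / np.var(error))
```

Averaging the squared prediction error of each algorithm over independent realisations of coloured_input (200 trials in the text) yields convergence curves of the kind shown in Figures 9.1–9.3.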
Convergence curves for LMS, NLMS, NNGD, IIR LMS and a NARMA(6,1) recurrent perceptron for a correlated input (AR channel) and tanh activation function with \beta = 4 are shown in Figure 9.3. A NARMA recurrent perceptron outperformed all the other algorithms in the simulations. This does not mean, however, that recurrent structures perform best in all practical applications.

9.5 A Normalised Algorithm for Online Adaptation of Recurrent Neural Networks

An output error of a fully connected recurrent neural network can be expanded via a Taylor series expansion as (Mandic and Chambers 2000b)

e(k+1) = e(k) + \sum_{i=1}^{N} \sum_{j=1}^{M+N+1} \frac{\partial e(k)}{\partial w_{i,j}(k)} \Delta w_{i,j}(k) + \frac{1}{2!} \sum_{i=1}^{N} \sum_{m=1}^{M+N+1} \sum_{j=1}^{N} \sum_{n=1}^{M+N+1} \frac{\partial^2 e(k)}{\partial w_{i,m}(k)\,\partial w_{j,n}(k)} \Delta w_{i,m}(k) \Delta w_{j,n}(k) + \cdots,    (9.18)

where M is the order of the input signal tap delay line and N is the number of neurons. This is a complicated expression and only the first two terms of (9.18) will be considered. Due to the internal feedback in RNNs, the partial derivatives \partial e(k)/\partial w_{i,j}(k) are not straightforward to calculate (Appendix D). From (9.18), using an approach similar to the one explained for a simple feedforward neural filter and neglecting the higher-order terms in the Taylor series expansion gives

e(k+1) = e(k) - \eta(k) e(k) \sum_{i=1}^{N} \sum_{j=1}^{M+N+1} \left( \frac{\partial y_1(k)}{\partial w_{i,j}(k)} \right)^2 = e(k) - \eta(k) e(k) \sum_{i=1}^{N} \|\Pi_1^{(i)}(k)\|_2^2.    (9.19)

[Figure 9.5: Convergence comparison of averaged squared prediction error for RTRL and NRTRL trained recurrent structures, tanh activation function with \beta = 2 and coloured input. Panels: (a) convergence comparison between RTRL and NRTRL, (b) convergence comparison between RTRL and NRTRL when RTRL fails.]

[Figure 9.6: Convergence of RTRL and NRTRL for nonlinear inputs. Panels: (a) convergence curves for NLMS for N = 10 and RTRL for a NARMA(4,1) recurrent perceptron for a nonlinear input (9.22), logistic activation function with \beta = 4; (b) convergence curves for RTRL and NRTRL for a NARMA(10,2) recurrent perceptron, tanh activation function with \beta = 8, for a nonlinear input (9.23).]

[...] a NARMA(10,2) recurrent perceptron. The NRTRL outperformed the RTRL for this case. Simulations show that the performance of the NRTRL is highly dependent on the choice of the constant C in the denominator of the optimal learning rate. Dependent on the choice of C, the NRTRL can have worse, similar or better performance than the RTRL. However, in most practical cases, C < 1 is a sufficiently good range for the NRTRL.
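The normalisation implied by (9.19) can be sketched as follows, assuming that the output-neuron sensitivities Pi_1^(i)(k) = dy_1(k)/dw_{i,j}(k) have already been produced by a standard RTRL recursion; that recursion is not shown here, and the constant C plays the same role as in (9.16) with a purely illustrative default value.

```python
import numpy as np

def nrtrl_learning_rate(pi_1, C=0.5):
    """Adaptive learning rate suggested by (9.19).

    pi_1 : array of shape (N, M + N + 1) holding the output-neuron
           sensitivities dy_1(k)/dw_{i,j}(k), assumed to come from a
           separate RTRL recursion (not shown here).
    C    : positive constant added to the denominator, in the same role as
           in (9.16); 0.5 is only illustrative (the text suggests C < 1).
    """
    grad_norm_sq = np.sum(pi_1 ** 2)       # sum_i ||Pi_1^(i)(k)||_2^2
    return 1.0 / (C + grad_norm_sq)

def nrtrl_weight_update(W, pi_1, e_k, C=0.5):
    """One normalised RTRL weight update, W(k+1) = W(k) + eta(k) e(k) Pi_1(k),
    for a network whose error is taken at the output neuron y_1.

    W must have the same shape as pi_1, i.e. (N, M + N + 1).
    """
    eta = nrtrl_learning_rate(pi_1, C)
    return W + eta * e_k * pi_1
```

This is a sketch of the learning-rate normalisation only; the full NRTRL additionally requires the usual RTRL recursion for the sensitivity matrices, as noted in the text and Appendix D.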

Date posted: 26/01/2014, 13:20
