
RBF Neural Networks and a new algorithm for training RBF networks

Contents

1. Summary
2. Introduction to function regression
3. RBF neural networks and a new algorithm for training RBF networks
4. Experiments
5. Conclusion

1. SUMMARY

Gaussian radial basis function (RBF) networks are commonly used for interpolating multivariable functions. However, how to choose the number of neurons in the hidden layer and the appropriate centers of the RBFs so as to obtain a good interpolating network is still an open question that attracts the interest of researchers. This report proposes using equally spaced nodes as the centers of the hidden layer, then using k-nearest-neighbour regression to estimate the function value at each center and training the RBF network with a new algorithm. Results show that the generalization of networks trained by this new algorithm is noticeably improved and the running time significantly reduced, especially when the number of nodes is large.

2. FUNCTION REGRESSION

2.1.1 Introduction to regression

Let $D \subset \mathbb{R}^n$ and let $f: D \to \mathbb{R}^m$ be a multivariable function on $D$. We know $f$ only on a set $T \subset D$ of $N$ vectors $x^1, x^2, \dots, x^N$, where $f(x^i) = y^i$ for $i = 1, 2, \dots, N$, and we must estimate $f(x)$ at other points $x = (x_1, \dots, x_n)$ of $D$. We look for a function $\varphi$ on $D$ such that

$$\varphi(x^i) \approx y^i, \quad \forall i = 1, \dots, N \qquad (1)$$

and use $\varphi(x)$ instead of $f(x)$. When $m > 1$, the interpolation problem is equivalent to $m$ problems of interpolating $m$ real-valued multivariable functions, so we only need to work with $m = 1$.

2.1.2 k-nearest-neighbour (k-NN) regression

In this method one chooses a natural number $k$. For each $x = (x_1, \dots, x_n) \in D$ we determine $\varphi(x)$ from $f$ at the $k$ nodes nearest to $x$, as follows. Let $z^1, \dots, z^k$ be the $k$ vectors in $T$ nearest to $x$ (where $d(u, v)$ is the distance between $u$ and $v$ in $D$). Then $\varphi(x)$ is defined by

$$\varphi(x) = \rho_0 + \sum_{j=1}^{n} \rho_j x_j \qquad (2)$$

where the $\rho_j$ are chosen so that the sum of squared errors over $z^1, \dots, z^k$,

$$\Sigma = \frac{1}{2} \sum_{i=1}^{k} \left( \varphi(z^i) - f(z^i) \right)^2 = \frac{1}{2} \sum_{i=1}^{k} \left( \rho_0 + \sum_{j=1}^{n} \rho_j z^i_j - f(z^i) \right)^2,$$

is smallest. We find the parameters $\rho_j$ from the system of equations $\partial \Sigma / \partial \rho_j = 0$, that is,

$$\sum_{i=1}^{k} \left( \varphi(z^i) - f(z^i) \right) = 0 \qquad (3)$$

and

$$\sum_{i=1}^{k} \left( \varphi(z^i) - f(z^i) \right) z^i_j = 0, \quad j = 1, \dots, n. \qquad (4)$$

Solving the system (3)-(4) for each $x$, we obtain the corresponding parameters and evaluate $\varphi(x)$ as in (2).

2.2 THE IDEA AND SOLUTION OF THE APPROXIMATE-INTERPOLATION PROBLEM WITH WHITE-NOISE DATA

With training on equally spaced nodes, the one-phase HDH algorithm (HDH-1) can be applied in many applications that need fast training, such as computer graphics and pattern recognition. To exploit this advantage fully, Hoàng Xuân Huấn suggested using HDH-1 to solve the interpolation problem with noisy, unequally spaced data. The idea is:

Step 1: Based on the unequally spaced nodes and their measured values with white noise, use a regression method to create a new data set of equally spaced nodes on a grid spanning the range of the original unequally spaced nodes. The value of each new equally spaced node is noise-reduced.

Step 2: Train the RBF network on the new data with HDH-1; we obtain a network that not only approximately interpolates the function but also reduces the noise.

Figure 1: The grid of nodes based on the original values of the unequally spaced nodes.

The figure above illustrates the two-dimensional case: the grid of new equally spaced nodes (red circles) is based on the range of values of the original nodes (blue triangles). The value of each grid node (circle) is computed by regression on the values of its k nearest original nodes (triangles).
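To make the k-NN regression of Section 2.1.2 concrete, here is a minimal Python sketch, assuming NumPy, the Euclidean distance for $d(u, v)$, and a single real-valued output ($m = 1$); the function name and array layout are illustrative, not from the original report.

```python
import numpy as np

def knn_linear_regression(x, X, y, k):
    """Estimate phi(x) by fitting the linear model (2), by least squares,
    to the k nodes of X (an N x n array) nearest to x, with values y (N,)."""
    # z^1, ..., z^k: the k nodes nearest to x under the Euclidean distance
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    Z, f = X[idx], y[idx]
    # Design matrix [1, z_1, ..., z_n]; lstsq solves the normal equations (3)-(4)
    A = np.hstack([np.ones((k, 1)), Z])
    rho, *_ = np.linalg.lstsq(A, f, rcond=None)
    return rho[0] + rho[1:] @ x  # phi(x) = rho_0 + sum_j rho_j * x_j
```

In Step 1 of Section 2.2, an estimator of this kind would be evaluated once at each grid node to produce the noise-reduced values.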
The RBF network is then trained by the one-phase HDH algorithm, with the new equally spaced nodes (circles) and the noise-reduced value of each node as input data.

2.3 The multivariable function approximation problem

Approximating a multivariable function is the common, general problem; interpolation is a special case of it. In the interpolation problem, the interpolating function must take exactly the given values at the given nodes. When the number of nodes is large, determining the interpolating function $\varphi$ becomes more complex, so we accept approximate values at the given nodes and choose a simple function form for which the error is as small as possible.

The problem is stated as follows: a function $y = f(x)$ is measured at the nodes $\{x^k\}_{k=1}^{N}$ in $D \subset \mathbb{R}^n$, so that $y^k = f(x^k)$ for all $k = 1, \dots, N$, with $x^k = (x^k_1, \dots, x^k_n) \in D$ and $y^k \in \mathbb{R}^m$. To approximate $f(x)$ we need a function of a given form such that the error at each node is as small as possible. The chosen function is usually $\varphi(x) = \Phi(x, c_1, c_2, \dots, c_k)$, and the error with respect to the parameters $c_1, c_2, \dots, c_k$ is usually measured by the least-squares criterion

$$\sum_{i=1}^{N} \left\| \varphi(x^i) - y^i \right\|^2, \quad \text{where } \left\| \varphi(x^i) - y^i \right\|^2 = \sum_{j=1}^{m} \left( \varphi_j(x^i) - y^i_j \right)^2.$$

Then $\varphi(x)$ is considered the best approximation of $f(x)$ in the least-squares sense. The graph of $y = \varphi(x)$ need not pass through every node, as it must in interpolation. In many cases this minimization ends in a local minimum; to avoid this, one usually uses an iterative method with re-initialized parameters in each loop to reach the global least-squares solution.

In some cases the number of nodes $N$ is huge; to reduce the computation, instead of summing over $i = 1, \dots, N$ one can sum over $i = 1, \dots, M$ with $M < N$ and minimize $\sum_{i=1}^{M} \left\| \varphi(z^i) - f(z^i) \right\|^2$, where the set $\{z^k\}_{k=1}^{M}$ consists of the nodes nearest to $x$. This is a local method, and the function $\Phi(x, c_1, \dots, c_k)$ is chosen to be linear.

3. RBF NETWORKS AND THE QHDH TRAINING ALGORITHM

RBF networks are networks with three layers (two neuron layers). Each neuron in the hidden layer is a nonlinear function of the distance between the input vector $x$ and the center vector $v^j$ associated with neuron $j$, with radius $\sigma_j$. Combining the input vector $x$ with the center vectors $v^j$ yields a matrix of distance-function values with the corresponding radii, and this matrix is used to compute the weights of the neurons in the network.

3.1 Radial basis functions

3.1.1 The multivariable interpolation problem with the RBF approach

Consider a multivariable function $f: D (\subset \mathbb{R}^n) \to \mathbb{R}^m$ given by the nodes $\{x^k, y^k\}_{k=1}^{N}$ ($x^k \in \mathbb{R}^n$, $y^k \in \mathbb{R}^m$) such that $f(x^k) = y^k$, $k = 1, \dots, N$. We need to find a function $\varphi$ of the given form

$$\varphi(x) = \sum_{k=1}^{M} w_k\, h\!\left( \left\| x - v^k \right\|, \sigma_k \right) + w_0, \quad \text{with } \varphi(x^k) = y^k,\ \forall k = 1, \dots, N, \qquad (3.1)$$

where $\{x^k\}_{k=1}^{N}$ is the set of $n$-dimensional vectors (the interpolation nodes) and $y^k = f(x^k)$ is the measured value of the function $f$ to be interpolated; the real function $h(\|x - v^k\|, \sigma_k)$ is called a radial basis function (RBF) with center $v^k$ and radius $\sigma_k$; $M\ (\le N)$ is the number of radial basis functions used to determine $f$; and $w_k$ and $\sigma_k$ are the parameters to be found.

3.1.2 The radial basis function technique

Consider the interpolation problem with $m = 1$ and a set of interpolation nodes that is not too large. We look for the function $\varphi$ in the form

$$\varphi(x) = \sum_{k=1}^{N} w_k \varphi_k(x) + w_0 \qquad (3.2)$$

where $\varphi_k(x)$ is a radial basis function. There are many different radial basis functions; the most widely used is the Gaussian. The following formula introduces the technique with the Gaussian RBF:

$$\varphi_k(x) = e^{-\left\| x - v^k \right\|^2 / \sigma_k^2}, \quad \forall k = 1, \dots, N. \qquad (3.3)$$
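As a minimal sketch of evaluating the model (3.2) with the Gaussian basis (3.3), assuming NumPy arrays for the centers, widths, and weights (the names are illustrative):

```python
import numpy as np

def rbf_eval(x, centers, sigmas, w, w0=0.0):
    """phi(x) = w0 + sum_k w_k * exp(-||x - v^k||^2 / sigma_k^2), eqs. (3.2)-(3.3)."""
    r2 = np.sum((centers - x) ** 2, axis=1)    # ||x - v^k||^2 for every center
    return w0 + w @ np.exp(-r2 / sigmas ** 2)  # weighted Gaussian RBF outputs
```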
In (3.2) and (3.3) we have:

• $\|\cdot\|$ is the Euclidean distance, $\|u\| = \left( \sum_{i=1}^{n} u_i^2 \right)^{1/2}$.

• $v^k$ is the center of the RBF $\varphi_k$. The centers are the interpolation nodes: $v^k = x^k$ for all $k$, so $M = N$ (more detail in chapter 3 of [13]).

• The parameters $w_k$ and $\sigma_k$ must be found such that $\varphi$ satisfies the interpolation conditions (3.1):

$$\varphi(x^i) = \sum_{k=1}^{N} w_k \varphi_k(x^i) + w_0 = y^i, \quad \forall i = 1, \dots, N. \qquad (3.4)$$

For each $k$, the parameter $\sigma_k$ controls the effective range of the RBF $\varphi_k$: when $\|x - v^k\| > 3\sigma_k$, $\varphi_k(x)$ is tiny and negligible.

Consider the $N \times N$ square matrix $\Phi = (\varphi_{k,i})_{N \times N}$ with

$$\varphi_{k,i} = \varphi_k(x^i) = e^{-\left\| x^i - x^k \right\|^2 / \sigma_k^2}. \qquad (3.5)$$

For given parameters $\sigma_k$, Micchelli [14] proved that $\Phi$ is invertible and positive definite if the nodes $x^k$ are distinct. Therefore, for any given $w_0$, the system of equations (3.4) always has a unique solution $w_1, \dots, w_N$. The sum of squared errors is defined by

$$E = \sum_{i=1}^{N} \left( \varphi(x^i) - y^i \right)^2. \qquad (3.6)$$

The general approximation capability and the best approximation by radial basis functions are investigated in [22][31][54]. The interpolating function has the advantage that the sum of squared errors $E$ is always at the global minimum (page 98 of [13]). Based on this conclusion, algorithms have been proposed for interpolating and approximating functions by minimizing the sum of squares or by solving the system of equations [49].

3.1.3 Some radial basis functions

The nonlinear radial basis function $f$ can be chosen among the following:

Gaussian function:

$$f(x) = e^{-(x - c)^2 / r^2} \qquad (3.7)$$

where $c \in \mathbb{R}$ is the center of the RBF and $r$ is its radius. The value of the Gaussian RBF increases as $x$ gets closer to the center, as shown in Figure 3.1.

Figure 3.1: Gaussian RBF with r = 1 and c = 0.

Multiquadric function:

$$f(x) = \left( (x - c)^2 + r^2 \right)^{1/2} \qquad (3.8)$$

Figure 3.2: Multiquadric RBF with r = 1 and c = 0.

Inverse multiquadric function:

$$f(x) = \left( (x - c)^2 + r^2 \right)^{-1/2} \qquad (3.9)$$

Figure 3.3: Inverse multiquadric RBF with r = 1 and c = 0.

Cauchy function:

$$f(x) = r \left( (x - c)^2 + r^2 \right)^{-1} \qquad (3.10)$$

Figure 3.4: Cauchy RBF with r = 1 and c = 0.

3.2 RBF network structure

An RBF neural network has three layers (two neuron layers), with radial basis transfer functions. The structure includes:

i) an input layer with $n$ nodes for the input vector $x \in \mathbb{R}^n$;

ii) a hidden layer with $M$ neurons, each neuron $k$ having a center $v^k$; the output of the neuron is the corresponding RBF value $\varphi_k$;

iii) an output layer with $m$ neurons holding the output values.

The transfer functions in the hidden layer are RBFs, and the output transfer function is linear.
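The matrix technique above can be sketched directly: assemble $\Phi$ from (3.5) and solve the interpolation conditions (3.4) for the weights. Since $\Phi$ is invertible for distinct nodes (Micchelli's theorem), a plain linear solve suffices. This illustrates the system being solved, not the QHDH algorithm itself.

```python
import numpy as np

def solve_interpolation_weights(X, y, sigmas, w0=0.0):
    """Centers v^k = x^k: build Phi[i, k] = exp(-||x^i - x^k||^2 / sigma_k^2)
    as in (3.5) and solve (3.4) for the unique weights w_1, ..., w_N."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)  # pairwise squared distances
    Phi = np.exp(-d2 / sigmas[None, :] ** 2)  # column k uses sigma_k
    return np.linalg.solve(Phi, y - w0)       # exact interpolation weights
```

With these weights the error $E$ of (3.6) is zero up to round-off, consistent with the global-minimum property cited above.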
Step 3: Apply QHDH to train the RBF network.

In this way we obtain an RBF network that approximates $f$ on $D$. The procedure is demonstrated in Figure 3:

Procedure RBF network construction for function approximation
    Choose $k$ and an equidistant grid of landmarks $\{z^j\}_{j=1}^{M}$ on $B$
    Calculate the approximate values $\{\tilde{f}(z^j)\}_{j=1}^{M}$    // by the k-NN method
    Train the approximation RBF network    // QHDH algorithm
End

Figure 3: One-phase iterative algorithm for training an RBF network with equidistant nodes.

4. Experimental results

We carried out experiments comparing the approximation error on data taken from http://www.liaad.up.pt/~ltorgo/Regression/cal_housing.html, described in part as follows: "… in a geographically compact area. Naturally, the geographical area included varies inversely with the population density. We computed distances among the centroids of each block group as measured in latitude and longitude. We excluded all the block groups reporting zero entries for the independent and dependent variables. The final data contained 20,640 observations on 9 variables. The dependent variable …"

Networks were trained and their errors compared. The effectiveness of the algorithm is compared with the GGAP method of Guang-Bin Huang and colleagues; the experimental results show that, as k increases, the error of the network trained by the proposed k-NN-based method is better than that of GGAP.

5. Conclusion

The new method combines k-NN linear regression with the QHDH algorithm for training RBF networks, giving a multivariate function approximation network whose experimentally demonstrated performance is very promising. In the future we will apply it to real data in pattern recognition problems to test its effectiveness in applications.
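Putting the pieces together, the following hedged sketch mirrors the Figure 3 procedure, reusing the helpers sketched in earlier sections. The visible text does not spell out QHDH itself, so the direct solve of (3.4) stands in for the training phase, and the grid resolution and common width sigma are illustrative parameters.

```python
import numpy as np
from itertools import product

def equidistant_grid(X, nodes_per_dim):
    """Equally spaced landmarks {z^j} on the bounding box B of the data."""
    axes = [np.linspace(X[:, d].min(), X[:, d].max(), nodes_per_dim)
            for d in range(X.shape[1])]
    return np.array(list(product(*axes)))

def build_rbf_approximator(X, y, k, nodes_per_dim, sigma):
    """Figure 3 procedure: grid landmarks, k-NN values, RBF training."""
    Z = equidistant_grid(X, nodes_per_dim)                          # choose landmarks on B
    f_Z = np.array([knn_linear_regression(z, X, y, k) for z in Z])  # values by k-NN
    sigmas = np.full(len(Z), float(sigma))
    w = solve_interpolation_weights(Z, f_Z, sigmas)                 # stand-in for QHDH
    return Z, sigmas, w
```

On data such as the cal_housing set used in the experiments, the inputs would typically be rescaled before gridding so that a single sigma is meaningful in every dimension.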
REFERENCES

1. R. H. Bartels, J. C. Beatty, B. A. Barsky, An Introduction to Splines for Use in Computer Graphics & Geometric Modeling, Morgan Kaufmann, 1987.
2. E. Blanzieri, "Theoretical Interpretations and Applications of Radial Basis Function Networks", Technical Report DIT-03-023, Informatica e Telecomunicazioni, …
3. M. D. Buhmann, Radial Basis Functions: Theory and Implementations, Cambridge University Press, 2004.
4. M. J. D. Powell, "Radial basis function approximations to polynomials", in: Proc. Numerical Analysis 1987, Dundee, UK, 1988, pp. 223–241.
5. D. S. Broomhead and D. Lowe, "Multivariable functional interpolation and adaptive networks", Complex Systems, vol. 2, 1988, pp. 321–355.
6. J. Park and I. W. Sandberg, "Approximation and …
…, Theory and Algorithm for Engineers and Scientists, Oxford University Press, New York, 1997.
10. M. Bortman, M. A. Aladjem, "Growing and Pruning Method for Radial Basis Function Networks", IEEE Transactions on Neural Networks, vol. 20, no. 6, 2009, pp. 1039–1045.
11. J. P.-F. Sum, C.-S. Leung, K. I.-J. Ho, "On Objective Function, Regularizer, and Prediction Error of a Learning Algorithm for Dealing With Multiplicative …", IEEE Transactions on Neural Networks, vol. 20, no. 1, 2009, pp. 124–138.
12. Hoang Xuan Huan, Dang Thi Thu Hien and Huu Tue Huynh, "A Novel Two-Phase Efficient Algorithm for Training Interpolation Radial Basis Function Networks", Signal Processing, vol. 87, no. 11, November 2007, pp. 2708–2717.
13. Hoang Xuan Huan, Dang Thi Thu Hien and Huu Tue Huynh, "An efficient algorithm for training interpolation RBF networks …
16. C. A. Micchelli, "Interpolation of scattered data: distance matrices and conditionally positive definite functions", Constructive Approximation, vol. 2, 1986, pp. 11–22.
17. M. H. Hassoun, Fundamentals of Artificial Neural Networks, MIT Press, Cambridge, MA, 1995.
19. O. Rudenko, O. Bezsonov, "Function Approximation … Radial Basis Function Networks", Journal of Intelligent Learning Systems and Applications, vol. 3, no. 1, February 2011, pp. 17–25.
20. T. Ando, S. Konishi and S. Imoto, "Nonlinear regression modeling via regularized radial basis function network", Journal of Statistical Planning and Inference, vol. 138, no. 11, November 2008, pp. 3616–3633.
21. C. M. Bishop, Pattern Recognition and Machine Learning, …
22. G. E. Fasshauer and J. G. Zhang, "On Choosing 'Optimal' Shape Parameters for RBF Approximation", Numerical Algorithms, vol. 45, nos. 1–4, pp. 345–368.
