Neural Network Control

Daniel Eggert
24th February 2003

Technical University of Denmark
Informatics and Mathematical Modelling
Building 321, DK-2800 Lyngby, Denmark
Phone +45 4525 3351, Fax +45 4588 2673
reception@imm.dtu.dk
www.imm.dtu.dk

IMM-THESIS: ISSN 1601-233X

Abstract

This thesis addresses two neural network based control systems. The first is a neural network based predictive controller; system identification and controller design are discussed. The second is a direct neural network controller; parameter choice and training methods are discussed. Both controllers are tested on two different plants, and problems regarding implementation are discussed.

First, the neural network based predictive controller is introduced as an extension to the generalised predictive controller (GPC) to allow control of non-linear plants. The controller design includes the GPC parameters, but prediction is done explicitly by using a neural network model of the plant. System identification is discussed. Two control systems are constructed for two different plants: a coupled tank system and an inverted pendulum. This shows how implementation aspects such as plant excitation during system identification are handled. Limitations of this controller type are discussed and shown on the two implementations.

In the second part of this thesis, the direct neural network controller is discussed. An output feedback controller is constructed around a neural network, and the controller parameters are determined using system simulations. The control system is applied as a single-step-ahead controller to two different plants. One of them is a path-following problem in connection with a reversing trailer truck; this system illustrates an approach with step-wise increasing controller complexity to handle the unstable control object. The second plant is a coupled tank system, and a comparison is made with the first controller.

Both controllers are shown to work, but for the neural network based predictive controller the construction of a neural network model of high accuracy is critical, especially when long prediction horizons are needed. This limits application to plants that can be modelled to sufficient accuracy. The direct neural network controller does not need a model. Instead, the controller is trained on simulation runs of the plant. This requires careful selection of training scenarios, as these scenarios have an impact on the performance of the controller.

Lyngby, 24th February 2003
Daniel Eggert
Contents

1 Introduction
  1.1 Overview
  1.2 The Neural Network
    1.2.1 The Two-Layer Perceptron
    1.2.2 Training

2 Neural Network Model Based Predictive Control
  2.1 Generalised Predictive Control
    2.1.1 The Control Law for Linear Systems
    2.1.2 Non-linear Case
    2.1.3 Time Series Prediction with Neural Networks
  2.2 System Identification
    2.2.1 Error Function
    2.2.2 Error Back-propagation
    2.2.3 Pre-processing and Post-processing
    2.2.4 Model Order Selection
    2.2.5 Regularisation
    2.2.6 Neural Network Training
  2.3 Implementation of the Coupled Tank System Controller
    2.3.1 System Description
    2.3.2 System Identification
    2.3.3 Performance
    2.3.4 Discussion
  2.4 Implementation of the Acrobot Controller
    2.4.1 System Description
    2.4.2 System Identification
    2.4.3 Performance
    2.4.4 Discussion and Improvements
  2.5 Chapter Discussion

3 Direct Neural Network Control
  3.1 Controller Design
    3.1.1 Model Order Selection
    3.1.2 Neural Network Training
  3.2 Implementation of the Reversing Trailer Truck
    3.2.1 System Description
    3.2.2 Bezier Path Implementation
    3.2.3 Training the Neural Network
    3.2.4 Performance
    3.2.5 Re-training the Neural Network
    3.2.6 Performance, Revisited
    3.2.7 Discussion
  3.3 Implementation of the Coupled Tank System
    3.3.1 Neural Network Training
    3.3.2 Performance
    3.3.3 Discussion
  3.4 Chapter Discussion

4 Conclusion
  4.1 Neural Network Model Based Predictive Controller
  4.2 Direct Neural Network Controller
  4.3 Future Work

A Matlab Source Code
  A.1 Neural Network Model Based Predictive Control
    A.1.1 wb.m
    A.1.2 runsim.m
    A.1.3 plant.m
    A.1.4 pcontrol.m
    A.1.5 plantmodel.m
    A.1.6 nnmodel.m
    A.1.7 createtrainset.m
    A.1.8 sqwave.m
    A.1.9 humantime.m
    A.1.10 variations.m
  A.2 Direct Neural Network Control
    A.2.1 wb.m
    A.2.2 scalar2weights.m
    A.2.3 weights2scalar.m
    A.2.4 fmincost.m
    A.2.5 runsim.m
    A.2.6 lagnnout.m
    A.2.7 nnout.m
    A.2.8 plant.m
    A.2.9 sd2xy.m
    A.2.10 bezierinit.m
    A.2.11 bezierxy.m
    A.2.12 bezierxyd.m
    A.2.13 bezierxydd.m
    A.2.14 bezierlength.m
    A.2.15 beziercurvature.m

Bibliography

Index

Chapter 1: Introduction

The present thesis illustrates the application of the feed-forward network to control systems with non-linear plants. This thesis focuses on two conceptually different approaches to applying neural networks to control systems. Many areas of control systems exist in which neural networks can be applied, but the scope of this thesis limits the focus to the following two approaches.

The first application uses the neural network for system identification. The resulting neural network plant model is then used in a predictive controller. This is discussed in chapter 2.

The other control system uses neural networks in a very different way. No plant model is created; instead, the neural network is used to directly calculate the control signal. This is discussed in chapter 3.

Both chapters discuss theoretical aspects and then apply the control system to two separate plants.

It is important to note that this thesis is not self-contained. Many aspects are merely touched upon, and others are not covered here at all. Instead, this thesis tries to give an overall picture of the two approaches and some of their strengths and pitfalls.

Due to the limited scope of this thesis, no attempt has been made to discuss the issues of noise in conjunction with the described control systems; we assume that the data sources are deterministic. Likewise, the stability of the presented controllers will not be discussed. Finally, it is worth noting that the plants used throughout this thesis are merely mathematical models in the form of ordinary differential equations.
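The two approaches can be contrasted in a short simulation sketch. The code below is purely illustrative and is not taken from the thesis: plant_step, nn_model, nn_controller and model_predict are hypothetical stand-ins (simple anonymous functions instead of trained networks and plant models), the predictive part optimises a single constant future input instead of a full control-horizon sequence, and the direct controller ignores the lagged inputs and outputs that a real output feedback controller would use.

% Illustrative sketch only (not thesis code): contrast the two approaches
% on a toy first-order plant.
plant_step = @(y,u) 0.9*y + 0.1*u;     % "true" plant, one time step
nn_model   = @(y,u) 0.9*y + 0.1*u;     % stand-in for the identified NN model
r   = 1.0;                             % constant reference
N2  = 5;                               % prediction (costing) horizon
rho = 0.01;                            % control weight

% (a) Neural network model based predictive control: at every step, pick the
%     input that minimises a GPC-like cost computed by simulating the *model*
%     N2 steps ahead, then apply that input to the plant.
y = 0; u = 0;
for k = 1:20
    cost = @(uc) sum(arrayfun(@(j) (r - model_predict(nn_model,y,uc,j))^2, 1:N2)) ...
                 + rho*(uc - u)^2;
    u = fminsearch(cost, u);           % minimise over the future input
    y = plant_step(y, u);              % apply it to the plant
end
fprintf('predictive controller: final output %.3f\n', y);

% (b) Direct neural network control: the (trained) network itself maps the
%     reference and the measured output to the control signal.
nn_controller = @(r,y) 10*(r - y);     % stand-in for the controller network
y = 0;
for k = 1:20
    u = nn_controller(r, y);
    y = plant_step(y, u);
end
fprintf('direct controller: final output %.3f\n', y);

function yj = model_predict(model, y, u, j)
% iterate the model j steps ahead with a constant input u
yj = y;
for i = 1:j
    yj = model(yj, u);
end
end

The point of the sketch is only the structural difference: the first loop needs a model that can be iterated forward in time, while the second needs nothing but the trained controller network itself.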
Appendix A: Matlab Source Code

A.2.9 sd2xy.m

function [x, y] = sd2xy(s, d)
% SD2XY   Convert path-relative coordinates into cartesian coordinates.
%
%   [X, Y] = SD2XY(S, D) converts the path-relative coordinates (S, D)
%   into the cartesian coordinates (X, Y). S is the distance along the
%   path and D the perpendicular distance to the path (positive D
%   corresponds to the right side of the path).
%   The path has to be initialised by BEZIERINIT.

debug = 0;

% find coordinate of s
[x, y] = bezierxy(s);

% get tangent direction
[xd, yd] = bezierxyd(s);

% norm vector: sqrt(xd^2 + yd^2) = 1
l  = sqrt(xd^2 + yd^2);
xd = xd/l;
yd = yd/l;

% offset the point perpendicular to the path
x = x - d*yd;
y = y + d*xd;

A.2.10 bezierinit.m

function bezierinit(N, option)
% BEZIERINIT   Initialise bezier variables
%
%   BEZIERINIT will initialise the global variables needed for the
%   other bezier functions to work. Data is calculated and stored to
%   the file 'bezier.mat'. If it exists upon function call, the file
%   will be read instead of re-calculation.
%
%   BEZIERINIT(N)   The optional parameter N specifies the average
%   points per bezier section to calculate and defaults to 100.
%   Higher N yields higher precision.
%
%   NOTE: Current implementation will not include the last point of
%   the bezier curve.

if nargin < 2, curve = 1; else curve = option; end
if nargin < 1, N = 100; end

clear global BEZIER_X
clear global BEZIER_Y
global BEZIER_X BEZIER_Y
global BEZIER_S BEZIER_T BEZIER_DSDT

% bezier curve parameters
% Each row contains [position derivative] for the given curve point.
% X(n,:) and Y(n,:) specify the n'th curve point and derivative at
% that point. Unit is meters.
% Currently # is used for training and # for test.
switch curve
case 1
    X = [ 1.5 ; ; 3 ];
    Y = [ 0.5 1.5 ; -1.5 ; 0 ];
case 2   % 45-degree line
    X = [ ; ];
    Y = [ ; ];
case 3   % circle
    a = 1.675;
    X = [ a ; ; -a ; -1 ; a ];
    Y = [ 0 0 ; a ; ; -a ; 0 0 ];
case 4   % S
    X = [ -1.5 ; -1 ; 0 ; 0 ; 1 ; 1.5 ]*15;
    Y = [ ; ; -2 ; -.5 -2 ; -.5 ; ]*15;
case 5   % 'soft' turn
    X = [ ; ];
    Y = [ 0.1 ; -0.1 ];
case 6   % 'hat'
    a = 16.75;
    X = [ a ; 10 ; 20 a ; 30 a ; 40 ; 50 a ];
    Y = [ 0 0 ; 10 a ; 20 ; 20 ; 10 -a ; 0 0 ];
case 7   % times S (used for training)
    a = ; b = a/ ; c = a* ; d = a/ ;
    X = [ -40 d ; -22 d ; -20 ; -20 ; -18 d ];
    Y = [ 0 0 ; 0 0 ; 18 -d ; -d ; 0 0 ];
    X = [ X ; a ; 10 ; 10 ; 20 a ; 25 ; 30 b ; 35 ];
    Y = [ Y ; 0 0 ; 10 a ; 20 a ; 30 ; 30 ; 30 ; 25 -b ];
    X = [ X ; 35 ; 40 b ; 50 c ; 70 ; 90 c ];
    Y = [ Y ; 15 -b ; 0 0 ; 0 0 ; c* ; 0 0 ];
case 8
    X = [ ; 19 ; 21 ; 38 ; 42 ; 60 ];
    Y = [ ; ; - ; -2 ; 2 ; 2 ];
case 9
    a = 28; b = a/2; c = b/2;
    X = [ 0 0 ; ; 20 a ];
    Y = [ -20 c ; -10 a ; a/10 ];
    X = [ X ; 35 c ; 40 ; 0 0 ; 35 -c ];
    Y = [ Y ; 20 ; 15 -c ; -c ; -1 ];
    X = [ X ; 25 -b ; 15 ; 15 b/ ];
    Y = [ Y ; ; -11 -b ; -20 -b ];
case 10
    a = 20; b = 40; c = 5;
    X = [ a ; 20 ; 23 c ];
    Y = [ 0 0 ; 20 b ; 50 b ];
case 11   % times S (used for training)
    a = ; b = a/ ; c = a* ;
    X = [ a ; 10 ; 10 ; 20 a ; 25 ; 30 b ; 35 ];
    Y = [ 0 0 ; 10 a ; 20 a ; 30 ; 30 ; 30 ; 25 -b ];
    X = [ X ; 35 ; 40 b ; 50 c ; 70 ; 90 c ];
    Y = [ Y ; 15 -b ; 0 0 ; 0 0 ; c* ; 0 0 ];
otherwise   % x-axis
    X = [ 10 ; 20 10 ];
    Y = [ 0 0 ; 0 0 ];
end

% check if data saved to file is the same
try load bezier; end

% the 1st IF is a patch for the second: when dimensions don't agree,
% it will trigger the second IF.
if sum(size(BEZIER_X) ~= size(X)) | sum(size(BEZIER_Y) ~= size(Y)),
    BEZIER_X = rand(size(X));
    BEZIER_Y = rand(size(Y));
end

if sum(sum(BEZIER_X ~= X)) | sum(sum(BEZIER_Y ~= Y)) | (size(BEZIER_S,2) ~= N)
    % Data in file differs. Calculate new curve data.

    % copy bezier curve parameters and clear old arrays
    BEZIER_X = X;
    BEZIER_Y = Y;
    clear global BEZIER_S
    clear global BEZIER_T
    clear global BEZIER_DSDT
    clear global BEZIER_A
    clear global BEZIER_B
    global BEZIER_A BEZIER_B
    global BEZIER_a BEZIER_b
    global BEZIER_S BEZIER_T BEZIER_DSDT

    sections = size(BEZIER_X,1) - 1;
    options  = optimset;

    % Calculate total path length
    S = 0;
    for section = 1:sections,
        % get bezier curve parameters for this section
        x0  = BEZIER_X(section,   1);
        xd0 = BEZIER_X(section,   2);
        x1  = BEZIER_X(section+1, 1);
        xd1 = BEZIER_X(section+1, 2);
        y0  = BEZIER_Y(section,   1);
        yd0 = BEZIER_Y(section,   2);
        y1  = BEZIER_Y(section+1, 1);
        yd1 = BEZIER_Y(section+1, 2);
        BEZIER_a = [ x0 xd0 (3*(x1-x0) - (2*xd0 + xd1)) (2*(x0-x1) + xd0 + xd1) ];
        BEZIER_b = [ y0 yd0 (3*(y1-y0) - (2*yd0 + yd1)) (2*(y0-y1) + yd0 + yd1) ];
        BEZIER_A(section,:) = BEZIER_a;
        BEZIER_B(section,:) = BEZIER_b;
        sectionLength(section) = pathLength(1);
        S = S + sectionLength(section);
    end

    % start with section 1
    section = 1;

    % get bezier curve parameters for this section
    x0  = BEZIER_X(section,   1);
    xd0 = BEZIER_X(section,   2);
    x1  = BEZIER_X(section+1, 1);
    xd1 = BEZIER_X(section+1, 2);
    y0  = BEZIER_Y(section,   1);
    yd0 = BEZIER_Y(section,   2);
    y1  = BEZIER_Y(section+1, 1);
    yd1 = BEZIER_Y(section+1, 2);
    BEZIER_a = [ x0 xd0 (3*(x1-x0) - (2*xd0 + xd1)) (2*(x0-x1) + xd0 + xd1) ];
    BEZIER_b = [ y0 yd0 (3*(y1-y0) - (2*yd0 + yd1)) (2*(y0-y1) + yd0 + yd1) ];
    BEZIER_A(section,:) = BEZIER_a;
    BEZIER_B(section,:) = BEZIER_b;

    % Determine how often to update progress
    nCount = floor(N/100);
    if nCount == 0, nCount = 1; end

    for n = 1:N,
        if mod(n, nCount) == 0,
            fprintf(['initialising curve data for segment %d with ' ...
                     '%d steps: %3d%% done\r'], section, N, ...
                    round(100*(n-1)/(N-1)));
        end

        % position along segment
        s = S*(n-1)/(N-1);
        BEZIER_S(n) = s;

        % check if we entered the next section
        if (s > sum(sectionLength(1:section))) & (section < sections),
            section = section + 1;

            % get bezier curve parameters for this section
            x0  = BEZIER_X(section,   1);
            xd0 = BEZIER_X(section,   2);
            x1  = BEZIER_X(section+1, 1);
            xd1 = BEZIER_X(section+1, 2);
            y0  = BEZIER_Y(section,   1);
            yd0 = BEZIER_Y(section,   2);
            y1  = BEZIER_Y(section+1, 1);
            yd1 = BEZIER_Y(section+1, 2);
            BEZIER_a = [ x0 xd0 (3*(x1-x0) - (2*xd0 + xd1)) (2*(x0-x1) + xd0 + xd1) ];
            BEZIER_b = [ y0 yd0 (3*(y1-y0) - (2*yd0 + yd1)) (2*(y0-y1) + yd0 + yd1) ];
            BEZIER_A(section,:) = BEZIER_a;
            BEZIER_B(section,:) = BEZIER_b;
        end

        % find corresponding t
        oldwarn = warning('off');
        t0 = (s - sum(sectionLength(1:(section-1)))) / sectionLength(section);
        t  = fzero(@distance, t0, options, ...
                   s - sum(sectionLength(1:(section-1))));
        warning(oldwarn);
        BEZIER_T(n) = t + (section-1);

        % estimate ds/dt
        if n == 1,
        elseif n == 2,
            BEZIER_DSDT(n-1) = (BEZIER_S(n-1) - BEZIER_S(n)) / ...
                               (BEZIER_T(n-1) - BEZIER_T(n));
        else
            BEZIER_DSDT(n-1) = .5*(BEZIER_S(n-1) - BEZIER_S(n))   / ...
                                  (BEZIER_T(n-1) - BEZIER_T(n))   + ...
                               .5*(BEZIER_S(n-1) - BEZIER_S(n-2)) / ...
                                  (BEZIER_T(n-1) - BEZIER_T(n-2));
            if n == N,
                BEZIER_DSDT(n) = (BEZIER_S(n) - BEZIER_S(n-1)) / ...
                                 (BEZIER_T(n) - BEZIER_T(n-1));
            end
        end
    end

    % save variables to file
    fprintf('saving to file\r');
    save bezier BEZIER_X BEZIER_Y BEZIER_A BEZIER_B BEZIER_S BEZIER_T BEZIER_DSDT

    fprintf(['                                                            ' ...
             '                    \r']);

    % post clean-up
    clear global BEZIER_a
    clear global BEZIER_b
end


function d = distance(t, s)
d = pathLength(t) - s;

function s = pathLength(t)
s = quad(@speed, 0, t);

function sd = speed(t)
% we're using some global variables here to speed things up
global BEZIER_a BEZIER_b
sd = sqrt( (BEZIER_a(2) + BEZIER_a(3)*2*t + BEZIER_a(4)*3*t.^2).^2 + ...
           (BEZIER_b(2) + BEZIER_b(3)*2*t + BEZIER_b(4)*3*t.^2).^2 );

A.2.11 bezierxy.m

function [x, y] = bezierxy(s)
% BEZIERXY   Calculate x- and y-coordinate of curve point
%
%   [X, Y] = BEZIERXY(S) returns in X the x-coordinate (and in Y the
%   y-coordinate) of the bezier curve initialised by BEZIERINIT. S is
%   the distance along the bezier curve from the beginning of the
%   curve to the point.

global BEZIER_S BEZIER_T BEZIER_DSDT
global BEZIER_A BEZIER_B

% find the position closest to s in the arrays:
% We assume BEZIER_S(1) = 0;
N = size(BEZIER_S,2);
S = BEZIER_S(N);
n = round(s/S*(N-1)) + 1;
limitAt = n < 1;
n = n.*(1-limitAt) + limitAt;
limitAt = n > N;
n = n.*(1-limitAt) + N.*limitAt;

% get estimate for t
dsdt = BEZIER_DSDT(n);
t = BEZIER_T(n) + (s - BEZIER_S(n))./dsdt;

% get section and limit it to the valid range
section = floor(t) + 1;
limitAt = section < 1;
section = section.*(1-limitAt) + limitAt;
limit   = size(BEZIER_A,1);
limitAt = section > limit;
section = section.*(1-limitAt) + limit.*limitAt;

t = t - (section-1);

T = [ ones(size(t)) ; t ; t.^2 ; t.^3 ];

x = sum(BEZIER_A(section,:).*T', 2)';
y = sum(BEZIER_B(section,:).*T', 2)';

A.2.12 bezierxyd.m

function [xd, yd] = bezierxyd(s)
% BEZIERXYD   Calculate derivative of curve point
%
%   [XD, YD] = BEZIERXYD(S) returns in XD and YD the tangent of the
%   bezier curve initialised by BEZIERINIT. S is the distance along
%   the bezier curve from the beginning of the curve to the point.
%   XD and YD are the derivatives with respect to the parameter t,
%   such that XD = dx(t)/dt and YD = dy(t)/dt, where t is defined by
%   s = \int^t \sqrt{ x'^2(\tau) + y'^2(\tau) } d\tau

global BEZIER_S BEZIER_T BEZIER_DSDT
global BEZIER_A BEZIER_B

% find the position closest to s in the arrays:
% We assume BEZIER_S(1) = 0;
N = size(BEZIER_S,2);
S = BEZIER_S(N);
n = round(s/S*(N-1)) + 1;
limitAt = n < 1;
n = n.*(1-limitAt) + limitAt;
limitAt = n > N;
n = n.*(1-limitAt) + N.*limitAt;

% get estimate for t
dsdt = BEZIER_DSDT(n);
t = BEZIER_T(n) + (s - BEZIER_S(n))./dsdt;

% get section and limit it to the valid range
section = floor(t) + 1;
limitAt = section < 1;
section = section.*(1-limitAt) + limitAt;
limit   = size(BEZIER_A,1);
limitAt = section > limit;
section = section.*(1-limitAt) + limit.*limitAt;

t = t - (section-1);

T = [ zeros(size(t)) ; ones(size(t)) ; 2*t ; 3*t.^2 ];

xd = sum(BEZIER_A(section,:).*T', 2)';
yd = sum(BEZIER_B(section,:).*T', 2)';
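The coefficient rows that bezierinit.m stores in BEZIER_A and BEZIER_B follow from describing each path section by a cubic polynomial in a local parameter t and matching the positions and derivatives given in the rows of X and Y. The algebra below is standard cubic Hermite interpolation, written out for reference rather than quoted from the thesis. For the x-coordinate of one section,

\[ x(t) = a_1 + a_2 t + a_3 t^2 + a_4 t^3 , \qquad t \in [0, 1] , \]

and imposing \( x(0) = x_0 \), \( x'(0) = \dot{x}_0 \), \( x(1) = x_1 \), \( x'(1) = \dot{x}_1 \) gives

\[ a_1 = x_0 , \quad a_2 = \dot{x}_0 , \quad a_3 = 3(x_1 - x_0) - (2\dot{x}_0 + \dot{x}_1) , \quad a_4 = 2(x_0 - x_1) + \dot{x}_0 + \dot{x}_1 , \]

which is exactly the expression assigned to BEZIER_a (and, with y in place of x, to BEZIER_b). The look-up tables BEZIER_S and BEZIER_T then relate the arc length

\[ s(t) = \int_0^t \sqrt{ \dot{x}(\tau)^2 + \dot{y}(\tau)^2 } \; d\tau \]

to the curve parameter t; this integral is what the helper functions pathLength and speed evaluate with quad.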
A.2.13 bezierxydd.m

function [xdd, ydd] = bezierxydd(s)
% BEZIERXYDD   Calculate second derivative of curve point
%
%   [XDD, YDD] = BEZIERXYDD(S) returns in XDD and YDD the second
%   derivative of the point on the bezier curve initialised by
%   BEZIERINIT. S specifies the point and is the distance along the
%   bezier curve from the beginning of the curve to the point.
%   XDD and YDD are the derivatives with respect to the parameter t,
%   such that XDD = d^2x(t)/dt^2 and YDD = d^2y(t)/dt^2, where t is
%   defined by s = \int^t \sqrt{ x'^2(\tau) + y'^2(\tau) } d\tau

global BEZIER_S BEZIER_T BEZIER_DSDT
global BEZIER_A BEZIER_B

% find the position closest to s in the arrays:
% We assume BEZIER_S(1) = 0;
N = size(BEZIER_S,2);
S = BEZIER_S(N);
n = round(s/S*(N-1)) + 1;
limitAt = n < 1;
n = n.*(1-limitAt) + limitAt;
limitAt = n > N;
n = n.*(1-limitAt) + N.*limitAt;

% get estimate for t
dsdt = BEZIER_DSDT(n);
t = BEZIER_T(n) + (s - BEZIER_S(n))./dsdt;

% get section and limit it to the valid range
section = floor(t) + 1;
limitAt = section < 1;
section = section.*(1-limitAt) + limitAt;
limit   = size(BEZIER_A,1);
limitAt = section > limit;
section = section.*(1-limitAt) + limit.*limitAt;

t = t - (section-1);

T = [ zeros(size(t)) ; zeros(size(t)) ; 2*ones(size(t)) ; 6*t ];

xdd = sum(BEZIER_A(section,:).*T', 2)';
ydd = sum(BEZIER_B(section,:).*T', 2)';

A.2.14 bezierlength.m

function s = bezierlength
% BEZIERLENGTH   Return total length of bezier path
%
%   S = BEZIERLENGTH returns in S the total length of the bezier
%   path initialised by BEZIERINIT.

global BEZIER_S
s = BEZIER_S(end);

A.2.15 beziercurvature.m

function [K] = beziercurvature(s)
% BEZIERCURVATURE   Calculate curvature of curve point
%
%   [K] = BEZIERCURVATURE(S) returns in K the curvature of the bezier
%   path at distance S from the starting point. The bezier path must
%   be initialised by BEZIERINIT.

global BEZIER_S BEZIER_T BEZIER_DSDT
global BEZIER_A BEZIER_B

%[xd,  yd ] = bezierxyd(s);
%[xdd, ydd] = bezierxydd(s);

% find the position closest to s in the arrays:
% We assume BEZIER_S(1) = 0;
N = size(BEZIER_S,2);
S = BEZIER_S(N);
n = round(s/S*(N-1)) + 1;
limitAt = n < 1;
n = n.*(1-limitAt) + limitAt;
limitAt = n > N;
n = n.*(1-limitAt) + N.*limitAt;

% get estimate for t
dsdt = BEZIER_DSDT(n);
t = BEZIER_T(n) + (s - BEZIER_S(n))./dsdt;

% get section and limit it to the valid range
section = floor(t) + 1;
limitAt = section < 1;
section = section.*(1-limitAt) + limitAt;
limit   = size(BEZIER_A,1);
limitAt = section > limit;
section = section.*(1-limitAt) + limit.*limitAt;

t = t - (section-1);

st = size(t);
T  = [ zeros(st) ; ones(st) ; 2*t ; 3*t.^2 ]';
xd = sum(BEZIER_A(section,:).*T, 2);
yd = sum(BEZIER_B(section,:).*T, 2);

T   = [ zeros(st) ; zeros(st) ; 2*ones(st) ; 6*t ]';
xdd = sum(BEZIER_A(section,:).*T, 2);
ydd = sum(BEZIER_B(section,:).*T, 2);

% calculate and return curvature
K = (xd.*ydd - yd.*xdd) ./ ((xd.^2 + yd.^2).^(3/2));
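A minimal usage sketch of these path utilities follows. It is illustrative only and not taken from the thesis; it assumes bezierinit has been initialised with a complete curve table, and it calls the functions with scalar arguments and collects the results in a loop.

% build the look-up tables for the default curve and resolution
bezierinit;

% total path length
L = bezierlength;

% sample the path at one-metre intervals
s = 0:1:floor(L);
x = zeros(size(s)); y = x; k = x;
for i = 1:length(s)
    [x(i), y(i)] = bezierxy(s(i));       % point on the path
    k(i)         = beziercurvature(s(i)); % curvature at that point
end

% a point 0.5 m to the right of the path, 10 m along it
[xr, yr] = sd2xy(10, 0.5);

plot(x, y, xr, yr, 'o'); axis equal;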
Index

acrobot, 38
activation
activation function
back-propagation, 18
Bezier curve, 62
Bezier path, 62
bias
  hidden unit
control horizon, 8, 12
  coupled tank system, 34
control weight
coordinate system (s,d), 58
  trailer truck, 58
cost function
  generalised predictive control
  parametric optimisation, 56
  trailer truck, 69
costing horizon
  coupled tank system, 34
costing weight
  coupled tank system, 34
coupled tank system, 24
credit assignment problem, 18
cross-correlation coefficients, 32
curse of dimensionality, 21
curve, see curvature
curvature, 60
  kinematic equations, 60
early stopping, 23
error function, 17
  modified, 22
forced response
free response
generalised predictive control
hold out method, 23
Isidori, 41
kinematic equations
  acrobot, 39
  trailer truck, 60
lag network
  model based predictive controller, 14
level change at random instances, 17
Levenberg-Marquardt algorithm, 18
linear controller
  using as start guess, 57
model order
  changing, 58
model order selection
  parametric optimisation, 54
  predictive control, 21
multiple correlation coefficient, 31
Neural Network Toolbox, 23
over-fitting, 24
  model order selection, 21
  regularisation, 22
physical parameters
  trailer truck, 60
prediction
  multi-step-ahead, 13
  one-step-ahead, 13
pruning, 22
region of attraction, 40
regularisation, 22
spline, 62
start guess
  parametric optimisation, 57
system identification, 7, 13
  input signal design, 26
tank system, see coupled tank system
target vector, 17
training, neural network, 22
training data, 17
  input signal design, 26
two-layer perceptron
weight decay, 22

... weights for neural network
train network for 50 epochs
end
choose best of above networks
train network using early stopping as termination

Early Stopping

It is important for the network to generalise ...

1.2 The Neural Network

Throughout this thesis we are using the two-layer feed-forward network with sigmoidal hidden units and linear output units. This network structure is by far ...
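The two-layer network named in the excerpt above, and the restart-then-early-stopping training procedure it sketches, can be illustrated with a small self-contained piece of MATLAB. This is not thesis code: the data set, network size, learning rate and patience below are made-up examples, and plain stochastic gradient descent is used only to keep the sketch free of toolbox calls (the thesis relies on more efficient training, e.g. Levenberg-Marquardt via the Neural Network Toolbox, as the index indicates).

% made-up data: identify y = f(x) from noisy samples
x   = linspace(-2, 2, 200);
y   = sin(2*x) + 0.05*randn(size(x));
itr = 1:2:length(x);  iva = 2:2:length(x);   % training / validation split

nh   = 6;      % hidden (tanh) units
eta  = 0.01;   % gradient step size
best = struct('err', inf);

% several random restarts, each trained for 50 epochs; keep the best
for restart = 1:5
    net = init_net(nh);
    net = train_epochs(net, x(itr), y(itr), 50, eta);
    err = mean((sim_net(net, x(itr)) - y(itr)).^2);
    if err < best.err, best = net; best.err = err; end
end

% continue training the best network; stop when the validation error
% has not improved for 'patience' epochs (early stopping)
net = best; vbest = inf; patience = 20; bad = 0;
for it = 1:500
    net  = train_epochs(net, x(itr), y(itr), 1, eta);
    verr = mean((sim_net(net, x(iva)) - y(iva)).^2);
    if verr < vbest, vbest = verr; best = net; bad = 0; else bad = bad + 1; end
    if bad >= patience, break; end
end
fprintf('validation MSE of selected network: %g\n', vbest);

function net = init_net(nh)
% random initial weights for a 1-input, nh-hidden, 1-output network
net.W1 = 0.5*randn(nh,1);  net.b1 = 0.5*randn(nh,1);
net.W2 = 0.5*randn(1,nh);  net.b2 = 0.5*randn;
end

function yhat = sim_net(net, x)
% two-layer perceptron: linear output of sigmoidal (tanh) hidden units
yhat = net.W2*tanh(net.W1*x + net.b1*ones(size(x))) + net.b2;
end

function net = train_epochs(net, x, y, epochs, eta)
% plain stochastic gradient descent on the squared output error
for ep = 1:epochs
    for k = randperm(length(x))
        z  = tanh(net.W1*x(k) + net.b1);      % hidden activations
        e  = net.W2*z + net.b2 - y(k);        % output error
        d1 = (net.W2'*e).*(1 - z.^2);         % back-propagated error
        net.W2 = net.W2 - eta*e*z';    net.b2 = net.b2 - eta*e;
        net.W1 = net.W1 - eta*d1*x(k); net.b1 = net.b1 - eta*d1;
    end
end
end

The forward pass in sim_net is the two-layer perceptron in its simplest form, y = W2 tanh(W1 x + b1) + b2, and the validation-monitored loop is one common way of realising the early-stopping idea described in the excerpt.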
