
OPTIMAL COMPUTING BUDGET ALLOCATION FOR SIMULATION BASED OPTIMIZATION AND COMPLEX DECISION MAKING




DOCUMENT INFORMATION

Basic information

Pages: 150
Size: 1 MB

CONTENT

OPTIMAL COMPUTING BUDGET ALLOCATION FOR SIMULATION BASED OPTIMIZATION AND COMPLEX DECISION MAKING

ZHANG SI
(B.Eng., Nanjing University)

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
DEPARTMENT OF INDUSTRIAL AND SYSTEMS ENGINEERING
NATIONAL UNIVERSITY OF SINGAPORE
2013

Declaration

I hereby declare that this thesis is my original work and has been written by me in its entirety. I have duly acknowledged all the sources of information which have been used in the thesis. This thesis has also not been submitted for any degree in any university previously.

Zhang Si
Apr 2013

Acknowledgments

I would like to express my deep gratitude to my supervisors, Associate Professor Lee Loo Hay and Associate Professor Chew Ek Peng, for their very patient guidance and consistent encouragement throughout my research journey. In addition, I am very grateful for the valuable advice and great support given by Professor Chen Chun-Hung. Without their valuable and illuminating instructions, this thesis would not have reached its current state.

My gratitude also goes to all the faculty members and staff of the Department of Industrial & Systems Engineering at the National University of Singapore for providing a friendly and helpful research atmosphere. I also wish to thank my Oral Qualifying Examiners, Associate Professor Ng Szu Hui and Assistant Professor Kim Sujin, for their valuable comments and suggestions on the thesis proposal.

I would like to thank the Maritime Logistics and Supply Chain research groups. The seminars given by members of the groups broadened my knowledge. I learnt a lot from the group members, especially from my seniors Nugroho Artadi Pujowidianto and Li Juxin, and from the other fellow students working on simulation optimization: Xiao Hui, Li Haobin and Hu Xiang.

I am very grateful to my beloved family for their continuous support and love. Their understanding, care and encouragement accompanied me throughout the whole study and research journey. Finally, I would like to thank God, who has given me the wisdom, perseverance, and strength to complete this thesis.

Table of Contents

Acknowledgments i
Table of Contents ii
Summary vi
List of Tables vii
List of Figures viii
List of Symbols ix
List of Abbreviations x
Chapter 1 Introduction
1.1 Overview of simulation optimization methods
1.2 Computing cost for simulation optimization
1.3 Objectives and Significance of the Study
1.4 Organization
Chapter 2 Literature Review
2.1 Ranking and Selection (R&S)
2.2 Optimal computing budget allocation (OCBA)
2.3 The application of OCBA 11
2.4 Summary of research gaps 12
Chapter 3 Asymptotic Simulation Budget Allocation for Optimal Subset Selection 14
3.1 Introduction 14
3.2 Formulation for optimal subset selection problem 18
3.3 The approximated probability of correct selection 19
3.4 Derivation of the allocation rule OCBAm+ 21
3.5 Sequential allocation procedure for OCBAm+ 27
3.6 Asymptotic convergence rate analysis on allocation rules 28
3.6.1 The framework for asymptotic convergence rate analysis on allocation rules 29
3.6.2 Asymptotic convergence rates for different allocation rules 30
3.7 Numerical experiments 33
3.7.1 The Base Experiment 33
3.7.2 Variants of the Base Experiment 35
3.7.3 Numerical Results for Simulation Optimization 38
3.8 Conclusions and comments 39
Chapter 4 Efficient computing budget allocation for optimal subset selection with correlated sampling 41
4.1 Introduction 41
4.2 Problem formulation from the perspective of large deviation theory 43
4.3 Derivation of the allocation rules 45
4.3.1 Allocation rule for two alternatives 47
4.3.2 Allocation rule for best design selection (m=1) 49
4.3.3 Allocation rule for the optimal subset selection (m>1) 51
4.3.4 Sequential allocation procedure 53
4.4 Numerical Experiments 54
4.5 Conclusions 55
Chapter 5 Particle Swarm Optimization with Optimal Computing Budget Allocation for Stochastic Optimization 57
5.1 Introduction 57
5.2 Problem Setting 60
5.2.1 Basic Notations 60
5.2.2 Particle Swarm Optimization 61
5.3 PSO_OCBA Formulation 63
5.3.1 Computing budget allocation for Standard PSO 65
5.3.2 Computing budget allocation for PSOe 72
5.4 Numerical Experiments 75
5.5 Conclusions 80
Chapter 6 Enhancing the Efficiency of the Analytic Hierarchy Process (AHP) by the OCBA Framework 81
6.1 Introduction 81
6.2 Formulation for expert allocation problem in AHP 84
6.3 Derivation of the allocation rule AHP_OCBA 87
6.4 Numerical experiments 91
6.4.1 The Base Experiment 91
6.4.2 Variants of the Base Experiment 92
6.5 Conclusions 94
Chapter 7 Conclusions 96
References 99
Appendix A Proof of Lemma 3.1 105
Appendix B Proof of Lemma 3.2 106
Appendix C Proof of Lemma 3.3 108
Appendix D Proof of Proposition 3.1 110
Appendix E Illustration of simplified conditions in Remark 3.1 112
Appendix F Proof of Corollary 3.1 114
Appendix G Proof of Theorem 3.2 115
Appendix H Proof of Lemma 3.5 118
Appendix I Proof of Theorem 3.3 122
Appendix J Proof of Theorem 3.4 124
Appendix K Proof for Theorem 5.1 128
Appendix L Proof for Lemma 5.1 131
Appendix M Proof for Theorem 5.3 133
Appendix N Proof for Lemma 5.3 135

Summary

Optimal Computing Budget Allocation (OCBA) considers the problem of how to obtain the best result from simulation output under a computing budget constraint. It is not only an efficient ranking and selection procedure for simulation problems with finitely many candidate solutions, but also an attractive concept for resource allocation in a stochastic environment. In this thesis, the framework of optimal computing budget allocation is studied in detail and improved from both the theoretical and the practical aspect. From the perspective of problem setting, we extend OCBA to the optimal subset selection problem and to optimization problems with correlation between designs. From the perspective of OCBA application, we first explore an efficient way to use the OCBA framework to help random search algorithms solve simulation optimization problems with large solution spaces. Computing budget allocation models are built for a popular search algorithm, Particle Swarm Optimization (PSO). Two asymptotic allocation rules, PSOs_OCBA and PSOe_OCBA, are developed for two versions of PSO to improve their efficiency in tackling simulation optimization problems. The application of the OCBA framework to complex decision making problems beyond simulation is also studied, using the decision making technique Analytic Hierarchy Process (AHP) as an example. The resource allocation problem for AHP is modelled from the perspective of the OCBA framework, and a specific approximated optimal allocation rule, AHP_OCBA, is derived to demonstrate the efficiency improvement that applying OCBA brings to decision making techniques. The research work of this thesis may provide a more general and more efficient computing allocation scheme for optimization problems.
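The allocation idea summarized above can be made concrete with a small sketch. The following Python sketch implements the classical sequential OCBA loop for selecting a single best design (the well-known rule of Chen et al.), not the OCBAm+ subset-selection rule derived in Chapter 3; the `simulate` callback, the parameter values, and the normal test designs are all illustrative assumptions.

```python
import numpy as np

def ocba_fractions(means, variances):
    """Classical OCBA fractions for picking the single best (smallest mean).

    Rule: for non-best i, alpha_i is proportional to variance_i / gap_i^2,
    where gap_i = mean_i - mean_best; the observed best design gets
    alpha_b = sigma_b * sqrt(sum_{i != b} alpha_i^2 / sigma_i^2).
    """
    means = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    b = int(np.argmin(means))
    gap2 = np.maximum((means - means[b]) ** 2, 1e-12)  # guard against ties
    w = np.zeros_like(means)
    rest = np.arange(len(means)) != b
    w[rest] = var[rest] / gap2[rest]
    w[b] = np.sqrt(var[b] * np.sum(w[rest] ** 2 / var[rest]))
    return w / w.sum()

def sequential_ocba(simulate, k, n0=10, step=20, budget=1000, seed=0):
    """Spend `budget` replications sequentially, about `step` at a time."""
    rng = np.random.default_rng(seed)
    samples = [list(simulate(i, n0, rng)) for i in range(k)]
    spent = n0 * k
    while spent < budget:
        frac = ocba_fractions([np.mean(s) for s in samples],
                              [np.var(s, ddof=1) for s in samples])
        extra = np.round(frac * step).astype(int)
        if extra.sum() == 0:   # rounding stall guard
            break
        for i in range(k):
            samples[i].extend(simulate(i, int(extra[i]), rng))
        spent += int(extra.sum())
    return int(np.argmin([np.mean(s) for s in samples]))

# Toy usage: five normal designs whose true means are 0.0, 0.5, ..., 2.0.
best = sequential_ocba(lambda i, n, rng: rng.normal(0.5 * i, 2.0, n), k=5)
print("selected design:", best)
```

The same skeleton carries over to the extensions studied in this thesis: only the fraction computation changes, while the simulate-estimate-reallocate loop stays the same.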
List of Tables

Table 3.1.a The speed-up factor with different values of P{CS} in the Base Experiment 34
Table 3.1.b Theoretical convergence rates in the Base Experiment 34
Table 3.2 Parameter settings for different scenarios 35
Table 3.3.a Average computing budget required for reaching 90% P{CS} 36
Table 3.3.b Theoretical convergence rates in different scenarios 36
Table 4.1 Parameter settings for different scenarios 55
Table 4.2 The value of P{CS} after 1,000 replications 55
Table 5.1 Formulas and parameter settings of the tested functions 76
Table 6.1 Parameter settings for different scenarios 93
Table 6.2 The speed-up factor to attain P{CS}=90% in different scenarios 93

Appendix I Proof of Theorem 3.3

The remaining step is to show the two inequalities

$$\left(1+\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}\right)\cdot\left((\mu_{m+1}-\mu_m)^2+\left(\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}\right)^{-1}\right)<2k, \tag{I.3}$$

$$\left(1+\sum_{i\neq m+1}\frac{1}{(\mu_i-\mu_{m+1})^2}\right)\cdot\left((\mu_{m+1}-\mu_m)^2+\left(\sum_{i\neq m+1}\frac{1}{(\mu_i-\mu_{m+1})^2}\right)^{-1}\right)<2k.$$

Because the proofs of these two inequalities are similar, we only show the proof of (I.3) here. Since $\mu_1<\mu_2<\cdots<\mu_m<\mu_{m+1}<\cdots<\mu_{k-1}<\mu_k$ and $\mu_{m+1}-\mu_m\le\min\left(\mu_{m+2}-\mu_{m+1},\,\mu_m-\mu_{m-1}\right)$, the inequalities below are true:

$$(\mu_{m+1}-\mu_m)^2\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}<k-1, \tag{I.4}$$

$$(\mu_{m+1}-\mu_m)^2+\left(\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}\right)^{-1}<k-1. \tag{I.5}$$

Combining (I.4) and (I.5), we can get

$$\left(1+\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}\right)\cdot\left((\mu_{m+1}-\mu_m)^2+\left(\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}\right)^{-1}\right)$$
$$=(\mu_{m+1}-\mu_m)^2\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}+(\mu_{m+1}-\mu_m)^2+\left(\sum_{i\neq m}\frac{1}{(\mu_i-\mu_m)^2}\right)^{-1}+1<2k. \qquad\Box$$
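As a quick numerical sanity check of (I.3) as reconstructed here, the snippet below evaluates its left-hand side for equally spaced means with unit gaps (a hypothetical configuration satisfying the gap condition) and confirms that it stays below 2k:

```python
import numpy as np

def lhs_I3(mu, m):
    """LHS of (I.3) as reconstructed above; mu holds mu_1..mu_k, m >= 2."""
    mu = np.asarray(mu, dtype=float)
    others = np.delete(np.arange(len(mu)), m - 1)
    s = np.sum(1.0 / (mu[others] - mu[m - 1]) ** 2)
    g = (mu[m] - mu[m - 1]) ** 2          # (mu_{m+1} - mu_m)^2
    return (1.0 + s) * (g + 1.0 / s)

for k in (6, 10, 25):
    mu = np.arange(1.0, k + 1.0)          # equally spaced means, d = 1
    for m in range(2, k - 1):
        assert lhs_I3(mu, m) < 2 * k, (k, m)
print("(I.3) holds on all tested configurations")
```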
Appendix J Proof of Theorem 3.4

When $\mu_{i+1}-\mu_i=d$ for $i=1,2,\ldots,k-1$, and all designs have a common variance $\sigma^2$, the convergence rate obtained by OCBAm+ is:

i) if $APCS_{m_1}(\boldsymbol{\alpha}^{*1})\ge APCS_{m_2}(\boldsymbol{\alpha}^{*2})$,

$$G^L_{m(m+1)}\left(\alpha^L_m,\alpha^L_{m+1}\right)=\frac{d^2}{2\sigma^2}\cdot\frac{1}{\sum_{i\neq m}\frac{1}{(m-i)^2}+\sqrt{\sum_{i\neq m}\frac{1}{(m-i)^4}}}\cdot\frac{1}{1+\left(\sum_{i\neq m}\frac{1}{(m-i)^4}\right)^{-1/2}}; \tag{J.1}$$

ii) if $APCS_{m_1}(\boldsymbol{\alpha}^{*1})\le APCS_{m_2}(\boldsymbol{\alpha}^{*2})$,

$$G^L_{m(m+1)}\left(\alpha^L_m,\alpha^L_{m+1}\right)=\frac{d^2}{2\sigma^2}\cdot\frac{1}{\sum_{i\neq m+1}\frac{1}{(m+1-i)^2}+\sqrt{\sum_{i\neq m+1}\frac{1}{(m+1-i)^4}}}\cdot\frac{1}{1+\left(\sum_{i\neq m+1}\frac{1}{(m+1-i)^4}\right)^{-1/2}}.$$

The convergence rate obtained by OCBAm is

$$\min\left\{G_{mk}(\alpha_m,\alpha_k),\,G_{1(m+1)}(\alpha_1,\alpha_{m+1})\right\}=\frac{d^2}{2\sigma^2}\cdot\frac{1}{\sum_{i=1}^{k}\frac{1}{\left(i-m-\frac12\right)^2}}\cdot\min\left\{1+\frac{4(k-m)-1}{(2(k-m)-1)^2+(2(k-m))^2},\ 1+\frac{4m-1}{(2m-1)^2+(2m)^2}\right\}. \tag{J.2}$$

When m equals one, OCBAm+ reduces to OCBA, and in this situation OCBAm+ is no worse than OCBAm. For m greater than one, we show the proof when m ≥ 4 and k ≥ m + 4.

a) A lower bound of the convergence rate for OCBAm+ when m ≥ 4 and k ≥ m + 4.

If $APCS_{m_1}(\boldsymbol{\alpha}^{*1})\ge APCS_{m_2}(\boldsymbol{\alpha}^{*2})$, the asymptotic convergence rate for OCBAm+ is (J.1). When m ≥ 4 and k ≥ m + 4, it is true that

$$2\left(1+\frac{1}{2^4}+\frac{1}{3^4}\right)\le\sum_{i\neq m}\frac{1}{(m-i)^4}<2\sum_{n=1}^{\infty}\frac{1}{n^4}. \tag{J.3}$$

The right part of this inequality is twice a series whose sum is Riemann's zeta function, $\zeta(s)=\sum_{n=1}^{\infty}1/n^s$. When $s=4$, $\zeta(4)=\pi^4/90\approx1.0823$. So we have

$$1.466<\sqrt{\sum_{i\neq m}\frac{1}{(m-i)^4}}<1.472. \tag{J.4}$$

For $i\le m-3$,

$$\frac{1}{(m-i)^2}<\frac{1}{(m-i)^2-1}=\frac{1}{2}\left(\frac{1}{m-i-1}-\frac{1}{m-i+1}\right).$$

Therefore, it is true that

$$\sum_{i=1}^{m-3}\frac{1}{(m-i)^2}<\frac{1}{2}\sum_{i=1}^{m-3}\left(\frac{1}{m-i-1}-\frac{1}{m-i+1}\right)=\frac{1}{2}\left(\frac{5}{6}-\frac{1}{m-1}-\frac{1}{m}\right). \tag{J.5}$$

Similarly,

$$\sum_{i=m+3}^{k}\frac{1}{(i-m)^2}<\frac{1}{2}\sum_{i=m+3}^{k}\left(\frac{1}{i-m-1}-\frac{1}{i-m+1}\right)=\frac{1}{2}\left(\frac{5}{6}-\frac{1}{k-m}-\frac{1}{k-m+1}\right). \tag{J.6}$$

Combining (J.5) and (J.6), an upper bound of $\sum_{i\neq m}\frac{1}{(m-i)^2}$ is

$$\sum_{i\neq m}\frac{1}{(m-i)^2}<\frac{1}{2}\left(\frac{5}{3}-\frac{1}{m-1}-\frac{1}{m}-\frac{1}{k-m}-\frac{1}{k-m+1}\right)+2.5, \tag{J.7}$$

where 2.5 collects the four nearest terms $i\in\{m-2,m-1,m+1,m+2\}$. Consequently,

$$G^L_{m(m+1)}\left(\alpha^L_m,\alpha^L_{m+1}\right)>\frac{d^2}{2\sigma^2}\cdot\frac{1}{a_1+3.972}\cdot\frac{1}{1.683}, \tag{J.8}$$

in which $a_1=\frac{1}{2}\left(\frac{5}{3}-\frac{1}{m-1}-\frac{1}{m}-\frac{1}{k-m}-\frac{1}{k-m+1}\right)$.

Similarly, if $APCS_{m_1}(\boldsymbol{\alpha}^{*1})\le APCS_{m_2}(\boldsymbol{\alpha}^{*2})$,

$$G^L_{m(m+1)}\left(\alpha^L_m,\alpha^L_{m+1}\right)>\frac{d^2}{2\sigma^2}\cdot\frac{1}{a_2+3.972}\cdot\frac{1}{1.683}, \tag{J.9}$$

in which $a_2=\frac{1}{2}\left(\frac{5}{3}-\frac{1}{m}-\frac{1}{m+1}-\frac{1}{k-m-1}-\frac{1}{k-m}\right)$.

b) An upper bound of the convergence rate for OCBAm when m ≥ 4 and k ≥ m + 4.

Because m ≥ 4 and k ≥ m + 4, both $m\ge3$ and $k-m\ge3$ hold, so

$$\min\left\{1+\frac{4(k-m)-1}{(2(k-m)-1)^2+(2(k-m))^2},\ 1+\frac{4m-1}{(2m-1)^2+(2m)^2}\right\}\le\frac{72}{61}. \tag{J.10}$$

For each $i\le m-2$,

$$\frac{1}{\left(i-m-\frac12\right)^2}=\frac{1}{\left(m-i+\frac12\right)^2}>\frac{1}{(m-i)(m-i+2)}=\frac{1}{2}\left(\frac{1}{m-i}-\frac{1}{m-i+2}\right),$$

and for each $i\ge m+3$,

$$\frac{1}{\left(i-m-\frac12\right)^2}>\frac{1}{(i-m-1)(i-m+1)}=\frac{1}{2}\left(\frac{1}{i-m-1}-\frac{1}{i-m+1}\right).$$

Therefore, we have

$$\sum_{i=1}^{k}\frac{1}{\left(i-m-\frac12\right)^2}=\sum_{i=1}^{m-2}\frac{1}{\left(i-m-\frac12\right)^2}+\frac{80}{9}+\sum_{i=m+3}^{k}\frac{1}{\left(i-m-\frac12\right)^2}>\frac{80}{9}+b, \tag{J.11}$$

where 80/9 collects the four nearest terms $i\in\{m-1,m,m+1,m+2\}$ and $b=\frac{1}{2}\left(\frac{5}{3}-\frac{1}{m}-\frac{1}{m+1}-\frac{1}{k-m}-\frac{1}{k-m+1}\right)$. Combining (J.10) and (J.11), we have

$$\min\left\{G_{mk}(\alpha_m,\alpha_k),\,G_{1(m+1)}(\alpha_1,\alpha_{m+1})\right\}<\frac{d^2}{2\sigma^2}\cdot\frac{72/61}{b+80/9}<\frac{d^2}{2\sigma^2}\cdot\frac{1.1804}{b+8.88}. \tag{J.12}$$

c) The difference between OCBAm+'s convergence rate and OCBAm's convergence rate when m ≥ 4 and k ≥ m + 4.

Because $a_1<b$ and $a_2<b$, we can get the following inequality from (J.8), (J.9), and (J.12):

$$G^L_{m(m+1)}\left(\alpha^L_m,\alpha^L_{m+1}\right)-\min\left\{G_{mk}(\alpha_m,\alpha_k),\,G_{1(m+1)}(\alpha_1,\alpha_{m+1})\right\}>\frac{d^2}{2\sigma^2}\cdot\frac{1}{1.683(b+3.972)}-\frac{d^2}{2\sigma^2}\cdot\frac{1.1804}{b+8.88}>\frac{d^2}{2\sigma^2}\cdot\frac{0.987-0.987b}{1.683(b+3.972)(b+8.88)}.$$

Since $b<1$, it can be proved that

$$G^L_{m(m+1)}\left(\alpha^L_m,\alpha^L_{m+1}\right)-\min\left\{G_{mk}(\alpha_m,\alpha_k),\,G_{1(m+1)}(\alpha_1,\alpha_{m+1})\right\}>\frac{d^2}{2\sigma^2}\cdot\frac{0.987-0.987b}{1.683(b+3.972)(b+8.88)}>0.$$

For the case m ≥ 4 with k = m + 3, and the cases where m equals 2 and 3, the proof is easier and follows in similar fashion. □
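The bracketing in (J.3) and (J.4) is easy to verify numerically. The snippet below sweeps m and k under the stated conditions and checks that the square root of the fourth-power sum stays inside (1.466, 1.472); the tested values are arbitrary:

```python
import numpy as np

zeta4 = np.pi ** 4 / 90.0                 # Riemann zeta(4), about 1.0823

def s4(m, k):
    """Sum over i != m of 1/(m-i)^4 for designs i = 1..k (m is 1-indexed)."""
    i = np.arange(1, k + 1)
    return np.sum(1.0 / (m - i[i != m]) ** 4)

for k in (8, 12, 50, 200):
    for m in range(4, k - 3):             # m >= 4 and k >= m + 4
        root = np.sqrt(s4(m, k))
        assert 1.466 < root < 1.472, (m, k, root)
print("upper limit sqrt(2*zeta(4)) =", np.sqrt(2 * zeta4))   # about 1.4713
```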
Appendix K Proof for Theorem 5.1

We first define four sets:

$$S_{A_1}=\left\{i:X_i^t\in S_A\ \text{and}\ \alpha_iI_i\left(f(P_i^{t-1})\right)\le G_{ig}(\alpha_i,\alpha_g)\right\},\quad S_{A_2}=\left\{i:X_i^t\in S_A\ \text{and}\ \alpha_iI_i\left(f(P_i^{t-1})\right)>G_{ig}(\alpha_i,\alpha_g)\right\},$$
$$S_{B_1}=\left\{j:X_j^t\in S_B\ \text{and}\ \alpha_jI_j\left(f(P_j^{t-1})\right)\le G_{jg}(\alpha_j,\alpha_g)\right\},\quad S_{B_2}=\left\{j:X_j^t\in S_B\ \text{and}\ \alpha_jI_j\left(f(P_j^{t-1})\right)>G_{jg}(\alpha_j,\alpha_g)\right\}.$$

Based on these definitions, model (5.4) can be simplified as

$$\begin{aligned}\max\ &z\\ \text{s.t.}\ &\alpha_{i_1}I_{i_1}\left(f(P_{i_1}^{t-1})\right)\ge z,\ \text{for }X_{i_1}\in S_{A_1},\\ &G_{i_2g}(\alpha_{i_2},\alpha_g)\ge z,\ \text{for }X_{i_2}\in S_{A_2},\\ &\alpha_{j_1}I_{j_1}\left(f(P_{j_1}^{t-1})\right)\ge z,\ \text{for }X_{j_1}\in S_{B_1},\\ &G_{j_2g}(\alpha_{j_2},\alpha_g)\ge z,\ \text{for }X_{j_2}\in S_{B_2},\\ &\sum_{i=1}^{m}\alpha_i+\alpha_g=1,\quad\alpha_i\ge0.\end{aligned} \tag{K.1}$$

Let F be the Lagrangian function of model (K.1). Then we have

$$F=z-\sum_{X_{i_1}\in S_{A_1}}\lambda_{i_1}\left(\alpha_{i_1}I_{i_1}\left(f(P_{i_1}^{t-1})\right)-z\right)-\sum_{X_{i_2}\in S_{A_2}}\lambda_{i_2}\left(G_{i_2g}(\alpha_{i_2},\alpha_g)-z\right)-\sum_{X_{j_1}\in S_{B_1}}\lambda_{j_1}\left(\alpha_{j_1}I_{j_1}\left(f(P_{j_1}^{t-1})\right)-z\right)-\sum_{X_{j_2}\in S_{B_2}}\lambda_{j_2}\left(G_{j_2g}(\alpha_{j_2},\alpha_g)-z\right)-\nu\left(\sum_{i=1}^{m}\alpha_i+\alpha_g-1\right)-\sum_{i=1,\ldots,m,g}\gamma_i\alpha_i.$$

The Karush-Kuhn-Tucker conditions are:

i. The primal constraints: the constraints of model (K.1).

ii. The dual constraints: $\lambda_i\ge0$, $\nu\ge0$ and $\gamma_i\ge0$ for all $i=1,\ldots,m,g$.

iii. Complementary slackness:
$$\lambda_{i_1}\left(\alpha_{i_1}I_{i_1}\left(f(P_{i_1}^{t-1})\right)-z\right)=0,\quad\lambda_{i_2}\left(G_{i_2g}(\alpha_{i_2},\alpha_g)-z\right)=0,$$
$$\lambda_{j_1}\left(\alpha_{j_1}I_{j_1}\left(f(P_{j_1}^{t-1})\right)-z\right)=0,\quad\lambda_{j_2}\left(G_{j_2g}(\alpha_{j_2},\alpha_g)-z\right)=0,$$
$$\nu\left(\sum_{i=1}^{m}\alpha_i+\alpha_g-1\right)=0,\quad\gamma_i\alpha_i=0.$$

iv. The gradient of the Lagrangian with respect to the decision variables vanishes: $\nabla F=0$.

Based on condition (iv), the following equations can be obtained:

$$\lambda_{i_1}\frac{\partial\left(\alpha_{i_1}I_{i_1}\left(f(P_{i_1}^{t-1})\right)\right)}{\partial\alpha_{i_1}}-\nu+\gamma_{i_1}=0,\quad\forall X_{i_1}\in S_{A_1}, \tag{K.2}$$

$$\lambda_{i_2}\frac{\partial G_{i_2g}(\alpha_{i_2},\alpha_g)}{\partial\alpha_{i_2}}-\nu+\gamma_{i_2}=0,\quad\forall X_{i_2}\in S_{A_2}, \tag{K.3}$$

$$\lambda_{j_1}\frac{\partial\left(\alpha_{j_1}I_{j_1}\left(f(P_{j_1}^{t-1})\right)\right)}{\partial\alpha_{j_1}}-\nu+\gamma_{j_1}=0,\quad\forall X_{j_1}\in S_{B_1}, \tag{K.4}$$

$$\lambda_{j_2}\frac{\partial G_{j_2g}(\alpha_{j_2},\alpha_g)}{\partial\alpha_{j_2}}-\nu+\gamma_{j_2}=0,\quad\forall X_{j_2}\in S_{B_2}, \tag{K.5}$$

$$\sum_{X_{i_2}\in S_{A_2}}\lambda_{i_2}\frac{\partial G_{i_2g}(\alpha_{i_2},\alpha_g)}{\partial\alpha_g}+\sum_{X_{j_2}\in S_{B_2}}\lambda_{j_2}\frac{\partial G_{j_2g}(\alpha_{j_2},\alpha_g)}{\partial\alpha_g}-\nu+\gamma_g=0. \tag{K.6}$$

In the stochastic situation, each solution has noise and will be given no fewer than one sample to evaluate its performance. That is, $\alpha_i>0$ for all $i=1,\ldots,m,g$, so $\gamma_i=0$ for all $i=1,\ldots,m,g$. Let $\lambda_i>0$ and $\nu>0$ for all $i=1,\ldots,m,g$. Based on (iii), we have

$$\alpha_{i_1}I_{i_1}\left(f(P_{i_1}^{t-1})\right)=G_{i_2g}(\alpha_{i_2},\alpha_g)=\alpha_{j_1}I_{j_1}\left(f(P_{j_1}^{t-1})\right)=G_{j_2g}(\alpha_{j_2},\alpha_g)=z,$$

and

$$\lambda_{i_1}=\frac{\nu}{\partial\left(\alpha_{i_1}I_{i_1}\left(f(P_{i_1}^{t-1})\right)\right)/\partial\alpha_{i_1}}\ \forall X_{i_1}\in S_{A_1},\qquad\lambda_{i_2}=\frac{\nu}{\partial G_{i_2g}(\alpha_{i_2},\alpha_g)/\partial\alpha_{i_2}}\ \forall X_{i_2}\in S_{A_2},$$
$$\lambda_{j_1}=\frac{\nu}{\partial\left(\alpha_{j_1}I_{j_1}\left(f(P_{j_1}^{t-1})\right)\right)/\partial\alpha_{j_1}}\ \forall X_{j_1}\in S_{B_1},\qquad\lambda_{j_2}=\frac{\nu}{\partial G_{j_2g}(\alpha_{j_2},\alpha_g)/\partial\alpha_{j_2}}\ \forall X_{j_2}\in S_{B_2}.$$

Substituting them into (K.6), the following equation can be obtained:

$$\sum_{X_{i_2}\in S_{A_2}}\frac{\partial G_{i_2g}/\partial\alpha_g}{\partial G_{i_2g}/\partial\alpha_{i_2}}+\sum_{X_{j_2}\in S_{B_2}}\frac{\partial G_{j_2g}/\partial\alpha_g}{\partial G_{j_2g}/\partial\alpha_{j_2}}=1.$$

Therefore, if a solution satisfies the conditions in Theorem 5.1, we can find values of $\gamma_i$, $\lambda_i$ and $\nu$ such that it also satisfies the KKT conditions. Because of the concavity of the maximization problem, the KKT conditions are sufficient and necessary for optimality. Therefore, a rule satisfying Theorem 5.1 is an optimal allocation rule for model (5.4). □

Appendix L Proof for Lemma 5.1

When the performance of each particle follows a normal distribution, we can obtain the following equations based on large deviation theory:

$$I_{i_1}\left(f(P_{i_1}^{t-1})\right)=\frac{\left(f(P_{i_1}^{t-1})-f(X_{i_1})\right)^2}{2\sigma_{i_1}^2},\quad\forall X_{i_1}\in S_{A_1}, \tag{L.1}$$

$$I_{j_1}\left(f(P_{j_1}^{t-1})\right)=\frac{\left(f(P_{j_1}^{t-1})-f(X_{j_1})\right)^2}{2\sigma_{j_1}^2},\quad\forall X_{j_1}\in S_{B_1}, \tag{L.2}$$

$$G_{i_2g}(\alpha_{i_2},\alpha_g)=\frac{\left(f(P_g^{t-1})-f(X_{i_2})\right)^2}{2\left(\sigma_g^2/\alpha_g+\sigma_{i_2}^2/\alpha_{i_2}\right)},\quad\forall X_{i_2}\in S_{A_2}, \tag{L.3}$$

$$G_{j_2g}(\alpha_{j_2},\alpha_g)=\frac{\left(f(P_g^{t-1})-f(X_{j_2})\right)^2}{2\left(\sigma_g^2/\alpha_g+\sigma_{j_2}^2/\alpha_{j_2}\right)},\quad\forall X_{j_2}\in S_{B_2}. \tag{L.4}$$

For $X_{i_2}\in S_{A_2}$,

$$\frac{\partial G_{i_2g}(\alpha_{i_2},\alpha_g)}{\partial\alpha_{i_2}}=\frac{\left(f(P_g^{t-1})-f(X_{i_2})\right)^2}{2\left(\sigma_g^2/\alpha_g+\sigma_{i_2}^2/\alpha_{i_2}\right)^2}\cdot\frac{\sigma_{i_2}^2}{\alpha_{i_2}^2}\quad\text{and}\quad\frac{\partial G_{i_2g}(\alpha_{i_2},\alpha_g)}{\partial\alpha_g}=\frac{\left(f(P_g^{t-1})-f(X_{i_2})\right)^2}{2\left(\sigma_g^2/\alpha_g+\sigma_{i_2}^2/\alpha_{i_2}\right)^2}\cdot\frac{\sigma_g^2}{\alpha_g^2}. \tag{L.5}$$

For $X_{j_2}\in S_{B_2}$,

$$\frac{\partial G_{j_2g}(\alpha_{j_2},\alpha_g)}{\partial\alpha_{j_2}}=\frac{\left(f(P_g^{t-1})-f(X_{j_2})\right)^2}{2\left(\sigma_g^2/\alpha_g+\sigma_{j_2}^2/\alpha_{j_2}\right)^2}\cdot\frac{\sigma_{j_2}^2}{\alpha_{j_2}^2}\quad\text{and}\quad\frac{\partial G_{j_2g}(\alpha_{j_2},\alpha_g)}{\partial\alpha_g}=\frac{\left(f(P_g^{t-1})-f(X_{j_2})\right)^2}{2\left(\sigma_g^2/\alpha_g+\sigma_{j_2}^2/\alpha_{j_2}\right)^2}\cdot\frac{\sigma_g^2}{\alpha_g^2}. \tag{L.6}$$

Substituting (L.5) and (L.6) into condition (b) in Theorem 5.1,

$$\sum_{X_{i_2}\in S_{A_2}}\frac{\sigma_g^2/\alpha_g^2}{\sigma_{i_2}^2/\alpha_{i_2}^2}+\sum_{X_{j_2}\in S_{B_2}}\frac{\sigma_g^2/\alpha_g^2}{\sigma_{j_2}^2/\alpha_{j_2}^2}=1.$$

Hence,

$$\alpha_g=\sigma_g\sqrt{\sum_{X_{i_2}\in S_{A_2}}\frac{\alpha_{i_2}^2}{\sigma_{i_2}^2}+\sum_{X_{j_2}\in S_{B_2}}\frac{\alpha_{j_2}^2}{\sigma_{j_2}^2}}.$$

Under the assumption $\alpha_g\gg\alpha_i$, (L.3) and (L.4) can be simplified as

$$G_{i_2g}(\alpha_{i_2},\alpha_g)\approx\frac{\left(f(P_g^{t-1})-f(X_{i_2})\right)^2}{2\sigma_{i_2}^2/\alpha_{i_2}}\quad\text{and}\quad G_{j_2g}(\alpha_{j_2},\alpha_g)\approx\frac{\left(f(P_g^{t-1})-f(X_{j_2})\right)^2}{2\sigma_{j_2}^2/\alpha_{j_2}}.$$

Substituting into condition (a) in Theorem 5.1 yields

$$\alpha_{i_1}:\alpha_{i_2}:\alpha_{j_1}:\alpha_{j_2}=\frac{\sigma_{i_1}^2}{\left(f(X_{i_1})-f(P_{i_1}^{t-1})\right)^2}:\frac{\sigma_{i_2}^2}{\left(f(X_{i_2})-f(P_g^{t-1})\right)^2}:\frac{\sigma_{j_1}^2}{\left(f(X_{j_1})-f(P_{j_1}^{t-1})\right)^2}:\frac{\sigma_{j_2}^2}{\left(f(X_{j_2})-f(P_g^{t-1})\right)^2}. \qquad\Box$$
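The resulting rule has the familiar OCBA shape: each non-best particle receives budget in proportion to a noise-to-signal ratio, variance over squared gap, and the global best receives its standard deviation times the root of a weighted sum. A minimal Python sketch, collapsing the four particle classes of the lemma into a single vector of reference values (a simplification of the case split; all names and numbers are illustrative):

```python
import numpy as np

def psos_ocba_fractions(f_x, f_ref, var, g):
    """Allocation fractions in the spirit of Lemma 5.1 (PSOs_OCBA sketch).

    f_x[i]  : sample mean objective of particle i's current position
    f_ref[i]: the comparison value for particle i (its own previous best or
              the current global best, per the lemma's case split)
    var[i]  : sample variance of particle i
    g       : index of the current global-best particle
    """
    f_x, f_ref, var = (np.asarray(a, dtype=float) for a in (f_x, f_ref, var))
    k = len(f_x)
    others = np.arange(k) != g
    gap2 = np.maximum((f_x - f_ref) ** 2, 1e-12)     # guard tiny gaps
    w = np.zeros(k)
    w[others] = var[others] / gap2[others]           # noise-to-signal ratios
    w[g] = np.sqrt(var[g] * np.sum(w[others] ** 2 / var[others]))
    return w / w.sum()

# Toy usage with hypothetical statistics for a swarm of four particles.
frac = psos_ocba_fractions(f_x=[0.0, 0.8, 1.5, 2.0],
                           f_ref=[0.2, 0.0, 0.4, 0.0],
                           var=[1.0, 1.0, 2.0, 4.0], g=0)
print(frac.round(4))
```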
Appendix M Proof for Theorem 5.3

Similar to the proof of Theorem 5.1, let F be the Lagrangian function of model (5.8). Then we have

$$F=z-\sum_{X_i^t\in S_e,\,X_i^t\neq X_b^t}\lambda_{bi}\left(G_{bi}(\alpha_b,\alpha_i)-z\right)-\sum_{X_i^t\in S_e^1,\,X_j^t\in S_{ne}}\lambda_{ij}\left(G_{ij}(\alpha_i,\alpha_j)-z\right)-\nu\left(1-\sum_{i=1}^{m}\alpha_i\right)-\sum_{i=1,\ldots,m}\gamma_i\alpha_i.$$

Hence, the Karush-Kuhn-Tucker conditions are:

i. The primal constraints: $z\le G_{bi}(\alpha_b,\alpha_i)$ for $X_i^t\in S_e$; $z\le G_{ij}(\alpha_i,\alpha_j)$ for $X_i^t\in S_e^1$, $X_j^t\in S_{ne}$; $\sum_{i=1}^{m}\alpha_i=1$, $\alpha_i\ge0$.

ii. The dual constraints: $\lambda_{bi}\ge0$, $\lambda_{ij}\ge0$, $\nu\ge0$ and $\gamma_i\ge0$ for all $X_i^t\in S_e$, $X_j^t\in S_{ne}$.

iii. Complementary slackness:
$$\lambda_{bi}\left(G_{bi}(\alpha_b,\alpha_i)-z\right)=0,\quad\lambda_{ij}\left(G_{ij}(\alpha_i,\alpha_j)-z\right)=0,\quad\nu\left(1-\sum_{i=1}^{m}\alpha_i\right)=0,\quad\gamma_i\alpha_i=0.$$

iv. The gradient of the Lagrangian with respect to the decision variables vanishes: $\nabla F=0$.

Based on condition (iv), the following equations can be obtained:

$$\frac{\partial F}{\partial\alpha_b}=-\sum_{X_i^t\in S_e,\,X_i^t\neq X_b^t}\lambda_{bi}\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_b}-\sum_{X_j^t\in S_{ne}}\lambda_{bj}\frac{\partial G_{bj}(\alpha_b,\alpha_j)}{\partial\alpha_b}+\nu-\gamma_b=0, \tag{M.1}$$

$$\frac{\partial F}{\partial\alpha_i}=-\lambda_{bi}\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_i}-\sum_{X_j^t\in S_{ne}}\lambda_{ij}\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_i}+\nu-\gamma_i=0,\quad\text{for }X_i^t\in S_e,\ X_i^t\neq X_b^t, \tag{M.2}$$

$$\frac{\partial F}{\partial\alpha_j}=-\sum_{X_i^t\in S_e}\lambda_{ij}\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_j}+\nu-\gamma_j=0,\quad\text{for }X_j^t\in S_{ne}. \tag{M.3}$$

In the stochastic situation, each solution has noise and will be given no fewer than one sample to evaluate its performance. That is, $\alpha_i>0$ for all $i=1,\ldots,m$, so $\gamma_i=0$, and (M.1) to (M.3) can be simplified as follows:

$$\sum_{X_i^t\in S_e,\,X_i^t\neq X_b^t}\lambda_{bi}\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_b}+\sum_{X_j^t\in S_{ne}}\lambda_{bj}\frac{\partial G_{bj}(\alpha_b,\alpha_j)}{\partial\alpha_b}=\nu;$$

$$\lambda_{bi}\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_i}+\sum_{X_j^t\in S_{ne}}\lambda_{ij}\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_i}=\nu,\quad\text{for }X_i^t\in S_e,\ X_i^t\neq X_b^t;$$

$$\sum_{X_i^t\in S_e}\lambda_{ij}\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_j}=\nu,\quad\text{for }X_j^t\in S_{ne}.$$

Therefore, if we can find non-negative values of $\lambda_{ij}$ and $\nu$ such that an allocation rule satisfies the above conditions, the rule is an optimal allocation rule for model (5.8). □
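The conditions above characterize the optimum of a concave max-min program. As a structural check on a toy instance (hypothetical means and variances with generic normal-theory rate functions, not the exact model (5.8)), one can solve the epigraph form directly and observe that the binding rate constraints equalize at the optimum:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical means and variances; design 0 plays the role of X_b.
f = np.array([0.0, 0.5, 1.0, 2.0])
s2 = np.array([1.0, 1.0, 2.0, 4.0])

def rates(alpha):
    # Normal-theory pairwise rate between the best design and each other one.
    return (f[0] - f[1:]) ** 2 / (2.0 * (s2[0] / alpha[0] + s2[1:] / alpha[1:]))

# Epigraph form: variables x = (alpha_0, ..., alpha_3, z); maximize z.
cons = [{"type": "eq", "fun": lambda x: x[:-1].sum() - 1.0},
        {"type": "ineq", "fun": lambda x: rates(x[:-1]) - x[-1]}]
x0 = np.r_[np.full(4, 0.25), 0.01]
res = minimize(lambda x: -x[-1], x0, method="SLSQP",
               bounds=[(1e-6, 1.0)] * 4 + [(0.0, None)], constraints=cons)

alpha = res.x[:-1]
print("alpha*:", alpha.round(4))
print("rates :", rates(alpha).round(4))  # binding constraints share a value
```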
Appendix N Proof for Lemma 5.3

Under the assumption of normality, Glynn and Juneja (2004) show that

$$G_{ij}(\alpha_i,\alpha_j)=\frac{\left(f(X_i)-f(X_j)\right)^2}{2\left(\sigma_i^2/\alpha_i+\sigma_j^2/\alpha_j\right)}.$$

For $X_b^t$,

$$G_{bi}(\alpha_b,\alpha_i)=\frac{\left(f(X_b^t)-f(X_i^t)\right)^2}{2\left(\sigma_b^2/\alpha_b+\sigma_i^2/\alpha_i\right)},\quad\text{for }X_i^t\in S_e,\ X_i^t\neq X_b^t,$$

$$G_{bj}(\alpha_b,\alpha_j)=\frac{\left(f(X_b^t)-f(X_j^t)\right)^2}{2\left(\sigma_b^2/\alpha_b+\sigma_j^2/\alpha_j\right)},\quad\text{for }X_j^t\in S_{ne}.$$

Because $\left|f(X_b^t)-f(X_i^t)\right|<\left|f(X_b^t)-f(X_j^t)\right|$ and $\alpha_i\gg\alpha_j$, we have $G_{bi}(\alpha_b,\alpha_i)<G_{bj}(\alpha_b,\alpha_j)$. Hence, $\lambda_{bj}=0$ for $X_j^t\in S_{ne}$. Similarly, because $\max_{X_k^t\in S_e^0}f(X_k^t)<f(X_i^t)$ for $X_i^t\in S_e^1$ and $\alpha_k,\alpha_i\gg\alpha_j$, we have $G_{kj}(\alpha_k,\alpha_j)>G_{ij}(\alpha_i,\alpha_j)$. Hence, $\lambda_{kj}=0$ for $X_j^t\in S_{ne}$. In the same way, we can get the inequality $G_{i'j}(\alpha_{i'},\alpha_j)>G_{ij}(\alpha_i,\alpha_j)$ for $X_j^t\in S_{ne}^i$ and $X_j^t\notin S_{ne}^{i'}$; so $\lambda_{i'j}=0$ for $X_j^t\notin S_{ne}^{i'}$.

Based on the above analysis, condition (b) in Theorem 5.3 can be simplified as follows:

$$\sum_{X_k^t\in S_e^0}\lambda_{bk}\frac{\partial G_{bk}(\alpha_b,\alpha_k)}{\partial\alpha_b}+\sum_{X_i^t\in S_e^1}\lambda_{bi}\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_b}=\nu, \tag{N.1}$$

$$\lambda_{bk}\frac{\partial G_{bk}(\alpha_b,\alpha_k)}{\partial\alpha_k}=\nu,\quad\text{for }X_k^t\in S_e^0, \tag{N.2}$$

$$\lambda_{bi}\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_i}+\sum_{X_j^t\in S_{ne}^i}\lambda_{ij}\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_i}=\nu,\quad\text{for }X_i^t\in S_e^1, \tag{N.3}$$

$$\lambda_{ij}\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_j}=\nu,\quad\text{for }X_j^t\in S_{ne}^i. \tag{N.4}$$

Substituting the expression for $\lambda_{ij}$ from (N.4) into (N.3), for $X_i^t\in S_e^1$,

$$\lambda_{bi}=\left(\nu-\sum_{X_j^t\in S_{ne}^i}\nu\,\frac{\partial G_{ij}(\alpha_i,\alpha_j)/\partial\alpha_i}{\partial G_{ij}(\alpha_i,\alpha_j)/\partial\alpha_j}\right)\Bigg/\frac{\partial G_{bi}(\alpha_b,\alpha_i)}{\partial\alpha_i}. \tag{N.5}$$

By (N.1), (N.2) and (N.5), we have

$$\sum_{X_k^t\in S_e^0}\frac{\partial G_{bk}/\partial\alpha_b}{\partial G_{bk}/\partial\alpha_k}+\sum_{X_i^t\in S_e^1}\frac{\partial G_{bi}/\partial\alpha_b}{\partial G_{bi}/\partial\alpha_i}\left(1-\sum_{X_j^t\in S_{ne}^i}\frac{\partial G_{ij}/\partial\alpha_i}{\partial G_{ij}/\partial\alpha_j}\right)=1.$$

Because

$$\frac{\partial G_{ij}(\alpha_i,\alpha_j)}{\partial\alpha_i}=\frac{\left(f(X_i)-f(X_j)\right)^2}{2\left(\sigma_i^2/\alpha_i+\sigma_j^2/\alpha_j\right)^2}\cdot\frac{\sigma_i^2}{\alpha_i^2},$$

we have

$$\frac{\sigma_b^2}{\alpha_b^2}\left(\sum_{X_k^t\in S_e^0}\frac{\alpha_k^2}{\sigma_k^2}+\sum_{X_i^t\in S_e^1}\left(\frac{\alpha_i^2}{\sigma_i^2}-\sum_{X_j^t\in S_{ne}^i}\frac{\alpha_j^2}{\sigma_j^2}\right)\right)=1.$$

Therefore,

$$\alpha_b=\sigma_b\sqrt{\sum_{X_k^t\in S_e^0}\frac{\alpha_k^2}{\sigma_k^2}+\sum_{X_i^t\in S_e^1}\left(\frac{\alpha_i^2}{\sigma_i^2}-\sum_{X_j^t\in S_{ne}^i}\frac{\alpha_j^2}{\sigma_j^2}\right)}.$$

Condition (a) in Theorem 5.3 can also be simplified to

$$G_{bk}(\alpha_b,\alpha_k)=G_{bi}(\alpha_b,\alpha_i)=G_{ij}(\alpha_i,\alpha_j)\quad\text{for }X_k^t\in S_e^0,\ X_i^t\in S_e^1,\ X_j^t\in S_{ne}^i.$$

Under the assumption $\alpha_b\gg\alpha_i\gg\alpha_j$ for $X_i^t\in S_e^1$, $X_j^t\in S_{ne}^i$, we have the following approximation result:

$$G_{bk}\approx\frac{\left(f(X_b^t)-f(X_k^t)\right)^2}{2\sigma_k^2/\alpha_k},\quad G_{bi}\approx\frac{\left(f(X_b^t)-f(X_i^t)\right)^2}{2\sigma_i^2/\alpha_i},\quad G_{ij}\approx\frac{\left(f(X_i^t)-f(X_j^t)\right)^2}{2\sigma_j^2/\alpha_j},$$

for $X_k^t\in S_e^0$, $X_i^t\in S_e^1$, $X_j^t\in S_{ne}^i$. Therefore,

$$\alpha_k:\alpha_i:\alpha_j=\frac{\sigma_k^2}{\left(f(X_b^t)-f(X_k^t)\right)^2}:\frac{\sigma_i^2}{\left(f(X_b^t)-f(X_i^t)\right)^2}:\frac{\sigma_j^2}{\left(f(X_i^t)-f(X_j^t)\right)^2}. \qquad\Box$$
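A worked numeric instance of these final ratios, under the reconstruction above (one design per class, all values hypothetical):

```python
import numpy as np

# Hypothetical sample statistics. b is the current best; k sits in Se0,
# i in Se1, and j in Sne^i (compared against design i, not against b).
f_b, f_k, f_i, f_j = 0.0, 1.0, 0.5, 2.0
s2 = {"b": 1.0, "k": 1.0, "i": 2.0, "j": 4.0}

w_k = s2["k"] / (f_b - f_k) ** 2   # sigma_k^2 / (f(X_b) - f(X_k))^2
w_i = s2["i"] / (f_b - f_i) ** 2   # sigma_i^2 / (f(X_b) - f(X_i))^2
w_j = s2["j"] / (f_i - f_j) ** 2   # sigma_j^2 / (f(X_i) - f(X_j))^2

# alpha_b per the square-root formula above; this toy keeps the radicand
# positive, while a general implementation would need a guard.
w_b = np.sqrt(s2["b"] * (w_k ** 2 / s2["k"] + (w_i ** 2 / s2["i"] - w_j ** 2 / s2["j"])))

total = w_b + w_k + w_i + w_j
print({name: round(w / total, 4) for name, w in
       [("b", w_b), ("k", w_k), ("i", w_i), ("j", w_j)]})
```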
