Application of Multi-Objective Optimisation to MRT Systems



Founded 1905

APPLICATION OF MULTI-OBJECTIVE OPTIMISATION TO MRT SYSTEMS

BY
TIAN LIFEN

DEPARTMENT OF ELECTRICAL ENGINEERING

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE DEGREE OF MASTER OF ENGINEERING

NATIONAL UNIVERSITY OF SINGAPORE
2004

Acknowledgements

The author genuinely appreciates the help of her supervisor, Professor C. S. Chang, who provided invaluable guidance while the author worked on the projects and wrote this thesis. Sincere thanks and gratitude are also extended to Mr Seow Hung Cheng of the Power System Laboratory for his support throughout this research project. This thesis is dedicated to the author's family and friends, who have always given the author their unreserved understanding and great support.

Table of Contents

Acknowledgements
Table of Contents
List of Figures
List of Tables
List of Abbreviations
List of Publications Related to this Thesis
Summary of the Thesis

Chapter 1  Introduction
  1.1 Background
  1.2 Motivation of the Research
    1.2.1 Go-Circuit Optimisation
    1.2.2 Return-Circuit Optimisation
  1.3 Objective and Scope of the Research
  1.4 Multi-Objective Optimisation Algorithms
  1.5 Organisation of the Thesis
Chapter 2  Outline of Go-Circuit Optimisation
  2.1 Introduction
  2.2 Mathematical Model
    2.2.1 System Model
    2.2.2 Objective Functions
    2.2.3 Impact of Operational Timetable
  2.3 Layout of Three-Stage Scheme
  2.4 Simulation Outline
  2.5 Summary
Chapter 3  Outline of Return-Circuit Optimisation
  3.1 Introduction
  3.2 Mathematical Model
    3.2.1 Return-Circuit Model
    3.2.2 Objective Functions
    3.2.3 Impact of Earthing and Bonding Arrangements
  3.3 Layout of Two-Stage Scheme
  3.4 Simulation Outline
  3.5 Summary
Chapter 4  Literature Review of Multi-Objective Optimisation Approaches
  4.1 Introduction
  4.2 Mathematical Definition
  4.3 Preference Structure
  4.4 Review of Multi-Objective Optimisation Methods
    4.4.1 Traditional Approaches
    4.4.2 Evolutionary Approaches
      4.4.2.1 Vector Evaluated Genetic Algorithm
      4.4.2.2 Pareto-Based Genetic Algorithm
      4.4.2.3 Multi-Attribute Genetic Algorithm
    4.4.3 Discussions
  4.5 Summary
Chapter 5  Evolutionary Multi-Objective Optimisation Approaches
  5.1 Introduction
  5.2 Multi-Objective Genetic Algorithm
    5.2.1 Selection Processing with Rank Assignment
    5.2.2 Fitness Sharing
    5.2.3 Variable Recombination Operators
    5.2.4 Treatment of Preferred Priorities among Objectives
  5.3 Multi-Objective Particle Swarm Algorithm
    5.3.1 Search Strategy
    5.3.2 Rank-Based Selection
    5.3.3 Weight Update
    5.3.4 Pareto-Optimal Set Update
  5.4 Multi-Objective Differential Evolution Algorithm
  5.5 Summary
Chapter 6  Results of Go-Circuit Optimisation
  6.1 Optimal Traction-Substation Placements
  6.2 Worst-Case Scenarios of Operational Deviations
  6.3 Performance Check for Failure Conditions
  6.4 Summary
Chapter 7  Results of Return-Circuit Optimisation
  7.1 Multi-Objective Optimisation for Normal Conditions
  7.2 Performance Check for Failure Conditions
  7.3 Summary
Chapter 8  Conclusions
  8.1 Summary and Conclusions
  8.2 Suggestions for Future Work
References
Appendix A  Evolutionary Algorithms
  A.1 Introduction
  A.2 Genetic Algorithm
  A.3 Particle Swarm and Differential Evolution Algorithm
    A.3.1 Particle Swarm Algorithm
    A.3.2 Differential Evolution Algorithm
Appendix B  Flowchart of Proposed Evolutionary Multi-Objective Optimisation Approaches
  B.1 Flowchart of Multi-Objective Genetic Algorithm
  B.2 Flowchart of Multi-Objective Particle Swarm and Multi-Objective Differential Evolution
Appendix C  Preliminary Testing of Multi-Objective Optimisation Algorithms
  C.1 Introduction
  C.2 Concave Problem
  C.3 Discontinuous Problem
  C.4 Summary

List of Figures

Figure 1-1: Schematic of Singapore MRT system
Figure 2-1: Sectional network representation of double-track MRT system
Figure 2-2: Three-stage scheme for go-circuit optimisation
Figure 2-3: Flowchart for go-circuit simulation
Figure 3-1: Return-circuit model
Figure 3-2: Simple case study of touch voltage and stray current
Figure 3-3: Two-stage procedure for touch voltage and stray current
Figure 3-4: Flowchart of return-circuit simulation
Figure 4-1: Pareto front for bi-criterion minimisation problem
Figure 4-2: Nonconvex solution boundary
Figure 4-3: Outline of VEGA evolution results
Figure 4-4: Rank assignment for Pareto-based genetic algorithm
Figure 5-1: Rank assignments for different priorities among objectives
Figure 6-1: Optimised configurations
Figure 6-2: Energy consumption convergence curve
Figure 6-3: Load sharing convergence curve
Figure 6-4: Pareto-optimal set for Configuration 1
Figure 6-5: Pareto-optimal set for Configuration 2
Figure 7-1: Layout of study system
Figure 7-2: Touch voltage distribution with different earthing arrangements
Figure 7-3: Pareto-optimal sets for Configuration 1
Figure 7-4: Pareto-optimal sets for Configuration 2
Figure C-1: MOPS results for Test 1
Figure C-2: MODE results for Test 1
Figure C-3: MOPS results for Test 2
Figure C-4: MODE results for Test 2

List of Tables

Table 6-1: Improvement of optimised energy consumption and load sharing
Table 6-2: Parameter limits for bi-criterion optimisation
Table 6-3: Maximum deviation for energy consumption
Table 6-4: Maximum deviation for load sharing
Table 6-5: Performance check results for case 1.1
Table 6-6: Performance check results for case 2.2
Table 7-1: Typical arrangements of earthing and bonding
Table 7-2: Multi-objective optimisation of earthing & bonding for Configuration 1
Table 7-3: Multi-objective optimisation of earthing & bonding for Configuration 2
Table 7-4: Performance check for case 2.1
Table C-1: Non-dominated solution numbers for Test 1
Table C-2: Non-dominated solution numbers for Test 2

List of Abbreviations

TSS: Traction Substation
GA: Genetic Algorithm
DE: Differential Evolution algorithm
PS: Particle Swarm algorithm
MOGA: Multi-Objective Genetic Algorithm
MODE: Multi-Objective Differential Evolution algorithm
MOPS: Multi-Objective Particle Swarm algorithm

List of Publications Related to this Thesis

[1] C.S. Chang and L. Tian, "Worst-case identification of touch voltage and stray current of DC railway system using genetic algorithm", IEE Proceedings - Electric Power Applications, Vol. 146, No. 5, 1999.

Summary of the Thesis

MRT system design can be formulated as a three-stage optimisation problem. In the first stage, the basic design of an MRT section is optimised by extensively searching through a large set of design alternatives; only the key or primary variables are optimised in this stage. The second stage evaluates the worst-case performance of the basic design using secondary variables arising from operational deviations and other random variables. The need to change the basic design to cater for both normal and failure conditions is ascertained and implemented in the third stage.

MRT supply networks can be divided into the traction substations, the go-circuit and the return-circuit. At traction substations, the AC supply voltage is stepped down and converted to DC. Catenary wires or third rails are used in the go-circuit, while running rails and return cables are the main components of the return-circuit. In this work, the go-circuit and return-circuit are each optimised with the procedure outlined above.

Energy consumption and load sharing are two important issues in the go-circuit. Energy consumption measures the total energy consumed at all the traction substations, and load sharing measures the load distribution among all traction substations.
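Because these two objectives conflict, candidate designs are compared by Pareto dominance rather than by a single figure of merit. The sketch below is a toy illustration with invented (energy consumption, load-sharing) values, not outputs of the thesis's MRT simulations; it shows how a set of candidates is filtered down to its non-dominated trade-off set when both objectives are minimised:

```python
from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (energy consumption, load-sharing index) pairs for four designs.
candidates = [(10.0, 0.8), (9.0, 0.9), (11.0, 0.7), (10.5, 0.9)]
print(pareto_front(candidates))  # → [(10.0, 0.8), (9.0, 0.9), (11.0, 0.7)]
```

The last candidate is discarded because (10.0, 0.8) beats it on both objectives; the survivors form the kind of Pareto-optimal trade-off set reported in Chapters 6 and 7.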
They are influenced by many factors, and they cannot both be optimised simultaneously. In the proposed first-stage optimisation, a previously developed algorithm is incorporated for configuring the traction-substation placements by optimising either energy consumption or load sharing. During operation, train timetables deviate continually from the predefined train despatch frequency owing to variations of train headway, synchronisation delay and dwell time. This work focuses

Appendix A  Evolutionary Algorithms

A.1 Introduction

Evolutionary algorithms are a class of stochastic search and optimisation methods that includes the Genetic Algorithm (GA), evolutionary programming, the Particle Swarm (PS) algorithm, the Differential Evolution (DE) algorithm, and their variants. They seldom require much auxiliary information to search for better candidates, and they are robust and suited to finding optima effectively, with a small probability of falling into local optima. An increasing number of engineering optimisation problems have been solved successfully by evolutionary algorithms.

Evolutionary algorithms work on optimisation problems by maintaining a population of individuals and applying random variation and selection over many iterations. Each individual represents a potential solution to the problem at hand. In each generation, a fitness value is first assigned to each offspring; depending on its fitness, each population member is then given a specific survival probability. The evolution converges after some number of generations, and in most cases the best individual represents a near-optimum (reasonable) solution.

A.2 Genetic Algorithm

The Genetic Algorithm (GA) was motivated by ideas from natural genetics. GA starts with a population of chromosomes, the abstract representations of
candidate solutions. Chromosomes in the current generation are evaluated with the problem-dependent fitness function, and those with higher fitness are selectively picked for reproduction. By employing crossover and mutation, a high-probability operator and a low-probability operator respectively, the information encoded in the selected chromosomes is recombined, and successive populations are generated to form the subsequent generations. In this way, GA attempts to find all the optima in the search space and realise the Darwinian notion of competitive evolution.

The key feature of the GA is its ability to exploit accumulating information about an initially unknown search space so as to bias the subsequent search into useful subspaces. Moreover, GA operates on several solutions at the same time, gathering information from the current points to direct the subsequent search. Its ability to maintain multiple solutions concurrently makes GA less susceptible to local optima and noise [19].

A.3 Particle Swarm and Differential Evolution Algorithm

Particle Swarm (PS) and Differential Evolution (DE), two more recent population-based evolutionary computation techniques, are efficient at optimising continuous problems. Instead of using genetic operators, PS and DE embody explicitly or incorporate implicitly the three key operations of GA: crossover, mutation and fitness evaluation. The fitness evaluation depends only on the problem structure, but the crossover and mutation processes greatly influence the effectiveness of the optimisation algorithms. Both PS and DE replace GA's traditional bit-inversion mutation scheme with a method that perturbs real-valued vectors with population-derived noise, making mutation an adaptive procedure [28]. Besides their good convergence properties and suitability for parallelism, PS and DE are simple to operate: they work on only a few control variables that remain constant throughout the
entire optimisation procedure.

A.3.1 Particle Swarm Algorithm

The PS algorithm, introduced by Eberhart and Kennedy [27], was inspired by the simulation of social behaviour. The particles in PS are represented as multi-dimensional points. During the optimisation procedure, they evolve towards the global and local best positions; that is, each particle's search trajectory is influenced by its own and the other particles' exploration experience. Each particle in the population keeps a record of its current position, its velocity, and the best position it has found so far. Specifically, a particle is manipulated using the following equations:

    v_i = w * v_i + α * (p_i − x_i) + β * (p_g − x_i)
    x_i = x_i + v_i                                          (A-1)

where x_i is the position of the ith member of the population and v_i is the current velocity of that individual. The local best for the ith individual is denoted by p_i, and p_g is the global best position. α and β are random numbers in the range [0, 1]; they reflect the degree to which the global best and local best guide the particle's subsequent search. The inertia weight w controls the impact of the previous history of velocities on the current velocity, thereby influencing the trade-off between the global exploration and local exploitation abilities of the particle.

At the beginning of the optimisation procedure, a population of particles is initialised with random positions x and velocities v. Each particle is then evaluated at each time step, i.e., each position vector x_i is evaluated to obtain the particle's fitness value. The best position vector for each particle is adjusted accordingly, and the fitness value of each particle is compared with the best fitness found so far in the population to identify the best-performing particle.

Particle swarm optimisation does not have explicit crossover and mutation operations, but the concepts are present. Unlike in GA, an individual (particle) in the population does not explicitly exchange genetic information
between a few randomly selected individuals. There are no parts (the chromosomes of GA) into which a particle is divided; its evolution is performed as a whole. PS uses a highly directional mutation operation, which changes the velocity of the particles between the local best and the global best. In this way, any point in the search space is eventually accessed given enough iterations or a sufficiently large velocity limit [28].

A.3.2 Differential Evolution Algorithm

The Differential Evolution (DE) algorithm, developed by Storn and Price, was the best genetic-type algorithm for solving the real-valued test function suite of the first International Contest on Evolutionary Computation [26]. The performance of DE depends on three variables: the population size NP, the mutation-scaling factor F and the crossover constant CR.

DE uses a new scheme to generate the variation vectors in the population, which is totally different from other evolutionary algorithms except PS. Two or four individuals in the population are randomly selected and their difference vector is taken as the variation vector. In this way, direction and distance information is extracted from the population to generate random deviations, which results in the excellent convergence of the population. For the ith individual in the population, the variation vector Δx_i is produced in the form:

    Δx_i = x_j + F * [(x_p − x_q) + (x_m − x_n)]             (A-2)

where j, p, q, m, n ∈ [0, NP−1] are randomly chosen integers, mutually different and different from the running index i, and F ∈ [0, 2] is a scaling factor which controls the amplification of the variation vector.

In order to increase the potential diversity of the population, a crossover operation is also introduced in DE. Assuming the ith individual takes the vector form (x_i1, x_i2, …, x_iD), the value of its jth element after crossover is calculated as:

    x_ij = x_ij    if a random number > CR
           Δx_ij   otherwise                                 (A-3)

where the crossover factor CR ∈ [0, 1]
is a control variable that effectively determines when a parameter should be mutated and thus helps the algorithm converge.

Appendix B  Flowchart of Proposed Evolutionary Multi-Objective Optimisation Approaches

B.1 Flowchart of Multi-Objective Genetic Algorithm

A pseudocode description of the proposed Multi-Objective Genetic Algorithm is outlined below.

    Generate initial population P(0);
    Evaluate P(0);
    t := 0;
    repeat
        Generate P(t+1) using P(t) as follows {
            Assign ranks to the individuals in P(t) as follows {
                rank_value := 1;
                repeat
                    Find all the non-dominated ones among the unvisited individuals;
                    Assign their ranks to be rank_value;
                    Set the visit flag of the assigned individuals;
                    rank_value := rank_value + 1;
                until rank_value equals the set value or all individuals are visited
            }
            if (different priorities among objectives)
                Modify the ranks in favour of the objective with preference;
            Select individuals for reproduction on the basis of rank;
            Recombine the selected individuals employing crossover and mutation operators;
        }
        Evaluate P(t+1);
        t := t+1;
    until the termination condition is met

B.2 Flowchart of Multi-Objective Particle Swarm and Multi-Objective Differential Evolution

The main steps of Multi-Objective Particle Swarm and Multi-Objective Differential Evolution are outlined as follows.

    Generate initial population P(0);
    Evaluate P(0);
    t := 0;
    repeat
        Generate P(t+1) using P(t) as follows {
            Assign ranks to the individuals in P(t);
            Identify the compromise solutions using the predefined scalarising function;
            Update the Pareto-optimal set;
            if (different priorities among objectives)
                Modify the ranks in favour of the objective with preference;
            Select individuals for reproduction on the basis of rank;
            Recombine the selected individuals employing PS or DE;
        }
        Evaluate P(t+1);
        t := t+1;
    until the termination condition
is met.

Appendix C  Preliminary Testing of Multi-Objective Optimisation Algorithms

C.1 Introduction

Two numerical multi-objective optimisation problems are used here to test the performance of MOPS and MODE. With the Multi-Attribute Genetic Algorithm (MAGA) [22] as a reference, the parameters of MOPS and MODE are varied to observe how their values affect the efficiency of the algorithms, measured in terms of the number of non-dominated solutions obtained and the shape of the resulting trade-off curve.

There are several variants of DE that generate variation vectors during the evolution procedure. In order to explain MOPS and MODE in a uniform way, two variants of DE that make use of the global best individual are employed, namely MODE1 and MODE2. In MODE1, the variation vector places a perturbation at a location between a randomly chosen population member and the best population member:

    Δx_i = x_r1 + F * (x_best + x_r2 − x_r3 − x_r4)          (C-1)

In MODE2, the variation vector is generated from the best member, with a perturbation located among randomly chosen members:

    Δx_i = x_best + F * (x_r1 + x_r2 − x_r3 − x_r4)          (C-2)

In equations (C-1) and (C-2), r1, r2, r3 and r4 are random integers that differ from each other, and the subscript best denotes the index of the best individual in the population. The search direction of each individual in the MODE population is thus affected by randomly selected individuals as well as by the global best.

The velocity limit Vmax of each dimension in MOPS determines how large a step through the feasible space each particle is allowed to take. When the steps are constrained to be too small, individuals may be unable to jump out of poor regions. When the steps are set too large, individuals often speed past the target region, although they may discover even better positions than the ones they set out for. Further, individuals are able to
escape from local optima with sufficiently large steps. The mathematical experiments below demonstrate the effect of each particle's step size on the search procedure. For simplicity, the inertia weight w for PS is set to 0.9 at the start of each run and gradually reduced to 0.4 at the end in both experiments.

C.2 Concave Problem

The first problem is to minimise

    f1(x, y) = (x² + y²)^(1/4)
    f2(x, y) = ((x − 0.5)² + (y − 0.5)²)^(1/4)

where x, y ∈ (−5, 10). The trade-off surface of this example is concave, which causes potential difficulty for conventional multi-objective optimisation approaches. In order to compare the algorithms meaningfully, the populations of MOPS and MODE are set to 30 and allowed to evolve for 100 generations; the MAGA population is also set to 30 and its iteration number is 197. During the evolution procedure, individuals that violate the constraints are treated as lethal and are discarded. Table C-1 lists the number of non-dominated solutions found by each algorithm at the end of the evolution. The results for different Vmax values in MOPS, in terms of f1 and f2, are shown in Figure C-1, and the performance of the two variants of MODE (MODE1 and MODE2) is displayed in Figure C-2.

Table C-1: Non-dominated solution numbers for Test 1

    Algorithm | Parameters              | Non-dominated solutions
    MOPS      | Vxmax=10.0, Vymax=10.0 |
    MOPS      | Vxmax=5.0,  Vymax=2.0  |
    MOPS      | Vxmax=2.0,  Vymax=5.0  | 11
    MOPS      | Vxmax=5.0,  Vymax=0.5  | 23
    MOPS      | Vxmax=2.0,  Vymax=0.5  | 37
    MOPS      | Vxmax=0.5,  Vymax=0.1  | 113
    MOPS      | Vxmax=0.2,  Vymax=0.05 | 163
    MODE      | MODE1                  | 166
    MODE      | MODE2                  | 512
    MAGA      |                        | 129

[Figure C-1: MOPS results for Test 1, showing the f1/f2 trade-off for (Vxmax, Vymax) = (2.0, 0.5), (0.5, 0.1) and (0.2, 0.05)]

[Figure C-2: MODE results for Test 1, showing the f1/f2 trade-off for MODE1 and MODE2]

C.3 Discontinuous Problem

The second test
problem is to minimise

    f1(x) = −x       if x ≤ 1
            x − 2    if 1 < x ≤ 3
            4 − x    if 3 < x ≤ 4
            x − 4    if x > 4
    f2(x) = (x − 5)²

The trade-off curve for this problem is not continuous over the problem domain. The population size of MOPS, MODE and MAGA is set to 100. The total iteration number is 200 for the first two algorithms and 631 for the last. The numbers of non-dominated solutions obtained by MOPS, MODE and MAGA are tabulated in Table C-2, and the performance of MOPS and MODE is shown in Figure C-3 and Figure C-4.

Table C-2: Non-dominated solution numbers for Test 2

    Algorithm | Parameters  | Non-dominated solutions
    MOPS      | Vxmax=40.0 | 191
    MOPS      | Vxmax=20.0 | 338
    MOPS      | Vxmax=10.0 | 652
    MOPS      | Vxmax=5.0  | 1142
    MODE      | MODE1      | 9129
    MODE      | MODE2      | 1431
    MAGA      |            | 494

[Figure C-3: MOPS results for Test 2, showing the f1/f2 trade-off for Vxmax = 40.0, 20.0, 10.0 and 5.0]

[Figure C-4: MODE results for Test 2, showing the f1/f2 trade-off for MODE1 and MODE2]

C.4 Summary

As seen from the optimisation results above, Vmax influences the performance of MOPS to a great extent. With a small Vmax, MOPS is liable to be trapped in local optima, but it can perform a fine-grained search that improves the quality of the obtained non-dominated solutions, as is evident from the test results. On the other hand, a large Vmax accelerates MOPS's search of the whole feasible space, but the region of interest is then likely to be overlooked. In fact, every evolutionary algorithm faces the same challenge of determining appropriate parameter values for a specific problem. As far as the two test problems in this section are concerned, MODE outperforms MOPS, since the distribution rather than the aggregation of non-dominated solutions is emphasised in
multi-objective optimisation. Nevertheless, compared with MAGA, the experimental results favour MOPS and MODE. In MAGA, the population evolution procedure is controlled by the crossover and mutation operations, which may result in premature convergence. The search procedures of MOPS and MODE, however, are guided by the compromise solutions, so their exploration and exploitation abilities are fully exercised.
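The Vmax clamp whose effect is summarised above can be sketched in a few lines. The following is an illustrative single-particle update implementing equation (A-1) with a per-component velocity limit; the function names are invented for this sketch, and α and β are fresh uniform [0, 1] draws as described in Appendix A:

```python
import random

def clamp(v: float, vmax: float) -> float:
    """Limit a velocity component to [-Vmax, Vmax], as in MOPS."""
    return max(-vmax, min(vmax, v))

def pso_step(x, v, p_local, p_global, w=0.9, vmax=2.0):
    """One particle-swarm update per equation (A-1), with the Vmax clamp
    studied in Appendix C. x, v, p_local and p_global are equal-length lists."""
    alpha, beta = random.random(), random.random()
    v_new = [clamp(w * vi + alpha * (pi - xi) + beta * (pg - xi), vmax)
             for vi, xi, pi, pg in zip(v, x, p_local, p_global)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

A small vmax forces short, fine-grained moves near the incumbent positions, while a large vmax permits the long jumps that let particles escape poor regions but overshoot the region of interest, which is the trade-off the two tests above quantify.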
