
Robust Control Systems with Genetic Algorithms


DOCUMENT INFORMATION

Basic information

Title: Robust Control Systems With Genetic Algorithms
Authors: Mo Jamshidi, Renato A. Krohling, Leandro Dos Santos Coelho, Peter J. Fleming
Institution: University of Texas
Field: Control Systems
Type: book
Year: 2003
City: Austin
Pages: 220
Size: 4.48 MB

Contents

  • 1.1 Introduction to genetic algorithms
  • 1.2 Terms and definitions
  • 1.3 Representation
    • 1.3.1 Genetic algorithms with binary representation
    • 1.3.2 Genetic algorithms with real representation
  • 1.4 Fitness function
  • 1.5 Genetic operators
    • 1.5.1 Selection
      • 1.5.1.1 Proportionate selection
      • 1.5.1.2 Tournament selection
    • 1.5.2 Crossover
      • 1.5.2.1 Crossover for binary representation
      • 1.5.2.2 Crossover for real representation
    • 1.5.3 Mutation
      • 1.5.3.1 Mutation for binary representation
      • 1.5.3.2 Mutation for real representation
  • 1.6 Genetic algorithms for optimization
    • 1.6.1 Genetic algorithms at work
    • 1.6.2 An optimization example
  • 1.7 Genetic programming
  • 1.8 Conclusions
  • 2.1 Introduction to the control theory
  • 2.2 Norms of signals and functions
  • 2.3 Description of model uncertainty
  • 2.4 Robust stability and disturbance rejection
    • 2.4.1 Condition for robust stability
    • 2.4.2 Condition for disturbance rejection
  • 2.5 Controller design
    • 2.5.1 Optimal controller design
    • 2.5.2 Optimal robust controller design
    • 2.5.3 Optimal disturbance rejection controller design
  • 2.6 Optimization
    • 2.6.1 The optimization problem
    • 2.6.2 Constraint handling
  • 2.7 Conclusions
  • 3.1 Introduction to controller design using genetic algorithms
  • 3.2 Design of optimal robust controller with fixed structure
    • 3.2.1 Design method
    • 3.2.2 Design example
  • 3.3 Design of optimal disturbance rejection controller with fixed structure
    • 3.3.1 Design method
    • 3.3.2 Design example
  • 3.4 Evaluation of the methods
  • 3.5 Conclusions
  • 4.1 Model-based predictive controllers
    • 4.1.1 Basic concepts and algorithms
    • 4.1.2 Generalized predictive control
      • 4.1.2.1 Formulation and design of GPC
      • 4.1.2.2 Overview of optimization of GPC design by genetic algorithms
  • 4.2 Variable structure control systems
    • 4.2.1 Introduction
    • 4.2.2 Basic concepts and controller design
    • 4.2.3 Overview of optimization of variable structure control
  • 5.1 Optimization of generalized predictive control design by genetic algorithms
    • 5.1.1 Design method
    • 5.1.2 Design example
    • 5.1.3 Simulation results
      • 5.1.3.1 Case study 1: Adaptive GPC design without
      • 5.1.3.2 Case study 2: Adaptive GPC design with
  • 5.2 Optimization of quasi-sliding mode control design by genetic algorithms
    • 5.2.1 Design method
    • 5.2.2 Design example
    • 5.2.3 Simulation results
      • 5.2.3.1 Case study 1: Self-tuning quasi-sliding mode
      • 5.2.3.2 Case study 2: Self-tuning quasi-sliding mode
  • 5.3 Conclusions
  • 6.1 Introduction
  • 6.2 Fuzzy control
  • 6.3 Genetic tuning of fuzzy control systems
  • 6.4 Gas turbine engine control
    • 6.4.1 Gas turbine engines — an overview
    • 6.4.2 GTE types
    • 6.4.3 The GTE control problem
  • 6.5 Fuzzy control system design — example study
    • 6.5.1 Problem formulation
    • 6.5.2 Heuristic design of the fuzzy controllers
    • 6.5.3 GA tuning of the fuzzy controllers
  • 6.6 Applications of GAs for fuzzy control
  • 7.1 Introduction
  • 7.2 Hierarchical fuzzy control for a flexible robotic link
    • 7.2.1 A mathematical model
    • 7.2.2 Separation of spatial and temporal parameters
    • 7.2.3 The second level of hierarchical controller
      • 7.2.3.1 Line-curvature analysis
      • 7.2.3.2 The rule base
    • 7.2.4 The lower level of hierarchy
  • 7.3 Genetic algorithms in knowledge enhancement
    • 7.3.1 Interpretation function
    • 7.3.2 Incorporating initial knowledge from one expert
    • 7.3.3 Incorporating initial knowledge from several experts
  • 7.4 Implementation issues
    • 7.4.1 Software aspects
    • 7.4.2 Hardware aspects
  • 7.5 Simulation
  • 7.6 Conclusions
  • 8.1 Introduction
  • 8.2 Hierarchical fuzzy-behavior control
    • 8.2.1 Behavior hierarchy
  • 8.3 Coordination by behavior modulation
    • 8.3.1 Related work
  • 8.4 Genetic programming of fuzzy behaviors
    • 8.4.1 Rule discovery
  • 8.5 Evolution of coordination
    • 8.5.1 Behavior fitness evaluation
  • 8.6 Autonomous navigation results
    • 8.6.1 Hand-derived behavior
    • 8.6.2 Evolved behavior
  • 8.7 Conclusions
  • 9.1 Introduction
  • 9.2 H-infinity design of robust control systems
    • 9.2.1 Introduction to H-infinity design
    • 9.2.2 Loop-shaping design procedure
    • 9.2.3 H-infinity robust stabilization
  • 9.3 Multiobjective optimization
    • 9.3.1 Introduction to multiobjective optimization
    • 9.3.2 Multiobjective genetic algorithms
    • 9.3.3 Robust control system design: Incorporating
  • 9.4 Case study: Robust control of a gasification plant
    • 9.4.1 Plant model and design requirements
    • 9.4.2 Problem formulation
    • 9.4.3 Design using a hybrid H-infinity/multiobjective
  • 9.5 Conclusions
  • A.1 Introduction
  • A.2 Classical sets
  • A.3 Classical set operations
  • A.4 Properties of classical sets
  • A.5 Fuzzy sets and membership functions
  • A.6 Fuzzy sets operations
  • A.7 Properties of fuzzy sets
  • A.8 Predicate logic
  • A.9 Fuzzy logic
  • A.10 Fuzzy control
  • A.11 Basic definitions
  • A.12 Conclusion

Content

Introduction to genetic algorithms

The evolutionary theory, rooted in Darwin's principle of natural selection and Mendel's genetics, explains the natural evolution of populations (Michalewicz, 1996). Genetic algorithms (GA), developed by Holland in 1975, leverage these principles to serve as optimization methods. Operating on a population of potential solutions, known as individuals, GA evaluate each individual's fitness, which reflects its effectiveness in solving the optimization problem.

Genetic Algorithms (GAs) start with a randomly initialized population and evolve it through the genetic operators selection, crossover, and mutation. The selection process identifies the fittest individuals to advance to the next generation, while crossover combines genetic material from two individuals to produce new ones. Mutation introduces random changes to an individual's genetic makeup. This iterative application of genetic operators continues until a satisfactory solution to the optimization problem is achieved, typically defined by a predetermined stopping condition such as reaching a specific number of generations. Overall, GAs are characterized by their ability to iteratively refine solutions through these evolutionary processes.

• GA operate with a population of possible solutions (individuals) instead of a single individual. Thus, the search is carried out in a parallel form.

Genetic Algorithms (GAs) excel at identifying optimal or near-optimal solutions within complex and expansive search spaces. They are particularly effective for nonlinear optimization problems, accommodating constraints that can be represented in both discrete and continuous search environments.


• GA examine many possible solutions at the same time. So, there is a higher probability that the search converges to an optimal solution.

In Holland's classical Genetic Algorithm (GA) from 1975, individuals were represented using binary numbers or bit strings. However, advancements have led to the development of new representations and genetic operators. For optimization problems involving continuous variables, real number representation has proven to be more effective, allowing individuals to be represented directly as real numbers without the need for binary conversion. The following section outlines key terms and definitions related to these concepts.

Terms and definitions

This section introduces key terms related to Genetic Algorithms (GA), which mirror the principles of natural evolution but use distinct terminology. For an in-depth understanding of GA, refer to the works by Goldberg (1989), Michalewicz (1996), Mitchell (1996), Bäck (1996), and Fogel (1995).

For GA, the population is a central term. A population P consists of individuals c_i with i = 1, …, μ:

P = {c_1, c_2, …, c_μ}    (1.1)

The population size μ can be modified during the optimization process. In this work, however, it is kept constant.

An individual represents a potential solution to an optimization problem whose objective function f(x) is a scalar-valued function of an n-dimensional vector x. The vector x comprises the n variables x_j, j = 1, …, n, and represents a point in real space ℜ^n. The variables x_j are called genes. Thus, an individual c_i consists of n genes:

c_i = [x_1, x_2, …, x_n]    (1.2)

In the initial formulation of Genetic Algorithms (GA), individuals were represented as binary numbers made up of bits (0s and 1s), utilizing both binary and Gray coding methods. A binary-coded individual is referred to as a chromosome. In contrast, with real representation an individual is defined as a vector of real numbers.

Fitness is a crucial concept that measures an individual's quality within a population, determined by the fitness function F(x). Genetic Algorithms (GA) aim to identify the fittest individuals by maximizing this fitness function, which in straightforward scenarios is often the objective function itself.

The average fitness F_m of a population is determined as follows:

F_m = (1/μ) Σ_{i=1}^{μ} F(c_i)    (1.3)

The relative fitness p_i of an individual c_i is calculated by:

p_i = F(c_i) / Σ_{j=1}^{μ} F(c_j)    (1.4)
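These two quantities can be sketched in a few lines of Python (the helper names are illustrative, not from the book):

```python
def average_fitness(fitnesses):
    # F_m = (1/mu) * sum_i F(c_i), Equation (1.3)
    return sum(fitnesses) / len(fitnesses)

def relative_fitness(fitnesses):
    # p_i = F(c_i) / sum_j F(c_j); the p_i sum to 1
    total = sum(fitnesses)
    return [f / total for f in fitnesses]

fits = [2.0, 3.0, 5.0]
print(average_fitness(fits))   # 10/3
print(relative_fitness(fits))  # [0.2, 0.3, 0.5]
```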

Genetic Algorithms (GAs) initiate optimization by selecting a random population of individuals, followed by the calculation of each individual's fitness. The process then applies the genetic operators selection, crossover, and mutation to generate new individuals, forming the subsequent population. This transition from population P_g to P_g+1 is referred to as a generation, with g indicating the generation number. The population evolves over multiple generations until the problem is resolved, typically concluding when a predetermined maximum number of generations, g_max, is reached.

Figure 1.1 Representation of the executed operations during a generation.


Representation

Genetic algorithms with binary representation

Binary representation encompasses both binary coding and Gray coding, but this section focuses solely on binary coding to illustrate the principles of the classical Genetic Algorithm (GA). For a detailed explanation of Gray coding, consult Bethke (1981).

The objective function is denoted f(x), where the vector x comprises n variables x_i, i = 1, …, n. Each variable x_i has specified lower and upper bounds, x_i,min and x_i,max. For binary coding, the variable x_i is first transformed into a normalized value x_i^norm in the range [0, 1]:

x_i^norm = (x_i − x_i,min) / (x_i,max − x_i,min)

The normalized value x_i^norm is then converted into a binary number c_i. The number of bits m used for c_i is chosen according to the desired accuracy of the representation.

The encoding of a normalized number x_i^norm into the corresponding binary number c_i follows the pseudo-code shown in Figure 1.2. The decoding of a binary number c_i into the corresponding variable x_i follows the pseudo-code shown in Figure 1.3.
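A minimal sketch of this encode/decode pair (assumed function names; the book's pseudo-code in Figures 1.2 and 1.3 extracts bits successively, while this sketch uses integer quantization for brevity):

```python
def encode(x, x_min, x_max, m):
    # normalize x to [0, 1], quantize to an m-bit integer, emit the bit string
    x_norm = (x - x_min) / (x_max - x_min)
    k = round(x_norm * (2**m - 1))
    return format(k, '0{}b'.format(m))

def decode(bits, x_min, x_max):
    # map the bit string back onto [x_min, x_max]
    m = len(bits)
    x_norm = int(bits, 2) / (2**m - 1)
    return x_min + x_norm * (x_max - x_min)

b = encode(2.5, 0.0, 10.0, 8)
print(b)                      # '01000000'
print(decode(b, 0.0, 10.0))   # quantization error below (x_max - x_min)/(2^m - 1)
```

The round trip is not exact: with m bits the error is bounded by the quantization step, which is why the text ties the bit count to the desired accuracy.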

Genetic algorithms with real representation

For optimization problems involving continuous variables, using a real number representation simplifies the process, as each individual is represented by a vector of real numbers, with each element corresponding to a gene. This approach eliminates the need for coding or decoding, resulting in a more straightforward and efficient implementation. The accuracy of the real representation depends only on the computer used. Studies by Davis (1991), Wright (1991), and Michalewicz (1996) highlight the advantages of real representation over binary representation.

Fitness function

In Genetic Algorithms (GA), each individual represents a unique point in the search space, participating in a simulated evolutionary process. During each generation, individuals with higher fitness, referred to as "good individuals," reproduce, while those with lower fitness, or "bad individuals," do not survive. The fitness of each individual is determined by a fitness function F(x), which is based on the objective function of the optimization problem being addressed. For maximization problems, the fitness function is calculated as (Goldberg, 1989):

F(x) = f(x) + C_min

The constant C_min serves as a lower bound for fitness values, enabling the transformation of negative objective values into positive fitness. This adjustment is essential, as numerous selection methods require nonnegative fitness to function effectively.

Figure 1.2 Pseudo-code for the binary coding.


To address minimization problems using genetic algorithms (GAs), it is essential to modify the objective function, as GAs operate on the principle of maximizing fitness. This transformation can be achieved by multiplying the objective function by −1, effectively converting the minimization problem into a maximization problem. The fitness function is then calculated as:

F(x) = C_max − f(x)    (1.8)

The constant C_max serves as an upper limit for fitness values, allowing for the conversion of negative fitness into positive fitness. The specifics of determining C_min and C_max and other fitness scaling methods are not covered here, as the tournament selection method employed in this work permits the use of negative fitness. Consequently, for maximization problems, C_min can simply be set to zero.

Figure 1.3 Pseudo-code for the binary decoding.

For minimization problems, C_max can be set to zero in Equation (1.8), resulting in

F(x) = −f(x)    (1.10)
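The two fitness transformations described above can be sketched directly (hypothetical helper names; the forms follow the Goldberg, 1989, conventions used in the text):

```python
def fitness_max(f_value, c_min=0.0):
    # maximization: shift the objective by the lower bound C_min
    return f_value + c_min

def fitness_min(f_value, c_max=0.0):
    # minimization: F(x) = C_max - f(x) converts it into maximization
    return c_max - f_value

print(fitness_min(3.0))         # -3.0  (C_max = 0, Equation (1.10))
print(fitness_min(3.0, 10.0))   # 7.0
```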

For constrained optimization problems, the fitness function becomes more complex (see the next chapter).

Genetic operators

Selection

The selection process in genetic algorithms (GA) identifies the most fit individuals from a population to progress to the next generation, adhering to Darwin's principle of "survival of the fittest." This process involves comparing the fitness levels of individuals and determining which ones advance based on their relative performance. As a result, high-quality individuals are more likely to be selected for the next population, while less fit individuals have a significantly lower chance of moving forward.

Selection pressure is a crucial concept in genetic algorithms (GAs), referring to the extent to which fitter individuals are favored in the selection process. According to Miller and Goldberg (1995), higher selection pressure increases the likelihood of selecting superior individuals, while excessively high selection pressure can result in premature convergence to a local optimum. On the other hand, insufficient selection pressure may lead to slow convergence. The convergence rate of a GA is therefore significantly influenced by the level of selection pressure applied.

GA is able to find optimal or suboptimal solutions under different selection pressures (Goldberg et al., 1993).

Theoretical studies on the efficiency of various selection methods are discussed in Blickle (1997). This section focuses on two key selection techniques: proportionate selection, utilized in classical genetic algorithms (GAs), and tournament selection, which is employed in this work.

In proportionate selection, the likelihood of an individual c_i advancing to the next generation is directly related to its relative fitness p_i. The expected number of offspring ξ_i of an individual c_i is its relative fitness multiplied by the population size: ξ_i = p_i · μ.

Proportionate selection can be effectively illustrated using a roulette wheel analogy. In this model, each individual in the population is represented by a specific field on the wheel, with the size of each field reflecting the individual's fitness. The likelihood of the wheel landing on a particular field corresponds to the relative fitness of that individual. Spinning the roulette wheel a number of times equal to the population size simulates the selection of the next population based on fitness.
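The roulette-wheel analogy translates into a short routine (assumed function name; nonnegative fitness is required, as noted below):

```python
import random

def roulette_select(population, fitnesses, rng=random):
    # one spin of the wheel: individual i wins with probability
    # p_i = F(c_i) / sum_j F(c_j)   (nonnegative fitness assumed)
    r = rng.uniform(0.0, sum(fitnesses))
    acc = 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if r <= acc:
            return individual
    return population[-1]   # guard against floating-point round-off

random.seed(1)
draws = [roulette_select(['a', 'b', 'c'], [1.0, 1.0, 8.0]) for _ in range(1000)]
print(draws.count('c'))   # close to 800 out of 1000 spins
```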

Proportionate selection, developed originally by Holland for the classical Genetic Algorithm (GA), works only for nonnegative fitness values, necessitating a fitness scaling method when fitness can become negative, as highlighted by Goldberg (1989). Research by Blickle (1997) demonstrates that the tournament selection method yields superior performance in these scenarios.

In tournament selection, the fittest individual from a randomly chosen group of z individuals advances to the next generation, with this process repeated μ times. The tournament size z can be increased to raise the selection pressure: the winner of a large tournament generally has greater fitness than the winner of a small one. Often, this selection method is implemented as a binary tournament, where only two individuals compete. A key advantage of this method is that it does not necessitate fitness scaling, allowing for the inclusion of negative fitness values.
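A sketch of one tournament (assumed function name; note that negative fitness values pose no problem here, unlike in proportionate selection):

```python
import random

def tournament_select(population, fitnesses, z=2, rng=random):
    # draw z distinct competitors at random; the fittest one wins
    competitors = rng.sample(range(len(population)), z)
    winner = max(competitors, key=lambda i: fitnesses[i])
    return population[winner]

pop, fits = ['a', 'b', 'c'], [-5.0, -1.0, -3.0]
# with z equal to the population size, the fittest individual always wins
print(tournament_select(pop, fits, z=3))   # 'b'
```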

Crossover

In the selection process, only copies of individuals are added to the new population; crossover generates new genetic material by exchanging genes between two selected individuals. The resulting offspring replace the parents in the evolving population. However, crossover can sometimes destroy valuable genetic material. To mitigate this risk, crossover is performed with a fixed probability p_c, determined prior to the optimization process. In each generation, about p_c · μ/2 random pairs undergo crossover to create new individuals.

The crossover operation involves generating a random number between zero and one; if this number is less than the crossover probability, two individuals are selected and their chromosomes are split at a randomly determined crossover point. This point determines the genetic composition of the resulting individuals. A new crossover point is selected for each pair of individuals, and the form of the crossover operator depends on the representation used. The crossover operator is detailed below for both binary and real representations.

With binary representation, crossover splits two randomly selected individuals, represented as binary numbers, at a designated crossover point i. Each parent is divided into two segments, which are then recombined to form new offspring:

c_1 = [c_1,1, …, c_1,i, c_1,i+1, …, c_1,n]
c_2 = [c_2,1, …, c_2,i, c_2,i+1, …, c_2,n]

After the crossover operation is realized at the crossover point i, two new individuals (offspring) result:

c_1^new = [c_1,1, …, c_1,i, c_2,i+1, …, c_2,n]
c_2^new = [c_2,1, …, c_2,i, c_1,i+1, …, c_1,n]

The new individual c_1^new is created by combining the first segment of the old individual c_1 with the second segment of the old individual c_2. In contrast, the new individual c_2^new is formed by merging the first segment of the old individual c_2 with the second segment of the old individual c_1. An example of this process is illustrated in Figure 1.4.
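The segment swap can be sketched in a few lines (hypothetical helper name; the book gives only pseudo-code for its operators):

```python
import random

def one_point_crossover(c1, c2, rng=random):
    # choose crossover point i in 1..n-1 and swap the tails of the two parents
    i = rng.randint(1, len(c1) - 1)
    return c1[:i] + c2[i:], c2[:i] + c1[i:]

random.seed(3)
o1, o2 = one_point_crossover('11111111', '00000000')
print(o1, o2)   # a run of 1s then 0s, and its complement
```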

The analysis above covers one-point crossover; Syswerda (1989) explored the extension to two-point and multipoint crossover techniques. Additionally, Syswerda (1993) examined the implications of utilizing more than two individuals in crossover operations. The effectiveness of two-point or multipoint crossover, as well as the involvement of several individuals, is contingent upon the specific problem being addressed.

In real representation, individuals are represented by real numbers, and the application of arithmetical crossover techniques has proven effective in solving constrained nonlinear optimization problems, as noted by Michalewicz in 1996.

Figure 1.4 Crossover for binary representation.

Let c_1 and c_2 be two individuals that are to reproduce. The two offspring c_1^new and c_2^new are produced as a linear combination of their parents c_1 and c_2:

c_1^new = λ c_1 + (1 − λ) c_2
c_2^new = (1 − λ) c_1 + λ c_2

where λ ∈ [0, 1] is the crossover parameter.
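The linear combination above is straightforward to sketch (assumed function name):

```python
def arithmetic_crossover(c1, c2, lam=0.5):
    # offspring are elementwise linear combinations of the two parents
    o1 = [lam * a + (1 - lam) * b for a, b in zip(c1, c2)]
    o2 = [(1 - lam) * a + lam * b for a, b in zip(c1, c2)]
    return o1, o2

# with lambda = 0.5 both offspring are the midpoint of the parents
print(arithmetic_crossover([1.0, 2.0], [3.0, 6.0]))   # ([2.0, 4.0], [2.0, 4.0])
```

Note that for λ ∈ [0, 1] each offspring gene stays between the parents' genes, which is why this operator suits constrained real-valued search spaces.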

Mutation

The mutation process introduces random variations in an individual's genes, governed by a fixed mutation probability p_m. During optimization, a random number between 0 and 1 is generated for each individual and compared to p_m. If the random number is less than p_m, a gene of the individual is mutated. Mutation maintains genetic diversity within the population.

In binary representation, a single bit of a gene is randomly selected and inverted, changing a 0 to a 1 or a 1 to a 0. This process is illustrated in Figure 1.5, which provides an example of binary mutation.
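The bit inversion can be sketched as follows (assumed function name):

```python
import random

def bit_flip_mutation(chromosome, rng=random):
    # pick one random bit position and invert it (0 -> 1, 1 -> 0)
    j = rng.randrange(len(chromosome))
    flipped = '1' if chromosome[j] == '0' else '0'
    return chromosome[:j] + flipped + chromosome[j + 1:]

mutant = bit_flip_mutation('10101010')
print(mutant)   # differs from '10101010' in exactly one position
```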

The mutation operator was initially designed for binary representation; methods have since been developed for modifying genes in real representation. These techniques use a probability distribution defined over the possible values of each gene, with new gene values drawn from this distribution. In essence, the mutation operator introduces random changes to one or more genes of a chosen individual, enhancing genetic diversity.

Let an individual be represented as c_i = [c_i1, …, c_ij, …, c_in], where c_ij is the gene targeted for mutation. The gene c_ij is constrained to a range, c_ij ∈ [c_ij,min, c_ij,max], with c_ij,min and c_ij,max indicating the minimum and maximum limits, respectively. Mutations of real numbers can be categorized into two primary types: uniform mutation and nonuniform mutation (Michalewicz, 1996).

Figure 1.5 Mutation for binary representation.

1. Uniform mutation: The application of this operator results in an individual c_i' = [c_i1, …, c_ij', …, c_in], where c_ij' is a random value (uniform probability distribution) within the domain of c_ij. The mutation operator is applied with a probability p_m.

2. Nonuniform mutation: The application of this operator results in an individual c_i' = [c_i1, …, c_ij', …, c_in], where c_ij' is a value calculated by:

c_ij' = c_ij + Δ(g, c_ij,max − c_ij)   if h = 0
c_ij' = c_ij − Δ(g, c_ij − c_ij,min)   if h = 1

Here h represents a randomly selected binary digit, either 0 or 1. The function Δ(g, y) produces a value within the range [0, y], where the likelihood of Δ(g, y) being close to zero rises progressively with the generation number g:

Δ(g, y) = y · (1 − r_a^((1 − g/g_max)^b))

where r_a is a randomly generated number in the interval [0, 1] and b is a designer-selected parameter that controls the dependence on the generation number. Consequently, the search is almost uniform when the generation count g is low and becomes increasingly localized in later generations.
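A sketch of the nonuniform mutation (assumed function names; the Δ formula is the standard Michalewicz, 1996, form, since the exact equation did not survive extraction):

```python
import random

def delta(g, y, g_max, b=2.0, rng=random):
    # Delta(g, y) in [0, y]; shrinks toward 0 as g approaches g_max
    r = rng.random()
    return y * (1.0 - r ** ((1.0 - g / g_max) ** b))

def nonuniform_mutation(c, j, c_min, c_max, g, g_max, rng=random):
    h = rng.randint(0, 1)   # random binary digit h
    c = list(c)
    if h == 0:
        c[j] += delta(g, c_max - c[j], g_max, rng=rng)
    else:
        c[j] -= delta(g, c[j] - c_min, g_max, rng=rng)
    return c

random.seed(0)
for g in (0, 50, 99):
    print(nonuniform_mutation([0.5], 0, 0.0, 1.0, g, 100))
```

By construction the mutated gene never leaves [c_min, c_max], and at g = g_max the step size collapses to zero.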

Genetic algorithms for optimization

Genetic algorithms at work

A Genetic Algorithm (GA) is employed to tackle an unconstrained optimization problem as follows. First, a fitness function F(x) is defined for the specific problem. Key parameters such as the crossover probability p_c, the mutation probability p_m, and the population size μ are established, followed by the random initialization of the initial population P_0. The first generation begins with the fitness evaluation F(c_i) for each individual in the population. Through selection, a transition population is formed, which undergoes crossover with probability p_c, resulting in an intermediate population. Subsequently, mutation is applied with probability p_m to generate the next population, designated P_g+1. This iterative process continues until the maximum generation number g_max is reached, at which point the optimization concludes and the fittest individual is taken as the solution to the problem.

Because the genetic operators are applied repeatedly and at random, the fittest individual of a generation may be lost, for example when crossover or mutation destroys it. Owing to the stochastic nature of GA, there is no guarantee that the best individual survives from one generation to the next.


A common remedy is elitism: if the best individual of the new generation has lower fitness than the best individual of the previous generation, the previous best is copied into the new population. This guarantees that the fitness of the best individual never decreases from one generation to the next.

An optimization example

To evaluate the effectiveness of Genetic Algorithms (GA), we utilize a well-established analytic test function. Classical GA, which rely on individuals represented by bit strings together with proportionate selection, one-point crossover, and bit-inversion mutation, do not always address optimization problems efficiently (Mitchell, 1996). Therefore, we implement a GA with real representation and the following genetic operators:

• Tournament selection with tournament size z = 2

• Arithmetical crossover with crossover parameter λ = 0.5

• Mutation according to a uniform probability distribution

We use the Goldstein–Price function, described by the following equation (De Jong, 1975):

f_G(x_1, x_2) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)]
              · [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)]    (1.15)

Algorithm 3: genetic algorithm
begin
    g := 0
    initialize population P_0
    while g < g_max do
        calculate fitness F(c_i) for all individuals in P_g
        selection
        crossover with probability p_c
        mutation with probability p_m
        g := g + 1
    end while
    return fittest individual
end

The Goldstein–Price function has a single global minimum at the point x* = [0, −1]^T, where the function value is f_G(x*) = 3. The minimization of this function, given in Equation (1.15), is performed using a genetic algorithm (GA), with each individual represented by the vector x = [x_1, x_2]^T.
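The Goldstein–Price function of Equation (1.15) translates directly into code, and the stated minimum is easy to verify:

```python
def goldstein_price(x1, x2):
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b

print(goldstein_price(0.0, -1.0))   # 3.0, the global minimum f_G(x*)
print(goldstein_price(1.0, 1.0))    # 1876.0, far from the optimum
```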

Based on Equation (1.10), the fitness of each individual is given by the following:

F(x) = −f_G(x)

The GA parameters are population size μ = 100, crossover probability p_c = 0.3, mutation probability p_m = 0.05, and maximum number of generations g_max = 100.

The population is randomly initialized. The minimization of the Goldstein–Price function during the first 20 generations is shown in Figure 1.7.

The GA is able to find the global minimum of the Goldstein–Price function in 12 generations. This example indicates that GA are suitable for finding the minimum of highly nonlinear functions.
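A compact end-to-end sketch of this experiment follows. The parameters (μ = 100, p_c = 0.3, p_m = 0.05, g_max = 100) come from the text; the search domain [−2, 2]^2 is the conventional Goldstein–Price domain, and the exact placement of elitism and the mutation range are assumptions, not the book's implementation:

```python
import random

def goldstein_price(x1, x2):
    a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
    b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
    return a * b

def fitness(ind):
    # Equation (1.10): F(x) = -f_G(x), so maximizing F minimizes f_G
    return -goldstein_price(ind[0], ind[1])

def run_ga(mu=100, p_c=0.3, p_m=0.05, g_max=100, lo=-2.0, hi=2.0, rng=random):
    pop = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(mu)]
    best = max(pop, key=fitness)
    for _ in range(g_max):
        # binary tournament selection (z = 2)
        new = []
        for _ in range(mu):
            a, b = rng.sample(pop, 2)
            new.append(list(max(a, b, key=fitness)))
        # arithmetical crossover with lambda = 0.5
        for i in range(0, mu - 1, 2):
            if rng.random() < p_c:
                mid = [(u + v) / 2 for u, v in zip(new[i], new[i + 1])]
                new[i], new[i + 1] = mid, list(mid)
        # uniform mutation: redraw one gene uniformly in [lo, hi]
        for ind in new:
            if rng.random() < p_m:
                ind[rng.randrange(2)] = rng.uniform(lo, hi)
        # elitism: keep the best individual found so far
        worst = min(range(mu), key=lambda i: fitness(new[i]))
        if fitness(new[worst]) < fitness(best):
            new[worst] = list(best)
        pop = new
        best = max(pop, key=fitness)
    return best

random.seed(42)
x_best = run_ga()
print(x_best, goldstein_price(*x_best))   # a point near [0, -1] with f_G near 3
```

Elitism makes the best fitness monotonically nondecreasing over generations, so the returned value can only approach the global minimum from above.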

The next chapters of this book will present new methods using GA to automatically design optimal robust, predictive, and variable structure controllers.

Genetic programming

Genetic programming (GP) is an extension of the GA for handling complex computational structures (Howard and D'Angelo, 1995; Koza, 1992).

Figure 1.7 Minimization of the Goldstein–Price function using a GA.

Genetic Programming (GP) employs unique individual representations and genetic operators, utilizing an efficient data structure to generate symbolic expressions and perform symbolic regression. The problem-solving process in GP can be viewed as a search through the space of combinations of these symbolic expressions, each encoded in a tree structure, the computational program. This structure is composed of nodes and can vary in size.

Genetic Programming (GP) utilizes fixed sets of symbols to optimize tree structures rather than numerical parameters alone. These symbols are categorized into two alphabets: the functional alphabet, which includes characters for arithmetic operations and mathematical functions such as +, −, *, /, sqrt, log, exp, ln, and logical operations like AND and OR; and the terminal alphabet, consisting of constants, numerical values, and domain-specific inputs. The search space encompasses all possible compositions of functions that can be recursively generated from these alphabets. Symbolic expressions, or S-expressions, from the LISP programming language provide a practical method for creating and manipulating these function and terminal compositions, as illustrated by the tree-structure representation of the expression (x + y) · ln x in Figure 1.8.
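A symbolic expression and its recursive evaluation can be sketched with nested tuples standing in for LISP S-expressions (illustrative code, not from the book):

```python
import math

def evaluate(tree, env):
    # terminals: variable names (str) or numeric constants
    if isinstance(tree, str):
        return env[tree]
    if isinstance(tree, (int, float)):
        return tree
    op, *args = tree                     # functional node
    vals = [evaluate(a, env) for a in args]
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, 'ln': math.log, 'sqrt': math.sqrt}
    return ops[op](*vals)

# the tree of Figure 1.8: (x + y) * ln x
expr = ('*', ('+', 'x', 'y'), ('ln', 'x'))
print(evaluate(expr, {'x': math.e, 'y': 1.0}))   # (e + 1) * ln e = e + 1
```

Crossover and mutation in GP then operate on these nested structures (swapping sub-trees) rather than on flat gene vectors.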

The crossover operation in GP involves randomly selecting and swapping sub-trees, while mutation operations typically swap genes under designer-imposed restrictions. The optimization process through genetic programming can be outlined in the following steps (Bäck et al., 1997):

1. Randomly create a population of trees with uniform distribution, providing symbolic expressions.

2. Evaluate each tree using the fitness function.

3. Select parent trees according to their fitness.

4. Apply the crossover operator to a set of parent trees, chosen randomly.

5. Apply the mutation operator.

6. Repeat steps (2) to (5) until a stop criterion is satisfied.

In the crossover operation, two trees with similar structures are selected, in line with the techniques utilized by genetic algorithms (GA). It is essential that this operation maintains the syntactic integrity of the symbolic expressions, so that the resulting program remains evaluable.

Figure 1.8 Representation in a tree structure.

The crossover involves randomly selecting a sub-tree from one parent tree and swapping it with a sub-tree from the other parent tree; the exchanged sub-trees are marked by darker lines in Figure 1.9. The resulting trees are added to the mating pool, producing offspring for the subsequent generation.

After the crossover operation is applied to the parents, the result is the creation of two offspring, as illustrated in Figure 1.10.

Mutation introduces alterations to a tree structure, and the modified version is passed on to the next generation. This process involves randomly changing a function, input, or constant within a symbolic expression (Bäck et al., 1997). For instance, applying the mutation operator to offspring 2 transforms a terminal node with the value 15 into the value 21.

In Chapter 8, we present a fuzzy-GP approach to mobile robot navigation and control.

Conclusions

In this chapter, an overview of genetic algorithms (GA) and genetic pro- gramming (GP) was given The major definitions and terminology of genetic

Figure 1.9 Representation of the parent trees before crossover.

Figure 1.10 Representation of the parent trees after crossover.

Traditional binary and real representations were detailed, together with the fitness function and the three genetic operators: selection, crossover, and mutation. To demonstrate these concepts, we illustrated the application of GAs to optimization problems. Finally, we extended the concept of GAs to genetic programming, explaining the tree representation and the corresponding genetic operators.

Bäck, T., Evolutionary Algorithms in Theory and Practice, Oxford University Press, Oxford, 1996.

Bäck, T., Fogel, D.B., and Michalewicz, Z., Handbook of Evolutionary Computation, Institute of Physics Publishing, Philadelphia and Oxford University Press, New York, Oxford, 1997.

Bethke, A.D., Genetic Algorithms as Function Optimization, Ph.D. dissertation, University of Michigan, 1981.

Blickle, T., Theory of Evolutionary Algorithms and Applications to System Synthesis, Ph.D. dissertation, Eidgenössische Technische Hochschule, Zürich, 1997.

Davis, L. (Ed.), Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.

De Jong, K.A., An Analysis of the Behavior of a Class of Genetic Adaptive Systems, Ph.D. dissertation, University of Michigan, 1975.

Fogel, D.B., Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway, 1995.

Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Ad- dison Wesley, Reading, 1989.

Goldberg, D.E., Deb, K., and Thierens, D., Toward a better understanding of mixing in genetic algorithms, Journal of the Society of Instrument and Control Engineers,

Holland, J.H., Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.

Howard, L.M. and D'Angelo, D.J., The GA-P: A genetic algorithm and genetic programming hybrid, IEEE Expert, 10, 3, 11–15, 1995.

Koza, J.R., Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, Cambridge, 1992.

Michalewicz, Z., Genetic Algorithms + Data Structure = Evolution Programs, Springer- Verlag, Berlin, 1996.

Figure 1.11 Representation of an offspring tree after mutation.

Miller, B.L. and Goldberg, D.E., Genetic Algorithms, Tournament Selection and the Effects of Noise, IlliGAL Report No. 95006, Department of General Engineering, University of Illinois, 1995.

Mitchell, M., An Introduction to Genetic Algorithms, MIT Press, Cambridge, 1996.

Syswerda, G., Uniform crossover in genetic algorithms, in Proceedings of the 3rd International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, 1989.

Syswerda, G., Simulated crossover in genetic algorithms, in Foundations of Genetic Algorithms, Whitley, L.D., Ed., Morgan Kaufmann Publishers, Los Altos, 1993, pp. 239–255.

Wright, A.H., Genetic algorithms for real parameter optimization, in Foundations of Genetic Algorithms, Rawlins, G., Ed., Morgan Kaufmann Publishers, Los Altos, pp. 205–218, 1991.

Introduction to the control theory

Control theory is concerned with methods for the analysis and design of control systems, which typically consist of a controller and a plant. Examples of plants include electrical motors, machine tools, and airplanes. The controller gathers information about the plant through sensors, processes this data, and calculates a control signal u(t) to influence the plant's dynamic behavior. The primary objective of the control system is to keep the controlled variable y(t) within acceptable limits of the reference variable r(t), despite any disturbances δ(t) acting on the plant.

Designing a controller requires knowledge of the plant. Typically, a mathematical model of the plant's behavior is essential for effective controller design. This model can be derived by applying physical laws or through experimental identification methods.

A mathematical model derived from physical laws typically consists of nonlinear differential equations. By linearizing these equations around an operating point, a system of linear time-invariant differential equations with constant coefficients is obtained. Applying the Laplace Transformation to these equations, with initial values set to zero, yields a plant model in the form of a transfer function. This representation allows the application of linear-systems techniques, such as Nyquist or Bode analysis.

20 Robust Control Systems with Genetic Algorithms

When input-output data from the plant are available, a mathematical model can be derived through experimental identification. By assuming a linear time-invariant model with a defined structure, the model parameters can be estimated from the plant's measured data.

In the following, a linear time-invariant single-input-single-output (SISO) system is considered, i.e., a continuous time-invariant linear model represents the plant. The control system, illustrated in Figure 2.2, has one output variable, the controlled variable Y(s), together with the reference variable (set point) R(s), the input disturbance variable Du(s), and the output disturbance variable Dy(s).

Du(s)  Disturbance at the plant input
Dy(s)  Disturbance at the plant output
C(s)   Transfer function of the controller
G0(s)  Nominal transfer function of the plant

The error E(s) for the control system shown in Figure 2.2 is given by:

E(s) = [R(s) − G0(s)Du(s) − Dy(s)] / [1 + C(s)G0(s)]    (2.1)

The open-loop transfer function L(s) is defined as:

L(s) = C(s)G0(s)    (2.2)

The sensitivity function is defined as:

S(s) = 1 / (1 + L(s)) = 1 / (1 + C(s)G0(s))    (2.3)

Figure 2.2 Block diagram of the control system.

Chapter two: Optimal robust control 21

The complementary sensitivity function is defined as:

T(s) = L(s) / (1 + L(s)) = C(s)G0(s) / (1 + C(s)G0(s))    (2.4)

When designing a control system, the foremost consideration is stability. The stability of the closed-loop system can be assessed by analyzing the roots of the characteristic equation:

1 + C(s)G0(s) = 0    (2.5)

A control system is stable when all roots of its characteristic equation lie in the left half of the s-plane, i.e., have negative real parts; this is called absolute stability. The Hurwitz test allows absolute stability to be assessed from the coefficients of the characteristic equation, without calculating the exact positions of the roots.
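As a quick numeric illustration (not from the book), the root-location test can be carried out with `numpy.roots`; the loop below is a hypothetical example with C(s) = 2 and G0(s) = 1/(s² + 3s + 2), giving the characteristic polynomial s² + 3s + 4.

```python
import numpy as np

# Characteristic polynomial 1 + C(s)G0(s) = 0 for the hypothetical loop:
# s^2 + 3s + 2 + 2 = s^2 + 3s + 4
char_poly = [1.0, 3.0, 4.0]
roots = np.roots(char_poly)

# Stable iff every root has a negative real part (left half of the s-plane).
stable = np.all(roots.real < 0)
print(stable)
```

The Hurwitz test would reach the same conclusion from the coefficients alone; here all coefficients of the second-order polynomial are positive.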

A simplified model of the plant is generally used in design or analysis of a control system The model usually contains errors The causes for such errors are as follows:

• Deviation between the real parameters and the modeled parameters

• Changes of the plant parameters due to aging, environmental conditions, and operating-point dependencies

• Errors introduced by simplification of the model

The model error represents the model uncertainty, which must be considered in controller design so that the stability of the control system is still guaranteed; this property is referred to as robust stability. A controller designed with the model uncertainty taken into account is called a robust controller (Mueller, 1996).

When designing robust controllers, the model uncertainty of the plant must be taken into account. It can be categorized into two types: structured and nonstructured. Structured model uncertainty, also known as parametric model uncertainty, arises from changes in the plant's parameters and can be described using interval methods. Nonstructured model uncertainty typically stems from nonlinearities of the plant or changes of the operating point, and is best characterized using H∞-theory.

Classical controller design methods rely on a nominal model of the plant, with the robustness of the control loop assessed through phase margin and gain margin parameters (Djaferis, 1995) Robust design techniques are essential for ensuring system stability and performance under varying conditions.

Robust controllers based on H∞-theory employ two plant models: a nominal model and a description of the model uncertainty. The stability of the control loop must be guaranteed in the presence of model uncertainty. The H∞-norm serves as the tool for formulating the conditions for robust stability and disturbance rejection of the control system.

Norms of signals and functions

This subsection discusses the norms of signals and functions, which assess elements of a metric space through a real, positive number that quantifies their size (Mueller, 1996) Norms apply to vector-valued signals, real functions of time, and functions of the variable s in the Laplace Transformation In automatic control, the most commonly utilized norms are the Euclidean norm and the Maximum norm, also known as the Tschebyshev norm (Boyd and Barrat, 1991).

The L2-norm* of the signal v(t) is defined by (Doyle et al., 1992):

||v||2 = ( ∫−∞..∞ v²(t) dt )^(1/2)    (2.6)

and the L∞-norm is defined by (Doyle et al., 1992):

||v||∞ = sup_t |v(t)|    (2.7)

The L∞-norm of v(t) is the maximum amplitude of the signal v(t). The H2-norm** of the transfer function G(s)*** is defined by (Doyle et al., 1992):

||G||2 = ( (1/2π) ∫−∞..∞ |G(jw)|² dw )^(1/2)    (2.8)

and the H∞-norm is defined by (Doyle et al., 1992):

||G||∞ = sup_w |G(jw)|    (2.9)

The H∞-norm of G(s) represents the maximum amplitude on the Bode magnitude plot. Figure 2.3 shows the magnitude plot for the following transfer function:

* L is an abbreviation for the mathematician Lebesgue (Boyd and Barrat, 1991).

** H is an abbreviation for the mathematician Hardy (Boyd and Barrat, 1991).

*** j stands for the imaginary unit, i.e., j = √−1.


Norms are usually used in connection with optimization problems, i.e., minimization of the sensitivity function, e.g., min ||S||∞ (Zames, 1981).
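As a numeric sketch, the H∞-norm can be approximated as the peak of |G(jw)| over a dense frequency grid. The plant below is a hypothetical lightly damped second-order example (not the transfer function of Figure 2.3), for which the analytic peak is 1/(2ζ√(1 − ζ²)) with ζ = 0.1, i.e., about 5.025.

```python
import numpy as np

def G(s):
    # hypothetical plant: G(s) = 1 / (s^2 + 0.2 s + 1)
    return 1.0 / (s ** 2 + 0.2 * s + 1.0)

w = np.linspace(0.01, 10.0, 200000)      # frequency grid [rad/s]
hinf = np.max(np.abs(G(1j * w)))         # max amplitude on the Bode plot
print(hinf)
```

A grid estimate is a lower bound on the true H∞-norm; a fine grid around the resonance is needed for sharp peaks.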

To ensure stability in the optimization problem, the norm to be optimized must maintain a stable transfer function This article will provide two key definitions, assuming that the rational transfer function G(s) is stable.

Definition 2.1 (Mueller, 1996): The transfer function G(s) is called proper if |G(j∞)| < ∞.

Definition 2.2 (Mueller, 1996): The transfer function G(s) is called strictly proper if G(j∞) = 0.

A relationship between the L2-norm and the H∞-norm is given by the following theorem:

Theorem 2.1 (Vidyasagar, 1985): If ||v||2 < ∞ and Y(s) = G(s)V(s), where G(s) is a stable transfer function without poles on the imaginary axis, then:

||y||2 ≤ ||G||∞ ||v||2

A rigorous mathematical treatment of norms can be found in Vidyasagar (1985).

Description of model uncertainty

Robust stability of a control system can only be guaranteed if the plant uncertainty is taken into account. This requires identification methods for the plant model that incorporate uncertainty bounds in terms of the L∞-norm or the H∞-norm. Such techniques are known as robust identification methods.

Figure 2.3 Bode magnitude plot of G(s).

Two approaches to modeling the plant behavior are distinguished (Milanese et al., 1996): fitting a mathematical model to experimental data using either the L∞-norm or the H∞-norm. Here, explicit models for nonstructured uncertainty are of interest; the multiplicative and additive models are the most commonly used for this purpose.

Using the multiplicative model, the transfer function of the real (perturbed) plant G(s) is described by (Doyle et al., 1992):

G(s) = G0(s)[1 + Δ(s)Wm(s)]    (2.12)

where

Δ(s)   Disturbance (perturbation) acting on the plant
Wm(s)  Weighting function that represents an upper bound of the multiplicative uncertainty

Using the additive model, the transfer function of the real plant G(s) is described by (Doyle et al., 1992):

G(s) = G0(s) + Δ(s)Wa(s)    (2.13)

where Wa(s) is the weighting function that represents an upper bound of the additive uncertainty.

The uncertainty can be determined by means of experimental identification. Let the plant be described by Equation (2.12). This equation can be transformed to:

Δ(s)Wm(s) = G(s)/G0(s) − 1    (2.14)

Suppose that the perturbation acting on the plant is unknown but bounded, i.e., ||Δ(s)||∞ ≤ 1; then, with s = jw:

|G(jw)/G0(jw) − 1| ≤ |Wm(jw)|    (2.15)

Thus, the model uncertainty is represented by the deviation of the normalized plant from one (unity).

The goal of robust identification is to determine an upper bound Wm(s) for the model uncertainty. Assume the plant is stable and that its transfer function is determined from experimental data in the frequency domain, with the plant excited by a sinusoidal input signal.

In the experiment, for each frequency w_i (i = 1,…,m), the amplitude M_r(w_i) and phase φ_r(w_i) are measured over n trials. The nominal transfer function G0(s) of the plant is fitted to the pairs (M_i, φ_i), where M_i = |G0(jw_i)| and φ_i = arg G0(jw_i). A weighting function Wm(jw_i) is then chosen according to Equation (2.15): the calculated deviations of the normalized plant from unity yield an upper bound on the model uncertainty (Doyle et al., 1992).

Tan and Li (1996) propose a method for determining upper bounds for additive and multiplicative uncertainty By applying Fourier Transformation, input-output values can be converted from the time domain to the frequency domain Additionally, robust identification techniques are explored in the works of Milanese et al (1996) and Smith and Dahleh (1994).
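The deviation-based bound of Equation (2.15) can be sketched numerically. The nominal and "real" plants below are hypothetical, chosen only so the deviation is easy to check; the simplest valid bound is a constant at the worst-case deviation.

```python
import numpy as np

def G0(s):                      # hypothetical nominal plant
    return 1.0 / (s + 1.0)

def G(s):                       # hypothetical "real" (perturbed) plant
    return 1.2 / (s + 0.9)

w = np.logspace(-2, 2, 400)     # measurement frequencies [rad/s]
deviation = np.abs(G(1j * w) / G0(1j * w) - 1.0)

# |Wm(jw)| must dominate the deviation at every measured frequency;
# the crudest choice is a constant bound at the maximum deviation.
wm_bound = deviation.max()
print(wm_bound)
```

In practice, a low-order rational Wm(s) is fitted over the deviations instead of a constant, so the bound is not needlessly conservative at frequencies where the deviation is small.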

Robust stability and disturbance rejection

Condition for robust stability

Consider the control system shown in Figure 2.4. The controller is described by a transfer function with fixed structure, C(s, k). The vector k denotes the controller parameters:

k = [k1, k2, …, kl]^T

Figure 2.4 Control system composed of a controller with fixed structure, and a plant with model uncertainty.

The plant is represented by the multiplicative model of Equation (2.12). It is assumed that the model uncertainty Wm(s) is stable and bounded, and that no unstable poles of G0(s) are cancelled in forming G(s).

The condition for robust stability is stated as follows (Doyle et al., 1992):

If the nominal control system (Δ(s) = 0) is stable with the controller C(s, k), then the controller C(s, k) guarantees robust stability of the control system, if and only if the following condition is satisfied:

|| Wm(s) C(s, k)G0(s) / (1 + C(s, k)G0(s)) ||∞ < 1    (2.17)

This condition for robust stability is a sufficient condition. The robust stability of a control system can thus be evaluated by means of the H∞-norm.

In the case of the additive model of Equation (2.13), assuming the uncertainty Wa(s) is stable and bounded, the condition for robust stability is as follows (Doyle et al., 1992): if the nominal control system (Δ(s) = 0) is stable with the controller C(s, k), then the controller C(s, k) guarantees robust stability of the control system, if and only if the following condition is satisfied:

|| Wa(s) C(s, k) / (1 + C(s, k)G0(s)) ||∞ < 1    (2.18)

The multiplicative model is commonly used to describe plants, and an additive model can readily be converted into multiplicative form. In the following, the multiplicative model is used.

Applying the definition of the H∞-norm, Equation (2.9), to the condition for robust stability yields:

sup_w | Wm(jw) C(jw, k)G0(jw) / (1 + C(jw, k)G0(jw)) | < 1    (2.19)

Thus, the condition for robust stability in the frequency domain is represented by:

max_w ( α(w, k) )^(1/2) < 1,  with  α(w, k) = |Wm(jw) C(jw, k)G0(jw)|² / |1 + C(jw, k)G0(jw)|²    (2.20)

The function α(w, k) in Equation (2.20) can also be expressed in the following form:

α(w, k) = αz(w, k) / αn(w, k)    (2.21)

Both polynomials αz(w, k) and αn(w, k) contain only even powers of w, with coefficients αzj(k) and αni(k) that are functions of k. For robust stability, the degree p of αz(w, k) must be less than the degree q of αn(w, k) (p < q), so that α(w, k) remains finite for w ≥ 0 and approaches zero as w → ∞. Consequently, the product C(s, k)G0(s) must be a strictly proper rational function, and Wm(s) a proper rational function.
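The frequency-domain robust stability test can be evaluated numerically on a grid: the quantity |Wm·C·G0/(1 + C·G0)| equals α(w, k)^(1/2), and its maximum must stay below 1. The plant, P controller, and uncertainty bound below are hypothetical examples, not from the book.

```python
import numpy as np

def G0(s):
    return 1.0 / (s * (s + 1.0))          # hypothetical nominal plant

def C(s, kp=0.5):
    return kp                              # fixed-structure (P) controller

def Wm(s):
    return 0.5 * s / (s + 10.0)            # hypothetical multiplicative bound

w = np.logspace(-3, 3, 5000)
s = 1j * w
L = C(s) * G0(s)
alpha_sqrt = np.abs(Wm(s) * L / (1.0 + L))  # = alpha(w, k)^(1/2)
robustly_stable = np.max(alpha_sqrt) < 1.0
print(np.max(alpha_sqrt), robustly_stable)
```

The test presumes the nominal closed loop is already stable; the grid check only verifies the H∞-norm condition on top of nominal stability.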

Condition for disturbance rejection

Classical controller design methods (Åström and Hägglund, 1994) typically assume disturbances of deterministic form, such as step or sinusoidal signals. In contrast, when the H∞-norm is used, the disturbance signal can be arbitrary, provided its amplitude is bounded. Next, the condition for disturbance rejection is described.

The control system illustrated in Figure 2.5 includes a disturbance Dy(s) acting on the plant output. The controller has a fixed structure with rational transfer function C(s, k), and the plant is described by its nominal transfer function G0(s).

Figure 2.5 Control system with disturbance acting on the plant output.


Let the reference signal R(s) = 0; then the relation of the controlled variable Y(s) to the disturbance at the output Dy(s) is:

Y(s) = Dy(s) / (1 + C(s, k)G0(s))    (2.23)

Applying Theorem 2.1 to Equation (2.23) yields:

||y||2 ≤ || 1 / (1 + C(s, k)G0(s)) ||∞ ||dy||2    (2.24)

For effective disturbance rejection, the amplitude of the output variable y(t) caused by the disturbance dy(t) at the plant output must remain below a predetermined upper bound γ:

|| 1 / (1 + C(s, k)G0(s)) ||∞ < γ    (2.25)

According to Chen et al. (1995), introducing a weighting function Wd(s), a low-pass filter, into Equation (2.25) yields:

|| Wd(s) / (1 + C(s, k)G0(s)) ||∞ < γ    (2.26)

This condition for disturbance rejection is a sufficient condition. The disturbance rejection of a control system can thus be evaluated by using the H∞-norm.

Applying the definition of the H∞-norm, Equation (2.9), to the condition for disturbance rejection yields:

sup_w | Wd(jw) / (1 + C(jw, k)G0(jw)) | < γ    (2.27)

So the condition for disturbance rejection in the frequency domain is represented by:

max_w ( β(w, k) )^(1/2) < γ,  with  β(w, k) = |Wd(jw)|² / |1 + C(jw, k)G0(jw)|²    (2.28)

The function β(w, k) in Equation (2.28) can also be expressed in the following form:

β(w, k) = βz(w, k) / βn(w, k)    (2.29)

Both polynomials βz(w, k) and βn(w, k) contain only even powers of w, with coefficients βzj(k) and βni(k) that are functions of k. For disturbance rejection, the degree p of βz(w, k) must be less than the degree q of βn(w, k) (p < q), so that β(w, k) remains finite for w ≥ 0 and approaches zero as w → ∞. Consequently, the product C(s, k)G0(s) must be a strictly proper rational function, and Wd(s) a proper rational function.
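The disturbance rejection test mirrors the robust stability check: |Wd/(1 + C·G0)| equals β(w, k)^(1/2), and its maximum over frequency must stay below γ. The plant, controller gain, and weighting function below are hypothetical.

```python
import numpy as np

def G0(s):
    return 1.0 / (s + 1.0)             # hypothetical nominal plant

def C(s, kp=10.0):
    return kp                          # P controller with gain kp

def Wd(s):
    return 1.0 / (s + 1.0)             # low-pass weighting function

gamma = 0.2
w = np.logspace(-3, 3, 5000)
s = 1j * w
beta_sqrt = np.abs(Wd(s) / (1.0 + C(s) * G0(s)))   # = beta(w, k)^(1/2)
print(np.max(beta_sqrt), np.max(beta_sqrt) < gamma)
```

For this example Wd(s)/(1 + C(s)G0(s)) reduces to 1/(s + 11), so the worst case occurs at low frequency and equals 1/11, well below the chosen γ.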

Controller design

Optimal controller design

The control system depicted in Figure 2.6 consists of a controller with transfer function C(s, k) and a plant with nominal transfer function G0(s). Good tracking performance of the control system is required, and the design aims to optimize it.

The performance index, the integral of the squared error (ISE) J, is given by:

J = ∫0..∞ e²(t) dt    (2.31)

It can be described in the frequency domain by means of the Parseval Theorem (Jury, 1974):

J = (1/2πj) ∫−j∞..j∞ E(s)E(−s) ds    (2.32)

The error E(s) for the control system shown in Figure 2.6 is given by:

E(s) = R(s) / (1 + C(s, k)G0(s))    (2.33)

The reference signal (set point) is a unit step function given by:

R(s) = 1/s    (2.34)

The error E(s) can then be expressed as a rational function:

E(s) = D(s)/A(s) = (d_m s^m + … + d_1 s + d_0) / (a_n s^n + … + a_1 s + a_0)    (2.35)

For the squared error J in Equation (2.32) to have a finite value, the degree m of the polynomial D(s) must be less than the degree n of the polynomial A(s) Additionally, it is permissible for n to be zero.

Introducing the error E(s) from Equation (2.35) into Equation (2.32) results in the following:

J_n = (1/2πj) ∫−j∞..j∞ [D(s)D(−s)] / [A(s)A(−s)] ds    (2.36)

Equation (2.36) can be solved analytically by means of the Residue Theorem.

A disadvantage of this performance index is that its minimization can result in a tracking behavior with small overshoot but a long settling time.

An improvement of the tracking behavior can be obtained by using the integral of the time-weighted squared error (ITSE) (Westcott, 1954), which is given by:

I = ∫0..∞ t e²(t) dt    (2.37)

Using the Parseval Theorem (Jury, 1974):

∫0..∞ f(t)h(t) dt = (1/2πj) ∫−j∞..j∞ F(s)H(−s) ds    (2.38)

the integral in Equation (2.37) can be solved analytically in the frequency domain by setting h(t) = e(t) and f(t) = t·e(t). The Laplace Transformation gives H(s) = E(s), and the Differentiation Theorem gives F(s) = −dE(s)/ds. The time-weighted squared error in the frequency domain is thus:

I = −(1/2πj) ∫−j∞..j∞ (dE(s)/ds) E(−s) ds    (2.39)

Introducing the error E(s) from Equation (2.35) into Equation (2.39) results in the following:

I_n = −(1/2πj) ∫−j∞..j∞ d/ds[D(s)/A(s)] · [D(−s)/A(−s)] ds    (2.40)

The Residue Theorem can be used to solve Equation (2.40). Closed-form solutions for J_n from Equation (2.36) and I_n from Equation (2.40), in terms of the coefficients a_i (i = 0,…,n) and d_j (j = 0,…,m), are given in Schneider (1966) and Westcott (1954). An example for n = 5 is presented in the next chapter, Equations (3.9) and (3.10). These expressions show that both the squared error and the time-weighted squared error depend nonlinearly on the numerator and denominator coefficients of Equation (2.35).
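The closed-form values can be cross-checked numerically. For a hypothetical first-order error, e(t) = e^(−at) with E(s) = 1/(s + a), direct integration gives ISE J = 1/(2a) and ITSE I = 1/(4a²); the sketch below verifies this by trapezoidal integration.

```python
import numpy as np

a = 2.0
t = np.linspace(0.0, 20.0, 400001)
e = np.exp(-a * t)                   # error signal e(t) = exp(-a t)
dt = t[1] - t[0]

def trap(y):
    # trapezoidal rule on the uniform grid
    return float(np.sum((y[:-1] + y[1:]) * dt / 2.0))

J = trap(e ** 2)                     # ISE,  Equation (2.31): expect 1/(2a) = 0.25
I = trap(t * e ** 2)                 # ITSE, Equation (2.37): expect 1/(4a^2) = 0.0625
print(J, I)
```

The time-weighting in the ITSE discounts the large initial error and penalizes slowly decaying tails, which is why its minimization tends to shorten the settling time.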

For stable control systems, the values of J_n and I_n are positive; for unstable systems, they cannot be computed. The coefficients a_i (i = 0,…,n) and d_j (j = 0,…,m) depend on the controller parameters, so the controller parameters can be optimized by minimizing J_n(k) or I_n(k).

In the following, formulations for the design of optimal robust controllers and optimal disturbance rejection controllers with fixed structure are presented. The designs minimize a performance index, the ISE or the ITSE, subject to the robust stability or disturbance rejection constraint.

Optimal robust controller design

In designing optimal robust controllers with fixed structure, both the tracking behavior and the robust stability are considered. The controller design is formulated as a constrained optimization problem:

min_k J_n(k)  subject to  max_w ( α(w, k) )^(1/2) < 1

or

min_k I_n(k)  subject to  max_w ( α(w, k) )^(1/2) < 1

The optimization problem consists of minimizing the performance index, the ISE J_n(k) or the ITSE I_n(k), subject to the robust stability constraint. The goal is to find the vector of controller parameters k* with the smallest performance index J_n(k*) or I_n(k*) that satisfies the robust stability condition max_w (α(w, k*))^(1/2) < 1.

Optimal disturbance rejection controller design

In designing optimal disturbance rejection controllers with fixed structure, both the tracking behavior and the disturbance rejection are considered. The controller design is formulated as a constrained optimization problem:

min_k J_n(k)  subject to  max_w ( β(w, k) )^(1/2) < γ

or

min_k I_n(k)  subject to  max_w ( β(w, k) )^(1/2) < γ

The optimization problem consists of minimizing the performance index, the ISE J_n(k) or the ITSE I_n(k), subject to the disturbance rejection constraint. The goal is to find the vector of controller parameters k* with the smallest performance index J_n(k*) or I_n(k*) that satisfies the disturbance rejection condition max_w (β(w, k*))^(1/2) < γ.

The design of optimal robust controllers and optimal disturbance rejection controllers with fixed structure thus requires solving a constrained nonlinear optimization problem. This is addressed in the next section, which also introduces the necessary definitions from optimization theory.

Optimization

The optimization problem

In general, the optimization problem is defined* as follows (Dixon and Szegő, 1978; Michalewicz, 1996):

min_x f(x)  subject to  g_j(x) ≤ 0, j = 1,…,m;  h_i(x) = 0, i = m+1,…,r

* Restricting attention to minimization problems implies no loss of generality, since a maximization problem can be transformed into a minimization problem by max f(x) = −min[−f(x)].

The function f(x) to be optimized is called the objective function, for which the following holds:

f: S ⊆ ℜ^n → ℜ

The objective function f(x) assigns a scalar value to each vector x. The vector x consists of n variables:

x = [x1, x2, …, xn]^T

The set S ⊆ ℜ^n designates the search space; it is an n-dimensional parallelepiped in ℜ^n defined by the lower and upper bounds of the variables:

x_i^min ≤ x_i ≤ x_i^max,  i = 1,…,n

The feasible set Z ⊆ S is defined by the constraints imposed on the variables. A vector x = [x1, x2, …, xn]^T represents a point in ℜ^n. A point x that lies in S and satisfies the constraints is called a feasible point. When no constraints are present, the optimization problem is unconstrained and Z = S.

A feasible point x* is called a local minimum point, and f(x*) a local minimum, if there exists a real number ε > 0 such that f(x*) ≤ f(x) for all x ∈ Z with ||x − x*|| < ε.

A feasible point x* is called a global minimum point, and its function value f(x*) a global minimum, for the optimization problem if and only if f(x*) ≤ f(x) for all x ∈ Z.

An objective function is called unimodal if it possesses only one local minimum and multimodal if it has multiple local minima. Global optimization methods aim to find the optimal value of the objective function and the corresponding optimal variable values. The one-dimensional multimodal objective function in Figure 2.7, for instance, has two local minima in addition to the global minimum within the search space. Since controller design involves solving an optimization problem with constraints, methods for handling constraints are briefly discussed next.
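A small numeric illustration of multimodality (a hypothetical function, not the one in Figure 2.7): f(x) = sin(3x) + 0.1x² has several local minima, so a local descent started in the wrong basin can miss the global one, while a coarse grid search over the whole search space S = [−5, 5] locates it.

```python
import numpy as np

def f(x):
    # hypothetical multimodal objective function
    return np.sin(3.0 * x) + 0.1 * x ** 2

x = np.linspace(-5.0, 5.0, 100001)       # search space S = [-5, 5]
x_star = x[np.argmin(f(x))]              # global minimum point on the grid
print(x_star, f(x_star))
```

Grid search is only practical in low dimensions; this is precisely the situation where global methods such as genetic algorithms become attractive.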

Constraint handling

The search space S generally consists of two subspaces: the feasible subspace Z and the unfeasible subspace U. Figure 2.8 shows a possible search space (Michalewicz, 1995). In this text, no assumption is made about the search space.

In unconstrained optimization, all individuals are deemed feasible, whereas in constrained optimization, it is essential to distinguish between feasible and infeasible individuals The primary objective is to identify the feasible global optimum There is no universally applicable method for managing constraints; however, penalty functions are among the most commonly employed techniques in the field (Homaifar et al., 1994; Joines and Houck, 1994; Michalewicz, 1995).

A constrained optimization problem can be converted into an unconstrained one by adding a penalty function that accounts for the constraint violations: the penalty P(x) is zero when all constraints are satisfied and positive otherwise. The resulting problem is min_x [f(x) + P(x)]. Typically, the penalty function is defined as follows (Kim and Myung, 1997; Michalewicz, 1995):

P(x) = M_s [ Σ_{j=1}^{m} ( g_j^+(x) )² + Σ_{i=m+1}^{r} ( h_i(x) )² ]

Figure 2.7 Example of a one-dimensional multimodal function.

Figure 2.8 Search space.


where M_s is a positive penalty parameter, g_j^+(x) = max(0, g_j(x)) for 1 ≤ j ≤ m, and h_i(x) are the equality constraints for m+1 ≤ i ≤ r.

When penalty functions are used to handle constraints, unfeasible individuals have their fitness reduced. The overall fitness is determined by both the objective function f(x) and the penalty function.

The penalty function P(x) serves to quantify the extent of constraint violations in optimization problems It imposes penalties on unfeasible individuals, effectively guiding the optimization process towards the feasible subspace.

The fitness function F(x) is defined as follows:

F(x) = f(x) + P(x), if f(x) exists;  F(x) = M_t, if f(x) does not exist.

The penalty is zero when an individual lies within the feasible space Z, i.e., no constraint is violated. If an individual lies in the search space S but violates one or more constraints, the penalty function P(x) is added to the objective function, reducing the fitness. If the objective function cannot be computed at all, the fitness is set to a penalty parameter M_t.
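The penalty approach can be sketched on a hypothetical one-dimensional problem: minimize f(x) = x² subject to g(x) = 1 − x ≤ 0 (i.e., x ≥ 1), whose constrained optimum is x* = 1 with f(x*) = 1. A simple seeded random search over the penalized fitness stands in for the GA.

```python
import random

M_s = 1000.0                         # positive penalty parameter

def f(x):
    return x * x                     # objective function

def P(x):
    g_plus = max(0.0, 1.0 - x)       # g+(x) = max(0, g(x))
    return M_s * g_plus ** 2

def F(x):
    return f(x) + P(x)               # fitness: objective plus penalty

random.seed(0)
best = min((random.uniform(-5.0, 5.0) for _ in range(20000)), key=F)
print(best, F(best))
```

Because the penalty grows quadratically with the violation, unfeasible points just inside the boundary are only mildly penalized; a large M_s pushes the penalized optimum close to the true constrained optimum.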

Conclusions

This chapter presents the formulation of optimal robust controllers and optimal disturbance rejection controllers with fixed structures, introducing signal and function norms It discusses two primary models for plant uncertainty: additive and multiplicative Utilizing the H ∞ -norm, conditions for robust stability and disturbance rejection are established The design process is framed as a constrained optimization problem, focusing on minimizing a performance index—either the integral of the squared error or the integral of the time-weighted squared error—while adhering to robust stability or disturbance rejection constraints Additionally, relevant concepts from optimization theory are explained to support the controller design process.


Ackermann, J., Robust Control: Systems with Uncertain Physical Parameters, Springer-Verlag, London, 1993.

Åström, K.J. and Hägglund, T., PID Controllers: Theory, Design, and Tuning, Instrument Society of America, Research Triangle Park, 1994.

Bernstein, D.S. and Haddad, W.M., LQG control with an H∞ performance bound: A Riccati equation approach, IEEE Transactions on Automatic Control, 34, 3, 293–305, 1989.

Bhattacharyya, S.P., Chapellat, H., and Keel, L.H., Robust Control: The Parametric Approach, Prentice Hall, Englewood Cliffs, New Jersey, 1995.

Boyd, S.P. and Barratt, C.H., Linear Controller Design: Limits of Performance, Prentice Hall, Englewood Cliffs, New Jersey, 1991.

Chen, B.-S., Cheng, Y.-M., and Lee, C.-H., A genetic approach to mixed H 2 /H ∞ optimal PID control, IEEE Control Systems Magazine, 15, 5, 51–60, 1995.

Dixon, L.C.W. and Szegő, G.P., Towards Global Optimization, North-Holland Publishing Company, Amsterdam, 1978.

Djaferis, T.E., Robust Control Design: A Polynomial Approach, Kluwer Academic, Dordrecht, 1995.

Doyle, J.C., Francis, B.A., and Tannenbaum, A.R., Feedback Control Theory , Macmillan Publishing, New York, 1992.

Doyle, J.C. et al., Mixed H2 and H∞ performance objectives II: Optimal control, IEEE Transactions on Automatic Control, 39, 8, 1575–1587, 1994.

Homaifar, A., Qi, C.X., and Lai, S.H., Constrained optimization via genetic algo- rithms, Simulation, 62, 4, 242–254, 1994.

Joines, J.A. and Houck, C.R., On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs, Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Orlando, pp. 579–584, 1994.

Jury, E.I., Inners and Stability of Dynamic Systems, John Wiley & Sons, New York, 1974.

Khargonekar, P.P. and Rotea, M.A., Mixed H2/H∞ control: A convex optimization approach, IEEE Transactions on Automatic Control, 36, 7, 824–837, 1991.

Kim, J.-H. and Myung, H., Evolutionary programming techniques for constrained optimization problems, IEEE Transactions on Evolutionary Computation, 1, 2, 129–140, 1997.

Kwakernaak, H., Robust control and H ∞ optimization — A tutorial paper, Automatica, 29, 2, 255–273, 1993.

Michalewicz, Z., A survey of constraint handling techniques in evolutionary methods, Proceedings of the 4th Annual Conference on Evolutionary Programming, MIT Press, Cambridge, pp. 135–155, 1995.

Michalewicz, Z., Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, 1996.

Milanese, M., Norton, J., Piet-Lahanier, H., and Walter, E., Bounding Approaches to System Identification, Plenum Publishing, New York, 1996.

Mueller, K., Entwurf robuster Regelungen, Teubner Verlag, Stuttgart, 1996.

Schneider, F., Geschlossene Formel zur Berechnung der quadratischen und der zeitbeschwerten quadratischen Regelfläche für kontinuierliche und diskrete Systeme, Regelungstechnik, 14, 4, 159–166, 1966.

Smith, R.S. and Dahleh, M., The Modeling of Uncertainty in Control Systems, Lecture Notes in Control and Information Sciences, Vol. 192, Springer-Verlag, Berlin, 1994.

Sznaier, M., An exact solution to general SISO mixed H 2 /H ∞ problems via convex optimization, IEEE Transactions on Automatic Control, 39, 12, 2511–2517, 1994.

Tan, K.C. and Li, Y., L ∞ identification and model reduction using a learning genetic algorithm, Proceedings of the UKACC International Conference on Control, 1996.

Vidyasagar, M., Control System Design: A Factorization Approach, MIT Press, Cam- bridge, 1985.

Westcott, J.H., The minimum-moment-of-error-squared criterion: A new performance criterion for servo mechanisms, Proceedings of the IEE, London, 1954.

Zames, G., Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximate inverses, IEEE Transactions on Automatic Control, 26, 2, 301–320, 1981.


3.1 Introduction to controller design using genetic algorithms

The previous chapter formulated the controller design task as a constrained optimization problem. Solving this problem with classical optimization methods is difficult, because those methods rely on assumptions such as differentiability and convexity. To address these issues, we propose the use of genetic algorithms (GAs), which are well suited to global optimization and do not require such assumptions. GAs have attracted considerable attention in automatic control, with successful applications reported in numerous studies (Caponetto et al., 1994; Hunt, 1992; Patton and Liu, 1994; Porter II and Passino, 1994; Wang and Kwok, 1994; Krohling, 1997a, 1997b, 1997c, 1998; Krohling et al., 1997a; Krohling and Rey, 2001; Man et al., 1996, 1997; Onnen et al., 1997); a more comprehensive list is available in Alander (1995).

In this chapter, we show how GAs can be used to design optimal robust and optimal disturbance rejection controllers with fixed structure formulated as a constrained optimization problem.

Design of optimal robust controller with fixed structure

Design method

The method for design of optimal robust controller with fixed structure can be summarized as follows (Krohling, 1998):

• Given the plant with transfer function G 0 (s), the controller with fixed structure and transfer function C(s, k ), and the weighting function W m (s), determine the error signal E(s) and the robust stability constraint α(w, k ).

• Specify the lower and upper bounds of the controller parameters.

• Set up GA_1 and GA_2 parameters: crossover probability, mutation probability, population size, and maximum number of generations.

It is convenient to describe the method in the form of an algorithm.

The penalty value p( k i ) used in Step 3 below is assigned as follows:

p( k i ) = M t , if the closed-loop system with k i is not stable;
p( k i ) = M s max (α(w, k i )), if max (α(w, k i ))^0.5 ≥ 1;
p( k i ) = 0, if max (α(w, k i ))^0.5 < 1.


Step 1: Initialize the populations of GA_1, k i (i = 1,…,μ1), and GA_2, w j (j = 1,…,μ2), and set the generation number of GA_1 to g 1 = 1, where g 1 denotes the number of generations for GA_1.

Step 2: For each individual k i of the GA_1 population, calculate the maximum value of α(w, k i ) using GA_2. If no individual of GA_1 satisfies the constraint max (α(w, k i ))^0.5 < 1, then a feasible solution is assumed to be nonexistent, and the algorithm stops. In this case, a new controller structure has to be assumed.

Step 3: For each individual k i of GA_1, the penalty value is calculated by using Equation (3.4), and the corresponding fitness value is then calculated.

Step 4: Select individuals using tournament selection, and apply genetic operators (crossover and mutation) to the individuals of GA_1.

Step 5: For each individual k i of GA_1, calculate max (α(w, k i ))^0.5 using GA_2 as follows:

Substep a: Initialize the genes of each individual w j (j = 1,…,μ2) in the population and set the generation number to g 2 = 1, where g 2 indicates the number of generations for GA_2.

Substep b: Evaluate the fitness of each individual.

Substep c: Select individuals using tournament selection, and apply genetic operators (crossover and mutation).

Substep d: If the maximum number of generations of GA_2 is reached, stop and return the fitness of the best individual, max (α(w, k i )), to GA_1; otherwise, set g 2 = g 2 + 1, and go to Substep b.

Step 6: If the maximum number of generations of GA_1 is reached, stop.

Otherwise, set g 1 = g 1 + 1 and go to Step 3.
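Equation (3.4) itself is not reproduced in the text. Based on the penalty scheme of Step 3 and on the penalty constants given later in the design example (M t = 1,000,000, M s = 100), the assignment can be sketched as follows; this is a hypothetical sketch, not the book's exact formula:

```python
def penalty(stable, alpha_max, M_t=1_000_000, M_s=100):
    """Penalty for a candidate controller k_i.

    stable    -- True if the closed loop with k_i is stable
    alpha_max -- max over w of alpha(w, k_i), as returned by GA_2
    """
    if not stable:
        return M_t                  # unstable candidate: prohibitive penalty
    if alpha_max ** 0.5 >= 1.0:
        return M_s * alpha_max      # robust stability constraint violated
    return 0.0                      # feasible candidate: no penalty

print(penalty(False, 0.0))   # 1000000
print(penalty(True, 4.0))    # 400.0 (max alpha^0.5 = 2 >= 1)
print(penalty(True, 0.09))   # 0.0   (max alpha^0.5 = 0.3 < 1)
```

The prohibitive constant for unstable candidates effectively removes them from the search, while the graded M s term steers constraint violators back toward the feasible region.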

The optimal solution of the optimization problem is represented by the best individual k *, i.e., the vector of controller parameters. First, the best individual is checked for feasibility by verifying the robust stability condition max (α(w, k *))^0.5 < 1. If this condition is not satisfied, the solution is unfeasible and the optimization process must be repeated. Once a feasible solution is obtained, the controller is tested: in simulation studies, the control system is stimulated with a unit step input, and the performance is evaluated for both the nominal plant and the plant with uncertainty.
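The nested structure of the algorithm — GA_1 searching the controller parameters while GA_2 evaluates the constraint for each candidate — can be sketched on a toy problem. Everything below (J, alpha, the bounds, and the GA settings) is a hypothetical stand-in, not the book's actual design example:

```python
import random

# Toy stand-ins: J(k) plays the role of the ISE index, alpha(w, k) the
# robust-stability integrand. The sup over w of alpha is 0.25*k[1]**2
# (attained at w = 0), so max(alpha)**0.5 < 1 amounts to k[1] < 2.
def J(k):
    return (k[0] - 1.0) ** 2 + (k[1] - 2.0) ** 2

def alpha(w, k):
    return 0.25 * k[1] ** 2 / (1.0 + w ** 2)

M_S = 100  # penalty constant for constraint violations

def evolve(pop, fitness, lo, hi, pc=0.35, pm=0.02):
    """One generation: binary tournaments, arithmetic crossover, mutation."""
    nxt = []
    while len(nxt) < len(pop):
        p1 = max(random.sample(pop, 2), key=fitness)
        p2 = max(random.sample(pop, 2), key=fitness)
        a = random.random()
        child = ([a * x + (1 - a) * y for x, y in zip(p1, p2)]
                 if random.random() < pc else list(p1))
        nxt.append([random.uniform(lo, hi) if random.random() < pm else g
                    for g in child])
    return nxt

def max_alpha(k, gens=10, size=8):
    """GA_2: maximize alpha(w, k) over the frequency w in [0, 10]."""
    # Seed the endpoints so a supremum on the boundary cannot be missed.
    pop = [[0.0], [10.0]] + [[random.uniform(0.0, 10.0)]
                             for _ in range(size - 2)]
    best = max(alpha(ind[0], k) for ind in pop)
    for _ in range(gens):
        pop = evolve(pop, lambda ind: alpha(ind[0], k), 0.0, 10.0)
        best = max(best, max(alpha(ind[0], k) for ind in pop))
    return best

def ga1(gens=25, size=12):
    """GA_1: minimize J(k) subject to max_w alpha(w, k)**0.5 < 1."""
    def fitness(k):
        a = max_alpha(k)
        pen = M_S * a if a ** 0.5 >= 1.0 else 0.0  # penalty for violation
        return -(J(k) + pen)                       # maximizing = minimizing
    pop = [[random.uniform(0.0, 4.0), random.uniform(0.0, 4.0)]
           for _ in range(size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        pop = evolve(pop, fitness, 0.0, 4.0)
        best = max(pop + [best], key=fitness)      # elitist best tracking
    return best

random.seed(0)
k_star = ga1()   # best feasible parameter vector found
```

For this toy problem the unconstrained optimum of J lies exactly on the constraint boundary, so the returned k_star should approach k[0] ≈ 1 with k[1] just below 2, mirroring how the penalty term keeps GA_1 inside the feasible region.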

Design example

To illustrate the method, a detailed design example is presented Consider the control system shown in Figure 3.2.

The plant is described by the following transfer function (Lo Bianco and Piazzi, 1997):

The controller C(s, k ) is described by the following transfer function (Lo Bianco and Piazzi, 1997):

The vector k of the controller parameters is given by k = [k 1 , k 2 , k 3 , k 4 , k 5 ] T . The multiplicative uncertainty W m (s) is given by (Lo Bianco and Piazzi, 1997):

The error signal E(s), assuming the input signal is a unit step, is evaluated as follows:

(3.8)

The squared error J 5 ( k ) is given by (Schneider, 1966; Westcott, 1954):

Figure 3.2 Control system with uncertain plant.

The time-weighted squared error I 5 ( k ) is given by (Schneider, 1966; Westcott, 1954):
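The closed-form expressions for the two indices are not reproduced here. Numerically, both can be approximated from a simulated error signal; a sketch with a hypothetical error e(t) = e^(-t), for which the ISE is exactly 1/2 and the ITSE exactly 1/4:

```python
import math

def ise_itse(e, t_end=20.0, dt=1e-3):
    """Approximate ISE = integral of e(t)^2 dt and ITSE = integral of
    t * e(t)^2 dt over [0, t_end] with the trapezoidal rule."""
    n = int(t_end / dt)
    J = I = 0.0
    for i in range(n):
        t0, t1 = i * dt, (i + 1) * dt
        f0, f1 = e(t0) ** 2, e(t1) ** 2
        J += 0.5 * (f0 + f1) * dt          # ISE contribution
        I += 0.5 * (t0 * f0 + t1 * f1) * dt  # ITSE contribution
    return J, I

# Hypothetical error signal e(t) = exp(-t): ISE = 0.5, ITSE = 0.25.
J, I = ise_itse(lambda t: math.exp(-t))
```

The ITSE weights late errors more heavily than the ISE, which is why the ITSE-tuned controllers in this chapter trade a larger initial overshoot for faster settling.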

The robust stability constraint (α(w, k ))^0.5 is calculated using the software tool Mathematica (Wolfram, 1988):

The controller parameters were searched within the following bounds (Lo Bianco and Piazzi, 1997):

The GA method has been applied to the controller design. The GA parameters were kept constant for all the simulations: crossover probabilities p c1 = p c2 = 0.35, mutation probabilities p m1 = p m2 = 0.02, population sizes of GA_1 and GA_2 of 100 and 50, respectively, penalty constant M t = 1,000,000, penalty constant M s = 100, maximum number of generations for GA_1 g 1max = 100, and maximum number of generations for GA_2 g 2max = 50. The values for crossover probability and mutation probability follow standard implementations in the literature (e.g., Michalewicz, 1996).

The convergence of the minimization of the ISE performance index J 5 ( k ), subject to the robust stability constraint max (α(w, k ))^0.5 < 1, is shown in Figure 3.3.

Figure 3.3 Convergence of the minimization of the ISE performance index J 5 ( k ) subject to the robust stability constraint max (α(w, k ))^0.5 < 1.

The minimum value J 5 ( k *) = 0.1507 is achieved in 44 generations, and the corresponding best individual, i.e., the vector of controller parameters, yields k * = [1000.0, 14.518, 14.519, 1.0, 0.542] T .

Figure 3.4 shows the calculation of the maximum value of the robust stability constraint for the optimal vector of controller parameters k *. The maximum value obtained is (α(w, k *))^0.5 = 0.313. Since this value is less than 1, k * is a feasible solution, i.e., the robust stability condition is satisfied.

The convergence of the minimization of the ITSE performance index I 5 ( k ), subject to the robust stability constraint max (α(w, k ))^0.5 < 1, using GA_1 for the first 50 generations is shown in Figure 3.5. The minimum value I 5 ( k *) = 0.0252 is achieved in 35 generations, and the corresponding best individual, i.e., the vector of controller parameters, yields k * = [1000.0, 15.328, 15.073, 1.0, 0.893] T .

Figure 3.4 Calculation of the maximum value of the robust stability constraint max (α(w, k *))^0.5 .

Figure 3.5 Convergence of the minimization of the ITSE performance index I 5 ( k ) subject to the robust stability constraint max (α(w, k ))^0.5 < 1.

Figure 3.6 shows the calculation of the maximum value of the robust stability constraint for the optimal vector of controller parameters k *. The maximum value obtained is (α(w, k *))^0.5 = 0.341. Since this value is less than 1, k * is a feasible solution, i.e., the robust stability condition is satisfied.

The performance of the control system shown in Figure 3.2 is evaluated by means of closed-loop step response tests for the nominal plant and for the plant with uncertainty. To assess the controller tuning, a unit step input is applied, and a tolerance of ±2% around the set-point amplitude of the controlled variable is considered. The tracking behavior of the control system determined by the minimization of the ISE performance index J 5 ( k ) is shown in Figure 3.7 for the nominal plant G 0 (s) and for the plant with uncertainty G(s).

Figure 3.6 Calculation of the maximum value of the robust stability constraint max (α(w, k *))^0.5 .

Figure 3.7 Unit step response for the plant with uncertainty [determined by the minimization of the ISE performance index J 5 ( k )]

Figure 3.8 shows the tracking behavior of the control system determined by the minimization of the ITSE performance index I 5 ( k ) for the nominal plant G 0 (s) and for the plant with uncertainty G(s).

The closed-loop step response of the control system designed by minimizing the ISE performance index presents an overshoot of about 17%, with a settling time of about 5 s within a tolerance of ±2% around the set-point amplitude. As shown in Figure 3.7, the closed-loop step response for the plant with uncertainty differs only negligibly from that of the nominal plant; the influence of the uncertainty on the plant output is small, which underlines the effectiveness of the design method.

The closed-loop step response of the control system designed by minimizing the ITSE performance index presents an overshoot of about 22%, with a settling time of about 2.5 s within a tolerance of ±2% around the set-point amplitude. As shown in Figure 3.8, the step response for the plant with uncertainty is very close to that of the nominal plant, indicating a small influence of the uncertainty on the plant output. These results demonstrate the suitability of the proposed method for the design of an optimal robust controller with fixed structure. If not optimal, at least a good solution was found.

Design of optimal disturbance rejection controller with fixed structure

Design method

The method for design of optimal disturbance rejection controller with fixed structure can be summarized as follows (Krohling and Rey, 2001):

• Given the plant with transfer function G 0 (s), the controller with fixed structure and transfer function C(s, k ), and the weighting function W d (s), determine the error signal E(s) and the disturbance rejection constraint β(w, k ).

• Specify the lower and upper bounds of the controller parameters.

• Set up GA_1 and GA_2 parameters: crossover probability, mutation probability, population size, and maximum number of generations.

It is convenient to describe the method in the form of an algorithm.

The penalty value p( k i ) used in Step 3 below is assigned as follows:

p( k i ) = M t , if the closed-loop system with k i is not stable;
p( k i ) = M s max (β(w, k i )), if max (β(w, k i ))^0.5 ≥ γ;
p( k i ) = 0, if max (β(w, k i ))^0.5 < γ.

Step 1: Initialize the populations of GA_1, k i (i = 1,…,μ1), and GA_2, w j (j = 1,…,μ2), and set the generation number of GA_1 to g 1 = 1, where g 1 denotes the number of generations for GA_1.

Step 2: For each individual k i of the GA_1 population, calculate the maximum value of β(w, k i ) using GA_2. If no individual of GA_1 satisfies the constraint max (β(w, k i ))^0.5 < γ, then a feasible solution is assumed to be nonexistent, and the algorithm stops. In this case, a new controller structure has to be assumed.

Step 3: For each individual k i of GA_1, the penalty value is calculated by using Equation (3.15), and the fitness value by using Equations (3.12) or (3.13).

Step 4: Select individuals using tournament selection and apply genetic operators (crossover and mutation) to the individuals of GA_1.

Step 5: For each individual k i of GA_1, calculate max (β(w, k i ))^0.5 using GA_2 as follows:

Substep a: Initialize the genes of each individual w j (j = 1,…,μ2) in the population, and set the generation number to g 2 = 1, where g 2 indicates the number of generations for GA_2.

Substep b: Evaluate the fitness of each individual.

Substep c: Select individuals using tournament selection, and apply genetic operators (crossover and mutation).

Substep d: If the maximum number of generations of GA_2 is reached, stop, and return the fitness of the best individual max (β(w, k i )) to GA_1 Otherwise, set g 2 = g 2 + 1, and go to Substep b.

Step 6: If the maximum number of generations of GA_1 is reached, stop.

Otherwise set g 1 = g 1 + 1 and go to Step 3.

The optimal solution of the optimization problem is represented by the best individual k *. First, it must be verified that the disturbance rejection condition max (β(w, k *))^0.5 < γ is satisfied; if it is not, the solution is unfeasible and the optimization process must be repeated. Once a feasible solution is obtained, the controller is tested in simulation studies: the control system is stimulated with a unit step, and the performance is evaluated for the nominal plant and for the plant subject to disturbance.
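Both algorithms rely on tournament selection (Step 4 and Substep c): a small group is drawn at random and its fittest member becomes a parent. A minimal sketch, with an illustrative population, toy fitness, and binary tournaments by default:

```python
import random

def tournament_select(population, fitness, t_size=2):
    """Return the fittest of t_size individuals drawn at random."""
    contenders = random.sample(population, t_size)
    return max(contenders, key=fitness)

random.seed(1)
pop = [[0.5], [1.5], [2.5], [3.5]]
fit = lambda k: -abs(k[0] - 2.4)   # toy fitness: closeness to 2.4
parents = [tournament_select(pop, fit) for _ in range(6)]

# A tournament over the whole population always returns the global best:
best = tournament_select(pop, fit, t_size=len(pop))
```

The tournament size controls selection pressure: binary tournaments keep diversity high, while larger tournaments converge faster at the risk of premature convergence.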

Design example

To illustrate the method, a detailed design example is presented Consider the control system shown in Figure 3.10.

The plant, a servomotor, is described by the following transfer function (Chen et al., 1995):

(3.16)

The weighting function W d (s) is given by the following (Chen et al., 1995):

The disturbance is considered to be d y (t) = 0.1 sin t, and the specified disturbance attenuation level is γ = 0.1.

The controller C(s, k ) is described by the following transfer function (Chen et al., 1995):

(3.18)

The vector k of the controller parameters is given by:

Because the plant, as described by Equation (3.16), already contains an integral action, a controller with integral action (k 2 ) is not necessary to control this plant.

The error signal E(s), assuming the input signal is a unit step, is evaluated as follows:

Figure 3.10 Control system with disturbance acting on the plant. (From Krohling, R.A. and Rey, J.P., IEEE Trans. on Evolutionary Computation, 5, 1, 78–82, Feb. 2001, ©2001 IEEE. With permission.)

(3.19)

where d 0 = 0.5, d 1 = 1, d 2 = 0, a 0 = 0.5, a 1 = 1 + 0.8k 3 , a 2 = 0.8k 1 , and a 3 = 0.8k 2 .

The squared error J 3 ( k ) is given by (Schneider, 1966; Westcott, 1954):

The time-weighted squared error I 3 ( k ) is given by (Schneider, 1966; Westcott, 1954):

The disturbance rejection constraint (β(w, k ))^0.5 is calculated using the software tool Mathematica (Wolfram, 1988):
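As with the robust stability constraint, the symbolic expression is not reproduced here, and the constraint can alternatively be checked by a numerical frequency sweep. The weight and sensitivity below are hypothetical stand-ins, not the example's actual transfer functions; only the attenuation level γ = 0.1 is taken from the example:

```python
def beta_sup(Wd, S, w_lo=1e-3, w_hi=1e3, n=1500):
    """Estimate max over w of beta(w, k) = |Wd(jw)*S(jw)|**2 on a
    logarithmic frequency grid."""
    best = 0.0
    for i in range(n):
        w = w_lo * (w_hi / w_lo) ** (i / (n - 1))
        s = complex(0.0, w)
        best = max(best, abs(Wd(s) * S(s)) ** 2)
    return best

gamma = 0.1  # disturbance attenuation level of the example
# Hypothetical weight W_d(s) = 0.05/(s + 1) and sensitivity S(s) = s/(s + 1):
# the sup is attained at w = 1, where beta**0.5 = 0.05 * 0.5 = 0.025.
b_max = beta_sup(lambda s: 0.05 / (s + 1), lambda s: s / (s + 1))
satisfied = b_max ** 0.5 < gamma   # disturbance rejection condition
```

A small value of the constraint, as in this sketch, indicates that the weighted sensitivity stays far below the attenuation level at every frequency, which is exactly the situation reported later for the design example.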

The controller parameters were searched within the following bounds (Chen et al., 1995):

The GA method has been applied to the design of the PID controller. The GA parameters were kept constant for all the simulations: crossover probabilities p c1 = p c2 = 0.35, mutation probabilities p m1 = p m2 = 0.02, population sizes of GA_1 and GA_2 of 100 and 50, respectively, penalty constant M t = 1,000,000, penalty constant M s = 100, maximum number of generations for GA_1 g 1max = 100, and maximum number of generations for GA_2 g 2max = 50. The values for crossover probability and mutation probability follow standard implementations in the literature (e.g., Michalewicz, 1996).

The convergence of the minimization of the ISE performance index J 3 ( k ), subject to the disturbance rejection constraint max (β(w, k ))^0.5 < γ, using GA_1 is shown in Figure 3.11. The minimum value J 3 ( k *) = 0.01083 is achieved in 12 generations, and the corresponding best individual, i.e., the vector of controller parameters, yields k * = [29.988, 0.184, 30.0] T .

The maximum value of the disturbance rejection constraint for the optimal vector of controller parameters k * was calculated using GA_2, yielding (β(w, k *))^0.5 = 0.02376. Since this value is less than γ, k * is a feasible solution, i.e., the disturbance rejection condition is satisfied.

The convergence of the minimization of the ITSE performance index I 3 ( k ), subject to the disturbance rejection constraint max (β(w, k ))^0.5 < γ, using GA_1 for the first 20 generations is shown in Figure 3.13. The minimum value I 3 ( k *) = 0.000668 is achieved in 11 generations, and the corresponding best individual yields k * = [29.992, 0.00001, 28.3819] T . This result shows that the GA correctly finds k 2 = k i ≈ 0 as the solution, confirming the suitability of the GA as an optimization method, because no integral action is needed to control the plant, as explained previously.

The maximum value of the disturbance rejection constraint for the optimal vector of controller parameters k * is calculated using GA_2, yielding (β(w, k *))^0.5 = 0.02460, as shown in Figure 3.14.

Figure 3.11 Convergence of the minimization of the ISE performance index J 3 ( k ) subject to the disturbance rejection constraint max (β(w, k ))^0.5 < γ.

Since this value is less than γ, k * is a feasible solution, i.e., the disturbance rejection condition is satisfied. The results obtained with the ITSE performance index show better performance than those obtained with the ISE performance index (Chen et al., 1995).

The performance of the control system shown in Figure 3.10 is evaluated by means of closed-loop step response tests for two cases: without disturbance and with a disturbance acting on the plant output. To assess the controller tuning, a unit step input is applied, and a tolerance of ±2% around the set-point amplitude of the controlled variable is considered.

Figure 3.12 Calculation of the maximum value of the disturbance rejection constraint max (β(w, k *))^0.5 .

Figure 3.13 Convergence of the minimization of the ITSE performance index I 3 ( k ) subject to the disturbance rejection constraint max (β(w, k ))^0.5 < γ. (From Krohling, R.A. and Rey, J.P., IEEE Trans. on Evolutionary Computation, 5, 1, 78–82, Feb. 2001. With permission.)

The tracking behavior of the control system determined by the minimization of the ISE performance index J 3 ( k ) is shown in Figure 3.15 for d y (t) = 0 and for a disturbance acting on the plant, d y (t) = 0.1 sin t (Chen et al., 1995).

The step response of the control system with the controller parameters (vector k *), which was obtained by minimizing the ISE performance index J 3 ( k ), for d y (t) = 0 and for a unit step disturbance d y (t) = 1(t), is shown in Figure 3.16.

The step response of the control system, which was determined by the minimization of the time-weighted squared error I 3 ( k ), is shown in Figure 3.17 for d y (t) = 0 and for the load disturbance d y (t) = 0.1 sin t.

Figure 3.14 Calculation of the maximum value of the disturbance rejection constraint max (β(w, k *))^0.5 . (From Krohling, R.A. and Rey, J.P., IEEE Trans. on Evolutionary Computation, 5, 1, 78–82, Feb. 2001. With permission.)

Figure 3.15 Unit step response with a sinusoidal disturbance.

The step response of the control system with the controller parameters (vector k *), which was obtained by minimizing the ITSE performance index I 3 ( k ), for d y (t) = 0 and for a unit step disturbance d y (t) = 1(t), is shown in Figure 3.18.

The closed-loop step response of the control system designed by the minimization of the ISE performance index presents no overshoot, with a settling time of about 4 s within a tolerance of ±2% around the set-point amplitude.

Figures 3.15 and 3.16 show that the closed-loop step response of the plant subject to the sinusoidal disturbance d y (t) = 0.1 sin t or to the unit step disturbance d y (t) = 1(t) differs only slightly from the response without disturbance.

Figure 3.16 Unit step response with a unit step disturbance [determined by the minimization of the ISE performance index J 3 ( k )]

The disturbance has only a small influence on the plant output compared with the nominal case without disturbance. This observation underlines the effectiveness of the method, highlighting its robustness in maintaining performance despite external disturbances.

The closed-loop step responses of the control system designed by the minimization of the ITSE performance index present no overshoot, with a settling time of about 4 s within a tolerance of ±2% around the set-point amplitude.

Figures 3.17 and 3.18 show that the closed-loop step response of the plant subject to the sinusoidal disturbance d y (t) = 0.1 sin t or to the unit step disturbance d y (t) = 1(t) differs only slightly from the nominal case without disturbance; the disturbances have a negligible effect on the plant's step response.

The good step response of the control system, even with disturbances acting on the plant, is explained by the low value of the disturbance rejection constraint, which strongly limits the influence of the disturbance. The results demonstrate the success of the proposed method in the design of an optimal disturbance rejection controller with fixed structure, in this case a PID controller.

The obtained results demonstrate the suitability of the developed meth- ods for the design of optimal disturbance rejection controllers If not optimal, at least a good solution was found.

