
PENALTY METHODS IN GENETIC ALGORITHM FOR SOLVING NUMERICAL CONSTRAINED OPTIMIZATION PROBLEMS


DOCUMENT INFORMATION

Basic information

Title: Penalty Methods In Genetic Algorithm For Solving Numerical Constrained Optimization Problems
Author: Mahmoud K.M. Aburub
University: Near East University
Department: Computer Engineering
Document type: Thesis
Year: 2012
City: Nicosia
Pages: 68
File size: 1.22 MB

Structure

  • PENALTY METHODS IN GENETIC ALGORITHM FOR SOLVING NUMERICAL CONSTRAINED OPTIMIZATION PROBLEMS

  • ABSTRACT

  • ÖZET

  • ACKNOWLEDGMENTS

  • LIST OF FIGURES

  • LIST OF TABLES

  • CHAPTER 1 INTRODUCTION

    • 1.1. What is optimization?

    • 1.2. Thesis Overview

  • CHAPTER 2 GENETIC ALGORITHMS

    • 2.1. Overview

    • 2.3. Selection

    • 2.3.1. Roulette Wheel Selection

      • Figure 2.3.1.1 Roulette Wheel Selection Algorithms

    • 2.3.2. Linear Ranking Selection

    • 2.3.3. Tournament Selection

      • Figure 2.3.2.1 Linear ranking selection pseudo code

      • Figure 2.3.3.1 Basic tournament selection pseudo codes

    • 2.4. Crossover

      • Figure 2.4.1 Crossover (Recombination) algorithms

    • 2.5. Mutation

    • 2.6. Population Replacement

    • 2.7. Search Termination

    • 2.8. Solution Evaluation

    • 2.9. Summary

  • CHAPTER 3 CONSTRAINTS HANDLING METHODS

    • 3.1. Penalty Method

      • Table 3.1.1 Static vs. dynamic penalty

    • 3.2. Adaptive Penalty for Constraints Optimization

    • 3.3. Static Penalty for Constrained Optimization

      • Figure 3.2.1 Adaptive Penalty Algorithms Pseudo Code

      • Figure 3.2.2 Static Penalty Algorithm Pseudo Code

    • 3.4. Stochastic Ranking for Constrained Optimization

    • 3.4.1. Stochastic Ranking using Bubble Sort like procedure

      • Figure 3.4.1.1 Stochastic Ranking Using Bubble Sort like Procedure

    • 3.5. Summary

  • CHAPTER 4 SIMULATION

    • 4.1. System Environment

      • Table 4.1.1 PC configurations

      • Table 4.1.2 GA and System parameters

      • Figure 4.1.1 System execution diagram

    • 4.2. Tested Problems

    • 4.3. Criteria for Assessment

      • Figure 4.3.1 Upper constraint

      • Figure 4.3.2 The function

      • Figure 4.3.3 Lower constraint

    • 4.4. No Free Lunch Theorem

    • 4.5. Summary

  • CHAPTER 5 EXPERIMENTAL RESULTS AND ANALYSIS

    • 5.1. Overview

      • Table 5.1.1 Number of variables and estimated ratio of feasible region

    • 5.2. Results Discussion

      • Table 5.2.1 Adaptive Penalty testing result

      • Table 5.2.2 Static Penalty testing result

      • Table 5.2.3 Stochastic Ranking testing result

    • 5.3. Result Comparison

      • Table 5.3.1 Algorithms Best Result Comparison

    • 5.4. Convergence Map

      • Table 5.4.1 Error achieved when FES equal to 5000, 50000 and 500000

      • Figure 5.4.1 Adaptive Penalty Convergence Map

      • Figure 5.4.2 Static Penalty Convergence Map

      • Figure 5.4.3 Stochastic Ranking Convergence Map

    • 5.5. Summary

  • CHAPTER 6 CONCLUDING REMARKS

    • 6.1. Conclusions

    • 6.2. Future Work

  • BIBLIOGRAPHY

  • ÖZGEÇMİŞ

  • APPENDIX

Contents

What is optimization?

Life is filled with challenges that drive innovation and environmental improvements. In computer science, optimization is a crucial process used to solve complex problems. For instance, to identify the maximum peak of a function, we must establish criteria for recognizing an optimum solution, whether it be a global or local optimum. By applying constraints, we can guide algorithms toward feasible solutions, and introducing mixed constraint types, such as equality and inequality constraints, can further complicate the optimization process. Ultimately, optimization can be defined as the systematic approach to finding the best solution within given parameters.

“optimization is to find an algorithm which solves a given class of problem” (Sivanandam & Deepa, 2008)

In mathematics, derivatives are utilized to identify optimal solutions; however, not all functions are continuous and differentiable. Generally, non-linear programming aims to determine the variable x that optimizes the function f(x).

The objective function f(x) operates within a defined search space S, where the feasible region is represented by the set F, which is a subset of S. Typically, S is characterized as an n-dimensional space within the global space ℝⁿ. Each vector x has specific domain boundaries defined by l(i) ≤ xᵢ ≤ u(i) for 1 ≤ i ≤ n, and the feasible region is established through a series of constraints.

Inequality constraints, represented as g_i(x) ≤ 0, and equality constraints, expressed as h_j(x) = 0, play a crucial role in optimization problems. Inequality constraints are considered active when they equal zero, while equality constraints are inherently active throughout the entire search space. Research has often concentrated on local optima, where a point x₀ is deemed a local optimum if there exists an ε such that for all points x in the ε-neighborhood of x₀, the function value f(x) is less than or equal to f(x₀).

Evolutionary algorithms serve as a global optimization method for complex objective functions, especially when traditional mathematical derivatives fall short due to hyperspace complexity or function discontinuities (Michalewicz & Schoenauer, 1996).

Evolutionary computing effectively addresses complex problems with strict feasible region boundaries, while genetic algorithms serve as a specialized optimization technique. These algorithms utilize chromosomal representations that can be either continuous or discrete, making them suitable for intricate optimization tasks. They remain unaffected by the complexity or shape of the objective function. By incorporating constraint functions for infeasible chromosomes, genetic algorithms ensure that these individuals either become feasible or incur a cost for their infeasibility, while feasible chromosomes maintain their objective function values. This approach not only enhances feasible solutions but also penalizes infeasible ones, regardless of the function's shape. Additionally, genetic algorithms can circumvent discontinuity issues by leveraging constraint values.

In our study, we explored the use of penalty methods for managing infeasible chromosomes, focusing on static and dynamic penalties rather than a killing penalty. We applied a specified value adjustment to infeasible individuals, contrasting this with stochastic ranking, which merely orders individuals without altering the objective function. Our findings indicated that static penalty emerged as the most effective optimization method, consistently offering opportunities to achieve global optimum solutions. Additionally, we observed that the various algorithms, while competing in the same environment, employed different strategies that led to distinct outcomes, highlighting the intriguing implications of the No Free Lunch Theorem for algorithm performance.

In this study, twelve benchmarks were evaluated using three algorithms: adaptive penalty, static penalty, and stochastic ranking. While all three methods successfully addressed most problems, the problems fell into three categories: problems solved with optimal values, those with unsatisfied constraints, and those permanently unsolved. The static penalty method excelled, achieving the highest number of problems solved, the best feasible rate, and a standard deviation closely resembling an identical distribution shape. Stochastic ranking followed in performance but solved fewer problems, while adaptive penalty ranked lowest, matching stochastic ranking in the number of problems solved: ten out of twelve. These benchmarks were specifically chosen for their complexity, featuring various variables and constraints. Future testing of additional algorithms aims to further assess their reliability in tackling these challenging scenarios, all designed to identify global optimum solutions in a complex hyperspace environment.

Thesis Overview

This thesis is organized incrementally, starting with simple material and moving to more complicated discussion as the issues require.

CHAPTER 2: Discusses the Genetic Algorithms framework, structure, and basic operations.

CHAPTER 3: Explores the use of various constraints and criteria in problem-solving, focusing on the penalty method as the central approach for managing constraints. Three distinct types of handling are examined: adaptive penalty, static penalty, and stochastic ranking.

CHAPTER 4: Details the simulation process for the tested problems, outlining the assessment and analysis methods for the results. This chapter also provides pseudocode for the systems and a convergence map, concluding with a brief overview of the No Free Lunch theorem.

CHAPTER 5: Discusses the results of testing the twelve selected benchmark problems and illustrates the convergence graphs.

CHAPTER 6: Conclusions based on the achieved results, and future work.

GENETIC ALGORITHMS

Overview

Genetic Algorithms (GA) are a key component of evolutionary computing and are widely recognized as an effective global search technique due to their ability to explore and exploit search spaces using performance measures. Inspired by Charles Darwin's theory of "survival of the fittest," GA operates on the principle that individuals with superior traits have a better chance of survival and reproduction. The fundamental elements of GA are chromosomes made up of genes, where each gene represents a factor in the phenotype with defined upper and lower bounds. Gene length determines the range of representable solutions: a gene length of n allows the representation of 2ⁿ distinct binary strings. Each gene can take one of multiple alleles stored in a locus, and the genes collectively represent an individual. Introduced by Holland in 1975 for nonlinear problem-solving, GA is problem-dependent, requiring an accurate representation of alleles to ensure effective exploration of the search space, with allele values reflecting the genotype that directly influences the phenotype's fitness evaluation.

When a decision variable can take values between a and b, with b greater than a, and is represented by a binary string of length L, the attainable precision is (b − a) / (2ᴸ − 1) (Reeves & Rowe, 2002), the width represented by each gene value.

An alternative approach for binary representation of individuals is gray coding, in which adjacent values differ in a single bit, reducing the Hamming cliff of traditional binary representation to one. The likelihood of at least one allele being present at each locus can be expressed using the equation provided by Sharda & Voò (2002).

Genetic Algorithms (GAs) involve essential operations including selection, crossover, mutation, population replacement, and fitness evaluation, as illustrated in the GA flowchart (Haupt & Haupt, 2004). A key component in this process is fitness, which serves as the primary criterion for evaluating solutions and is tailored to the specific problem at hand. By translating genetic information from genotype to phenotype, we can formulate the objective function, determining whether individuals are deemed suitable for breeding. Various methods exist for assessing the objective function and selection criteria, which can be categorized as ordinal-based, like linear ranking, or proportional-based, such as roulette wheel selection.
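To make this flow concrete, below is a minimal, self-contained sketch of such a GA loop in Java (the implementation language used for the system described in Chapter 4). The objective here, counting 1-bits, and all parameter values are illustrative assumptions rather than the thesis code:

```java
import java.util.Random;

// Minimal GA loop (selection -> crossover -> mutation -> replacement)
// maximizing the number of 1-bits as a stand-in objective.
public class SimpleGA {
    static final int POP = 100, LEN = 32, GENS = 1000;   // illustrative values
    static final double PC = 0.9, PM = 1.0 / LEN;        // crossover / mutation rates
    static final Random rng = new Random();

    static int fitness(boolean[] c) {                    // objective: count of 1-bits
        int f = 0;
        for (boolean b : c) if (b) f++;
        return f;
    }

    static boolean[] tournament(boolean[][] pop) {       // binary tournament selection
        boolean[] a = pop[rng.nextInt(POP)], b = pop[rng.nextInt(POP)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    public static void main(String[] args) {
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] c : pop)
            for (int i = 0; i < LEN; i++) c[i] = rng.nextBoolean();

        for (int g = 0; g < GENS; g++) {
            boolean[][] next = new boolean[POP][];
            for (int k = 0; k < POP; k += 2) {
                boolean[] p1 = tournament(pop).clone(), p2 = tournament(pop).clone();
                if (rng.nextDouble() < PC) {             // single-point crossover
                    int cut = 1 + rng.nextInt(LEN - 1);
                    for (int i = cut; i < LEN; i++) {
                        boolean t = p1[i]; p1[i] = p2[i]; p2[i] = t;
                    }
                }
                for (int i = 0; i < LEN; i++) {          // bitwise mutation
                    if (rng.nextDouble() < PM) p1[i] = !p1[i];
                    if (rng.nextDouble() < PM) p2[i] = !p2[i];
                }
                next[k] = p1; next[k + 1] = p2;
            }
            pop = next;                                  // offspring replace the parents
        }
        int best = 0;
        for (boolean[] c : pop) best = Math.max(best, fitness(c));
        System.out.println("best fitness = " + best);
    }
}
```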

The Genetic Algorithm (GA), initially introduced by Holland in 1975, utilized a binary representation that emulated natural chromosome gene structures, making it straightforward to implement the basic GA operations. In our initial tests, we employed a fundamental binary-to-decimal conversion method to represent variables in decimal form: to represent the range 0 to 15 for a given problem, we start from 0 and assign discrete values by incrementing by one, so the numbers from 0 (binary 0000) up to 15 require 4 bits. With this method the results were terrible; basically, three problems were highlighted.

A. Number of bits needed: every variable has its own domain with a lower and an upper bound. For example, in problem G5 (see page 34), take a sample of two variables; here we face the problem of mapping each variable's domain onto the binary level, that is, assigning a suitable number of binary bits to every variable. To represent x1 in the trivial method, we need an 11-bit binary string.


The challenge of representing variables in binary format also led to a problem we call the Big Jump. This issue arises when attempting to convert decimal values, such as 200, into binary strings while ensuring they remain within the defined boundaries. Additionally, managing internal discrepancies, or "off bits," complicates the representation of the domain.

The presence of numerous empty (off) alleles can significantly alter the search space toward infeasible solutions. In both maximization and minimization problems, successive recombination operations will eventually produce a series of 0's or 1's after a certain number of iterations. To represent the value of 200 using fewer bits, one must rely solely on the activated (on) bits, leading to a value that is less than expected.

The loss of valuable data points in the search space, resulting in inefficient solutions, highlights the challenges of binary representation and domain satisfaction. To address this, we implemented a temporary fix using uniform crossover, where the crossover probability is applied independently to each bit rather than to the entire chromosome. Additionally, we suggest a methodology that enables the construction and retrieval of variable values with reduced complexity and enhanced accuracy.

To optimize a function effectively, each variable must be assigned values from a specified domain. For precise optimization, the domain should be structured according to the desired decimal precision, denoted by n: each variable with boundaries aᵢ ≤ xᵢ ≤ bᵢ is mapped onto a binary string of mᵢ bits, where mᵢ is the smallest integer satisfying

(bᵢ − aᵢ) × 10ⁿ ≤ 2^mᵢ − 1    (1)

For instance, suppose the domain is 0 ≤ Dᵥ ≤ 3 and we need precision of degree 2; then (3 − 0) × 10² = 300, and to represent 300 we need 9 bits (2⁹ − 1 = 511), which satisfies the inequality of Equation (1). Variable boundaries elsewhere are represented in the same way (Goldberg, 1989).
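A small sketch of this mapping under the assumptions above (a domain [a, b] and a precision of n decimal places): first find the smallest bit count satisfying Equation (1), then decode a bit string linearly into the domain. Method names are illustrative:

```java
// Domain-to-bits mapping of Equation (1): the smallest m with
// (b - a) * 10^n <= 2^m - 1, plus the linear decoding back into [a, b].
public class BinaryMapping {
    static int bitsNeeded(double a, double b, int decimals) {
        double span = (b - a) * Math.pow(10, decimals);
        int m = 1;
        while (Math.pow(2, m) - 1 < span) m++;   // grow m until Equation (1) holds
        return m;
    }

    static double decode(boolean[] bits, double a, double b) {
        long v = 0;
        for (boolean bit : bits) v = (v << 1) | (bit ? 1 : 0);
        return a + v * (b - a) / (Math.pow(2, bits.length) - 1);
    }

    public static void main(String[] args) {
        // The example from the text: 0 <= Dv <= 3 with precision degree 2 gives
        // (3 - 0) * 10^2 = 300, which needs 9 bits (2^9 - 1 = 511 >= 300).
        System.out.println(bitsNeeded(0, 3, 2));   // prints 9
        System.out.println(decode(new boolean[]{true, false, true}, 0.0, 3.0));
    }
}
```

Because the decoding is linear over [a, b], a domain with negative bounds, such as the −0.55 ≤ x₃ ≤ 0.55 case discussed next, is handled by the same mapping without any sign bits.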

B. Imagine a more complicated scenario where one variable has a huge domain and another variable has a tiny one. For example, in the same problem G5 (see page 33), where −0.55 ≤ x₃ ≤ 0.55, the question is how to represent a variable domain that has a negative range. If we were to use a probability to decide whether the corresponding set of bits in the chromosome is positive or negative, the result would be pure guesswork.

To effectively design a control matrix for both negative and positive variables, it is essential to initialize these variables within a defined domain range and assess their signs. Utilizing a fixed variable range, which maintains a consistent sign throughout the search process, poses significant challenges in both implementation and mathematical validation. Additionally, it is crucial to determine whether all variables, regardless of their bit count, require shifting; the answer is affirmative.

To simplify the process and minimize errors, it is essential to create a proportional number of bits. Additionally, retrieving the objective function value from the chromosome requires multiple standard methods for extracting variable values, necessitating a more complex mapping of bits to ratios or real values. Ultimately, the variables remain discrete and largely consistent throughout the runs and search processes.

C. Re-constructing the binary string: after retrieving the variable values and calculating the objective function, we need to apply the GA operations and the penalty. The question is how to retrieve a specific variable value from the penalty function, and which methodology to use to construct the binary string from its corresponding variable values. We designed more than one solution, but all were unsuitable. Mostly the left-most binary string values were almost all zeroes; in contrast, the numeric optimization method discussed before requires dealing only with positive binary strings. We then find the inverse of the given penalty function, and we tried it on a simple maximization case; the produced objective function loses too many points of the original function. The solution is to alter the value of the penalized individual according to the same Equation (1).

Selection

Selection is a crucial process in breeding, where two parents are chosen from a population to create offspring with improved traits. Key considerations include the number of individuals selected and whether the offspring will exhibit better characteristics. The basic method of selection is random, exemplified by roulette wheel selection, which uses individual objective functions as probabilities for selection. However, this raises additional questions, such as how many copies the selected individuals will produce for the next generation. To address the limitations of basic selection methods, strategies like fitness scaling, balancing fitness pressure, and linear ranking selection have been developed. Selection can be categorized into two types: proportional selection, which uses fitness ratios from the overall population, and ordinal-based selection, where fitness is determined by the individual's rank. In this study, we employed binary tournament selection due to its effectiveness and inclusivity, allowing even lower-ranked individuals to participate in the selection process. Additionally, stochastic ranking can be considered for constraint optimization, though it is not comprehensive enough on its own, necessitating complementary methods for effective selection.

The Roulette Wheel selection method is a well-known approach used in Genetic Algorithms (GA), where elements are selected linearly from a mating pool based on their fitness. The cumulative objective function is calculated, leading to the average fitness, which is then used to determine the probability of selecting each individual. This method allows individuals to be chosen in proportion to their fitness levels, but it has its drawbacks, such as a dependence on individual objective functions that can lead to the dominance of the fittest individuals in the mating pool. Consequently, this may hinder exploration in less feasible areas of the search space, even when those individuals could potentially offer viable solutions. To mitigate this issue, techniques like fitness scaling are employed to reduce the influence of the fittest individuals during the search process.

Figure 2.3.1.1 Roulette Wheel Selection Algorithms
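A minimal sketch of roulette wheel selection consistent with the description above, assuming non-negative fitness values; names are illustrative:

```java
import java.util.Random;

// Fitness-proportional (roulette wheel) selection: an individual's chance of
// being drawn equals its fitness divided by the total population fitness.
public class RouletteWheel {
    static final Random rng = new Random();

    static int select(double[] fitness) {
        double total = 0;
        for (double f : fitness) total += f;
        double spin = rng.nextDouble() * total;   // a random point on the wheel
        double cumulative = 0;
        for (int i = 0; i < fitness.length; i++) {
            cumulative += fitness[i];
            if (spin <= cumulative) return i;     // first slice covering the point
        }
        return fitness.length - 1;                // guard against rounding error
    }

    public static void main(String[] args) {
        double[] fitness = {1.0, 2.0, 7.0};       // index 2 is drawn ~70% of the time
        int[] counts = new int[3];
        for (int i = 0; i < 10_000; i++) counts[select(fitness)]++;
        System.out.printf("%d %d %d%n", counts[0], counts[1], counts[2]);
    }
}
```

The example also exposes the drawback noted above: the fittest individual quickly dominates the mating pool unless fitness scaling is applied.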

Linear ranking selection differs from proportional selection by focusing on the position of individuals after sorting according to the specific problem. In this method, the worst element is assigned the first position, and each individual's selection probability remains constant, as given by Equation (2) (Blickle & Thiele, 1997):

pᵢ = (1/N) · (η⁻ + (η⁺ − η⁻) · (i − 1)/(N − 1))    (2)

where N represents the population size and i denotes the index of the element in the sorted order. The probability of selecting the worst individual is therefore η⁻/N, and that of the best is η⁺/N (Blickle & Thiele, 1997). The value of η⁻ must fall within the range [0, 1], and the corresponding value of η⁺ is calculated as η⁺ = 2 − η⁻; η⁻ thus determines the probability that the least fit individual participates in the selection process.

Figure 2.3.2.1 illustrates the pseudo code for the linear ranking selection algorithm (Blickle & Thiele, 1997).

Unlike linear ranking selection, tournament selection has a sensitive selection pressure; the population is partitioned into two subsets N = {T_lower, T_upper} (Sharda & Voò, 2002). In tournament selection, the tour length t is crucial, as it determines how many individuals are compared to select N parents for the pool, and the selection pressure grows with t (Sharda & Voò, 2002). A notable drawback of this method is that, under high selection pressure, the best individual is consistently chosen, which may hinder genetic diversity. Additionally, the likelihood of selecting the median string depends on the performance of the remaining individuals in its set, emphasizing the importance of maintaining a balanced selection process. Figure 2.3.3.1 shows the basic tournament selection pseudo code (Blickle & Thiele, 1997).

Input: the population P(τ) and the reproduction rate of the worst individual η⁻

Output: the population after selection P'(τ)

J ← population sorted according to fitness, with the worst individual at the first position

for i ← 1 to N do
    select Jᵢ with probability pᵢ, where pᵢ is calculated by Equation (2)
od
Return P'

Figure 2.3.2.1 Linear ranking selection pseudo code
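The following sketch computes the selection probabilities of Equation (2) as reconstructed above. With N = 5 and η⁻ = 0.5, the worst individual receives probability 0.10, the best 0.30, and the probabilities sum to 1:

```java
// Selection probabilities for linear ranking per Equation (2), with the worst
// individual at rank i = 1, eta_plus = 2 - eta_minus, and eta_minus in [0, 1].
public class LinearRanking {
    static double[] probabilities(int n, double etaMinus) {
        double etaPlus = 2.0 - etaMinus;
        double[] p = new double[n];
        for (int i = 1; i <= n; i++) {
            p[i - 1] = (etaMinus + (etaPlus - etaMinus) * (i - 1) / (double) (n - 1)) / n;
        }
        return p;
    }

    public static void main(String[] args) {
        double sum = 0;
        for (double p : probabilities(5, 0.5)) {
            System.out.println(p);   // 0.10, 0.15, 0.20, 0.25, 0.30
            sum += p;
        }
        System.out.println("sum = " + sum);   // 1.0
    }
}
```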

Input: the population P(τ) and the tournament size t ∈ {1, 2, …, N}

Output: the population after selection P'(τ)

for j ← 1 to N do
    J'ⱼ ← best individual out of t individuals randomly picked from P(τ)
od
Return P'

Figure 2.3.3.1 Basic tournament selection pseudo code

Crossover

Crossover is a production method that enhances the search process within a given search space by exploiting existing traits to generate potentially superior offspring. This technique aims to combine the characteristics of parent individuals, although it does not create new traits. In a genetic algorithm (GA), each individual is assigned a probability of crossover, which determines their inclusion in the mating pool. The crossover probability (P_c) typically remains constant throughout the process, ranging from 0.5 to 1.0, as established by Goldberg in 1989. A uniform random generator is utilized to produce random values for selecting individuals for crossover, ensuring a dynamic and effective search for improved solutions.


In genetic algorithms, individuals are evaluated against a threshold value P_c to determine their inclusion in the mating pool. Various crossover techniques exist, including single point, multipoint, uniform, and three-parent crossovers. This study focuses on single point crossover, where two parents exchange genetic material at a randomly chosen point. However, a key limitation of single point crossover is that the heads of the parents remain unchanged, potentially retaining useful solutions unchanged. In contrast, multipoint crossover employs multiple random crossover points, allowing for a more thorough exchange of genetic information. Uniform crossover, on the other hand, applies a probability to each allele, enhancing the likelihood of locus value swaps. For instance, in binary representation, a locus value of 1 results in the first individual's content being transferred to the second, and vice versa for a value of 0. Figure 2.4.1 illustrates the pseudo code for single crossover (Goldberg, 1989).

Input: two individuals randomly picked from the mating pool (Figure 2.4.1)
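As a complement to the single point operator used in this study, the following hedged sketch shows the uniform variant described above, where the swap decision is made independently per locus:

```java
import java.util.Random;

// Uniform crossover: every locus may swap between the two parents, each with
// its own probability, instead of one cut point per chromosome pair.
public class UniformCrossover {
    static final Random rng = new Random();

    static void cross(boolean[] p1, boolean[] p2, double pSwap) {
        for (int i = 0; i < p1.length; i++) {
            if (rng.nextDouble() < pSwap) {   // swap this locus between the parents
                boolean t = p1[i];
                p1[i] = p2[i];
                p2[i] = t;
            }
        }
    }

    public static void main(String[] args) {
        boolean[] a = {true, true, true, true};
        boolean[] b = {false, false, false, false};
        cross(a, b, 0.5);                     // each locus swapped with probability 0.5
        System.out.println(java.util.Arrays.toString(a));
        System.out.println(java.util.Arrays.toString(b));
    }
}
```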

Mutation

Mutation plays a crucial role in evolutionary algorithms by preventing algorithms from getting stuck in local minima, as it enables exploration of the entire search space. For instance, when attempting to maximize the function f(x) = x within the constrained interval [0, 7], the initial population may not yield the optimal solution; by iteratively mutating specific loci, chromosomes can shift closer to the binary value (111). Mutation occurs with a low probability for each allele, which contrasts with crossover, yet it remains a vital process. The type of mutation employed varies based on the representation of the data; for example, mutations of real or integer data differ from those in binary representation. In cases where individuals are represented in binary, mutations occur at the bit level, swapping 0s for 1s and vice versa.

The mutation probability is typically P_m = 1/n (Sharda & Voò, 2002), where n is the chromosome length. Sometimes P_m may be fixed, but the typical P_m is in the range 0.05-0.5 (Goldberg, 1989); in our system we use the same values.

Population Replacement

There are many options for population replacement, but to summarize, we are going to describe two types:

1. After the basic GA operations, select only the best individuals using one of the preceding methods, where all parents and offspring share the same probability of being selected.

2. Select only from the newly created offspring and discard all parents; in other words, a replacement method where the offspring inherit from their parents.

Search Termination

Search termination criteria are essential in genetic algorithms (GA) due to their stochastic nature, which could otherwise allow them to run indefinitely. It is crucial to establish a stopping point so that the solutions can be evaluated effectively. These stopping conditions can be categorized into three types: time-dependent, iteration-dependent, and fitness-dependent.

1. Maximum generation: if we reach the maximum number of allowed iterations, we stop the algorithm. Sometimes we need to predict the specific number of needed iterations depending on the complexity of the problem; for instance, our maximum number of function evaluations (FES) is 500000. The number of iterations is the most important and most widely used criterion, and it is our primary stopping condition.

2. Elapsed time: the span from start time to end time can sometimes serve as a secondary stopping condition. Problems vary in complexity, and sometimes we can predict an interval for stopping the algorithm's runs. Meanwhile, if the maximum generation number is reached, the run must stop.

3. Minimal diversity: measuring the internal differences between traits and fitness is a crucial operation, because once traits are preserved a solution will retain its value even after the recombination process, and the algorithm then needs to be stopped. Sometimes this criterion takes priority over the number of iterations.

4. Best individual: if the minimum fitness in the population drops under the convergence value, the search process is brought to a faster conclusion that guarantees at least one good solution (Sivanandam & Deepa, 2008).

5. Worst individual: the minimum fitness value of the worst individual can remain below the convergence criterion; in this case the convergence value may not be obtained (Sivanandam & Deepa, 2008).

6. Sum of fitness: the search is considered to have satisfactorily converged when the sum of fitness over the entire population is less than or equal to the convergence value in the population record. This guarantees that logically all elements are in the same range (Sivanandam & Deepa, 2008).

Solution Evaluation

In each iteration, Genetic Algorithms (GA) improve and eliminate certain traits, prompting us to define what "best" truly means. The concept of the best solution is often ambiguous, yet it is crucial to arrive at a definitive solution in the final generation, which may not always align with our initial expectations. We can then either predict the number of iterations needed for further enhancement or utilize the best solution identified. Additionally, the feasible region may be constrained, as seen in our tested cases. In Chapter 3, we explore how to formulate the best solution not merely as the minimum, but as one that satisfies all constraints to be deemed a valid solution.

Summary

In our study, we concentrated solely on standard Genetic Algorithm (GA) operations, selecting single point crossover, bitwise mutation, binary tournament selection, and population replacement with new offspring. We implemented a rigorous evaluation strategy, assessing solutions not only by their fitness but also by the number of satisfied constraints, making this dual approach critical for our evaluation process.

CONSTRAINTS HANDLING METHODS

Penalty Method

Evolutionary Algorithms (EA) utilize a penalty method to effectively address the challenges of constraint handling. By transforming a constrained problem A into an unconstrained variant A* through the introduction of penalties, EA modifies the objective function based on the extent of constraint violations. This approach, as noted by Coello (2000), employs two types of search directives: exterior and interior. The exterior search begins in an infeasible region and aims to move individuals into a feasible area, while the interior search starts with small random values within a feasible region, using constraints to maintain boundaries. The exterior method offers significant advantages, as initial solutions generated randomly are not required to be optimal. In this study, we chose the exterior approach for its simplicity, allowing the algorithm to navigate the complex search space effectively. The general formula for the exterior penalty is given by Equation (3) (Davis, 1987):

φ(x) = f(x) + Σᵢ rᵢ · Gᵢ + Σⱼ cⱼ · Lⱼ    (3)

where φ(x) is the new objective function value and f(x) is the objective function before applying the penalty, calculated according to the problem percepts.

Gᵢ represents the inequality constraints and Lⱼ the equality constraints, while the penalty coefficients rᵢ and cⱼ play crucial roles in the formulation. To convert each equality constraint into an inequality, a tolerance factor ε = 0.0001 is introduced, ensuring that the condition |hⱼ(x)| − ε ≤ 0 is satisfied.

As in Liang et al. (2006), the value of Gᵢ is defined as Gᵢ = max[0, gᵢ(x)]^β (Coello, 2002), so individuals satisfying the constraints incur no penalty and retain their values. Conversely, Lⱼ = |hⱼ(x)| (Coello, 2000). By calculating the absolute value and applying the tolerance threshold, we can effectively classify constraint satisfaction. Typically, the value of β is set to either 1 or 2 (Coello, 2000).
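A sketch of Equation (3) under the definitions above, with Gᵢ = max[0, gᵢ(x)]^β and equalities relaxed by ε; the Fn interface and the toy problem in main are illustrative assumptions:

```java
// Exterior penalty of Equation (3): phi(x) = f(x) + sum_i r_i*G_i + sum_j c_j*L_j,
// where G_i = max(0, g_i(x))^beta and each equality is relaxed to |h_j(x)| - eps <= 0.
public class ExteriorPenalty {
    interface Fn { double eval(double[] x); }

    static final double EPS = 0.0001;   // tolerance used to relax equality constraints
    static final double BETA = 2.0;     // beta is typically set to 1 or 2

    static double phi(double[] x, Fn f, Fn[] g, double[] r, Fn[] h, double[] c) {
        double penalty = 0;
        for (int i = 0; i < g.length; i++) {
            double gi = Math.max(0, g[i].eval(x));                 // G_i: zero when satisfied
            penalty += r[i] * Math.pow(gi, BETA);
        }
        for (int j = 0; j < h.length; j++) {
            double lj = Math.max(0, Math.abs(h[j].eval(x)) - EPS); // relaxed equality
            penalty += c[j] * lj;
        }
        return f.eval(x) + penalty;                                // new objective value
    }

    public static void main(String[] args) {
        // Toy problem: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0 (i.e. x >= 1).
        Fn f = x -> x[0] * x[0];
        Fn[] g = { x -> 1 - x[0] };
        double v = phi(new double[]{0.5}, f, g, new double[]{10}, new Fn[0], new double[0]);
        System.out.println(v);   // 0.25 + 10 * 0.5^2 = 2.75: the infeasible point pays
    }
}
```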

The selection of optimal penalty factors in penalty function algorithms is crucial, as it directly impacts the solution's efficiency and consistency. Choosing a penalty factor that is too high can lead to a rapid convergence towards the feasible region, limiting the algorithm's ability to explore potentially useful infeasible solutions. Conversely, a penalty factor that is too low may cause the algorithm to dwell on infeasible solutions, resulting in prolonged convergence times and the risk of being trapped in local minima. This trade-off between convergence speed and solution quality has been a recurring topic in previous research and conferences.

This work adopts an approach to stochastic ranking using a bubble-sort-like procedure, focusing on the balance between objective and penalty functions. Optimizing penalty coefficients can enhance the performance of individuals in solving specific problems, enabling their inclusion in the mating pool. Various methods are examined to assess individuals based on their status within or outside the feasible region. We must consider how to manage individuals located in the feasible region, including the appropriate level of pressure to apply. Additionally, for those outside the feasible region, determining the optimal penalty factor is crucial for their adjustment and integration into feasible solutions.

Coello summarized some guidelines and heuristics for designing penalty functions (Richardson, Palmer, Liepins, & Hilliard, 1989), with recommendations such as:

1. Distance-based penalty functions achieve better performance than constraint-dependent ones.

2. If the number of constraints is limited, then the feasible region is limited too, which implies the algorithm frequently will not have a solution. This was the case in this study for cases 1 and 10.

3. The penalized fitness function should stay close to the feasible region.

Many studies have been previously carried out. All of them may be categorized into one of two basic methods: static penalty and dynamic penalty.

Static penalty methods maintain a constant penalty function and coefficient throughout the iterations, without feedback from the population, relying on historical data or educated guesses. This approach can hinder the search process in its final stages, as the fixed penalty coefficient may prevent the algorithm from achieving a global optimum, similar to fixed mutation probabilities in simple Genetic Algorithms (GAs). While a stable penalty coefficient can yield satisfactory solutions in some cases, it often leads to local minima, particularly in complex problems. Homaifar, Lai, and Qi (1994) suggest a user-defined approach where multiple violation levels dictate corresponding penalty coefficients, which increase with the severity of the violation. Michalewicz (1996) presents the evaluation equation for individuals that incorporates these penalty coefficients, highlighting the role of constraints in the optimization process:

fitness(x) = f(x) + Σᵢ₌₁ᵐ R_{k,i} · max[0, gᵢ(x)]²    (4)

where fitness(x) is the objective function after applying the penalty and R_{k,i} is the penalty coefficient for violation level k of constraint i.

The method involves determining the number of violation levels, denoted as N, which is predefined by the user. A significant drawback of this approach is its similarity to mutation in Genetic Algorithms (GA): the complexity increases as the number of violation levels rises, making it more challenging for the algorithm to find the optimal solution. The penalty for violations is calculated using Equation (5) (Morales & Quezada, 1998), where s represents the number of satisfied constraints and m indicates the total number of constraints:

fitness(x) = f(x), if the solution is feasible
fitness(x) = K − s · (K/m), otherwise    (5)

K is a large value, set to 1×10⁹ (Morales & Quezada, 1998). A notable limitation of this approach is its failure to incorporate information about the degree of constraint violation, which could potentially guide the search toward more promising areas.
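Equation (5) translates into a few lines; note that once any constraint is unsatisfied, the objective value plays no role at all and only the count s of satisfied constraints grades the individual. A minimal sketch:

```java
// Fitness per Equation (5) (Morales & Quezada, 1998): feasible individuals keep
// f(x); infeasible ones are graded only by s, the number of satisfied constraints.
public class StaticPenaltyFitness {
    static final double K = 1e9;   // the large constant from the text

    static double fitness(double fx, boolean[] satisfied) {
        int m = satisfied.length, s = 0;
        for (boolean ok : satisfied) if (ok) s++;
        return s == m ? fx : K - s * (K / m);
    }

    public static void main(String[] args) {
        System.out.println(fitness(42.0, new boolean[]{true, true}));         // 42.0
        System.out.println(fitness(42.0, new boolean[]{true, false, false})); // ~6.67e8
    }
}
```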

Dynamic penalty takes a contrasting approach to the previous methods by utilizing information from the current iteration during the evaluation process. Joines and Houck (1994) proposed a formula that allows for the dynamic assessment of individuals based on the generation number, enhancing the evaluation framework in evolutionary algorithms:

fitness(x) = f(x) + (C · t)ᵅ · SVC(β, x)

where C, α and β are predefined constants, t is the generation number, and the SVC function is calculated from the constraint violations (Joines & Houck, 1994):

SVC(β, x) = Σᵢ₌₁ⁿ Dᵢᵝ(x) + Σⱼ₌₁ᵐ Dⱼ(x)

The values of Dᵢ(x) and Dⱼ(x) are calculated according to the following equations (Joines & Houck, 1994):

Dᵢ(x) = 0 if gᵢ(x) ≤ 0, and |gᵢ(x)| otherwise, for 1 ≤ i ≤ n
Dⱼ(x) = 0 if −ε ≤ hⱼ(x) ≤ ε, and |hⱼ(x)| otherwise, for 1 ≤ j ≤ m

With these equations, the penalty grows dramatically as the iterations accumulate, so infeasible individuals are penalized progressively more heavily in later generations.
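A sketch of the dynamic penalty as reconstructed above; the constants C = 0.5 and α = β = 2 are values commonly cited for Joines and Houck's method, assumed here for illustration:

```java
// Dynamic penalty (Joines & Houck, 1994): fitness(x) = f(x) + (C*t)^alpha * SVC(beta, x).
// Because t is the generation number, the same violation costs more in later generations.
public class DynamicPenalty {
    static final double C = 0.5, ALPHA = 2.0, BETA = 2.0, EPS = 0.0001;

    static double dIneq(double gi) { return gi <= 0 ? 0 : Math.abs(gi); }             // D_i(x)
    static double dEq(double hj)   { return Math.abs(hj) <= EPS ? 0 : Math.abs(hj); } // D_j(x)

    static double fitness(double fx, double[] g, double[] h, int t) {
        double svc = 0;
        for (double gi : g) svc += Math.pow(dIneq(gi), BETA);
        for (double hj : h) svc += dEq(hj);
        return fx + Math.pow(C * t, ALPHA) * svc;
    }

    public static void main(String[] args) {
        double[] g = {0.3};                                       // one violated inequality
        System.out.println(fitness(1.0, g, new double[0], 10));   // 1 + 25 * 0.09 = 3.25
        System.out.println(fitness(1.0, g, new double[0], 100));  // same violation, cost 226
    }
}
```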

To effectively address infeasible solutions in optimization, it is crucial to begin with a minimal penalty factor, thereby letting the Evolutionary Algorithm (EA) explore more infeasible solutions early on. Dynamic penalty serves as a broad framework encompassing various techniques, including simulated annealing and adaptive penalty methods.

From the previous section we can summarize the differences between static and dynamic penalty:

Table 3.1.1 Static vs. dynamic penalty

Static penalty:
- The penalty function is constant.
- Needs a priori information about the probability of constraint violations.
- Hard to define the penalty factor.

Dynamic penalty:
- The penalty function and coefficient change dynamically depending on the current iteration.
- Needs user-defined values such as beta and alpha to be assigned accurately.
- Hard to define the penalty function.

Adaptive penalty is an enhancement of dynamic penalty that utilizes current population data to determine a more precise penalty value for individuals. This approach directs the search towards feasible regions by incorporating feedback from the population. By implementing adaptive penalty, the hill-climbing difficulties of genetic algorithms (GA) are mitigated, as it prevents the algorithm from reverting to previous stages; the search never regresses to earlier regions. The updated fitness after applying the penalty is given by Equation (10) (Hadj-Alouane & Bean, 1997), where λ(t) represents the population feedback used to adjust the penalty:

fitness(x) = f(x) + λ(t) · [ Σᵢ₌₁ⁿ gᵢ²(x) + Σⱼ₌₁ᵐ |hⱼ(x)| ]    (10)

The penalty weight λ(t) is set with respect to the current population, and its value is updated adaptively in every iteration by Equation (11) (Hadj-Alouane & Bean, 1997):

λ(t+1) = λ(t)/β₁  in case #1
λ(t+1) = λ(t)·β₂  in case #2
λ(t+1) = λ(t)     otherwise    (11)

In optimization, both β₁ and β₂ must be greater than 1 and not equal to each other to prevent cycling (Coello, 2000). In case #1, when the best individual of the previous populations always lay within the feasible region, the penalty factor is decreased, keeping it small and facilitating the search within feasible boundaries. Conversely, in case #2, when the best individual was always outside the feasible region, the penalty factor is increased significantly to guide the search process back into feasible territory. Otherwise, when the best individuals were sometimes feasible and sometimes infeasible, the value of λ(t) remains unchanged (Coello, 2000).

There are two basic disadvantages of adaptive penalty. Firstly, how do we define the value of k? Secondly, how do we define the β₁ and β₂ values? A misunderstanding of the attributes of the problem will certainly lead to inapplicable values for them, which implies an unfair penalty for individuals.
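The update of Equation (11) itself reduces to a few lines; the two booleans summarizing whether the best individual stayed feasible (or infeasible) over the last k generations are assumed inputs here:

```java
// Adaptive lambda(t) update per Equation (11) (Hadj-Alouane & Bean, 1997):
// ease the pressure after a feasible streak, raise it after an infeasible one,
// and leave it unchanged for a mixed history. Requires beta1, beta2 > 1, beta1 != beta2.
public class AdaptiveLambda {
    static double update(double lambda, boolean feasibleStreak,
                         boolean infeasibleStreak, double beta1, double beta2) {
        if (feasibleStreak)   return lambda / beta1;  // case #1: search inside F
        if (infeasibleStreak) return lambda * beta2;  // case #2: push back toward F
        return lambda;                                // otherwise: keep lambda(t)
    }

    public static void main(String[] args) {
        double lambda = 10.0;
        lambda = update(lambda, true, false, 2.0, 4.0);   // -> 5.0
        lambda = update(lambda, false, true, 2.0, 4.0);   // -> 20.0
        System.out.println(lambda);
    }
}
```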

Adaptive Penalty for Constraints Optimization

The technique of transforming constrained problems into unconstrained ones through penalty application is essential for managing the search process and maintaining feasibility. Adaptive penalty, a variant of dynamic penalty, utilizes information from the current population to determine a precise penalty factor. It is crucial that the search process remains within the established domain constraints. According to the adaptive penalty general equation, three scenarios can arise. First, when a feasible solution exists, the penalty weight λ(t) approaches zero, as division by β₁ applies a minimal penalty to the entire population. Second, in the absence of feasible solutions, a higher penalty (multiplication by β₂) is enforced to encourage movement towards feasible regions. Lastly, when both feasible and infeasible solutions are present, the value of λ(t) is retained from the previous iteration, indicating a gradual adjustment of individuals towards the desired feasible region.

The third state serves a significant purpose, particularly in balancing the penalty function value when it is reached inadvertently. The result tables in Chapter 5 will reveal that the adaptive method is not the most effective, with stochastic ranking outperforming it by prioritizing the penalty function over the objective function. From a critical perspective, when utilizing phenotypes to determine the feasibility of individuals, it is essential to recognize that individuals are represented in genotype space. Notably, the weight of the leftmost bit equals the sum of the weights of all the bits to its right plus one, which gives rise to difficulties such as the Hamming cliff. In summary, applying any measurement to the genotype can run into the Hamming cliff, whereas evaluating phenotypes raises the question of how many iterations are required to convert an undesired phenotype into an acceptable one. Figure 3.2.1 provides the pseudo code for the adaptive penalty approach.

Static Penalty for Constrained Optimization

Static penalty methods utilize a set of predefined penalty factors, which can be random or derived from initial data analysis, to influence the search process. These factors can either lead to infeasible solutions or maintain proximity to optimal regions. For instance, applying a 0.5 penalty factor while in a feasible region may result in moving towards a less feasible area. In contrast, adaptive penalty methods adjust the penalty coefficients based on the current population, potentially causing the algorithm to converge on local minima. In optimization problems with constraints, penalties are applied only to infeasible solutions, allowing feasible individuals to remain unchanged.

Input: population P before applying penalty; initial values of λ(t), β₁ and β₂

Output: population after applying the adaptive penalty

while population has more elements do
    xᵢ ← split chromosome (next element)
    attributes ← retrieve attributes (n variables)
    ⋮
    add new chromosome into temp population P'
od

Figure 3.2.1 Adaptive Penalty Algorithm Pseudo Code

After merging the two, we anticipate improved solutions compared to the previous ones. However, the critical question remains: what is the optimal value for the penalty factors? Ultimately, determining the precise and ideal value requires multiple trials, as guessing is not a viable approach.

while population has more elements do
    xᵢ ← split current chromosome
    attributes ← retrieve attributes (list)
    f(x) ← functionX(variables)
    calculate constraint values
    fitness(x) ← f(x) if the solution is feasible, otherwise K − s·(K/m) as in Equation (5)
    p(x) ← inverse equation
    convert p(x) to binary format
    add new individual into P'
od

Figure 3.2.2 Static Penalty Algorithm Pseudo Code

The analysis of the Static Penalty Algorithm, as depicted in Figure 3.2.2, demonstrates its superior performance compared to adaptive penalty methods. Despite some limitations, it consistently reaches better optimal solutions and avoids being trapped in local optima. Additionally, it satisfies a greater number of constraints across most tested problems, outperforming both the adaptive and stochastic approaches. In our study, the penalty values were determined based on Equation (5) for handling equality and inequality constraints.

Stochastic Ranking for Constrained Optimization

In the previous section, we explored the challenge of selecting an optimal penalty coefficient while ensuring that we navigate the feasible region without overlooking potential solutions in infeasible areas. Controlling convergence speed is crucial, as algorithms focused on feasible regions may struggle to revert to infeasible ones, since the penalized objective functions confine them within the current boundaries. This study adopts a stochastic ranking approach to balance objective and penalty functions, utilizing binary tournament selection to prioritize individuals based on feasibility. While alternative selection methods, such as roulette wheel selection, are available, they raise concerns about the frequency of selecting feasible individuals and the treatment of infeasible ones. Addressing these issues involves considering mating pool domination by top individuals, for which ranking is a well-established solution. For instance, genetic algorithms employ linear ranking to introduce selection pressure, impacting the overall search process, though this method is not applicable for handling constraints until the penalty factor has first been balanced.

3.4.1 Stochastic Ranking using Bubble Sort like procedure

The method developed by Runarsson and Yao (2000) addresses penalty constraints in optimization, where identifying an optimal penalty coefficient r_g has proven challenging. Disagreements exist regarding the balance between the objective function and the penalty function, which can lead to improved penalty values. Depending on the specific problem, elements must be arranged in either ascending or descending order. By establishing relationships between adjacent individuals, they introduced a more advanced approach to ensure feasibility and computational success. The critical penalty coefficient r̃ᵢ offers three arrangement options: individuals can be sorted by the objective function and its dominance, by the penalty function and its dominance, or, if r̃ᵢ equals zero, where no dominance exists. To tackle these complexities, stochastic ranking with a bubble-sort-like technique aims to balance the inequalities, determining the probability P_w of an adjacent individual winning the comparison.

In this framework the winning probability is P_w = P_fw · P_f + P_φw · (1 − P_f), where P_fw is the probability of winning based on the objective function and P_φw the probability of winning based on the penalty function. When both adjacent individuals are feasible, P_w equals P_fw. We manually initialized the probability P_f in our testing problems to ensure a balance between the objective and penalty functions when generating the next population. This probability was set to 0.475, as we aimed for P_w to approximate 0.5.

When P_f equals 0.5, neither the objective nor the penalty dominates the comparison. To better understand the relationship between winning and losing in a given number of comparisons, an equation can be formulated where n represents the total number of comparisons and k' denotes the total number of wins (Runarsson & Yao, 2000).

The ranking procedure follows the natural bubble sort algorithm, characterized by a complexity of T = 2n (Runarsson & Yao, 2000). An alternative method for performance comparison between algorithms will be introduced later, emphasizing the importance of the objective function's domain and points. A notable limitation of the bubble-sort-like procedure is its inability to efficiently relocate population elements, as it lacks local selection capabilities and requires an additional selection method for recovery. To address this issue, we implemented binary tournament selection in our study, which enhances selection pressure and allows infeasible candidates to contribute to the next generation.

Algorithm: Stochastic Ranking using bubble sort like procedure

for i ← 1 to N do
    for j ← 1 to P − 1 do
        sample u ← U(0, 1)
        if (φ(Iⱼ) = φ(Iⱼ₊₁) = 0) or (u < P_f) then
            if f(Iⱼ) > f(Iⱼ₊₁) then Swap(Iⱼ, Iⱼ₊₁) fi
        else
            if φ(Iⱼ) > φ(Iⱼ₊₁) then Swap(Iⱼ, Iⱼ₊₁) fi
        fi
    od
od

Figure 3.4.1.1 Stochastic Ranking Using Bubble Sort like Procedure
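A Java sketch of the same sweep for a minimization problem, with P_f = 0.475 as used in the study; holding the objective f and penalty φ in parallel arrays is an illustrative simplification:

```java
import java.util.Random;

// Stochastic ranking with a bubble-sort-like sweep (after Runarsson & Yao, 2000):
// adjacent individuals are compared on the objective when both are feasible or
// with probability Pf, and on the constraint violation phi otherwise.
public class StochasticRanking {
    static final double PF = 0.475;
    static final Random rng = new Random();

    static void rank(double[] f, double[] phi, int sweeps) {
        for (int i = 0; i < sweeps; i++) {
            boolean swapped = false;
            for (int j = 0; j < f.length - 1; j++) {
                boolean bothFeasible = phi[j] == 0 && phi[j + 1] == 0;
                if (bothFeasible || rng.nextDouble() < PF) {
                    if (f[j] > f[j + 1]) { swap(f, phi, j); swapped = true; }
                } else {
                    if (phi[j] > phi[j + 1]) { swap(f, phi, j); swapped = true; }
                }
            }
            if (!swapped) break;   // early exit, as in plain bubble sort
        }
    }

    static void swap(double[] f, double[] phi, int j) {
        double t = f[j]; f[j] = f[j + 1]; f[j + 1] = t;
        t = phi[j]; phi[j] = phi[j + 1]; phi[j + 1] = t;
    }

    public static void main(String[] args) {
        double[] f = {3.0, 1.0, 2.0};     // objective values (lower is better)
        double[] phi = {0.0, 0.5, 0.0};   // violations; the middle one is infeasible
        rank(f, phi, f.length);
        System.out.println(java.util.Arrays.toString(f));
    }
}
```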

Runarsson and Yao presented their algorithm for maximization, but minimization problems are handled effectively by the complement method. They proposed that the value u be generated from a uniform distribution within the range (0, 1); uniformity is not a concern in this study, as the Java Virtual Machine inherently provides a uniform-distribution random generator. Additionally, they recommended a fixed value of P_f = 0.475, which they found particularly effective for the problems at hand; consequently, we employed the same problems to evaluate our algorithms.

Summary

The penalty method is fundamental in transforming constrained problems into unconstrained ones, with a focus on dynamic, static, and adaptive penalties. While both penalty methods and stochastic ranking affect the search, they differ in approach: the former adjust objective values directly, whereas the latter ranks individuals without altering the function. These methods compete and exhibit distinct behaviors, which will be explored in the upcoming chapter. Each algorithm presents unique drawbacks and advantages, offering solutions with varying effectiveness and numbers of satisfied constraints. Additionally, the No Free Lunch theorem highlights that each algorithm operates differently within the same search space, emphasizing their diverse performances.

SIMULATION

System Environment

The three algorithms were evaluated on the same machine using identical parameters, yet they produced varying results, highlighting the necessity of consistent parameters for a fair comparison. The machine properties are detailed in Table 4.1.1, while the system architecture is designed in a pyramidal structure with the Genetic Algorithm (GA) forming its foundation, as depicted in Figure 4.1.1.

Table 4.1.1 PC configurations

System: Microsoft Windows
CPU: Intel(R) Atom™ CPU N270

[Figure 4.1.1: Adaptive Penalty, Static Penalty and Stochastic Ranking layered above the GA core]

The code was developed in Java, where complex data structures are used; hence, built-in references were used instead of user-defined pointers, as these are the only kind that exist in Java.

Table 4.1.2 GA and System parameters

Representation: Binary
Selection: Binary tournament selection
Replacement: Offspring inherit their parents
Number of iterations: 5000
Number of individuals: 100
Independent runs: 30

Figure 4.1.1 illustrates the overall system execution criteria, where every method has its own function while the rest remains the same.

The study used a fixed population of 100 individuals and a maximum number of function evaluations (FES) of 500,000. Each problem was assessed through 30 independent runs, with three checkpoints established at 5,000, 50,000, and 500,000 evaluations to illustrate the system's dynamics (Liang et al., 2006); statistical functions such as the mean, median, and standard deviation are applied to the resulting data. Finally, Table 4.1.2 shows the GA parameters and the overall system overview.


Tested Problems

g₉(x) = −2x₈ − x₉ + x₁₂ ≤ 0, where: (a) 0 ≤ xᵢ ≤ 1, i = 1,2,…,9,13; (b) 0 ≤ xᵢ ≤ 100, i = 10,11,12; six constraints are active (g₁, g₂, g₃, g₇, g₈ and g₉).

g₂(x) = Σᵢ₌₁ⁿ xᵢ − 7.5n ≤ 0, where: (a) n = 20 and 0 ≤ xᵢ ≤ 10, i = 1,2,…,n; constraint g₁ is close to being active.

Where: (a) 78 ≤ x₁ ≤ 102; (b) 33 ≤ x₂ ≤ 45; (c) 27 ≤ xᵢ ≤ 45, i = 3,4,5; two constraints are active (g₁ and g₆).

g₈(x) = −3x₁ + 6x₂ + 12(x₉ − 8)² − 7x₁₀ ≤ 0, where: (a) −10 ≤ xᵢ ≤ 10, i = 1,2,…,10; six constraints are active (g₁, g₂, g₃, g₄, g₅ and g₆).

g₆(x) = −x₃x₈ + 1250000 + x₃x₅ − 2500x₅ ≤ 0, where: (a) 100 ≤ x₁ ≤ 10000; (b) 1000 ≤ xᵢ ≤ 10000, i = 2,3; (c) 10 ≤ xᵢ ≤ 1000, i = 4,5,…,8; all constraints are active (g₁, g₂ and g₃).

Criteria for Assessment

In complex hyperspaces, constraints can sometimes be overly restrictive, resulting in a smaller feasible region. To evaluate whether a solution meets all constraints, a primary condition is established, which is stricter than in other studies that allow some constraints to remain unmet. This stringent approach often yields more reliable results; by comparison, other studies assign fixed values to assess constraint satisfaction, typically involving three specific conditions:

1. Constraint values greater than 1.

2. Constraint values greater than 0.01.

3. Finally, constraint values greater than 0.0001.

Overall, these relaxed conditions give algorithms more chances to do well, but on the other hand they can admit poorer solutions.

Certain problems exhibit spherical constraints, where the application of one constraint can lead to the discovery of another, depending on the distance from the sphere's origin. Problem six serves as a clear illustration of this concept, with Figures 4.3.1 to 4.3.3 depicting 3D graphs of the function and its constraints: the upper constraint, the function itself, and the lower constraint, respectively. These figures are organized in a consistent graphical order, highlighting the complexity of addressing constraints in hyperspace, where problems may extend to 20 or more dimensions. A significant challenge arises in determining the limits of given constraints to assess their satisfaction, compounded by the lack of software capable of visualizing beyond three dimensions while simultaneously illustrating the constraints alongside the function.

From these graphs we can imagine how much more complicated the problem becomes if we add a new variable or move beyond three dimensions.

feasible rate = (# feasible runs / total runs) × 100%

In our study, the feasible rate served as a reliable classifier for determining the preference between two distinct algorithms. We found that the consistency and feasibility rates varied across different problems and algorithms, indicating that not all problems were solved with the same level of effectiveness.

The quality of results can be defined in various ways, depending on the specific outcomes sought by researchers. In this study, we concentrate on the minimum value produced by the algorithm and the number of satisfied constraints.

No Free Lunch Theorem

Over the past few decades, numerous optimization algorithms, including black box methods like evolutionary computing and neural networks, have been developed. These algorithms are capable of effectively exploring the search space while requiring minimal information about the specific optimization problem they are applied to.

Evolutionary algorithms simulate natural selection processes, making it crucial to examine their relationship with optimization problems. The optimization community often employs oracle-based analysis, which evaluates algorithm performance based on the number of function evaluations needed to reach a solution. However, this approach has its drawbacks, as not all functions are designed to avoid revisiting previous points. Some algorithms repeatedly explore the same areas due to a lack of memory regarding past searches. By incorporating memory mechanisms, these inefficient functions can be improved, enhancing their overall effectiveness.

The No Free Lunch (NFL) theorem addresses combinatorial optimization, particularly over large and finite search spaces. In this context, the search space X and the cost value space Y complicate algorithmic analysis and filtering. The objective function f, which maps the search space to the cost values, is influenced by both time and size, emphasizing the complexity of optimizing in such scenarios.

In the realm of algorithm analysis, any two distinct algorithms A and B can have varying performance outcomes on particular problems; averaged over all possible problems, however, algorithm A performs exactly as well as algorithm B, even though B may excel on specific instances. The No Free Lunch (NFL) theorem therefore emphasizes the importance of evaluating algorithms beyond mere iteration counts, highlighting the need for a comprehensive analysis of their performance across different scenarios.

When addressing a specific problem, utilizing function application alongside machine learning can offer insights into performance, as opposed to traditional optimization methods, which rely on specific algorithms. The No Free Lunch (NFL) theorem allows for the assessment of algorithm A's performance across a range of problems. For example, consider the implementation of a simulated annealing function aimed at either minimizing or maximizing a given objective.

According to oracle-based optimization performance measurement, we would evaluate the execution time of the function f(x) by meticulously analyzing each line of code. The strength of the No Free Lunch (NFL) theorem lies in its ability to create a classifier that effectively maps inputs to outputs, allowing for generalizations regarding the performance of algorithms across various problem classes. In essence, NFL predicts whether algorithm A will outperform algorithm B in terms of accuracy and performance within the same problem class, although it does not guarantee this outcome across all classes.

Wolpert and Macready (1996) addressed the challenge of algorithm comparison by focusing on the number of distinct function evaluations, specifically counting the unique calls to the oracle base. They noted that the oracle base can be assessed using various criteria; for instance, in minimization problems, the criterion may involve identifying the lowest value of the cost function to evaluate performance effectively. Ultimately, they proposed a time-ordered sample set of size m, d = {d(1), d(2), …, d(m)}, where each point records the moment the algorithm generates a pair of a point and its cost.

An optimization algorithm maps a previously visited set of points to a single new point, establishing a crucial link between the algorithms and their cost functions, as highlighted by the No Free Lunch (NFL) theorem. This theorem asserts that an algorithm that excels in one category of problems will inevitably underperform in other problem classes, as demonstrated by Wolpert and Macready (1996).

Summary

We have established the foundational framework and parameters of our system to enhance the reliability of our results. By outlining our assessment criteria, we aim to achieve more dependable solutions. Additionally, our findings are framed by the No Free Lunch theorem, which allows us to effectively compare the different algorithms.

EXPERIMENTAL RESULTS AND ANALYSIS

Overview

In a controlled environment, twelve problems were evaluated using three different approaches: adaptive penalty, static penalty, and stochastic ranking. Each method commenced with a stochastic population and proceeded with the search based on its specific strategy.

Table 5.1.1 Number of variables and estimated ratio of feasible region

The fitness function involves calculating both the objective and penalty functions, particularly for the two penalty methods. In contrast, stochastic ranking utilizes these functions solely for sorting, without altering the solution's fitness. The objective function is derived from the benchmark definitions, while the penalty function represents the constraints associated with the problem. Table 5.1.1 shows the suite of functions and their corresponding feasible region ratios, where ρ denotes the estimated ratio between the feasible region (F) and the search space (S), and n indicates the number of dimensions (Liang et al., 2006).

5.2. Results Discussion

Three penalty methods were tested: static, adaptive, and stochastic ranking. The population size was fixed at 100 individuals and each run lasted 5000 generations, across twelve optimization problems taken from previously published studies. The problems differ in difficulty: some were not solved at all, and some would require different parameter settings to reach their optima; problem G 7 in particular suffered from insufficient parameter values. Some solutions were feasible, others were not, and the performance of the three methods varied. The equality constraints were reformulated as inequality constraints using a tolerance ε = 0.0001, and the absolute values of these constraints were incorporated into the penalty functions. Thirty runs were performed for each case, with three checkpoints monitored to assess the dynamics of the algorithms. A Genetic Algorithm (GA) served as the base system, extended by the three penalty methods, and for every problem the optimum solution, worst case, standard deviation, mean, median and feasible rate were recorded. Some problems exhibited large standard deviations, which may be attributable to the GA encoding strategy.
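The ε-relaxation of the equality constraints can be sketched as follows (a hypothetical helper, shown only to make the transformation concrete):

EPS = 1e-4  # the tolerance epsilon = 0.0001 used in the experiments

def relax_equalities(eq_constraints, eps=EPS):
    """Turn each equality h(x) = 0 into the inequality |h(x)| - eps <= 0,
    so that all constraints share the single form g(x) <= 0."""
    return [lambda x, h=h: abs(h(x)) - eps for h in eq_constraints]

After this transformation every constraint can be fed to the same violation measure used by the penalty functions.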

Table 5.2.1 shows the adaptive penalty testing results. Problems G 2, G 7, G 8, G 10 and G 12 had infeasible solutions; problem G 11, on the other hand, achieved good results.
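For reference, the row statistics reported in this and the following tables can be reproduced from the 30 runs with nothing beyond the standard library (a sketch; "runs" holds the best objective value of every run that ended feasible):

from statistics import mean, median, stdev

def summarize(runs, total_runs=30):
    # An empty list means no run produced a feasible solution.
    if not runs:
        return {"result": "Infeasible"}
    return {
        "best": min(runs),
        "median": median(runs),
        "mean": mean(runs),
        "STD": stdev(runs) if len(runs) > 1 else 0.0,
        "worst": max(runs),
        "feasible rate": "%.4f%%" % (100.0 * len(runs) / total_runs),
    }

Note that for a minimization problem "best" is the minimum and "worst" the maximum of the recorded values.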

Problem G 1 reached -7.404016358, a worse value than the static algorithm's -9.275285357. Problem G 4 satisfied all constraints with a best value of -30281.26967, yet fell short of the known optimum and exhibited a higher standard deviation than under the static algorithm. Problem G 11 performed very well with 0.7514802, matching the static algorithm, and showed good dynamics: a mean of 0.755743176, a standard deviation of 0.003979412 and a 100% feasible rate. Problems G 1, G 3, G 4, G 5, G 6 and G 9 produced solutions, but all of them were higher than the known optima of those problems.

In conclusion, the adaptive penalty left five problems unsolved, but achieved a high feasible rate on every problem it was able to solve, with the exception of problem G 9.

Table 5.2.1 Adaptive Penalty testing result

               G 1            G 2           G 3            G 4
best           -7.404016358   Infeasible    -0.931253421   -30281.26967
median         -6.297534029   Infeasible    -0.842859046   -30005.56388
mean           -6.363586238   Infeasible    -0.841116058   -30021.59748
STD            0.399727213    Infeasible    0.036221653    144.5448427
worst          -5.633156321   Infeasible    -0.779364494   -29661.57945
feasible rate  100.0000%      Infeasible    100.0000%      100.0000%

               G 5            G 6            G 7           G 8
best           5556.480063    -6182.583956   Infeasible    Infeasible
median         6280.456975    -4691.12557    Infeasible    Infeasible
mean           6233.58929     -4452.394377   Infeasible    Infeasible
STD            341.3571198    1634.529669    Infeasible    Infeasible
worst          6762.328196    -1671.319568   Infeasible    Infeasible
feasible rate  100.0000%      100.0000%      Infeasible    Infeasible

               G 9            G 10          G 11           G 12
best           1080.145469    Infeasible    0.7514802      Infeasible
median         112767.8806    Infeasible    0.755755479    Infeasible
mean           1043658.872    Infeasible    0.755743176    Infeasible
STD            2293734.51     Infeasible    0.003979412    Infeasible
worst          8526739.99     Infeasible    0.763475586    Infeasible
feasible rate  76.6667%       Infeasible    100.0000%      Infeasible

The feasible rate of problem G 9 was equal to 76.6667%, which may be due to factors that disturb the search process in that region. Table 5.2.2, discussed next, allows this behaviour to be assessed against the static penalty.

Table 5.2.2 presents the results of static penalty testing. Problems G 2, G 8 and G 12 ended with infeasible solutions. Problem G 7, which the other methods could not solve, was solved here, although with a very low feasible rate. The result for G 11 was good and equal to that achieved with the adaptive penalty.

Compared to Table 5.2.1, problem G 1 achieved a better value with the static penalty than with the adaptive penalty; however, it is still not the best known result. It was -9.275285357.

Problem G 4 yielded -30214.60354, a worse value than the adaptive solution. Problem G 11 achieved 0.7514802 with a more accurate mean and median, both equal to 0.7514802, and an improved standard deviation of 4.51681E-16. Problems G 7 and G 10 were solved only under the static penalty, but with poor values and a low feasible rate.

In conclusion, problem G 7 remains hard to solve because of its very small feasible region, as the feasible-region ratios in Table 5.1.1 suggest. Comparing against Tables 5.2.1 and 5.2.3, the static penalty outperforms both the adaptive penalty and stochastic ranking: it solves more problems and yields better solutions and better dynamics than the other algorithms.

Table 5.2.3 presents the outcomes of the stochastic ranking algorithm. Problems G 2, G 7, G 8, G 10 and G 12 again ended with infeasible solutions, just as under the adaptive penalty, while problem G 11 reached a good solution.

Table 5.2.2 Static Penalty testing result

               G 1            G 2           G 3            G 4
best           -9.275285357   Infeasible    -0.933743404   -30214.60354
median         -6.593175853   Infeasible    -0.908843579   -30214.32218
mean           -6.598004029   Infeasible    -0.910669567   -30212.26356
STD            0.995000984    Infeasible    0.008940373    11.32879254
worst          -4.11792712    Infeasible    -0.893903685   -30152.28217
feasible rate  100.0000%      Infeasible    100.0000%      100.0000%

               G 5            G 6            G 7           G 8
best           5189.629255    -6335.307512   3299.438956   Infeasible
median         5576.881075    -5394.596871   3299.438956   Infeasible
mean           5560.748445    -5690.293922   3299.438956   Infeasible
STD            207.4249675    443.5371583    0             Infeasible
worst          5965.711937    -5359.73563    3299.438956   Infeasible
feasible rate  100.0000%      100.0000%      3.3333%       Infeasible

               G 9            G 10          G 11           G 12
best           733.3841424    12673.14241   0.7514802      Infeasible
median         835.4437699    20123.15371   0.7514802      Infeasible
mean           841.6365757    19184.00428   0.7514802      Infeasible
STD            60.64115782    4519.533281   4.51681E-16    Infeasible
worst          974.1190833    28414.08512   0.7514802      Infeasible
feasible rate  100.0000%      63.3333%      100.0000%      Infeasible

Stochastic ranking shows weaker dynamics in solving problem G 11 than the methods in Tables 5.2.1 and 5.2.2, while maintaining the same feasible rate. Although it underperformed the preceding methods here, its effectiveness might improve with different parameters or a different representation. Problem G 1 exhibits the worst performance among the three methods, and problems G 4, G 5 and G 9 show a similar trend. Notably, problem G 5 has a particularly low feasible rate of just 6.6667%.

In conclusion, the stochastic ranking method faces two primary issues here: poor standard deviations and a lower success rate than the alternative methods. Its appeal over the two penalty algorithms is that it eliminates the penalty factors entirely and relies solely on the comparison criteria when ranking individuals.
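A compact sketch of that ranking step, following the bubble-sort-like procedure of stochastic ranking (parameter names are illustrative; pf, the probability of comparing by objective value even between infeasible individuals, is commonly set around 0.45):

import random

def stochastic_rank(pop, f, phi, pf=0.45):
    """Order pop by a bubble-sort-like sweep: compare adjacent individuals
    by objective f when both are feasible or with probability pf,
    otherwise by total constraint violation phi (minimization assumed)."""
    n = len(pop)
    for _ in range(n):
        swapped = False
        for i in range(n - 1):
            a, b = pop[i], pop[i + 1]
            if (phi(a) == 0 and phi(b) == 0) or random.random() < pf:
                swap = f(a) > f(b)          # rank by objective value
            else:
                swap = phi(a) > phi(b)      # rank by violation
            if swap:
                pop[i], pop[i + 1] = b, a
                swapped = True
        if not swapped:                      # stop early when already sorted
            break
    return pop

Because no penalty coefficient appears anywhere in this procedure, the method sidesteps the problem of tuning penalty factors altogether.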

Table 5.2.3 Stochastic Ranking testing result

               G 1            G 2           G 3            G 4
best           -2.191417933   Infeasible    -0.931253421   -30178.98389
median         -1.069096014   Infeasible    -0.796794371   -29639.23488
mean           -1.157421311   Infeasible    -0.800529345   -29608.7686
STD            0.429445372    Infeasible    0.049871505    318.0689861
worst          -0.503937008   Infeasible    -0.722094899   -28714.93996
feasible rate  100.0000%      Infeasible    100.0000%      86.6667%

               G 5            G 6            G 7           G 8
best           6475.340174    -6182.275629   Infeasible    Infeasible
median         6790.845141    -6181.034864   Infeasible    Infeasible
mean           452.7230094    -5053.570525   Infeasible    Infeasible
STD            446.1914042    1892.998669    Infeasible    Infeasible
worst          7106.350109    -1400.92641    Infeasible    Infeasible
feasible rate  6.6667%        23.3333%       Infeasible    Infeasible

               G 9            G 10          G 11           G 12
best           1774.973383    Infeasible    0.751910804    Infeasible
median         109144.2174    Infeasible    0.783098808    Infeasible
mean           1291656.182    Infeasible    0.792522876    Infeasible
STD            2335055.026    Infeasible    0.033428282    Infeasible
worst          7782401.812    Infeasible    0.885890042    Infeasible
feasible rate  43.3333%       Infeasible    100.0000%      Infeasible

5.3. Result Comparison

Table 5.3.1 shows the best value obtained by each algorithm. We can recognize from the table that problems G 2, G 7, G 8, G 10 and G 12 were not solved by the adaptive penalty and stochastic ranking.

Table 5.3.1 Algorithms Best Result Comparison

Table 5.3.1 indicates that the static penalty method was the most consistent, solving the largest number of problems, nine in total, while the adaptive penalty and stochastic ranking each left five problems unsolved. The data reveal that no single method can address all problems; each algorithm has its own strengths and finds its best solutions on specific challenges.

In conclusion, this comparison reinforces the No Free Lunch theorem, which states that no algorithm is superior on all problems.

5.4. Convergence Map

We established three checkpoints at 5,000, 50,000 and 500,000 FES, the last being the maximum number of function evaluations. All test cases are expected to exhibit a consistent convergence pattern for two reasons. First, we start from a stochastic population and then apply the corresponding method's operations, so the solution in hand is either enhanced or, at the least, the best known solution found so far is retained. Secondly, according to the No Free Lunch theorem, an algorithm able to solve a given class of problems will behave consistently across that class.

Table 5.4.1 Error achieved when FES equal to 5000, 50000 and 500000

        adaptive      static        stochastic
STD     0.003979412   4.51681E-16   0.033428282
worst   0.763475586   0.7514802     0.885890042

Table 5.4.1 describes the error achieved at the individual FES checkpoints for problem G 11, where C is the number of violated constraints and V is the mean value of the violations.
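The bookkeeping behind these error records can be sketched as follows (illustrative names; F_STAR is a placeholder for the best known objective value of the benchmark problem):

CHECKPOINTS = (5000, 50000, 500000)  # FES values at which the error is logged
F_STAR = 0.75                        # placeholder: best known value for G 11

def run_with_checkpoints(step_generation, evals_per_generation, best_value):
    """Run one GA and record the error |f_best - f*| at every checkpoint.
    step_generation() advances the GA by one generation;
    best_value() returns the best objective value found so far."""
    errors, fes = {}, 0
    while fes < CHECKPOINTS[-1]:
        step_generation()
        fes += evals_per_generation
        for cp in CHECKPOINTS:
            if cp not in errors and fes >= cp:
                errors[cp] = abs(best_value() - F_STAR)
    return errors

With a population of 100 this corresponds to logging at generations 50, 500 and 5000.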

We can recognize only small differences between the best results of the three methods; on the other hand, they varied noticeably in standard deviation.

In our analysis of problem G 11, we found that only one constraint needed to be satisfied for the solution to be deemed feasible The three algorithms employed demonstrated similar performance, all yielding feasible solutions As detailed in Table 5.4.1, we recorded the error values for FES set at 5000, 50000, and 500000 (Liang et al., 2006) These checkpoints were established to examine the algorithms' dynamics and to explore their internal navigation and convergence behavior.

The results indicate that the static penalty performed best, reaching 0.7514802 at the first checkpoint and holding it up to the maximum FES; it also led on the mean, median and worst-case metrics, with an enhanced standard deviation of 4.51681E-16. The stochastic ranking recorded the highest (worst) best value and the poorest standard deviation. The adaptive penalty performed moderately, holding a standard deviation of 0.003979412 from the first checkpoint through 500,000 FES.

The standard deviation gives us information about the convergence of an algorithm and about its ability to solve the problem coherently. However, an enhanced standard deviation only indicates that the dynamics stay within the feasible region. Figures 5.4.1, 5.4.2 and 5.4.3 show the convergence maps, illustrating each algorithm's development and the optimization of the objective function against the number of iterations.

Figure 5.4.1 Adaptive Penalty Convergence Map

Figure 5.4.2 Static Penalty Convergence Map

Figure 5.4.1 shows that the adaptive penalty reached its best performance in the 18th generation, although its curve did not match the expected logarithmic shape. The static penalty, shown in Figure 5.4.2, reached its best performance in the 24th iteration, with a more favourable logarithmic shape and a lower minimum value. Figure 5.4.3 presents the stochastic ranking convergence graph, which follows the expected shape most closely and converged by the 4th iteration.

In conclusion, stochastic ranking was the most effective method in terms of curve shape and number of iterations to converge; however, it produced the least favourable best value.

Figure 5.4.3 Stochastic Ranking Convergence Map

5.5. Summary

In summary, it is essential to recognize that no penalty method, whether static, adaptive, or stochastic ranking, offers a comprehensive solution. Each approach has its own strengths and weaknesses, which highlights the difficulty of ranking them outright.

The analysis revealed that the static penalty solved nine problems, while the adaptive and stochastic methods solved seven each, albeit in different orders of quality. Notably, ranking individuals significantly helped the search process, particularly for problem G 5, showing an effect equivalent to that of the penalty method. All algorithms were implemented on top of a Genetic Algorithm (GA) with individuals encoded as binary strings; the variability in the results is attributable not to an inherent limitation of the GA, but to the difficulty of the problems, whose small feasible regions sit inside vast search spaces. For instance, problem G 5 has variables ranging from 0 to 1200, while others vary between -0.5 and 0.5. In addition, each algorithm's distinct criteria contributed to the differing outcomes, a phenomenon consistent with the No Free Lunch theorem.

CHAPTER 6 CONCLUSION REMARKS
