Problem statement
Optimization is one of the classical topics in mathematics that influences most areas of social life. In fact, finding the optimal solution to a problem is at the heart of any decision-making issue. Decision-making usually requires choosing between different alternatives. The optimal plan is the most reasonable one: cost- and resource-saving but highly effective [18].
Single-objective optimization problems (single-problems) [21] involve optimizing one objective function; hence, the task of seeking the optimal solution is called single-objective optimization (SOO) [22]. Characteristics of the problem might be unconstrained or constrained, single- or multiple-variable, linear or non-linear, etc. Based on these characteristics, many methods have been proposed to solve such problems, to name a few: gradient, simplex or non-simplex, and local or global search methods [18,72,73]. In some tough cases, gradient-based and heuristic-based search techniques are applied to approximate the solutions. Deterministic and stochastic search principles allow optimization algorithms to find global solutions more reliably. Evolutionary algorithms (EAs) and simulated annealing, which imitate natural and physical phenomena, are among those algorithms.
Car-buying is an example of a problem that involves more than one objective function (Figure 0.1). A decision for it is made based on two objectives (or two criteria):
The cost of a car ranges from about ten thousand to a few hundred thousand dollars. If the cost is the only objective, solution (1), at ten thousand dollars, is the optimal choice. If this were optimal for all buyers, they would all purchase this car, and no manufacturer would produce more expensive cars. Barring some exceptions, cheap cars are likely to be less comfortable than expensive ones. For the rich, comfort is often the objective of decision-making; as a result, they may choose comfortable cars. The middle class may choose a car in between [52].
Problems like buying a car, in which several objectives are simultaneously considered, are called multi-objective optimization problems (multi-problems); the task of finding one or more optimal solutions for multi-problems is known as multi-objective optimization (MOO) [22]. The objectives are often in conflict with each other, which implies that one objective cannot be improved without the deterioration of at least another one. Another issue is that different objectives are measured in different units; in other words, these objectives are non-commensurable, or have no common standard of measurement [21,65]. Figure 0.1 reveals four buyers' solutions. While the cost axis is given in thousands of dollars, the comfort axis is given in percentages. The space composed of these axes is the objective space (or criterion space); its axes represent the objectives of multi-problems [21,22].
A preference relation is required to compare solutions in the objective space in order to guide the search for optimal solutions. In SOO, the relation "less than or equal to" (or "greater than or equal to") is used for minimization (or maximization) single-problems; therefore, the result of the search is a single optimal solution which is less than or equal to (or greater than or equal to) any other solution. In contrast, in MOO, solutions are compared in the objective function space, which is composed of vectors of objective function values. The common preference relation used is the Pareto dominance relation instead of the "less than or equal to" relation. As a result, a set of solutions represents different trade-offs among objectives. Those solutions are called Pareto optimal solutions, or the Pareto optimal set (PS). The curve formed by joining these solutions is known as the Pareto optimal front (PF). Solutions A, B, C, 1, 2, shown as examples in Figure 0.1, are parts of the PF [22].
In MOO, progressing towards the PS is certainly an important goal; maintaining a diverse set of those solutions is also essential. Note that, in SOO, there is only one search space, which is called the decision variable space. An algorithm works in this space by accepting or rejecting solutions based on their objective function values. In MOO, in addition to the decision variable space, there also exists the objective space. Although these two spaces are related by a unique mapping between them, the mapping is often non-linear, and the properties of the two search spaces are not similar to each other [22].
Moreover, there exist many types of multi-problems [65]. They are divided into constrained and unconstrained categories. They are also categorized as linear or non-linear, as well as convex or non-convex. Different optimization methods are proposed for solving each specific type of multi-problem.
There are three well-known approaches [22]. Combining all objectives into one objective and then utilizing existing SOO methods is the first. Optimizing one objective at a time, considering the others as constraints, and then also using existing SOO methods is the second. The third well-known method, which is stochastic and simulates natural or physical models, is using EAs. EAs for solving multi-problems are called multi-objective evolutionary algorithms (multi-algorithms) or evolutionary multi-objective optimization (EMO) algorithms. Non-dominated sorting genetic algorithm II (NSGA-II) [27] is an example of multi-algorithms. Multi-algorithms work with a set of solutions and evolve it for a number of generations in a single run. They use the Pareto dominance relation to compare and rank solutions; the good solutions are retained and used for recombination in order to create new ones closer to the PS. The result is also a set of solutions. Almost all multi-algorithms successfully solve multi-problems with a small number of objectives, such as two (bi-objective) or three (tri-objective). When multi-problems have more than three objectives, they are considered many-objective optimization problems (many-problems). When solving many-problems, multi-algorithms face several difficulties [45,50,56,100]: deterioration of the search ability, dimensionality and visualization of the PF, and evaluation of the performance of algorithms. The details are shown in the next section.

Motivation
Although multi-algorithms have shown remarkable performance on numerous problems with two or three objectives, the performance of Pareto-dominance-based multi-algorithms degrades severely when solving many-problems, that is, when the number of objectives increases [50,56]. There exist four major challenges facing multi-algorithms on many-problems with a high number of objectives:
(1) Deterioration of the search ability: Using the Pareto dominance relation to determine which solutions are better for the next generation is inefficient, since a large portion of the population becomes equally good when solving many-problems [25], even at early stages, as shown in Figure 0.2 [36].
(2) Dimensionality: In order to represent the PF, the size of the population should be enlarged exponentially, since an increase by one objective will, in general, cause the PF's dimension to increase by one. Thus, if N points are needed to represent the PF of a 2-objective problem in an adequate manner, approximately N^(m−1) points are needed to represent the PF of an m-objective one [25,56].
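To make this growth concrete, here is a back-of-envelope sketch of the formula above; the base value N = 100 is an illustrative assumption, not a figure from the thesis:

```python
# Number of points needed to represent an m-objective PF with comparable
# resolution, assuming N points suffice for a 2-objective front: N^(m-1).
N = 100  # hypothetical size of an adequate 2-objective representation
for m in range(2, 6):
    print(f"{m} objectives -> about {N ** (m - 1)} points")
```

Even under this modest assumption, a 5-objective front would need on the order of a hundred million representative points.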
(3) Visualization: Once a PS is obtained, decision makers choose the final solution. At this stage, visualization of solutions becomes very important. Even though some methods have been proposed, there is still a need for a straightforward, intuitive method for presenting solutions in spaces of more than three dimensions. Therefore, decision makers still have difficulty opting for the final solution [5,12,25].
(4) Performance evaluation: If distance-based performance measures such as generational distance (GD) [86] and inverted generational distance (IGD) [20] are used, a huge number of solutions in the PS is required to achieve reliable results. If the hypervolume indicator (HV) is utilized, it requires a large computational complexity [7,12].
[Figure 0.2: percentage of Pareto non-dominated solutions.]

To cope with the difficulties of many-problems, numerous many-objective evolutionary algorithms (many-algorithms) have been proposed. A many-algorithm can fall into one of five categories. The first category uses a relaxed-dominance-based technique to differentiate among non-dominated solutions and enhance the selection pressure towards the PF [24,30,93].
The second category improves the performance by diminishing the adverse impact of diversity maintenance [2,58]. The third category utilizes objective values or objective ranks of individuals in an aggregation in order to compare solutions [48,55]. Since the approximation set is evaluated by using indicator values, the fourth category employs these indicator values to guide the search as well [3,6,37]. The fifth category directs the search by using a set of reference solutions which are used to measure the quality of solutions [23,89].
Although many-algorithms are powerful for dealing with many-problems, they still have to face "the curse of dimensionality". Numerous practical optimization problems can easily involve a large number of objectives (often more than ten), as numerous distinctive objectives or criteria are often of practitioners' interest [7]. In most instances, it is not entirely sure whether the chosen objectives are all in conflict with each other or not. For example, the maximization of the functionality objective and the performance objective of the car-buying problem are two conflicting objectives.
A question arising is whether all objectives are actually necessary, or whether some may be omitted without, or with only slightly, changing the problem characteristics. Methods for objective reduction are helpful for both decision making and search. On the one hand, the decision makers would have to consider fewer objective values per solution, and the number of non-dominated solutions is likely to decrease. On the other hand, the search algorithms may work more efficiently and require less computation.
Aim and objectives of the study
Aim of the study
Objectives of the study
In order to achieve the aforementioned aim, the study is designed to fulfill the following objectives:
• Investigating the efficiency of combining many-algorithms with an objective dimensionality reduction (ODR) in solving many-problems.
For this objective, linear principal component analysis (L-PCA) is used to reduce the dimensionality of the non-dominated sets that are generated by many-algorithms. First, in order to examine how multi-algorithms/many-algorithms affect the performance of an ODR, the thesis designs experiments to compare the performance of an ODR when being combined with either multi-algorithms or many-algorithms. Second, to check whether a many-algorithm can obtain advantages when it is combined with an ODR, the thesis designs experiments to compare the performance of a many-algorithm integrated with an ODR against the many-algorithm alone.
• Proposing methods that can eliminate the redundant objectives when solving many-problems by using machine learning algorithms.
The first proposed method uses many-algorithms to generate the complete PF. The objectives in many-problems are regarded as objects (or points) in space. The Partitioning Around Medoids (PAM) algorithm is employed to cluster these objects, and then one object per cluster is retained. The retained objects are the essential objectives of the problem.
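The idea can be sketched as follows. The thesis's algorithm selects the number of clusters with the silhouette index; this simplified sketch instead fixes k, initializes medoids deterministically, and uses 1 − correlation as the distance between objectives, so that duplicated objectives are close while conflicting ones are far apart. The toy front and all helper names are illustrative assumptions, not the thesis's implementation.

```python
def corr(a, b):
    """Pearson correlation of two equally long sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def pam(dist, k, iters=100):
    """Tiny Partitioning Around Medoids over a precomputed distance matrix."""
    medoids = list(range(k))  # deterministic initialization for the sketch
    for _ in range(iters):
        # assign every non-medoid point to its nearest medoid
        clusters = {m: [m] for m in medoids}
        for i in range(len(dist)):
            if i not in medoids:
                clusters[min(medoids, key=lambda m: dist[i][m])].append(i)
        # move each medoid to the cluster member minimizing total distance
        new = [min(c, key=lambda j: sum(dist[j][p] for p in c))
               for c in clusters.values()]
        if set(new) == set(medoids):
            break
        medoids = new
    return sorted(medoids)

# Toy non-dominated set over 4 objectives: f2 is a scaled duplicate of f0,
# f1 conflicts with f0, and f3 is positively correlated with f0.
F = [(0.00, 1.00, 0.0, 0.1),
     (0.25, 0.75, 0.5, 0.5),
     (0.50, 0.50, 1.0, 0.2),
     (0.75, 0.25, 1.5, 0.9),
     (1.00, 0.00, 2.0, 0.4)]
objectives = list(zip(*F))  # one row per objective
D = [[1.0 - corr(a, b) for b in objectives] for a in objectives]
print(pam(D, k=2))
```

On this toy data the two retained medoids are the conflicting objective f1 together with one of the duplicated pair {f0, f2}, which is exactly the reduction behaviour the paragraph describes.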
The second one takes advantage of the Pareto corner search evolutionary algorithm (PCSEA) to generate only several parts of the PF, then uses machine learning algorithms, such as linear principal component analysis and the k-means or DBSCAN clustering algorithms, to keep essential objectives and discard redundant ones.
Research questions
In order to achieve the aforementioned objectives, the three following research questions were proposed:
(1) What are the impacts of using multi-algorithms/many-algorithms in combination with an ODR for solving many-problems?
Contributions
First, the thesis validates that the performance of objective reduction strongly depends on which multi-algorithms/many-algorithms generate the non-dominated solution sets. It shows that many-algorithms give better results than multi-algorithms when combined with an ODR. It also reveals that the combination gives better results than the many-algorithm does alone, and it demonstrates that combining with an ODR to remove redundant objectives can significantly improve the performance of many-algorithms. This contribution has been published in [J1] (listed on p. 113).
Second, the thesis proposes a complete PF-based ORA. The algorithm utilizes the non-dominated set generated by many-algorithms, then uses the PAM algorithm to remove redundant objectives in many-problems. To determine the number of clusters, the algorithm uses the silhouette index [75]. This contribution has been published in [J2] (listed on p. 113).

Third, the thesis proposes two partial PF-based ORAs. The algorithms employ PCSEA to obtain a partial PF, namely only the "corner" solutions of the PF. Based on these solutions, machine learning algorithms are then used to remove the redundant objectives. The results show that the integration of PCSEA and ODRs gives better performance than other ORAs in finding the essential objectives and eliminating redundant ones for problems with a large number of objectives. This contribution has been published in [J3], [C1] and [C2] (listed on p. 113).

Structure of the thesis
• Chapter 1. This chapter is devoted to surveying and investigating some background literature and related works. First, the background includes the definition of optimization problems and methods for solving these problems. It also presents evolutionary algorithms (EAs), along with their classification and advantages. Multi-problems, together with approaches to solving them and methods for presenting non-dominated solutions, are introduced in this chapter. Three approaches included in the chapter are incorporating all objectives into one and then optimizing this derived objective, optimizing the objectives one by one, and using EAs. Three clustering algorithms and principal component analysis (PCA) are also mentioned.
Besides, several related works are reviewed in this chapter. Many-problems, with their difficulties and approaches to tackling these difficulties, are also reviewed. Accordingly, objective reduction is an approach to reducing or alleviating the difficulties. In reality, there exist problems with many objectives for which it is not known whether all of the objectives are essential or not. Instead of solving all objectives of the problem, ORAs are used to eliminate the redundant objectives. As a result, the problems become solvable by multi-algorithms, or can be solved more efficiently by many-algorithms.
There are several ways to classify ORAs, such as feature selection or feature extraction, online or offline, Pareto-dominance-based or correlation-based, and without-error or allowing-error objective reductions. This thesis divides objective reductions into the complete PF-based and the partial PF-based objective reductions. Those algorithms are presented and analyzed in this chapter.
• Chapter 2. First, the method (in section 2.1 on p. 51) investigates the impact of many-algorithms on an ORA and vice versa. Second, the COR algorithm (in section 2.2 on p. 63) utilizes many-algorithms to generate the complete PFs, then employs a clustering algorithm (namely PAM) to cluster them into partitions, in which the number of clusters is automatically determined by using the silhouette index; after clustering, each partition keeps one objective and discards the other ones.
• Chapter 3. The partial PF-based objective reduction algorithms utilize PCSEA to generate a partial PF, which is then used for objective reduction. L-PCA and two clustering algorithms, namely k-means and DBSCAN, are used for objective reduction.
This chapter provides background and some related works. First, the background includes an overview of optimization, evolutionary algorithms (EAs), multi-objective optimization, and several machine learning algorithms utilized in this thesis. Second, the related-work section presents many-objective optimization and objective reduction algorithms (ORAs) for solving problems which contain redundant objectives.
Optimization is central to many areas, including decision making, whether in planning or in manufacturing. Different alternatives are under consideration, and then the "best" solution is chosen. The degree of goodness of the alternatives is depicted by an objective function.
Many methods have been developed for solving optimization problems [18,72,73]. However, the characteristics of the problem, such as linear/non-linear or continuous/discrete, strongly affect the effectiveness of the method. With many advantages [1,22], EAs are popularly applied for solving practical optimization problems.
In real life, problems usually involve at least two objectives which are in conflict with each other. Car-buying (on page 2) is an example of this type. It is clear that there is no solution which satisfies all objectives; those problems are called multi-problems. To date, many multi-objective evolutionary algorithms (multi-algorithms) have been introduced. Besides intro-
Background
Optimization
Definition 1.1 (Optimization problem) [18]: A general optimization problem is defined as

    minimize f(x)
    subject to x ∈ Ω    (1.1)
The function f(x): R^n → R, which is real-valued, needs to be minimized and is termed the objective function or cost function. The vector x is an n-vector of independent variables: x = [x1, x2, ..., xn]^T ∈ R^n. The variables x1, x2, ..., xn are often considered as decision variables, and Ω is a subset of R^n space, called the constraint set or feasible set.
Problem 1.1 is also called the mono-objective optimization problem or the general single-objective optimization problem (single-problem) [21,78]. There are also problems that require maximization of the objective function; however, they can be represented equivalently in minimization form, as maximizing f(x) is equivalent to minimizing −f(x). Therefore, attention is paid to minimization problems without any loss of generality [18].
There are several ways to classify optimization problems. First, they can be categorized in terms of constraints: if there are no constraints, they are called unconstrained optimization problems; in contrast, if there are any constraints, they are called equality-constrained or inequality-constrained ones [94]. Second, optimization problems can also be divided based on the actual function forms of the objectives and of their constraint functions: if all constraints and objective functions are linear, the problem is linear, too; otherwise, it is nonlinear [72,94]. Third, depending on the values permitted for the design variables, optimization problems can be classified as integer or real-valued ones [72].
Various methods have been proposed for dealing with different kinds of optimization problems. They can be categorized as deterministic or non-deterministic algorithms. In general, deterministic algorithms follow rigorous procedures, repeating the same path every time and providing the same solution in different runs. Most of these algorithms need the gradient information of the objective function and constraints, and a suitable initial point [18,72].
In contrast to deterministic algorithms, nondeterministic or stochastic algorithms exhibit some randomness and might produce different solutions in different runs. The advantage of these algorithms is that they explore several regions of the search space at the same time, and have the ability to escape from local optima and reach global optima. Therefore, these algorithms are more capable of handling complex problems, such as multi-modal or NP-hard ones [78].
The stochastic algorithms can be divided into two groups. The first group, which is neighborhood-based, uses a single initial solution and guides the search through the search space towards the optimum. Simulated annealing and Tabu search are two examples belonging to the first group [78]. The second group, which is population-based, uses a population of initial solutions and guides the search according to the relative efficiencies of the observed function values. EAs and particle swarm optimization are placed in the second group [78]. EAs will be described in more detail in the following paragraphs.
Evolutionary algorithms. Evolutionary computing (EC) is a research area within computer science. It draws inspiration from the process of natural evolution [31]. It is a subclass of biology-based algorithms, which is considered nature-inspired computing [94]. The algorithms involved in EC are termed EAs [31]. An EA is inspired by the mechanisms of biological evolution, such as reproduction, mutation, recombination, and selection. A set of candidate solutions is created randomly, and a quality function, in the form of an abstract fitness value, is applied to them. In the next generation, better candidates are selected on the basis of the fitness value, and new candidates are created by applying recombination and/or mutation to the selected ones. Recombination is a binary (or higher-arity) operator: it is applied to two or more selected candidates, known as parents, to generate new offspring. By contrast, mutation is applied to only one candidate, which gives birth to one new child. After recombination or mutation is executed, a set of new candidates is generated and evaluated for fitness. This is an iterative process, which continues until a sufficiently good-quality candidate is found.
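The loop described above can be sketched as a minimal EA. The quality function, population size, variable range, and mutation scheme below are illustrative assumptions rather than a specific algorithm from the literature:

```python
import random

def evolve(fitness, n_vars, pop_size=20, generations=100, seed=1):
    """Minimal elitist EA minimizing `fitness` over the box [-5, 5]^n_vars."""
    rng = random.Random(seed)
    # random initial population of candidate solutions
    pop = [[rng.uniform(-5, 5) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = rng.sample(pop, 2)                    # parent selection
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # recombination
            child[rng.randrange(n_vars)] += rng.gauss(0, 0.1)  # mutation
            offspring.append(child)
        # survivor selection: keep the best pop_size of parents + offspring
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
    return pop[0]

# Example: minimize the sphere function; the optimum is the zero vector.
best = evolve(lambda x: sum(v * v for v in x), n_vars=3)
print(sum(v * v for v in best))
```

After 100 generations the best candidate's fitness is far below that of a random initial point, illustrating how selection plus variation drives the population towards the optimum.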
EAs have widely been used in applications solving many complex problems thanks to three main advantages over classical search and optimization techniques. First, EAs are conceptually simple and flexible [1]; they are relatively easy to use and implement [22,19]. Second, EAs can be applied to any problem that can be formulated as a function optimization task [22,19]. Third, since EAs are population-based techniques, it is possible for them to manage a set of solutions at a time in a single run [19]; hence, they can use parallel processing power to gain a computationally quick overall search [22].
Multi-objective optimization
Definition 1.2 (Multi-objective optimization problem): A multi-problem is demonstrated as follows [65]:

    minimize f(x) = [f1(x), f2(x), ..., fM(x)]^T
    subject to x ∈ Ω    (1.2)

where there are M objective functions, each a mapping from R^n to R, which are in conflict with each other and need to be minimized simultaneously. The vector x = (x1, x2, ..., xn)^T is the decision vector, and Ω is a subset of R^n space, called the decision variable space (or decision space). The image of the feasible region, Z = f(Ω), is called the feasible objective region. It is a subset of R^M space, which is called the objective space. Elements of Z are called objective (function) vectors, denoted by f(x) = [f1(x), f2(x), ..., fM(x)]^T, where fi(x), for all i = 1, ..., M, are objective (function) values. The mapping between an n-dimensional decision vector and an M-dimensional objective vector is shown in Figure 1.1 as an example.

Figure 1.1: Mapping between the decision space and the objective one [52]

The word "minimize" implies minimizing all objective functions (objectives for short) at the same time. In the event of no conflict between objectives, a solution can be found where each objective reaches its optimum; in this case, no special methods are required. To avoid such trivial cases, it is hypothesized that there is no single solution that reaches the optimum of every objective. In other words, the objectives are, at least partly, in conflict with each other or incommensurable [65].
Definition 1.3 (Conflict): Two objectives i and j are conflicting if there exist at least two solutions such that one has a better i-th objective value and a worse j-th objective value than the other, and vice versa [71].
Definition 1.4 (Pareto dominance [52]): A solution x1 is said to dominate another solution x2 if both of the following conditions are true: (1) x1 is no worse than x2 in all objectives, and (2) x1 is strictly better than x2 in at least one objective. If either of the above conditions is violated, solution x1 does not dominate solution x2. If x1 dominates x2, then x2 is said to be dominated by x1.
Definition 1.5 (Non-dominated set of solutions): Among a set of solutions P, the non-dominated set P′ contains the solutions that are not dominated by any member of P [22].
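Definitions 1.4 and 1.5 translate directly into code. The following is a generic sketch for minimization problems; the function names are illustrative, not from the thesis:

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (all objectives minimized):
    f1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def non_dominated(front):
    """Return the members of `front` not dominated by any other member."""
    return [p for p in front if not any(dominates(q, p) for q in front)]

# Toy bi-objective vectors: (3, 4) and (5, 5) are dominated by (2, 2) and (1, 5).
P = [(1, 5), (2, 2), (3, 4), (4, 1), (5, 5)]
print(non_dominated(P))   # → [(1, 5), (2, 2), (4, 1)]
```

Note that `dominates(p, p)` is false for any point, so each solution survives its comparison against itself in `non_dominated`.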
Definition 1.6 (Pareto optimality): A solution x∗ ∈ Ω is said to be Pareto optimal if there is no other feasible solution that dominates it. In other words, this definition says that x∗ is Pareto optimal in the event that there does not exist a feasible vector x which would diminish some objective without causing a simultaneous increase in at least one other objective.
Definition 1.7 (Pareto Optimal Set): For a given multi-problem f(x), the Pareto Optimal Set, P, is defined as P = {x ∈ Ω | ¬∃ x′ ∈ Ω : f(x′) ⪯ f(x)} [21].
Definition 1.8 (Pareto Front): For a given multi-problem f(x) and Pareto Optimal Set P, the Pareto Front is defined as PF∗ = {f(x) | x ∈ P}.
The PF∗ is also known as the true Pareto front, PF_true, or the theoretical Pareto front [95]. It can be computationally expensive, and often infeasible or even impossible, to obtain PF∗ [68,103]. In practice, optimizers often generate a non-dominated set which is "close to" PF∗. This non-dominated set is also called the approximation set, and is referred to as PF_know, the non-true Pareto front, the approximation front [95], or the approximation of the PF [79].
Figure 1.2 shows, in the objective space, a population P of 11 solutions for a multi-problem which has two objectives to be minimized.
In this figure, solution I dominates solution K; solution K is dominated by solutions C, D, E, F, I, H; and solutions J, K, G are mutually non-dominated. The non-dominated solution set of P includes A, B, C, D, E, F and G; they form the approximation front (the blue dashed curve). The red solid curve is the true Pareto front.

Three main differences between single-objective optimization (SOO) and multi-objective optimization (MOO) are the preference relation, the number of goals, and the number of search spaces. First, a preference relation is used to compare solutions in the objective space in order to guide the search and, eventually, converge to optimal solutions. While in SOO the "less than or equal to" relation is usually used (a total order), in MOO the solutions are compared in an objective space composed of vectors (a partial order); therefore, one common preference relation is the Pareto dominance relation [52]. Second, while in SOO there is one goal, to search for an optimum, in MOO, besides progressing towards the PF, which is certainly an important goal, maintaining a diverse set of solutions in the non-dominated front is also essential [52]. Third, in SOO there is only one search space, the decision variable space; an algorithm works in this space by accepting or rejecting solutions based on their objective function values. In MOO, in addition to the decision variable space, there also exists the objective space [52].
In the literature, there exist generally three approaches for dealing with multi-problems. The first approach combines the objectives, forming an aggregate objective function based on the weighted sum of the objectives, and converts the multi-problem into a single-problem [65]; the single-problem can then be tackled by using existing methods and theories. The second approach optimizes one objective at a time and considers the others as constraints. The first and the second approaches can be considered traditional or classical methods [52]. The third approach is the evolutionary method; it mimics the evolutionary process.
Combining objectives. This approach is also called scalarization [17]. It forms an aggregate objective function based on the weighted sum of the objectives and converts a multi-problem into a single-problem. The single-problem is solved by using existing methods and theories, as presented in section 1.1.1. The weighted sum is an example belonging to this approach.
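As a sketch of the weighted-sum idea: the toy bi-objective problem, the weight vectors, and the crude grid-search "SOO solver" below are all illustrative assumptions:

```python
# Weighted-sum scalarization: a multi-problem min [f1, f2] is collapsed into
# a single-problem min w1*f1 + w2*f2, solvable by any SOO routine.
# Toy bi-objective problem: f1 = x^2 (optimum x = 0), f2 = (x - 2)^2 (optimum x = 2).
def f1(x): return x * x
def f2(x): return (x - 2) ** 2

def scalarized(x, w1, w2):
    return w1 * f1(x) + w2 * f2(x)

# Crude SOO by grid search; each weight vector yields one Pareto-optimal point.
xs = [i / 1000 for i in range(-1000, 3001)]
for w1, w2 in [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]:
    best = min(xs, key=lambda x: scalarized(x, w1, w2))
    print((w1, w2), round(best, 3))   # minimizers: x = 0, 1, 2 respectively
```

Sweeping the weights traces out different trade-off points, which is why a single weighted-sum run returns only one solution of the multi-problem.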
Optimizing one objective at a time. The second approach optimizes one objective of the multi-problem at a time while all the other objective functions are converted into constraints [65]. It is better than the first approach, as it can solve non-convex problems. However, its outcome is highly dependent on the order in which the objectives are considered. The ε-constraint method is an example of this approach [65].
Using evolutionary methods. Evolutionary methods appear especially suitable for solving multi-problems, since they simultaneously deal with a population. This allows finding several members of the PS in a single "run" of the algorithm, instead of having to perform a series of separate runs, as in the case of the previous methods. Moreover, evolutionary methods are less susceptible to the shape or continuity of PFs (e.g., they can easily deal with discontinuous or concave PFs), whereas these two issues are a real concern for the previous methods [21]. EAs applied to solving multi-problems are called multi-objective optimization algorithms (multi-algorithms). Any problem that can be formulated as a function optimization task can be solved by multi-algorithms. NSGA-II [27], DMEA-II [66] and GrEA [93] are examples of multi-algorithms.
MOO deals with a multitude of information, since there exists more than one objective and the objectives are in conflict with each other. The star coordinate system, the scatter-plot matrix, and parallel coordinates are three ways in which non-dominated solutions can be illustrated. They are presented in Appendix A (p. 123).
In order to compare the quality of multi-algorithms for solving multi-problems, there are two distinct goals that should be sought: (i) getting the solutions as close to the PF as possible, and (ii) getting the solutions as diverse as possible along the PF. Clearly, these two goals are independent of each other, and there exist different metrics to measure one or both of these aspects. It is not easy to find a single metric that can fully demonstrate the predominance of one algorithm over another in both aspects. Hence, it is important to have at least two metrics to assess both goals of a multi-algorithm.
A multi-algorithm will be considered a good solver if both goals are appropriately satisfied. That is, it is expected to find solutions that are very close to the PF and, at the same time, are well distributed along it.
There exist over 50 metrics to compare the performance of different multi-algorithms [74]. Three typical metrics which are commonly utilized for multi-algorithm validation are listed below:
Generational distance (GD). GD is a value representing how "far" PF_know is from PF_true, and is defined as in equation (1.3):

    GD = (Σ_{i=1}^{M} d_i^p)^{1/p} / M    (1.3)

where M is the number of vectors in PF_know, p = 2, and d_i is the Euclidean distance between each vector and the nearest member of PF_true. This measurement reflects the convergence aspect of multi-algorithms. The smaller the GD value is, the better the algorithm is [74].
Inverted generational distance (IGD). GD does not often work well when a multi-algorithm generates very few non-dominated solutions. IGD is proposed to alleviate the issue, and is defined as in equation (1.4):

    IGD = (Σ_{j=1}^{n} d_j^p)^{1/p} / n    (1.4)

where n is the number of vectors in PF_true and d_j is the Euclidean distance between each vector in PF_true and its nearest vector in PF_know. IGD can reflect both goals of a multi-algorithm [74].
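Equations (1.3) and (1.4) can be sketched as follows, with p = 2 and two toy bi-objective fronts for illustration (the fronts are made-up data, not from the thesis):

```python
def euclid(a, b):
    """Euclidean distance between two objective vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def gd(known, true, p=2):
    """Generational distance: distances from each PF_know point
    to its nearest PF_true point (eq. 1.3)."""
    d = [min(euclid(v, t) for t in true) for v in known]
    return sum(di ** p for di in d) ** (1 / p) / len(known)

def igd(known, true, p=2):
    """Inverted GD: distances from each PF_true point
    to its nearest PF_know point (eq. 1.4)."""
    d = [min(euclid(t, v) for v in known) for t in true]
    return sum(dj ** p for dj in d) ** (1 / p) / len(true)

pf_true = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
pf_know = [(0.1, 1.0), (0.6, 0.5), (1.0, 0.1)]
print(gd(pf_know, pf_true), igd(pf_know, pf_true))
print(gd(pf_true, pf_true))   # → 0.0 when every known point lies on the true front
```

The asymmetry matters: GD only rewards closeness of the found points, while IGD also penalizes leaving regions of PF_true uncovered, which is why IGD reflects both convergence and diversity.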
Hypervolume (HV). For each solution i, a hypercube v_i is constructed with a reference point W and the solution i as the diagonal corners of the hypercube. The reference point can simply be found by constructing a vector of the worst objective function values. Thereafter, the union of all hypercubes is found and its hypervolume (HV) is calculated as in formula (1.5):

    HV = volume(∪_{i=1}^{M} v_i)    (1.5)

Unlike GD and IGD, HV is a unary measure. Both GD and IGD use the PF as a reference, which is not practical for real-world applications, which often do not have a known PF; thus, HV has attracted increasing attention recently. HV is a measurement of the hypervolume in the objective space that is dominated by a set of non-dominated points. Before computing HV, the values of all objectives are normalized to the range of a reference point for each test problem. The reference point is normally the worst possible one in the objective space.
The HV indicator is the only single set quality measure that is known to be strictly monotonic with regard to Pareto dominance: whenever a Pareto set approximation entirely dominates another one, the indicator value of the dominant set will also be better [3]. However, GD and IGD are the most commonly used metrics since they still work well when the number of objectives increases.
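In the two-objective case the hypervolume reduces to a dominated area and can be computed with a simple sweep. The sketch below assumes minimization and a non-dominated input set (the function name is mine; general M-objective HV needs more involved algorithms):

```python
def hv_2d(points, ref):
    """Hypervolume (area) dominated by a set of non-dominated points of a
    2-objective minimization problem, relative to reference point ref."""
    # keep only points that actually dominate the reference point
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, prev_y = 0.0, ref[1]
    for x, y in pts:  # ascending x implies descending y on a non-dominated set
        area += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return area
```

For example, the set {(1, 2), (2, 1)} with reference point (3, 3) dominates two unit-overlapping rectangles of total area 3.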
In order to compare and challenge the search capabilities of multi-algorithms, a set of multi-objective test functions is used. These test functions envelop special characteristics such as multi-modality, non-convexity and discontinuity, which cause difficulties for most multi-algorithms. Some problems contain a disconnected and asymmetric PF that makes it difficult for multi-algorithms to reach all the regions of the true PF. It is vital to specify that these test functions essentially reflect the main characteristics of the optimization problems found in the real world. Some of these functions contain critical features that make them especially troublesome to solve. Hence, the basic assumption is that if a multi-algorithm can solve such test functions, it should also be able to handle real-world problems, though this is not necessarily true. DTLZ and WFG, which are two typical test problem sets for multi-algorithms, are utilized in this thesis.
DTLZ problem set. For testing and comparing multi-algorithms, K. Deb et al. [28] have proposed a set of test functions. The suite of test functions, which is known as the DTLZ set, attempts to define generic multi-algorithm problems. The DTLZ set allows the users to define an arbitrary number of objectives. Increasing the number of objectives leads to a majority of solutions being equally good, thereby reducing the effect of the selection operator in a multi-algorithm. Section 1.2.1.1 (p. 28) will discuss the difficulties in more detail. This set contains seven problems, namely DTLZ1, DTLZ2, ..., and DTLZ7. The problem DTLZ5(I,M), which is a new version of the DTLZ5 problem, will be used to evaluate algorithms in this thesis.
As for the DTLZ5(I,M) problem [26], it is defined in Table 1.1. The total number of variables is n = M + k − 1, where k = |x_M|.
The problem is stated as follows (Table 1.1):

min f_1(x) = (1 + 100g(x_M)) cos(θ_1) cos(θ_2) ... cos(θ_{M−2}) cos(θ_{M−1})
min f_2(x) = (1 + 100g(x_M)) cos(θ_1) cos(θ_2) ... cos(θ_{M−2}) sin(θ_{M−1})
min f_3(x) = (1 + 100g(x_M)) cos(θ_1) cos(θ_2) ... sin(θ_{M−2})
...
min f_{M−1}(x) = (1 + 100g(x_M)) cos(θ_1) sin(θ_2)
min f_M(x) = (1 + 100g(x_M)) sin(θ_1)

where
θ_i = (π/2) x_i for i = 1, 2, ..., (I − 1)
θ_i = π/(4(1 + g(x_M))) (1 + 2g(x_M) x_i) for i = I, ..., (M − 1)
g(x_M) = Σ_{x_i ∈ x_M} (x_i − 0.5)^2
0 ≤ x_i ≤ 1 for i = 1, 2, ..., n

The first property of the problem is that the dimensionality (I) of the PF can be changed by setting I to an integer between 2 and M.
The second property is that the PF is non-convex and follows the relationship Σ_{i=1}^{M} (f_i)^2 = 1. Another property is that the first M − I + 1 objectives are correlated, while the others and one of the first M − I + 1 objectives conflict with each other. (x_i ∈ x_M means that i = M → n.)
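The Table 1.1 definition can be sketched directly; the code below follows the reconstructed formulation, assumes x is a list of n = M + k − 1 variables whose last k entries form x_M, and can be used to check the properties numerically (a sketch, not the reference implementation from [26]):

```python
import math

def dtlz5(x, M, I):
    """Evaluate the M objectives of DTLZ5(I, M) at decision vector x,
    following the reconstructed Table 1.1 (minimization)."""
    xm = x[M - 1:]                          # the last k variables form x_M
    g = sum((xi - 0.5) ** 2 for xi in xm)
    theta = [0.0] * (M - 1)
    for i in range(I - 1):                  # theta_i for i = 1 .. I-1
        theta[i] = 0.5 * math.pi * x[i]
    for i in range(I - 1, M - 1):           # theta_i for i = I .. M-1
        theta[i] = math.pi / (4 * (1 + g)) * (1 + 2 * g * x[i])
    f = []
    for m in range(1, M + 1):               # objective f_m
        val = 1 + 100 * g
        for i in range(M - m):
            val *= math.cos(theta[i])
        if m > 1:
            val *= math.sin(theta[M - m])
        f.append(val)
    return f
```

On the PF (where g = 0) the objective vector satisfies Σ (f_i)^2 = 1, and with I = 2 the first M − I + 1 objectives move together, matching the stated properties.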
WFG problem set. S. Huband et al. [47] presented a Walking Fish Group (WFG) toolkit for creating flexible and scalable multi-objective problems. The toolkit allows bias, multi-modality, and non-separability characteristics to be combined and incorporated as desired. It also supports a variety of Pareto optimal geometries such as convex, concave, mixed convex/concave, linear, degenerate and disconnected geometries.
All problems made by the WFG toolkit are well characterized and scalable with regard to both the number of objectives and parameters. They also have known PFs. The toolkit suggests nine multi-problems, namely WFG1, WFG2, ..., WFG9, including one which is both multi-modal and non-separable.
How to construct a general WFG problem is shown in [47]. The WFG3 problem of M objectives, which is used for evaluating the algorithms in this thesis, is constructed using the following functions and parameters.
Shape functions h_{1:M} = linear_m (degenerate):

linear_1(x_1, ..., x_{M−1}) = Π_{i=1}^{M−1} x_i
linear_{m=2:M−1}(x_1, ..., x_{M−1}) = (Π_{i=1}^{M−m} x_i)(1 − x_{M−m+1})
linear_M(x_1, ..., x_{M−1}) = 1 − x_1
The PF of the WFG3(M) problem degenerates into a linear hyperplane such that Σ_{i=1}^{M} f_i = 1, in which the first M − 1 objectives are perfectly correlated, while the last objective is in conflict with every other objective in the problem.
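The degeneracy can be checked numerically: the linear shape functions always sum to one, which is what forces the PF of WFG3(M) onto the hyperplane Σ f_i = 1. This is a sketch of the shape functions alone; in the full toolkit they are further composed with scaling constants and transformed parameters:

```python
def linear_shapes(x):
    """Linear (degenerate) WFG shape functions h_1..h_M evaluated on the
    underlying parameters x = (x_1, ..., x_{M-1})."""
    M = len(x) + 1
    h = [0.0] * M
    # h_1 = prod x_i; h_m = (prod_{i<=M-m} x_i)(1 - x_{M-m+1}); h_M = 1 - x_1
    for m in range(1, M + 1):
        val = 1.0
        for xi in x[:M - m]:
            val *= xi
        if m > 1:
            val *= 1.0 - x[M - m]   # x_{M-m+1} is x[M-m] with 0-based indexing
        h[m - 1] = val
    return h
```

For M = 3 and x = (a, b) the values are ab, a(1 − b) and 1 − a, which telescope to 1 for any a, b.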
Machine learning algorithms used in this study
In the real world, numerous objects can only be electronically represented with high dimensionality: images, videos, hyperspectral images, etc. To analyze and process a large amount of such data, it is necessary to develop a data processing system. As the data has high dimensions, a system directly processing them may be very complicated and unstable, so developing such a system is infeasible. In fact, many systems are only effective because of relatively low-dimensional data. When the dimensions of the data are higher than the tolerance of the system, the data cannot be processed. Moreover, data with dimensions greater than three has no effective visual presentation. To visualize them, their dimensions should be reduced as much as possible. Furthermore, high-dimensional data are usually redundant. The dimensions of data should also be decreased to extract the required information. Therefore, in order to process, visualize or extract features of high-dimensional data, dimensionality reduction is a necessity [91].

Basically, dimensionality reduction is the transformation of high-dimensional data into a meaningful representation of reduced dimensionality. Ideally, the reduced representation has a dimensionality that corresponds to the intrinsic dimensionality of the data. The intrinsic dimensionality of data is the minimum number of parameters needed to account for the observed properties of the data [85].
PCA is one of the most popular algorithms for dimensionality reduction [84,81]. It computes principal components and uses them to perform a change of basis on the data, sometimes using only the first few principal components and ignoring the rest. The principal components of a collection of points in a real p-space are a sequence of p direction vectors, where the i-th vector is the direction of a line that best fits the data while being orthogonal to the first (i−1) vectors. The detail of the PCA algorithm is listed in Appendix B (Algorithm B.1 on page 127).
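A minimal illustration of the idea for the two-dimensional case: the first principal component of 2-D data has a closed form from the 2×2 covariance matrix. Real applications would use numpy or a library PCA; the function name and closed-form shortcut are mine, not Algorithm B.1:

```python
import math

def pca_2d(points):
    """First principal component (unit direction) and its eigenvalue for
    2-D data, via the closed-form eigendecomposition of the 2x2 covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    # largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    if abs(sxy) < 1e-12:                    # already axis-aligned
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    else:
        v = (lam - syy, sxy)                # eigenvector for lam
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm), lam
```

For points scattered along the line y = x, the first component comes out as the diagonal direction (1/√2, 1/√2), i.e. the line that best fits the data.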
Clustering is considered the most important problem of unsupervised learning, dealing with data structure partitioning in an unknown area. Moreover, clustering is the basis for further learning and plays an important role in modern scientific research [92]. It involves grouping similar objects into a set known as a cluster. Objects in one cluster are likely to be different when compared with objects grouped under another cluster. Some of the popular clustering methods in use include hierarchical, partitioning, density-based and model-based methods. PAM, k-means and DBSCAN are the three clustering algorithms used in this thesis.

k-means algorithm. k-means [63], which is one of the simplest unsupervised learning algorithms, belongs to the partition clustering methods. It solves clustering problems in which each cluster is represented by the center of gravity of the cluster. The detail of the k-means algorithm is presented in Appendix B (Algorithm B.2 on page 128).
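The k-means loop, assigning each point to its nearest center and then moving each center to the center of gravity of its cluster, can be sketched as follows (a minimal illustration with names of my choosing, not Algorithm B.2 itself):

```python
import math
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal k-means: assign each point to its nearest center, then move
    each center to the center of gravity (mean) of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(q[d] for q in cl) / len(cl)
                  for d in range(len(points[0])))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:   # converged: centers stopped moving
            break
        centers = new_centers
    return centers, clusters
```

On two well-separated groups of points, the centers converge to the two group means regardless of which points are sampled as initial centers.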
Partitioning around medoids clustering algorithm. The partitioning around medoids clustering algorithm (PAM) [53], proposed by L. Kaufman et al., divides a set of objects into k clusters. Objects in the same cluster show a high degree of similarity, while objects belonging to different clusters show a high degree of dissimilarity. A medoid of a cluster is an object whose average dissimilarity to all other objects in the same cluster is minimal. The detail of the PAM algorithm is listed in Appendix B (Algorithm B.3 on page 128).
Density-based spatial clustering of applications with noise algorithm. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [32] is one of the density-based clustering algorithms. Clusters are identified by looking at the density of objects. Regions with a high density of objects depict the existence of clusters, whereas regions with a low density of objects indicate noise or outliers. The algorithm grows regions with a sufficiently high density into clusters and discovers clusters of arbitrary shape in spatial databases with noise. DBSCAN has two parameters: Eps and minObjs. They mean that there need to be at least minObjs objects in one region with a radius of Eps. The algorithm starts with an arbitrary starting object that has not been visited. This object's Eps-neighborhood is retrieved recursively. If it contains a large enough number of objects, to be more precise, a number greater than or equal to minObjs, a cluster is started. Otherwise, the object is considered as noise. Note that this object might later be found in a sufficiently sized Eps-environment of a different object and, hence, be made part of a cluster. If a cluster contains
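The region-growing behaviour described above can be sketched in a few lines. Here Eps and minObjs appear as eps and min_objs, the neighbourhood count includes the object itself, and label −1 marks noise (a minimal illustration, not the original pseudocode from [32]):

```python
import math
from collections import deque

NOISE = -1

def dbscan(points, eps, min_objs):
    """Minimal DBSCAN: grow a cluster from any object with at least
    min_objs neighbours inside its Eps-neighbourhood."""
    labels = [None] * len(points)

    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_objs:
            labels[i] = NOISE          # may later be absorbed by a cluster
            continue
        labels[i] = cluster            # i is a core object: start a cluster
        queue = deque(seeds)
        while queue:
            j = queue.popleft()
            if labels[j] == NOISE:
                labels[j] = cluster    # border object reached from a core
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_objs:    # j is also a core object: expand
                queue.extend(nb)
        cluster += 1
    return labels
```

Three points spaced 0.5 apart with eps = 0.6 and min_objs = 2 form one cluster, while a distant fourth point is labelled noise.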
Related works
Many-objective optimization
There is no generally agreed definition of many-problems. However, when the number of objectives of a multi-problem is greater than three, the multi-problem is regarded as a many-problem [13,14,16,46,57,58,62,64,77,89,96,97,100]. The primary motivation behind many-problems is to highlight the challenges posed by a large number of objectives to existing multi-algorithms. Hence the definition presented here is an evolving one and serves a more practical than theoretical purpose [56].
An example of many-problems in the real world: a many-objective car side-impact problem [80] is defined as equation (1.6). It is transformed from a single-problem of one objective and 10 constraints which is described in two tables, Table 1.2a (equation 1.7 on p. 29) and Table 1.2b (equation 1.8 on p. 30).
[45] regards multi-problems with more than 5 conflicting objectives (in most cases) as many-problems.
Table 1.2: (a) Definition of a single-problem of one objective and 10 constraints from which a many-problem (equation 1.6) originated
Difficulties in solving many-problems: when dealing with these many-problems, multi-algorithms encounter a number of obstacles which have been identified and reported in [25,50,56,87].
First, Pareto-dominance based algorithms such as NSGA-II [27] and SPEA2 [102] have difficulty in determining which solutions are better for the next generation, since a large portion of the population becomes equally good when solving many-problems even at early stages, as shown in Figure 0.2 (p. 5). In other words, the efficiency of Pareto-dominance based selection decreases significantly [57]. If aggregation-based or indicator-based multi-algorithms are used, such as [3], they still have to search simultaneously in an exponentially increasing number of directions. This is called the dominance resistance phenomenon [56].

Table 1.2: (b) Definition of a single-problem of one objective and 10 constraints from which a many-problem (equation 1.6) originated (continued)

f(x): Weight (kg)
g1(x): Abdomen load (KN)
g2/g3/g4(x): Upper/Middle/Lower viscous criterion (m/s)
g5/g6/g7(x): Upper/Middle/Lower rib deflection (mm)
g8(x): Pubic force (KN)
g9(x): Velocity of V-Pillar at middle point (mm/ms)
g10(x): Velocity of front door at V-Pillar (mm/ms)
x1/x2: Inner/reinforcement thickness of B-Pillar (mm)
x3/x4: Floor-side inner/cross-member thickness (mm)
x5/x6: Beam/beltline reinforcement thickness of door (mm)
x7: Roof rail thickness (mm)
x8/x9: B-Pillar/floor-side inner material
x10/x11: Height/hitting-position of barrier (mm)
Second, in order to depict the resulting front, it is a must to increase the size of the population exponentially [50], while the size of the population is limited. Under non-degenerate scenarios, the PF of an m-objective problem is an (m−1)-dimensional manifold [49]. While Figure 1.3a needs 9 solutions to present the PF of the DTLZ1(2) problem, Figure 1.3b needs 91 to present that of the DTLZ1(3) problem. As a result, a tremendous number of solutions may be required for a good approximation of the complete PF for a large m-objective space.
Third, once a PF is obtained, decision makers choose the final solution. At this stage, visualization of solution alternatives becomes very important. Besides the representations mentioned in subsection 1.1.2.3 (p. 19), other methods have been proposed in [34,67,70,88]; there has still been a need for a straightforward, intuitive method for presenting solutions in a space with more than 3 dimensions. Therefore, decision makers still have difficulty opting for the final solution.

Figure 1.3: An example of presenting DTLZ1 PFs
Fourth, it is difficult to evaluate the performance of algorithms. If distance-based performance measures such as GD and IGD are deployed, the PF of the problem must be known prior to evaluation and a huge number of solutions in the PF are required within the reference set to get more reliable results [49]. If HV is utilized, it requires a large computational complexity.
Based on the difficulties facing many-problems, algorithms for solving many-problems (non-Pareto based algorithms, Pareto-based algorithms and their improvements) can be classified into six approaches: relaxed dominance based, diversity based, aggregation based, indicator based, reference set based and objective reduction based approaches.
This approach differentiates among non-dominated solutions and enhances the selection pressure towards the PF. The many variants of this approach can be categorized into two classes: value-based dominance and number-based dominance. The first class modifies the Pareto dominance by changing the objective values of solutions when comparing them with each other, in order to enlarge the dominating area of a non-dominated solution so that some solutions are more likely to be dominated by others. This class includes ε-dominance in ε-MOEA [24] and grid-dominance in GrEA [93]. The second class attempts to compare one solution with another by counting the number of objectives in which that solution is better than, the same as, or worse than the other solution. This class includes the favour relation [30], (1−k)-dominance [33], and L-dominance [104].
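As an illustration of the value-based class, additive ε-dominance (in the spirit of ε-MOEA [24]; minimization assumed) shifts a solution's objective values by ε before the usual comparison, enlarging its dominating area. This sketch contrasts it with plain Pareto dominance:

```python
def pareto_dominates(u, v):
    """Plain Pareto dominance for minimization: u is no worse everywhere
    and strictly better somewhere."""
    return all(ui <= vi for ui, vi in zip(u, v)) and \
           any(ui < vi for ui, vi in zip(u, v))

def eps_dominates(u, v, eps):
    """Additive epsilon-dominance: u's objective values are shifted down
    by eps, enlarging the region of objective space that u dominates."""
    return all(ui - eps <= vi for ui, vi in zip(u, v)) and \
           any(ui - eps < vi for ui, vi in zip(u, v))
```

For example, (1.0, 2.0) and (0.95, 2.5) are mutually non-dominated under plain Pareto dominance, but with ε = 0.1 the first ε-dominates the second while the reverse still fails.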
Though most of the recent works focus on improving the selection pressure towards the PF as mentioned in the previous paragraph, another way to alter Pareto-based calculations for many-problems is to apply a customized diversity-based approach. In general, this approach is proposed to improve performance by diminishing the adverse impacts of diversity maintenance. Grid-based criteria [93], SDE (shift-based density estimation) [58] and OGEA (objective grouping evolutionary algorithm) [39] are several examples belonging to the diversity-based approach.
Another method to distinguish many-objective solutions is to use an aggregation function. Based on the information aggregated, this method can be classified into two classes: aggregation of individual information and aggregation of pairwise comparisons. The first class uses individual information such as objective values or objective ranks in aggregating in order to compare solutions with each other. The weighted sum and weighted Tchebycheff in the MOEA/D algorithm [98], vector angle distance scaling and weighted Tchebycheff in MSOPS [48], and RD [55] are examples of the first class. In the second class, results of pairwise comparisons with other solutions can be used in aggregation to evaluate solutions. Global Detriment in [36] belongs to the second class.
Utilizing indicator values to direct the search appears to be a straightforward way to deal with many-problems, since the approximation set is evaluated according to the indicator. These methods can be divided into three classes: hypervolume-driven, distance-based indicator driven, and R2 indicator driven algorithms. HypE (hypervolume estimation algorithm) [3] belongs to the hypervolume-driven class; IBEA (indicator-based evolutionary algorithm) [101] and AGE (approximation-guided evolutionary algorithm) [6] are members of the distance-driven class; MOMBI (many-objective metaheuristic based on the R2 indicator) [37] is based on the R2 indicator.
This approach uses a set of reference solutions to measure the quality of solutions, in which the search process is guided by the solutions within the reference solution set. Two_Arch2 (improved two-archive algorithm) [89], NSGA-III (a reference-point based many-objective NSGA-II) [23] and SDIEA (strengthened diversity indicator and reference vector-based evolutionary algorithm) [82] are three examples of the reference set based approach.
While the aforementioned approaches, namely many-algorithms, try to improve multi-algorithms or to tackle the difficulties in solving many-problems, objective reduction is another approach which is proposed to circumvent the roadblock of many-problems. It aims at dealing with many-problems which have a PF similar to that of another problem with fewer objectives. Section 1.2.2 will discuss this approach in detail.
Multi-problems containing two or three objectives are solved by multi-algorithms; car-buying [52] is an example of this type of problem, and NSGA-II [27] is an example of this type of algorithm. Otherwise, many-problems containing more than three objectives are solved by many-algorithms; the car side-impact problem [80] (equation 1.6 on p. 28) is an example of many-problems, and NSGA-III [23] is an example of many-algorithms. While multi-algorithms use the traditional Pareto dominance relation to compare solutions during evolution, many-algorithms utilize a modified Pareto dominance relation, or other techniques such as diversity or users' preferences, to do that.
Objective reduction
Objectives in multi-problems or many-problems are regarded as features. Applying dimensionality reduction in solving many-problems is called objective reduction. Objective reduction procedures essentially have two major modules, as shown in Figure 1.4: a generator module and a reduction module. The first module, which is a multi-/many-algorithm, generates an approximation of the PF. The second module, which has the responsibility of objective reduction by using reduction techniques, is called objective dimensionality reduction (ODR).
Basic concepts. The given multi-problems for objective reduction always have the formulation defined in Equation 1.2 (on page 15); the original objective set is denoted as F_0 = {f_1, f_2, ..., f_M}; PF_0 refers to the PF of the original multi-problem. For convenience, the notation u(F) is used to denote the sub-vector of u given by a nonempty objective subset F. For example, if u = (f_1(x), f_2(x), f_3(x))^T and F := {f_1, f_3}, then u(F) = (f_1(x), f_3(x))^T.
Definition 1.9 (Non-conflicting): Two objectives f_i, f_j ∈ F_0 are said to be non-conflicting if ∀u, v ∈ PF_0: u(f_i) ≤ v(f_i) ⇔ u(f_j) ≤ v(f_j) [97].
Definition 1.10 (Essential objective set): An essential objective set is defined as the smallest set of conflicting objectives (F_T, |F_T| = m) which can generate the same PF as that generated by the original problem with F_0 = {f_1, f_2, ..., f_M} [77].

The essential objective set is also called a relevant objective set [79]. The number of essential objectives (m, m ≤ M) is regarded as the dimensionality of a problem [77].
A redundant objective could also be called a nonessential objective [35]. Notably, an objective could be redundant if it is non-conflicting (or correlated) with some other objectives [77].
Figure 1.5 shows an example problem of three objectives f_1(x), f_2(x), and f_3(x), subject to 0 ≤ x ≤ 1. It can be seen that f_1(x) and f_2(x), as well as f_1(x) and f_3(x), conflict with each other, while f_2(x) and f_3(x) are non-conflicting.
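Definition 1.9 can be checked directly on a finite sample of PF_0: two objectives are non-conflicting exactly when they induce the same ordering over every pair of solutions. A small sketch mirroring the Figure 1.5 situation (the sampled front and function name are illustrative):

```python
def non_conflicting(front, i, j):
    """Objectives i and j are non-conflicting on the sampled front if
    u[i] <= v[i]  <=>  u[j] <= v[j] holds for every pair u, v."""
    return all((u[i] <= v[i]) == (u[j] <= v[j])
               for u in front for v in front)

# a sampled front where f2 and f3 move together while both oppose f1
front = [(1 - x, x, 2 * x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

On this sample, f_2 and f_3 are non-conflicting, so either could be dropped as redundant, while f_1 conflicts with both.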
Dimensionality reduction methods are often used to avoid the curse of dimensionality in the experimental life sciences. Applied in MOO, objective reduction is employed to reduce a large objective space to a smaller one. It brings three main benefits. First, it can reduce the calculation load of many-algorithms, i.e., it takes less time to operate and less space to store. Furthermore, problems with fewer objectives become solvable by multi-algorithms, or they can be solved more efficiently by many-algorithms. Finally, it can support decision makers in understanding many-problems more deeply, as the essential or redundant objectives are indicated [56,77].
There are several ways of classifying objective reductions: objective selection and objective extraction reductions; online and offline objective reductions; Pareto dominance structure based and correlation based approaches; and without-error and allowing-error types.
The first categorization, which is based on the type of objectives obtained after objective reduction, refers to objective selection reduction and objective extraction reduction. The objective extraction reduction creates novel objective(s) from the original objective(s); it formulates each essential/reduced objective as a linear combination of the original objectives. NSGA-II-OE in [15] and OEM in [16] are two examples belonging to this class. The objective selection reduction finds the smallest subset of the given objectives that generates the same PF as the set of original objectives does [8,38]. Belonging to this class, kOSSA and mOSSA in [61] and α-DEMO in [4] are algorithms to name a few.
Another classification, which is based on the time of incorporating an ODR into multi-algorithms/many-algorithms, categorizes objective reductions into two classes: offline and online. Offline class: in order to effectively assist the decision making after the search, objective reduction is considered offline, meaning that the ODR is used right after many-algorithms finish their search. The Exact/Greedy algorithms for δ-MOSS or k-EMOSS [10] and PCSEA-based objective reduction [79] belong to this offline class. Online class: while offline objective reduction is used in the decision making, online objective reduction can also improve the search itself. REDGA (NSGA-II equipped with the reduction method) in [51], L-PCA in [77], and NSGA-II-OE in [15] are examples of the online class.
Objective reduction is also categorized into the Pareto dominance structure based approach and the correlation based one [97]. The Pareto dominance structure based approach relies on preserving the dominance relations in the given non-dominated solutions, in order to retain as many of those solutions as possible after redundant objectives are removed. ORA-RPSS in [40], σNR in [38], and PCSEA-based objective reduction in [79] are examples belonging to this class. The correlation based approach, which relies on the correlation between each pair of objectives, aims to keep the most conflicting objectives and to remove the objectives that are weakly or non-conflicting with each other. PCA-NSGAII in [26], MICA-NORMOEA in [43], and NCIE in [90] are algorithms belonging to this class. The correlation between objectives may be the correlation coefficient [77] or mutual information [41].
In addition to the three above categorizations, it is possible to classify methods based on their consideration of error. The first kind is concerned with finding the smallest subset of the original objectives for which the PF remains the same. This may be considered the without-error type. In some cases, a method finds an even smaller subset of objectives for which the new PF is a reasonable approximation of the PF of the original problem. This may be considered the allowing-error type. In other words, this classification of objective reduction methods is based on whether they attempt to determine exactly or only approximately the PF of the original problem. The Greedy algorithm and Exact algorithm for MOSS [11] belong to the without-error type. The BZ greedy algorithm and BZ exact algorithm for δ-MOSS and k-EMOSS [10] belong to the allowing-error type.
Besides the four aforementioned classifications, this thesis classifies and investigates several existing objective reductions based on the type of PFs used as input for objective reduction, in terms of the complete or the partial PF. The algorithms PCA-NSGAII [26], C-PCA-NSGAII, MVU-PCA-NSGAII [76], L-PCA [77], MICA-NORMOEA [43] and [41] are the complete PF-based objective reductions. PCSEA-based [79], ORA-RPSS [40] and NNROSGA [42] are the partial PF-based ones.
Apart from the aforementioned classifications, recently, the authors in [29] have proposed an objective reduction algorithm based on adaptive propagating tree clustering for many-problems, which can preserve the structure of the original problem as much as possible. In [44], the authors propose a formulation of an objective reduction technique with the multi-objective social spider optimization (MOSSO) algorithm to provide decisions regarding conflicting objectives and generate an approximate PF of non-dominated solutions. The authors in [60] propose two objective reduction algorithms (LHA and NLHA), which use a hyperplane with non-negative sparse coefficients to roughly approximate the structure of the PF, for linearly and non-linearly degenerate PFs.

Note: the Minimum Objective Subset (MOSS) problem is the problem of finding the smallest subset of objectives of a multi-problem such that the weak Pareto dominance relation is left unchanged on a given set.
A. López Jaimes et al. [61] propose two variants of an algorithm to reduce the number of objectives in a multi-problem by identifying the most conflicting objectives. The first variant determines the minimum subset of objectives that yields the minimum possible error.
A. L. Jaimes et al. [51] propose and analyze two schemes. One scheme periodically reduces the number of objectives during the search until the required objective subset size has been reached (REDGA-S). The second approach is a more conservative scheme that alternately uses the reduced and the entire set of objectives to carry out the search (REDGA-
S. Bandyopadhyay et al. in [4] propose the α-DEMO algorithm. The algorithm periodically reorders the objectives based on their conflict status and selects a subset of conflicting objectives for further processing. It determines the number of objectives to be kept as [α∗M], then selects a subset of conflicting objectives using a correlation-based ordering of objectives. However, it has not been validated on redundant problems.
X. Guo et al. [42] propose a non-redundant objective set generation algorithm (NNROSGA). The algorithm uses a decomposition-based multi-algorithm (namely, MOEA/D) to generate a small number of representative non-dominated solutions widely distributed on the PF. Then, it constructs the non-redundant objective set by using the information on conflicting objective pairs obtained in the previous stage.
H. Wang et al. [90] propose an objective reduction approach based on non-linear correlation information entropy (NCIE). The approach includes two key steps, namely correlation analysis and objective selection. The correlation analysis is based on a modified NCIE which is computed on the non-dominated population. By adding the information of covariance, the modified NCIE can not only handle both linear and non-linear correlation but also describe the conflict relation. The objective selection step selects the most conflicting objective, which has the largest absolute sum of negative NCIEs with respect to the other objectives. Then, this step omits the objectives that are positively correlated to the selected objectives.

σNR
The authors in [38] propose a fast algorithm to find a minimum set of objectives preserving the dominance structure as much as possible, viz., a greedy algorithm for σNR. They also present a measure of the capability of an objective set to preserve the dominance structure, i.e., the redundancy of an objective to an objective set.
It is experimented on the DTLZ5(I,M) redundant problem. One does not need to provide the number of reduced objectives. However, it requires a solution set sampled from the true PF, while the true PF has often not been known.
N. Luo et al. [62] propose an objective reduction framework for many-problems using objective subspace extraction, named OSEOR. A new conflict information measurement among different objectives is defined to sort the relative importance of each objective, and then an effective approach is designed to extract several overlapped sub-spaces with reduced dimensionality during the execution of multi-algorithms. The experimental results indicate that the performance of NSGA-II can be significantly enhanced using OSEOR on both non-redundant and redundant many-problems.
Benchmarks and performance measures
Benchmark methods
The thesis compares the performance of the proposed algorithms with two other existing methods (OC-ORA [41] and L-PCA [77]). Five many-algorithms are used to get non-dominated solutions for both the proposed and existing methods. They are the grid-based evolutionary algorithm (GrEA) [93], the knee point driven evolutionary algorithm (KnEA) [99], reference-point based non-dominated sorting (NSGA-III) [23], the reference vector guided evolutionary algorithm (RVEA*) [14], and the new dominance relation-based evolutionary algorithm (θ-DEA) [96]. The partial PF-based ORAs in chapter 3 are also compared to a Pareto dominance based one, namely PCSEA-based, in [79].
Benchmark problems
In order to compare and challenge the reduction capabilities of objective dimensionality reduction algorithms, the thesis uses two test problems, namely DTLZ5(I,M) [26] and WFG3(M) [47]. The details of these problems are presented in Section 1.1.2.5 (on p. 22).
The DTLZ5(I,M) problem contains n = M + 9 variables in decision space and M objective functions in objective space. This problem has three properties:
– The dimensionality (I) of the PF can be changed by setting I to an integer between 2 and M.
– The PF is non-convex and follows the relationship Σ_{i=1}^{M} (f_i)^2 = 1.
– The first M − I + 1 objectives are perfectly correlated, while the others and one of the first M − I + 1 objectives are in conflict with each other.
The Pareto optimal front of the WFG3(M) problem degenerates into a linear hyperplane such that Σ_{i=1}^{M} f_i = 1, in which the first M − 1 objectives are perfectly correlated, while the last objective is in conflict with every other objective in the problem.
The complete PF-based objective reduction algorithms
Efficiency of many-algorithms in objective reduction
for solving many-problems. The proposed algorithm is more efficient than existing ones, and it can automatically determine these parameters. Moreover, the proposed ORA still achieves results comparable with the existing methods.
This subsection first describes how the thesis investigates the impact of many-algorithms on an ODR, and vice versa. Second, it presents the design of the experiments, including test problems and experimental settings.
This study is designed to investigate the mutual impact between multi-/many-algorithms and an ODR. First, the study examines how multi-/many-algorithms affect the performance of an ODR. The study also evaluates what benefits many-algorithms can obtain when they are combined with an ODR.
In order to demonstrate the impact of multi-algorithms/many-algorithms on an ODR, experiments are designed to compare the performance of an ODR when it is combined with multi-algorithms or with many-algorithms. Figure 2.1 illustrates the integration of multi-algorithms/many-algorithms with an ODR. The objective set of the problem is the input for the method. The loop starts with multi-algorithms or many-algorithms. These algorithms are utilized to generate a non-dominated solution set (an approximation of the PF). An ODR analyzes this set to determine which objectives are essential and which are redundant. The redundant objectives are removed while the essential ones are kept. The loop is continued until no more objectives are discarded.
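The loop just described can be sketched as a generic skeleton, where generator stands for the multi-/many-algorithm and odr for the reduction module, both passed in as callables (these interfaces are my illustration, not PlatEMO's API):

```python
def reduce_objectives(objectives, generator, odr, max_rounds=100):
    """Generator/reduction loop: generate an approximation of the PF on the
    current objective set, let the ODR pick the essential objectives, and
    stop once no objective is discarded."""
    current = list(objectives)
    for _ in range(max_rounds):
        front = generator(current)          # multi-/many-algorithm
        selected = odr(front, current)      # objective dimensionality reduction
        if set(selected) == set(current):   # nothing removed: loop terminates
            break
        current = list(selected)
    return current
```

With a toy ODR that drops one objective per round until only two remain, the loop terminates exactly when the selected set stops shrinking.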
L-PCA (see par. L-PCA and MVU on p. 42) is utilized for objective dimensionality reduction. There are plenty of objective reductions (as presented in sub-section 1.2.2 on page 34). However, to illustrate the efficiency of many-algorithms in objective reduction, the thesis chooses L-PCA. There is no particular reason for this selection except that L-PCA is efficient in removing redundant objectives. Two multi-algorithms, namely NSGA-II [27] and SPEA2 [102], and two many-algorithms, namely NSGA-III [23] and SPEA2+SDE [58], are employed to generate non-dominated solution sets.

Figure 2.1: The integration of an ODR into multi-algorithms/many-algorithms
In order to examine whether many-algorithms can obtain advantages when they are combined with an ODR, experiments are designed to compare the performance of the integration of a many-algorithm with an ODR against the performance of the many-algorithm alone.

[Figure 2.2 flowcharts: (a) the integration of an ODR into a many-algorithm, as in Figure 2.1; (b) a many-algorithm used alone.]

Figure 2.2: Two ways of using many-algorithms to deal with many-problems

Figure 2.2 reveals two ways of using many-algorithms to deal with many-problems. Figure 2.2a shows the integration of an ODR into a many-algorithm, while Figure 2.2b shows the common way of using a many-algorithm alone to deal with a many-problem. The operation in Figure 2.2a is similar to that in Figure 2.1: the method, with L-PCA (see par. L-PCA and MVU on p.42), is used to remove the redundant objectives. Five well-known many-algorithms, namely GrEA [93], KnEA [99], NSGA-III [23], RVEA* [14], and θ-DEA [96], are used in Figure 2.2 to search for the non-dominated sets.
The experiments use L-PCA (see par. L-PCA and MVU on p.42) for objective reduction. In L-PCA, the threshold θ, which is used to decide which objectives should be included, is set to 0.997 as suggested in [77]. All of the multi-/many-algorithms used in the experiments are implemented in PlatEMO, an evolutionary multi-objective optimization platform [83]. The population size is set to 200, and the number of generations is set to 2000. The probabilities of crossover and mutation are set to 0.9 and 0.1, respectively. The distribution index for crossover is set to 5, and the distribution index for mutation is set to 20 [77]. The quality of the PF approximations provided by the different algorithms is evaluated using the GD and IGD metrics.
This section first presents the results and analyses that demonstrate the impact of multi-/many-algorithms on ODRs. After that, it presents the results and analyses that show the benefits gained when many-algorithms are combined with an ODR.
2.1.3.1 The impact of multi-/many-algorithms on dimensionality reduction algorithms
For the purpose of depicting the impacts of multi-objective/many-objective evolutionary algorithms (multi-/many-algorithms) on ODRs, this thesis examines the impacts of two pairs of multi-algorithm/many-algorithm, namely NSGA-II/NSGA-III and SPEA2/SPEA2+SDE, on L-PCA for objective reduction. First, the thesis presents some case studies to illustrate how a multi-algorithm/many-algorithm affects an ODR on a specific problem, DTLZ5(6,8). Subsequently, it analyzes the impacts of the pairs on different test problems.
This subsection illustrates, step by step, how L-PCA is combined with a multi-algorithm/many-algorithm for reducing redundant objectives on DTLZ5(6,8).
L-PCA when combined with SPEA2+SDE and SPEA2 on DTLZ5(6,8). Table 2.1 shows the matrix R (following step 3 in Algorithm 1.2) with its corresponding eigenvalues and eigenvectors of L-PCA when combined with SPEA2+SDE on DTLZ5(6,8). The correlation matrix R of the population results is depicted in Table 2.1a, and the corresponding eigenvalues and eigenvectors are presented in Table 2.1b (following step 4 in Algorithm 1.2).
Next, the number of significant eigenvectors (V) is determined as the smallest number of eigenvalues (N_v) such that Σ_{j=1..N_v} e_j ≥ θ, where θ is the variance threshold. The thesis uses θ = 0.997 as recommended in [77]. First, the objective f_j that has the highest contribution to V_j by magnitude is picked. If there exists at least one other objective with the opposite sign to the selected objective, then the opposite-sign objectives are also picked. If not (all objectives have the same sign), the objective with the second highest contribution by magnitude is selected.
Following these steps, a set of objectives F_e = {f_1, f_2, f_3, f_4, f_5, f_6, f_7, f_8} is selected. Then, from F_e, the thesis identifies the subsets of identically correlated objectives using the reduced correlation matrix (RCM), which has the same size as R except that columns not in F_e are removed. The thesis determines the potential identically correlated subsets Ŝ_1 = Ŝ_2 = Ŝ_3 = {f_1, f_2, f_3}. The threshold cut T_cor is calculated to be 0.8522. Since the correlations satisfy the condition of being greater than or equal to T_cor, S_1 = S_2 = S_3 = {f_1, f_2, f_3}. In each subset S, the thesis retains the objective with the highest selection score and eliminates the others. Therefore, with sc = {sc_1, sc_2, sc_3} = {2.29E−01, 2.29E−01, 2.29E−01}, the thesis selects objective 2 and removes objectives 1 and 3. Thus, F_s = {f_2, f_4, f_5, f_6, f_7, f_8} is retained.
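The two numeric steps above (accumulating eigenvalues up to θ, and collecting objectives whose pairwise correlation reaches the cut T_cor) can be sketched as follows; the eigenvalues and correlation values used here are illustrative, not taken from Table 2.1.

```python
import numpy as np

# Sketch of two L-PCA steps, assuming the eigenvalues are normalised to
# sum to 1: (1) keep the smallest number of principal components whose
# eigenvalues accumulate to at least theta; (2) collect the objectives
# whose correlation with a seed objective reaches the cut T_cor.

def significant_components(eigenvalues, theta=0.997):
    order = np.argsort(eigenvalues)[::-1]          # components, largest first
    cum = np.cumsum(np.sort(eigenvalues)[::-1])    # cumulative eigenvalue sum
    n_v = int(np.searchsorted(cum, theta) + 1)     # smallest N_v with sum >= theta
    return order[:n_v]

def correlated_subset(R, seed, t_cor):
    # Indices of objectives whose correlation with `seed` is at least T_cor.
    return [j for j in range(R.shape[0]) if R[seed, j] >= t_cor]

eig = np.array([0.6, 0.3, 0.098, 0.002])
print(significant_components(eig))                 # [0 1 2]
```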
Table 2.2 shows the matrix R with its corresponding eigenvalues and eigenvectors of L-PCA when combined with SPEA2 on DTLZ5(6,8). The correlation matrix R is presented in Table 2.2a, and the eigenvalues and eigenvectors of R are illustrated in Table 2.2b. Based on Table 2.2b, all eight principal components have to be included to account for θ = 0.997, leading to F_e = {f_1, f_2, f_3, f_4, f_5, f_6, f_7, f_8}. Since the values R_{1,2} = 0.16826, R_{1,3} = 0.08172, and R_{2,3} = 0.09392 are less than T_cor = 0.9751, the subset of identically correlated objectives is empty. As a result, the thesis cannot reduce any objective.
In short, the combination of L-PCA with SPEA2 (a multi-algorithm) cannot reduce any redundant objectives, while the combination of L-PCA with SPEA2+SDE (a many-algorithm) can correctly reduce the redundant objectives when solving DTLZ5(6,8).
Pair of NSGA-II and NSGA-III: The performance of L-PCA when combined with NSGA-II and NSGA-III on DTLZ5(6,8) is examined next. Table 2.3 shows the matrix R with its corresponding eigenvalues and eigenvectors of L-PCA when combined with NSGA-II on DTLZ5(6,8). According to the data in Table 2.3, all eight principal components need to be accumulated for the total to be greater than or equal to θ = 0.997. Based on the eigenvalues and eigenvectors in Table 2.3b, the set F_e (of conflicting objectives) is determined as F_e = {f_1, f_2, f_3, f_4, f_5, f_6, f_7, f_8}. Since R_{1,2} = 0.171481 is less than T_cor = 0.9751, the subset of identically correlated objectives is empty. As a result, no more objectives can be removed.

Table 2.4 shows the matrix R with its corresponding eigenvalues and eigenvectors of L-PCA when combined with NSGA-III on DTLZ5(6,8). The conflicting objectives along the six significant principal components are determined as F_e = {f_1, f_2, f_3, f_4, f_5, f_6, f_7, f_8}. The potential identically correlated subsets are Ŝ_1 = Ŝ_2 = Ŝ_3 = {f_1, f_2, f_3}. With T_cor = 1.0 − 0.4083(1 − 5/8) = 0.8469, all values R_{1,2} = R_{1,3} = R_{2,3} are greater than T_cor; therefore, there are three identically correlated sets S_1 = S_2 = S_3 = {f_1, f_2, f_3}. The thesis calculates sc = {2.59E−01, 2.59E−01, 2.59E−01} and retains the index of the maximum value; hence, it retains f_2 and removes the others. As a result, the thesis retains F_s = {f_2, f_4, f_5, f_6, f_7, f_8}.
Table 2.5: Means and standard deviations of the number of objectives retained, and the number of successes, when integrating objective reduction (L-PCA) into multi-algorithms/many-algorithms
[Table 2.5 body: for each instance DTLZ5(2,5), DTLZ5(3,5), DTLZ5(5,10), DTLZ5(7,10), and DTLZ5(5,20), the mean ± standard deviation of the retained objectives and the number of successes out of 20 runs.]

In short, the combination of L-PCA with NSGA-II cannot reduce any redundant objectives, while the combination of L-PCA with NSGA-III can correctly reduce the redundant objectives when solving an instance of DTLZ5(6,8).
The success of the dimensionality reduction algorithm when combined with multi-algorithms/many-algorithms
Table 2.5 presents the mean and standard deviation of the number of retained objectives found by the combinations of L-PCA with four multi-/many-algorithms, namely NSGA-II, SPEA2, NSGA-III, and SPEA2+SDE, for removing redundant objectives over 20 runs. It also reveals the number of times the algorithms correctly retain the essential objectives.
As can be clearly seen from Table 2.5, with the non-dominated solutions obtained from NSGA-II and SPEA2, the ORA can only successfully remove redundant objectives when the number of original objectives is small. For example, the combinations of L-PCA with NSGA-II/SPEA2 can exactly remove the redundant objectives on DTLZ5(2,5) and DTLZ5(3,5). However, when the number of original objectives increases, the combination of L-PCA with multi-algorithms cannot successfully remove the redundant objectives. For example, the combination of L-PCA with NSGA-II/SPEA2 is never successful in removing redundant objectives on DTLZ5(7,10) and DTLZ5(5,20).
In contrast, with the non-dominated solutions obtained from NSGA-III and SPEA2+SDE, the objective reduction can successfully remove redundant objectives even when the number of original objectives increases. For example, the combination of L-PCA with SPEA2+SDE can successfully remove the redundant objectives in all five instances of the test problem, while the combination of L-PCA with NSGA-III fails only in the two cases of DTLZ5(5,20).
In summary, the quality of the non-dominated solution sets generated by multi-algorithms or many-algorithms plays an important role in the performance of an objective reduction algorithm. The combination of an ODR with many-algorithms can successfully remove redundant objectives even if the number of original objectives is large. However, the combination of an ODR with multi-algorithms can often only remove redundant objectives when the number of original objectives is small.
2.1.3.2 The impact of the dimensionality reduction algorithm on many-algorithms
In order to demonstrate the benefits of an ODR for a many-algorithm, the thesis compares the performance of many-algorithms combined with objective reduction against that of the many-algorithms alone. The two metrics GD and IGD are used to examine the algorithms.
Table 2.6 presents the mean and standard deviation (in parentheses) of GD and IGD for five many-algorithms, including GrEA [93], KnEA [99], NSGA-III [23], RVEA* [14], and θ-DEA [96]. IGD_1 and GD_1 refer to the IGD and GD of the many-algorithms without being combined with any objective reduction algorithm; IGD_2 and GD_2 refer to the IGD and GD of the many-algorithms combined with an objective reduction algorithm, namely L-PCA, for removing redundant objectives. The table also presents the mean and standard deviation of the number of objectives retained after objective reduction is carried out.
Table 2.6: Means and standard deviations of the number of objectives retained (Retain), and those of the IGD and GD of the approximate PFs (IGD_1, GD_1, IGD_2, GD_2)

[Table 2.6 body: results on the DTLZ5 instances.]
The partial PF-based objective reduction algorithms
3.1 PCS-LPCA objective reduction algorithm
The main purpose of this algorithm is to take advantage of PCSEA and alleviate the limitations of the PCSEA-based objective reduction algorithm. The proposed algorithm uses PCSEA to generate non-dominated solutions, which are then used by L-PCA (see par. L-PCA and MVU on p.42) to eliminate redundant objectives. The thesis names this algorithm, which is presented in Algorithm 3.1, PCS-LPCA.
The algorithm has two key ideas: (1) the proposed algorithm can take advantage of PCSEA, which has the capability of finding some key solutions in the PF with lower complexity than other multi-algorithms/many-algorithms; (2) unlike the PCSEA-based objective reduction [79] (Algorithm 1.4, p.45), the proposed algorithm avoids using the sensitive parameter, threshold C.
Algorithm 3.1 (excerpt), step 4: P ← Unique-Nondominated(P); // Retain the unique solutions
In Algorithm 3.1, steps 3 to 12 are iterated until the termination criterion is satisfied. In step 3, PCSEA is performed; PCSEA finds solutions lying only on the "corners" of the PF corresponding to the remaining objective set, instead of the complete objective space. Among the solutions obtained by PCSEA, only the unique non-dominated solutions are retained, while the rest are discarded (step 4). In step 5, the L-PCA objective reduction is executed. This step finds the minimum set of conflicting objectives, attempting to keep the correlation structure of the unique non-dominated set obtained from the previous step by removing objectives that are non-conflicting or weakly conflicting along the important eigenvectors of the correlation matrix. The reduction (step 5) follows the same pseudo-code as steps 3 to 6 of Algorithm 1.2. The result of this step is an objective set. If the objective set after reduction, F_s, is unchanged, the loop exits. Otherwise, F_t is assigned the value of F_s, and the algorithm starts the next loop.
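The step-4 filter (retaining only the unique non-dominated solutions, assuming minimization) can be sketched as below; this is an illustrative O(N²) dominance check, not PCSEA's internal routine.

```python
import numpy as np

# Illustrative sketch of step 4: keep only the unique non-dominated
# solutions of a population, assuming minimisation. A solution p is
# dominated if some other solution q is no worse in every objective
# and strictly better in at least one.

def unique_nondominated(pop):
    pop = np.unique(np.asarray(pop, dtype=float), axis=0)  # drop duplicate rows
    keep = []
    for i, p in enumerate(pop):
        dominated = any(
            np.all(q <= p) and np.any(q < p)
            for j, q in enumerate(pop) if j != i
        )
        if not dominated:
            keep.append(p)
    return np.array(keep)

pop = [[1.0, 2.0], [1.0, 2.0], [2.0, 3.0], [0.5, 4.0]]
print(unique_nondominated(pop))   # keeps [0.5, 4.0] and [1.0, 2.0]
```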
This subsection presents the first loop of how the algorithm works when it solves the DTLZ5(3,5) problem [26]. The parameters for PCSEA are set as presented in Table 3.2 (p.79).
The solutions concentrate on the "corners" of the PF when compared with the complete objective space of the Pareto optimal set in Figure 3.1a.
Based on these solutions, the correlation matrix (R) with its corresponding eigenvalues and eigenvectors is presented in Table 3.1. Next, the number of significant eigenvectors (V) is determined as the smallest number of eigenvalues (N_v) such that Σ_{j=1..N_v} e_j ≥ θ; θ = 0.997 is used as recommended in [77].

Figure 3.1: Parallel coordinate plots for the Pareto optimal set and the solution set obtained by PCSEA

In each column, the objective f_j that has the highest contribution to V_j by magnitude is picked. If there exists at least one other objective with the opposite sign to the selected objective, the opposite-sign objectives are picked. If not, the objective with the second highest contribution by magnitude is selected. Following these steps, a set of objectives F_e = {f_1, f_2, f_3, f_4, f_5} is selected. Then, from F_e, the subsets of identically correlated objectives are identified using the reduced correlation matrix, which has the same size as R except that columns not in F_e are removed. The potential identically correlated subset Ŝ_1 = Ŝ_2 = Ŝ_3 = {f_1, f_2, f_3} is determined. The threshold cut T_cor = 0.5558 is calculated. Since the correlations satisfy the condition of being greater than or equal to T_cor, the identically correlated sets are S_1 = S_2 = S_3 = {f_1, f_2, f_3}. In each subset S, the objective with the highest selection score is retained, and the others are eliminated. Therefore, sc = {sc_1, sc_2, sc_3} = {4.05E−01, 4.05E−01, 4.05E−01} is computed to select objective 1 and remove objectives 2 and 3. As a result, F_s = {f_1, f_4, f_5} is kept after the first loop. F_t is assigned the value of F_s. The next loop then continues until no other objectives are removed (i.e., until stop equals true).
Table 3.1: The correlation matrix (R) with its corresponding eigenvalues (e) and eigenvectors (V) for the DTLZ5(3,5) problem
The computational complexity of PCS-LPCA consists of two main parts: executing PCSEA, and the reduction using L-PCA. The complexities of PCSEA and of L-PCA [77] are O(GMN log N) and O(NM² + M³), respectively. Hence, the complexity of the proposed algorithm, PCS-LPCA, is O(M²(GN log N + MN + M²)), where M is the number of objectives, and N and G are the population size and the number of generations for PCSEA.
In order to evaluate the PCS-LPCA algorithm, the thesis designs an experiment with a view to comparing the existing algorithms with PCS-LPCA. First, the thesis compares the PCS-LPCA algorithm with four well-known many-algorithms incorporated with L-PCA, so as to know whether PCSEA is better than the others in terms of generating non-dominated solutions for objective reduction. Four well-known many-algorithms, namely NSGA-III [23], GrEA [93], RVEA* [14], and θ-DEA [96], are adopted to search for the non-dominated solution sets. Second, the comparison between PCS-LPCA and the PCSEA-based objective reduction is executed to know which objective reduction is better when the same PCSEA is used for generating the non-dominated solutions.
The thesis experiments with the algorithms on two problems, namely DTLZ5(I,M) and WFG3(M). They contain redundant objectives, which have already been explained in detail in subsection 1.1.2.5 (p.22).
For the DTLZ5(I,M) problem, the thesis tests 36 instances, with values of I from 5 to 20 in steps of 5, and values of M from 10 to 100 in steps of 10. For the WFG3(M) problem, the thesis tests 10 instances, with values of M from 10 to 100 in steps of 10.
For generating the non-dominated solutions, the parameters for PCSEA are set as presented in Table 3.2, while all the many-algorithms used in the experiments are implemented in PlatEMO [83]. The population size is set to 200, and the number of generations for the many-algorithms is set to 2000. The probabilities of crossover and mutation are set to 0.9 and 0.1, respectively. The distribution index for crossover is set to 5, and the distribution index for mutation is set to 20 [77].
Table 3.2: Parameter settings for PCSEA

Parameter                Value    Parameter                    Value
size of population       200      SBX crossover index          10
number of generations    500      mutation probability         0.1
crossover probability    0.9      polynomial mutation index    20
For the objective reduction algorithms, the threshold C is set from 0.55 to 0.95 in steps of 0.05, plus 0.975 and 0.99, for the PCSEA-based objective reduction algorithm; the threshold θ is set to 0.997 for the L-PCA objective reduction algorithm. Thirty independent runs were executed for each instance.
3.1.3.1 The dependence of the PCSEA-based objective reduction algorithm's results on the threshold
In order to evaluate the effect of the threshold on the results, the thesis conducts this experiment. In the PCSEA-based reduction algorithm (Algorithm 1.4), there is a threshold parameter C. The algorithm examines each objective by calculating the ratio R between the numbers of non-dominated solutions after and before dropping that objective. If R is greater than the threshold C, then that objective is removed from the objective set. The threshold values 0.55, 0.60, 0.65, 0.75, 0.80, 0.85, 0.90, 0.95, 0.975 and 0.99 are tested on 3 instances of the DTLZ5(I,M) problem, with values of I of 5, 10 and 15, and values of M of 20, 40 and 60. The thesis executes 30 independent runs for each case. Figure 3.2 illustrates the proportion of success in finding the relevant objective set.

Figure 3.2: The proportion (%) of success in finding the relevant objective set

As can be seen from Figure 3.2, if I is equal to 5 and the threshold C is in the range from 0.8 to 0.9, the success rate of correctly finding the redundant objectives is up to 100 percent. When I is set to 10, the percentage of success in finding the relevant objectives is best at a threshold C of 0.95, and at its highest it is slightly smaller than 90 percent. For I equal to 15, the highest success over the list of threshold values is only 20 percent. This implies that if the number of relevant objectives increases, the threshold C needs to be set higher and appears sensitive; moreover, the percentage of success in finding the correct relevant objective set becomes lower. Hence, it is concluded that the threshold C is sensitive and difficult to set in advance.
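The ratio test described above can be sketched as follows; `count_nondominated` is a naive O(N²) helper assumed here for illustration, not the original pseudo-code, and the three-point example has a constant third objective, which the test flags as redundant.

```python
# Sketch of the ratio test R: for each objective k, count the
# non-dominated solutions before and after dropping k; if the ratio
# after/before exceeds the threshold C, objective k is judged redundant.

def count_nondominated(points):
    pts = [tuple(p) for p in points]
    return sum(
        1 for p in pts
        if not any(all(qi <= pi for qi, pi in zip(q, p)) and q != p
                   for q in pts)
    )

def is_redundant(points, k, c=0.9):
    pts = [tuple(p) for p in points]
    before = count_nondominated(pts)
    after = count_nondominated([p[:k] + p[k + 1:] for p in pts])
    return after / before > c

# The third objective below is constant, hence redundant; the first is not.
print(is_redundant([(1, 2, 5), (2, 1, 5), (3, 3, 5)], 2))  # True
print(is_redundant([(1, 2, 5), (2, 1, 5), (3, 3, 5)], 0))  # False
```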
3.1.3.2 Results when solving the DTLZ5(5,10), DTLZ5(10,100), WFG3(10) and WFG3(100) problems

Table 3.3 shows the reduced objective sets obtained for solving the two problems, DTLZ5(I,M) and WFG3(M), using the PCS-LPCA objective reduction algorithm. The second and third columns show the objectives kept when solving two instances of DTLZ5(I,M), namely DTLZ5(5,10) and DTLZ5(10,100).
For the DTLZ5(5,10) problem, if the four objectives f_7, f_8, f_9, f_10 and one of the remaining six objectives are kept, the reduced objective set is correct. The table shows that the algorithm has found the correct set in 30 out of 30 runs.
Table 3.3: Reduced sets of objectives obtained by PCSEA after performing L-PCA objective reduction

[Table 3.3 body: per run, the retained objectives for DTLZ5(5,10), DTLZ5(10,100), WFG3(10) and WFG3(100); e.g., run 30 retains f_3, f_7, f_8, f_9, f_10 for DTLZ5(5,10); f_88, f_92–f_100 for DTLZ5(10,100); f_5, f_10 for WFG3(10); and f_58, f_100 for WFG3(100).]

For the DTLZ5(10,100) problem, if the objectives f_92 to f_100 and one of the remaining objectives are kept, the reduced objective set is correct. The algorithm has found the correct set in 27 out of 30 runs in total. The algorithm failed to find the correct set in three cases: runs 4, 9 and 22.
Similarly, the results for solving WFG3(10) and WFG3(100) are shown in the 4th and 5th columns. If the last objective and one of the remaining objectives are retained, the algorithm performs correctly. The table shows that the algorithm has found the correct set in all 30 runs when solving the WFG3(10) problem (the 4th column). The last column shows the result when solving the WFG3(100) problem, where the algorithm has found the correct objective set in …
This subsection presents two aspects of the results. The first aspect is the reduced objective set obtained by objective reduction. The second is a comparison of the GD and IGD metrics of the solution sets without and with objective reduction. While the first aspect is shown in Table 3.4, the second is presented in Table 3.5.
Table 3.4 presents the numbers of successes in finding the essential objectives over the total of 30 runs for three main categories of reduction.
The first category of reduction is listed in the 4th column of the table.
The second category comprises the four columns (the 5th to the 8th columns) below the cell "many-algorithms and L-PCA". This category uses conventional many-algorithms for generating the non-dominated set and then adopts L-PCA for objective reduction.
The third category, which is listed in the 9th column, utilizes PCSEA for generating the non-dominated set, and then L-PCA is used for objective reduction.
Table 3.4: Comparison of the number of successes in finding the correct relevant objective set over the total of 30 runs, of PCS-LPCA with the PCSEA-based method and with many-algorithms combined with L-PCA

[Table 3.4 body: columns Problems, I, M, PCSEA-based, many-algorithms and L-PCA, PCS-LPCA.]
The first category, the PCSEA-based objective reduction, gives quite good results when the number of essential objectives is small (5), and the results decline as the number of objectives increases. This trend when solving the DTLZ5(I,M) problem is similar to that when solving the WFG3(M) problem. The table shows that the PCSEA-based objective reduction can find the correct set exactly 30 times in 30 runs when solving DTLZ5(I,M) with 5 essential objectives and a total number of objectives less than or equal to 60, or when solving the WFG3(M) problem with 20 objectives in total.
The GD and IGD results are shown in the 5th and 7th columns of the table. The table shows that the GD and IGD of the reduced solution set obtained by objective reduction are better than (smaller than) those without objective reduction in all 46 cases, for both the DTLZ5(I,M) and WFG3(M) problems.
3.2 PCS-Cluster objective reduction algorithm
The PCSEA-based ORA in [79] can efficiently remove redundant objectives. However, this algorithm has a number of drawbacks. First, the cutoff value of R (the threshold C) has to be provided before the objective reduction is run. Second, the ORA does not consider the importance of the order in which redundant objectives are removed. Finally, the algorithm was tested on the DTLZ5(I,M) problem with only a small number of relevant objectives (specifically 5).
The main purpose of the proposed algorithm is to take advantage of PCSEA and alleviate the limitations of the PCSEA-based objective reduction algorithm. The proposed method uses PCSEA to generate non-dominated solutions. The objectives in the solution set are then considered as objects (or points) for clustering, in order to eliminate redundant objectives. Algorithm 3.2 presents the main steps of the proposed algorithm, PCS-Cluster.
The algorithm has two key ideas: (1) the proposed algorithm can take advantage of PCSEA, which is able to find some key solutions in the PF with lower complexity than other multi-/many-algorithms; (2) unlike Algorithm 1.4, the proposed algorithm avoids using the sensitive parameter, threshold C.
The objectives of the solution set are considered as objects in the objective space. They are grouped into a number of sets known as clusters. To measure the distance between objective x and objective y for clustering, the distance d in formula (3.1) is used:

d = 1 − ρ(x, y)    (3.1)

where ρ(x, y) is the Pearson correlation coefficient between the random variables x and y; the range of ρ is from −1 to 1. The lower ρ is, the more negatively correlated the two variables are: the more one objective increases, the more the other decreases. Conversely, the higher ρ is, the more positively correlated the two variables are, so both objectives increase or decrease together.
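Formula (3.1) can be computed directly from two objective vectors sampled over the solution set; this minimal sketch uses NumPy's `corrcoef` for ρ.

```python
import numpy as np

# Minimal sketch of formula (3.1): d = 1 - rho(x, y), where rho is the
# Pearson correlation between two objective vectors sampled over the
# solution set. d is near 0 for objectives that rise and fall together
# and near 2 for perfectly conflicting (anti-correlated) ones.

def pearson_distance(x, y):
    rho = np.corrcoef(x, y)[0, 1]
    return 1.0 - rho

x = np.array([1.0, 2.0, 3.0, 4.0])
print(pearson_distance(x, 2 * x + 1))   # ~0.0: perfectly correlated
print(pearson_distance(x, -x))          # ~2.0: perfectly anti-correlated
```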
This procedure uses two kinds of clustering algorithms, namely k-means and DBSCAN. The k-means algorithm divides the set of objectives into k clusters. The value of k is determined using the ELBOW method [54]. The ELBOW method computes the distortions under different cluster numbers, counting from 1 to n, and k is the cluster number corresponding to a 99.0% percentage of variance explained, which is the ratio of the between-group variance to the total variance. DBSCAN automatically divides the set of objectives into a number of clusters using a density-based clustering algorithm, instead of requiring the number of clusters to be predetermined.
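The "percentage of variance explained" used by the ELBOW rule can be sketched as below; the cluster labels here are hand-made stand-ins for a k-means assignment, and in practice k would be grown until this value reaches 99.0%.

```python
import numpy as np

# Sketch of the quantity behind the ELBOW rule: the percentage of
# variance explained, i.e. between-group variance over total variance,
# computed here for 1-D points and a given cluster assignment.

def variance_explained(points, labels):
    points = np.asarray(points, dtype=float)
    total = np.sum((points - points.mean()) ** 2)
    within = sum(
        np.sum((points[labels == c] - points[labels == c].mean()) ** 2)
        for c in np.unique(labels)
    )
    return 1.0 - within / total   # between-group variance / total variance

pts = np.array([0.0, 0.1, 10.0, 10.1])
labels = np.array([0, 0, 1, 1])   # two tight, well-separated clusters
print(round(variance_explained(pts, labels), 4))   # 0.9999
```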
This subsection explains the working of the algorithm when solving the DTLZ5(5,10) problem. The parameters for PCSEA are set as in Table 3.2 (step 3). Initially, 10 objectives are assigned to the set F_t. Figure 3.3 draws the parallel coordinates of the F_t objectives of the solution set obtained by PCSEA (the solutions concentrate only on the "corners" of the objective space).
Figure 3.3: Parallel coordinate plots for the objectives of the solution set obtained by PCSEA in solving the DTLZ5(5,10) problem (the first loop)
After obtaining the solution set, a distance matrix between the objectives is calculated (following formula 3.1). The distance matrix is presented in Table 3.6; the cell at row i and column j contains the distance between objective i and objective j. Based on the distance matrix in Table 3.6, a clustering algorithm (k-means or DBSCAN) is executed. In the case of DBSCAN, it groups the 10 objectives
into clusters, in which each objective is assigned a cluster label as {1, 1, 1, 1, 1, 1, 2, 3, 4, 5}; thus there are 5 clusters in total. One object (one objective of the multi-/many-problem) is retained from each cluster. The five objectives 1, 7, 8, 9, 10 are then retained and assigned to the set F_s, while the others are removed. As a result, F_t is not equal to F_s, so F_t is assigned the value of F_s and the loop continues.

Figure 3.4: Parallel coordinate plots for the objectives of the solution set obtained by PCSEA in solving the DTLZ5(5,10) problem (the second loop)
At the next loop, PCSEA generates a solution set with the F_t objective set. This solution set is illustrated in Figure 3.4. The distance matrix is calculated and presented in Table 3.7. Clustering divides the F_t objective set into the clusters {1, 2, 3, 4, 5}. Each cluster contains only one object (objective), and one object needs to be kept per cluster; therefore, all objectives are retained. In other words, F_s is the same as F_t, and the algorithm exits. Finally, the algorithm retains the right five objectives, {1, 7, 8, 9, 10}.
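The retention rule in this walkthrough (keep one objective per cluster) can be sketched directly from the cluster labels given above; the label vector is the DBSCAN output quoted in the text.

```python
# Sketch of the retention rule: given one cluster label per objective,
# keep the first objective encountered in each cluster. The label
# vector below is the DBSCAN output quoted in the DTLZ5(5,10) walkthrough.

def retain_one_per_cluster(labels):
    seen, kept = set(), []
    for idx, lab in enumerate(labels, start=1):  # objectives numbered from 1
        if lab not in seen:
            seen.add(lab)
            kept.append(idx)
    return kept

labels = [1, 1, 1, 1, 1, 1, 2, 3, 4, 5]
print(retain_one_per_cluster(labels))   # [1, 7, 8, 9, 10]
```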
Table 3.7: The distance matrix between the 5 objectives (1, 7, 8, 9, and 10)
The computational complexity of PCS-Cluster consists of two main parts: executing PCSEA, and the clustering algorithms. The complexity of PCSEA is O(GMN log N). This algorithm uses two clustering algorithms, namely k-means and DBSCAN. Since the complexity of k-means is NP-hard, the complexity of PCS-Cluster (k-means) is also NP-hard. The complexity of DBSCAN is O(M²), where M is the number of objects, i.e., the number of objectives in the optimization problem. Thus, the complexity of PCS-Cluster (DBSCAN) is O(GM²N log N + M³), in which G is the number of generations for PCSEA, M is the number of objectives, and N is the population size for PCSEA.
The thesis experiments on the DTLZ5(I,M) problem [26], with PCSEA generating the non-dominated solutions. The thesis then compares the proposed algorithm (PCS-Cluster) with the best corresponding results of the PCSEA-based objective reduction.
The experiments are performed on 28 instances of the DTLZ5(I,M) problem in total. The values of I are set to 5, 10 and 15, and those of M are set from 10 to 100 in steps of 10. The parameters for PCSEA are set as presented in Table 3.2 (p.79).
In the PCSEA-based objective reduction algorithm, the threshold C is set from 0.55 to 0.95 in steps of 0.05, plus 0.975 and 0.99. The parameters for the PCS-Cluster objective reduction (with the k-means and DBSCAN clustering algorithms) are set as exemplified in Table 3.8 and Table 3.9. The distance type for both clustering algorithms is the Pearson correlation; the percentage means the percentage of variance explained; and minObjs is the minimum number of objects required to form a cluster. A total of 30 independent runs are executed for each instance.
This section first examines and chooses values for one threshold for k-means and one threshold for DBSCAN. Next, the performance of the proposed algorithm is compared with that of the original method, the PCSEA-based objective reduction.
3.2.3.1 Examination of thresholds for k-means and DBSCAN in solving the DTLZ5(10,*) problem
This section examines the effect of choosing the percentage threshold and the distance threshold for the clustering algorithms (k-means and DBSCAN, respectively) on the results of solving the DTLZ5(10,*) problem. For the distance threshold, the values are set from 0.2 to 0.9 in steps of 0.05. For the percentage threshold, the values are set to 95, 96, 97, 98 and 99. For the DTLZ5(I,M) problem, I is fixed at the value of 10, and the values of M are set from 20 to 100 in steps of 10. Each instance is run independently 30 times; hence, 270 cases are obtained in total.
For the percentage threshold, the algorithm gains the best result at the value of 99. For the distance threshold, Figure 3.5 plots the number of successes in finding the relevant objectives and removing the redundant ones when solving the DTLZ5(10,*) problems, for a number of different values of the distance threshold for DBSCAN. The plot implies that the result is worst at the threshold of 0.9 and gradually improves as the threshold decreases. The result is best when the threshold is small (at the values of 0.2, 0.25, and 0.3). Based on this examination, the thesis chooses the distance threshold as illustrated in Table 3.9 for subsection 3.2.3.3.
Figure 3.5: The number of successes in determining the relevant objectives in solving the DTLZ5(10,*) problem by using DBSCAN
This subsection shows the results when solving two instances of the DTLZ5(I,M) problem and two instances of the WFG3(M) problem by using the PCS-Cluster algorithms, over 30 runs for each instance. The results are presented in Tables 3.10
Table 3.10: Reduced sets of objectives obtained by PCSEA and PCS-Cluster objective reduction (k-means)

[Table 3.10 body: per run, the retained objectives for DTLZ5(5,10), DTLZ5(10,100), WFG3(10) and WFG3(100).]
Table 3.11: Reduced sets of objectives obtained by PCSEA and PCS-Cluster objective reduction (DBSCAN)

[Table 3.11 body: per run, the retained objectives; e.g., run 30 retains f_1, f_7, f_8, f_9, f_10 for DTLZ5(5,10); f_1, f_92–f_100 for DTLZ5(10,100); f_1, f_10 for WFG3(10); and f_1, f_100 for WFG3(100).]

and 3.11. In each table, the second and third columns show the objectives kept when solving the two instances of DTLZ5(I,M), namely DTLZ5(5,10) and DTLZ5(10,100).
In Table 3.10, PCS-Cluster (k-means) has found the correct set 30 and 29 times out of the total of 30 runs when solving DTLZ5(5,10) and DTLZ5(10,100), respectively. The algorithm has failed to find any correct objective set when solving WFG3(10) and WFG3(100).
In Table 3.11, the algorithm has found the correct essential objective set 30 times each when solving DTLZ5(5,10), WFG3(10) and WFG3(100). The algorithm has failed only once, in run 4, when solving DTLZ5(10,100) out of the total of 30 runs.
This subsection compares the performance of the proposed algorithm (PCS-Cluster) with the existing methods, which are the PCSEA-based objective reduction and the L-PCA objective reductions with many-algorithms or PCSEA. The first nine columns of Table 3.12 are the contents of Table 3.4, which are explained in subsection 3.1.3.3 (p.82). The two last columns of Table 3.12 indicate the numbers of successes in finding the correct relevant objective set for PCS-Cluster with the k-means and DBSCAN clustering algorithms.
As can be seen from Table 3.12, similarly to the other categories (the 4th column, the 5th to 8th columns, and the 9th column), when solving the DTLZ5(I,M) problem, the results of PCS-Cluster (the two last columns) tend to decrease gradually as the value of I increases and/or the value of M increases. When solving WFG3(M), while PCS-Cluster (k-means) (column 10) does the worst, PCS-Cluster (DBSCAN) (the last column of the table) does the best.
In particular, the table illustrates that PCS-Cluster (k-means) can find the essential objectives correctly 837 times, which is greater than any
Table 3.12: Comparison of the number of successes in finding the correct relevant objective set in the total of 30 runs, of the PCS-Cluster algorithms with the PCSEA-based method, many-algorithms with L-PCA, and PCS-LPCA

[Table 3.12 body: columns Problems, I, M, PCSEA-based, many-algorithms and L-PCA, PCS-LPCA, PCS-Cluster (k-means), PCS-Cluster (DBSCAN).]
of the others when solving the DTLZ5(I,M) problem. However, it performs the worst when solving the WFG3(M) problem, with only 11 successes; meanwhile, the