A New Optimization Based on Parallelizing Hybrid PSOGSA Algorithm

Jie Yu(1), Hong-Jiang Wang(2), Jeng-Shyang Pan(3), Kuo-Chi Chang(2,4,5), Truong-Giang Ngo(6), and Trong-The Nguyen(2,7)

(1) College of Mechanical and Automotive Engineering, Fujian University of Technology, Fuzhou 350118, China
(2) Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, China. albertchangxuite@gmail.com, jvnthe@gmail.com
(3) College of Computer Science and Engineering, Shandong University of Science and Technology, Shandong, China
(4) College of Mechanical and Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan
(5) Department of Business Administration, North Borneo University College, Sabah, Malaysia
(6) Thuyloi University, 175 Tay Son, Dong Da, Hanoi, Vietnam. giangnt@tlu.edu.vn
(7) Haiphong University of Management and Technology, Haiphong, Vietnam

Abstract. This study proposes a new metaheuristic algorithm for global optimization based on a parallel hybridization of particle swarm optimization (PSO) and the gravitational search algorithm (GSA). The swarm is divided into subgroups, and communication between the subgroups is realized through added mutation strategies. Twenty-three benchmark functions are used to test the performance and verify the feasibility of the proposed algorithm. Compared with PSO, GSA, and parallel PSO (PPSO), the results reveal that the proposed PPSOGSA achieves higher precision than the competing algorithms.

Keywords: Parallel PSOGSA algorithm · Mutation strategy · Particle swarm optimization · Gravitational search algorithm

© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021. In: J.-S. Pan et al. (eds.), Advances in Intelligent Information Hiding and Multimedia Signal Processing, Smart Innovation, Systems and Technologies 212. https://doi.org/10.1007/978-981-33-6757-9_22

1 Introduction

Metaheuristic algorithms are nowadays used in many industries, such as power, transportation, and aviation [1]. Nature-inspired metaheuristics fall into three categories: those modeled on physical phenomena, those modeled on biological evolution, and those modeled on the living habits of populations. Each category has well-known representatives: the gravitational search algorithm (GSA) [2], simulated annealing (SA) [3], and black hole (BH) [4] are inspired by physical phenomena; differential evolution (DE) [5] and genetic algorithms (GA) [6] are inspired by biological evolution; and PSO [7], the grey wolf optimizer (GWO) [8], the firefly algorithm (FA) [9], and the whale optimization algorithm (WOA) [10] are powerful representatives of population-behavior methods.

Most metaheuristic algorithms converge slowly and are prone to locally optimal solutions. Many researchers have proposed hybrid algorithms to mitigate this, mainly by integrating the advantages of different algorithms to enhance their search ability, whether partially or comprehensively; the resulting performance is better than that of the single optimization algorithms before mixing [11–13]. In 2010, for example, Seyedali Mirjalili and others proposed a hybrid algorithm of PSO and GSA
(PSOGSA) [14], which combines the advantages of PSO and GSA and outperforms the original PSO and GSA algorithms. Other scholars have proposed clustering-based parallelization to improve performance; the parallel particle swarm optimization (PPSO) algorithm [15–17] and the clustered leapfrog algorithm are good representatives. Since both hybridization and clustering help improve an algorithm's performance, this study puts forward a new optimization algorithm based on a parallel hybrid PSOGSA, adding a mutation strategy on the optimal solution to enhance performance. Twenty-three benchmark functions are selected for the performance test of the improved algorithm, and it is compared with four related algorithms.

2 Three Standard Algorithms

The mathematical model of PSO is as follows:

v_i^d(t+1) = \omega \, v_i^d(t) + c_1 \cdot rand \cdot (pbest_i^d - x_i^d(t)) + c_2 \cdot rand \cdot (gbest^d - x_i^d(t))   (1)

x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)   (2)

Here v_i^d(t) is the velocity of the ith particle in dimension d (d ∈ {1, 2, ..., D}, where D is the dimension of the search space), x_i^d(t) is the particle's current position, ω is the inertia weight, c_1 and c_2 are two constants, rand is a random number in [0, 1], pbest_i^d is the particle's personal best, and gbest is the best solution found so far.

The mathematical model of the GSA algorithm can be expressed by Formulas (3)–(5):

vel_i^d(t+1) = rand \cdot vel_i^d(t) + a_i^d(t)   (3)

S_i^d(t+1) = S_i^d(t) + vel_i^d(t+1)   (4)

a_i^d(t) = F_i^d(t) / M_i(t)   (5)

where S_i^d(t) is the position of the ith particle, vel_i^d(t) its velocity, a_i^d(t) its acceleration, M_i(t) its inertia mass, and F_i^d(t) the resultant force acting on it. The inertia mass is computed from fitness as shown in Eqs. (6) and (7):

m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)}   (6)

M_i(t) = m_i(t) \Big/ \sum_{j=1}^{N} m_j(t)   (7)

where fit_i(t) is the fitness value of the ith particle, best(t) is the best value obtained so far, worst(t) is the worst fitness value obtained so far, and N is the total number of particles. With the inertia masses, the interaction force between particles can be expressed as:

F_{ij}^d(t) = G(t) \cdot \frac{M_i(t) \, M_j(t)}{R_{ij}(t) + \varepsilon} \cdot (X_j^d(t) - X_i^d(t))   (8)

where F_{ij}^d(t) is the d-dimension gravitational force between particles i and j, R_{ij}(t) is the Euclidean distance between them, ε is a small constant, and G(t) is the gravitational constant, whose expression is shown in Eq. (9):

G(t) = G_0 \cdot \exp(-\alpha \, t / maxt)   (9)

where α and G_0 are constants and maxt is the maximum number of iterations. With the above, the resultant force is the randomly weighted sum

F_i^d(t) = \sum_{j=1, j \neq i}^{N} rand_j \cdot F_{ij}^d(t)   (10)

The hybrid PSOGSA algorithm can be expressed by Formulas (11) and (12):

V_i^d(t+1) = \omega' \, V_i^d(t) + c_1' \cdot rand_1 \cdot a_i^d(t) + c_2' \cdot rand_2 \cdot (gbest^d - X_i^d(t))   (11)

X_i^d(t+1) = X_i^d(t) + V_i^d(t+1)   (12)

where c_1', c_2' are constants, rand_1, rand_2 are random numbers in [0, 1], ω' is the inertia weight, V_i^d(t) is the velocity of the ith particle, and a_i^d(t) is its acceleration, computed exactly as in GSA. X_i^d(t) is the position of particle i in dimension d at iteration t, and gbest^d is the dth component of the current best solution.
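To make the update rules concrete, the following is a minimal NumPy sketch of one PSOGSA iteration per Eqs. (5)–(12). The function names, the ε guards, and the vectorized pairwise-force computation are our own choices for illustration, not code from the paper:

```python
import numpy as np

def gravity_constant(t, max_t, g0=100.0, alpha=23.0):
    # Eq. (9): G(t) = G0 * exp(-alpha * t / maxt)
    return g0 * np.exp(-alpha * t / max_t)

def psogsa_step(X, V, fitness, gbest, t, max_t, c1=0.5, c2=1.5, eps=1e-10):
    """One PSOGSA iteration on an (N, D) population, following Eqs. (5)-(12)."""
    n, d = X.shape
    fit = np.apply_along_axis(fitness, 1, X)
    best, worst = fit.min(), fit.max()
    m = (fit - worst) / (best - worst - eps)      # Eq. (6), minimization
    M = m / (m.sum() + eps)                       # Eq. (7)
    G = gravity_constant(t, max_t)
    diff = X[None, :, :] - X[:, None, :]          # X_j - X_i for all pairs
    R = np.linalg.norm(diff, axis=2)              # Euclidean distances R_ij
    F_pair = G * (M[:, None] * M[None, :] / (R + eps))[:, :, None] * diff  # Eq. (8)
    F = (np.random.rand(n, n)[:, :, None] * F_pair).sum(axis=1)  # Eq. (10); j = i term is zero
    a = F / (M[:, None] + eps)                    # Eq. (5)
    w = np.random.rand()                          # omega' drawn from [0, 1]
    V = w * V + c1 * np.random.rand(n, d) * a \
              + c2 * np.random.rand(n, d) * (gbest - X)  # Eq. (11)
    return X + V, V                               # Eq. (12)
```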
3 Parallel PSOGSA

This paper proposes a parallel PSOGSA. The idea is to divide the N particles into G subgroups, let each subgroup run PSOGSA independently to find the optimal value, and let the subgroups communicate with each other at specific iteration counts. This exploits the cooperation between the subsets, so that the subgroups continuously update toward high-quality solutions. This study uses two communication strategies for subgroup communication, both triggered at specific iteration counts.

Strategy 1. To help the algorithm quickly escape local-optimum traps, a mutation idea is used that moves individuals away from the global optimal solution. In each subgroup, the same number of individuals is randomly selected for mutation. When the iteration trigger R is reached, they are updated according to Formula (13):

X_i^j = X_k^{gbest} \times \gamma   (13)

where X_i^j is the current solution of a randomly selected individual (i ∈ {1, 2, ..., sizepop/G}, with sizepop the total number of particles, and j ∈ {1, 2, ..., dim}, with dim the dimension of the search space), X_k^{gbest} is the kth component of the current global optimal solution (k ∈ {1, 2, ..., dim}), and γ is a number in the range [0.1, 1].

Strategy 2. Choose two subgroups G_a and G_b arbitrarily from the G subgroups; when the iteration trigger R_1 is reached, G_a and G_b communicate the optimal value and optimal solution to the remaining subgroups G_k (k ∉ {a, b}). The communication method is shown in Fig. 1.

Fig. 1 Schematic diagram of the communication strategy

In Fig. 1, k is the current iteration number, R is the trigger condition, max is the maximum number of cycles, G_i (i ∈ {1, 2, ..., n}) denotes the subgroups, n is the number of groups, and G_1 and G_3 are the two selected subgroups. The global optimal solution of each subgroup is also perturbed within a small range of variation, which further expands the search scope and helps the algorithm avoid falling into a local optimum. The method is shown in Formulas (14) and (15):

W^d(t) = \Big( \sum_{i=1}^{Popsize} V_i^d(t) \Big) \Big/ Popsize   (14)

gbest^{*d}(t) = gbest^{*d}(t) + W^d(t) \cdot N(0, 1)   (15)

where V_i^d(t) and Popsize are as before, N(0, 1) is the standard normal distribution, and gbest^{*d}(t) is the optimal solution currently obtained by the subgroup.

Taking G = 4 groups as an example, Fig. 2 shows the process of the PPSOGSA algorithm using the two communication strategies, where k = R is the trigger condition of the first strategy, k = R_1 is the trigger condition of the second strategy, and k = max is the ending condition of the algorithm loop. That is to say, every R iterations PPSOGSA communicates using Strategy 1, and every R_1 iterations it communicates using Strategy 2.

Fig. 2 Communication method of PPSOGSA, taking grouping into four groups as an example

The PPSOGSA algorithm pseudo-code is shown in Table 1.
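As a rough illustration of the two strategies and the perturbation, here is a hedged NumPy sketch. The paper specifies only Eqs. (13)–(15) and Fig. 1, so the bookkeeping below (group arrays, mutating one individual per subgroup, and how Strategy 2 decides which optimum to broadcast) is our assumption:

```python
import numpy as np

def strategy_1(groups, gbest, gamma_low=0.1, gamma_high=1.0):
    """Eq. (13): in each subgroup, mutate a randomly chosen individual using
    a scaled component of the global best (one individual per subgroup here;
    the paper selects the same number of individuals in every subgroup)."""
    dim = gbest.size
    for X in groups:                              # X is an (n_sub, dim) array
        i = np.random.randint(X.shape[0])         # random individual
        j = np.random.randint(dim)                # dimension to overwrite
        k = np.random.randint(dim)                # dimension of gbest to copy
        X[i, j] = gbest[k] * np.random.uniform(gamma_low, gamma_high)

def strategy_2(group_bests, group_best_vals):
    """Pick two subgroups G_a, G_b at random and share the better of their
    optima with every remaining subgroup that it improves (Fig. 1)."""
    g = len(group_bests)
    a, b = np.random.choice(g, size=2, replace=False)
    src = a if group_best_vals[a] < group_best_vals[b] else b
    for k in range(g):
        if k not in (a, b) and group_best_vals[src] < group_best_vals[k]:
            group_bests[k] = group_bests[src].copy()
            group_best_vals[k] = group_best_vals[src]

def perturb_gbest(gbest, V):
    """Eqs. (14)-(15): perturb a subgroup's best solution by the mean particle
    velocity scaled with standard normal noise."""
    W = V.mean(axis=0)                              # Eq. (14)
    return gbest + W * np.random.randn(gbest.size)  # Eq. (15)
```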
4 Experimental Data

The performance of the PPSOGSA algorithm is tested on 23 benchmark functions. In this experiment, the objective is the minimum value of each function in its corresponding range. The parameters of the compared optimization algorithms are set as follows. For PSO: C1 = C2 = 1.5, ω is linearly reduced from 0.9 to 0.2, and the velocity bounds are vmax = 5, vmin = −5. For GSA: G0 = 100, α = 23. For PSOGSA: c1 = 0.5, c2 = 1.5, G0 = 1, α = 23, and ω is a random number in [0, 1]. For PPSO: C1 = C2 = 1.5, ω decreases linearly from 0.9 to 0.2, vmax = 5, vmin = −5. For PPSOGSA: G0 = 1, α = 23, ω is a random number in [0, 1], the population is divided into four groups, R = 10, and R1 = 50. The maximum number of iterations of all algorithms is 1000, and the number of search agents is popsize = 60. Each algorithm runs 20 times independently, and the average value of the 20 runs is taken as the experimental result, shown in Table 2.

Table 1 Pseudo-code of PPSOGSA

Initialization: Initialize N particles and divide them into G groups randomly and evenly; set the largest generation max_iteration and the communication-strategy trigger conditions R and R1; initialize the gravitational constant, inertia masses, and accelerations.
Iteration:
1:  While T < max_iteration
2:    Update the gravitational constant through Formula (9)
3:    For group = 1 to G
4:      For i = 1 to N/G
5:        Calculate the fitness function value of each particle
6:        Update the global optimal solutions and optimal values of the subgroup and the whole population
7:      end For
8:      Update the inertia masses, forces, accelerations, particle velocities and positions
9:      Using the updated particle velocities, update the global-optimum perturbation with Formulas (14) and (15)
10:   end For
11:   If T is an integral multiple of R, communicate using Strategy 1
12:   If T is an integral multiple of R1, communicate using Strategy 2
13:   T = T + 1
14: end While
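For illustration, a minimal driver mirroring the Table 1 pseudo-code is sketched below. It reuses the hypothetical psogsa_step, strategy_1, strategy_2, and perturb_gbest helpers from the earlier sketches; the initialization range, the greedy acceptance of the perturbed optimum, and the sphere test function are our assumptions rather than the paper's code:

```python
import numpy as np

def ppsogsa(fitness, dim=30, n=60, groups=4, max_iter=1000, R=10, R1=50,
            lo=-100.0, hi=100.0):
    """PPSOGSA skeleton following Table 1; relies on the psogsa_step,
    strategy_1, strategy_2, and perturb_gbest sketches defined earlier."""
    sub = n // groups
    Xs = [np.random.uniform(lo, hi, (sub, dim)) for _ in range(groups)]
    Vs = [np.zeros((sub, dim)) for _ in range(groups)]
    g_bests = [X[0].copy() for X in Xs]           # per-subgroup best solutions
    g_vals = [fitness(b) for b in g_bests]
    for t in range(1, max_iter + 1):
        for g in range(groups):
            # each subgroup runs one independent PSOGSA step
            Xs[g], Vs[g] = psogsa_step(Xs[g], Vs[g], fitness,
                                       g_bests[g], t, max_iter)
            fit = np.array([fitness(x) for x in Xs[g]])
            i = int(fit.argmin())
            if fit[i] < g_vals[g]:                # track the subgroup optimum
                g_bests[g], g_vals[g] = Xs[g][i].copy(), fit[i]
            cand = perturb_gbest(g_bests[g], Vs[g])   # Eqs. (14)-(15)
            cf = fitness(cand)
            if cf < g_vals[g]:                    # greedy acceptance: our choice
                g_bests[g], g_vals[g] = cand, cf
        if t % R == 0:                            # Strategy 1 trigger
            strategy_1(Xs, g_bests[int(np.argmin(g_vals))])
        if t % R1 == 0:                           # Strategy 2 trigger
            strategy_2(g_bests, g_vals)
    k = int(np.argmin(g_vals))
    return g_bests[k], g_vals[k]

# usage sketch on the sphere function:
# best_x, best_val = ppsogsa(lambda x: float(np.sum(x * x)))
```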
The bold numbers in Table 2 are the best values obtained, divided into the best average value and the best optimal value. According to the statistical analysis of Table 2, for the best average value, the numbers of functions on which PSO, GSA, PSOGSA, PPSO, and PPSOGSA perform best are 5, 7, 6, 7, and 17, respectively. For the best optimal value found during testing, the corresponding numbers are 10, 11, 12, 10, and 18. From these statistics we confirm that the overall performance of PPSOGSA is better than PPSO, GSA, PSOGSA, and PSO: on the multi-dimensional objective functions its function values are more accurate and closer to the true optima.

Table 2 Comparison of PPSOGSA with the GSA, PSO, PSOGSA, and PPSO algorithms on the 23 test functions, reporting the average (Ave) and best (Best) values over 20 runs for each algorithm; bold entries in the original table mark the best result among the compared algorithms.

Figure 3 shows the convergence curves of some selected benchmark functions. Comparing the convergence curves of the algorithms shows that the PPSOGSA algorithm proposed in this paper converges faster and performs better than the four other optimization algorithms in the figure.

Fig. 3 Convergence curves of selected functions (F1, F5, F8, F10, F11, F21)

5 Conclusion

In this study, we introduced a parallel PSOGSA algorithm based on hybridizing the PSOGSA algorithm with the idea of clustering. Mutation and interaction between subgroups are used to drive the algorithm toward the optimal value. Twenty-three benchmark functions are used to evaluate the performance of the PPSOGSA algorithm. Comparison of the obtained results with the PSOGSA, GSA, PSO, and PPSO algorithms shows that the proposed PPSOGSA provides better overall performance than the other four optimization algorithms.

Acknowledgements. This work was supported in part by the Fujian provincial buses and special vehicles R&D collaborative innovation center project (Grant Number: 2016BJC012).

References

1. Nguyen, T.T., Pan, J.S., Dao, T.K.: An improved flower pollination algorithm for optimizing layouts of nodes in wireless sensor network. IEEE Access 7, 75985–75998 (2019)
2. Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
3. Van Laarhoven, P.J., Aarts, E.H.: Simulated annealing. In: Simulated Annealing: Theory and Applications, pp. 7–15. Springer, Berlin (1987)
4. Hatamlou, A.: Black hole: a new heuristic optimization approach for data clustering. Inf. Sci. 222, 175–184 (2013)
5. Price, K.V.: Differential evolution. In: Handbook of Optimization, pp. 187–214. Springer, Berlin (2013)
6. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
7. Shi, Y.: Particle swarm optimization: developments, applications and resources. In: Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), vol. 1, pp. 81–86. IEEE (2001)
8. Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Software 69 (2014)
9. Yang, X.-S.: Firefly algorithm. In: Nature-Inspired Metaheuristic Algorithms, vol. 20, pp. 79–90 (2008)
10. Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Software 95 (2016)
11. Esmin, A., Lambert-Torres, G., Alvarenga, G.B.: Hybrid evolutionary algorithm based on PSO and GA mutation. In: Sixth International Conference on Hybrid Intelligent Systems (HIS'06), pp. 57–57. IEEE (2006)
12. Nguyen, T.-T., Qiao, Y., Pan, J.-S., Chu, S.-C., Chang, K.-C., Xue, X., Dao, T.-K.: A hybridized parallel bats algorithm for combinatorial problem of traveling salesman. J. Intell. Fuzzy Syst. Preprint, 1–10 (2020). https://doi.org/10.3233/jifs-179668
13. Nguyen, T.-T., Pan, J.-S., Chu, S.-C., Roddick, J.F., Dao, T.-K.: Optimization localization in wireless sensor network based on multi-objective firefly algorithm. J. Netw. Intell. 1, 130–138 (2016)
14. Mirjalili, S., Hashim, S.Z.M.: A new hybrid PSOGSA algorithm for function optimization. In: 2010 International Conference on Computer and Information Application, pp. 374–377. IEEE (2010)
15. Chang, J.-F., Roddick, J.F., Pan, J.-S., Chu, S.-C.: A parallel particle swarm optimization algorithm with communication strategies (2005)
16. Chang, K.C., Chu, K.C., Wang, H.C., Lin, Y.C., Pan, J.S.: Energy saving technology of 5G base station based on internet of things collaborative control. IEEE Access 8, 32935–32946 (2020)
17. Chang, K.-C., Chu, K.-C., Wang, H.-C., Lin, Y.-C., Pan, J.-S.: Agent-based middleware framework using distributed CPS for improving resource utilization in smart city. Futur. Gener. Comput. Syst. 108, 445–453 (2020)
