
Many-objective Evolutionary Algorithm with Knee point-based Reference Vector Adaptive Adjustment Strategy

  • Zhu, Zhuanghua (Information Department, Shanxi Finance & Taxation College)
  • Received : 2022.04.18
  • Accepted : 2022.08.18
  • Published : 2022.09.30

Abstract

The adaptive adjustment of reference or weight vectors in decomposition-based methods has been a hot research topic in the evolutionary community over the past few years. Although various methods have been proposed regarding this issue, most of them aim to diversify solutions in the objective space to cover the true Pareto fronts as much as possible. Different from them, this paper proposes a knee point-based reference vector adaptive adjustment strategy to concurrently balance the convergence and diversity. To be specific, the knee point-based reference vector adaptive adjustment strategy firstly utilizes knee points to construct adaptive reference vectors. After that, a new fitness function is defined mathematically. Then, this paper further designs a many-objective evolutionary algorithm with the knee point-based reference vector adaptive adjustment strategy, where the mating operation and environmental selection are designed accordingly. The proposed method is extensively tested on the WFG test suite with 8, 10 and 12 objectives and on MPDMP, in comparison with state-of-the-art optimizers. Extensive experimental results demonstrate the superiority of the proposed method over state-of-the-art optimizers and its practicability in tackling practical many-objective optimization problems.

1. Introduction

Many-objective optimization[1,2,3,4] has gained wide attention over the past few years due to its applications in various engineering problems[5,6,7,8]. Mathematically, many-objective optimization problems (MaOPs)[9,10,11,12] can be formulated as follows:

\(\begin{aligned}\operatorname{Min} F(X)=\left(f_{1}(X), f_{2}(X), \ldots, f_{M}(X)\right), \quad \text { subject to } X \in \Omega\end{aligned}\)       (1)

where 𝑋 = (𝑥1, 𝑥2, . . . , 𝑥𝐷) indicates a D-dimensional decision vector, and M>3. Since the objectives conflict with each other, only the best trade-off solutions can be obtained. For MaOPs, the Pareto dominance is used to describe the relations between solutions. Solution 𝑋𝑎 Pareto dominates 𝑋𝑏 if and only if ∀𝑖 ∈ {1,2, . . . , 𝑀}: 𝑓𝑖(𝑋𝑎) ≤ 𝑓𝑖(𝑋𝑏) and ∃𝑖 ∈ {1,2, . . . , 𝑀}: 𝑓𝑖(𝑋𝑎) < 𝑓𝑖(𝑋𝑏). The image of the best trade-off solutions in the objective space is called the Pareto front.
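For illustration, the dominance relation above can be checked directly from objective vectors. The following minimal Python sketch (the function name and the use of NumPy are illustrative, not part of the paper) tests whether one solution Pareto dominates another under minimization:

import numpy as np

def pareto_dominates(f_a: np.ndarray, f_b: np.ndarray) -> bool:
    # f_a dominates f_b iff f_a is no worse in every objective
    # and strictly better in at least one objective (minimization).
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

For example, (1, 2, 3) dominates (1, 3, 3), while neither of (1, 3) and (2, 2) dominates the other.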

Different from multi-objective problems, which generally have 2 or 3 objectives, the Pareto dominance[4,13,14,15,16] suffers from a loss of evolutionary pressure when dealing with MaOPs. To effectively deal with MaOPs, researchers have tailored various techniques, which can be divided into the following three categories. (1) Dominance relation-based methods, which enhance the evolutionary pressure by exploiting the dominance relations between solutions. Typical methods include MaOEA-ARV[9], NSGA-III[17], MaOEA-RNM[18] and NSGA-II-SDR[19]. These optimizers generally rely on the Pareto dominance and its variants to maintain the evolutionary pressure. Although they have shown promising performance on various MaOPs, they still risk converging to local regions. (2) Indicator-based methods. IGD[20], HV[21] and R2[22] are frequently used indicators; methods in this category choose solutions by measuring their contributions to the employed indicator. Among them, HV is the most widely used. However, HV-based methods have several limitations; for example, HV is computationally expensive and prefers knee-point regions. (3) Decomposition-based methods, whose main idea is to decompose a complex MaOP into multiple simple subproblems and tackle them collaboratively. To be specific, diversity is guaranteed by the decomposed subproblems, while convergence pressure is enhanced by scalarizing functions. MOEA/D[23], MOEA/DD[24] and MOEA/D-DE[25] belong to this category. However, as paper[26] points out, the Pareto fronts of MaOPs are of various shapes, and predefined weight or reference vectors are unable to cover all of them effectively. Note that weight vector and reference vector have similar meanings in this paper. To alleviate this issue, some effective methods, such as adaptive weight adjustment[27], adaptive scalarizing methods[28] and dynamic reference vector adjustment[29], have been proposed to adaptively adjust the weight or reference vectors during optimization.

As discussed above, most of the adaptive reference vector strategies are designed to diversify solutions to cover the true Pareto fronts as much as possible, but they are unable to balance the convergence and diversity simultaneously. Therefore, this paper aims to design a knee point-based reference vector adaptive adjustment strategy to cover the true Pareto fronts as much as possible and concurrently ensure the convergence. After that, a many-objective evolutionary algorithm with knee point-based reference vector adaptive adjustment strategy is designed for effectively tackling MaOPs. In summary, the contributions can be described as follows:

1) To adaptively balance the convergence and diversity, the knee point-based reference vector adaptive adjustment strategy is designed elaborately, where knee points are used to adaptively define the reference vectors and the fitness values of solutions are also defined based on the adjusted reference vectors.

2) A many-objective evolutionary algorithm with the knee point-based reference vector adaptive adjustment strategy (MaEA-KRVA) is proposed and tested on multiple MaOPs with up to 12 objectives and on practical problems, in comparison with MOEA/DD, KnEA, MOMBI-II and NSGA-III. The experimental comparisons illustrate that the proposed knee point-based reference vector adaptive adjustment strategy is outstanding in terms of the Spread and IGD indicators, demonstrating the superiority of the proposed method over other methods.

The structure of this paper is organized as follows. Section 2 reviews related work on the adjustment of weight or reference vectors. Section 3 introduces the details of the knee point-based reference vector adaptive adjustment strategy. Section 4 shows the main components of MaEA-KRVA. The performance of MaEA-KRVA is tested and analyzed in Section 5. The conclusions are drawn in Section 6.

2. Related work

The performance of decomposition-based methods heavily relies on the distribution of weight or reference vectors. Therefore, the adjustment of weight or reference vectors is greatly critical for optimizers. Typical techniques for tackling this issue can be summarized as follows.

Basically, decomposition-based methods consist of two categories. The first one decomposes MaOPs into a set of single-objective optimization problems, while the second one decomposes MaOPs into sub-MOPs. Typical methods in the first category include the weighted aggregation-based optimizers proposed in the early days and the multi-objective evolutionary algorithm based on decomposition [23]. It is worth mentioning that MOEA/D is the representative of decomposition-based methods, where the weight vectors are initially designed to be uniformly distributed. MOEA/D-M2M and NSGA-III are typical representatives of the second category. However, recent studies[17,18] have shown that the fixed weight vectors in decomposition-based methods may be unable to distribute solutions along the whole Pareto front. Therefore, Gu et al.[30] proposed to utilize a linear interpolation of non-dominated solutions to adjust the weight vectors. Li et al.[31] adaptively modified the weight vectors using a simulated annealing technique. Jiang et al.[32] proposed a Pareto-adaptive weight vector strategy to dynamically adjust the weight vectors. Qi et al.[27] designed an adaptive weight vector adjustment strategy, which regulates the weight vectors periodically based on the current solution distribution. Wang et al.[33] proposed to adaptively construct the weight vectors during the evolutionary process to uniformly guide solutions to the true Pareto front. More works on the adaptive adjustment of weight vectors can be found in papers[34,35,36,37].

As can be seen from the discussions above, considerable effort has been devoted to the adaptive adjustment of weight vectors to ensure the diversity of solutions. Different from the strategies above, this paper attempts to design a more flexible weight vector adaptive adjustment strategy that adaptively balances the convergence and diversity, and further proposes a many-objective evolutionary algorithm with a knee point-based reference vector adaptive adjustment strategy for MaOPs.

3. Knee point-based Reference Vector Adaptive Adjustment Strategy

Adaptively adjusting the reference vector is of great significance for the diversity of solutions. Theoretically, for one solution set, if the reference vector can be adjusted according to the distribution of solutions, the overall diversity can be ensured. Further, if the convergence of solutions can also be taken into account when defining the reference vector, then the convergence and diversity can be guaranteed simultaneously. Therefore, this paper proposes the following knee point-based reference vector adaptive adjustment strategy to realize this idea.

For one solution set {𝑋𝟏,𝑋𝟐, . . . , 𝑋𝑵}, the reference vector can be mathematically defined with (2),

\(\begin{aligned}V^{r e f}=\frac{\left|F\left(\mathrm{X}^{\text {knee }}\right)-Z\right|}{\left\|F\left(\mathrm{X}^{\text {knee }}\right)-Z\right\|_{2}}\end{aligned}\)       (2)

where 𝑋knee indicates the knee point of solution set {𝑋𝟏,𝑋𝟐, . . . , 𝑋𝑵}, which can be computed with the following (3),

\(\begin{aligned}\mathrm{X}^{\text {knee }}=\arg \min _{X^{i} \in\left\{X^{1}, X^{2}, \ldots, X^{N}\right\}}\left(-\frac{\left|W \times X^{i}\right|}{\|W\|_{2}}\right)\end{aligned}\)       (3)

\(\begin{aligned}W=\left(\left(X^{1^{\prime}}, X^{2^{\prime}}, \ldots, X^{M^{\prime}}\right)^{T}\right)^{-1} \times(1,1, \ldots, 1)\end{aligned}\)       (4)

where XM′ represents the extreme solution on the M-th objective.

Z in (2) is formulated with (5),

\(\begin{aligned}Z=\frac{\left(z^{\text {nad }}+z^{*}\right)}{2}\end{aligned}\)       (5)

where 𝑧nad and 𝑧* denote the nadir point and ideal point, respectively.

Based on (2), the fitness value of solution 𝑋𝑖 ∈ {𝑋𝟏, 𝑋𝟐, . . . , 𝑋𝑵} can be computed with the following (6),

\(\begin{aligned}\operatorname{fit}\left(X^{i}\right)=V^{r e f} \times F\left(X^{i}\right)\end{aligned}\)       (6)

From (2), it can be seen that the reference vector is adaptively adjusted according to the distribution of solutions. Hence, the diversity of solutions can be ensured accordingly. In terms of convergence, as paper[38] declares, the knee point is beneficial for choosing solutions with better convergence. (2) is essentially determined by the knee point, which, as can be seen from (3), (4) and (5), is the solution with the minimal projection distance on the vector (1,1,…,1). Due to the involvement of the knee point, the convergence of solutions can also be guaranteed. According to the analyses above, the proposed adaptive reference vector is able to concurrently balance the convergence and diversity.
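To make the computations in (2)~(6) concrete, the following Python sketch assembles them for a population given as objective vectors. It is a minimal illustration under stated assumptions rather than the authors' implementation: the vectors in (3) and (4) are interpreted as objective vectors F(X) (so that the matrix in (4) is M×M), the ideal and nadir points are estimated from the current population, the extreme solution of each objective is taken as the member with the largest value on that objective, a pseudo-inverse guards against singular matrices, and the knee point is selected as the solution with the minimal projection distance, matching the description above and Algorithm 2.

import numpy as np

def krva_fitness(objs: np.ndarray) -> np.ndarray:
    # objs: (N, M) array holding F(X^1), ..., F(X^N); returns one fitness value per solution.
    N, M = objs.shape

    # Ideal point, nadir point and their midpoint Z (Eq. (5)), estimated from the population.
    z_ideal = objs.min(axis=0)
    z_nadir = objs.max(axis=0)
    Z = (z_nadir + z_ideal) / 2.0

    # Extreme solutions stacked into an M x M matrix (illustrative choice:
    # for each objective, the member with the largest value on that objective).
    E = objs[objs.argmax(axis=0), :]

    # W = ((X^{1'}, ..., X^{M'})^T)^{-1} x (1, ..., 1)   (Eq. (4)).
    W = np.linalg.pinv(E.T) @ np.ones(M)

    # Knee point: the solution with the minimal projection distance |W x F(X)| / ||W||_2,
    # as described in the text and tracked in Algorithm 2.
    proj = np.abs(objs @ W) / np.linalg.norm(W)
    knee = objs[proj.argmin()]

    # Reference vector (Eq. (2)).
    diff = knee - Z
    norm = np.linalg.norm(diff)
    v_ref = np.abs(diff) / (norm if norm > 0 else 1.0)

    # Fitness (Eq. (6)): projection of each objective vector onto the reference vector.
    return objs @ v_ref

Solutions with smaller fitness values are preferred, which is how the mating selection and environmental selection of Section 4 use these values.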

4. Many-objective Evolutionary Algorithm with Knee point-based Reference Vector Adaptive Adjustment strategy

4.1 General framework of MaEA-KRVA

Algorithm 1 shows the details of MaEA-KRVA. To begin with, a population of N solutions is randomly initialized, referring to line 1. Then, two solutions are randomly selected from the current population, referring to line 6. If the two solutions have an evident Pareto dominance relation, the dominating solution is included in the mating pool, referring to lines 7~8. Otherwise, the fitness values of the two solutions are computed and the solution with the smaller value is included in the mating pool, referring to lines 9~13. After that, the simulated binary crossover and polynomial mutation[39] are performed on the solutions in the mating pool, and the generated solutions are combined with the parent population. The fast non-dominated sorting strategy is then performed on the resultant population, referring to line 16. The k-th Pareto front 𝐹𝑘 is divided into N−|𝑃3| sets using the k-means strategy, referring to line 18. For the solutions in each set, the fitness values are computed and the solution with the minimum fitness value is included into the next population, referring to lines 19~22 (a Python sketch of this environmental selection step is given after Fig. 1). The operations above are repeated until the stopping criterion is reached. Accordingly, Fig. 1 presents the flow chart of MaEA-KRVA.

Algorithm 1: Pseudocode of MaEA-KRVA

Input: N(population size)

Output: P

1: P←Initialization(N)

2: While termination criterion not satisfied do

3: 𝑃1 ← ∅

4: {fit(𝑋1), fit(𝑋2), . . .} ← KRVA(P)

5: While |𝑃1| < 𝑁 do

6: randomly choose 𝑋𝑖, 𝑋𝑗 from P

7: If 𝑋𝑖 ≺ 𝑋𝑗 or 𝑋𝑖 ≻ 𝑋𝑗 then

8: 𝑃1 ←choose the dominating solution from 𝑋𝑖 and 𝑋𝑗

9: Else if fit(𝑋𝑖) < fit(𝑋𝑗)then

10: 𝑃1 ← 𝑃1⋃{𝑋𝑖}

11: Else if fit(𝑋𝑖) ≥ fit(𝑋𝑗) then

12: 𝑃1 ← 𝑃1⋃{𝑋𝑗}

13: End

14: End

15: 𝑃2 ← 𝑃 ⋃ Variation(𝑃1)

16: {𝐹1, 𝐹2, . . . , 𝐹𝑘} ← Non − dominated Sorting( 𝑃2)

17: 𝑃3 ← 𝐹1 ⋃ 𝐹2 ⋃ . . . ⋃ 𝐹𝑘−1

18: {𝑆1, 𝑆2, . . .} ← Categorize 𝐹𝑘 into 𝑁 − |𝑃3| sets using k-means

19: For 𝑆𝑖 ∈ {𝑆1, 𝑆2, . . .}do

20: {fit(𝑋1), fit(𝑋2), . . .} ← KRVA(𝑆𝑖)

21: P ← P ⋃ {solution in 𝑆𝑖 with the minimum fitness value}

22: End

23: End


Fig. 1. Flow chart of MaEA-KRVA
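For readers who prefer code, the environmental selection step (lines 16~22 of Algorithm 1) can be sketched in Python as follows. The sketch reuses the hypothetical pareto_dominates and krva_fitness helpers from the earlier sketches, replaces the fast non-dominated sorting with a plain O(MN²) version, and uses scikit-learn's KMeans for the clustering; none of these choices are prescribed by the paper.

import numpy as np
from sklearn.cluster import KMeans

def nondominated_sort(objs: np.ndarray) -> list:
    # Plain non-dominated sorting: returns fronts F1, F2, ... as lists of indices.
    remaining = list(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(pareto_dominates(objs[j], objs[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def environmental_selection(objs: np.ndarray, n: int) -> list:
    # objs: objectives of the combined population P2; n: population size N.
    fronts = nondominated_sort(objs)

    # P3: whole fronts that fit into the next population (line 17).
    survivors, k = [], 0
    while len(survivors) + len(fronts[k]) <= n:
        survivors.extend(fronts[k])
        k += 1
        if k == len(fronts):
            return survivors
    if len(survivors) == n:
        return survivors

    # Partition the critical front F_k into N - |P3| clusters with k-means (line 18).
    last = fronts[k]
    n_clusters = n - len(survivors)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(objs[last])

    # From each cluster, keep the solution with the minimum KRVA fitness (lines 19~22).
    for c in range(n_clusters):
        members = [last[i] for i in range(len(last)) if labels[i] == c]
        fit = krva_fitness(objs[members])
        survivors.append(members[int(fit.argmin())])
    return survivors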

4.2 Knee point-based Reference Vector Adaptive Adjustment Strategy

Algorithm 2 shows the details of the knee point-based reference vector adaptive adjustment strategy. As can be seen, the ideal and nadir points are firstly computed, referring to line 2. After that, the extreme points are computed and the vector W is constructed according to (4), referring to line 3. Then, the projection of each solution on the vector W is computed, referring to line 5. According to the results above, the knee point is determined, referring to lines 7~13. The reference vector can therefore be obtained according to (2), referring to line 14. Finally, the fitness values of solutions are computed based on the resultant reference vector, referring to lines 15~17.

Algorithm 2: KRVA(P)

Input: P

Output: fit

1: fit ← ∅

2: Compute the ideal point and nadir point

3: Compute the extreme points and construct W according to (4)

4: For 𝑋𝑖 ∈ {𝑋1, 𝑋2, . . . , 𝑋N}do

5: 𝑇(𝑋𝑖) ← − |𝑊 × 𝑋𝑖|/‖𝑊‖2

6: End

7: 𝑡 ← 𝑇(𝑋1); 𝑋knee ← 𝑋1

8: For i ← 2: N do

9: If t < T(𝑋𝑖) then

10: t ← T(𝑋i)

11: 𝑋knee ← 𝑋𝑖

12: End

13:End

14: Compute the reference vector 𝑉ref according to (2)

15: For 𝑋𝑖 ∈ {𝑋1, 𝑋2, . . . , 𝑋N} do

16: fit(𝑋𝑖) ← 𝑉ref × 𝐹(𝑋𝑖)

17: End
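The loop structure of Algorithm 2 corresponds to the vectorized krva_fitness sketch given at the end of Section 3; under the same assumptions, a call would simply be:

# objs holds F(X^1), ..., F(X^N) of the population P as an (N, M) array.
fit = krva_fitness(objs)   # fit[i] plays the role of fit(X^i) returned by Algorithm 2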

4.3 Computational complexity

The computational complexity of MaEA-KRVA is determined by three components: the non-dominated sorting strategy, the variation operation and the knee point-based reference vector adaptive adjustment strategy. For the sake of description, this paper utilizes N individuals to optimize a MaOP with D decision variables and M objectives. The complexity of non-dominated sorting is O(MN²), as the original developers declared[17]. The variation, including the simulated binary crossover and polynomial mutation, needs O(DN) to generate N solutions[17]. The computational complexity of the knee point-based reference vector adaptive adjustment strategy, as can be seen from Algorithm 2, is O(MN). Considering all the components above, the complexity of MaEA-KRVA is O(MN²).

5. Experiments and analyses

To verify the performance of MaEA-KRVA, subsection 5.1 firstly introduces the employed comparison optimizers, as well as the benchmarking instances. After that, the experimental analyses are presented in subsection 5.2. Subsection 5.3 further tests MaEA-KRVA on practical problems.

5.1 Experimental settings

This paper employs MOEA/DD[24], KnEA[38], MOMBI-II[40] and NSGA-III[17] as comparison optimizers, which are specially tailored for MaOPs. All the parameters of the comparison optimizers are set as recommended by their original developers. For example, for MOEA/DD, the probability of choosing parents locally is set to 0.9. For KnEA, the rate of knee points in the population is set to 0.5. For MOMBI-II, the tolerance threshold is set to 0.001. The benchmarking instances are from the WFG test suite[41], where WFG1~WFG8 are used. Table 1 summarizes the characteristics of WFG1~WFG8. For each instance, the number (M) of objectives is set to 8, 10 and 12, and the corresponding lengths of the decision vector are 17, 19 and 21, respectively. The practical problem used is MPDMP, which arises in many engineering problems. MPDMP aims to minimize the distances from one solution to the vertices of a polygon; for an M-sided polygon, an M-dimensional MPDMP can be formulated mathematically, as sketched below. For details of MPDMP, please refer to the original paper[42]. The population size is set to 120, and the maximum number of evaluations is 12,000. For fair comparisons, each instance is run 20 times. The Friedman test is used to statistically show their differences.
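As an illustration of the problem structure (not the exact instance used in the experiments), the objectives of an M-sided MPDMP can be sketched as the Euclidean distances from a 2-D decision vector to the polygon vertices; a regular unit polygon centred at the origin is assumed here, and the precise instance definition is given in [42].

import numpy as np

def mpdmp_objectives(x: np.ndarray, m: int) -> np.ndarray:
    # x: 2-D decision vector; returns the m distances to the vertices of a
    # regular unit polygon centred at the origin (illustrative instance).
    angles = 2.0 * np.pi * np.arange(m) / m
    vertices = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (m, 2)
    return np.linalg.norm(vertices - x, axis=1)                     # (m,)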

Table 1. Summary of characteristics of WFG instances


IGD[20] and Spread[43] are used as the indicators to measure the performance of optimizers, of which the mathematical expressions are presented as (7) and (8),

\(\begin{aligned}I G D=\frac{\sum_{v \in P^{*}} d(v, P)}{\left|P^{*}\right|}\end{aligned}\)       (7)

where P indicates the solution set obtained by an optimizer, P* is a set of Pareto optimal solutions sampled from the true Pareto front, and d(v,P) indicates the minimal distance between point v and all the points in P. In this experiment, P* contains around 10,000 points.
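Eq. (7) translates directly into code; the short NumPy sketch below (illustrative names) averages, over the reference set P*, the distance from each reference point to its nearest obtained solution.

import numpy as np

def igd(ref_front: np.ndarray, approx: np.ndarray) -> float:
    # ref_front: (|P*|, M) sampled true Pareto front; approx: (|P|, M) obtained set.
    dists = np.linalg.norm(ref_front[:, None, :] - approx[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())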

\(\begin{aligned}Spread=\frac{\sum_{i=1}^{M} d\left(E_{i}, P\right)+\sum_{X \in P}|d(X, P)-\bar{d}|}{\sum_{i=1}^{M} d\left(E_{i}, P\right)+(|P|-M) \times \bar{d}}\end{aligned}\)       (8)

where 𝑑(𝑋, 𝑃) = min𝑌∈𝑃,𝑋≠𝑌‖𝐹(𝑋) − 𝐹(𝑌)‖, \(\begin{aligned}\bar{d}=\frac{1}{|P|} \sum_{X \in P} d(X, P)\end{aligned}\), P is the final non-dominated set. {𝐸1, 𝐸2, . . . , 𝐸𝑀} are M extreme solutions in the set of Pareto optimal solutions.
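Likewise, Eq. (8) can be sketched as follows; the M extreme Pareto optimal solutions E_1, ..., E_M are passed in as objective vectors, and the names are illustrative.

import numpy as np

def spread(approx: np.ndarray, extremes: np.ndarray) -> float:
    # approx: (|P|, M) objectives of the final non-dominated set P;
    # extremes: (M, M) objectives of the extreme solutions E_1, ..., E_M.
    pairwise = np.linalg.norm(approx[:, None, :] - approx[None, :, :], axis=2)
    np.fill_diagonal(pairwise, np.inf)          # enforce Y != X in d(X, P)
    d = pairwise.min(axis=1)
    d_mean = d.mean()

    # d(E_i, P): distance from each extreme solution to its nearest member of P.
    d_ext = np.linalg.norm(extremes[:, None, :] - approx[None, :, :], axis=2).min(axis=1)

    numer = d_ext.sum() + np.abs(d - d_mean).sum()
    denom = d_ext.sum() + (len(approx) - len(extremes)) * d_mean
    return float(numer / denom)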

5.2 Experimental analyses

Table 2 displays the average IGD values of the optimizers on the WFG test suite, where the best values are highlighted in boldface. From Table 2, it can be seen that MaEA-KRVA performs the best on WFG4~WFG7 with different numbers of objectives. Besides, MaEA-KRVA also gains the most outstanding performance on WFG3 with 8 and 10 objectives and on WFG8 with 10 and 12 objectives. However, MaEA-KRVA loses to KnEA on WFG1 and WFG2 with 8 and 10 objectives. According to the rankings in the last row, it can be observed that MaEA-KRVA has the smallest ranking, indicating that MaEA-KRVA is statistically the best among the five optimizers. Moreover, Fig. 2 further exhibits the convergence curves of the optimizers on WFG6 with 10 objectives in terms of the IGD indicator. From Fig. 2, it can be seen that MaEA-KRVA outperforms the comparison optimizers at each generation, which empirically illustrates that MaEA-KRVA performs the best during the overall optimization process. The results in Table 2 and Fig. 2 match very well, illustrating that MaEA-KRVA performs consistently the best on the WFG test suite.

Table 2. Comparison of IGD values of optimizers on WFG test suite



Fig. 2. Convergence curve of IGD on WFG6 with 10 objectives

Table 3 shows the average Spread values of the optimizers. From the experimental results, it can be observed that MaEA-KRVA outperforms MOEA/DD, KnEA, MOMBI-II and NSGA-III on 22 out of 24 instances, and loses to NSGA-III only on 8-objective WFG5 and WFG6, exhibiting an evident advantage over the comparison optimizers. Moreover, the Friedman test results in the last row further confirm the superiority of MaEA-KRVA over the other optimizers. In addition, Fig. 3 visually shows the Spread values over generations. It can still be observed that MaEA-KRVA performs consistently the best at each generation, which empirically demonstrates the efficiency of MaEA-KRVA.

Table 3. Comparison of Spread values of optimizers on WFG test suite



Fig. 3. Convergence curve of Spread on WFG6 with 10 objectives

Fig. 4 visually shows the Pareto fronts obtained by the optimizers on WFG6 with 10 objectives, where the true Pareto front of WFG6 is also included for a distinct comparison. From Fig. 4, it can be seen that MaEA-KRVA is able to obtain a front close to the true Pareto front of WFG6. On the contrary, the fronts obtained by MOEA/DD and MOMBI-II are the worst in terms of convergence and diversity. In addition, both KnEA and NSGA-III show better convergence than MOEA/DD and MOMBI-II, but still lose to MaEA-KRVA in terms of diversity and uniformity.


Fig. 4. Comparisons of Pareto fronts on WFG6 with 10 objectives

5.3 Application of MaEA-KRVA to MPDMP

To test the practicability of MaEA-KRVA, this paper further applies MaEA-KRVA to the Multi-Point Distance Minimization Problems (MPDMP)[28]. All parameters are set as subsection 5.1 explains. The experimental results are presented in Tables 4~5.

Table 4. Comparison of IGD values of optimizers on MPDMP.


Table 5. Comparison of Spread values of optimizers on MPDMP


From Table 4, it can be seen that MaEA-KRVA performs the best on MPDMP with different numbers of objectives. Table 5 shows experimental results similar to those in Table 3. Further, from the Friedman test results in Tables 4 and 5, it can still be observed that MaEA-KRVA gains the smallest rankings and achieves the most outstanding performance.

6. Conclusion

Existing methods for the adaptive adjustment of reference or weight vectors mainly focus on distributing solutions as widely as possible. Differently, this paper proposes to utilize knee points to adaptively construct the reference vector so as to concurrently balance the diversity and convergence. Moreover, this paper further designs a many-objective evolutionary algorithm with the knee point-based reference vector adaptive adjustment strategy.

The extensive experiments and analyses on the WFG test suite and MPDMP empirically illustrate the efficiency of the proposed method on MaOPs. Future work will focus on MaOPs with large-scale decision variables.

References

  1. S Ghorbanpour, T Pamulapati, R Mallipeddi, "Swarm and evolutionary algorithms for energy disaggregation: challenges and prospects," International Journal of Bio-Inspired Computation, 17(4), 215-226, 2021. https://doi.org/10.1504/IJBIC.2021.116548
  2. B Qu, Q Zhou, Y Zhu, "An improved brain storm optimisation algorithm for energy-efficient train operation problem," International Journal of Bio-Inspired Computation, 17(4), 236-245, 2021. https://doi.org/10.1504/IJBIC.2021.116549
  3. Y Cao, L Zhou, F Xue, "An improved NSGA-II with dimension perturbation and density estimation for multi-objective DV-Hop localisation algorithm," International Journal of Bio-Inspired Computation, 17(2), 121-130, 2021. https://doi.org/10.1504/IJBIC.2021.114081
  4. M, Zhang, L. Wang, W. Guo, et al., "Many-Objective Evolutionary Algorithm based on Dominance Degree," Applied Soft Computing, vol.113, pp. 107869, 2021.
  5. A Daoudi, K Benatchba, M Bessedik, "Performance assessment of biogeography-based multiobjective algorithm for frequency assignment problem," International Journal of Bio-Inspired Computation, 18(4), 199-209, 2021. https://doi.org/10.1504/IJBIC.2021.119981
  6. Y Zhou, Y Sai, L Yan, "An improved extension neural network methodology for fault diagnosis of complex electromechanical system," International Journal of Bio-Inspired Computation, 18(4), 250-258, 2021. https://doi.org/10.1504/IJBIC.2021.119950
  7. Z. Cui, J. Wen, Y. Lan, et al., "Communication-efficient federated recommendation model based on many-objective evolutionary algorithm," Expert Systems with Applications, vol. 201, pp.116963, 2022.
  8. M. Zhang, W. Guo, L. Wang, et al., "Modeling and optimization of watering robot optimal path for ornamental plant care," Computers & Industrial Engineering, vol.157, pp.107263, 2021.
  9. M. Zhang, L. Wang, W. Li, et al., "Many-objective evolutionary algorithm with adaptive reference vector," Information Sciences, vol. 563, pp. 70-90, 2021. https://doi.org/10.1016/j.ins.2021.01.015
  10. A Garg, S Singh, L Gao, "Multi-objective optimisation framework of genetic programming for investigation of bullwhip effect and net stock amplification for three-stage supply chain systems," International Journal of Bio-Inspired Computation, 16(4), 241-251, 2020. https://doi.org/10.1504/IJBIC.2020.112329
  11. Z Cui, X Jing, P Zhao, "A new subspace clustering strategy for AI-based data analysis in IoT system," IEEE Internet of Things Journal, 8(16), 12540-12549, 2021. https://doi.org/10.1109/JIOT.2021.3056578
  12. X Cai, S Geng, D Wu, "A multicloud-model-based many-objective intelligent algorithm for efficient task scheduling in internet of things," IEEE Internet of Things Journal, 8(12), 9645-9653, 2021. https://doi.org/10.1109/JIOT.2020.3040019
  13. Y Zhang, X Cai, H Zhu, "Application an improved swarming optimisation in attribute reduction," International Journal of Bio-Inspired Computation, 16(4), 213-219, 2020. https://doi.org/10.1504/IJBIC.2020.112353
  14. Z Cui, F Xue, S Zhang, "A hybrid blockchain-based identity authentication scheme for multi-WSN," IEEE Transactions on Services Computing, 13(2), 241-251, 2020. https://doi.org/10.1109/tsc.2020.2964537
  15. Z Cui, X Xu, F Xue, "Personalized recommendation system based on collaborative filtering for IoT scenarios," IEEE Transactions on Services Computing, 13(4), 685-695, 2020. https://doi.org/10.1109/tsc.2020.2964552
  16. Z Cui, Y Zhao, Y Cao, "Malicious code detection under 5G HetNets based on a multi-objective RBM model," IEEE Network, 35(2), 82-87, 2021.
  17. K. Deb, H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol.18, no.4, pp. 577-601, 2014. https://doi.org/10.1109/TEVC.2013.2281535
  18. M. Zhang, L. Wang, W. Guo, et al., "Many-objective evolutionary algorithm based on relative nondominance matrix," Information Sciences, vol.547, pp. 963-983, 2021. https://doi.org/10.1016/j.ins.2020.09.061
  19. Y. Tian, R. Cheng, X. Zhang, et al., "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol.23, no.2, pp. 331-345, 2019. https://doi.org/10.1109/tevc.2018.2866854
  20. C. A. Coello, N. C. Cortes, "Solving multiobjective optimization problems using an artificial immune system," Genetic programming and evolvable machines, vol.6, no.2, pp.163-190, 2005. https://doi.org/10.1007/s10710-005-6164-x
  21. E. Zitzler, L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE transactions on Evolutionary Computation, vol.3, no.4, pp. 257-271, 1999. https://doi.org/10.1109/4235.797969
  22. H. Trautmann, T. Wagner, D. Brockhoff, "R2-EMOA: Focused multiobjective search using R2-indicator-based selection," in Proc. of International Conference on Learning and Intelligent Optimization, pp. 70-74, 2013.
  23. Q. Zhang, H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol.11, no.6, pp.712-731, 2007. https://doi.org/10.1109/TEVC.2007.892759
  24. Z. Xiong, J. Yang, Z. Hu, et al., "Evolutionary many-objective optimization algorithm based on angle and clustering," Applied Intelligence, vol. 51, no.4, pp. 2045-2062, 2021. https://doi.org/10.1007/s10489-020-01874-2
  25. H. Li, Q. Zhang, "Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II," IEEE transactions on evolutionary computation, vol.13, no.2, pp. 284-302, 2009. https://doi.org/10.1109/TEVC.2008.925798
  26. H. Ishibuchi, Y. Setoguchi, H. Masuda, et al., "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol.21, no.2, pp. 169-190, 2017. https://doi.org/10.1109/TEVC.2016.2587749
  27. Y. Qi, X. Ma, F. Liu, et al., "MOEA/D with adaptive weight adjustment," Evolutionary computation, vol.22, no.2, pp. 231-264, 2014. https://doi.org/10.1162/EVCO_a_00109
  28. R. Wang, Q. Zhang, T. Zhang, "Decomposition-based algorithms using Pareto adaptive scalarizing methods," IEEE Transactions on Evolutionary Computation, vol.20, no.6, pp. 821-837, 2016. https://doi.org/10.1109/TEVC.2016.2521175
  29. R. Cheng, Y. Jin, M. Olhofer, et al., "A reference vector guided evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol.20, no.5, pp. 773-791, 2016. https://doi.org/10.1109/TEVC.2016.2519378
  30. F Gu, L. Liu, "A novel weight design in multi-objective evolutionary algorithm," in Proc. of International Conference on Computational Intelligence and Security, IEEE, pp. 137-141, 2010.
  31. H. Li, D. Landa-Silva, "An adaptive evolutionary multi-objective approach based on simulated annealing," Evolutionary computation, vol.19, no.4, pp. 561-595, 2011. https://doi.org/10.1162/EVCO_a_00038
  32. S. Jiang, Z. Cai, J. Zhang, Y. Ong, "Multiobjective optimization by decomposition with Pareto-adaptive weight vectors," in Proc. of 2011 Seventh International Conference on Natural Computation, IEEE, vol.3, pp. 1260-1264, 2011.
  33. R. Wang, R. C. Purshouse, P. J. Fleming, "Preference-inspired co-evolutionary algorithms using weight vectors," European Journal of Operational Research, vol.243, no.2, pp.423-441, 2015. https://doi.org/10.1016/j.ejor.2014.05.019
  34. X. Zhou, H. Ma, J. Gu, H. Chen, W. Deng, "Parameter adaptation-based ant colony optimization with dynamic hybrid mechanism," Engineering Applications of Artificial Intelligence, vol.114, pp.105139, 2022.
  35. Z. An, X. Wang, B. Li, Z. Xiang, B. Zhang, "Robust visual tracking for UAVs with dynamic feature weight selection," Applied Intelligence, pp. 1-14, 2022.
  36. X. Li, H. Zhao, L. Yu, H. Chen, W. Deng, W. Deng, "Feature extraction using parameterized multisynchrosqueezing transform," IEEE Sensors Journal, vol.22, no.14, pp.14263-14272, 2022. https://doi.org/10.1109/JSEN.2022.3179165
  37. H. Zhao, J. Liu, H. Chen, J. Chen, Y. Li, J. Xu, W. Deng, "Intelligent diagnosis using continuous wavelet transform and gauss convolutional deep belief network," IEEE Transactions on Reliability, pp. 1-11, 2022.
  38. X. Zhang, Y. Tian, Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol.19, no.6, pp. 761-776, 2015. https://doi.org/10.1109/TEVC.2014.2378512
  39. H. Martin, D. Maravall, "Adaptation, anticipation and rationality in natural and artificial systems: computational paradigms mimicking nature," Natural Computing, vol.8, no.4, pp.757-775, 2009. https://doi.org/10.1007/s11047-008-9096-6
  40. R. Hernandez, C. A. Coello Coello, "Improved metaheuristic based on the R2 indicator for many-objective optimization," in Proc. of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 679-686, 2015.
  41. S. Huband, P. Hingston, L. Barone, et al., "A review of multiobjective test problems and a scalable test problem toolkit," IEEE Transactions on Evolutionary Computation, vol.10, no.5, pp.477-506, 2006. https://doi.org/10.1109/TEVC.2005.861417
  42. M. Koppen, K. Yoshida, "Substitute distance assignments in NSGA-II for handling many-objective optimization problems," in Proc. of International Conference on Evolutionary Multi-Criterion Optimization, pp. 727-741, 2007.
  43. Y. Wang, L. Wu, F. Yuan, "Multi-objective self-adaptive differential evolution with elitist archive and crowding entropy-based diversity measure," Soft Computing, vol.14, no.3, pp. 193-209, 2010. https://doi.org/10.1007/s00500-008-0394-9