Political optimizer with interpolation strategy for global optimization.

The political optimizer (PO) is a recent meta-heuristic for global optimization problems and real-world engineering optimization that mimics the multi-stage process of politics in human society. However, owing to a greedy strategy in the election phase and a poor balance between global exploration and local exploitation in the party-switching stage, it tends to stagnate in local optima with low convergence accuracy. To overcome these drawbacks, a family of novel PO variants is proposed by integrating PO with Quadratic Interpolation, Advance Quadratic Interpolation, Cubic Interpolation, Lagrange Interpolation, Newton Interpolation, and Refraction Learning (RL). The main contributions are as follows. (1) An interpolation strategy is adopted to help the current global best solution escape local optima. (2) RL is integrated into PO to improve population diversity. (3) A logistic model is proposed to maintain a good balance between exploration and exploitation during the party-switching stage. To the best of our knowledge, this is the first time PO has been combined with an interpolation strategy and RL. The best PO variant is evaluated on 19 widely used benchmark functions and 30 test functions from IEEE CEC 2014. Experimental results demonstrate the superior exploration capacity of the proposed algorithm.
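
Since this record reproduces only the abstract, the sketches below are illustrative rather than the authors' actual code. First, the interpolation escape: a minimal sketch assuming the common three-point quadratic-interpolation formula, which places a trial point at the vertex of the parabola fitted through three solutions and their fitness values (the paper's exact QI and Advance QI operators are not specified here, and the function name is hypothetical):

```python
import numpy as np

def quadratic_interpolation(a, b, c, fa, fb, fc):
    """Vertex of the parabola through (a, fa), (b, fb), (c, fc), applied
    coordinate-wise to candidate solutions a, b, c (numpy arrays) with
    scalar fitness values fa, fb, fc. Hypothetical helper."""
    num = (b**2 - c**2) * fa + (c**2 - a**2) * fb + (a**2 - b**2) * fc
    den = (b - c) * fa + (c - a) * fb + (a - b) * fc
    den = np.where(np.abs(den) < 1e-12, 1e-12, den)  # guard a flat parabola
    return 0.5 * num / den
```

In a PO loop, such a trial point would typically be built from the three best solutions and accepted only if it improves the current global best, giving a stagnating leader a chance to jump out of a local optimum.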

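Refraction Learning is an opposition-based learning variant inspired by Snell's law. A sketch assuming one common formulation from the RL literature (the paper's exact operator may differ):

```python
def refraction_learning(x, lb, ub, k=1.0):
    """'Refracted opposite' of solution x within bounds [lb, ub]
    (floats or numpy arrays). k is a scale factor; with k = 1 this
    reduces to plain opposition, lb + ub - x. Formulation assumed,
    not taken from the paper."""
    return (lb + ub) / 2.0 + (lb + ub) / (2.0 * k) - x / k
```

Evaluating both x and its refracted opposite and keeping the fitter of the two is how such operators typically raise population diversity.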

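For the party-switching stage, the original PO decays the switching rate linearly over iterations; the abstract proposes a logistic model instead. A plausible sketch, where the curve shape and constants are assumptions rather than the paper's actual model:

```python
import math

def logistic_switch_rate(t, T, lam_max=1.0, steepness=10.0):
    """Logistic decay of the party-switching rate over iterations
    t = 0..T: close to lam_max early (exploration) and near 0 late
    (exploitation). Constants are illustrative assumptions."""
    return lam_max / (1.0 + math.exp(steepness * (t / T - 0.5)))
```
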
Saved in: DOAJ (Directory of Open Access Journals)
Bibliographic Details
Main Authors: Aijun Zhu, Zhanqi Gu, Cong Hu, Junhao Niu, Chuanpei Xu, Zhi Li
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2021
Published in: PLoS ONE, Vol 16, Iss 5, p e0251204 (2021)
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0251204
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/e92d55b700b149b1942dcfe3af38f387