TY - JOUR
T1 - Partial reinforcement optimizer
T2 - an evolutionary optimization algorithm
AU - Taheri, Ahmad
AU - RahimiZadeh, Keyvan
AU - Beheshti, Amin
AU - Baumbach, Jan
AU - Rao, Ravipudi Venkata
AU - Mirjalili, Seyedali
AU - Gandomi, Amir H.
PY - 2024/3/15
Y1 - 2024/3/15
N2 - In this paper, a novel evolutionary optimization algorithm, named Partial Reinforcement Optimizer (PRO), is introduced. The major idea behind PRO comes from a psychological theory of evolutionary learning and training called the partial reinforcement effect (PRE) theory. According to the PRE theory, a learner is intermittently reinforced to learn or strengthen a specific behavior during the learning and training process. The reinforcement patterns significantly impact the learner's response rate and strength during a reinforcement schedule, which is achieved by appropriately selecting a reinforcement behavior and the timing of the reinforcement process. In the PRO algorithm, the PRE theory is mathematically modeled as an evolutionary optimization algorithm for solving global optimization problems. The efficiency of the proposed PRO algorithm is compared to well-known Meta-heuristic Algorithms (MAs) using Wilcoxon and Friedman statistical tests to analyze results from 75 benchmarks of the CEC2005, CEC2014, and CEC-BC-2017 test suites, which include unimodal, multimodal, hybrid, and composition functions. Additionally, the proposed PRO algorithm is applied to optimize a Federated Deep Learning Electrocardiography (ECG) classifier, as a real-world case study, to investigate the robustness and applicability of the proposed PRO. The experimental results demonstrate that the PRO algorithm outperforms existing meta-heuristic optimization algorithms by providing more accurate and robust solutions.
KW - Evolutionary computation
KW - Partial reinforcement theory
KW - Meta-heuristic optimization
KW - Federated deep learning
KW - ECG
UR - http://www.scopus.com/inward/record.url?scp=85175561563&partnerID=8YFLogxK
U2 - 10.1016/j.eswa.2023.122070
DO - 10.1016/j.eswa.2023.122070
M3 - Article
AN - SCOPUS:85175561563
SN - 0957-4174
VL - 238
SP - 1
EP - 20
JO - Expert Systems with Applications
JF - Expert Systems with Applications
IS - Part F
M1 - 122070
ER -