TY - JOUR
T1 - Workflow offloading for energy minimization under deep reinforcement learning
AU - Wang, Shuang
AU - Liu, Yibo
AU - Wu, Tianxing
AU - Zhang, Yang
AU - Sheng, Quan Z.
PY - 2025/11
Y1 - 2025/11
N2 - As the number of computationally intensive workflows processed by cloud, edge, and end devices grows, the energy consumed to execute them rises sharply. Given the dynamic nature of data sizes within workflows, data compression techniques can be employed to curtail transmission energy consumption, or tasks can be offloaded directly to edge and cloud devices for execution; however, selecting suitable devices for these workflows so as to minimize energy consumption is difficult. To address the dynamic variations in workflow task data and the heterogeneous characteristics of resources, we propose a novel offloading scheme based on deep reinforcement learning (DRL). We first propose a task priority algorithm that treats energy consumption as a key factor. Subsequently, we construct a mathematical model based on the Markov Decision Process (MDP) that considers both task and system states to minimize overall system energy consumption. Finally, we employ the deep Q-network (DQN) algorithm to train the proposed MDP model, enhancing the DQN experience pool replacement strategy for improved learning efficiency. To validate the proposed approach, we conduct experiments to fine-tune algorithm parameters and to assess the significant benefits of data compression for energy consumption optimization. Compared with other algorithms, the proposed algorithm executes the same workflow applications with lower energy consumption and an acceptable makespan.
AB - As the number of computationally intensive workflows processed by cloud, edge, and end devices grows, the energy consumed to execute them rises sharply. Given the dynamic nature of data sizes within workflows, data compression techniques can be employed to curtail transmission energy consumption, or tasks can be offloaded directly to edge and cloud devices for execution; however, selecting suitable devices for these workflows so as to minimize energy consumption is difficult. To address the dynamic variations in workflow task data and the heterogeneous characteristics of resources, we propose a novel offloading scheme based on deep reinforcement learning (DRL). We first propose a task priority algorithm that treats energy consumption as a key factor. Subsequently, we construct a mathematical model based on the Markov Decision Process (MDP) that considers both task and system states to minimize overall system energy consumption. Finally, we employ the deep Q-network (DQN) algorithm to train the proposed MDP model, enhancing the DQN experience pool replacement strategy for improved learning efficiency. To validate the proposed approach, we conduct experiments to fine-tune algorithm parameters and to assess the significant benefits of data compression for energy consumption optimization. Compared with other algorithms, the proposed algorithm executes the same workflow applications with lower energy consumption and an acceptable makespan.
KW - Computing offloading
KW - Data compression
KW - Deep reinforcement learning
KW - Edge computing
KW - Energy consumption
UR - http://www.scopus.com/inward/record.url?scp=105019766108&partnerID=8YFLogxK
U2 - 10.1007/s00607-025-01576-y
DO - 10.1007/s00607-025-01576-y
M3 - Article
AN - SCOPUS:105019766108
SN - 0010-485X
VL - 107
SP - 1
EP - 32
JO - Computing
JF - Computing
IS - 11
M1 - 222
ER -