Abstract
In cyber-physical systems, it is advantageous to combine cloud and edge resources so that user data can be processed close to where it is generated. Cloud services, however, do not adapt well to variations in the size of the underlying data, which leads to increased latency, deadline violations, and higher cost; resolving these issues on resource-constrained edge devices is equally challenging. In this work, a novel reinforcement learning algorithm, Capacity-Cost Ratio-Reinforcement Learning (CCR-RL), is proposed that considers both resource utilization and cost for the target cyber-physical systems. In CCR-RL, the task offloading decision is made based on the data arrival rate, the computation power of the edge devices, and the underlying transmission capacity. A deep learning model is then created to allocate resources according to the available communication and computation rates. Moreover, new algorithms are proposed to regulate the allocation of communication and computation resources for the workload among edge devices and edge servers. Simulation results demonstrate that the proposed method achieves lower latency and reduced processing cost compared to state-of-the-art schemes.
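The abstract describes CCR-RL only at a high level, so the sketch below is purely illustrative: it shows one plausible reading of a capacity-cost-ratio-driven offloading decision over the inputs the abstract names (data size, edge computation power, transmission capacity, cost). The `capacity_cost_ratio` formula, the cost model, and the greedy `offload_decision` are assumptions for intuition, not the paper's algorithm, which learns the offloading policy with reinforcement learning.

```python
# Minimal sketch, NOT the authors' CCR-RL: all formulas here are assumed.

from dataclasses import dataclass


@dataclass
class Task:
    size_bits: float        # input data size to transmit
    cycles: float           # CPU cycles required to process the task


@dataclass
class Node:
    cpu_hz: float           # computation power (cycles per second)
    cost_per_cycle: float   # monetary cost of one CPU cycle
    uplink_bps: float       # transmission capacity to reach this node
                            # (effectively infinite for local execution)


def capacity_cost_ratio(task: Task, node: Node) -> float:
    """Hypothetical 'capacity-cost ratio': the effective processing rate
    the task sees on this node (after paying the transmission delay),
    divided by the monetary cost of executing it there."""
    tx_delay = task.size_bits / node.uplink_bps
    compute_delay = task.cycles / node.cpu_hz
    effective_rate = task.cycles / (tx_delay + compute_delay)  # cycles/s
    cost = task.cycles * node.cost_per_cycle
    return effective_rate / cost


def offload_decision(task: Task, local: Node, candidates: list[Node]) -> Node:
    """Greedy stand-in for the learned policy: pick the node with the
    highest capacity-cost ratio. CCR-RL instead learns this mapping from
    (arrival rate, computation power, transmission capacity) to an action."""
    return max([local, *candidates], key=lambda n: capacity_cost_ratio(task, n))


if __name__ == "__main__":
    task = Task(size_bits=4e6, cycles=2e9)
    local = Node(cpu_hz=1e9, cost_per_cycle=1e-10, uplink_bps=float("inf"))
    edge = Node(cpu_hz=8e9, cost_per_cycle=4e-10, uplink_bps=20e6)
    best = offload_decision(task, local, [edge])
    print("offload to edge" if best is edge else "run locally")
```

Running the example offloads to the edge server: its faster CPU outweighs both the transmission delay and the higher per-cycle cost for this task, which is exactly the latency-versus-cost tradeoff the abstract targets.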
| Original language | English |
| --- | --- |
| Pages (from-to) | 245-254 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Emerging Topics in Computational Intelligence |
| Volume | 6 |
| Issue number | 2 |
| Early online date | 28 Dec 2020 |
| DOIs | |
| Publication status | Published - Apr 2022 |
Keywords
- Artificial intelligence
- Computational modeling
- Deep learning
- deep reinforcement learning
- edge computing
- game theory
- measurement systems
- Performance evaluation
- Processor scheduling
- Resource management
- resource provision
- Servers
- Task analysis