With the rapid increase in the number of vehicles, the explosive growth of data traffic, and the growing shortage of spectrum resources, existing task offloading schemes perform poorly, and on-board terminals cannot compute efficiently. Therefore, this article proposes a reinforcement-learning-based task offloading strategy for the edge computing architecture of the Internet of Vehicles (IoV). First, the IoV system architecture is designed: the Road Side Unit receives vehicle data within its community and forwards it to a Mobile Edge Computing (MEC) server for analysis, while the control center collects all vehicle information. Then, the computation model, communication model, interference model, and privacy issues are formulated to ensure rational task offloading in the IoV. Finally, minimizing the user cost function is taken as the objective, and a double deep Q-network, a deep reinforcement learning algorithm, is used to solve the problem under the real-time network-state changes caused by user mobility. The results show that the proposed offloading strategy converges quickly. Moreover, compared with other offloading schemes, the number of users, vehicle speed, and MEC computing power have the smallest impact on user cost under the proposed strategy, and its task offloading rate is the highest, making it better suited to IoV scenarios.
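The core idea of the double deep Q-network used above, decoupling greedy action *selection* (online network) from action *evaluation* (target network) to reduce value overestimation, can be illustrated with a minimal sketch. The toy state/action sizes, the cost values, and the reward shaping (reward = negative user cost, so minimizing cost becomes maximizing reward) are illustrative assumptions, not the paper's actual model; plain NumPy tables stand in for the neural networks.

```python
import numpy as np

# Hypothetical toy setting: N_STATES network states, N_ACTIONS offloading
# choices (action 0 = local execution, actions 1..2 = offload to a MEC server).
# Illustrative double-Q update only, not the paper's system model.
N_STATES, N_ACTIONS = 4, 3
rng = np.random.default_rng(0)

# Two value estimates, as in double DQN: the online table selects the
# greedy action, the target table evaluates it.
q_online = np.zeros((N_STATES, N_ACTIONS))
q_target = np.zeros((N_STATES, N_ACTIONS))

def double_q_update(s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One double-Q-style update: select with online, evaluate with target."""
    a_star = int(np.argmax(q_online[s_next]))              # action selection
    td_target = reward + gamma * q_target[s_next, a_star]  # action evaluation
    q_online[s, a] += alpha * (td_target - q_online[s, a])

# Toy rollout: pretend offloading (a > 0) incurs a much lower user cost.
for step in range(500):
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    cost = 1.0 if a == 0 else 0.2
    double_q_update(s, a, -cost, rng.integers(N_STATES))
    if step % 50 == 0:            # periodically sync the target table
        q_target[:] = q_online

# The learned greedy policy avoids the high-cost local-execution action.
policy = q_online.argmax(axis=1)
print(policy)
```

In the actual scheme the Q-tables are replaced by neural networks over the continuous IoV network state, but the selection/evaluation split shown in `double_q_update` is the same mechanism.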
Bibliographical note
Funding Information:
This work was supported by the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (STIP) under Grant 201802004.
© 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
- Internet of Vehicles
- Mobile edge computing
- Privacy security
- Reinforcement learning
- Task offloading