Journal of System Simulation, 2023, Vol. 35, Issue (12): 2550-2559. DOI: 10.16182/j.issn1004731x.joss.22-0841


Task Scheduling for Internet of Vehicles Based on Deep Reinforcement Learning in Edge Computing

Ju Xiang, Su Shengchao, Xu Chaojie, He Beibei

  1. School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
  • Received: 2022-07-20  Revised: 2022-08-22  Online: 2023-12-15  Published: 2023-12-12
  • Contact: Su Shengchao  E-mail: juxiang@sues.edu.cn; jnssc@sues.edu.cn

Abstract:

Aiming at the offloading and execution of delay-constrained computing tasks in the internet of vehicles under edge computing, a task scheduling method based on deep reinforcement learning is proposed. In a multi-edge-server scenario, a software-defined-network-aided internet of vehicles task offloading system is built, and on this basis the task scheduling model for vehicle computation offloading is formulated. According to the characteristics of task scheduling, a scheduling method based on an improved pointer network is designed. Considering the complexity of joint task scheduling and computing resource allocation, a deep reinforcement learning algorithm is used to train the pointer network, and the trained network then schedules the vehicles' offloaded tasks. Simulation results show that, with the same edge-server computing resources, the proposed method processes more delay-constrained computing tasks than the compared methods and effectively improves the service capability of the internet of vehicles task offloading system.
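
The sketch below is a minimal illustration of the kind of scheduler the abstract describes: a pointer network that orders offloaded tasks, trained with a REINFORCE-style deep reinforcement learning loop. It assumes PyTorch, synthetic task features (data size, required CPU cycles, deadline), and a simple deadline-hit reward on a single edge server; these choices are assumptions made for illustration and are not the authors' implementation or parameter settings.

# Illustrative sketch only (not the paper's code): pointer network + REINFORCE
# for ordering delay-constrained offloaded tasks.
import torch
import torch.nn as nn

class PointerNet(nn.Module):
    def __init__(self, feat_dim=3, hid=64):
        super().__init__()
        self.enc = nn.LSTM(feat_dim, hid, batch_first=True)   # task encoder
        self.dec_cell = nn.LSTMCell(hid, hid)                  # decoder step
        self.q = nn.Linear(hid, hid)   # query projection (decoder state)
        self.k = nn.Linear(hid, hid)   # key projection (encoder outputs)
        self.v = nn.Linear(hid, 1)     # attention scorer -> pointer logits
        self.start = nn.Parameter(torch.zeros(hid))            # learned start token

    def forward(self, tasks):
        # tasks: (B, N, feat_dim), e.g. [data size, CPU cycles, deadline] per task
        B, N, _ = tasks.shape
        enc_out, (h, c) = self.enc(tasks)            # (B, N, hid)
        hx, cx = h[-1], c[-1]
        inp = self.start.expand(B, -1)
        mask = torch.zeros(B, N, dtype=torch.bool)
        order, logps = [], []
        for _ in range(N):
            hx, cx = self.dec_cell(inp, (hx, cx))
            # additive attention over encoder outputs gives pointer logits
            scores = self.v(torch.tanh(self.q(hx).unsqueeze(1) + self.k(enc_out))).squeeze(-1)
            scores = scores.masked_fill(mask, float('-inf'))   # forbid repeats
            dist = torch.distributions.Categorical(logits=scores)
            pick = dist.sample()                               # next task to schedule
            logps.append(dist.log_prob(pick))
            mask = mask.clone()
            mask[torch.arange(B), pick] = True
            inp = enc_out[torch.arange(B), pick]               # feed chosen task back
            order.append(pick)
        return torch.stack(order, 1), torch.stack(logps, 1).sum(1)

def reward(order, tasks):
    # Hypothetical reward: number of tasks that finish before their deadlines
    # when executed sequentially on one edge server with a unit service rate.
    B, N, _ = tasks.shape
    cycles, deadline = tasks[..., 1], tasks[..., 2]
    r = torch.zeros(B)
    t = torch.zeros(B)
    for i in range(N):
        idx = order[:, i]
        t = t + cycles[torch.arange(B), idx]
        r = r + (t <= deadline[torch.arange(B), idx]).float()
    return r

# REINFORCE training loop with a moving-average baseline on synthetic batches.
net = PointerNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
baseline = 0.0
for step in range(200):
    tasks = torch.rand(32, 10, 3)              # random synthetic task batch
    order, logp = net(tasks)
    r = reward(order, tasks)
    baseline = 0.9 * baseline + 0.1 * r.mean().item()
    loss = (-(r - baseline) * logp).mean()     # policy-gradient objective
    opt.zero_grad()
    loss.backward()
    opt.step()

In the paper's setting the reward would instead reflect the joint scheduling and edge-server resource allocation outcome; the deadline-hit count above is only a stand-in to make the training loop concrete.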

Key words: internet of vehicles, edge computing, task scheduling, pointer network, deep reinforcement learning
