Journal of System Simulation ›› 2024, Vol. 36 ›› Issue (7): 1670-1681.doi: 10.16182/j.issn1004731x.joss.23-0443


Task Analysis Methods Based on Deep Reinforcement Learning

Gong Xue1, Peng Pengfei1, Rong Li1, Zheng Yalian2, Jiang Jun1

  1. Naval University of Engineering, Wuhan 430033, China
  2. State Key Laboratory of Water Resources and Hydropower Engineering Science, Wuhan University, Wuhan 430072, China
  • Received: 2023-04-14 Revised: 2023-06-01 Online: 2024-07-15 Published: 2024-07-12
  • Contact: Rong Li E-mail: gogxue@163.com; 33574319@qq.com

Abstract:

To address the strong coupling among interacting tasks and the many influencing factors in task analysis, a task analysis method based on sequence decoupling and deep reinforcement learning (DRL) is proposed, which achieves task decomposition and task-sequence reconstruction under complex constraints. The method designs a DRL environment based on task information interaction and improves the SumTree algorithm using the difference between the loss functions of the target network and the evaluation network, enabling priority evaluation among tasks. An activation-function mechanism is introduced into the DRL network: task features are extracted, a greedy activation factor is proposed, the parameters of the deep neural network are optimized, and the optimal state of the agent is determined, thereby facilitating its state transitions. A multi-objective task-execution sequence diagram is then generated through experience replay. Simulation results show that the method generates executable task diagrams under optimal scheduling and adapts better to dynamic scenarios than to static ones, indicating broad application prospects in domain task planning.
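The abstract's SumTree improvement assigns each stored transition a priority derived from the gap between the target-network and evaluation-network losses, and samples transitions in proportion to that priority during experience replay. The sketch below is a minimal, generic illustration of such a priority tree, not the authors' implementation; the class, the `priority_from_losses` helper, and the small smoothing constant `eps` are assumptions for illustration.

```python
class SumTree:
    """Binary sum tree for proportional prioritized sampling.

    Leaf i stores the priority of transition i; each internal node stores
    the sum of its children, so drawing a value in [0, total()) and
    descending the tree selects leaf i with probability p_i / total().
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.tree = [0.0] * (2 * capacity - 1)   # internal nodes + leaves
        self.data = [None] * capacity            # stored transitions
        self.write = 0                           # next leaf slot (ring buffer)

    def add(self, priority, transition):
        idx = self.write + self.capacity - 1     # leaf index in the tree array
        self.data[self.write] = transition
        self.update(idx, priority)
        self.write = (self.write + 1) % self.capacity

    def update(self, idx, priority):
        change = priority - self.tree[idx]
        self.tree[idx] = priority
        while idx > 0:                           # propagate the change to the root
            idx = (idx - 1) // 2
            self.tree[idx] += change

    def total(self):
        return self.tree[0]                      # sum of all priorities

    def sample(self, value):
        """Descend from the root; `value` is drawn from [0, total())."""
        idx = 0
        while idx < self.capacity - 1:           # stop at a leaf
            left = 2 * idx + 1
            if value <= self.tree[left]:
                idx = left
            else:
                value -= self.tree[left]
                idx = left + 1
        return self.tree[idx], self.data[idx - self.capacity + 1]


def priority_from_losses(eval_loss, target_loss, eps=1e-2):
    # Assumed priority rule following the abstract: priority grows with the
    # gap between the two networks' losses; eps keeps every priority > 0.
    return abs(target_loss - eval_loss) + eps
```

With this structure, a transition whose target and evaluation losses disagree strongly gets a larger leaf value and is therefore replayed more often; `update` keeps sampling and priority refresh both O(log capacity).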

Key words: task analysis, reinforcement learning, evaluation network, greedy factors, coupled tasks, activation functions
