Journal of System Simulation ›› 2023, Vol. 35 ›› Issue (11): 2345-2358. doi: 10.16182/j.issn1004731x.joss.22-0666


Intercell Dynamic Scheduling Method Based on Deep Reinforcement Learning

Ni Jing, Ma Mengke

  1. University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received: 2022-06-20  Revised: 2022-09-04  Online: 2023-11-25  Published: 2023-11-24

Abstract:

To solve the intercell scheduling problem in which machining tasks arrive dynamically, and to realize adaptive scheduling in the complex and changeable environment of an intelligent factory, a scheduling method based on a deep Q-network is proposed. A complex network is constructed with cells as nodes and workpieces' intercell machining paths as directed edges, and degree values are introduced to define a state space that captures intercell scheduling characteristics. A compound scheduling rule composed of a workpiece layer, a cell layer, and a machine layer is designed, and this hierarchical optimization makes the scheduling scheme more global. Because the double deep Q-network (DDQN) still selects sub-optimal actions in the later stage of training, an exploration strategy based on an exponential function is proposed. Simulation experiments at different scales verify that the proposed method can cope with changeable dynamic environments and quickly generate an optimal scheduling scheme.
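The exponential exploration strategy mentioned above can be illustrated with a minimal sketch. The paper does not give the exact formula here, so the decay form, the function and parameter names (`epsilon`, `select_action`, `eps_min`, `eps_max`, `decay`), and the constants below are all illustrative assumptions: the exploration rate starts high and decays exponentially with the training step, so late in training the agent almost always exploits its learned Q-values instead of taking sub-optimal random actions.

```python
import math
import random

def epsilon(step, eps_min=0.01, eps_max=1.0, decay=1e-3):
    """Exponentially anneal the exploration rate toward eps_min.

    Hypothetical form; the paper's exact exponential schedule may differ.
    """
    return eps_min + (eps_max - eps_min) * math.exp(-decay * step)

def select_action(q_values, step):
    """Epsilon-greedy action selection with the exponential schedule."""
    if random.random() < epsilon(step):
        # Explore: pick a random action (e.g., a random compound rule)
        return random.randrange(len(q_values))
    # Exploit: pick the action with the highest estimated Q-value
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With this schedule, exploration dominates early (epsilon starts at 1.0) and decays smoothly toward 0.01, which is one common way to reduce the late-training sub-optimal action selection the abstract attributes to plain DDQN.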

Key words: intercell scheduling, dynamic scheduling, reinforcement learning, degree value, compound rule
