Journal of System Simulation ›› 2024, Vol. 36 ›› Issue (7): 1609-1620.doi: 10.16182/j.issn1004731x.joss.23-0385
Received: 2023-04-06
Revised: 2023-05-29
Online: 2024-07-15
Published: 2024-07-12
Contact: Wei Jingxuan. E-mail: jq18890952@163.com; wjx@xidian.edu.cn
Jiang Quan, Wei Jingxuan. Real-time Scheduling Method for Dynamic Flexible Job Shop Scheduling[J]. Journal of System Simulation, 2024, 36(7): 1609-1620.
Table 1
DFJSP model parameters
| Parameter | Meaning | Value |
|---|---|---|
| m | Number of machines | {5, 10, 20, 30} |
| nf | Number of initial jobs | 1.5m or 2m |
| ns | Number of new jobs | 2m or 3m |
| PRi | Job priority | randi[ |
| ddti | Due-date tightness of job i | randn[1.0, 1.5] |
| ni | Number of operations in job i | ni = randi[m//2, m] |
| Mij | Size of the eligible machine set | randi[1, m] |
| Pijk | Processing time | randi[ |
| Eijk | Processing energy consumption | round(randi[0, 5]) + 40 - Pijk/2 |
| Ai | Arrival time of new jobs | exponentially distributed, exp(1/λnew), λnew = randi[25, 100] |
| Bk | Machine failure time | exponentially distributed, exp(1/λMTBF), λMTBF = 500 |
| Rk | Machine repair time | exponentially distributed, exp(1/λMTTR), λMTTR = 50 |
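The parameter ranges in Table 1 can be turned into a benchmark-instance generator. The sketch below is illustrative, not the paper's code: `randi[a, b]` is read as a uniform integer draw on [a, b], the exponential processes use the stated mean parameters, and the processing-time range `[1, 50]` is an assumption (the table's entry is truncated).

```python
import random

def sample_instance(m=5, seed=0):
    """Sample one DFJSP instance following the parameter ranges of Table 1.

    Illustrative sketch: the Pijk range [1, 50] is assumed (truncated in the
    table), and only the dynamic-arrival jobs and failure clocks are drawn.
    """
    rng = random.Random(seed)
    n_initial = int(1.5 * m)                 # nf: 1.5m (Table 4 uses nf=7 for m=5)
    n_new = 2 * m                            # ns: 2m or 3m newly arriving jobs
    lam_new = rng.randint(25, 100)           # mean inter-arrival time of new jobs
    jobs, t = [], 0.0
    for _ in range(n_new):
        t += rng.expovariate(1.0 / lam_new)  # Ai: exponential inter-arrival times
        ops = []
        for _ in range(rng.randint(m // 2, m)):             # ni operations
            machines = rng.sample(range(m), rng.randint(1, m))  # Mij
            proc = {k: rng.randint(1, 50) for k in machines}    # Pijk (assumed range)
            ops.append((machines, proc))
        jobs.append({"arrival": t, "ops": ops})
    mtbf = 500                               # Bk: mean time between failures
    next_failure = {k: rng.expovariate(1.0 / mtbf) for k in range(m)}
    return {"n_initial": n_initial, "jobs": jobs, "next_failure": next_failure}
```

Repair times Rk would be drawn the same way with mean λMTTR = 50 whenever a failure clock fires.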
Table 4
Pareto optimal solution GD values
| nf | m | ns | MPPO | FIFO+R | MT+R | EDD+R | CR+R | R+SPT | R+MEC | R+EA | R+SQT | R+R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7 | 5 | 10 | 0.036 | 0.045 | 0.039 | 0.048 | 0.039 | 0.027 | 0.023 | 0.047 | 0.021 | 0.044 |
| 10 | 5 | 15 | 0.052 | 0.066 | 0.067 | 0.060 | 0.067 | 0.050 | 0.019 | 0.064 | 0.031 | 0.071 |
| 15 | 10 | 20 | 0.022 | 0.029 | 0.028 | 0.033 | 0.027 | 0.058 | 0.024 | 0.042 | 0.026 | 0.031 |
| 20 | 10 | 30 | 0.022 | 0.047 | 0.031 | 0.034 | 0.033 | 0.014 | 0.017 | 0.056 | 0.007 | 0.036 |
| 30 | 20 | 40 | 0.018 | 0.033 | 0.028 | 0.035 | 0.032 | 0.017 | 0.014 | 0.059 | 0.005 | 0.030 |
| 40 | 20 | 60 | 0.018 | 0.040 | 0.027 | 0.028 | 0.026 | 0.019 | 0.022 | 0.063 | 0.020 | 0.026 |
| 45 | 30 | 60 | 0.014 | 0.018 | 0.020 | 0.024 | 0.023 | 0.018 | 0.019 | 0.054 | 0.026 | 0.019 |
| 60 | 30 | 90 | 0.019 | 0.037 | 0.024 | 0.027 | 0.026 | 0.005 | 0.009 | 0.056 | 0.003 | 0.028 |
Table 5
Pareto optimal solution IGD values
| nf | m | ns | MPPO | FIFO+R | MT+R | EDD+R | CR+R | R+SPT | R+MEC | R+EA | R+SQT | R+R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7 | 5 | 10 | 0.040 | 0.035 | 0.034 | 0.039 | 0.034 | 0.131 | 0.131 | 0.040 | 0.045 | 0.042 |
| 10 | 5 | 15 | 0.099 | 0.093 | 0.111 | 0.096 | 0.085 | 0.244 | 0.364 | 0.136 | 0.138 | 0.096 |
| 15 | 10 | 20 | 0.058 | 0.080 | 0.082 | 0.072 | 0.069 | 0.152 | 0.175 | 0.073 | 0.095 | 0.082 |
| 20 | 10 | 30 | 0.026 | 0.068 | 0.069 | 0.070 | 0.073 | 0.150 | 0.176 | 0.075 | 0.095 | 0.068 |
| 30 | 20 | 40 | 0.024 | 0.072 | 0.084 | 0.073 | 0.082 | 0.163 | 0.186 | 0.083 | 0.110 | 0.080 |
| 40 | 20 | 60 | 0.023 | 0.071 | 0.069 | 0.070 | 0.067 | 0.133 | 0.162 | 0.078 | 0.087 | 0.075 |
| 45 | 30 | 60 | 0.015 | 0.071 | 0.072 | 0.066 | 0.068 | 0.142 | 0.173 | 0.070 | 0.089 | 0.071 |
| 60 | 30 | 90 | 0.026 | 0.085 | 0.086 | 0.081 | 0.087 | 0.158 | 0.188 | 0.084 | 0.109 | 0.081 |
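The GD and IGD values in Tables 4 and 5 measure how close an obtained Pareto set is to a reference front, and how well it covers it. A minimal sketch of the common definitions (the paper's exact normalization may differ):

```python
import math

def gd(approx, reference):
    """Generational Distance: mean Euclidean distance from each point of the
    obtained front to its nearest point on the reference front.
    Smaller values indicate better convergence."""
    return sum(min(math.dist(a, r) for r in reference) for a in approx) / len(approx)

def igd(approx, reference):
    """Inverted Generational Distance: mean distance from each reference point
    to the obtained front, so gaps in coverage are also penalized."""
    return gd(reference, approx)
```

For example, a front containing only one of two reference points has GD = 0 but a positive IGD, which is why IGD is the stricter of the two indicators.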
Table 6
Pareto optimal solution Spread values
| nf | m | ns | MPPO | FIFO+R | MT+R | EDD+R | CR+R | R+SPT | R+MEC | R+EA | R+SQT | R+R |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 7 | 5 | 10 | 0.681 | 0.725 | 0.670 | 0.673 | 0.660 | 0.976 | 1.019 | 0.753 | 0.684 | 0.740 |
| 10 | 5 | 15 | 0.800 | 0.649 | 0.666 | 0.682 | 0.682 | 1.012 | 0.994 | 0.812 | 0.897 | 0.760 |
| 15 | 10 | 20 | 0.710 | 0.715 | 0.707 | 0.711 | 0.780 | 0.944 | 0.932 | 0.774 | 0.811 | 0.773 |
| 20 | 10 | 30 | 0.747 | 0.788 | 0.791 | 0.805 | 0.837 | 0.966 | 0.926 | 0.780 | 0.885 | 0.745 |
| 30 | 20 | 40 | 0.742 | 0.889 | 0.887 | 0.799 | 0.807 | 0.942 | 0.952 | 0.857 | 0.901 | 0.870 |
| 40 | 20 | 60 | 0.798 | 0.828 | 0.849 | 0.860 | 0.879 | 0.981 | 0.946 | 0.878 | 0.940 | 0.894 |
| 45 | 30 | 60 | 0.785 | 0.856 | 0.880 | 0.826 | 0.836 | 0.984 | 0.939 | 0.823 | 0.906 | 0.854 |
| 60 | 30 | 90 | 0.822 | 0.899 | 0.884 | 0.888 | 0.872 | 0.996 | 0.965 | 0.887 | 0.959 | 0.888 |
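The Spread values in Table 6 follow Deb's Δ indicator, which scores how uniformly a front's solutions are distributed between its extremes. A simplified bi-objective sketch, assuming the standard definition (the paper's normalization may differ):

```python
import math

def spread(front, extremes):
    """Deb's Δ spread indicator for a bi-objective front.
    front: list of (f1, f2) points; extremes: the two boundary points of the
    reference front. Lower values mean a more uniform, well-extended front."""
    pts = sorted(front)                               # order along the first objective
    d = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    d_bar = sum(d) / len(d)                           # mean consecutive distance
    d_f = math.dist(extremes[0], pts[0])              # gap to the first extreme
    d_l = math.dist(extremes[1], pts[-1])             # gap to the last extreme
    num = d_f + d_l + sum(abs(di - d_bar) for di in d)
    den = d_f + d_l + (len(pts) - 1) * d_bar
    return num / den
```

A perfectly uniform front that reaches both extremes scores 0; clustered or truncated fronts score closer to 1, which matches the near-1 values of the random-rule baselines in Table 6.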