Journal of System Simulation ›› 2022, Vol. 34 ›› Issue (2): 258-268. doi: 10.16182/j.issn1004731x.joss.21-0337
• Modeling Theory and Methodology •
Received: 2021-04-20
Revised: 2021-07-01
Online: 2022-02-18
Published: 2022-02-23
Contact: Xinyi Peng
E-mail: liqirui@gdupt.edu.cn; 1742043887@qq.com
CLC Number:
Qirui Li, Xinyi Peng. Job Scheduling and Simulation in Cloud Based on Deep Reinforcement Learning[J]. Journal of System Simulation, 2022, 34(2): 258-268.
| 1 | Xu Z, Liang W, Xia Q. Efficient Embedding of Virtual Networks to Distributed Clouds via Exploring Periodic Resource Demands[J]. IEEE Transactions on Cloud Computing(S2168-7161), 2018, 6(3): 694-707. |
| 2 | Li Chenghui, Li Renwang, Yang Qiangguang, et al. Cloud Computing Task Scheduling Algorithm Based on Improved Firefly Algorithm[J]. Journal of Zhejiang Sci-Tech University (Natural Sciences Edition), 2019, 41(3): 354-359. (in Chinese) |
| 3 | Wang Kangjin, Jia Tong, Li Ying. State-of-the-art Survey of Scheduling and Resource Management Technology for Colocation Jobs[J]. Journal of Software, 2020, 31(10): 3100-3119. (in Chinese) |
| 4 | Verma A, Kaushal S. A Hybrid Multi-objective Particle Swarm Optimization for Scientific Workflow Scheduling[J]. Parallel Computing(S0167-8191), 2017, 62: 1-19. |
| 5 | Duan H, Chen C, Min G, et al. Energy-aware Scheduling of Virtual Machines in Heterogeneous Cloud Computing Systems[J]. Future Generation Computer Systems(S0167-739X), 2017, 74: 142-150. |
| 6 | Srichandan S, Turuk A K S. Task Scheduling for Cloud Computing Using Multi-objective Hybrid Bacteria Foraging Algorithm[J]. Future Computing and Informatics Journal(S2314-7288), 2018, 3(2): 210-230. |
| 7 | Li Qiang, Liu Xiaofeng. Cloud Job Scheduling Model Based on Simulated Plant Growth Algorithm[J]. Journal of System Simulation, 2018, 30(12): 4649-4658. (in Chinese) |
| 8 | Yin Changsheng, Yang Ruopeng, Zhu Wei, et al. A Survey on Multi-agent Hierarchical Reinforcement Learning[J]. CAAI Transactions on Intelligent Systems, 2020, 15(4): 646-655. (in Chinese) |
| 9 | Peng Z, Cui D, Zuo J, et al. Random Task Scheduling Scheme Based on Reinforcement Learning in Cloud Computing[J]. Cluster Computing(S1386-7857), 2015, 18: 1595-1607. |
| 10 | Cui D, Peng Z, Xiong J, et al. A Reinforcement Learning-Based Mixed Job Scheduler Scheme for Grid or IaaS Cloud[J]. IEEE Transactions on Cloud Computing(S2168-7161), 2020, 8(4): 1030-1039. |
| 11 | Yuan Jingling, Chen Minchi, Jiang Tao, et al. Multi-objective Reinforcement Learning Job Scheduling Method Using AHP Fixed Weight in Heterogeneous Cloud Environment[J/OL]. Control and Decision, 2021: 1-8 (2021-01-05). (in Chinese) |
| 12 | Lin J, Cui D, Peng Z, et al. A Two-Stage Framework for the Multi-User Multi-Data Center Job Scheduling and Resource Allocation[J]. IEEE Access(S2169-3536), 2020, 8: 197863-197874. |
| 13 | Guo Yudong, Zuo Jinping. The Scheduling Algorithm of Cloud Job Based on Hopfield Neural Network[J]. Journal of System Simulation, 2019, 31(12): 2859-2867. (in Chinese) |
| 14 | Rangra A, Sehgal V K, Shukla S. A Novel Approach of Cloud Based Scheduling Using Deep-Learning Approach in E-Commerce Domain[J]. International Journal of Information System Modeling and Design(S1947-8186), 2019, 10(3): 59-75. |
| 15 | Li Kaiwen, Zhang Tao, Wang Rui, et al. Research Reviews of Combinatorial Optimization Methods Based on Deep Reinforcement Learning[J]. Acta Automatica Sinica, 2021, 47(11): 2521-2537. (in Chinese) |
| 16 | Zhu Fei, Wu Wen, Fu Yuchen, et al. A Dual Deep Network Based Secure Deep Reinforcement Learning Method[J]. Chinese Journal of Computers, 2019, 42(8): 1812-1826. (in Chinese) |
| 17 | Guo W, Tian W, Ye Y, et al. Cloud Resource Scheduling With Deep Reinforcement Learning and Imitation Learning[J]. IEEE Internet of Things Journal(S2327-4662), 2021, 8(5): 3576-3586. |
| 18 | Peng Z, Lin J, Cui D, et al. A Multi-objective Trade-off Framework for Cloud Resource Scheduling Based on the Deep Q-network Algorithm[J]. Cluster Computing(S1386-7857), 2020, 23(4): 2753-2767. |
| 19 | Lin J, Peng Z, Cui D. Deep Reinforcement Learning for Multi-resource Cloud Job Scheduling[C]// 2018 25th International Conference on Neural Information Processing. Berlin: Springer, 2018: 289-302. |
| 20 | Miettinen A, Nurminen J. Energy Efficiency of Mobile Clients in Cloud Computing[C]// Proceedings of the 2nd USENIX Conference on Hot Topics in Cloud Computing (HotCloud'10). Boston: USENIX Association, 2010: 1-7. |
| [1] | Jiang Ming, He Tao. Solving the Vehicle Routing Problem Based on Deep Reinforcement Learning [J]. Journal of System Simulation, 2025, 37(9): 2177-2187. |
| [2] | Ni Peilong, Mao Pengjun, Wang Ning, Yang Mengjie. Robot Path Planning Based on Improved A-DDQN Algorithm [J]. Journal of System Simulation, 2025, 37(9): 2420-2430. |
| [3] | Chen Zhen, Wu Zhuoyi, Zhang Lin. Research on Policy Representation in Deep Reinforcement Learning [J]. Journal of System Simulation, 2025, 37(7): 1753-1769. |
| [4] | Wu Guohua, Zeng Jiaheng, Wang Dezhi, Zheng Long, Zou Wei. A Quadrotor Trajectory Tracking Control Method Based on Deep Reinforcement Learning [J]. Journal of System Simulation, 2025, 37(5): 1169-1187. |
| [5] | Li Qiang, Qin Huawei, Qiao Bingqin, Wu Ruifang. An Algorithm for Cloud-based Web Service Combination Optimization Through Plant Growth Simulation [J]. Journal of System Simulation, 2025, 37(2): 462-473. |
| [6] | Bai Zhenzu, Hou Yizhi, He Zhangming, Wei Juhui, Zhou Haiyin, Wang Jiongqi. Optimization of Dynamic Weapon Target Assignment Considering Random Disturbances [J]. Journal of System Simulation, 2025, 37(12): 2967-2980. |
| [7] | Zheng Jiayu, Mai Zhuxue, Chen Zheyi. Optimization of Service Caching and Computation Offloading in Digital Twin Cloud-edge Networks [J]. Journal of System Simulation, 2025, 37(11): 2741-2753. |
| [8] | Di Jian, Wan Xue, Jiang Limei. Evolutionary Reinforcement Learning Based on Elite Instruction and Random Search [J]. Journal of System Simulation, 2025, 37(11): 2877-2887. |
| [9] | Xu Zhongkai, Chu Chenyang, Xie Kai, Zhao Ruizhuo, Ke Wenjun. Optimization Dispatch Method for High-proportion Renewable Energy Power Systems Based on SC-PPO [J]. Journal of System Simulation, 2025, 37(10): 2511-2521. |
| [10] | Liang Xiuman, Liu Ziliang, Liu Zhendong. Path Planning of Improved RRT Algorithm Based on Deep Reinforcement Learning [J]. Journal of System Simulation, 2025, 37(10): 2578-2593. |
| [11] | Jiang Jiachen, Jia Zhengxuan, Xu Zhao, Lin Tingyu, Zhao Pengpeng, Ou Yiming. Decision Modeling and Solution Based on Game Adversarial Complex Systems [J]. Journal of System Simulation, 2025, 37(1): 66-78. |
| [12] | Qin Baoxin, Zhang Yuxiao, Wu Sirui, Cao Weichong, Li Zhan. Intelligent Optimization of Coal Terminal Unloading Scheduling Based on Improved D3QN Algorithm [J]. Journal of System Simulation, 2024, 36(3): 770-781. |
| [13] | Li Ming, Ye Wangzhong, Yan Jiehua. Path Planning of Desert Robot Based on Deep Reinforcement Learning [J]. Journal of System Simulation, 2024, 36(12): 2917-2925. |
| [14] | Zhang Yongfu, Liu Yang, Yuan He. A Method for Key Node Identification in Operational Target System Based on War Gaming [J]. Journal of System Simulation, 2024, 36(11): 2654-2661. |
| [15] | Wang Cong, Yu Jiaying, Zhang Hongli. Multi-objective Energy-efficient No-wait Flow Shop Scheduling Based on Hybrid Discrete State Transition Algorithm [J]. Journal of System Simulation, 2024, 36(10): 2345-2358. |