Journal of System Simulation ›› 2024, Vol. 36 ›› Issue (2): 405-414. doi: 10.16182/j.issn1004731x.joss.22-1105


Flipper Control Method for Tracked Robot Based on Deep Reinforcement Learning

Pan Hainan, Chen Bailiang, Huang Kaihong, Ren Junkai, Cheng Chuang, Lu Huimin, Zhang Hui

  1. College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410073, China
  • Received: 2022-09-20 Revised: 2022-12-11 Online: 2024-02-15 Published: 2024-02-04
  • Contact: Huang Kaihong E-mail: phn@nudt.edu.cn; kaihong.huang@nudt.edu.cn

Abstract:

Tracked robots equipped with flippers have a degree of terrain-adaptation capability. To improve the intelligent operation of such robots in complex environments, it is important to realize autonomous flipper control. Combining expert obstacle-crossing experience with optimization indicators, the robot's flipper control problem is modeled as a Markov decision process (MDP), and a simulation training environment is built on the physics simulation engine Pymunk. A deep reinforcement learning control algorithm based on the dueling double DQN (D3QN) network is proposed to control the flippers. Taking terrain information and the robot state as input and the angles of the four flippers as output, the algorithm achieves self-learning flipper control on challenging terrain. The learned flipper control policy is compared with manual operation in the Gazebo 3D simulation environment. The results show that the proposed algorithm endows the robot's flippers with adaptive adjustment capability, helping the robot traverse complex terrain more efficiently.
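The D3QN approach named in the abstract combines a dueling network head (separate state-value and advantage streams) with double-DQN target computation (the online network selects the next action, the target network evaluates it). A minimal sketch of these two ingredients is below; the network sizes, layer names, and the idea of discretizing the four flipper angles into a joint action set are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)        # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v, a = self.value(h), self.adv(h)
        # Subtracting the mean advantage keeps V and A identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online: nn.Module, target: nn.Module,
                      next_obs: torch.Tensor, reward: torch.Tensor,
                      done: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Double DQN: online net picks argmax action, target net evaluates it."""
    with torch.no_grad():
        next_a = online(next_obs).argmax(dim=-1, keepdim=True)
        next_q = target(next_obs).gather(-1, next_a).squeeze(-1)
        return reward + gamma * (1.0 - done) * next_q
```

Here `obs` would stack the local terrain profile with the robot state, and each discrete action would map to a combination of target angles for the four flippers; those encodings are design choices left open by the abstract.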

Key words: tracked robot, flipper autonomous control, autonomous traversal, DRL, robot operation
