Journal of System Simulation ›› 2023, Vol. 35 ›› Issue (7): 1619-1633. DOI: 10.16182/j.issn1004731x.joss.22-0334


Path Planning of Mobile Robots Based on Memristor Reinforcement Learning in Dynamic Environment

Hailan Yang1, Yongqiang Qi1, Baolei Wu2, Dan Rong1, Miaoying Hong1, Jun Wang3

  1. School of Mathematics, China University of Mining and Technology, Xuzhou 221116, China
  2. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
  3. School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
  • Received: 2022-04-11 Revised: 2022-07-07 Online: 2023-07-29 Published: 2023-07-19
  • Contact: Yongqiang Qi E-mail: yhailan163@163.com; qiyongqiang@163.com

Abstract:

To solve the path planning problem of mobile robots in dynamic environments, a two-layer path planning algorithm based on an improved ant colony algorithm and the MA-DQN algorithm is proposed. Static global path planning is accomplished by an ant colony algorithm with an improved probabilistic transfer function and pheromone updating rule. The structure of the traditional DQN algorithm is improved by using memristors as the synaptic structure of the neural network, which then performs local dynamic obstacle avoidance for the mobile robot. The path planning mechanism is switched according to whether there are dynamic obstacles within the sensing range of the mobile robot, so as to complete the path planning task in the dynamic environment. The simulation results show that the algorithm can effectively plan a feasible path for mobile robots in a dynamic environment in real time.
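The abstract describes a two-layer scheme that switches between following the ant-colony global path and invoking the DQN-based local avoidance whenever a dynamic obstacle enters the robot's sensing range. The sketch below illustrates only that switching mechanism; the sensing radius, waypoint following, and the placeholder avoidance policy are illustrative assumptions and do not reproduce the authors' improved ant colony or MA-DQN algorithms.

```python
import math

# Minimal sketch of the two-layer switching mechanism described in the abstract.
# All names (SENSING_RANGE, plan_step, dqn_avoid_step, ...) are hypothetical.

SENSING_RANGE = 3.0  # assumed sensing radius of the mobile robot (grid units)

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def dynamic_obstacle_in_range(robot_pos, dynamic_obstacles):
    """Check whether any dynamic obstacle lies within the sensing range."""
    return any(distance(robot_pos, o) <= SENSING_RANGE for o in dynamic_obstacles)

def follow_global_path(robot_pos, global_path):
    """Advance to the next unreached waypoint of the precomputed global path
    (the path itself would come from the ant colony layer)."""
    for waypoint in global_path:
        if distance(robot_pos, waypoint) > 1e-6:
            return waypoint
    return robot_pos  # goal reached

def dqn_avoid_step(robot_pos, dynamic_obstacles):
    """Placeholder for the local obstacle-avoidance policy (the paper's
    memristor-based DQN). Here it simply steps away from the nearest obstacle."""
    nearest = min(dynamic_obstacles, key=lambda o: distance(robot_pos, o))
    dx, dy = robot_pos[0] - nearest[0], robot_pos[1] - nearest[1]
    norm = math.hypot(dx, dy) or 1.0
    return (robot_pos[0] + dx / norm, robot_pos[1] + dy / norm)

def plan_step(robot_pos, global_path, dynamic_obstacles):
    """Switch between the global layer and the local avoidance layer."""
    if dynamic_obstacles and dynamic_obstacle_in_range(robot_pos, dynamic_obstacles):
        return dqn_avoid_step(robot_pos, dynamic_obstacles)  # local layer
    return follow_global_path(robot_pos, global_path)        # global layer

if __name__ == "__main__":
    path = [(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]
    obstacles = [(2.5, 2.0)]
    pos = (0.0, 0.0)
    for _ in range(5):
        pos = plan_step(pos, path, obstacles)
        print(pos)
```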

Key words: dynamic environment, deep Q-network (DQN), memristor, in-memory computing, path planning
