Journal of System Simulation ›› 2025, Vol. 37 ›› Issue (2): 541-550. doi: 10.16182/j.issn1004731x.joss.23-1220

• Research Paper •

Dynamic Loading Simulation Method for Large-scale Spiking Neural Network

Shen Jiawei, Cai Daye, Yang Guoqing, Lü Pan, Li Hong

  1. College of Computer Science and Technology, Zhejiang University, Hangzhou 310013, China
  • Received: 2023-10-10  Revised: 2023-12-27  Online: 2025-02-14  Published: 2025-02-10
  • Contact: Yang Guoqing
  • First author: Shen Jiawei (b. 1998), male, master's student; research interest: brain-inspired computing.
  • Funding: Heterogeneous Fusion Brain-Inspired Computing Research Platform (2021ZD0200300)


Abstract:

To address the high GPU memory requirements of large-scale spiking neural network simulation, a dynamic loading simulation method for large-scale spiking neural networks is proposed. The method moves data at sub-network granularity and uses host memory as a larger memory pool, so that GPU memory no longer limits the simulation scale and large-scale spiking neural networks can be simulated on a computer with a single GPU. A pipeline acceleration technique is adopted to reduce the impact of data movement on simulation speed. Simulation at the scale of millions of neurons is achieved in a single-GPU experimental environment, solving the problem of insufficient GPU memory during spiking neural network simulation.
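The pipelined overlap of data movement and computation described in the abstract can be illustrated with a minimal double-buffering sketch. Everything below (the function names, the thread-based stand-in for an asynchronous host-to-device copy, the toy per-neuron update) is an assumption for illustration only, not the paper's implementation:

```python
# Hypothetical sketch of the "dynamic loading" pipeline: sub-network states
# live in a large host-memory pool, and the next sub-network is staged toward
# the device while the current one is simulated, hiding transfer latency.
import math
from concurrent.futures import ThreadPoolExecutor

def load_to_device(state):
    """Stand-in for an asynchronous host-to-GPU copy of one sub-network."""
    return list(state)  # copy into the (simulated) device buffer

def simulate_subnet(state):
    """Stand-in for one simulation step of a sub-network on the device."""
    return [v + math.tanh(v) for v in state]

def run_pipelined(host_pool):
    """Overlap loading sub-network i+1 with simulating sub-network i."""
    results = []
    with ThreadPoolExecutor(max_workers=1) as loader:
        pending = loader.submit(load_to_device, host_pool[0])  # prefetch first
        for i in range(len(host_pool)):
            device_buf = pending.result()           # wait for this transfer
            if i + 1 < len(host_pool):              # start the next transfer
                pending = loader.submit(load_to_device, host_pool[i + 1])
            results.append(simulate_subnet(device_buf))  # compute overlaps copy
    return results

# Host memory acts as the large pool holding every sub-network's state.
host_pool = [[0.1 * n * k for k in range(8)] for n in range(1, 5)]
out = run_pipelined(host_pool)
```

In a real GPU setting, the loader thread would correspond to a separate CUDA stream issuing asynchronous copies from pinned host memory, so that the copy of sub-network i+1 proceeds while kernels for sub-network i execute.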

Key words: brain-inspired computing, spiking neural network, neuron, synapse, simulation

CLC Number: