Journal of System Simulation, 2024, Vol. 36, Issue (6): 1433-1441. DOI: 10.16182/j.issn1004731x.joss.23-1263


Fusing Rotation Angle Coding in Spherical Space for Human Action Recognition

Su Benyue1,2, Zhu Bangguo1,2, Guo Mengjuan1,2, Sheng Min3

  1. School of Mathematics and Computer, Tongling University, Tongling 244061, China
  2. School of Computer and Information, Anqing Normal University, Anqing 246133, China
  3. School of Mathematics and Physics, Anqing Normal University, Anqing 246133, China

Received: 2023-10-18; Revised: 2024-01-04; Online: 2024-06-28; Published: 2024-06-19

Abstract:

Existing human action recognition methods focus mainly on translation information such as the coordinates and displacements of the skeleton, and pay less attention to the motion trend of the skeleton and the rotation information that represents the motion direction of joints and bones. A spatio-temporal convolutional neural network method that incorporates rotation angle encoding in spherical space is introduced. Scale-invariant angle information is obtained by mapping the human action into three-dimensional spherical space, and dynamic angular velocity information is extracted as the angle code to represent the rotation of joints and bones along the action trajectory. A spatio-temporal feature extraction and co-occurrence module (STCN) is constructed to better capture the spatio-temporal features of the data, and a suitable fusion strategy is used to fuse the translation and rotation features. The experimental results show that the rotation angle encoding improves the accuracy of motion representation and demonstrate the effectiveness of the spatio-temporal feature extraction and co-occurrence module.
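As a rough illustration of the angle encoding described in the abstract, and not the authors' implementation, the following Python sketch maps 3-D joint coordinates to spherical azimuth/elevation angles (which are invariant to uniform scaling of the skeleton) and takes frame-to-frame angle differences as a dynamic angular-velocity code; the (frames, joints, 3) input layout and the 25-joint NTU-style skeleton in the usage example are assumptions.

```python
import numpy as np

def to_spherical_angles(joints):
    """Map 3-D joint coordinates (T, J, 3) to azimuth/elevation angles.

    Angles are unchanged under uniform scaling of the skeleton, which is
    the scale-invariance property exploited by the angle encoding.
    """
    x, y, z = joints[..., 0], joints[..., 1], joints[..., 2]
    r = np.linalg.norm(joints, axis=-1) + 1e-8          # radial distance
    azimuth = np.arctan2(y, x)                          # rotation about the vertical axis
    elevation = np.arcsin(np.clip(z / r, -1.0, 1.0))    # angle above the horizontal plane
    return np.stack([azimuth, elevation], axis=-1)      # (T, J, 2)

def angular_velocity_encoding(joints):
    """Frame-to-frame angular differences used as the rotation angle code."""
    angles = to_spherical_angles(joints)
    omega = np.diff(angles, axis=0)                     # dynamic angular velocity (T-1, J, 2)
    # Wrap differences into (-pi, pi] so crossing the +/-pi boundary is not a jump.
    omega = (omega + np.pi) % (2 * np.pi) - np.pi
    return omega

# Usage example: a random 64-frame, 25-joint sequence (hypothetical data).
sequence = np.random.randn(64, 25, 3)
rotation_code = angular_velocity_encoding(sequence)
print(rotation_code.shape)  # (63, 25, 2)
```

In a full pipeline, such a rotation code would be fed to the spatio-temporal network alongside the translation features (coordinates and displacements) and combined through the fusion strategy described above.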

Key words: human action recognition, skeleton data, rotation angle encoding, 3D spherical space, spatial-temporal feature
