Journal of System Simulation ›› 2025, Vol. 37 ›› Issue (9): 2409-2419. DOI: 10.16182/j.issn1004731x.joss.24-0362


A Model Combining Self-attention and Weight Sharing for Human Activity Recognition

Ma Lun1, Yang Yue1, Wang Daihe1, Liao Guisheng2, Li Xing1   

  1. School of Information Engineering, Chang'an University, Xi'an 710064, China
  2. National Lab of Radar Signal Processing, Xidian University, Xi'an 710071, China
  • Received: 2024-04-08  Revised: 2024-08-04  Online: 2025-09-18  Published: 2025-09-22
  • Contact: Yang Yue

Abstract:

With the prevalence of wearable devices, human activity recognition based on wearable sensor data has garnered significant attention. The central issue in this field is how to extract effective behavioral information from raw sensor data and form corresponding feature vectors. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been widely used for feature extraction from multi-sensor data, but they struggle to globally capture the crucial temporal features inherent in human activity over time. To address this, a multi-CNN-BiLSTM-self-attention (Multi-CBSA) model based on self-attention and weight sharing is proposed, taking into account the logical correlations among sensors placed on different parts of the body. The model employs uniformly structured, weight-shared sub-networks to extract features from the activity data captured at each body part, which simplifies the model architecture and reduces the number of training parameters. In each sub-network, a one-dimensional CNN first converts the raw behavioral data into a short sequence of high-level features; a bidirectional long short-term memory (BiLSTM) network then extracts the forward and backward temporal features of this sequence; finally, self-attention assigns dynamic weights to these features to obtain representative key features. The outputs of all sub-networks are combined in a fusion layer. Ablation experiments demonstrate that introducing self-attention significantly improves the convergence speed, validation-set loss, and per-class recognition accuracy of Multi-CBSA. Comparative experiments show that Multi-CBSA achieves recognition accuracies of 99.3% and 96.4% on the MHEALTH and PAMAP2 datasets, respectively, with fewer training parameters, improving on recent state-of-the-art models by up to 4.2% and 4.4%.

Key words: human activity recognition, wearable sensor, feature extraction, self-attention, weight sharing
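
The pipeline described in the abstract (per-body-part sub-networks of 1-D CNN, BiLSTM, and self-attention, with weight sharing across parts and a fusion layer) can be summarized in a minimal PyTorch sketch. The paper does not publish its implementation, so every layer size, head count, and module name below is an illustrative assumption, not the authors' code.

import torch
import torch.nn as nn


class SubNetwork(nn.Module):
    # One weight-shared sub-network: 1-D CNN -> BiLSTM -> self-attention.
    # All hyperparameters (channel counts, hidden size, heads) are
    # illustrative assumptions, not values from the paper.
    def __init__(self, in_channels=3, conv_channels=64,
                 lstm_hidden=64, num_heads=4):
        super().__init__()
        # 1-D convolution turns the raw sensor stream into a shorter
        # sequence of higher-level features.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # BiLSTM extracts forward and backward temporal dependencies.
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Self-attention assigns dynamic weights across time steps.
        self.attn = nn.MultiheadAttention(embed_dim=2 * lstm_hidden,
                                          num_heads=num_heads,
                                          batch_first=True)

    def forward(self, x):               # x: (batch, channels, time)
        h = self.conv(x)                # (batch, conv_channels, time // 2)
        h = h.transpose(1, 2)           # (batch, time // 2, conv_channels)
        h, _ = self.bilstm(h)           # (batch, time // 2, 2 * lstm_hidden)
        h, _ = self.attn(h, h, h)       # dynamic weighting over time steps
        return h.mean(dim=1)            # one feature vector per segment


class MultiCBSA(nn.Module):
    # Applies the *same* SubNetwork instance to every body-part stream,
    # so the sub-network parameters are shared and trained once, then
    # fuses the per-part features in a fusion layer.
    def __init__(self, num_parts=3, num_classes=12, lstm_hidden=64):
        super().__init__()
        self.subnet = SubNetwork(lstm_hidden=lstm_hidden)  # shared weights
        self.fusion = nn.Linear(num_parts * 2 * lstm_hidden, num_classes)

    def forward(self, streams):         # streams: list of (batch, channels, time)
        feats = [self.subnet(s) for s in streams]
        return self.fusion(torch.cat(feats, dim=1))


# Smoke test: three 3-axis sensor streams of 128 samples each.
model = MultiCBSA(num_parts=3, num_classes=12)
parts = [torch.randn(8, 3, 128) for _ in range(3)]
print(model(parts).shape)               # torch.Size([8, 12])

Weight sharing is realized here by reusing one SubNetwork object for all body-part streams; an unshared variant would instantiate one sub-network per part, at the cost of proportionally more training parameters.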
