Journal of System Simulation, 2026, Vol. 38, Issue (1): 200-210. DOI: 10.16182/j.issn1004731x.joss.25-0829


Research on Real-time Animatable Human Avatar Generation via 3D Gaussian Splatting

Zhong Yuyou, Shen Xukun, Hu Yong   

  1. School of Computing, Beihang University, Beijing 100191, China
  • Received: 2025-09-01; Revised: 2025-10-20; Online: 2026-01-18; Published: 2026-01-28
  • Contact: Hu Yong

Abstract:

Real-time animatable 3D human avatar generation holds significant application value in fields such as virtual reality and remote collaboration. To address the limitations of existing methods in detail modeling, real-time performance, and robustness under novel pose driving, an efficient human avatar generation and driving method based on 3D Gaussian splatting (3DGS) is proposed. The method integrates optimized parametric human reconstruction, tri-plane feature encoding, and dynamic offset prediction to achieve efficient modeling from monocular video input. A skeleton binding and visibility analysis strategy is introduced, and a multi-scale regularization loss is designed to mitigate overfitting under novel poses. Simulation experiments demonstrate that the proposed method achieves strong performance across all evaluation metrics, particularly in novel pose driving and occluded scenarios, validating its effectiveness and superiority.
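As a rough illustration of the driving step summarized above, the sketch below shows how skeleton-bound Gaussian centers could be posed with linear blend skinning plus a learned dynamic offset. This is a minimal sketch assuming PyTorch; the function, tensor names, and shapes are hypothetical illustrations and are not taken from the paper.

import torch

def deform_gaussians(means, skin_weights, bone_transforms, dyn_offset=None):
    """Warp canonical Gaussian centers into a posed space (illustrative only).

    means:           (N, 3)    canonical Gaussian centers
    skin_weights:    (N, B)    per-Gaussian skinning weights (rows sum to 1)
    bone_transforms: (B, 4, 4) rigid transform of each bone for the target pose
    dyn_offset:      (N, 3)    optional pose-dependent offsets from a predictor
    """
    # Blend per-bone transforms with the skinning weights (standard LBS).
    blended = torch.einsum('nb,bij->nij', skin_weights, bone_transforms)  # (N, 4, 4)

    # Homogeneous coordinates for the canonical centers.
    ones = torch.ones(means.shape[0], 1, device=means.device)
    means_h = torch.cat([means, ones], dim=-1)                            # (N, 4)

    # Apply each Gaussian's blended transform to its center.
    posed = torch.einsum('nij,nj->ni', blended, means_h)[:, :3]           # (N, 3)

    # Add the learned dynamic offset, if a predictor supplies one.
    if dyn_offset is not None:
        posed = posed + dyn_offset
    return posed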

Key words: 3D Gaussian splatting (3DGS), animatable human avatars, monocular video, real-time rendering, parametric model
