Journal of System Simulation ›› 2026, Vol. 38 ›› Issue (1): 200-210. doi: 10.16182/j.issn1004731x.joss.25-0829


Research on Real-time Animatable Human Avatar Generation via 3D Gaussian Splatting

Zhong Yuyou, Shen Xukun, Hu Yong

  1. School of Computing, Beihang University, Beijing 100191, China
  • Received: 2025-09-01  Revised: 2025-10-20  Online: 2026-01-18  Published: 2026-01-28
  • Corresponding author: Hu Yong
  • First author: Zhong Yuyou (b. 2000), male, master's student; research interest: 3D human reconstruction.


Abstract:

Real-time animatable 3D human avatar generation holds significant application value in fields such as virtual reality and remote collaboration. To address the limitations of existing methods in detail modeling, real-time performance, and robustness under novel-pose driving, an efficient human avatar generation and driving method based on 3D Gaussian splatting (3DGS) is proposed. The method integrates optimized parametric human reconstruction, tri-plane feature encoding, and dynamic offset prediction to achieve efficient modeling from monocular video input. A skeleton-binding and visibility-analysis strategy is introduced, and a multi-scale regularization loss is designed to mitigate overfitting. Simulation results show that the proposed method achieves strong performance across all evaluation metrics, is notably more robust under novel-pose driving and occlusion, and thereby validates the method's effectiveness and superiority.
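For intuition, the tri-plane feature encoding mentioned above can be sketched as follows: a 3D query point is projected onto three axis-aligned feature planes (XY, XZ, YZ), each plane is sampled bilinearly, and the three features are combined. The plane resolution, channel count, and the choice of summing (rather than concatenating) the features are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def bilinear_sample(plane, uv):
    """Bilinearly sample an (R, R, C) feature plane at uv in [-1, 1]^2."""
    R = plane.shape[0]
    xy = (uv + 1.0) * 0.5 * (R - 1)          # map [-1, 1] to pixel coordinates
    x0, y0 = np.floor(xy).astype(int)
    x1, y1 = min(x0 + 1, R - 1), min(y0 + 1, R - 1)
    fx, fy = xy[0] - x0, xy[1] - y0
    top = (1 - fx) * plane[y0, x0] + fx * plane[y0, x1]
    bot = (1 - fx) * plane[y1, x0] + fx * plane[y1, x1]
    return (1 - fy) * top + fy * bot

def triplane_encode(p, planes):
    """Encode a 3D point p by projecting onto the XY, XZ, and YZ feature
    planes and summing the bilinearly sampled features (one common variant;
    concatenation is another)."""
    xy_plane, xz_plane, yz_plane = planes
    return (bilinear_sample(xy_plane, p[[0, 1]]) +
            bilinear_sample(xz_plane, p[[0, 2]]) +
            bilinear_sample(yz_plane, p[[1, 2]]))
```

In a full pipeline the sampled feature would typically be fed to a small MLP that predicts per-Gaussian attributes; here the sketch only shows the lookup itself.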

Key words: 3D Gaussian splatting (3DGS), animatable human avatars, monocular video, real-time rendering, parametric model
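The skeleton-binding idea underlying pose driving is commonly realized with linear blend skinning (LBS) of the Gaussian centers: each Gaussian is attached to the skeleton through per-joint weights, and its posed position is the weighted blend of the joint transforms. The minimal NumPy sketch below is an illustration under assumed shapes and names (the function name and the weight/transform layout are invented for the example), not the authors' implementation.

```python
import numpy as np

def lbs_transform_gaussians(centers, weights, joint_transforms):
    """Deform canonical Gaussian centers with linear blend skinning (LBS).

    centers:          (N, 3) canonical-space Gaussian centers
    weights:          (N, J) per-Gaussian skinning weights (rows sum to 1)
    joint_transforms: (J, 4, 4) rigid world transforms of the J joints
    returns:          (N, 3) posed-space centers
    """
    # Blend the per-joint transforms by the skinning weights: (N, 4, 4)
    blended = np.einsum("nj,jab->nab", weights, joint_transforms)
    # Apply each blended transform to its center in homogeneous coordinates
    homo = np.concatenate([centers, np.ones((len(centers), 1))], axis=1)
    posed = np.einsum("nab,nb->na", blended, homo)
    return posed[:, :3]
```

A Gaussian's rotation and covariance would be deformed by the same blended transform in practice; only the centers are shown to keep the sketch short.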

CLC number: