Journal of System Simulation ›› 2023, Vol. 35 ›› Issue (9): 2064-2076. doi: 10.16182/j.issn1004731x.joss.22-1499

  • First author: Zhang Fengquan (1981-), male, professor, Ph.D.; research interests: virtual reality and artificial intelligence. E-mail: zhangfq@bupt.edu.cn
  • Funding:
    Humanities and Social Sciences Fund of the Ministry of Education (19YJC760150); National Natural Science Foundation of China (61402016); Hebei Province Youth Science and Technology Project (QN2021414); Hebei Province Education Reform Project (2021GJJG570); Xingtai University Key Project (XTXYZD202203)

Style Transfer Network for Generating Opera Makeup Details

Zhang Fengquan1, Cao Duo2, Ma Xiaohan2, Chen Baijun1, Zhang Jiangxiao3

  1. School of Digital Media and Design Arts, Beijing University of Posts and Telecommunications, Beijing 100876, China
    2. School of Information Science, North China University of Technology, Beijing 100144, China
    3. School of Mathematics and Information Technology, Xingtai University, Xingtai 054001, China
  • Received: 2022-12-14 Revised: 2023-03-08 Online: 2023-09-25 Published: 2023-09-19

Abstract:

To address the loss of local style details in cross-domain image simulation, a ChinOperaGAN network framework suited to opera facial makeup is designed from the perspective of protecting excellent traditional culture. To handle style translation between two image domains with internal differences, multiple overlapping local adversarial discriminators are introduced into the generative adversarial network. Since paired opera makeup data are difficult to obtain, a synthetic image is generated from the source-image makeup mapping to guide the effective transfer of local makeup details between images. In view of the strong, distinct colors characteristic of opera makeup, a loss function is introduced to constrain the generation of makeup images with high-frequency details. Experiments are carried out on open-source and self-built datasets; qualitative and quantitative comparisons show that the proposed method outperforms classical methods. The results show that the method transfers makeup through unsupervised adversarial learning, generates opera facial makeup style images with high-frequency details, and achieves image transfer with consistent image features and matched style. The approach can be applied to digital system simulation of intangible cultural heritage and supports the inheritance and development of traditional culture.
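The overlapping local discriminators mentioned in the abstract can be illustrated with a minimal sketch: the face image is cut into overlapping patches, and each patch is scored by its own local discriminator, so every pixel is judged by several discriminators at once. The patch sizes, stride, and trivial mean-based scorers below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def extract_overlapping_patches(image, patch_size, stride):
    """Split an H x W image into overlapping square patches.

    With stride < patch_size, neighbouring patches overlap, so each
    pixel falls inside the receptive field of several local
    discriminators rather than exactly one.
    """
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches

def aggregate_local_scores(patches, discriminators):
    """Average one real/fake score per (discriminator, patch) pair."""
    scores = [d(p) for d, p in zip(discriminators, patches)]
    return sum(scores) / len(scores)

# Toy example: a 64x64 "makeup" image, 32x32 patches, stride 16
# -> a 3x3 grid of 9 overlapping patches.
image = np.random.rand(64, 64)
patches = extract_overlapping_patches(image, patch_size=32, stride=16)
# Identical trivial scorers stand in for trained local discriminators.
discriminators = [lambda p: float(p.mean())] * len(patches)
print(len(patches))  # -> 9
score = aggregate_local_scores(patches, discriminators)
```

Averaging the per-patch scores is only one possible aggregation; a weighted sum emphasizing makeup-heavy regions (eyes, cheeks, mouth) would fit the same structure.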
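The abstract also states that a loss function constrains the generation of high-frequency details, without specifying its form. One common way to realize such a constraint, shown here purely as an illustrative assumption, is an L1 distance between the high-pass (Laplacian-filtered) components of the generated and reference images.

```python
import numpy as np

# 3x3 discrete Laplacian kernel: a simple high-pass filter that
# responds to edges and fine detail, not to smooth color regions.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def high_pass(img):
    """Valid 3x3 convolution of a 2-D image with the Laplacian kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * LAPLACIAN)
    return out

def high_frequency_loss(generated, target):
    """Mean absolute difference between the high-frequency components."""
    return float(np.abs(high_pass(generated) - high_pass(target)).mean())

rng = np.random.default_rng(0)
target = rng.random((32, 32))
generated = rng.random((32, 32))
print(high_frequency_loss(target, target))  # -> 0.0 (identical detail)
```

In training, a term like this would be added to the adversarial loss with a weighting coefficient, penalizing generators that reproduce color but blur the sharp line work of opera makeup.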

Key words: opera makeup transfer, generative adversarial networks, local feature extraction, detail generation, deep learning

CLC number: