Journal of System Simulation ›› 2022, Vol. 34 ›› Issue (6): 1267-1274. doi: 10.16182/j.issn1004731x.joss.20-1062

• Simulation Modeling Theory and Method •

An Unsupervised Deep Neural Network for Image Fusion

Peipei Zhou, Xinglin Hou

  1. School of Electrical and Information Engineering, Changzhou Institute of Technology, Changzhou 213032, Jiangsu, China
  • Received: 2020-12-31 Revised: 2021-04-16 Online: 2022-06-30 Published: 2022-06-16
  • Contact: Xinglin Hou E-mail: zhoupp@czu.cn; houxl@czu.cn
  • About the author: Peipei Zhou (1991-), female, Ph.D., lecturer; research interests: digital image processing and high-dynamic-range imaging. E-mail: zhoupp@czu.cn
  • Funding:
    National Defense Science and Technology Key Laboratory Fund (6142401200301); Natural Science General Project of Jiangsu Higher Education Institutions (20KJB520033); Changzhou Applied Basic Research Program (CJ20190052)



Abstract:

Because of a camera's limited dynamic range, a single-exposure image cannot capture all regions of a high-dynamic scene. An unsupervised deep neural network is constructed to fuse multi-exposure images into one high-dynamic image. Based on VGG-Net (visual geometry group network), encoding and decoding sub-networks are designed. Guided by the structural similarity between the images before and after fusion, a loss function tailored to image fusion is designed by introducing weight factors based on local image information, so that the fused image preserves the valid information of the different input images. Compared with other methods on benchmark datasets, the fused images achieve significant improvements in both subjective visual experience and objective quantitative indicators.
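The weighted, structural-similarity-guided loss described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses a global SSIM formula and takes each exposure's variance as a simple stand-in for the paper's local-information weight factors; the function names `ssim_global` and `fusion_loss` are hypothetical.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Global SSIM between two images with intensities scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def fusion_loss(fused, exposures, eps=1e-8):
    """No-reference fusion loss: each input exposure contributes
    1 - SSIM(fused, input), weighted by how much information it carries
    (here approximated by its variance, normalized to sum to 1)."""
    weights = np.array([img.var() for img in exposures]) + eps
    weights /= weights.sum()
    return sum(w * (1.0 - ssim_global(fused, img))
               for w, img in zip(weights, exposures))
```

Minimizing this loss pushes the fused image toward structural agreement with every input exposure, with better-exposed (higher-information) inputs pulling harder, which is the intuition behind the locally weighted design in the paper.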

Key words: pattern recognition, high dynamic scene, image fusion, unsupervised deep network, loss function
