Journal of System Simulation ›› 2023, Vol. 35 ›› Issue (2): 308-317. DOI: 10.16182/j.issn1004731x.joss.21-0986


Research on Image Super-resolution Reconstruction Based on Loss Extraction Feedback Attention Network

Hong Sun, Yuxiang Zhang, Yuelan Ling

  1. School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
  • Received: 2021-09-23 Revised: 2021-12-20 Online: 2023-02-28 Published: 2023-02-16
  • Contact: Yuxiang Zhang E-mail: sunhong@usst.edu.cn; 1553944402@qq.com

Abstract:

Since the first application of a convolutional neural network to super-resolution image reconstruction (the super-resolution convolutional neural network, SRCNN), a large number of studies have shown that deep learning can improve image reconstruction quality. To address the excessive number of parameters in image super-resolution networks and the insufficient use of image features, which leaves little high-frequency information available, a loss extraction feedback attention network (LEFAN) is proposed. The network reuses parameters in a recurrent (feedback) manner and increases the reuse of low-resolution image features to capture more high-frequency information, and the loss incurred during the reconstruction process is extracted and fused into the final super-resolution image. Experimental results show that, on the basis of the repeated use of low-resolution image features, extracting the potential loss and fusing it into the final super-resolution image yields a better image reconstruction effect.
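The sketch below illustrates the two ideas named in the abstract: a single attention block applied repeatedly in a feedback loop (so its parameters are reused across steps), and a branch that estimates the residual missed during reconstruction and fuses it into the final super-resolution output. It is a minimal PyTorch illustration; all module names, channel sizes, step counts, and the fusion rule are assumptions for exposition, not the authors' actual LEFAN implementation.

```python
# Minimal sketch of a feedback attention SR network with loss (residual)
# extraction and fusion. Illustrative only; not the paper's LEFAN code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class FeedbackAttentionSR(nn.Module):
    """One shared attention block is applied over several feedback steps
    (parameter reuse), and an estimated residual ("extracted loss") is
    fused into the final SR image."""
    def __init__(self, channels=64, steps=4, scale=2):
        super().__init__()
        self.steps = steps
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        # Single shared block: applied repeatedly, so parameters are reused.
        self.feedback_block = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            ChannelAttention(channels),
        )
        self.upsample = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
        # Branch that predicts the information missed by the main path.
        self.loss_branch = nn.Conv2d(channels, 3 * scale * scale, 3, padding=1)
        self.loss_up = nn.PixelShuffle(scale)

    def forward(self, lr):
        feat = self.head(lr)          # LR features, reused at every step
        state = torch.zeros_like(feat)
        for _ in range(self.steps):   # feedback loop with shared weights
            state = self.feedback_block(torch.cat([feat, state], dim=1))
        sr = self.upsample(state)
        residual = self.loss_up(self.loss_branch(feat - state))  # "extracted loss"
        return sr + residual          # fuse the residual into the final SR image


if __name__ == "__main__":
    model = FeedbackAttentionSR(scale=2)
    out = model(torch.randn(1, 3, 48, 48))
    print(out.shape)  # torch.Size([1, 3, 96, 96])
```

Because the same feedback block is run at every step, the parameter count stays that of a single block regardless of the number of feedback iterations, which reflects the abstract's goal of reusing parameters while repeatedly exploiting the low-resolution features.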

Key words: feedback mechanism, attention mechanism, loss extraction, super-resolution image reconstruction
