A Self-supervised Residual Feature Learning Model for Multi-focus Image Fusion

Abstract

Multi-focus image fusion (MFIF) aims to produce an “all-in-focus” image from multiple source images of the same scene with different focused objects. Given the lack of multi-focus image sets for network training, we propose a self-supervised residual feature learning model in this paper. The model consists of a feature extraction network and a fusion module. We select image super-resolution as a pretext task for MFIF, supported by a new residual gradient prior that our theoretical study establishes for low- and high-resolution (LR-HR) image pairs as well as for multi-focus images. In the pretext task, the network is trained on LR-HR image pairs generated from natural images, with the HR images serving as pseudo-labels for the LR images. In the fusion task, the trained network first extracts residual features from the multi-focus images; the fusion module, consisting of an activity level measurement and a new boundary refinement method, then converts these features into decision maps. Experimental results, in both subjective and objective evaluations, demonstrate that our approach outperforms other state-of-the-art fusion algorithms.
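The decision-map idea underlying the fusion module can be illustrated with a minimal sketch. Here a simple windowed gradient-energy measure stands in for the paper's learned residual features and activity level measurement (the function names and window size are illustrative, not from the paper), and each output pixel is taken from whichever source image is more "in focus" at that location:

```python
import numpy as np

def activity_level(img, win=3):
    """Local focus activity: windowed sum of squared gradients.
    (A hand-crafted stand-in for the paper's learned residual features.)"""
    gy, gx = np.gradient(img.astype(np.float64))
    energy = gx**2 + gy**2
    # Box-filter the energy over a (2*win+1) x (2*win+1) window
    # using a summed-area table built from cumulative sums.
    pad = np.pad(energy, win, mode="reflect")
    k = 2 * win + 1
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/column
    return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

def fuse(img_a, img_b):
    """Per pixel, keep the source with the higher focus activity."""
    decision = activity_level(img_a) >= activity_level(img_b)
    return np.where(decision, img_a, img_b), decision
```

The binary `decision` array plays the role of the decision map; the paper additionally refines its boundaries between focused and defocused regions, which this sketch omits.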

Publication
IEEE Transactions on Image Processing

Cite

@ARTICLE{9805468,
  author={Wang, Zeyu and Li, Xiongfei and Duan, Haoran and Zhang, Xiaoli},
  journal={IEEE Transactions on Image Processing}, 
  title={A Self-Supervised Residual Feature Learning Model for Multifocus Image Fusion}, 
  year={2022},
  volume={31},
  number={},
  pages={4527-4542},
  keywords={Feature extraction;Task analysis;Superresolution;Image fusion;Training;Image edge detection;Representation learning;Multifocus image fusion;self-supervised learning;image super-resolution;deep learning},
  doi={10.1109/TIP.2022.3184250}}