Survey on Image Fusion: Hand Designed to Deep Learning Algorithms

  • Heena Patel
  • Kishor Upla

Abstract

Image fusion is the process of combining or integrating multiple
images to generate a single image that contains all of the
meaningful information of the sources. The input (source) images
are captured by different sensing devices under different parameter
settings, and it is impossible to bring all information, or every
small object, into focus in a single image. Image fusion methods
therefore produce a composite image, known as the fused image, that
carries the complementary information of the sources; the fused
image should be suitable for both human and machine perception.
Accordingly, many methods have been developed to improve the
quality of fused images. Traditional methods fall into
spatial-domain and transform-domain approaches. Spatial-domain
methods fuse directly at the level of pixels, blocks, or segmented
regions, whereas transform-domain methods map the images into
another domain, apply a fusion rule to the transformed
coefficients, and invert the transform; both families are
illustrated in the sketch below. Spatial-domain methods tend to
introduce spatial and spectral distortion into the fused image,
while transform-domain methods often perform inadequately when the
images come from different sensor modalities. Recently, deep
learning has driven rapid progress on image processing and computer
vision problems such as segmentation, classification, and
super-resolution. Deep learning algorithms such as the
convolutional neural network (CNN), deep autoencoder (DAE), and
deep belief network (DBN) have been proposed for fusing different
categories of images, including multi-modal, multi-resolution,
multi-temporal, and multi-focus data; a minimal CNN-based sketch
appears after the keywords. Applications of image fusion include
disease analysis, disaster assessment, providing complete
information for diagnosis, and change detection.
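
To make the two classical families concrete, the following is a
minimal sketch, not any specific method from the literature. It
assumes two co-registered grayscale sources stored as float NumPy
arrays and uses the PyWavelets library for the transform-domain
branch; the function names are illustrative.

    import numpy as np
    import pywt  # PyWavelets

    def spatial_fusion_average(img_a, img_b):
        # Spatial-domain fusion: a simple per-pixel average of the sources.
        return (img_a + img_b) / 2.0

    def transform_fusion_dwt(img_a, img_b, wavelet="db2"):
        # Transform-domain fusion: take a 2-D DWT of each source, average
        # the approximation bands, keep the detail coefficient with the
        # larger absolute value (a common "choose-max" rule), then invert.
        ca_a, details_a = pywt.dwt2(img_a, wavelet)
        ca_b, details_b = pywt.dwt2(img_b, wavelet)
        ca_fused = (ca_a + ca_b) / 2.0
        details_fused = tuple(
            np.where(np.abs(da) >= np.abs(db), da, db)
            for da, db in zip(details_a, details_b)
        )
        return pywt.idwt2((ca_fused, details_fused), wavelet)

Plain averaging is simple but blurs complementary detail, which is
one source of the spatial and spectral distortion noted above; the
choose-max rule instead keeps the stronger edge or texture response
from either source at each coefficient.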

Keywords: CNN, DAE, DBN, deep learning, neural networks, spatial domain, transform domain
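
For the deep learning direction, the following is an illustrative
PyTorch sketch rather than the architecture of any surveyed paper:
a small CNN sees both sources stacked channel-wise and predicts a
per-pixel weight map that blends them, a common pattern in
CNN-based multi-focus fusion. The class and variable names are
hypothetical.

    import torch
    import torch.nn as nn

    class FusionWeightNet(nn.Module):
        # Predicts a per-pixel weight map w in [0, 1] from the stacked
        # sources; the fused image is w * A + (1 - w) * B.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=3, padding=1), nn.Sigmoid(),
            )

        def forward(self, img_a, img_b):
            w = self.net(torch.cat([img_a, img_b], dim=1))
            return w * img_a + (1.0 - w) * img_b

    # Shape check with random stand-ins for two grayscale sources.
    model = FusionWeightNet()
    a = torch.rand(1, 1, 128, 128)
    b = torch.rand(1, 1, 128, 128)
    fused = model(a, b)  # same spatial size as the inputs

In practice such a network would be trained, for example on
synthetically defocused image pairs, so that the weight map learns
to select the in-focus source at each location; the random tensors
here only demonstrate the input and output shapes.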
