Survey on Image Fusion: From Hand-Designed to Deep Learning Algorithms
Abstract—Image fusion is the process of combining multiple images into a single image that retains all the meaningful information. The input (source) images are captured by various sensing devices under different parameter settings, and it is impossible to capture all information, or keep all small objects in focus, in a single image. Image fusion methods therefore produce a composite image, known as the fused image, that carries complementary information from the sources and should be more suitable for both human and machine perception. Several methods have been developed to improve the quality of fused images. Traditional methods fall into spatial-domain and transform-domain image fusion: spatial-domain methods apply the fusion rule directly at the level of pixels, blocks, or segmented regions, whereas transform-domain methods map the input images into another domain and apply the fusion rule to the transformed coefficients. Spatial-domain methods tend to introduce spatial and spectral distortion in the fused image, while transform-domain methods often perform inadequately when the images are obtained from different sensor modalities. Recently, deep learning has driven progress in image processing and computer vision problems such as segmentation, classification, and super-resolution. Deep learning algorithms such as convolutional neural networks (CNNs), deep autoencoders (DAEs), and deep belief networks (DBNs) have been proposed for fusing different categories of images, including multi-modal, multi-resolution, multi-temporal, and multi-focus images. Applications of image fusion include disease analysis, disaster assessment, providing complete information for diagnosis, and change detection.
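
To make the two traditional paradigms described above concrete, the following sketch fuses two pre-registered grayscale images first with a pixel-level spatial-domain rule and then with a simple transform-domain rule. It is a minimal illustration under stated assumptions, not a method from any cited work: the function names, the single-level Haar decomposition, and the averaging/max-absolute fusion rules were chosen here for brevity.

    import numpy as np


    def average_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Spatial-domain rule: pixel-wise average of two registered images."""
        return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0


    def haar_decompose(img: np.ndarray):
        """Single-level 2-D Haar decomposition (image sides must be even)."""
        a = img.astype(np.float64)
        lo = (a[:, 0::2] + a[:, 1::2]) / 2.0      # row-wise low-pass
        hi = (a[:, 0::2] - a[:, 1::2]) / 2.0      # row-wise high-pass
        ll = (lo[0::2, :] + lo[1::2, :]) / 2.0    # approximation band
        lh = (lo[0::2, :] - lo[1::2, :]) / 2.0    # horizontal details
        hl = (hi[0::2, :] + hi[1::2, :]) / 2.0    # vertical details
        hh = (hi[0::2, :] - hi[1::2, :]) / 2.0    # diagonal details
        return ll, lh, hl, hh


    def haar_reconstruct(ll, lh, hl, hh) -> np.ndarray:
        """Inverse of haar_decompose."""
        lo = np.zeros((ll.shape[0] * 2, ll.shape[1]))
        hi = np.zeros_like(lo)
        lo[0::2, :], lo[1::2, :] = ll + lh, ll - lh
        hi[0::2, :], hi[1::2, :] = hl + hh, hl - hh
        out = np.zeros((lo.shape[0], lo.shape[1] * 2))
        out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
        return out


    def transform_domain_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
        """Transform-domain rule: average the approximation band and keep
        the larger-magnitude coefficient in each detail band."""
        ca, cb = haar_decompose(img_a), haar_decompose(img_b)
        ll = (ca[0] + cb[0]) / 2.0
        details = [np.where(np.abs(x) >= np.abs(y), x, y)
                   for x, y in zip(ca[1:], cb[1:])]
        return haar_reconstruct(ll, *details)


    if __name__ == "__main__":
        # Synthetic stand-ins for two registered source images.
        rng = np.random.default_rng(0)
        a, b = rng.random((64, 64)), rng.random((64, 64))
        print(average_fusion(a, b).shape)           # (64, 64)
        print(transform_domain_fusion(a, b).shape)  # (64, 64)

In practice the single Haar step would be replaced by a multi-level decomposition (e.g., a wavelet, curvelet, or pyramid transform) and the hand-designed rules by activity-measure-driven or learned (CNN/DAE/DBN-based) fusion rules, which is the progression this survey covers.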