Journal of Frontiers of Computer Science and Technology, 2024, Vol. 18, Issue (9): 2276-2292. DOI: 10.3778/j.issn.1673-9418.2306058
Critical Review of Multi-focus Image Fusion Based on Deep Learning Method
Abstract
Multi-focus image fusion is an effective image fusion technology that aims to combine source images taken from different focal planes of the same scene into a single well-fused result. The fused image is in focus across all focal planes and therefore contains richer scene information. The development of deep learning has driven great progress in image fusion, and the powerful feature extraction and reconstruction abilities of neural networks make the fusion results promising. In recent years, more and more deep-learning-based multi-focus image fusion methods have been proposed, built on architectures such as the convolutional neural network (CNN), the generative adversarial network (GAN), and the autoencoder. In order to provide an effective reference for relevant researchers and technicians, this paper first introduces the concept of multi-focus image fusion and some evaluation indicators. It then analyzes more than ten advanced deep-learning-based multi-focus image fusion methods from recent years, discusses the characteristics and innovations of each method, and summarizes their advantages and disadvantages. In addition, it reviews the applications of multi-focus image fusion technology in various scenarios, including photographic visualization, medical diagnosis, remote sensing detection, and other fields. Finally, it points out some challenges currently faced by the multi-focus image fusion field and looks ahead to possible future research trends.
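To make the task concrete, the following is a minimal, illustrative sketch of multi-focus fusion using a classical Laplacian focus measure and a decision map. It is not one of the deep-learning methods surveyed in the paper; the function name, window size, and smoothing choices are assumptions made purely for illustration.

```python
# Illustrative sketch only: classical decision-map fusion of two grayscale
# source images focused on different planes (not a method from the survey).
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_multifocus(img_a: np.ndarray, img_b: np.ndarray, win: int = 9) -> np.ndarray:
    """Fuse two grayscale images by keeping the locally sharper source pixel."""
    a = img_a.astype(np.float64)
    b = img_b.astype(np.float64)

    # Focus measure: local average of the squared Laplacian response
    # (higher values indicate sharper, in-focus regions).
    focus_a = uniform_filter(laplace(a) ** 2, size=win)
    focus_b = uniform_filter(laplace(b) ** 2, size=win)

    # Binary decision map selects the sharper source at every pixel;
    # smoothing it softens blocky seams near focus boundaries.
    decision = (focus_a >= focus_b).astype(np.float64)
    decision = uniform_filter(decision, size=win)

    fused = decision * a + (1.0 - decision) * b
    return np.clip(fused, 0, 255).astype(img_a.dtype)
```

The deep-learning approaches reviewed in the paper replace this hand-crafted focus measure and decision rule with learned feature extraction and reconstruction, but the overall goal of selecting or synthesizing the sharpest content from each source image is the same.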
Keywords
deep learning / image fusion / multi-focus
Classification
Information Technology and Security Science
Cite This Article
李子奇, 苏宇轩, 孙俊, 张永宏, 夏庆锋, 尹贺峰. Critical Review of Multi-focus Image Fusion Based on Deep Learning Method[J]. Journal of Frontiers of Computer Science and Technology, 2024, 18(9): 2276-2292.
Funding
This work was supported by the National Key Research and Development Program of China (2021YFE0116900), the National Natural Science Foundation of China (42175157), the General Project of Natural Science Research of Jiangsu Higher Education Institutions (22KJB520037, 23KJB520036), the "Taihu Light" Science and Technology Project of Wuxi (K20231003, K20231010), and the Research Start-up Fund for Introduced Talents of Wuxi University (2021r032).