红外技术 (Infrared Technology), 2026, Vol. 48, Issue 3: 280-289, 10.
Adaptive Fusion Method for Infrared and Visible Light Images
Abstract
Multi-modal image fusion techniques aim to preserve the advantages of different modal images, such as texture details and prominent target subjects, in the fused image. This study addresses three problems in multi-modal image fusion: insufficient extraction of cross-modal feature information, the complexity of cross-modal feature modeling, and the difficulty of handling information shared between modalities. Combining the complementary properties of the Transformer and convolutional neural networks in two parallel branch networks, we propose a correlation-driven multi-modal feature-decomposition fusion network trained with a two-stage strategy. The first stage extracts cross-modal and shared features from infrared and visible images by modeling inter-modal correlation and making full use of their information characteristics. In the second stage, a selective kernel network module adaptively adjusts the weight assignment of the different modal image features. Experimental results on three publicly available datasets show that the proposed method outperforms other typical methods in both quantitative and qualitative evaluations.
Key words
image fusion / cross-modal / CNN / Transformer / adaptive feature fusion
Classification
Information Technology and Security Science
Citation
李祯, 郭佑民, 王建鑫, 李紫玄. 红外与可见光图像特征自适应融合方法 (Adaptive Fusion Method for Infrared and Visible Light Images) [J]. 红外技术 (Infrared Technology), 2026, 48(3): 280-289, 10.
Funding
Supported by the National Natural Science Foundation of China (72061021).
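The adaptive weight assignment mentioned in the abstract can be illustrated with a minimal NumPy sketch of a selective-kernel-style fusion step: the two modal feature maps are summed, pooled into a channel descriptor, and per-branch attention weights are obtained with a softmax across branches. All names, shapes, and the projection matrices here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_fusion(f_ir, f_vis, w_ir, w_vis):
    """Selective-kernel-style adaptive fusion of two modal feature maps.

    f_ir, f_vis: (C, H, W) infrared / visible feature maps.
    w_ir, w_vis: (C, C) matrices standing in for the learned
    per-branch attention projections (hypothetical parameters).
    """
    # 1. Fuse the two branches by element-wise summation.
    u = f_ir + f_vis                           # (C, H, W)
    # 2. Global average pooling -> shared channel descriptor.
    s = u.mean(axis=(1, 2))                    # (C,)
    # 3. Per-branch attention logits from the shared descriptor.
    logits = np.stack([w_ir @ s, w_vis @ s])   # (2, C)
    # 4. Softmax across the two branches -> per-channel weights
    #    that sum to 1 for every channel.
    a = softmax(logits, axis=0)                # (2, C)
    # 5. Weighted recombination of the modal features.
    return a[0][:, None, None] * f_ir + a[1][:, None, None] * f_vis

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
f_ir = rng.normal(size=(C, H, W))
f_vis = rng.normal(size=(C, H, W))
w_ir = rng.normal(size=(C, C))
w_vis = rng.normal(size=(C, C))
fused = selective_fusion(f_ir, f_vis, w_ir, w_vis)
print(fused.shape)  # (4, 8, 8)
```

Because the branch weights are computed from the pooled joint descriptor, the weighting adapts per channel to the content of both inputs rather than using a fixed blend ratio.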