Modern Information Technology (现代信息科技), 2024, Vol. 8, Issue 24: 163-170, 8. DOI: 10.19850/j.cnki.2096-4706.2024.24.032
Adaptive Fusion Model for Near-infrared and Visible Light Images Based on Multimodal Sensors
Li Zhenwei (李振伟) 1, Shi Wenzao (施文灶) 1, Fu Qiang (付强) 2, Yuan Junru (苑俊茹) 1
Author Information
- 1. College of Photonics and Electronic Information Engineering, Fujian Normal University, Fuzhou 350117, Fujian, China; Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350117, Fujian, China; Key Laboratory of Optoelectronic Science and Technology for Medicine of the Ministry of Education, Fujian Normal University, Fuzhou 350117, Fujian, China; Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350117, Fujian, China
- 2. Fujian Xintu Optoelectronics Co., Ltd., Fuzhou 350003, Fujian, China
Abstract
To address the shortcomings of the feature extraction and fusion strategies in existing image fusion methods, this paper proposes STAFuse, an adaptive fusion model for near-infrared and visible light images based on frequency domain decomposition. By introducing Transformer and CNN feature extraction modules together with adaptive fusion modules, it achieves effective fusion of features from different image modalities. To overcome the large size and complex calibration of traditional multi-sensor systems for multimodal image acquisition, a novel multimodal sensor is designed that simultaneously captures high-resolution visible light images and low-resolution near-infrared images. Experimental results demonstrate that STAFuse outperforms existing models on multiple metrics: it improves Structural Similarity (SSIM) by 102.7% over the DenseFuse model and Visual Information Fidelity (VIF) by 25% over the DIDFuse model, and it excels at preserving visual quality and image detail.
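The frequency domain decomposition mentioned in the abstract typically splits each input image into a low-frequency base component (overall brightness and structure) and a high-frequency detail component (edges and texture) before the two modalities are fused. The paper's exact decomposition is not given here; the following is a minimal illustrative sketch using an FFT low-pass mask, where the `cutoff` value is an arbitrary choice for demonstration, not a parameter from the paper.

```python
import numpy as np

def frequency_decompose(img: np.ndarray, cutoff: float = 0.1):
    """Split an image into a low-frequency base and a high-frequency
    detail component via a circular low-pass mask in the FFT domain.
    `cutoff` is the fraction of the normalized spectrum radius kept
    in the base component (illustrative value only)."""
    img = img.astype(np.float64)
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # normalized radial distance from the center of the spectrum
    r = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    mask = (r <= cutoff).astype(np.float64)
    base = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    detail = img - base  # residual high-frequency content
    return base, detail
```

In a fusion pipeline of this kind, the base components of the near-infrared and visible images would be merged with one strategy (e.g. weighted averaging) and the detail components with another (e.g. maximum selection or a learned module), then recombined by addition.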
Keywords: near-infrared and visible light fusion; adaptive fusion; Transformer; CNN; multimodal sensor; frequency domain decomposition

Category: Information Technology and Security Science
Citation: Li Zhenwei, Shi Wenzao, Fu Qiang, Yuan Junru. Adaptive Fusion Model for Near-infrared and Visible Light Images Based on Multimodal Sensors [J]. Modern Information Technology, 2024, 8(24): 163-170, 8.
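The SSIM metric cited in the abstract compares the luminance, contrast, and structure of a fused image against a reference. A minimal single-window sketch is shown below; the standard metric (Wang et al.) averages this quantity over local sliding windows, which is omitted here for brevity. The constants follow the conventional SSIM definition (K1 = 0.01, K2 = 0.03).

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window (global) SSIM between two images of equal shape.
    The full metric averages this over local windows; this sketch
    treats the whole image as one window."""
    c1 = (0.01 * data_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizes the contrast term
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

Identical images score exactly 1.0; structurally dissimilar images score lower, which is why SSIM is a common fidelity measure for fusion results.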