融合视觉显著性的红外航拍行人检测
Aerial Infrared Pedestrian Detection with Saliency Map Fusion
Object detection is a fundamental task in computer vision, and drones equipped with infrared cameras facilitate nighttime reconnaissance and surveillance. Infrared aerial imagery, however, features small targets, sparse texture, and weak contrast, while traditional detection algorithms offer limited accuracy and deep-learning methods are demanding in computing power and energy consumption. To address these problems, a pedestrian detection method for infrared aerial scenes that fuses saliency maps is proposed. First, U2-Net is used to extract a saliency map from the original thermal infrared image, which is then used to enhance that image. Second, two fusion schemes, pixel-level weighted fusion and image-channel replacement fusion, are compared as enhancement strategies. Finally, the prior (anchor) boxes are re-clustered to improve the algorithm's adaptability to aerial target scenes. Experimental results show that pixel-level weighted saliency fusion performs better, raising the average precision of the representative YOLOv3, YOLOv3-tiny, and YOLOv4-tiny detectors by 6.5%, 7.6%, and 6.2%, respectively, which demonstrates the effectiveness of the proposed visual-saliency fusion method.
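The three steps the abstract describes, pixel-level weighted fusion, channel-replacement fusion, and anchor re-clustering, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function names, the blend weight `alpha`, the choice of which channel is replaced, and the use of the standard 1 − IoU k-means recipe for anchor clustering are all assumptions not specified in the abstract.

```python
import numpy as np

def pixel_weighted_fusion(ir_gray, saliency, alpha=0.5):
    # Pixel-level weighted fusion: linear per-pixel blend of the IR image
    # with its saliency map. alpha=0.5 is an illustrative weight, not a
    # value from the paper.
    ir = ir_gray.astype(np.float32)
    sal = saliency.astype(np.float32)
    fused = (1.0 - alpha) * ir + alpha * sal
    return np.clip(fused, 0, 255).astype(np.uint8)

def channel_replacement_fusion(ir_gray, saliency):
    # Channel-replacement fusion: replicate the single-channel IR image to
    # three channels (as a YOLO detector expects 3-channel input) and swap
    # one channel for the saliency map. Replacing the last channel is an
    # assumption here.
    rgb = np.stack([ir_gray, ir_gray, ir_gray], axis=-1)
    rgb[..., 2] = saliency
    return rgb

def kmeans_anchors(wh, k=6, iters=100, seed=0):
    # Re-cluster prior boxes on the dataset's (width, height) pairs using
    # k-means with 1 - IoU as the distance, the standard YOLO recipe;
    # the paper's exact clustering settings are not given in the abstract.
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        # IoU between every box and every cluster center (boxes compared
        # at a shared corner, so only widths/heights matter).
        inter = (np.minimum(wh[:, None, 0], centers[None, :, 0]) *
                 np.minimum(wh[:, None, 1], centers[None, :, 1]))
        union = wh[:, None].prod(-1) + centers[None].prod(-1) - inter
        assign = (inter / union).argmax(axis=1)
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i)
                        else centers[i] for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]  # sort by box area
```

For a small aerial-pedestrian dataset one would feed `kmeans_anchors` the ground-truth box sizes at network input resolution and paste the resulting anchors into the detector's configuration.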
张兴平;邵延华;梅艳莹;张晓强;楚红雨
School of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China || Tianfu New Area Institute of Innovation, Southwest University of Science and Technology, Chengdu, Sichuan 610299, China
Computer Science and Automation
infrared pedestrian detection; image enhancement; saliency map; YOLOv4
Infrared Technology (《红外技术》), 2024, (9)
pp. 1043-1050 (8 pages)
Supported by the National Natural Science Foundation of China (61601382) and the Natural Science Foundation of Sichuan Province (2023NSFSC1388).