

Multi-resolution Feature Collaboration Image Inpainting Network

Abstract

Deep generative methods have recently made considerable progress in image inpainting by adopting a coarse-to-fine strategy. However, multi-stage inpainting methods that connect sub-networks in series suffer from inaccurate structural localization and weak feature expressiveness in the bottleneck layer, which leads to discontinuous image structures and blurred details. To address these problems, a multi-resolution feature collaboration image inpainting network is proposed, which restores damaged images with a parallel multi-resolution network structure. The damaged image is encoded in parallel at multiple resolutions to learn structural location features at different scales, and an iterative fusion module dynamically fuses the multi-scale information, providing more accurate localization for recovering the damaged structure and thus generating structurally coherent images. In the bottleneck layer, a gated multi-feature extraction module combines the advantages of the attention mechanism and the convolution operation to capture long-range dependencies along different dimensions and to extract features under different receptive fields; gated residual fusion then adjusts the weights of these features, strengthening the feature expressiveness of the bottleneck layer so that the details of missing regions are recovered more faithfully. Extensive experiments on the CelebA-HQ, FFHQ and Paris StreetView datasets show that, compared with other image inpainting methods, the proposed method achieves considerable improvements in Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Frechet Inception Distance (FID), as well as in visual quality.
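
The record does not include code, so the following is only a minimal PyTorch sketch of the gated multi-feature extraction idea described in the abstract: an attention branch and a dilated-convolution branch are blended by a gated residual fusion. All names and hyperparameters (GatedMultiFeatureBlock, heads, the dilation rate, and so on) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GatedMultiFeatureBlock(nn.Module):
    """Bottleneck block that blends an attention branch (long-range
    dependencies) with a convolution branch (features under a larger
    receptive field) via a gated residual fusion."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        # Attention branch: self-attention over spatial positions.
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # Convolution branch: dilated conv enlarges the receptive field.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
        )
        # Gate predicts per-pixel blending weights for the two branches.
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Flatten spatial dimensions so attention can relate distant pixels.
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_feat = attn_out.transpose(1, 2).reshape(b, c, h, w)
        conv_feat = self.conv(x)
        # Gated residual fusion: weight the two branches per pixel,
        # then keep the input through a residual connection.
        g = self.gate(torch.cat([attn_feat, conv_feat], dim=1))
        return x + g * attn_feat + (1.0 - g) * conv_feat


if __name__ == "__main__":
    block = GatedMultiFeatureBlock(channels=64)
    features = torch.randn(1, 64, 32, 32)      # hypothetical bottleneck feature map
    print(block(features).shape)               # torch.Size([1, 64, 32, 32])
```

The sigmoid gate plays the role described in the abstract of adjusting the relative weights of the attention and convolution features, while the residual term preserves the original bottleneck features.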

Yan Yihan; Wu Hao; Yuan Guowu (晏乙涵; 吴昊; 袁国武)

School of Information, Yunnan University, Kunming, Yunnan 650504, China

Computer and Automation

image inpainting; parallel multi-resolution network; fusion mechanism; attention mechanism; convolutional operation

Computer Technology and Development (《计算机技术与发展》), 2024(007)

pp. 9-16 (8 pages)

National Natural Science Foundation of China (62061049, 11663007); Joint Special Project of the Yunnan Provincial Department of Science and Technology and Yunnan University for "Double First-Class" Construction (202201BF070001-005)

10.20165/j.cnki.ISSN1673-629X.2024.0098
