
A Super Resolution Method for Spatiotemporal Feature Fusion Based on Deformable Attention

张墨华 张钰超 刘霁

软件导刊, 2024, Vol. 23, Issue 12: 234-240, 7. DOI: 10.11907/rjdk.241171


张墨华¹, 张钰超¹, 刘霁¹

Author Information

  • 1. College of Computer and Information Engineering, Henan University of Economics and Law, Zhengzhou 450000, Henan, China

Abstract

Video super-resolution technology aims to convert low-resolution videos into high-resolution videos. Existing feature-alignment methods based on deformable convolution are limited by the receptive field size and can only perform local offsets in the convolution space at specified spatial positions, so they perform poorly when there is large-scale motion between frames. Therefore, an alignment method based on deformable-attention spatial transformation is proposed to sample over the entire feature map. First, through learned offsets, the sampling points are focused on any position relevant to the current processing location. Second, the model uses a recurrent structure to propagate fused features globally and a Transformer to extract features and align frames locally. Third, the aligned features are fed into a spatiotemporal feature fusion module with channel attention to supplement reconstruction information. Finally, the output of the fusion module is propagated bidirectionally through a recurrent network to supplement the temporal features of adjacent frames, and the high-resolution video is obtained through sub-pixel convolution with 4x upsampling. Experiments show that, with BasicVSR as the baseline, the network improves PSNR by 0.69 dB and 0.43 dB on the REDS4 and Vid4 datasets, respectively.
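The abstract's final step, sub-pixel convolution with 4x upsampling, ends with a channel-to-space rearrangement (pixel shuffle): a feature map with C·r² channels is rearranged into a C-channel image that is r times larger in each spatial dimension. A minimal NumPy sketch of that rearrangement only (the convolution that produces the feature map is omitted; array shapes and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, r: int) -> np.ndarray:
    """Rearrange a (C*r*r, H, W) feature map into a (C, H*r, W*r) image,
    as done by the pixel-shuffle step of a sub-pixel convolution layer."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    # Split the channel axis into (C, r, r), then interleave the two
    # r-sized axes with the spatial axes so each group of r*r channels
    # fills one r x r block of the output.
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4x upsampling as described in the abstract: 16 channels of a 3x3
# feature map collapse into a single 12x12 plane.
lr_features = np.arange(16 * 3 * 3, dtype=np.float32).reshape(16, 3, 3)
hr = pixel_shuffle(lr_features, 4)
print(hr.shape)  # (1, 12, 12)
```

Output pixel (h·r+i, w·r+j) comes from channel i·r+j at spatial position (h, w), which is why the layer can learn all r² sub-pixel phases with ordinary convolutions at low resolution.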


Keywords

recurrent neural network/video super-resolution/transformer/attention mechanism/deep learning

Classification

Information Technology and Security Science

Citation

张墨华, 张钰超, 刘霁. 基于可变形注意力的时空特征融合超分辨率方法[J]. 软件导刊, 2024, 23(12): 234-240, 7.

Funding

Henan Province Science and Technology Research Project (222102210326)

软件导刊 (ISSN 1672-7800)
