Compressed video quality enhancement algorithm based on 3D convolutional spatio-temporal fusion network
Standard compression algorithms are typically used to compress video data for storage and network transmission, but the compressed video suffers from compression artifacts that degrade its quality. To address this problem, this paper proposes a deep-learning-based post-processing method to improve compressed video quality. First, a novel 3D convolutional spatio-temporal fusion network (3D-CSTF) is proposed, which exploits the filtering characteristics of 3D convolution to extract spatio-temporal information from consecutive video frames and uses the strong correlation between frames to enhance video quality. Within it, a quality enhancement network (Qe-Net) is designed for mapping and extracting video frame features. Second, seven consecutive video frames are fed into the network for end-to-end training, and the current frame is enhanced using information from the three preceding and three following frames. Finally, training and testing are carried out on the MFQEv2 dataset. Experimental results show that the method achieves good performance in terms of peak signal-to-noise ratio (PSNR), a standard video quality metric. When the quantization parameter (QP) equals 37, 32, 27, and 22, the PSNR increases by 0.82 dB, 0.83 dB, 0.79 dB, and 0.74 dB, respectively, compared with the compressed video.
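The abstract describes fusing a 7-frame temporal window with 3D convolution, where each output value combines a spatial patch across several neighbouring frames. The network's actual layer configuration is not given in the abstract, so the following is only a minimal NumPy sketch of the underlying operation: a naive 3D "valid" convolution applied to a stack of 7 frames with a hypothetical 3×3×3 kernel, illustrating how one kernel jointly filters the temporal and spatial dimensions.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 3D 'valid' convolution over a (T, H, W) frame stack.

    Each output voxel is the weighted sum of a (kt, kh, kw) neighbourhood,
    i.e. it fuses information across adjacent frames (temporal) and an
    adjacent pixel patch (spatial) in one operation.
    """
    t, h, w = volume.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(
                    volume[i:i + kt, j:j + kh, k:k + kw] * kernel
                )
    return out

# 7 consecutive frames (the paper's input window); tiny 8x8 frames
# for illustration only.
rng = np.random.default_rng(0)
frames = rng.standard_normal((7, 8, 8))

# One hypothetical 3x3x3 kernel: every output value mixes 3 neighbouring
# frames with a 3x3 spatial patch (spatio-temporal filtering).
kernel = rng.standard_normal((3, 3, 3)) * 0.1

features = conv3d_valid(frames, kernel)
print(features.shape)  # (5, 6, 6): temporal and spatial extents shrink by kernel_size - 1
```

In a real network this operation runs over many learned kernels (e.g. `torch.nn.Conv3d` in PyTorch), and the enhanced centre frame is typically produced by adding a predicted residual to the compressed input frame rather than by a single filter.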
HUANG Weiwei (黄威威); JIA Kebin (贾克斌)
Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing 100124, China; Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
Keywords: 3D convolution; video quality enhancement; multi-frame information; deep learning
《高技术通讯》 (High Technology Letters), 2024(7)
Pages 726-733 (8 pages)
Supported by the Beijing Natural Science Foundation (4212001).