
基于可逆神经网络的黑盒GAN生成人脸反取证方法

A Black-box Anti-forensics Method of GAN-generated Faces Based on Invertible Neural Network

信息安全研究, 2025, Vol. 11, Issue (5): 394-401,8. DOI: 10.12379/j.issn.2096-1057.2025.05.01

陈北京 1, 冯逸凡 2, 李玉茹 2

Author Information

  • 1. Engineering Research Center of Digital Forensics, Ministry of Education (Nanjing University of Information Science and Technology), Nanjing 210044 || Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (Nanjing University of Information Science and Technology), Nanjing 210044
  • 2. Engineering Research Center of Digital Forensics, Ministry of Education (Nanjing University of Information Science and Technology), Nanjing 210044

Abstract

Forensics models for faces generated by generative adversarial networks (GANs) are used to distinguish real faces from GAN-generated faces. Because these forensics models are susceptible to adversarial attacks, anti-forensics techniques for GAN-generated faces have emerged. However, existing anti-forensics methods rely on white-box surrogate models and therefore transfer poorly. This paper proposes a black-box anti-forensics method for GAN-generated faces based on an invertible neural network (INN). The method embeds the features of real faces into GAN-generated faces through the INN, so that the resulting anti-forensics faces disturb forensics models. In addition, a feature loss is introduced during training to maximize the cosine similarity between the features of the anti-forensics faces and those of the real faces, further improving their attack performance. Experimental results demonstrate that, in scenarios where no white-box model is involved, the proposed method attacks eight GAN-generated face forensics models effectively, outperforms seven comparative methods, and generates high-quality anti-forensics faces.
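
The abstract names two technical ingredients: an invertible neural network that embeds real-face features into a GAN-generated face, and a feature loss that maximizes the cosine similarity between the features of the anti-forensics face and those of the real face. The sketch below illustrates both ideas in PyTorch under stated assumptions; the paper's code is not given here, so the coupling-block design, layer sizes, and all names (CouplingBlock, feature_loss) are illustrative, not the authors' implementation.

```python
# Minimal sketch (assumed PyTorch; not the authors' code): an affine coupling block
# as a generic building unit of an invertible neural network (INN), plus the
# cosine-similarity feature loss described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CouplingBlock(nn.Module):
    """One invertible affine coupling block over two branches:
    a GAN-generated-face branch and a real-face branch."""

    def __init__(self, channels):
        super().__init__()
        # Sub-networks predicting scale and shift; they need not be invertible themselves.
        self.scale = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.shift = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x_gan, x_real):
        # Embed information from the real-face branch into the GAN-face branch.
        y_gan = x_gan * torch.exp(torch.tanh(self.scale(x_real))) + self.shift(x_real)
        return y_gan, x_real

    def inverse(self, y_gan, x_real):
        # Closed-form inverse of forward(); this is what makes the block invertible.
        x_gan = (y_gan - self.shift(x_real)) * torch.exp(-torch.tanh(self.scale(x_real)))
        return x_gan, x_real


def feature_loss(feat_anti, feat_real):
    """Training loss that maximizes the cosine similarity between the features of
    the anti-forensics face and those of the real face (minimize 1 - cos)."""
    return 1.0 - F.cosine_similarity(feat_anti.flatten(1), feat_real.flatten(1), dim=1).mean()


# Toy usage: (B, C, H, W) feature maps stand in for the two branches. In the paper's
# setting the loss would be computed on features extracted from the anti-forensics
# face and the real face; here it is applied directly just to show the call.
block = CouplingBlock(channels=64)
x_gan, x_real = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
y_gan, _ = block(x_gan, x_real)
loss = feature_loss(y_gan, x_real)
```

The coupling structure is one common way to build such an invertible mapping, since its inverse is available in closed form without inverting the scale and shift sub-networks.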

Key words

adversarial attack / invertible neural network / GAN-generated faces / anti-forensics / black-box

Classification

Computer and Automation

Cite This Article

陈北京, 冯逸凡, 李玉茹. 基于可逆神经网络的黑盒GAN生成人脸反取证方法[J]. 信息安全研究, 2025, 11(5): 394-401,8.

Funding

National Natural Science Foundation of China (62072251)

信息安全研究 | Open Access | 北大核心 (PKU Core Journals) | ISSN 2096-1057
