
Reversed Maximization Learning: Dark Knowledge Elimination for Anti-distillation

张瑶 1, 李阳 1, 潘志松 1

计算机科学与探索, 2026, Vol. 20, Issue (1): 143-153 (11 pages). DOI: 10.3778/j.issn.1673-9418.2505049


Author Information

  • 1. College of Command and Control Engineering, Army Engineering University of PLA, Nanjing 210000, China


Abstract

Knowledge distillation is a technique that enhances the performance of a student model by transferring the representational capabilities learned by a teacher model. However, with the development of data-free knowledge distillation, this technology may also be exploited for unauthorized model replication or enhancement, posing risks of intellectual property infringement and challenges to model security. To address this issue, anti-distillation techniques have emerged, aiming to remove the dark knowledge from the teacher model, thereby preventing potential adversaries from extracting the feature representation capabilities of the teacher model through knowledge distillation. Current research mainly focuses on generating multi-peak outputs to confuse the student model during knowledge distillation, thereby mitigating infringement through knowledge distillation. However, this approach may introduce noise during the training of anti-distillation models, potentially degrading their performance. To tackle this problem, this paper proposes a novel method termed reversed maximization learning (RML), which aims to train an anti-distillation model that preserves the original model's performance while reducing the potential risks associated with knowledge distillation. The proposed method decouples the positive and negative classes in the output of the original model through a binary probability mechanism. Meanwhile, it employs a reverse-ranking module to learn the inverted negative-class outputs of the original model, thereby eliminating confidence differences among negative classes in the anti-distillation model to disrupt knowledge distillation, while preserving the confidence advantage of the positive class to maintain the original performance. Extensive experiments on datasets such as CIFAR-100 and ImageNet200, as well as across various model architectures, demonstrate that the proposed method achieves significant effectiveness in both data-driven and data-free knowledge distillation scenarios, outperforming comparison approaches.
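
To make the mechanism the abstract describes concrete, below is a minimal sketch, assuming PyTorch, of how inverting the ranking of the negative classes could look. All names here (reverse_rank_targets, rml_style_loss) are illustrative assumptions, not the paper's actual RML implementation; in particular, the KL-divergence objective is a stand-in for the paper's binary-probability decoupling.

```python
# A minimal, hypothetical sketch of the reverse-ranking idea (assumes PyTorch).
import torch
import torch.nn.functional as F

def reverse_rank_targets(logits: torch.Tensor) -> torch.Tensor:
    """Keep the positive (top-1) class probability, but invert the ranking of
    the negative classes so their relative confidences no longer expose the
    original model's dark knowledge to a distilling adversary."""
    probs = F.softmax(logits, dim=-1)
    targets = probs.clone()
    top1 = probs.argmax(dim=-1)
    for i in range(probs.size(0)):
        neg_mask = torch.arange(probs.size(1), device=probs.device) != top1[i]
        neg = probs[i, neg_mask]
        order = neg.argsort(descending=True)   # negative-class positions, high -> low
        inverted = torch.empty_like(neg)
        inverted[order] = neg.sort().values    # reassign values low -> high
        targets[i, neg_mask] = inverted
    return targets

def rml_style_loss(anti_logits: torch.Tensor, orig_logits: torch.Tensor) -> torch.Tensor:
    """Fit the anti-distillation model to the reversed targets. KL divergence
    is an assumption here; the paper instead decouples positive and negative
    classes through a binary probability mechanism."""
    targets = reverse_rank_targets(orig_logits).detach()
    log_q = F.log_softmax(anti_logits, dim=-1)
    return F.kl_div(log_q, targets, reduction="batchmean")
```

Note that the reversal only permutes the negative-class probabilities, so the output still sums to one and the top-1 prediction is unchanged. This is consistent with the abstract's claim that the positive class retains its confidence advantage, preserving the original model's accuracy while the negative-class ordering fed to a distilling student is corrupted.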


Key words

knowledge distillation / anti-knowledge distillation / reversed maximization learning / binary probability distribution / model protection

Classification

Information Technology and Security Science

Cite This Article

张瑶, 李阳, 潘志松. 反向最大化学习:反蒸馏场景下的暗知识清除 (Reversed Maximization Learning: Dark Knowledge Elimination for Anti-distillation) [J]. 计算机科学与探索, 2026, 20(1): 143-153.

Funding

This work was supported by the National Natural Science Foundation of China (62076251).

计算机科学与探索 (ISSN 1673-9418)
