Journal of South China University of Technology (Natural Science Edition)

• Computer Science and Technology •

Large-Scale JPEG Image Steganalysis Based on DRN

TAN Shunquan1, LIU Guangqing1, ZENG Jishen2†, LI Bin2

  1. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China; 2. College of Information Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China
  • Received: 2017-10-12  Online: 2018-05-25  Published: 2018-04-03
  • Corresponding author: ZENG Jishen (b. 1993), male, Ph.D. candidate; research interests: image steganography and steganalysis, deep learning. E-mail: jishenzeng@foxmail.com
  • About the author: TAN Shunquan (b. 1980), male, Ph.D., associate professor; research interests: multimedia information security, deep learning and machine learning
  • Supported by the National Natural Science Foundation of China (61772349, 61572329)

Abstract: Traditional steganalysis applies Rich Model features with an ensemble classifier to achieve high detection performance, and deep learning frameworks have so far shown even stronger detection performance in steganalysis. It has been shown that a deep residual network behaves like an ensemble of relatively shallow networks. To confirm whether Xu's network, a steganalyzer based on the deep residual network, also has this property, and considering that Xu's network is not deep enough, we expanded it in two ways, with a bottleneck architecture and with replication of building blocks, obtaining four variants: a bottleneck network and 30-layer, 40-layer and 50-layer networks. Three experiments were conducted. In the first, Xu's network and its four variants were trained to obtain optimal models; the deeper networks turned out not to perform better than Xu's network. In the second, individual building blocks were removed, showing that the paths in the residual network do not depend on each other. In the third, some building blocks were re-ordered, indicating that the residual network can be re-configured to a certain extent. The results show that Xu's network likewise behaves like an ensemble of relatively shallow networks.
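The block-deletion and re-ordering experiments rest on the identity skip path of a residual block, y = x + F(x): because x is carried past every block unchanged, deleting or permuting individual blocks leaves valid paths through the network. A minimal NumPy sketch of this lesion/permutation idea on a toy stack of residual blocks (the block width, random weights and the `forward` helper are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, w):
    # y = x + F(x): the skip connection carries x past the block,
    # so removing the block still leaves an identity path
    return x + np.maximum(0.0, x @ w)  # ReLU residual branch

# a toy stack of 5 residual blocks acting on 8-dimensional features
weights = [rng.normal(scale=0.1, size=(8, 8)) for _ in range(5)]
x = rng.normal(size=(1, 8))

def forward(x, weights, drop=None, order=None):
    """Run the stack; optionally delete one block or permute block order."""
    idx = list(range(len(weights)))
    if drop is not None:
        idx.remove(drop)   # lesion experiment: delete a single block
    if order is not None:
        idx = order        # re-ordering experiment
    y = x
    for i in idx:
        y = residual_block(y, weights[i])
    return y

full = forward(x, weights)
lesioned = forward(x, weights, drop=2)
# the outputs remain comparable because every skip connection
# preserves an identity path around the deleted block
print(np.linalg.norm(full - lesioned))
```

In a plain feed-forward stack, deleting a layer would break the only path from input to output; here it merely removes one subset of the implicit ensemble of paths, which is the intuition behind the paper's second and third experiments.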

Key words: image steganography, residual network, ensemble classifier, deep learning
