
Journal of Shandong University (Natural Science) ›› 2018, Vol. 53 ›› Issue (3): 63-70. doi: 10.6040/j.issn.1671-9352.2.2017.294


  • About the author: LIU Ming-ming (1992- ), male, master's degree candidate; research interests: information hiding and deep learning. E-mail: 1078491558@qq.com
  • Supported by the National Natural Science Foundation of China (61379152, 61403417)

Steganalysis method based on shallow convolution neural network

LIU Ming-ming, ZHANG Min-qing, LIU Jia, GAO Pei-xian   

  1. Key Laboratory of Network and Information Security of the Armed Police Force, Engineering University of the Armed Police Force, Xi'an 710086, Shaanxi, China
  • Received: 2017-08-28  Online: 2018-03-20  Published: 2018-03-13



Abstract: To improve the detection accuracy of steganalysis, an image steganalysis method based on a shallow convolutional neural network is proposed. Compared with deep convolutional neural networks, the shallow network speeds up convergence and reduces the loss of steganographic features by using fewer convolutional layers and disabling the pooling layers. At the same time, the generalization performance of the steganalysis network is improved by increasing the number of convolution kernels, applying batch normalization, and using a single fully connected layer. Experimental results on the S-UNIWARD steganographic algorithm show that the detection accuracy reaches 96% and 81.7% at embedding rates of 0.4 bpp and 0.1 bpp, respectively, and that the method still maintains good detection performance under cover-source and embedding-rate mismatch.

Key words: steganalysis, shallow convolution, neural network, deep learning
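The design choices described in the abstract (a fixed high-pass preprocessing filter, few convolutional layers with no pooling, batch normalization, and a single fully connected layer) can be sketched as a minimal forward pass. This is an illustrative sketch only: the layer sizes, the number of kernels, and the KV high-pass kernel (common in CNN steganalysis) are assumptions, not the paper's actual configuration.

```python
import numpy as np

# KV high-pass kernel often used to expose stego noise before the conv layers
# (an assumption here; the paper's exact preprocessing may differ).
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=np.float64) / 12.0

def conv2d_valid(img, kernel):
    """Naive single-channel 2-D 'valid' cross-correlation."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def batch_norm(x, eps=1e-5):
    """Normalize a feature map to zero mean / unit variance (no learned scale or shift)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def shallow_cnn_forward(image, n_kernels=8, rng=None):
    """One conv layer (no pooling) -> batch norm -> ReLU -> single FC layer -> softmax."""
    if rng is None:
        rng = np.random.default_rng(0)
    residual = conv2d_valid(image, KV)  # fixed high-pass preprocessing
    # More kernels instead of more layers; pooling is deliberately absent,
    # so no steganographic detail is averaged away.
    maps = [batch_norm(conv2d_valid(residual, rng.normal(0.0, 0.1, (3, 3))))
            for _ in range(n_kernels)]
    feats = np.maximum(np.stack(maps), 0.0).reshape(-1)  # ReLU, then flatten
    w = rng.normal(0.0, 0.01, (2, feats.size))           # single fully connected layer
    logits = w @ feats
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                               # P(cover), P(stego)

probs = shallow_cnn_forward(np.random.default_rng(1).random((32, 32)))
```

The weights here are random, so the output is an untrained two-class distribution; the point is only how the layers compose, not a working detector.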

CLC number: TP309