Printer Identification Methods Using Global and Local Feature-Based Deep Learning

  • Su-Hyeon Lee (Department of Software Engineering, Kumoh National Institute of Technology);
  • Hae-Yeoun Lee (Department of Computer Software Engineering, Kumoh National Institute of Technology)
  • Received : 2018.10.04
  • Accepted : 2018.12.01
  • Published : 2019.01.31

Abstract

With the advance of digital IT technology, the performance of printing and scanning devices has improved while their prices have fallen, so the public can easily access these devices, which can in turn be used for crimes such as forgery of official and private documents. Therefore, if we can identify which printing device was used to print a document, it helps to narrow the investigation and identify suspects. In this paper, we propose deep learning models for printer identification. First, a convolutional neural network model based on local features, an approach widely used for recognition tasks in recent years, is presented. Then, another model is presented that adds a step to calculate global features, thereby improving convergence speed and accuracy. Using 8 printer models, the performance of the presented models was compared with previous feature-based identification methods. Experimental results show that the presented local feature and global feature models achieved 97.23% and 99.98% accuracy, respectively, which is much higher than that of the previous methods.

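The local-feature model described in the abstract can be sketched as a compact CNN over grayscale patches. The layer counts, channel widths, and patch size below are illustrative assumptions, not the paper's exact architecture; ReLU, dropout, and batch normalization are the building blocks the paper cites.

```python
import torch
import torch.nn as nn

class LocalFeatureCNN(nn.Module):
    """Small CNN over 128x128 grayscale patches for 8 printer classes."""
    def __init__(self, num_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 128 -> 64
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(2),                      # 64 -> 32
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                      # regularization, per Srivastava et al.
            nn.Linear(64 * 32 * 32, num_classes), # one logit per printer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = LocalFeatureCNN()
logits = model(torch.zeros(4, 1, 128, 128))       # a batch of 4 patches
```

The global-feature variant would prepend a GLCM-based preprocessing stage before the convolutional layers, which the paper reports speeds up convergence.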

Fig. 1. Printer Identification using Wiener Filter and GLCM

Fig. 2. General Structure of CNN

Fig. 3. Training and Testing Process of Proposed Algorithm

Fig. 4. Local Feature-based Printer Identification Model

Fig. 5. Gray Level Co-occurrence Matrix Calculation (in the case of a 3-bit image)
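The GLCM step illustrated in Fig. 5 counts how often pairs of gray levels co-occur at a fixed pixel offset; for a 3-bit image the matrix is 8×8. A minimal sketch, where the offset and the toy image are illustrative assumptions:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Count co-occurrences of gray-level pairs at offset (dy, dx)."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

# 3-bit image (gray levels 0..7), as in the Fig. 5 example
img = np.array([[0, 0, 1],
                [2, 2, 3],
                [7, 7, 7]])
G = glcm(img)   # 8x8 matrix; e.g. the pair (7, 7) occurs twice
```

Statistical descriptors (contrast, correlation, energy, homogeneity) computed from this matrix are the classic Haralick texture features [17].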

Fig. 6. GLCM Feature Map Processing for Data Reduction

Fig. 7. Samples from Each Printer

Fig. 8. 128x128 Sample Collection with Random Cropping
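The sampling step of Fig. 8 can be sketched as drawing random 128×128 crops from a scanned page; the page size and crop count below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_crops(page, n, size=128):
    """Draw n random size x size patches from a 2-D page image."""
    h, w = page.shape
    crops = []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        crops.append(page[y:y + size, x:x + size])
    return np.stack(crops)

# stand-in for a scanned page; real input would be a grayscale scan
page = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
samples = random_crops(page, n=10)
```

Cropping many small patches from each page multiplies the number of training samples per printer without additional scanning.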

Fig. 9. Printer Identification Accuracy of Two Proposed Models

Table 1. List of Printers Used for the Experiment and Their Labels

Table 2. Printer Identification Accuracy per Epoch

Table 3. Comparison of Printer Identification Algorithms

Table 4. Printer Identification Accuracy for Each Printer Using Local Feature Model

Table 5. Printer Identification Accuracy for Each Printer Using Global Feature Model

References

  1. A. K. Mikkilineni, P.-J. Chiang, G. N. Ali, G. T.-C. Chiu, J. P. Allebach, and E. J. Delp, "Printer identification based on texture features," in Proceedings of the International Conference on Digital Printing Technologies, pp. 306-311, 2004.
  2. A. K. Mikkilineni, O. Arslan, P.-J. Chiang, R. M. Kumontoy, J. P. Allebach, G. T.-C. Chiu, and E. J. Delp, "Printer forensics using SVM techniques," in Proceedings of the International Conference on Digital Printing Technologies, pp. 223-226, 2005.
  3. W. Deng, Q. Chen, F. Yuan, and Y. Yan, "Printer identification based on distance transform," in Proceedings of the International Conference on Intelligent Networks and Intelligent Systems, pp. 565-568, 2008.
  4. S. Elkasrawi, and F. Shafait, "Printer identification using supervised learning for document forgery detection," in Proceedings of the 11th IAPR International Workshop on Document Analysis Systems, pp. 146-150, 2014.
  5. J.-H. Choi, D.-H. Im, H.-Y. Lee, J.-T. Oh, J.-H. Ryu, and H.-K. Lee, "Color laser printer identification by analyzing statistical features on discrete wavelet transform," in Proceedings of the IEEE International Conference on Image Processing, pp. 1505-1508, 2009.
  6. S.-J. Ryu, H.-Y. Lee, D.-H. Im, J.-H. Choi, and H.-K. Lee, "Electrophotographic printer identification by halftone texture analysis," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1846-1849, 2010.
  7. D.-G. Kim and H.-K. Lee, “Colour laser printer identification using halftone texture fingerprint,” Electronics Letters, Vol. 51, No. 13, pp. 981-983, 2015. https://doi.org/10.1049/el.2015.0697
  8. J.-Y. Baek, H.-S. Lee, S.-G. Kong, J.-H. Choi, Y.-M. Yang, and H.-Y. Lee, “Color Laser Printer Identification through Discrete Wavelet Transform and Gray Level Co-occurrence Matrix,” KIPS Transactions: Part B, Vol. 17, No. 3, pp. 197-206, 2010.
  9. H.-Y. Lee, J.-Y. Baek, S.-G. Kong, H.-S. Lee, and J.-H. Choi, “Color Laser Printer Forensics through Wiener Filter and Gray Level Co-occurrence Matrix,” Journal of KIISE: Software and Applications, Vol. 37, No. 8, pp. 599-610, 2010.
  10. M.-J. Tsai, J. Liu, C.-S. Wang, and C.-H. Chuang, "Source color laser printer identification using discrete wavelet transform and feature selection algorithms," in Proceedings of the IEEE International Symposium on Circuits and Systems, pp. 2633-2636, 2011.
  11. D.-G. Kim, J.-U. Hou, and H.-K. Lee, "Learning deep features for source color laser printer identification based on cascaded learning," arXiv preprint arXiv:1711.00207, 2017.
  12. V. Dumoulin, and F. Visin, "A guide to convolution arithmetic for deep learning," arXiv preprint arXiv:1603.07285, 2016.
  13. V. Nair and G. E. Hinton, "Rectified linear units improve restricted boltzmann machines," in Proceedings of the International Conference on Machine Learning, pp. 807-814, 2010.
  14. N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting,” Journal of Machine Learning Research, Vol. 15, No. 1, pp. 1929-1958, 2014.
  15. S. Ioffe, and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," in Proceedings of the International Conference on Machine Learning, pp. 448-456, 2015.
  16. A. Tuama, F. Comby, and M. Chaumont, "Camera model identification with the use of deep convolutional neural networks," in Proceedings of the IEEE International Workshop on Information Forensics and Security, pp. 1-6, 2016.
  17. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, “Textural features for image classification,” IEEE Transactions on Systems, Man, and Cybernetics, Vol. 3, No. 6, pp. 610-621, 1973. https://doi.org/10.1109/TSMC.1973.4309314