Keyed learning: An adversarial learning framework-formalization, challenges, and anomaly detection applications

  • Received : 2019.03.20
  • Accepted : 2019.08.28
  • Published : 2019.10.01

Abstract

We propose a general framework for keyed learning, in which a secret key is used as an additional input to an adversarial learning system. We also define models and formal challenges for an adversary who knows the learning algorithm and its input data but has no access to the key value. This adversarial learning framework is then applied to the more specific context of anomaly detection, where the secret key finds additional practical uses and guides the entire learning and alarm-generation procedure.

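As an illustration only, and not the scheme formalized in the paper, the sketch below shows one way a secret key could parameterize an anomaly detector: the key drives an HMAC-based ranking that decides which features are monitored, so both the learned model and the alarm-generation rule depend on the key. The names (`keyed_feature_subset`, `KeyedAnomalyDetector`), the z-score alarm rule, and the parameter choices are hypothetical assumptions made for this sketch.

```python
# A minimal, hypothetical sketch of keyed anomaly detection (standard library only).
# The names and the z-score rule below are illustrative, not the authors' method.
import hmac
import hashlib
import statistics


def keyed_feature_subset(key: bytes, feature_names, subset_size):
    """Derive a key-dependent feature subset by ranking feature names
    under HMAC-SHA256 with the secret key and keeping the first ones."""
    ranked = sorted(
        feature_names,
        key=lambda name: hmac.new(key, name.encode(), hashlib.sha256).digest(),
    )
    return ranked[:subset_size]


class KeyedAnomalyDetector:
    """Toy detector: per-feature z-scores over a key-selected feature subset.

    An adversary who knows this algorithm and the training data, but not
    the key, does not know which features actually drive the alarms."""

    def __init__(self, key: bytes, feature_names, subset_size=2, threshold=3.0):
        self.features = keyed_feature_subset(key, feature_names, subset_size)
        self.threshold = threshold
        self.stats = {}

    def fit(self, records):
        # records: list of dicts mapping feature name -> numeric value
        for f in self.features:
            values = [r[f] for r in records]
            self.stats[f] = (statistics.mean(values), statistics.stdev(values) or 1.0)
        return self

    def is_anomalous(self, record):
        # Raise an alarm if any key-selected feature deviates too far
        # from its training-time mean, in standard-deviation units.
        return any(
            abs(record[f] - mu) / sigma > self.threshold
            for f, (mu, sigma) in self.stats.items()
        )
```

With a different key, a different feature subset is monitored, so an evasion or mimicry strategy tuned against one instance of the detector need not transfer to another instance trained on the same data.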