
Roadmap Toward Certificate Program for Trustworthy Artificial Intelligence

  • Han, Min-gyu (ICT Convergence Program, Hansung University)
  • Kang, Dae-Ki (Dept. of Computer Engineering, Dongseo University)
  • Received : 2021.07.09
  • Accepted : 2021.07.15
  • Published : 2021.09.30

Abstract

In this paper, we propose AI certification standardization activities for systematic research and planning toward the standardization of trustworthy artificial intelligence (AI). The activities proceed in two stages. In Stage 1, we investigate the scope and feasibility of standardization through research on AI reliability technology targeting international standards organizations, and we establish an AI reliability technology standard together with AI reliability verification to demonstrate the feasibility of the AI reliability technology and certification standards. In Stage 2, based on the technical specifications established in the previous stage, we establish an AI reliability certification program for the verification of products, systems, and services. Along with the establishment of the AI reliability certification system, a global InterOp (interoperability test) event and international standards meetings and seminars on AI reliability certification are to be held to promote the adoption of AI reliability certification. Finally, TAIPP (Trustworthy AI Partnership Project) will be established, with the participation of relevant standards organizations and industry, to maintain and develop the standards and certification programs and to ensure the governance of the AI reliability certification standards.

Acknowledgement

This work was supported by an Electronics and Telecommunications Research Institute (ETRI) grant funded by the ICT R&D program of MSIT/IITP [2021-0-00193, Development of photorealistic digital human creation and 30fps realistic rendering technology].
