Learning Method of Data Bias Employing Machine Learning for Kids: The Case of an AI Baseball Umpire

  • Kim, Hyo-eun (Dept. of Humanities, Hanbat National University)
  • Submitted : 2022.08.08
  • Reviewed : 2022.08.26
  • Published : 2022.08.31

Abstract

The goal of this paper is to propose the use of machine learning platforms in education to train learners to recognize data bias. Learners can cultivate the capacity to recognize bias when they handle AI data and systems, or when they seek to prevent the harm that data bias, one element of AI ethics, can cause. Specifically, the paper presents a method of teaching data bias with Machine Learning for Kids, illustrated by the case of an AI baseball umpire. Learners select a specific topic, review prior research, enter biased and unbiased data into the machine learning platform, compose test data, compare the results of machine learning, and present the implications for data bias that those results support. Through this process, learners experience first-hand that bias in AI data should be minimized and that data collection and selection affect society. The significance of this learning method is that it facilitates problem-based self-directed learning, can be combined with coding education, and links humanities and social topics with artificial intelligence literacy.
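The comparison step described in the abstract can also be sketched outside the Machine Learning for Kids interface. The short Python example below is a hypothetical illustration, not material from the paper: it trains two nearest-neighbour classifiers on the same pitch locations, one with labels that follow the rule-book strike zone ("unbiased") and one whose labels silently widen the zone against visiting batters ("biased"), then compares their calls on identical test pitches. The coordinates, zone limits, and team attribute are invented for illustration, and scikit-learn is assumed to be available.

```python
# Hypothetical sketch of the biased/unbiased data experiment,
# re-created in Python instead of Machine Learning for Kids.
# All data and thresholds below are invented for illustration.

from sklearn.neighbors import KNeighborsClassifier

# Pitch locations as (horizontal, vertical) plate coordinates in feet.
pitches = [(-0.2, 2.5), (0.5, 3.0), (1.2, 2.0), (0.0, 4.2),
           (-0.7, 1.8), (0.9, 2.9), (0.3, 1.2), (-1.1, 3.1)]
# Whether the batter belongs to the (invented) disadvantaged group.
visitor = [True, False, True, False, True, False, True, False]

def fair_call(x, z):
    """Unbiased label: strike iff the pitch is inside the rule-book zone."""
    return "strike" if abs(x) <= 0.83 and 1.5 <= z <= 3.5 else "ball"

def biased_call(x, z, is_visitor):
    """Biased label: the zone is widened only against visiting batters."""
    if is_visitor and abs(x) <= 1.1 and 1.2 <= z <= 3.8:
        return "strike"
    return fair_call(x, z)

X = [list(p) for p in pitches]
y_fair = [fair_call(x, z) for x, z in pitches]
y_biased = [biased_call(x, z, v) for (x, z), v in zip(pitches, visitor)]

# Train one "umpire" per data set; the only difference is the labels.
fair_umpire = KNeighborsClassifier(n_neighbors=1).fit(X, y_fair)
biased_umpire = KNeighborsClassifier(n_neighbors=1).fit(X, y_biased)

# Identical test pitches for both models, including a borderline low pitch.
for test in ([0.4, 1.4], [0.0, 2.5], [1.0, 2.5]):
    print(test,
          "fair:", fair_umpire.predict([test])[0],
          "biased:", biased_umpire.predict([test])[0])
```

On the first, low test pitch the two models disagree even though the pitch itself is identical, which is exactly the observation the learning method asks students to make when they compare a model trained on biased data with one trained on unbiased data.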

Acknowledgement

This paper was supported by the research fund of Hanbat National University in 2021.

References

  1. Kim, Hyo-eun (2021). Fairness Criteria and Mitigation of AI Bias. Korean Journal of Psychology: General, 40(4), 459-485.
  2. Joint ministries (2021). Reliable artificial intelligence realization strategy, May 13. Artificial Intelligence-based Policy Division, Ministry of Science and ICT. https://www.korea.kr/common/download.do?fileId=195009613&tblKey=GMN
  3. Barocas, S., Hardt, M., & Narayanan, A. (2017). Fairness in machine learning. NIPS Tutorial, 1. https://doi.org/10.1007/978-3-030-43883-8_7
  4. "U.S. Open without referee, AI referee fills in", The Chosun Ilbo, Korea. 2020.09.01. https://www.chosun.com/sports/2020/09/01/J2KCIOURXRBWJARQB7DRHHZXPQ
  5. "KBO, This year, the 2nd district 'robot referee' pilot operation... Ball strike judgment" Younghap News, Korea, 2021. 6.29. https://www.yna.co.kr/view/AKR20210629159600007"
  6. "International Gymnastics Federation's '3D technology and AI referee' World Cup competition", Younghap News, Korea, 2018.11.21. https://www.yna.co.kr/view/AKR20181121057600007
  7. Parsons, Christopher A., Johan Sulaeman, Michael C. Yates, and Daniel S. Hamermesh. 2011. "Strike Three: Discrimination, Incentives, and Evaluation." American Economic Review, 101 (4): 1410-35. https://doi.org/10.1257/aer.101.4.1410
  8. Teachable Machine, https://teachablemachine.withgoogle.com
  9. Machine Learning for Kids, https://machinelearningforkids.co.uk
  10. Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK. https://dl.acm.org/doi/10.5555/2029079
  11. Vries, T., Misra, I., Wang, C., & van der Maaten, L. (2019). Does object recognition work for everyone? Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 52-59.
  12. Prates, M. O., Avelar, P. H., & Lamb, L. C. (2020). Assessing gender bias in machine translation: a case study with Google Translate. Neural Computing and Applications, 32(10), 6363-6381. https://doi.org/10.1007/s00521-019-04144-6
  13. Tan, S., Caruana, R., Hooker, G., & Lou, Y. (2017). Detecting bias in black-box models using transparent model distillation. arXiv preprint arXiv:1710.06169.
  14. Skeem, J. L., & Lowenkamp, C. T. (2016). Risk, race, and recidivism: Predictive bias and disparate impact. Criminology, 54(4), 680-712. https://doi.org/10.1111/1745-9125.12123
  15. Bigman, Y. E., Yam, K. C., Marciano, D., Reynolds, S. J., & Gray, K. (2021). Threat of racial and economic inequality increases preference for algorithm decision-making. Computers in Human Behavior, 122, 106859. https://doi.org/10.1016/j.chb.2021.106859