References
- 권헌영 (2019). 인공지능(AI)과 법조 분야: 윤리적·규제적 고려사항 [Artificial intelligence (AI) and the legal profession: Ethical and regulatory considerations]. 경제규제와 법, 12(2), 69-80.
- Adler-Milstein, J., Holmgren, A. J., Kralovec, P., Worzala, C., Searcy, T., & Patel, V. (2017). Electronic health record adoption in US hospitals: the emergence of a digital "advanced use" divide. Journal of the American Medical Informatics Association: JAMIA, 24(6), 1142-1148. https://doi.org/10.1093/jamia/ocx080
- Ajunwa, I., Friedler, S., Scheidegger, C. E., & Venkatasubramanian, S. (2016). Hiring by algorithm: predicting and preventing disparate impact. Available at SSRN.
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica.
- Artificial intelligence: Go master Lee Se-dol wins against AlphaGo program. (2016, March 13). BBC News Online. https://www.bbc.com/news/technology-35797102
- Asaro, P. M. (2011). A body to kick, but still no soul to damn: Legal perspectives on robotics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (p. 169). MIT Press.
- Ayasdi (2018). Ayasdi for Payers: white paper. Ayasdi. https://s3.amazonaws.com/cdn.ayasdi.com/wp-content/uploads/2018/10/05102657/WP-Ayasdi-for-Payers.pdf
- Bigman, Y. E., & Gray, K. (2018). People are averse to machines making moral decisions. Cognition, 181, 21-34. https://doi.org/10.1016/j.cognition.2018.08.003
- Cantarero, K., Szarota, P., Stamkou, E., Navas, M., & Dominguez Espinosa, A. D. C. (2021). The effects of culture and moral foundations on moral judgments: The ethics of authority mediates the relationship between power distance and attitude towards lying to one's supervisor. Current Psychology, 40(2), 675-683.
- Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017, August). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806).
- Curry, O. S., Chesters, M. J., & Van Lissa, C. J. (2019). Mapping morality with a compass: Testing the theory of 'morality-as-cooperation' with a new questionnaire. Journal of Research in Personality, 78, 106-124. https://doi.org/10.1016/j.jrp.2018.10.008
- Dash, S., Shakyawar, S. K., Sharma, M., & Kaushik, S. (2019). Big data in healthcare: management, analysis and future prospects. Journal of Big Data, 6(1), 1-25. https://doi.org/10.1186/s40537-018-0162-3
- Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62. https://doi.org/10.1145/2844110
- Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114. https://doi.org/10.1037/xge0000033
- Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864. https://doi.org/10.1037/0033-295X.114.4.864
- Foot, P. (1967). The problem of abortion and the doctrine of the double effect. Oxford Review, 5.
- Graham, J., Haidt, J., Koleva, S., Motyl, M., Iyer, R., Wojcik, S. P., & Ditto, P. H. (2013). Moral foundations theory: The pragmatic validity of moral pluralism. In Advances in experimental social psychology (Vol. 47, pp. 55-130). Academic Press.
- Graham, J., Haidt, J., Motyl, M., Meindl, P., Iskiwitch, C., & Mooijman, M. (2018). Moral foundations theory: On the advantages of moral pluralism over moral monism. In K. Gray & J. Graham (Eds.), Atlas of moral psychology (pp. 211-222). The Guilford Press.
- Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96(5), 1029. https://doi.org/10.1037/a0015141
- Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. https://doi.org/10.1126/science.1134475
- Gray, K., Jenkins, A. C., Heberlein, A. S., & Wegner, D. M. (2011). Distortions of mind perception in psychopathology. Proceedings of the National Academy of Sciences, 108(2), 477-479. https://doi.org/10.1073/pnas.1015493108
- Gray, K., & Wegner, D. M. (2012). Feeling robots and human zombies: Mind perception and the uncanny valley. Cognition, 125(1), 125-130. https://doi.org/10.1016/j.cognition.2012.06.007
- Gray, K., & Wegner, D. M. (2012). Morality takes two: Dyadic morality and mind perception. In M. Mikulincer & P. R. Shaver (Eds.), The social psychology of morality: Exploring the causes of good and evil (pp. 109-127). American Psychological Association.
- Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108. https://doi.org/10.1126/science.1062872
- Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics. MIT Press.
- Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65(4), 613. https://doi.org/10.1037/0022-3514.65.4.613
- Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108(4), 814. https://doi.org/10.1037/0033-295X.108.4.814
- Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. Vintage.
- High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for trustworthy AI. European Commission.
- Hollister, B., & Bonham, V. L. (2018). Should electronic health record-derived social and behavioral data be used in precision medicine research? AMA Journal of Ethics, 20(9), 873-880.
- Kohlberg, L. (1969). Stage and sequence: The cognitive-developmental approach to socialization. In D. A. Goslin (Ed.), Handbook of socialization theory and research (pp. 347-480). Rand McNally.
- Kohlberg, L. (2016). Stages of moral development as a basis for moral education. In C. Beck, B. Crittenden & E. Sullivan (Eds.), Moral Education (pp. 23-92). Toronto: University of Toronto Press. https://doi.org/10.3138/9781442656758-004
- Kuncel, N. R., Klieger, D. M., & Ones, D. S. (2014). In hiring, algorithms beat instinct. Harvard Business Review, 92(5), 32.
- Laakasuo, M., Palomäki, J., & Köbis, N. (2021). Moral uncanny valley: A robot's appearance moderates how its decisions are judged. International Journal of Social Robotics, 1-10.
- Larsen, R. R. (2020). Psychopathy as moral blindness: a qualifying exploration of the blindness-analogy in psychopathy theory and research. Philosophical Explorations, 23(3), 214-233. https://doi.org/10.1080/13869795.2020.1799662
- Lee, D. (2016, March 25). Tay: Microsoft issues apology over racist chatbot fiasco. BBC News Online. https://www.bbc.com/news/technology-35902104
- Li, M., & Suh, A. (2021, January). Machinelike or Humanlike? A Literature Review of Anthropomorphism in AI-Enabled Technology. In Proceedings of the 54th Hawaii International Conference on System Sciences (p. 4053).
- MacDorman, K. F. (2005, July). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? In CogSci-2005 Workshop: Toward Social Mechanisms of Android Science (pp. 106-118).
- MacDorman, K. F., & Entezari, S. O. (2015). Individual differences predict sensitivity to the uncanny valley. Interaction Studies, 16(2), 141-172. https://doi.org/10.1075/is.16.2.01mac
- MacDorman, K. F., Green, R. D., Ho, C. C., & Koch, C. T. (2009). Too real for comfort? Uncanny responses to computer generated faces. Computers in human behavior, 25(3), 695-710. https://doi.org/10.1016/j.chb.2008.12.026
- Malle, B. F., Scheutz, M., Arnold, T., Voiklis, J., & Cusimano, C. (2015, March). Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 117-124). IEEE.
- Malle, B. F., Scheutz, M., Forlizzi, J., & Voiklis, J. (2016, March). Which robot am I thinking about? The impact of action and appearance on people's evaluations of a moral robot. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 125-132). IEEE.
- Min, J., Kim, S., Park, Y., & Sohn, Y. W. (2018). A Comparative Study of Potential Job Candidates' Perceptions of an AI Recruiter and a Human Recruiter. Journal of the Korea Convergence Society, 9(5), 191-202. https://doi.org/10.15207/JKCS.2018.9.5.191
- Moosa, M. M., & Ud-Dean, S. M. (2010). Danger avoidance: An evolutionary explanation of uncanny valley. Biological Theory, 5(1), 12-14. https://doi.org/10.1162/BIOT_a_00016
- Mori, M. (1970). Bukimi no tani [the uncanny valley]. Energy, 7, 33-35.
- Morse, S. J. (2008). Psychopathy and criminal responsibility. Neuroethics, 1(3), 205-212. https://doi.org/10.1007/s12152-008-9021-9
- Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81-103.
- Natarajan, M., & Gombolay, M. (2020, March). Effects of anthropomorphism and accountability on trust in human robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 33-42).
- Newborn, M. (2012). Kasparov versus Deep Blue: Computer chess comes of age. Springer Science & Business Media.
- O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
- Ötting, S. K., & Maier, G. W. (2018). The importance of procedural justice in human-machine interactions: Intelligent systems as new decision agents in organizations. Computers in Human Behavior, 89, 27-39. https://doi.org/10.1016/j.chb.2018.07.022
- Oxford Dictionary. (n.d.). Artificial intelligence. In Oxford English Dictionary. Retrieved October 28, 2021, from https://www.oed.com/viewdictionaryentry/Entry/271625
- Savage, M. (2019, March 19). Meet Tengai, the job interview robot who won't judge you. BBC News Online. https://www.bbc.com/news/business-47442953
- Schein, C., & Gray, K. (2018). The theory of dyadic morality: Reinventing moral judgment by redefining harm. Personality and Social Psychology Review, 22(1), 32-70. https://doi.org/10.1177/1088868317698288
- Schein, C., Ritter, R. S., & Gray, K. (2016). Harm mediates the disgust-immorality link. Emotion, 16(6), 862. https://doi.org/10.1037/emo0000167
- Tollon, F. (2021). The artificial view: toward a non-anthropocentric account of moral patiency. Ethics and Information Technology, 23(2), 147-155. https://doi.org/10.1007/s10676-020-09540-4
- Torrance, S. (2006). The ethical status of artificial agents-with and without consciousness. Ethics of human interaction with robotic, bionic and AI systems: concepts and policies. Istituto Italiano per gli Studi Filosofici, Napoli, 60-66.
- Torrance, S. (2008). Ethics and consciousness in artificial agents. AI & Society, 22(4), 495-521. https://doi.org/10.1007/s00146-007-0091-8
- Verma, N., & Dombrowski, L. (2018, April). Confronting social criticisms: Challenges when adopting data-driven policing strategies. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-13).
- Wang, W. (2017). Smartphones as social actors? Social dispositional factors in assessing anthropomorphism. Computers in Human Behavior, 68, 334-344. https://doi.org/10.1016/j.chb.2016.11.022
- Wang, R., Harper, F. M., & Zhu, H. (2020, April). Factors influencing perceived fairness in algorithmic decision-making: Algorithm outcomes, development procedures, and individual differences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-14).
- Waytz, A., Cacioppo, J., & Epley, N. (2010). Who sees human? The stability and importance of individual differences in anthropomorphism. Perspectives on Psychological Science, 5(3), 219-232. https://doi.org/10.1177/1745691610369336
- Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117. https://doi.org/10.1016/j.jesp.2014.01.005
- Wegner, D. M., & Gray, K. (2017). The mind club: Who thinks, what feels, and why it matters. Penguin.
- Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2020). Robots at work: People prefer-and forgive-service robots with perceived feelings. Journal of Applied Psychology.