• Title/Summary/Keyword: Friendly Artificial Intelligence (우호적인 인공지능)


Why Should We Worry about Controlling AI? (우리는 왜 인공지능에 대한 통제를 고민해야 하는가?)

  • Rheey, Sang-hun
    • Journal of Korean Philosophical Society / v.147 / pp.261-281 / 2018
  • This paper covers recent discussions of the risks to human beings posed by the development of artificial intelligence (AI). We consider AI research in terms of artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). First, we examine the risks of ANI, or weak AI systems. To maximize efficiency, humans will use autonomous AI extensively, and we can predict the risks that arise from transferring a great deal of authority to autonomous AI and allowing it to judge and act without human intervention. However sophisticated they may be, human-made AI systems are incomplete, and virus infections or bugs can cause errors. We therefore argue that there should be a limit to what is entrusted to AI; typically, we do not believe that lethal autonomous weapons systems should be allowed. Strong AI researchers are optimistic about the emergence of AGI and ASI. Superintelligence is an AI system that surpasses human ability in all respects, so it may act against human interests or harm human beings. The problem of controlling superintelligence, i.e., the control problem, is therefore being considered seriously. In this paper, we outline how superintelligence might be controlled, based on the control schemes proposed so far. If superintelligence emerges, we judge that there is at present no way for humans to control it completely. The emergence of superintelligence may, of course, be a fictitious assumption; even in that case, research on the control problem is of practical value in setting the direction of future AI research.