• Title/Summary/Keyword: generative chat system

Search Result 12, Processing Time 0.015 seconds

Efficiency Analysis of Integrated Defense System Using Artificial Intelligence (인공지능을 활용한 통합방위체계의 효율성 분석)

  • Yoo Byung Duk;Shin Jin
    • Convergence Security Journal
    • /
    • v.23 no.1
    • /
    • pp.147-159
    • /
    • 2023
  • Recently, Chat GPT artificial intelligence (AI) is of keen interest to all governments, companies, and military sectors around the world. In the existing era of literacy AI, it has entered an era in which communication with humans is possible with generative AI that creates words, writings, and pictures. Due to the complexity of the current laws and ordinances issued during the recent national crisis in Korea and the ambiguity of the timing of application of laws and ordinances, the golden time of situational measures was often missed. For these reasons, it was not able to respond properly to every major disaster and military conflict with North Korea. Therefore, the purpose of this study was to revise the National Crisis Management Basic Act, which can act as a national tower in the event of a national crisis, and to promote artificial intelligence governance by linking artificial intelligence technology with the civil, government, military, and police.

Inducing Harmful Speech in Large Language Models through Korean Malicious Prompt Injection Attacks (한국어 악성 프롬프트 주입 공격을 통한 거대 언어 모델의 유해 표현 유도)

  • Ji-Min Suh;Jin-Woo Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.3
    • /
    • pp.451-461
    • /
    • 2024
  • Recently, various AI chatbots based on large language models have been released. Chatbots have the advantage of providing users with quick and easy information through interactive prompts, making them useful in various fields such as question answering, writing, and programming. However, a vulnerability in chatbots called "prompt injection attacks" has been proposed. This attack involves injecting instructions into the chatbot to violate predefined guidelines. Such attacks can be critical as they may lead to the leakage of confidential information within large language models or trigger other malicious activities. However, the vulnerability of Korean prompts has not been adequately validated. Therefore, in this paper, we aim to generate malicious Korean prompts and perform attacks on the popular chatbot to analyze their feasibility. To achieve this, we propose a system that automatically generates malicious Korean prompts by analyzing existing prompt injection attacks. Specifically, we focus on generating malicious prompts that induce harmful expressions from large language models and validate their effectiveness in practice.