• Search keyword: Large Language Model (LLM)


Application Strategies of Superintelligent AI in the Defense Sector: Emphasizing the Exploration of New Domains and Centralizing Combat Scenario Modeling (초거대 인공지능의 국방 분야 적용방안: 새로운 영역 발굴 및 전투시나리오 모델링을 중심으로)

  • Park, Gunwoo
    • The Journal of the Convergence on Culture Technology / v.10 no.3 / pp.19-24 / 2024
  • The role and importance of artificial intelligence (AI) in the future military combat environment are expanding rapidly, driven by declining military-age populations and evolving combat dynamics. In the civilian sector in particular, AI development has surged into new domains built on foundation models, such as OpenAI's ChatGPT, categorized as Super-Giant AI or Hyperscale AI. The U.S. Department of Defense has organized Task Force Lima under the Chief Digital and AI Office (CDAO) to research applications of large language models (LLMs) and generative AI, and militarily advanced nations such as China and Israel are also actively researching the integration of Super-Giant AI into their military capabilities. Consequently, there is a growing need for research within our military on the potential applications of Super-Giant AI in weapon systems. In this paper, we compare the characteristics, advantages, and disadvantages of specialized AI and Super-Giant AI (foundation models) and explore new application areas for Super-Giant AI in weapon systems. By anticipating future application areas and potential challenges, this research aims to provide insights into effectively integrating Super-Giant AI into defense operations. It is expected to contribute to the development of military capabilities, policy formulation, and international security strategies in the era of advanced artificial intelligence.

A Study on the Medical Application and Personal Information Protection of Generative AI (생성형 AI의 의료적 활용과 개인정보보호)

  • Lee, Sookyoung
    • The Korean Society of Law and Medicine / v.24 no.4 / pp.67-101 / 2023
  • The use of generative AI in the medical field is also being rapidly researched. Access to vast data sets reduces the time and energy spent selecting information. However, as the effort put into content creation decreases, the likelihood of associated problems rises. For example, because generative AI learns from data collected up to a set point in time and then generates outcomes, users must judge the accuracy of its results themselves. The answers may appear plausible, but their sources are often unclear, making it difficult to verify them. Moreover, the possibility that results reflect a biased or distorted perspective cannot, at present, be discounted on ethical grounds. Despite these concerns, the field of generative AI continues to advance, with a growing number of users applying it in various sectors, including the biomedical and life sciences. This raises important legal questions about who bears responsibility, and to what extent, for any damages caused by these high-performance AI algorithms. Beyond the general issues discussed above, another perspective arises from generative AI's fundamental nature as a large language model ("LLM"). There is a civil-law concern regarding "the memorization of training data within artificial neural networks and its subsequent reproduction." Medical data, by nature, often reflects patients' personal characteristics, potentially leading to issues such as the regeneration of personal information. The extensive application of generative AI in scenarios beyond those of traditional AI raises legal challenges that cannot be ignored.
Examining the technical characteristics of generative AI with a focus on legal issues, especially the protection of personal information, makes it evident that current personal information protection laws, particularly those governing the use of health and medical data, are inadequate. These laws provide processes for the anonymization and de-identification of specific personal information, but they fall short when generative AI is applied as software in medical devices. To address the functions of generative AI in clinical software, existing laws on the protection of personal information must be reevaluated and adjusted.