• Title/Summary/Keyword: LLMs (Large Language Models)

Enhancing Empathic Reasoning of Large Language Models Based on Psychotherapy Models for AI-assisted Social Support (인공지능 기반 사회적 지지를 위한 대형언어모형의 공감적 추론 향상: 심리치료 모형을 중심으로)

  • Yoon Kyung Lee;Inju Lee;Minjung Shin;Seoyeon Bae;Sowon Hahn
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.23-48
    • /
    • 2024
  • Building human-aligned artificial intelligence (AI) for social support remains challenging despite the advancement of Large Language Models. We present a novel method, Chain of Empathy (CoE) prompting, that utilizes insights from psychotherapy to induce LLMs to reason about human emotional states. The method is inspired by four psychotherapy approaches, Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT), each leading to different patterns of interpreting clients' mental states. LLMs without CoE reasoning generated predominantly exploratory responses. However, when LLMs used CoE reasoning, we found a more comprehensive range of empathic responses aligned with each psychotherapy model's distinct reasoning pattern. For empathic expression classification, the CBT-based CoE produced the most balanced classification of empathic expression labels and text generation of empathic responses. Regarding emotion reasoning, however, other approaches such as DBT and PCT showed higher performance in emotion reaction classification. We further conducted qualitative analysis and alignment scoring of each prompt-generated output. The findings underscore the importance of understanding the emotional context and how it affects human-AI communication. Our research contributes to understanding how psychotherapy models can be incorporated into LLMs, facilitating the development of context-aware, safe, and empathically responsive AI.
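
The core CoE idea, prepending a therapy-specific reasoning step before the empathic reply, can be sketched as a prompt template. The reasoning instructions below are illustrative paraphrases of each therapy's stance, not the paper's actual prompts, and `build_coe_prompt` is a hypothetical helper:

```python
# Sketch of Chain-of-Empathy (CoE) prompting: before responding, the model is
# asked to reason about the client's emotional state through the lens of one
# psychotherapy model. Reasoning texts are illustrative, not the paper's own.
COE_REASONING = {
    "CBT": "Identify the client's possibly distorted thought and how it shapes their emotion.",
    "DBT": "Acknowledge the client's emotion as valid, then note what could be changed.",
    "PCT": "Reflect the client's feelings without judgment, from their own frame of reference.",
    "RT":  "Focus on the client's present choices and what they can control now.",
}

def build_coe_prompt(client_message: str, therapy: str) -> str:
    """Compose a CoE prompt: therapy-specific reasoning first, empathic reply second."""
    reasoning = COE_REASONING[therapy]
    return (
        f"Client: {client_message}\n"
        f"Step 1 ({therapy} reasoning): {reasoning}\n"
        "Step 2: Based on that reasoning, write one empathic response."
    )

print(build_coe_prompt("I failed my exam and feel worthless.", "CBT"))
```

Swapping the `therapy` key changes only the reasoning step, which is how the four prompt variants would induce the different response patterns the abstract reports.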

A Study on Code Vulnerability Repair via Large Language Models (대규모 언어모델을 활용한 코드 취약점 리페어)

  • Woorim Han;Miseon Yu;Yunheung Paek
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2024.05a
    • /
    • pp.757-759
    • /
    • 2024
  • Software vulnerabilities represent security weaknesses in software systems that attackers exploit for malicious purposes, resulting in potential system compromise and data breaches. Despite the increasing prevalence of these vulnerabilities, manual repair by security analysts remains time-consuming. The emergence of deep learning technologies has provided promising opportunities for automating software vulnerability repair, but existing AI-based approaches still face challenges in effectively handling complex vulnerabilities. This paper explores the potential of large language models (LLMs) to address these limitations, examining their performance on code vulnerability repair tasks, and introduces the latest research on using LLMs to improve the efficiency and accuracy of fixing security bugs.

Question Answering that leverage the inherent knowledge of large language models (거대 언어 모델의 내재된 지식을 활용한 질의 응답 방법)

  • Myoseop Sim;Kyungkoo Min;Minjun Park;Jooyoung Choi;Haemin Jung;Stanley Jungkyu Choi
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.31-35
    • /
    • 2023
  • Recently, methods that exploit the knowledge embedded in the parameters of Large Language Models (LLMs) have been actively studied in the Question Answering (QA) field. In Open Domain QA (ODQA), a retriever-reader pipeline has traditionally been used, but LLMs have recently come to serve as the retriever as well as the reader. This paper proposes a method that leverages the knowledge inherent in LLMs for question answering: before answering a question, the model generates a passage related to it, then produces the answer based on that passage. This method outperforms existing prompting approaches in Closed-Book QA, demonstrating that the knowledge embedded in large language models can be exploited to improve question-answering ability.

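
The generate-then-answer scheme described in the abstract above can be sketched as a two-call pipeline. Here `llm` is a stand-in for any text-completion function and the prompt wording is hypothetical, not the paper's:

```python
from typing import Callable

def generate_then_answer(question: str, llm: Callable[[str], str]) -> str:
    """Closed-book QA in two steps: elicit a relevant passage from the model's
    parametric knowledge, then answer conditioned on that passage."""
    # Step 1: have the model write a background passage from its own knowledge.
    passage = llm(f"Write a short background passage relevant to: {question}")
    # Step 2: answer the question using the generated passage as context.
    return llm(f"Passage: {passage}\nQuestion: {question}\nAnswer:")

# Toy stand-in model for demonstration only.
def toy_llm(prompt: str) -> str:
    if "Answer:" in prompt:
        return "Seoul"
    return "Seoul is the capital of South Korea."

print(generate_then_answer("What is the capital of South Korea?", toy_llm))
```

The point of the intermediate passage is that it surfaces the model's own knowledge explicitly, so the final answer is grounded in generated context rather than produced in a single shot.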
Leveraging LLMs for Corporate Data Analysis: Employee Turnover Prediction with ChatGPT (대형 언어 모델을 활용한 기업데이터 분석: ChatGPT를 활용한 직원 이직 예측)

  • Sungmin Kim;Jee Yong Chung
    • Knowledge Management Research
    • /
    • v.25 no.2
    • /
    • pp.19-47
    • /
    • 2024
  • Organizational ability to analyze and utilize data plays an important role in knowledge management and decision-making. This study aims to investigate the potential application of large language models in corporate data analysis. Focusing on the field of human resources, the research examines the data analysis capabilities of these models. Using the widely studied IBM HR dataset, the study reproduces machine learning-based employee turnover prediction analyses from previous research through ChatGPT and compares its predictive performance. Unlike past research methods that required advanced programming skills, ChatGPT-based machine learning data analysis, conducted through the analyst's natural language requests, offers the advantages of being much easier and faster. Moreover, its prediction accuracy was found to be competitive compared to previous studies. This suggests that large language models could serve as effective and practical alternatives in the field of corporate data analysis, which has traditionally demanded advanced programming capabilities. Furthermore, this approach is expected to contribute to the popularization of data analysis and the spread of data-driven decision-making (DDDM). The prompts used during the data analysis process and the program code generated by ChatGPT are also included in the appendix for verification, providing a foundation for future data analysis research using large language models.

Knowledge Transfer in Multilingual LLMs Based on Code-Switching Corpora (코드 스위칭 코퍼스 기반 다국어 LLM의 지식 전이 연구)

  • Seonghyun Kim;Kanghee Lee;Minsu Jeong;Jungwoo Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.301-305
    • /
    • 2023
  • Recently introduced Large Language Models (LLMs) have shown notable results in natural language processing, but research has centered on English, leaving other languages underserved. This study explores the transferability of language-specific knowledge in pre-trained LLMs, focusing on Korean. To this end, we built a code-switching corpus of Korean and English and compared the performance of the base model, LLAMA-2, against a model further trained on the code-switching corpus. The model trained with the proposed method effectively transferred semantic information between the two languages and could link knowledge across them. We expect this work to contribute to multilingual LLM research that reflects diverse languages and cultures, and to the diffusion and democratization of AI technology, including for low-resource languages.

Automation of M.E.P Design Using Large Language Models (대형 언어 모델을 활용한 설비설계의 자동화)

  • Park, Kyung Kyu;Lee, Seung-Been;Seo, Min Jo;Kim, Si Uk;Choi, Won Jun;Kim, Chee Kyung
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2023.11a
    • /
    • pp.237-238
    • /
    • 2023
  • Urbanization and the increase in building scale have amplified the complexity of M.E.P design. Traditional design methods face limitations when considering intricate pathways and variables, creating an urgent need for research on automated design. Initial algorithmic approaches struggled with complex architectural structures and the diversity of M.E.P types. However, with the launch of OpenAI's ChatGPT-3.5 beta version in 2022, new opportunities opened in the automated design sector. ChatGPT, based on a Large Language Model (LLM), can deeply comprehend the logical structures and meanings within its training data. This study analyzed the potential application and latent value of LLMs in M.E.P design. Ultimately, applying LLMs to M.E.P design will make genuine automated design feasible, which is anticipated to drive advances in design across the construction sector.

A Study on the Evaluation of LLM's Gameplay Capabilities in Interactive Text-Based Games (대화형 텍스트 기반 게임에서 LLM의 게임플레이 기능 평가에 관한 연구)

  • Dongcheul Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.3
    • /
    • pp.87-94
    • /
    • 2024
  • We investigated the feasibility of using Large Language Models (LLMs) to play text-based games without prior training on game data. We adopted ChatGPT-3.5 and its state-of-the-art successor, ChatGPT-4, as the systems implementing the LLM, and added the persistent memory feature proposed in this paper to ChatGPT-4, yielding three game-player agents. We used Zork, one of the most famous text-based games, to see whether the agents could navigate complex locations, gather information, and solve puzzles. The agent with persistent memory showed the widest range of exploration and the best score among the three. However, all three agents were limited in solving puzzles, indicating that LLMs are vulnerable to problems requiring multi-step reasoning. Nevertheless, the proposed agent still visited 37.3% of all locations and collected every item in the locations it visited, demonstrating the potential of LLMs.
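
The abstract names a persistent memory feature but not its design; one plausible shape, a memory of visited locations and collected items that survives beyond the model's context window and is summarized into each prompt, could look like this (all names hypothetical):

```python
class PersistentMemoryAgent:
    """Toy sketch of an LLM game agent with persistent memory: exploration
    history is kept outside the model and re-injected into every prompt,
    so it is not lost when the context window rolls over."""

    def __init__(self) -> None:
        self.visited: set[str] = set()
        self.items: list[str] = []

    def observe(self, location: str, items_here: list[str]) -> None:
        """Record the current location and pick up every item found there."""
        self.visited.add(location)
        self.items.extend(i for i in items_here if i not in self.items)

    def memory_summary(self) -> str:
        """Summary string that would be prepended to each LLM prompt."""
        return f"Visited: {sorted(self.visited)}; Inventory: {self.items}"

agent = PersistentMemoryAgent()
agent.observe("West of House", ["leaflet"])
agent.observe("Living Room", ["brass lantern", "sword"])
print(agent.memory_summary())
```

Keeping state outside the model explains the reported result pattern: wider exploration (old locations are never forgotten) without any gain on puzzles, which need multi-step reasoning rather than recall.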

Hallucination Detection for Generative Large Language Models Exploiting Consistency and Fact Checking Technique (생성형 거대 언어 모델에서 일관성 확인 및 사실 검증을 활용한 Hallucination 검출 기법)

  • Myeong Jin;Gun-Woo Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.461-464
    • /
    • 2023
  • Services built on generative large language models such as GPT-3 and LLaMA have recently been released and are in wide use. These models draw attention for giving fluent answers to a broad range of user questions. However, LLM answers often contain inconsistent content and non-factual statements, which can spread misinformation among users. This paper proposes a hallucination detection method that uses multiple LLM answer samples for the same question together with external knowledge. The method computes a consistency score across the LLM's answers to the same question and a factuality score through fact checking against external knowledge, and combines the two to detect hallucinations at the sentence level. In experiments on a dataset of GPT-3-generated passages about people from the WikiBio dataset, the method improved sentence-level hallucination detection over the baseline in AUC-PR scores.
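
The two-signal scheme, consistency across sampled answers plus external fact checking, can be sketched as below. The scorers are deliberately simplified stand-ins (string similarity and token overlap); the paper's actual scoring functions are not reproduced here:

```python
from difflib import SequenceMatcher

def consistency_score(sentence: str, other_samples: list[str]) -> float:
    """Agreement of a sentence with other sampled answers to the same question.
    Here: best string similarity; a real system would compare semantically."""
    return max(SequenceMatcher(None, sentence, s).ratio() for s in other_samples)

def factuality_score(sentence: str, evidence: list[str]) -> float:
    """Support from external knowledge. Here: token overlap with evidence
    passages; a real fact checker would use entailment or retrieval."""
    tokens = set(sentence.lower().split())
    return max(len(tokens & set(e.lower().split())) / len(tokens) for e in evidence)

def is_hallucination(sentence: str, samples: list[str], evidence: list[str],
                     w: float = 0.5, threshold: float = 0.5) -> bool:
    """Flag a sentence when its combined consistency + factuality support is low."""
    score = (w * consistency_score(sentence, samples)
             + (1 - w) * factuality_score(sentence, evidence))
    return score < threshold
```

A sentence that both recurs across samples and overlaps the evidence scores high on both signals and is kept; a sentence lacking either kind of support falls below the threshold and is flagged.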

Generative AI service implementation using LLM application architecture: based on RAG model and LangChain framework (LLM 애플리케이션 아키텍처를 활용한 생성형 AI 서비스 구현: RAG모델과 LangChain 프레임워크 기반)

  • Cheonsu Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.129-164
    • /
    • 2023
  • With the use and adoption of Large Language Models (LLMs) expanding amid recent advances in generative AI technology, existing studies offer few concrete application cases or implementation methods for using internal company data. Accordingly, this study presents a method of implementing generative AI services with an LLM application architecture built on LangChain, the most widely used framework. We review ways to overcome the problem of insufficient information when using LLMs, comparing fine-tuning with direct use of document information, and present concrete solutions. In particular, we detail the main steps of storing and retrieving information with the retrieval augmented generation (RAG) model, using similar-context recommendation and Question-Answering (QA) systems to store and search information in a vector store. We also present the concrete operating method, major implementation steps, and cases, including implementation source code and user interface, to deepen understanding of generative AI technology. This work has meaning and value in enabling LLMs to be actively used when implementing services within companies.
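
The store-and-retrieve steps of the RAG model described above can be sketched framework-agnostically, without the paper's LangChain code. The toy bag-of-words embedding stands in for a real embedding model, and all names are hypothetical:

```python
import math

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words embedding; a real system would use an embedding model."""
    vec: dict[str, float] = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0.0) + 1.0
    return vec

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Store: embed internal documents once, keep vectors alongside the text."""
    def __init__(self, docs: list[str]) -> None:
        self.docs = docs
        self.vecs = [embed(d) for d in docs]

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        """Retrieve: rank stored documents by similarity to the query."""
        q = embed(query)
        scored = sorted(zip(self.docs, self.vecs), key=lambda dv: -cosine(q, dv[1]))
        return [d for d, _ in scored[:k]]

def rag_prompt(query: str, store: VectorStore) -> str:
    """Augment: prepend retrieved context so the LLM answers from company data."""
    context = "\n".join(store.retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."
```

Because the documents live in the store rather than in the model's weights, internal company data can be updated or removed without retraining, which is the practical advantage RAG has over fine-tuning in this setting.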