• Title/Summary/Keyword: LLM Prompting

Search Result 5, Processing Time 0.023 seconds

Application of Llama2 LLM and prompting in Financial QA (Llama2 LLM과 prompting을 통한 Financial QA 풀이)

  • NaKyung Lee;Kyung Seo Ki;Gahgene Gweon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.487-488
    • /
    • 2023
  • 본 논문에서는 RLHF 기반의 오픈소스 LLM인 llama-2-13b model을 FinQA task에 적용하여 그 성능을 확인해 보았다. 이때, CoT, few-shot과 같은 다양한 prompting 기법들을 적용해보며 어떤 방법이 가장 효과적인지 비교했다. 그 결과, 한 번(total)에 task를 수행한 경우 few-shot 예시를 2개 사용했을 때보다 3개 사용했을 때, subtask로 나누어 수행한 경우 prompt로 답(simple)만 제시했을 때보다 CoT 형식으로 주었을 때, 각각 24.85%의 정확도로 가장 높은 성능을 보였다.

Utilizing Large Language Models for Non-trained Binary Sentiment Classification (거대 언어 모델(LLM)을 이용한 비훈련 이진 감정 분류)

  • Hyungjin Ahn;Taewook Hwang;Sangkeun Jung
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.66-71
    • /
    • 2023
  • ChatGPT가 등장한 이후 다양한 거대 언어 모델(Large Language Model, LLM)이 등장하였고, 이러한 LLM을 목적에 맞게 파인튜닝하여 사용할 수 있게 되었다. 하지만 LLM을 새로 학습하는 것은 물론이고, 단순 튜닝만 하더라도 일반인은 시도하기 어려울 정도의 많은 컴퓨팅 자원이 필요하다. 본 연구에서는 공개된 LLM을 별도의 학습 없이 사용하여 zero-shot 프롬프팅으로 이진 분류 태스크에 대한 성능을 확인하고자 했다. 학습이나 추가적인 튜닝 없이도 기존 선학습 언어 모델들에 준하는 이진 분류 성능을 확인할 수 있었고, 성능이 좋은 LLM의 경우 분류 실패율이 낮고 일관적인 성능을 보여 상당히 높은 활용성을 확인하였다.

  • PDF

Pilot Development of a 'Clinical Performance Examination (CPX) Practicing Chatbot' Utilizing Prompt Engineering (프롬프트 엔지니어링(Prompt Engineering)을 활용한 '진료수행시험 연습용 챗봇(CPX Practicing Chatbot)' 시범 개발)

  • Jundong Kim;Hye-Yoon Lee;Ji-Hwan Kim;Chang-Eop Kim
    • The Journal of Korean Medicine
    • /
    • v.45 no.1
    • /
    • pp.203-214
    • /
    • 2024
  • Objectives: In the context of competency-based education emphasized in Korean Medicine, this study aimed to develop a pilot version of a CPX (Clinical Performance Examination) Practicing Chatbot utilizing large language models with prompt engineering. Methods: A standardized patient scenario was acquired from the National Institute of Korean Medicine and transformed into text format. Prompt engineering was then conducted using role prompting and few-shot prompting techniques. The GPT-4 API was employed, and a web application was created using the gradio package. An internal evaluation criterion was established for the quantitative assessment of the chatbot's performance. Results: The chatbot was implemented and evaluated based on the internal evaluation criterion. It demonstrated relatively high correctness and compliance. However, there is a need for improvement in confidentiality and naturalness. Conclusions: This study successfully piloted the CPX Practicing Chatbot, revealing the potential for developing educational models using AI technology in the field of Korean Medicine. Additionally, it identified limitations and provided insights for future developmental directions.

Utilizing AI Foundation Models for Language-Driven Zero-Shot Object Navigation Tasks (언어-기반 제로-샷 물체 목표 탐색 이동 작업들을 위한 인공지능 기저 모델들의 활용)

  • Jeong-Hyun Choi;Ho-Jun Baek;Chan-Sol Park;Incheol Kim
    • The Journal of Korea Robotics Society
    • /
    • v.19 no.3
    • /
    • pp.293-310
    • /
    • 2024
  • In this paper, we propose an agent model for Language-Driven Zero-Shot Object Navigation (L-ZSON) tasks, which takes in a freeform language description of an unseen target object and navigates to find out the target object in an inexperienced environment. In general, an L-ZSON agent should able to visually ground the target object by understanding the freeform language description of it and recognizing the corresponding visual object in camera images. Moreover, the L-ZSON agent should be also able to build a rich spatial context map over the unknown environment and decide efficient exploration actions based on the map until the target object is present in the field of view. To address these challenging issues, we proposes AML (Agent Model for L-ZSON), a novel L-ZSON agent model to make effective use of AI foundation models such as Large Language Model (LLM) and Vision-Language model (VLM). In order to tackle the visual grounding issue of the target object description, our agent model employs GLEE, a VLM pretrained for locating and identifying arbitrary objects in images and videos in the open world scenario. To meet the exploration policy issue, the proposed agent model leverages the commonsense knowledge of LLM to make sequential navigational decisions. By conducting various quantitative and qualitative experiments with RoboTHOR, the 3D simulation platform and PASTURE, the L-ZSON benchmark dataset, we show the superior performance of the proposed agent model.

Enhancing Empathic Reasoning of Large Language Models Based on Psychotherapy Models for AI-assisted Social Support (인공지능 기반 사회적 지지를 위한 대형언어모형의 공감적 추론 향상: 심리치료 모형을 중심으로)

  • Yoon Kyung Lee;Inju Lee;Minjung Shin;Seoyeon Bae;Sowon Hahn
    • Korean Journal of Cognitive Science
    • /
    • v.35 no.1
    • /
    • pp.23-48
    • /
    • 2024
  • Building human-aligned artificial intelligence (AI) for social support remains challenging despite the advancement of Large Language Models. We present a novel method, the Chain of Empathy (CoE) prompting, that utilizes insights from psychotherapy to induce LLMs to reason about human emotional states. This method is inspired by various psychotherapy approaches-Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), Person-Centered Therapy (PCT), and Reality Therapy (RT)-each leading to different patterns of interpreting clients' mental states. LLMs without CoE reasoning generated predominantly exploratory responses. However, when LLMs used CoE reasoning, we found a more comprehensive range of empathic responses aligned with each psychotherapy model's different reasoning patterns. For empathic expression classification, the CBT-based CoE resulted in the most balanced classification of empathic expression labels and the text generation of empathic responses. However, regarding emotion reasoning, other approaches like DBT and PCT showed higher performance in emotion reaction classification. We further conducted qualitative analysis and alignment scoring of each prompt-generated output. The findings underscore the importance of understanding the emotional context and how it affects human-AI communication. Our research contributes to understanding how psychotherapy models can be incorporated into LLMs, facilitating the development of context-aware, safe, and empathically responsive AI.