• Title/Summary/Keyword: Retrieval Augmented Language Model

Search results: 11

In-Context Retrieval-Augmented Korean Language Model (In-Context 검색 증강형 한국어 언어 모델)

  • Sung-Min Lee; Joung Lee; Daeryong Seo; Donghyeon Jeon; Inho Kang; Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.443-447 / 2023
  • Retrieval-augmented language models strengthen a language model's generation ability by retrieving documents relevant to the input and incorporating them into the text-generation process. This paper strengthens the generation ability of Korean language models through in-context retrieval augmentation, without any additional training of the pretrained large language models, and shows performance gains over the baseline models. In particular, applying retrieval augmentation to pretrained models of various sizes yields substantially improved perplexity at every model scale. On the open-domain question answering task, the method further improves EM by 19 points and F1 by 27.8 points, demonstrating the effectiveness of the in-context retrieval-augmented language model.
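The core idea above can be sketched in a few lines: retrieved passages are simply prepended to the prompt, so the pretrained model needs no additional training. This is a minimal illustration, not the paper's system; the term-overlap retriever stands in for a real retriever (e.g. BM25 or dense retrieval), and no actual LLM is called.

```python
# In-context retrieval augmentation sketch: retrieve passages relevant to the
# query, then prepend them to the prompt for a frozen pretrained LM.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by word overlap with the query (toy stand-in for BM25/dense retrieval)."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend retrieved passages to the query -- the whole of 'in-context' augmentation."""
    context = "\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    return f"{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Seoul is the capital of South Korea.",
    "Perplexity measures how well a language model predicts text.",
]
query = "What is the capital of South Korea?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # this augmented prompt would be fed to the frozen LM
```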


A Survey on the Latest Research Trends in Retrieval-Augmented Generation (검색 증강 생성(RAG) 기술의 최신 연구 동향에 대한 조사)

  • Eunbin Lee; Ho Bae
    • The Transactions of the Korea Information Processing Society / v.13 no.9 / pp.429-436 / 2024
  • As Large Language Models (LLMs) continue to advance, effectively harnessing their potential has become increasingly important. LLMs, trained on vast datasets, are capable of generating text across a wide range of topics, making them useful in applications such as content creation, machine translation, and chatbots. However, they often face challenges in generalization due to gaps in specific or specialized knowledge, and updating these models with the latest information post-training remains a significant hurdle. To address these issues, Retrieval-Augmented Generation (RAG) models have been introduced. These models enhance response generation by retrieving information from continuously updated external databases, thereby reducing the hallucination phenomenon often seen in LLMs while improving efficiency and accuracy. This paper presents the foundational architecture of RAG, reviews recent research trends aimed at enhancing the retrieval capabilities of LLMs through RAG, and discusses evaluation techniques. Additionally, it explores performance optimization and real-world applications of RAG in various industries. Through this analysis, the paper aims to propose future research directions for the continued development of RAG models.

Design of a Question-Answering System based on RAG Model for Domestic Companies

  • Gwang-Wu Yi; Soo Kyun Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.7 / pp.81-88 / 2024
  • Despite the rapid growth of the generative AI market and significant interest from domestic companies and institutions, concerns about the provision of inaccurate information and potential information leaks have emerged as major factors hindering the adoption of generative AI. To address these issues, this paper designs and implements a question-answering system based on the Retrieval-Augmented Generation (RAG) architecture. The proposed method constructs a knowledge database using Korean sentence embeddings and retrieves information relevant to queries through optimized searches, which is then provided to the generative language model. Additionally, it allows users to directly manage the knowledge database to efficiently update changing business information, and it is designed to operate in a private network to reduce the risk of corporate confidential information leakage. This study aims to serve as a useful reference for domestic companies seeking to adopt and utilize generative AI.
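The retrieval step described above, matching a query against a knowledge database of sentence embeddings, can be sketched with cosine similarity. The vectors below are hard-coded toy values, not real embeddings; an actual system would use a Korean sentence-embedding model and a proper vector index.

```python
# Embedding retrieval sketch: score knowledge-base entries against the query
# vector by cosine similarity and pass the best match to the generative model.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

knowledge_db = {
    "refund policy": [0.9, 0.1, 0.0],   # toy embedding of a stored document
    "shipping times": [0.1, 0.8, 0.2],
}
query_vec = [0.85, 0.15, 0.05]          # toy embedding of the user's question

best = max(knowledge_db, key=lambda k: cosine(query_vec, knowledge_db[k]))
print(best)  # the entry whose text would be provided to the generative model
```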

Ontology-based Points of Interest Data Model for Mobile Augmented Reality (모바일 증강현실을 위한 온톨로지 기반 POI 데이터 모델)

  • Kim, Byung-Ho
    • Journal of Information Technology Services / v.10 no.4 / pp.269-280 / 2011
  • Mobile Augmented Reality (mobile AR), one of the most promising mobile applications, aims to provide richer experiences by annotating tags or virtual objects over the scene observed through the camera embedded in a handheld device such as a smartphone or tablet. In this paper, we analyze the current state of the art in mobile AR and propose a novel Points of Interest (POI) data model based on ontology to provide context-aware information retrieval over large amounts of POI data. The proposed ontology extends the standard POI data model of the W3C POI Working Group and is built using OWL (Web Ontology Language) and Protege. We also propose a context-aware mobile AR platform that resolves three outstanding issues in current platforms: interoperability of POI tags, POI data retrieval, and context-aware services.

A Survey on Retrieval-Augmented Generation (검색 증강 생성(RAG) 기술에 대한 최신 연구 동향)

  • Eun-Bin Lee; Ho Bae
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.745-748 / 2024
  • Large Language Models (LLMs) are advancing rapidly in the global market and seeing ever wider use, but they can lack specific or specialized knowledge, making them hard to generalize, and they are difficult to update with new data. To overcome this, research is actively underway on Retrieval-Augmented Generation (RAG) models, which retrieve information from external databases containing continuously updated information to generate responses, thereby minimizing LLM hallucination and improving efficiency and accuracy. This paper introduces recent research trends in RAG and its evaluation techniques for strengthening the retrieval capability of LLMs, presents optimization and application cases for real-world industrial use, and on this basis proposes future research directions.

Zero-shot Dialogue System Grounded in Multiple Documents (Zero-shot 기반 다중 문서 그라운딩된 대화 시스템)

  • Jun-Bum Park; Beomseok Hong; Wonseok Choi; Youngsub Han; Byoung-Ki Jeon; Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2023.10a / pp.399-403 / 2023
  • This paper focuses on efficient information retrieval and response generation through a dialogue system grounded in multiple documents. We emphasize the importance of retrieval for selecting the correct documents from large-scale collections and point out the limitations and problems of current retrieval methods. As large language models are increasingly used to generate more natural answers, we propose exploiting the models' zero-shot generation ability to avoid the constraints and waste incurred by fine-tuning, and discuss considerations regarding model size and resource efficiency. Our approach shows that prompting a large language model with multiple documents, without any training, to retrieve information and generate responses can improve the efficiency and usefulness of dialogue systems.
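The zero-shot, training-free approach outlined above amounts to packing multiple documents and the dialogue history into a single prompt, so the LLM grounds its answer without fine-tuning. The template wording below is an assumption for illustration, not the paper's actual prompt.

```python
# Zero-shot multi-document grounding sketch: documents plus dialogue history
# are assembled into one prompt for an untuned LLM.

def grounded_prompt(docs: list[str], history: list[str], user_turn: str) -> str:
    doc_block = "\n".join(f"Document {i+1}: {d}" for i, d in enumerate(docs))
    hist_block = "\n".join(history)
    return (f"{doc_block}\n\nDialogue so far:\n{hist_block}\n"
            f"User: {user_turn}\nAssistant (answer only from the documents):")

p = grounded_prompt(
    ["The museum opens at 9 am.", "Tickets cost 5,000 won."],
    ["User: Hi", "Assistant: Hello! How can I help?"],
    "When does the museum open?",
)
print(p)  # this single prompt replaces any fine-tuning step
```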


BIM Knowledge Expert Agent Research Based on LLM and RAG (LLM과 RAG 기반 BIM 지식 전문가 에이전트 연구)

  • Kang, Tae-Wook; Park, Seung-Hwa
    • Journal of KIBIM / v.14 no.3 / pp.22-30 / 2024
  • Recently, LLMs (Large Language Models), a rapidly developing generative AI technology, have been receiving much attention in the smart construction field. This study proposes a methodology for implementing a knowledge expert system by linking BIM (Building Information Modeling), which serves as a data hub in the smart construction domain, with an LLM. To use an LLM effectively in a BIM expert system, excessive model training costs, BIM big-data processing, and hallucination problems must be solved. This study proposes an LLM-based BIM expert system architecture that addresses these problems, focusing on the RAG (Retrieval-Augmented Generation) document generation method and search algorithm for effective BIM data retrieval, with the goal of implementing an LLM-based BIM expert system within limited GPU resources. For performance comparison and analysis, a prototype of the designed system is developed, and implications to consider when developing an LLM-based BIM expert system are derived.

Korean QA with Retrieval Augmented LLM (검색 증강 LLM을 통한 한국어 질의응답)

  • Mintaek Seo; Seung-Hoon Na; Joon-Ho Lim; Tae-Hyeong Kim; Hwi-Jung Ryu; Du-Seong Chang
    • Annual Conference on Human and Language Technology / 2023.10a / pp.690-693 / 2023
  • The number of parameters in language models has grown continuously, to the point where Large Language Models (LLMs) on the order of 100B parameters are being built. Alongside the performance gains on various tasks that have accompanied this growth in model size, problems such as hallucination and ethical issues have also emerged. Hallucination in particular causes a model to generate nonexistent information as if it were real, and such false generation undermines the trustworthiness of otherwise high-performing LLMs. Hallucination is alleviated, and performance further improved, when the input or internal representations are augmented through information retrieval. In this paper, we examine the improvements that retrieval augmentation brings to a model on Korean question answering.


Development of Dental Consultation Chatbot using Retrieval Augmented LLM (검색 증강 LLM을 이용한 치과 상담용 챗봇 개발)

  • Jongjin Park
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.2 / pp.87-92 / 2024
  • In this paper, a RAG system was implemented using an existing Large Language Model (LLM) and the LangChain library to develop a dental consultation chatbot. For this purpose, we collected content from the web bulletin boards of domestic dental university hospitals and constructed consultation data under the advice and supervision of dental specialists. To divide the input consultation data into appropriately sized pieces, the chunk size and the overlap between adjacent chunks were set to 1001 and 100, respectively. In the simulation, the retrieval-augmented LLM retrieved and output the consultation content most similar to the user's input. The results confirm that the chatbot can improve both the accessibility of dental consultation and the accuracy of consultation content.
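The chunking step described above splits long text into fixed-size pieces whose ends overlap, so sentences near a boundary appear in two chunks. The sketch below mirrors the paper's settings (chunk size 1001, overlap 100) but is a simplification of what a library such as LangChain provides, not the paper's code.

```python
# Overlapping-chunk splitter sketch: each chunk starts (chunk_size - overlap)
# characters after the previous one, so adjacent chunks share `overlap` characters.

def split_text(text: str, chunk_size: int = 1001, overlap: int = 100) -> list[str]:
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 2500                       # stand-in for collected consultation text
chunks = split_text(doc)
print([len(c) for c in chunks])        # every adjacent pair shares 100 characters
```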

Generative AI service implementation using LLM application architecture: based on RAG model and LangChain framework (LLM 애플리케이션 아키텍처를 활용한 생성형 AI 서비스 구현: RAG모델과 LangChain 프레임워크 기반)

  • Cheonsu Jeong
    • Journal of Intelligence and Information Systems / v.29 no.4 / pp.129-164 / 2023
  • Although the use and adoption of Large Language Models (LLMs) is expanding with recent advances in generative AI technology, existing studies offer few concrete application cases or implementation methods for using internal company data. Accordingly, this study presents a method for implementing generative AI services with an LLM application architecture built on the widely used LangChain framework. We review various ways to overcome the lack-of-information problem in LLMs and present concrete solutions: we analyze fine-tuning versus direct use of document information, and examine in detail the main steps of storing and retrieving information with the Retrieval-Augmented Generation (RAG) model. In particular, similar-context recommendation and Question-Answering (QA) systems are used to store and search information in a vector store with the RAG model. We also present the concrete operation method and the major implementation steps and cases, including implementation source code and user interface, to enhance understanding of generative AI technology. This work has meaning and value in enabling LLMs to be actively utilized in implementing services within companies.
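The store-and-search pipeline described above can be sketched end to end: documents go into a vector store, the nearest entry is retrieved for a question, and a QA prompt is assembled for the LLM. The bag-of-words "embedding" and the class and method names below are illustrative assumptions, not LangChain's actual API.

```python
# Vector-store QA sketch: add documents, search by similarity to the question,
# and build the prompt that a generative model would answer.
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count (a real system would use a sentence encoder)."""
    return Counter(text.lower().split())

class VectorStore:
    def __init__(self):
        self.entries: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.entries.append((text, embed(text)))

    def search(self, query: str, k: int = 1) -> list[str]:
        qv = embed(query)
        # Score = size of the multiset intersection with the query's word counts.
        ranked = sorted(self.entries,
                        key=lambda e: sum((qv & e[1]).values()), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("Our office closes at 6 pm on weekdays.")
store.add("The cafeteria serves lunch from noon.")

question = "What time does the office close on weekdays?"
context = store.search(question)[0]
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(prompt)  # this prompt would be passed to the generative model
```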