• Title/Summary/Keyword: Large language models


Deletion-Based Sentence Compression Using Sentence Scoring Reflecting Linguistic Information (언어 정보가 반영된 문장 점수를 활용하는 삭제 기반 문장 압축)

  • Lee, Jun-Beom;Kim, So-Eon;Park, Seong-Bae
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.3
    • /
    • pp.125-132
    • /
    • 2022
  • Sentence compression is a natural language processing task that generates concise sentences preserving the important meaning of the original sentence. For grammatically appropriate sentence compression, early studies relied on human-defined linguistic rules. Later, as sequence-to-sequence models performed well on various natural language processing tasks such as machine translation, several studies applied them to sentence compression. However, the rule-based studies require all rules to be defined by humans, and the sequence-to-sequence studies require a large amount of parallel data for model training. To address these challenges, Deleter, a sentence compression model that leverages the pre-trained language model BERT, was proposed. Because Deleter compresses sentences using a perplexity-based score computed over BERT, it requires neither linguistic rules nor a parallel dataset. However, because Deleter considers only perplexity, it does not reflect the linguistic information of the words in a sentence; furthermore, since the corpora used to pre-train BERT are far from compressed sentences, this can lead to incorrect sentence compression. To address these problems, this paper proposes a method to quantify the importance of linguistic information and reflect it in perplexity-based sentence scoring. Furthermore, by fine-tuning BERT on a corpus of news articles, which often contain proper nouns and omit unnecessary modifiers, we allow BERT to measure a perplexity appropriate for sentence compression. Evaluations on English and Korean datasets confirm that the proposed method improves the compression performance of sentence-scoring-based models.
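As a minimal illustration of the deletion-based search that Deleter performs, the sketch below greedily deletes the token whose removal yields the lowest-perplexity remainder. A toy unigram language model stands in for BERT's pseudo-perplexity; the vocabulary, probabilities, and scoring function are illustrative assumptions, not the authors' implementation:

```python
import math

# Toy unigram probabilities standing in for BERT-based pseudo-perplexity.
# Higher-probability tokens are "expected" and cheap to keep.
UNIGRAM = {"the": 0.08, "cat": 0.01, "sat": 0.01, "on": 0.05,
           "mat": 0.01, "very": 0.03, "fluffy": 0.002}
OOV = 1e-4  # fallback probability for unseen tokens

def perplexity(tokens):
    """Exponentiated average negative log-probability of the tokens."""
    if not tokens:
        return float("inf")
    nll = -sum(math.log(UNIGRAM.get(t, OOV)) for t in tokens)
    return math.exp(nll / len(tokens))

def compress(tokens, target_len):
    """Greedily delete one token at a time, keeping the candidate
    sentence with the lowest perplexity, until target_len remains."""
    while len(tokens) > target_len:
        candidates = [tokens[:i] + tokens[i+1:] for i in range(len(tokens))]
        tokens = min(candidates, key=perplexity)
    return tokens

sentence = "the very fluffy cat sat on the mat".split()
print(compress(sentence, 5))
```

Under this toy model the low-probability modifier "fluffy" is deleted first, mirroring how a perplexity-only score tends to drop surprising words regardless of their linguistic role, which is exactly the weakness the paper's linguistic-information weighting targets.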

Application of ChatGPT text extraction model in analyzing rhetorical principles of COVID-19 pandemic information on a question-and-answer community

  • Hyunwoo Moon;Beom Jun Bae;Sangwon Bae
    • International journal of advanced smart convergence
    • /
    • v.13 no.2
    • /
    • pp.205-213
    • /
    • 2024
  • This study uses a large language model (LLM) to identify Aristotle's rhetorical principles (ethos, pathos, and logos) in COVID-19 information on Naver Knowledge-iN, South Korea's leading question-and-answer community. The research compared the use of these rhetorical elements between the most upvoted answers and randomly selected answers. A total of 193 answer pairs were randomly selected, with 135 pairs for training and 58 for testing. These answers were coded in line with the rhetorical principles and used to refine GPT 3.5-based models. The models achieved F1 scores of .88 (ethos), .81 (pathos), and .69 (logos). Subsequent analysis of 128 new answer pairs revealed that logos, particularly factual information and logical reasoning, was used more frequently in the most upvoted answers than in the random answers, whereas ethos and pathos did not differ between the answer groups. The results suggest that health information consumers value information containing logos, while ethos and pathos were not associated with consumers' preference for health information. By utilizing an LLM for the analysis of persuasive content, which has typically been conducted manually with much labor and time, this study not only demonstrates the feasibility of using an LLM for latent content but also contributes to expanding the horizon in the field of AI text extraction.
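The per-label F1 scores reported above can be computed directly from gold codes and model predictions. The sketch below shows the standard binary-F1 calculation for one rhetorical label; the example codes are hypothetical, not the study's data:

```python
def f1_score(gold, pred, label):
    """Binary F1 for one rhetorical label (e.g. 'logos')."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold codes vs. model predictions for six answers.
gold = ["logos", "ethos", "logos", "pathos", "logos", "ethos"]
pred = ["logos", "ethos", "pathos", "pathos", "logos", "logos"]
print(round(f1_score(gold, pred, "logos"), 2))  # prints 0.67
```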

Generative AI service implementation using LLM application architecture: based on RAG model and LangChain framework (LLM 애플리케이션 아키텍처를 활용한 생성형 AI 서비스 구현: RAG모델과 LangChain 프레임워크 기반)

  • Cheonsu Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.129-164
    • /
    • 2023
  • In a situation where the adoption of Large Language Models (LLMs) is expanding due to recent developments in generative AI technology, it is difficult to find actual application cases or implementation methods for using internal company data in existing studies. Accordingly, this study presents a method of implementing generative AI services with an LLM application architecture based on LangChain, the most widely used framework. To this end, we review various ways to overcome the lack-of-information problem in LLM use and present concrete solutions: we analyze fine-tuning and the direct use of document information, and examine in detail the main steps of storing and retrieving information with the retrieval-augmented generation (RAG) model. In particular, similar-context recommendation and question-answering (QA) systems are employed to store and search information in a vector store using the RAG model. In addition, the concrete operation method, major implementation steps, and cases, including implementation source code and user interface, are presented to enhance understanding of generative AI technology. This is meaningful and valuable in enabling LLMs to be actively utilized in implementing services within companies.
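The RAG store-and-retrieve flow described above can be sketched in a framework-agnostic way. The snippet below replaces the embedding model and vector store with toy stand-ins (bag-of-words vectors and a plain list); the document texts and function names are illustrative assumptions, not the paper's LangChain code:

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: bag-of-words counts. A real system would
    call an embedding model behind a framework such as LangChain."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, store, k=2):
    """Return the top-k documents from the 'vector store' by similarity."""
    q = embed(query)
    return sorted(store, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Augment the prompt with retrieved context before the LLM call."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

store = ["Vacation policy: employees receive 15 days of paid leave.",
         "Expense policy: meals are reimbursed up to 30 dollars.",
         "Security policy: rotate passwords every 90 days."]
question = "How many days of paid leave do employees get?"
print(build_prompt(question, retrieve(question, store)))
```

The final prompt is what gets sent to the LLM: retrieval injects internal-document context that the base model was never trained on, which is the core of the lack-of-information solution the study examines.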

VIRTUAL REALITY SHIP SIMULATOR

  • Yim, Jeong-Bin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2000.06a
    • /
    • pp.93-105
    • /
    • 2000
  • This paper describes a prototype Virtual Reality Ship Simulator (VRSS) that we recently developed as next-generation training equipment based on virtual reality (VR) technology. The inherent defects of conventional ship simulators are their enormous cost and difficult upgrades, owing to system construction elements such as a large mock-up bridge and wide visual presentations. To cope with those problems, we explored VR technology, which can provide realistic environments in a virtual world, and constructed a prototype VRSS system consisting of PC-based human sensors and a database of 3D object models and Head Related Transfer Function (HRTF) coefficients. The 3D-WEBMASTER authoring tool was used for Virtual Reality Modeling Language (VRML) authoring. Using the VRSS system, we built a Port and Passage Simulator for the harbor of INCHON in Korea, and a Ship and Sea State Simulator for arbitrary user-specified sea environmental states. Through many simulation tests, we verified the efficiency of the developed prototype by subjective assessment with five participants. We then present the results of the simulation experiments and conclude with a discussion of the evaluation results.

A Deep Learning-based Article- and Paragraph-level Classification

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.11
    • /
    • pp.31-41
    • /
    • 2018
  • Text classification has been studied for a long time in the Natural Language Processing field. In this paper, we propose an article- and paragraph-level genre classification system using Word2Vec-based LSTM, GRU, and CNN models for large-scale English corpora. Both article- and paragraph-level classification achieved the best accuracy with LSTM, followed by GRU and CNN. This confirms that, when pre-trained Word2Vec-based word embeddings are used in both deep learning-based classification tasks, the sequential word information captured for articles is more useful than the word-feature extraction applied to paragraphs.

Development of Card News Generation Platform Using Generative AI (생성형 AI를 이용한 카드뉴스 생성 플랫폼 개발)

  • Yang Ha-yeon;Eom Chae-yeon;Lee Soo-yeon;Lee Tae-ran;Cho Young-seo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.820-821
    • /
    • 2023
  • This project implemented a system that delivers information to IT industry professionals as an IT card-news service, using Azure OpenAI Service (large language models and generative AI) and generative AI (GPT-4) to cover IT technologies and trends. The generative AI system was devised to address the absence of IT card-news services and the cost and time required for news production. The service is expected to provide users interested in the IT industry with curated news at a glance.

Morpheme-Based Few-Shot Learning with Large Language Models for Korean Healthcare Named Entity Recognition (한국어 헬스케어 개체명 인식을 위한 거대 언어 모델에서의 형태소 기반 Few-Shot 학습 기법)

  • Su-Yeon Kang;Gun-Woo Kim
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.428-429
    • /
    • 2023
  • Named entity recognition is a core natural language processing task that identifies and classifies names of specific categories in a sentence. This technology is essential for diagnostic support and data management in the healthcare field. However, the conventional approach of transfer-learning a pre-trained model for a specific domain depends heavily on large amounts of data. Focusing on the use of large language models (LLMs) trained on vast data, this study proposes a named entity recognition method for the Korean healthcare domain that reflects the agglutinative nature of Korean through few-shot prompts incorporating morpheme information.
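The morpheme-based few-shot idea can be sketched as prompt assembly: each demonstration pairs a sentence with its morpheme segmentation and gold entities, so the LLM sees where entity boundaries fall relative to Korean particles. The example sentences, tag set, and segmentations below are illustrative assumptions, not the authors' prompts:

```python
# Hypothetical few-shot examples: each sentence is paired with its
# morpheme segmentation and gold entity spans (tag set is illustrative).
EXAMPLES = [
    {"sentence": "환자는 당뇨병 진단을 받았다.",
     "morphemes": "환자/NNG 는/JX 당뇨병/NNG 진단/NNG 을/JKO 받/VV 았다/EP+EF",
     "entities": "당뇨병 -> DISEASE"},
    {"sentence": "아스피린을 하루 한 번 복용한다.",
     "morphemes": "아스피린/NNG 을/JKO 하루/NNG 한/MM 번/NNB 복용/NNG 한다/XSV+EF",
     "entities": "아스피린 -> DRUG"},
]

def build_few_shot_prompt(query_sentence, query_morphemes):
    """Compose an LLM prompt whose demonstrations include morpheme
    segmentations, reflecting Korean's agglutinative structure."""
    parts = ["Extract healthcare named entities (DISEASE, DRUG, SYMPTOM)."]
    for ex in EXAMPLES:
        parts.append(f"Sentence: {ex['sentence']}\n"
                     f"Morphemes: {ex['morphemes']}\n"
                     f"Entities: {ex['entities']}")
    parts.append(f"Sentence: {query_sentence}\n"
                 f"Morphemes: {query_morphemes}\n"
                 f"Entities:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    "두통이 심해 타이레놀을 복용했다.",
    "두통/NNG 이/JKS 심하/VA 어/EC 타이레놀/NNG 을/JKO 복용/NNG 했다/XSV+EP+EF")
print(prompt)
```

Because particles such as 을/를 attach directly to nouns, surface tokens like "타이레놀을" blur the entity boundary; supplying the segmentation in the prompt makes that boundary explicit to the model.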

A Study on the Data Literacy Education in the Library of the Chat GPT, Generative AI Era (ChatGPT, 생성형 AI 시대 도서관의 데이터 리터러시 교육에 대한 연구)

  • Jeong-Mee Lee
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.57 no.3
    • /
    • pp.303-323
    • /
    • 2023
  • The purpose of this study is to introduce language models such as ChatGPT in the era of generative AI and to provide direction for the components of data literacy education in libraries that use them. To this end, the following three research questions are addressed. First, the technical features of ChatGPT-like language models are examined. Second, it is argued that data literacy education is necessary for the proper and accurate use of information by users of service platforms based on generative AI technology. Finally, for library data literacy education in the ChatGPT era, a scheme is proposed comprising seven components: data understanding, data generation, data collection, data verification, data management, data use and sharing, and data ethics. In conclusion, since generative AI technologies such as ChatGPT are expected to have a significant impact on users' information utilization, libraries should first consider the advantages, disadvantages, and problems of these technologies, and use that understanding as a basis for further improving library information services.

A Study on Efficient Natural Language Processing Method based on Transformer (트랜스포머 기반 효율적인 자연어 처리 방안 연구)

  • Seung-Cheol Lim;Sung-Gu Youn
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.23 no.4
    • /
    • pp.115-119
    • /
    • 2023
  • The natural language processing models used in current artificial intelligence are huge, causing various difficulties in processing and analyzing data in real time. To resolve these difficulties, we propose a method that improves processing efficiency by using less memory, and we evaluate the performance of the proposed model. The technique applied in this paper shrinks the BERT [1] model by reducing its number of attention heads and embedding size, splits the large corpus into segments, and obtains the result by averaging the output values of each forward pass. In this process, a random offset is applied to the sentences at every epoch to provide diversity in the input data. The model is then fine-tuned for classification. We found that the split-processing model was about 12% less accurate than the unsplit model, but the number of model parameters was reduced by 56%.
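The split-encode-average step described above can be sketched without a deep learning framework. Below, a toy encoder stands in for the shrunken BERT forward pass; the chunk length, embedding size, and feature computation are illustrative assumptions, not the paper's configuration:

```python
import random

def encode(chunk, embedding_size=8):
    """Stand-in for a shrunken BERT forward pass: returns a fixed-size
    vector (toy character statistics instead of real features)."""
    vec = [0.0] * embedding_size
    for i, ch in enumerate(chunk):
        vec[i % embedding_size] += ord(ch) / 1000.0
    return vec

def split_encode_average(text, chunk_len=16, offset=0):
    """Split a long input into chunks (after a per-epoch random offset
    for input diversity), run the small encoder on each chunk, and
    average the per-chunk outputs into one representation."""
    text = text[offset:]
    chunks = [text[i:i + chunk_len] for i in range(0, len(text), chunk_len)]
    outputs = [encode(c) for c in chunks if c]
    n = len(outputs)
    return [sum(vals) / n for vals in zip(*outputs)]

random.seed(0)
document = "Efficient transformers can process long corpora in slices. " * 3
for epoch in range(2):
    offset = random.randrange(4)   # fresh random offset each epoch
    features = split_encode_average(document, offset=offset)
    print(epoch, [round(f, 3) for f in features[:3]])
```

Memory scales with the chunk length rather than the full input length, which is the trade against the accuracy loss the paper reports.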

Cross-Lingual Style-Based Title Generation Using Multiple Adapters (다중 어댑터를 이용한 교차 언어 및 스타일 기반의 제목 생성)

  • Yo-Han Park;Yong-Seok Choi;Kong Joo Lee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.341-354
    • /
    • 2023
  • The title of a document is a brief summarization of the document. Readers can easily understand a document if we provide its title in their preferred style and language. In this research, we propose a cross-lingual and style-based title generation model using multiple adapters. Training the model requires a parallel corpus in several languages with different styles, which is quite difficult to construct; however, a monolingual title generation corpus of a single style can be built easily. Therefore, we apply a zero-shot strategy to generate a title in a different language and with a different style for an input document. The baseline model is a Transformer consisting of an encoder and a decoder, pre-trained on several languages. The model is then equipped with multiple adapters for translation, languages, and styles. After the model learns a translation task from the parallel corpus, it learns a title generation task from the monolingual title generation corpus. When training the model on a task, we activate only the adapter that corresponds to that task. When generating a cross-lingual and style-based title, we activate only the adapters that correspond to the target language and target style. Experimental results show that our proposed model performs as well as a pipeline model that first translates into the target language and then generates a title. There have been significant changes in natural language generation due to the emergence of large-scale language models; however, research on improving natural language generation with limited resources and limited data must continue, and in this regard this study seeks to demonstrate the significance of such research.
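The selective-activation scheme above can be sketched as adapter routing: a dictionary of small transformations sits on top of a frozen backbone, and only those matching the current task, language, and style are applied. The adapter names and transformations below are hypothetical stand-ins, not the authors' model:

```python
# Hypothetical adapter bank: each adapter is a small function that
# transforms a hidden state; only adapters matching the current
# task/language/style are activated in a given forward pass.
ADAPTERS = {
    "lang:en":        lambda h: [x + 0.1 for x in h],
    "lang:ko":        lambda h: [x + 0.2 for x in h],
    "style:news":     lambda h: [x * 1.1 for x in h],
    "style:blog":     lambda h: [x * 0.9 for x in h],
    "task:translate": lambda h: [x - 0.05 for x in h],
}

def forward(hidden, active):
    """Apply only the activated adapters on top of the frozen backbone."""
    for name in active:
        hidden = ADAPTERS[name](hidden)
    return hidden

# Zero-shot combination: generate a Korean news-style title by
# activating the language and style adapters trained separately.
h = forward([1.0, 2.0], active=["lang:ko", "style:news"])
print([round(x, 3) for x in h])  # -> [1.32, 2.42]
```

Because each adapter is trained on its own task and composed only at inference time, language and style combinations never seen together during training can still be produced, which is the zero-shot strategy the paper relies on.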