• Title/Summary/Keyword: GPT-2 model


Transformer-based Language model Bert And GPT-2 Performance Comparison Study (Transformer기반의 언어모델 Bert와 GPT-2 성능 비교 연구)

  • Yoo, Yean-Jun; Hong, Seok-Min; Lee, Hyeop-Geon; Kim, Young-Woone
    • Annual Conference of KIPS / 2022.05a / pp.381-383 / 2022
  • Recently, research on Transformer-based language models such as Bert and GPT has been active in natural language processing. These language models are pre-trained on large corpora with large numbers of parameters and show high performance on a variety of natural language processing tasks. In this paper, we evaluate the performance of the Transformer-based language models Bert and GPT-2. The evaluation measures positive/negative classification accuracy and training time on the 'Naver Movie Review' dataset. The results show that GPT-2 achieved 4.16% to 5.32% higher accuracy than Bert, while Bert trained 104 to 116 seconds faster than GPT-2. Future comparisons should be carried out with more data and under a wider range of conditions for a more detailed assessment.
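
The paper itself does not include code, but a minimal sketch of this kind of comparison, assuming Hugging Face Transformers, the public NSMC movie-review dataset, and the Korean checkpoints klue/bert-base and skt/kogpt2-base-v2 (illustrative choices, not the authors' exact setup), could look like this:

```python
# Illustrative sketch: fine-tune a BERT-style and a GPT-2-style classifier on the
# Naver movie review (NSMC) dataset and compare accuracy and wall-clock training time.
# Checkpoint names, dataset hub id, and hyperparameters are assumptions.
import time
import numpy as np
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

def run(checkpoint):
    ds = load_dataset("e9t/nsmc")                        # Naver movie reviews (hub id is an assumption)
    tok = AutoTokenizer.from_pretrained(checkpoint)
    if tok.pad_token is None:                            # GPT-2 has no pad token by default
        tok.pad_token = tok.eos_token
    def enc(batch):
        return tok(batch["document"], truncation=True, padding="max_length", max_length=64)
    ds = ds.map(enc, batched=True)
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
    model.config.pad_token_id = tok.pad_token_id
    args = TrainingArguments("out", num_train_epochs=1,
                             per_device_train_batch_size=32, report_to=[])
    trainer = Trainer(model=model, args=args,
                      train_dataset=ds["train"].shuffle(seed=42).select(range(20000)),
                      eval_dataset=ds["test"].select(range(5000)))
    start = time.time()
    trainer.train()
    train_time = time.time() - start
    preds = trainer.predict(ds["test"].select(range(5000)))
    acc = (np.argmax(preds.predictions, axis=-1) == preds.label_ids).mean()
    return acc, train_time

for ckpt in ["klue/bert-base", "skt/kogpt2-base-v2"]:    # encoder vs. decoder Korean models
    acc, sec = run(ckpt)
    print(f"{ckpt}: accuracy={acc:.4f}, train_time={sec:.0f}s")
```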

KoDialoGPT2 : Modeling Chit-Chat Dialog in Korean (KoDialoGPT2 : 한국어 일상 대화 생성 모델)

  • Oh, Dongsuk; Park, Sungjin; Lee, Hanna; Jang, Yoonna; Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2021.10a / pp.457-460 / 2021
  • A dialogue system is a system in which an AI and a person communicate in natural language; research divides broadly into task-oriented dialogue and chit-chat (everyday) dialogue systems. Task-oriented dialogue systems cover domains users need in daily life, such as checking the weather, booking hotels and flights, and managing schedules, and each domain has scenarios for its goal. Such dialogue can give the user clear utterances but lacks naturalness. Chit-chat dialogue spans diverse domains and has no fixed scenarios, so it can provide the user with natural utterances. Chit-chat systems are developed as either retrieval-based or generation-based systems. Retrieval-based systems require a database of utterance pairs, whereas generation-based systems have no such database and rely on utterances generated by the model's Language Modeling (LM), so the quality of the utterances depends on the model's performance. Recently, pre-trained models have shown high performance on natural language processing tasks, including in the chit-chat dialogue domain. The pre-trained models with the highest performance on chit-chat dialogue are autoregressive generative models, and the representative Korean model is KoGPT2. However, KoGPT2 is trained only on written-style data and therefore performs poorly on conversational text. In this paper, we develop KoDialoGPT2, a Korean model that performs well on conversational text, and it outperformed the existing KoGPT2.
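
A hedged sketch of how a written-style KoGPT2 checkpoint might be adapted to dialogue, assuming the skt/kogpt2-base-v2 checkpoint, a hypothetical <sep> turn separator, and a local dialogs.txt file (none of which are confirmed details of KoDialoGPT2):

```python
# Illustrative sketch: adapt the written-style KoGPT2 checkpoint to chit-chat
# dialogue by fine-tuning on turn-concatenated conversations.  The separator
# token, file path, and hyperparameters are assumptions, not the paper's setup.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

SEP = "<sep>"                                    # hypothetical turn separator
tok = AutoTokenizer.from_pretrained("skt/kogpt2-base-v2")
tok.add_special_tokens({"additional_special_tokens": [SEP], "pad_token": "<pad>"})

model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")
model.resize_token_embeddings(len(tok))

# Each line of dialogs.txt is assumed to hold one dialogue, turns joined by <sep>.
raw = load_dataset("text", data_files={"train": "dialogs.txt"})
def enc(batch):
    return tok(batch["text"], truncation=True, max_length=256)
train = raw["train"].map(enc, batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tok, mlm=False)   # causal LM objective
args = TrainingArguments("kodialogpt2", num_train_epochs=3,
                         per_device_train_batch_size=16, report_to=[])
Trainer(model=model, args=args, train_dataset=train, data_collator=collator).train()

# Generation: feed the dialogue history and let the model continue the last turn.
history = f"안녕하세요{SEP}반가워요, 오늘 뭐 했어요?{SEP}"
ids = tok(history, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=32, do_sample=True, top_p=0.9,
                     pad_token_id=tok.pad_token_id)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```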


Exploring automatic scoring of mathematical descriptive assessment using prompt engineering with the GPT-4 model: Focused on permutations and combinations (프롬프트 엔지니어링을 통한 GPT-4 모델의 수학 서술형 평가 자동 채점 탐색: 순열과 조합을 중심으로)

  • Byoungchul Shin; Junsu Lee; Yunjoo Yoo
    • The Mathematical Education / v.63 no.2 / pp.187-207 / 2024
  • In this study, we explored the feasibility of automatically scoring descriptive assessment items with GPT-4 based ChatGPT by comparing and analyzing the scoring results of teachers and of GPT-4 based ChatGPT. For this purpose, three descriptive items on permutations and combinations for first-year high school students were selected from the KICE (Korea Institute for Curriculum and Evaluation) website. Items 1 and 2 had only one problem-solving strategy, while Item 3 had more than two strategies. Two teachers, each with over eight years of teaching experience, graded answers from 204 students, and their scores were compared with the results from GPT-4 based ChatGPT. Various techniques such as Few-Shot-CoT, SC, structured, and iterative prompting were used to construct the scoring prompts, which were then input into GPT-4 based ChatGPT. The scoring results for Items 1 and 2 showed a strong correlation between the teachers' and GPT-4's scoring. For Item 3, which involved multiple problem-solving strategies, the student answers were first classified by strategy using prompts input into GPT-4 based ChatGPT; scoring prompts tailored to each type were then applied, and these results also showed a strong correlation with the teachers' scoring. These results confirm the potential of prompt-engineered GPT-4 models to assist teachers' scoring; the limitations of this study and directions for future research are also presented.
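
As an illustration of the prompting approach described above, the sketch below combines a few-shot chain-of-thought scoring prompt with simple self-consistency voting via the OpenAI API; the rubric text, example answer, and prompt wording are hypothetical, not the study's actual prompts:

```python
# Illustrative sketch: few-shot chain-of-thought scoring of a descriptive answer
# with self-consistency voting over several samples.  The rubric, the example
# answer, and the prompt wording are hypothetical, not the study's prompts.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """Item: In how many ways can 5 distinct books be arranged in a row? (3 points)
Scoring: 3 pts for 5! = 120 with valid reasoning, 1 pt for correct setup only, 0 otherwise."""

FEW_SHOT = [
    {"role": "user", "content": f"{RUBRIC}\nStudent answer: 5*4*3*2*1 = 120\n"
                                "Score step by step, then end with 'SCORE: <n>'."},
    {"role": "assistant", "content": "The answer applies the permutation rule correctly and reaches 120. SCORE: 3"},
]

def score(student_answer: str, n_samples: int = 5) -> int:
    """Sample several CoT scorings and return the majority score (self-consistency)."""
    votes = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=FEW_SHOT + [{"role": "user",
                "content": f"{RUBRIC}\nStudent answer: {student_answer}\n"
                           "Score step by step, then end with 'SCORE: <n>'."}],
            temperature=0.7,
        )
        text = resp.choices[0].message.content
        if "SCORE:" in text:
            votes.append(int(text.rsplit("SCORE:", 1)[1].strip().split()[0].rstrip(".")))
    return Counter(votes).most_common(1)[0][0]

print(score("There are 5 choices for the first spot, 4 for the second, ... so 5! = 120."))
```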

Zero-shot Korean Sentiment Analysis with Large Language Models: Comparison with Pre-trained Language Models

  • Soon-Chan Kwon; Dong-Hee Lee; Beak-Cheol Jang
    • Journal of the Korea Society of Computer and Information / v.29 no.2 / pp.43-50 / 2024
  • This paper evaluates the Korean sentiment analysis performance of large language models such as GPT-3.5 and GPT-4 using a zero-shot approach through the ChatGPT API, comparing them with pre-trained Korean models such as KoBERT. Experiments on various Korean sentiment analysis datasets in fields such as movies, gaming, and shopping validate the effectiveness of these models. The results reveal that the LMKor-ELECTRA model showed the highest performance based on F1-score, while GPT-4 achieved particularly high accuracy and F1-scores on the movie and shopping datasets. This indicates that large language models can perform Korean sentiment analysis effectively without prior training on specific datasets, suggesting their potential in zero-shot learning. However, relatively lower performance on some datasets highlights the limitations of the zero-shot methodology. This study explores the feasibility of using large language models for Korean sentiment analysis and provides significant implications for future research in this area.
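
A minimal sketch of the zero-shot setup described here, assuming the OpenAI chat API and an illustrative prompt (not the paper's exact wording):

```python
# Illustrative sketch: zero-shot Korean sentiment classification through the
# ChatGPT API, with no task-specific fine-tuning.  The prompt wording and the
# model name are assumptions, not the paper's exact configuration.
from openai import OpenAI

client = OpenAI()

def zero_shot_sentiment(review: str, model: str = "gpt-4") -> str:
    """Ask the model to label a Korean review as positive or negative."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0.0,                       # deterministic labelling
        messages=[
            {"role": "system",
             "content": "You are a sentiment classifier. Answer with exactly one word: positive or negative."},
            {"role": "user", "content": f"Review: {review}"},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

reviews = ["배송이 빠르고 품질도 좋아요", "스토리가 지루하고 연기도 별로였다"]
preds = [zero_shot_sentiment(r) for r in reviews]
print(list(zip(reviews, preds)))
# F1 against gold labels can then be computed with sklearn.metrics.f1_score.
```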

Development of Block-based Code Generation and Recommendation Model Using Natural Language Processing Model (자연어 처리 모델을 활용한 블록 코드 생성 및 추천 모델 개발)

  • Jeon, In-seong; Song, Ki-Sang
    • Journal of The Korean Association of Information Education / v.26 no.3 / pp.197-207 / 2022
  • In this paper, we develop a machine learning based block code generation and recommendation model to reduce learners' cognitive load during coding education. Using a natural language processing model and fine-tuning, the model learns the blocks a learner has assembled in a block programming environment and then generates and recommends candidate blocks for the next step. To build the model, a training dataset was produced by pre-processing 50 block codes taken from 'Entry', a popular block programming web site. After dividing the pre-processed blocks into training, validation, and test sets, we developed block code generation models based on LSTM, Seq2Seq, and GPT-2. In the performance evaluation, GPT-2 outperformed the LSTM and Seq2Seq models on BLEU and ROUGE scores, which measure sentence similarity. For the outputs generated by the GPT-2 model, BLEU and ROUGE scores were relatively similar except when the number of blocks was 1 or 17.
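
For illustration, the sketch below scores a predicted block sequence against a reference with BLEU and ROUGE-L, the similarity metrics named above; the block names and sequences are invented, and the nltk and rouge-score packages stand in for whatever tooling the authors used:

```python
# Illustrative sketch: score a predicted next-block sequence against the learner's
# actual blocks with BLEU and ROUGE-L.  The block names and sequences are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference  = ["when_run", "move_forward", "repeat", "turn_right", "move_forward"]
prediction = ["when_run", "move_forward", "repeat", "move_forward"]

# BLEU over block-token sequences (smoothing avoids zero scores on short outputs).
bleu = sentence_bleu([reference], prediction,
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-L over the same sequences, joined into whitespace-separated strings.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
rouge_l = scorer.score(" ".join(reference), " ".join(prediction))["rougeL"].fmeasure

print(f"BLEU={bleu:.3f}  ROUGE-L={rouge_l:.3f}")
```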

Evaluating the Impact of Training Conditions on the Performance of GPT-2-Small Based Korean-English Bilingual Models

  • Euhee Kim; Keonwoo Koo
    • Journal of the Korea Society of Computer and Information / v.29 no.9 / pp.69-77 / 2024
  • This study evaluates the performance of second language acquisition models learning Korean and English using the GPT-2-Small model, analyzing the impact of various training conditions on performance. Four training conditions were used: monolingual learning, sequential learning, sequential-interleaved learning, and sequential-EWC learning. The model was trained on Korean datasets from the National Institute of Korean Language and English data from the BabyLM Challenge, with performance measured through PPL and BLiMP metrics. Results showed that monolingual learning performed best, with a PPL of 16.2 and BLiMP accuracy of 73.7%. In contrast, sequential-EWC learning had the highest PPL of 41.9 and the lowest BLiMP accuracy of 66.3% (p < 0.05). Monolingual learning proved most effective for optimizing model performance. The EWC regularization in sequential-EWC learning degraded performance by limiting weight updates and thereby hindering new language learning. This research improves the understanding of language modeling and contributes to research on cognitive similarity in AI language learning.
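
A brief sketch of the Elastic Weight Consolidation (EWC) penalty used in the sequential-EWC condition, written for a PyTorch/Hugging Face causal language model; variable names and the lambda value are assumptions, not the paper's settings:

```python
# Illustrative sketch of the EWC penalty: after learning language A, important
# weights are anchored via the diagonal Fisher information while language B is
# trained.  Variable names and the lambda value are assumptions.
import torch

def diagonal_fisher(model, data_loader, device="cpu"):
    """Estimate the diagonal Fisher information from squared gradients of the LM loss."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.requires_grad}
    model.eval()
    for batch in data_loader:
        model.zero_grad()
        out = model(input_ids=batch["input_ids"].to(device),
                    labels=batch["input_ids"].to(device))
        out.loss.backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
    return {n: f / max(len(data_loader), 1) for n, f in fisher.items()}

def ewc_penalty(model, fisher, old_params, lam=1000.0):
    """lam/2 * sum_i F_i * (theta_i - theta*_i)^2, added to the new-language LM loss."""
    loss = 0.0
    for n, p in model.named_parameters():
        if n in fisher:
            loss = loss + (fisher[n] * (p - old_params[n]) ** 2).sum()
    return 0.5 * lam * loss

# Typical use: after training on language A,
#   old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
#   fisher = diagonal_fisher(model, loader_for_language_A)
# and while training on language B:
#   total_loss = lm_loss + ewc_penalty(model, fisher, old_params)
```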

Semi-supervised GPT2 for News Article Recommendation with Curriculum Learning (준 지도 학습과 커리큘럼 학습을 이용한 유사 기사 추천 모델)

  • Seo, Jaehyung; Oh, Dongsuk; Eo, Sugyeong; Park, Sungjin; Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.495-500 / 2020
  • News articles do not necessarily convey information objectively or from a broad perspective. It is therefore undesirable to recommend news articles selectively based on personal interests or private information, as conventional recommendation systems do. In this paper, we present a similarity-based article recommendation model so that similar events and people can be viewed from diverse perspectives as objectively as possible. To measure similarity between long documents, we use the GPT2 [1] language model. In doing so, we mitigate the weaknesses of GPT2 [1], a unidirectional decoder model, through additional training, and use the BM25 [2] function for storage efficiency and key-paragraph extraction. Through semi-supervised learning [3], the model also performs self-training on recent news articles that have no similarity labels, and to learn effectively from long paragraphs we apply a curriculum learning [4] scheme with three stages divided by sentence length.
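
As a small illustration of the BM25 step mentioned above, the sketch below ranks an article's paragraphs against a query with the rank_bm25 package; the whitespace tokenization and the number of kept paragraphs are assumptions:

```python
# Illustrative sketch: pick the key paragraphs of a long article with BM25 before
# similarity scoring with a GPT2-based model.  Tokenization and top_k are assumptions.
from rank_bm25 import BM25Okapi

def key_paragraphs(article: str, query: str, top_k: int = 3) -> list[str]:
    """Return the top_k paragraphs of `article` ranked by BM25 against `query`."""
    paragraphs = [p.strip() for p in article.split("\n") if p.strip()]
    bm25 = BM25Okapi([p.split() for p in paragraphs])       # whitespace tokenization
    scores = bm25.get_scores(query.split())
    ranked = sorted(range(len(paragraphs)), key=lambda i: scores[i], reverse=True)
    return [paragraphs[i] for i in ranked[:top_k]]

# The extracted paragraphs (not the full article) would then be fed to the GPT2
# similarity model; training examples could further be bucketed into three
# curriculum stages by sentence length before fine-tuning.
article = "문단 1 ...\n문단 2 ...\n문단 3 ..."
print(key_paragraphs(article, "유사 사건 인물"))
```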


A study on semantic ambiguity in the Korean Named Entity Recognition (한국어 개체명 인식 과제에서의 의미 모호성 연구)

  • Kim, Seonghyun; Song, Youngsook; Song, Chisung; Han, Jiyoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.203-208 / 2021
  • In this paper, focusing on words whose named-entity category changes with context, we examine the performance on cross-tagged entities, broken down into label and span accuracy and into accuracy by sentence constituent and sentence position. Label accuracy increased in the order KoGPT2, mBERT, KLUE-RoBERTa. For span accuracy, mBERT slightly outperformed KLUE-RoBERTa, while KoGPT2 showed very low accuracy. However, when the entity appeared at the end of a sentence, KoGPT2's performance improved to a level comparable to the other models. The good recognizer performance at sentence-final position is presumably because, in the corpus used in the experiment, nouns are stacked less and the syntax is more patterned when the sentence constituent is a predicate, and because KoGPT2 is a decoder-based model; follow-up research is needed on this point.
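
A toy sketch of the two accuracy views used in this analysis, label accuracy versus span accuracy, over made-up (start, end, label) predictions:

```python
# Illustrative sketch: label accuracy (is the entity category right?) versus span
# accuracy (are the boundaries right?) over aligned gold/predicted mentions.
# The triples below are invented; real evaluation would first align mentions.
def label_and_span_accuracy(gold, pred):
    """gold/pred: lists of (start, end, label) triples, one per entity mention."""
    pairs = list(zip(gold, pred))
    span_hits  = sum(g[:2] == p[:2] for g, p in pairs)
    label_hits = sum(g[2] == p[2] for g, p in pairs)
    n = len(pairs)
    return label_hits / n, span_hits / n

gold = [(0, 3, "PS"), (10, 14, "LC"), (20, 25, "OG")]
pred = [(0, 3, "PS"), (10, 13, "LC"), (20, 25, "LC")]
label_acc, span_acc = label_and_span_accuracy(gold, pred)
print(f"label accuracy={label_acc:.2f}, span accuracy={span_acc:.2f}")
```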


Leveraging LLMs for Corporate Data Analysis: Employee Turnover Prediction with ChatGPT (대형 언어 모델을 활용한 기업데이터 분석: ChatGPT를 활용한 직원 이직 예측)

  • Sungmin Kim; Jee Yong Chung
    • Knowledge Management Research / v.25 no.2 / pp.19-47 / 2024
  • Organizational ability to analyze and utilize data plays an important role in knowledge management and decision-making. This study aims to investigate the potential application of large language models in corporate data analysis. Focusing on the field of human resources, the research examines the data analysis capabilities of these models. Using the widely studied IBM HR dataset, the study reproduces machine learning-based employee turnover prediction analyses from previous research through ChatGPT and compares its predictive performance. Unlike past research methods that required advanced programming skills, ChatGPT-based machine learning data analysis, conducted through the analyst's natural language requests, offers the advantages of being much easier and faster. Moreover, its prediction accuracy was found to be competitive compared to previous studies. This suggests that large language models could serve as effective and practical alternatives in the field of corporate data analysis, which has traditionally demanded advanced programming capabilities. Furthermore, this approach is expected to contribute to the popularization of data analysis and the spread of data-driven decision-making (DDDM). The prompts used during the data analysis process and the program code generated by ChatGPT are also included in the appendix for verification, providing a foundation for future data analysis research using large language models.
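
For context, a conventional code-based baseline of the kind the ChatGPT-driven analysis is compared against might look like the sketch below; the CSV file name and model choice are assumptions, not the study's exact pipeline:

```python
# Illustrative sketch: a code-based turnover prediction baseline on the public
# IBM HR dataset.  File name, model, and split are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("WA_Fn-UseC_-HR-Employee-Attrition.csv")   # assumed local copy of the dataset
y = (df["Attrition"] == "Yes").astype(int)
X = pd.get_dummies(df.drop(columns=["Attrition"]), drop_first=True)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}, F1={f1_score(y_te, pred):.3f}")
# In the ChatGPT-based workflow, the same steps are requested in natural language
# (upload the CSV, ask for preprocessing, model fitting, and evaluation) instead of code.
```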

GPT-enabled SNS Sentence writing support system Based on Image Object and Meta Information (이미지 객체 및 메타정보 기반 GPT 활용 SNS 문장 작성 보조 시스템)

  • Dong-Hee Lee; Mikyeong Moon; Bong-Jun Choi
    • Journal of the Institute of Convergence Signal Processing / v.24 no.3 / pp.160-165 / 2023
  • In this study, we propose an SNS sentence writing assistance system that uses YOLO and GPT to help users write text accompanied by images, as on SNS. The YOLO model extracts objects from images inserted during writing, and meta-information such as GPS data and creation time is also extracted; both are used as prompt values for GPT. The YOLO model was trained on form image data, and its mAP score is about 0.25 on average. GPT was trained on 1,000 blog posts on the topic of 'restaurant reviews', and the model trained in this study was used to generate sentences from two types of keywords extracted from the images. A closed-ended survey was conducted to evaluate the practicality of the generated sentences and to allow a clear analysis of the results; the questionnaire presented the inserted image together with the keyword-based sentences and contained three evaluation items. The results showed that the keywords extracted from the images produced meaningful sentences. Through this study, we found that the accuracy of image-based sentence generation depends on the relationship between the image keywords and the content GPT was trained on.
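
A hedged sketch of the pipeline described above, detecting objects with a generic YOLO model, reading the capture time from EXIF, and passing both to GPT as prompt keywords; the model weights, prompt wording, and API usage are illustrative assumptions:

```python
# Illustrative sketch: detect objects with YOLO, read meta-information (capture
# time) from EXIF, and hand both to GPT as prompt keywords.  The weights file,
# prompt wording, and model name are assumptions, not the paper's configuration.
from PIL import Image, ExifTags
from ultralytics import YOLO
from openai import OpenAI

def image_keywords(path: str):
    """Return detected object names and the EXIF capture time of the image."""
    det = YOLO("yolov8n.pt")(path)[0]                     # generic YOLO weights as a stand-in
    objects = sorted({det.names[int(c)] for c in det.boxes.cls})
    exif = Image.open(path).getexif()
    datetime_tag = next(k for k, v in ExifTags.TAGS.items() if v == "DateTime")
    return objects, exif.get(datetime_tag, None)

client = OpenAI()
objects, taken = image_keywords("photo.jpg")
prompt = (f"Objects in the photo: {', '.join(objects)}. Taken at: {taken}. "
          "Write a short, friendly SNS post in Korean about this moment.")
resp = client.chat.completions.create(model="gpt-4",
                                      messages=[{"role": "user", "content": prompt}])
print(resp.choices[0].message.content)
```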