• Title/Summary/Keyword: GPT-2 model

Search results: 85

Can ChatGPT Pass the National Korean Occupational Therapy Licensure Examination? (ChatGPT는 한국작업치료사면허시험에 합격할 수 있을까?)

  • Hong, Junhwa;Kim, Nayeon;Min, Hyemin;Yang, Hamin;Lee, Sihyun;Choi, Seojin;Park, Jin-Hyuck
    • Therapeutic Science for Rehabilitation
    • /
    • v.13 no.1
    • /
    • pp.65-74
    • /
    • 2024
  • Objective : This study assessed ChatGPT, an artificial intelligence system based on a large language model, for its ability to pass the National Korean Occupational Therapy Licensure Examination (NKOTLE). Methods : Using NKOTLE questions from 2018 to 2022, provided by the Korea Health and Medical Personnel Examination Institute, this study employed English prompts to determine the accuracy of ChatGPT in providing correct answers. Two researchers independently conducted the entire process, and their average accuracy was used to determine whether ChatGPT passed in each of the 5 years. The degree of agreement between the two researchers' ChatGPT answers was also assessed. Results : ChatGPT passed the 2020 examination but failed the examinations of the other 4 years. Specifically, its accuracy on questions related to medical regulations ranged from 25% to 57%, whereas its accuracy on other questions exceeded 60%. ChatGPT exhibited strong agreement between researchers, except on medical regulation questions, and this agreement was significantly correlated with accuracy. Conclusion : There are still limitations to applying ChatGPT to questions influenced by language or culture. Future studies should explore its potential as an educational tool for students majoring in occupational therapy through optimized prompts and continuous learning from the data.
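
The study's two quantities of interest, average accuracy across the two researchers' runs and inter-run agreement, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the answer letters and answer key below are invented.

```python
# Hypothetical sketch: scoring ChatGPT's multiple-choice answers and
# measuring agreement between two researchers' independent runs.
def accuracy(answers, key):
    """Fraction of questions answered correctly."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def percent_agreement(run_a, run_b):
    """Share of items on which the two runs gave identical answers."""
    return sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)

# Invented example: two researchers' ChatGPT runs on a 5-question exam.
key     = ["A", "C", "B", "D", "A"]
run_one = ["A", "C", "B", "B", "A"]
run_two = ["A", "C", "D", "B", "A"]

mean_acc  = (accuracy(run_one, key) + accuracy(run_two, key)) / 2
agreement = percent_agreement(run_one, run_two)
print(mean_acc, agreement)   # 0.7 0.8
```

The pass/fail decision then reduces to comparing `mean_acc` against the exam's cutoff per year.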

Proposal for the Utilization and Refinement Techniques of LLMs for Automated Research Generation (관련 연구 자동 생성을 위한 LLM의 활용 및 정제 기법 제안)

  • Seung-min Choi;Yu-chul, Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.4
    • /
    • pp.275-287
    • /
    • 2024
  • Research on the integration of Knowledge Graphs (KGs) and Language Models (LMs) has been explored consistently over the years. However, studies focusing on the automatic generation of text from the structured knowledge in KGs have not been as widely developed. In this study, we propose a methodology for automatically generating domain-specific related-work sections (Related Work) at a level comparable to existing papers. The methodology involves: 1) selecting optimal prompts, 2) extracting triples through a four-step refinement process, 3) constructing a knowledge graph, and 4) automatically generating the related work. The proposed approach utilizes GPT-4, one of the large language models (LLMs), and is designed to generate related work automatically by applying the four-step refinement process. The model demonstrated performance of 17.3, 14.1, and 4.2 in triple extraction on #Supp, #Cont, and Fluency, respectively. Under the GPT-4 automatic evaluation criteria, the model's performance improved from 88.5 points before refinement to 96.5 points (out of 100) after refinement, indicating a significant capability to automatically generate related work at a level similar to that of existing papers.
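
Steps 2 and 3 (triple refinement and graph construction) can be sketched minimally. The paper's actual four-step refinement is not detailed in the abstract, so the pass below shows only one plausible refinement operation, normalization plus de-duplication, on invented triples.

```python
# Hedged sketch: refine LLM-extracted (subject, relation, object) triples,
# then assemble them into a simple adjacency-list knowledge graph.
from collections import defaultdict

def refine(triples):
    seen, out = set(), []
    for s, r, o in triples:
        t = (s.strip().lower(), r.strip().lower(), o.strip().lower())
        if all(t) and t not in seen:   # drop empty fields and duplicates
            seen.add(t)
            out.append(t)
    return out

def build_graph(triples):
    graph = defaultdict(list)          # subject -> [(relation, object), ...]
    for s, r, o in triples:
        graph[s].append((r, o))
    return dict(graph)

triples = [("GPT-4", "is_a", "LLM"), ("gpt-4", "is_a", "llm"),
           ("GPT-4", "used_for", "related-work generation")]
kg = build_graph(refine(triples))
print(kg["gpt-4"])   # [('is_a', 'llm'), ('used_for', 'related-work generation')]
```

The related-work text would then be generated by prompting the LLM with paths drawn from `kg`.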

Shopping Mall Review Generator using KoGPT2 (KoGPT2를 이용한 쇼핑몰 리뷰 생성기)

  • Park, Gyu-Hyeon;Kwon, Hee-Yun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.01a
    • /
    • pp.31-33
    • /
    • 2022
  • A shopping-mall review generator is a technology that generates reviews on behalf of the user: using three categories, clothing condition, delivery condition, and size, it assigns a score to each part and generates a review matching that score. Because the generated review changes with the score, the generator can be applied to web and app shopping-mall sites that want diverse reviews. This paper proposes review generation using KoGPT2, a scheme in which reviews are generated differently according to category and score, and a method combining the two. The proposed methods train a separate model for each category.
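
The conditioning scheme, a (category, score) pair turned into a prompt prefix that a per-category KoGPT2 model would continue, can be sketched as below. The category names and prompt format are assumptions for illustration, not the paper's exact tokens.

```python
# Illustrative sketch of score/category conditioning for review generation.
# A real pipeline would route the prompt to the KoGPT2 model fine-tuned
# for that category and let it continue the text.
CATEGORIES = ("clothing", "delivery", "size")

def build_prompt(category, score):
    assert category in CATEGORIES, f"unknown category: {category}"
    assert 1 <= score <= 5, "score must be 1-5"
    return f"[{category}|{score}/5] review:"

prompt = build_prompt("delivery", 5)
print(prompt)   # [delivery|5/5] review:
```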


Long-tail Query Expansion using Extractive and Generative Methods (롱테일 질의 확장을 위한 추출 및 생성 기반 모델)

  • Kim, Lae-Seon;Kim, Seong-soon;Jang, Heon-Seok;Park, Seok-Won;Kang, In-Ho
    • Annual Conference on Human and Language Technology
    • /
    • 2020.10a
    • /
    • pp.267-273
    • /
    • 2020
  • Queries that are entered into a search engine infrequently but are relatively long are called long-tail queries. Although long-tail queries account for a large share of the overall search log, recommending appropriate search terms for them is difficult because their forms are highly varied, their search intents are specific, and the volume of each individual query is often insufficient. To provide suitable search-term recommendations for long-tail queries, this paper proposes query-expansion methods based on an extractive model that exploits query-document click information and on generative models built on Seq2seq and GPT-2. Experiments and analysis show that the proposed methods can naturally expand long-tail queries that could not be handled before. By applying the results to a real service, we increased users' search convenience and confirmed the potential of language-model-based query expansion.
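
The extractive side, recommending head queries that share clicked documents with a long-tail query, can be sketched from a click log. The log below is invented example data, not the paper's service log.

```python
# Minimal sketch: rank candidate queries by how many clicked documents
# they share with the input long-tail query.
from collections import Counter

click_log = {  # query -> set of clicked document ids (invented data)
    "cheap wireless earbuds under 30 dollars": {"d1", "d2"},
    "wireless earbuds": {"d1", "d2", "d3"},
    "bluetooth earphones": {"d2", "d4"},
}

def expand(query, log, k=2):
    clicked = log[query]
    scores = Counter()
    for other, docs in log.items():
        if other != query:
            scores[other] = len(clicked & docs)   # shared-click overlap
    return [q for q, s in scores.most_common(k) if s > 0]

result = expand("cheap wireless earbuds under 30 dollars", click_log)
print(result)   # ['wireless earbuds', 'bluetooth earphones']
```

The generative models (Seq2seq, GPT-2) would cover the long-tail queries for which no such click overlap exists.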


Exploring Predictive Models for Student Success in National Physical Therapy Examination: Machine Learning Approach

  • Bokyung Kim;Yeonseop Lee;Jang-hoon Shin;Yusung Jang;Wansuk Choi
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.10
    • /
    • pp.113-120
    • /
    • 2024
  • This study aims to assess the effectiveness of machine learning models in predicting the pass rates of physical therapy students in national exams. Traditional grade prediction methods primarily rely on past academic performance or demographic data. However, this study employed machine learning and deep learning techniques to analyze mock test scores with the goal of improving prediction accuracy. Data from 1,242 students across five Korean universities were collected and preprocessed, followed by analysis using various models. Models, including those generated and fine-tuned with the assistance of ChatGPT-4, were applied to the dataset. The results showed that H2OAutoML (GBM2) performed the best with an accuracy of 98.4%, while TabNet, LightGBM, and RandomForest also demonstrated high performance. This study demonstrates the exceptional effectiveness of H2OAutoML (GBM2) in predicting national exam pass rates and suggests that these AI-assisted models can significantly contribute to medical education and policy.
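
A gradient-boosting classifier predicting pass/fail from mock-test scores can be sketched with scikit-learn standing in for the paper's H2OAutoML (GBM2) pipeline. The data below is synthetic with a toy pass rule, not the study's 1,242-student dataset.

```python
# Hedged sketch: gradient boosting on synthetic mock-exam scores.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
mock_scores = rng.uniform(40, 100, size=(200, 3))      # three mock exams
passed = (mock_scores.mean(axis=1) >= 60).astype(int)  # toy pass rule

model = GradientBoostingClassifier(random_state=0)
model.fit(mock_scores[:150], passed[:150])             # train split
acc = model.score(mock_scores[150:], passed[150:])     # held-out accuracy
print(acc)
```

On real data, the 98.4% figure would come from the tuned H2OAutoML model rather than this default configuration.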

A Study on the Web Building Assistant System Using GUI Object Detection and Large Language Model (웹 구축 보조 시스템에 대한 GUI 객체 감지 및 대규모 언어 모델 활용 연구)

  • Hyun-Cheol Jang;Hyungkuk Jang
    • Annual Conference of KIPS
    • /
    • 2024.05a
    • /
    • pp.830-833
    • /
    • 2024
  • As Large Language Models (LLMs) like OpenAI's ChatGPT[1] continue to grow in popularity, new applications and services are expected to emerge. This paper introduces an experimental study on a smart web-builder application assistance system that combines computer-vision-based GUI object recognition with ChatGPT (an LLM). The research strategy employed computer vision technology in conjunction with the design strategy of Microsoft's "ChatGPT for Robotics: Design Principles and Model Abilities"[2]. In addition, this research explores the capabilities of large language models like ChatGPT in various application design tasks, specifically in assisting with web-builder tasks. The study examines the ability of ChatGPT to synthesize code through both directed prompts and free-form conversation strategies. The researchers also explored ChatGPT's ability to perform various tasks within the builder domain, including functions, closed-loop inference, and basic logical and mathematical reasoning. Overall, this research proposes an efficient way to perform various application-system tasks by combining natural language commands with computer vision technology and an LLM (ChatGPT). This approach allows users to build applications while interacting through natural language commands.

Automatic Review Generation for Delivery Restaurant using Deep Learning Models (딥러닝을 이용한 배달 음식점 리뷰 자동 생성)

  • Kim, Nagyeong;Jo, Hyejin;Lee, Hyejin;Jung, Yuchul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.01a
    • /
    • pp.231-232
    • /
    • 2021
  • This paper proposes a method of generating category-specific, keyword-based delivery-restaurant reviews based on the results of training two deep learning models: a Keras-based LSTM model and the KoGPT-2 model. The data mainly concern taste, portion size, delivery, and price, and were divided by category. The newly generated text was made similar to the existing review data by judging meaning and context. Quantitative and qualitative evaluations were conducted to compare the performance of the models.


Structured Pruning for Efficient Transformer Model compression (효율적인 Transformer 모델 경량화를 위한 구조화된 프루닝)

  • Eunji Yoo;Youngjoo Lee
    • Transactions on Semiconductor Engineering
    • /
    • v.1 no.1
    • /
    • pp.23-30
    • /
    • 2023
  • With the recent development of generative AI technology by IT giants, transformer models are growing exponentially in size, exceeding a trillion parameters. To keep such AI services sustainable, compressing the model is essential. In this paper, we find a hardware-friendly structured pruning pattern and propose a compression method for transformer models. Because compression exploits the characteristics of the model's algorithm, the model size can be reduced while performance is preserved as much as possible. Experiments show that, when pruning the GPT-2 and BERT language models, the proposed structured pruning achieves performance almost identical to fine-grained pruning even in highly sparse regions. The approach reduces model parameters by 80% with only 0.003% accuracy loss compared to fine-grained pruning, while its structured form allows hardware acceleration.
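
Structured pruning at 80% sparsity can be sketched on a toy weight matrix: whole columns are scored and removed together, which is what makes the pattern hardware-friendly. This only approximates the idea; the paper's specific pattern and scoring are not reproduced here.

```python
# Sketch of structured (column-wise) magnitude pruning with numpy.
import numpy as np

def prune_columns(W, sparsity=0.8):
    norms = np.linalg.norm(W, axis=0)   # score each column by L2 norm
    k = int(W.shape[1] * sparsity)      # number of columns to remove
    drop = np.argsort(norms)[:k]        # indices of the weakest columns
    W = W.copy()
    W[:, drop] = 0.0                    # zero out whole columns
    return W

W = np.arange(1.0, 41.0).reshape(4, 10)   # toy 4x10 weight matrix
Wp = prune_columns(W)
achieved = (Wp == 0).mean()               # fraction of zeroed weights
print(achieved)   # 0.8
```

Because entire columns vanish, the surviving weights form a dense smaller matrix that standard hardware can multiply efficiently, unlike the scattered zeros of fine-grained pruning.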

Analysis of Users' Sentiments and Needs for ChatGPT through Social Media on Reddit (Reddit 소셜미디어를 활용한 ChatGPT에 대한 사용자의 감정 및 요구 분석)

  • Hye-In Na;Byeong-Hee Lee
    • Journal of Internet Computing and Services
    • /
    • v.25 no.2
    • /
    • pp.79-92
    • /
    • 2024
  • ChatGPT, as a representative chatbot leveraging generative artificial intelligence technology, has proven valuable not only in scientific and technological domains but also across diverse sectors such as society, economy, industry, and culture. This study conducts an exploratory analysis of user sentiments and needs for ChatGPT by examining global social media discourse on Reddit. We collected 10,796 comments on Reddit from December 2022 to August 2023 and then employed keyword analysis, sentiment analysis, and need-mining-based topic modeling to derive insights. The analysis reveals several key findings. The most frequently mentioned term in ChatGPT-related comments is "time," indicative of users' emphasis on prompt responses, time efficiency, and enhanced productivity. Users express trust in and anticipation of ChatGPT, yet simultaneously articulate concerns and frustrations, including fear and anger, regarding its societal impact. In addition, the topic modeling analysis identifies 14 topics, shedding light on potential user needs. Notably, users exhibit a keen interest in the educational applications of ChatGPT and its societal implications. Moreover, our investigation uncovers various user-driven topics related to ChatGPT, encompassing language models, jobs, information retrieval, healthcare applications, services, gaming, regulations, energy, and ethical concerns. In conclusion, this analysis provides insights into user perspectives, emphasizing the significance of understanding and addressing user needs. The identified application directions offer valuable guidance for enhancing existing products and services or planning the development of new service platforms.
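
The keyword-analysis step, counting frequent terms after stop-word removal, can be sketched with the standard library. The comments and stop list below are invented examples; they merely echo the paper's finding that "time" tops the frequency list.

```python
# Minimal sketch of keyword frequency analysis over comment text.
from collections import Counter
import re

comments = [  # invented stand-ins for Reddit comments
    "ChatGPT saves me so much time at work",
    "response time is the best part",
    "worried about jobs over time",
]
stopwords = {"the", "is", "at", "so", "me", "of", "about", "over"}

tokens = [w for c in comments
          for w in re.findall(r"[a-z]+", c.lower()) if w not in stopwords]
top = Counter(tokens).most_common(1)
print(top)   # [('time', 3)]
```

Sentiment analysis and topic modeling would then run on the same cleaned token stream with dedicated libraries.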

A Self-Guided Approach to Enhance Korean Text Generation in Writing Assistants (A Self-Guided Approach을 활용한 한국어 텍스트 생성 쓰기 보조 기법의 향상 방법)

  • Donghyeon Jang;Jinsu Kim;Minho Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.541-544
    • /
    • 2023
  • Fine-tuning a small language model (SLM) on the output of very large models such as ChatGPT and GPT-4 has attracted attention as a cost-effective way to obtain LLM-level performance. However, this approach is mainly used to train general-purpose instruction models, leaving room for further performance gains in restricted, specific domains. This study proposes the Self-Guided Approach, a new method for improving performance in a specific domain (writing assistant). The Self-Guided Approach (1) uses an LLM to assign domain-specific metric scores (helpfulness, relevance, accuracy, and level of detail) to seed data, and (2) fine-tunes an SLM in a supervised manner using both the scored and the unscored data. We evaluated SLMs trained with the Self-Guided Approach using the GPT-4-based automatic evaluation framework proposed in Vicuna. The evaluation confirmed that the Self-Guided Approach outperforms existing training methods that tune on generated instruction data, such as Self-Instruct and Alpaca. We confirmed these gains for Korean open-source LLMs of various scales (Polyglot 1.3B, 3.8B, and 5.8B), again using GPT-4-based automatic evaluation. In the Korean novel generation domain, the test-set score rose from 4.547 to 6.286; in the Korean scenario generation domain, it rose from 4.038 to 5.795; similar improvements were observed in other related domains. The Self-Guided Approach thus demonstrates the potential to improve SLM performance in a specific domain (writing assistant) while greatly reducing the cost burden relative to an LLM, and is expected to provide practical help for LLM-based application services in restricted domains.
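
Steps (1)-(2) of the Self-Guided Approach, attaching LLM-assigned metric scores to seed examples and then merging scored and unscored data into one supervised fine-tuning set, can be sketched as data preparation. The scoring function here is a stand-in; the paper uses an LLM judge, and the field names are assumptions.

```python
# Hedged sketch of Self-Guided training-set construction.
METRICS = ("helpfulness", "relevance", "accuracy", "level_of_detail")

def attach_scores(example, scores):
    """Attach LLM-judge metric scores (stand-in values here) to an example."""
    assert set(scores) == set(METRICS)
    return {**example, "scores": scores}

seed = {"prompt": "Write the opening of a novel.", "output": "..."}
scored = attach_scores(seed, {m: 4 for m in METRICS})  # invented scores
unscored = [{"prompt": "Continue the scene.", "output": "..."}]

# Both scored and unscored data feed the supervised SLM fine-tuning step.
train_set = [scored] + unscored
print(len(train_set), "scores" in train_set[0])   # 2 True
```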
