• Title/Summary/Keyword: GPT-based


Analyzing financial time series data using the GARCH model (일반 자기회귀 이분산 모형을 이용한 시계열 자료 분석)

  • Kim, Sahm; Kim, Jin-A
    • Journal of the Korean Data and Information Science Society / v.20 no.3 / pp.475-483 / 2009
  • In this paper we introduce a class of nonlinear time series models to analyze KOSPI data. We propose the Generalized Power-Transformation TGARCH (GPT-TGARCH) model, which includes the Zakoian (1993) and Li and Li (1996) models as special cases. We demonstrate the effectiveness and efficiency of the new model on KOSPI data.

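As a rough illustration of the kind of threshold-GARCH modelling the abstract describes, the sketch below fits an asymmetric GARCH model to KOSPI log returns with the `arch` package. It is not the paper's GPT-TGARCH implementation; the data file, column names, and distribution choice are assumptions.

```python
# A minimal sketch (not the paper's GPT-TGARCH): an asymmetric GARCH fit on KOSPI
# log returns using the `arch` package. "kospi.csv" and its columns are hypothetical.
import numpy as np
import pandas as pd
from arch import arch_model

prices = pd.read_csv("kospi.csv", parse_dates=["date"], index_col="date")
returns = 100 * np.log(prices["close"]).diff().dropna()

# o=1 adds the threshold (leverage) term as in Zakoian (1993);
# power=1.0 models the conditional standard deviation of absolute returns
# rather than the variance, a special case of a power transformation.
model = arch_model(returns, vol="GARCH", p=1, o=1, q=1, power=1.0, dist="t")
result = model.fit(disp="off")
print(result.summary())
```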

Generative Interactive Psychotherapy Expert (GIPE) Bot

  • Ayesheh Ahrari Khalaf; Aisha Hassan Abdalla Hashim; Akeem Olowolayemo; Rashidah Funke Olanrewaju
    • International Journal of Computer Science & Network Security / v.23 no.4 / pp.15-24 / 2023
  • One of the objectives and aspirations of scientists and engineers ever since the development of computers has been to interact naturally with machines. Hence, features of artificial intelligence (AI) such as natural language processing and natural language generation were developed. Interactive conversational systems are considered the fastest-growing field of AI. Numerous businesses have created Virtual Personal Assistants (VPAs) using these technologies, including Apple's Siri, Amazon's Alexa, and Google Assistant, among others. Even though many chatbots have been introduced over the years to diagnose or treat psychological disorders, a user-friendly chatbot is still not widely available. A smart generative cognitive behavioral therapy system with spoken-dialogue support was therefore developed using a Persona Perception (P2) bot with Generative Pre-trained Transformer-2 (GPT-2). The model was then implemented using technologies common in modern VPAs, such as voice recognition, Natural Language Understanding (NLU), and text-to-speech. The system can hold therapeutic conversations with users through both text and voice, making it well suited to voice-based applications.
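
The sketch below illustrates only the generative component the abstract mentions: producing a conversational reply with an off-the-shelf GPT-2 from Hugging Face. The persona conditioning, NLU, speech recognition, and text-to-speech layers of the GIPE system are not reproduced, and the prompt format is an assumption.

```python
# A minimal generation sketch with vanilla GPT-2; the therapy-style prompt is
# illustrative only and does not reflect the paper's P2 persona conditioning.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "User: I have been feeling anxious lately.\nTherapist:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```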

QA Pair Passage RAG-based LLM Korean chatbot service (QA Pair Passage RAG 기반 LLM 한국어 챗봇 서비스)

  • Joongmin Shin; Jaewwook Lee; Kyungmin Kim; Taemin Lee; Sungmin Ahn; JeongBae Park; Heuiseok Lim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.683-689 / 2023
  • The field of natural language processing has recently seen major advances, and the emergence of very large language models in particular has had a profound impact on the field. Models such as GPT achieve high performance on a wide range of NLP tasks and have become especially important in the chatbot domain. These models nevertheless have several limitations and problems, one of which is that they can generate unexpected results. Among the various approaches proposed to address this, Retrieval-Augmented Generation (RAG) has attracted attention. This paper proposes a way to improve the efficiency of a domain-specific question-answering system through integration with a knowledge base, and a method for correcting and updating chatbot answers by modifying the vector database. The main contributions of this paper are as follows: 1) a new RAG system based on QA Pair Passage RAG and an analysis of its performance gains; 2) performance measurements of existing LLM and RAG systems and identification of their limitations; 3) a chatbot control methodology using RDBMS-based vector search and updates.

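As a hedged illustration of the QA-pair retrieval idea described above, the sketch below embeds stored question-answer passages, retrieves the most similar one for a user query, and assembles an augmented prompt for an LLM. The encoder checkpoint and example passages are assumptions; the paper's RDBMS-backed vector store and answer-update mechanism are not reproduced.

```python
# A minimal retrieval-augmented-prompt sketch: embed QA-pair passages, pick the
# nearest one by cosine similarity, and build the prompt that would go to the LLM.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # assumed encoder

qa_passages = [
    "Q: What are the library's opening hours? A: 9 a.m. to 6 p.m. on weekdays.",
    "Q: How do I reset my password? A: Use the 'Forgot password' link on the login page.",
]
passage_vecs = encoder.encode(qa_passages, normalize_embeddings=True)

query = "When does the library open?"
query_vec = encoder.encode([query], normalize_embeddings=True)[0]

best = int(np.argmax(passage_vecs @ query_vec))  # cosine similarity via dot product
prompt = (
    "Answer the question using only the passage below.\n"
    f"Passage: {qa_passages[best]}\n"
    f"Question: {query}\nAnswer:"
)
print(prompt)  # this augmented prompt would then be sent to the LLM
```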

Applying NIST AI Risk Management Framework: Case Study on NTIS Database Analysis Using MAP, MEASURE, MANAGE Approaches (NIST AI 위험 관리 프레임워크 적용: NTIS 데이터베이스 분석의 MAP, MEASURE, MANAGE 접근 사례 연구)

  • Jung Sun Lim; Seoung Hun Bae; Taehoon Kwon
    • Journal of Korean Society of Industrial and Systems Engineering / v.47 no.2 / pp.21-29 / 2024
  • Fueled by international efforts toward AI standardization, including those by the European Commission, the United States, and international organizations, this study introduces an AI-driven framework for analyzing advancements in drone technology. Using project data retrieved from the NTIS DB with the "drone" keyword, the framework employs a diverse toolkit of supervised learning methods (Keras MLP, XGBoost, LightGBM, and CatBoost) complemented by BERTopic, a natural language analysis tool. This multifaceted approach enables both comprehensive data-quality evaluation and in-depth structural analysis of documents. Furthermore, a 6T-based classification method filters out non-applicable data for year-on-year AI analysis, measurably improving classification accuracy. Leveraging AI tools, including GPT-4, the research identifies year-on-year trends in emerging keywords and uses them to generate detailed summaries, enabling efficient processing of large text datasets and offering an AI analysis system applicable to policy domains. Notably, this study not only advances methodologies aligned with AI Act standards but also lays the groundwork for responsible AI implementation through analysis of government research and development investments.
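
As a minimal sketch of the supervised-classification step mentioned above (using only XGBoost of the four listed models), the code below classifies short project descriptions into technology categories from TF-IDF features. The texts and labels are toy stand-ins for NTIS records and 6T classes; BERTopic and the GPT-4 summarization step are not shown.

```python
# A minimal supervised-classification sketch: XGBoost on TF-IDF features.
# Texts and labels are toy stand-ins for NTIS project records and 6T categories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

texts = [
    "collision avoidance control for multirotor drones",
    "drone imaging for precision agriculture monitoring",
    "lightweight battery chemistry for long endurance flight",
    "autonomous navigation and path planning for UAVs",
    "crop health estimation from aerial multispectral images",
    "fuel cell power systems for unmanned aerial vehicles",
]
labels = [0, 1, 2, 0, 1, 2]  # toy category ids standing in for 6T classes

X = TfidfVectorizer().fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=0, stratify=labels
)

clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```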

A Study on Generative AI-Based Feedback Techniques for Tutoring Beginners' Error Codes on Online Judge Platforms

  • Juyeon Lee; Seung-Hyun Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.191-200 / 2024
  • The rapid advancement of computer technology and artificial intelligence has significantly impacted software education in Korea, and the 2022 revised curriculum consequently calls for personalized education. However, implementing personalized education in schools is challenging. This study aims to facilitate personalized education by constructing prompts from the incorrect code and error information submitted by beginners, and it examines how the frequency of correct feedback varies with the generative AI model and the prompt design. The results indicate that providing appropriate error information in the prompts yields better feedback than relying solely on the capability of the generative AI model itself. Through this research, we hope to lay a foundation for personalized programming education in Korea.
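
A minimal sketch of the prompt-construction idea is shown below: the learner's incorrect submission and the judge's error information are embedded in the prompt so that a generative model can return targeted feedback. The problem statement, student code, error message, and model name are illustrative assumptions, not the paper's actual prompts.

```python
# A minimal prompt-construction sketch: wrong code + judge feedback -> tutoring prompt.
# The problem, code, error text, and model name are hypothetical.
from openai import OpenAI

problem = "Read two integers and print their sum."
wrong_code = "a, b = input().split()\nprint(a + b)"
error_info = "Wrong answer: for input '1 2' the program printed '12' instead of '3'."

prompt = (
    "You are a tutor for beginner programmers.\n"
    f"Problem: {problem}\n"
    f"Student code:\n{wrong_code}\n"
    f"Judge result: {error_info}\n"
    "Explain the cause of the error and hint at a fix without giving the full solution."
)

client = OpenAI()  # requires OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; the abstract only says "generative AI model"
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```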

Evaluating the Accuracy of Artificial Intelligence-Based Chatbots on Pediatric Dentistry Questions in the Korean National Dental Board Exam

  • Yun Sun Jung; Yong Kwon Chae; Mi Sun Kim; Hyo-Seol Lee; Sung Chul Choi; Ok Hyung Nam
    • Journal of the Korean Academy of Pediatric Dentistry / v.51 no.3 / pp.299-309 / 2024
  • This study aimed to assess the competency of artificial intelligence (AI) in pediatric dentistry and compare it with that of dentists. We used open-source data obtained from the Korea Health Personnel Licensing Examination Institute. A total of 32 multiple-choice pediatric dentistry exam questions were included. Two AI-based chatbots (ChatGPT 3.5 and Gemini) were evaluated. Each chatbot received the same questions seven times in separate chat sessions initiated on April 25, 2024. Accuracy was assessed as the percentage of correct answers, and consistency was evaluated using Cronbach's alpha coefficient. ChatGPT 3.5 and Gemini demonstrated similar accuracy, with no significant differences between them. However, neither chatbot achieved the minimum passing score set by the Pediatric Dentistry National Examination, although both exhibited acceptable consistency in their responses. Within the limits of this study, neither AI-based chatbot answered the pediatric dentistry exam questions sufficiently well. This finding suggests that pediatric dentists should be aware of the advantages and limitations of this new tool and use it effectively to promote patient health.
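
The sketch below illustrates the two measures named in the abstract, accuracy as the share of correct answers and Cronbach's alpha across the seven repeated sessions, on a fabricated 0/1 correctness matrix. The numbers are toy data, not the study's results.

```python
# Accuracy and Cronbach's alpha on a toy correctness matrix
# (rows = 32 exam questions, columns = 7 chat sessions).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: questions x sessions matrix of 0/1 correctness."""
    k = scores.shape[1]                      # number of sessions ("items")
    item_vars = scores.var(axis=0, ddof=1)   # variance of each session
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
answers = rng.integers(0, 2, size=(32, 7))  # fabricated toy data

accuracy_per_session = answers.mean(axis=0) * 100
print("accuracy per session (%):", np.round(accuracy_per_session, 1))
print("Cronbach's alpha:", round(cronbach_alpha(answers), 3))
```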

Development of a case-based nursing education program using generative artificial intelligence (생성형 인공지능을 활용한 사례 기반 간호 교육 프로그램 개발)

  • Ahn, Jeonghee; Park, Hye Ok
    • The Journal of Korean Academic Society of Nursing Education / v.29 no.3 / pp.234-246 / 2023
  • Purpose: This study aimed to develop a case-based nursing education program using generative artificial intelligence and to assess its usability and applicability in nursing curricula. Methods: The program was developed by following the five steps of the ADDIE model: analysis, design, development, implementation, and evaluation. A panel of five nursing professors served as experts to implement and evaluate the program. Results: Utilizing ChatGPT, six program modules were designed and developed based on experiential learning theory. The experts' evaluations confirmed that the program was suitable for case-based learning, highly usable, and applicable to nursing education. Conclusion: Generative artificial intelligence was identified as a valuable tool for enhancing the effectiveness of case-based learning. This study provides insights and future directions for integrating generative artificial intelligence into nursing education. Further research is needed to implement and evaluate this program with nursing students.

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters (감성 분석을 위한 FinBERT 미세 조정: 데이터 세트와 하이퍼파라미터의 효과성 탐구)

  • Jae Heon Kim; Hui Do Jung; Beakcheol Jang
    • Journal of Internet Computing and Services / v.24 no.4 / pp.127-135 / 2023
  • This paper explores the application of FinBERT, a BERT-based model pre-trained on the financial domain, to sentiment analysis in finance, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to using the FinBERT model effectively for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning the FinBERT model, emphasizing how different datasets and hyperparameters affect performance on sentiment analysis tasks. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment labeling. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination is a learning rate of 5e-5 and a batch size of 64, which performs consistently well across all datasets. Furthermore, because the FinBERT model improved markedly more with our general-domain Twitter data than with our general-domain news data, we also express uncertainty about the model having been further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to the FinBERT model and provide guidelines for selecting additional training datasets and hyperparameters when fine-tuning financial sentiment analysis models.
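
The sketch below reflects the reported hyperparameters (learning rate 5e-5, batch size 64) in a minimal Hugging Face fine-tuning loop. The FinBERT checkpoint name and the two-example dataset are assumptions; the paper's actual datasets and the GPT-3 labeling step are not reproduced.

```python
# A minimal fine-tuning sketch using the hyperparameters reported above.
# Checkpoint name and toy data are assumptions, not the paper's setup.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "ProsusAI/finbert"  # assumed FinBERT checkpoint on the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

data = Dataset.from_dict({
    "text": ["Shares rallied after the strong earnings report.",
             "The company warned of a sharp revenue decline."],
    "label": [0, 1],  # toy sentiment labels in place of GPT-3-generated annotations
})
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True,
)

args = TrainingArguments(
    output_dir="finbert-sentiment",
    learning_rate=5e-5,              # reported optimum
    per_device_train_batch_size=64,  # reported optimum
    num_train_epochs=3,
)
Trainer(model=model, args=args, train_dataset=data).train()
```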

A Proposal of Evaluation of Large Language Models Built Based on Research Data (연구데이터 관점에서 본 거대언어모델 품질 평가 기준 제언)

  • Na-eun Han; Sujeong Seo; Jung-ho Um
    • Journal of the Korean Society for Information Management / v.40 no.3 / pp.77-98 / 2023
  • Large Language Models (LLMs) are becoming the major trend in the natural language processing field. These models were built on research data, but information such as the types of data used, their limitations, and the risks of using them is largely unknown. This study presents how to analyze and evaluate, from the perspective of research data, LLMs built with research data: LLaMA and LLaMA-based models such as Stanford's Alpaca and the Large Model Systems Organization's Vicuna, as well as OpenAI's ChatGPT. The quality evaluation focuses on the validity, functionality, and reliability dimensions of Data Quality Management (DQM). Furthermore, we adopt the Holistic Evaluation of Language Models (HELM) framework to examine its evaluation criteria and discuss its limitations. This study presents quality evaluation criteria for LLMs built on research data and suggests future development directions.

Performance Evaluation of Pre-trained Language Models in Multi-Goal Conversational Recommender Systems (다중목표 대화형 추천시스템을 위한 사전 학습된 언어모델들에 대한 성능 평가)

  • Taeho Kim; Hyung-Jun Jang; Sang-Wook Kim
    • Smart Media Journal / v.12 no.6 / pp.35-40 / 2023
  • In this study, we examine pre-trained language models used in Multi-Goal Conversational Recommender Systems (MG-CRS), comparing and analyzing the performance of various pre-trained language models. Specifically, we investigate the impact of language-model size on MG-CRS performance. The study targets three types of language models, BERT, GPT2, and BART, and measures and compares their accuracy on two tasks, 'type prediction' and 'topic prediction', using the MG-CRS dataset DuRecDial 2.0. Experimental results show that all models performed well on the type prediction task, but there were significant performance differences among the models and across their sizes on the topic prediction task. Based on these findings, the study provides directions for improving the performance of MG-CRS.
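
As a hedged sketch of how 'type prediction' can be framed as sequence classification over the dialogue context, and of comparing model sizes as the study does, the code below runs two BERT checkpoints of different sizes on a toy dialogue. The checkpoints, goal-type labels, and example utterance are assumptions; the classification heads here are untrained, whereas the study fine-tunes on DuRecDial 2.0.

```python
# Type prediction as sequence classification, compared across two BERT sizes.
# Labels, dialogue, and checkpoints are illustrative; heads are not fine-tuned.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

goal_types = ["QA", "chit-chat", "movie recommendation", "music recommendation"]
dialogue = "User: I loved the last sci-fi film you suggested. Any similar movies?"

for checkpoint in ["bert-base-uncased", "bert-large-uncased"]:
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=len(goal_types)
    )  # the classification head would be fine-tuned on DuRecDial 2.0 in practice
    inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    n_params = sum(p.numel() for p in model.parameters()) / 1e6
    print(f"{checkpoint}: {n_params:.0f}M params, "
          f"predicted type = {goal_types[int(logits.argmax())]}")
```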