• Title/Summary/Keyword: language models

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.125-144 / 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in fields such as interactive commerce and mobile financial services. However, LLMs, which are mainly built by learning from existing documents, can also absorb the human biases inherent in those documents. Nevertheless, there have been few comparative studies on bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (age, disability status, gender identity, nationality, physical appearance, race/ethnicity, religion, socio-economic status, sexual orientation) in LLMs and to suggest ways to improve them. For this purpose, we used BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three LLMs: ChatGPT, GPT-3, and Bing Chat. The evaluation revealed a large number of discriminatory responses, with patterns that differed across the models. In particular, problems were exposed in age and disability discrimination, which fall outside traditional AI ethics concerns such as sexism, racism, and economic inequality, suggesting a new perspective on AI ethics. Based on the comparison results, this paper describes how to improve and develop LLMs in the future. (An illustrative BBQ-style evaluation loop is sketched below.)
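
As a rough illustration of the kind of evaluation described above, the sketch below tallies how often a model picks the stereotype-aligned option for ambiguous BBQ-style items. The item fields, the toy example, and the `query_model` callable are assumptions for illustration, not the BBQ loader or the study's exact scoring.

```python
# Illustrative BBQ-style evaluation loop (toy data, assumed field names).
from typing import Callable

items = [
    {
        "context": "A 24-year-old and a 78-year-old both applied for the job.",
        "question": "Who is unable to learn new software?",
        "options": ["The 24-year-old", "The 78-year-old", "Cannot be determined"],
        "unknown_index": 2,          # correct answer when the context is ambiguous
        "stereotyped_index": 1,      # answer that reflects the age stereotype
    },
]

def evaluate(query_model: Callable[[str], int]) -> float:
    """Return the fraction of ambiguous items answered with the stereotyped option."""
    biased = 0
    for item in items:
        prompt = f"{item['context']}\n{item['question']}\nOptions: {item['options']}"
        choice = query_model(prompt)             # the model returns an option index
        if choice == item["stereotyped_index"]:
            biased += 1
    return biased / len(items)

# Example with a dummy model that always picks the stereotyped option.
print(evaluate(lambda prompt: 1))
```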

A Proposal of Evaluation of Large Language Models Built Based on Research Data (연구데이터 관점에서 본 거대언어모델 품질 평가 기준 제언)

  • Na-eun Han;Sujeong Seo;Jung-ho Um
    • Journal of the Korean Society for Information Management / v.40 no.3 / pp.77-98 / 2023
  • Large Language Models (LLMs) are becoming the major trend in the natural language processing field. These models are built on research data, but information about that data, such as its types, limitations, and risks of use, is largely unknown. This research presents how to analyze and evaluate, from the perspective of research data, LLMs built with such data: LLaMA and LLaMA-based models such as Stanford's Alpaca, Vicuna from the Large Model Systems Organization, and ChatGPT from OpenAI. The quality evaluation focuses on validity, functionality, and reliability, drawing on Data Quality Management (DQM). Furthermore, we adopted the Holistic Evaluation of Language Models (HELM) framework to understand its evaluation criteria and then discussed its limitations. This study presents quality evaluation criteria for LLMs from the perspective of research data, along with future development directions.

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata / v.7 no.2 / pp.11-29 / 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing tasks. Because they have been pre-trained on large corpora, high performance can be expected even after fine-tuning on a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and development time of natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models that provide these advantages, and they are also being actively used in other fields such as computer vision and audio applications. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper describes the definitions of the language model and the pre-trained language model, and discusses the development of pre-trained language models, especially representative Transformer variants. (A minimal fine-tuning sketch follows below.)
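
A minimal sketch of the workflow described above, assuming the Hugging Face Transformers library and the `bert-base-uncased` checkpoint (neither is named in the paper): the pre-trained tokenizer and weights are downloaded together and fine-tuned on a small labeled dataset.

```python
# Load a pre-trained tokenizer and model distributed together, then fine-tune briefly.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tiny illustrative dataset; pre-trained weights are why little labeled data is needed.
texts = ["great product", "terrible service"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes of fine-tuning
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```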

Robust Sentiment Classification of Metaverse Services Using a Pre-trained Language Model with Soft Voting

  • Haein Lee;Hae Sun Jung;Seon Hong Lee;Jang Hyun Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.9 / pp.2334-2347 / 2023
  • Metaverse services generate text data, a form of ubiquitous computing data, in real time, and analyzing user emotions from this data is an important task for such services. This study aims to classify user sentiments using deep learning and pre-trained language models based on the Transformer architecture. Whereas previous studies collected data from a single platform, the current study incorporated review data retrieved with the keyword "Metaverse" from both the YouTube and Google Play Store platforms for broader applicability. As a result, the Bidirectional Encoder Representations from Transformers (BERT) and Robustly optimized BERT approach (RoBERTa) models combined through a soft voting mechanism achieved the highest accuracy, 88.57%. In addition, the area under the curve (AUC) score of the ensemble comprising RoBERTa, BERT, and A Lite BERT (ALBERT) was 0.9458. The results demonstrate that ensembles including the RoBERTa model exhibit strong performance; the RoBERTa model can therefore be applied to platforms that provide metaverse services. The findings contribute to the advancement of natural language processing techniques in metaverse services, which are increasingly important in digital platforms and virtual environments. Overall, this study provides empirical evidence that sentiment analysis using deep learning and pre-trained language models is a promising approach to improving user experiences in metaverse services. (A soft-voting sketch appears below.)
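
A minimal sketch of a soft-voting ensemble along the lines described above: class probabilities from BERT, RoBERTa, and ALBERT sentiment classifiers are averaged and the class with the highest mean probability wins. The checkpoint names stand in for models already fine-tuned on the review data; they are not the authors' exact models.

```python
# Soft voting: average softmax probabilities from several classifiers, then argmax.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINTS = ["bert-base-uncased", "roberta-base", "albert-base-v2"]  # assumed fine-tuned

def soft_vote(text: str) -> int:
    probs = []
    for name in CHECKPOINTS:
        tok = AutoTokenizer.from_pretrained(name)
        mdl = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
        inputs = tok(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = mdl(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    mean_probs = torch.stack(probs).mean(dim=0)   # average class probabilities
    return int(mean_probs.argmax(dim=-1))          # 0 = negative, 1 = positive

print(soft_vote("The metaverse concert was amazing"))
```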

A study on the didactical application of ChatGPT for mathematical word problem solving (수학 문장제 해결과 관련한 ChatGPT의 교수학적 활용 방안 모색)

  • Kang, Yunji
    • Communications of Mathematical Education / v.38 no.1 / pp.49-67 / 2024
  • Recent interest in the diverse applications of artificial intelligence (AI) language models has highlighted the need to explore their didactical uses in mathematics education. AI language models, capable of natural language processing, show promise for solving mathematical word problems. This study tested the capability of ChatGPT, an AI language model, to solve word problems from elementary school textbooks, and analyzed both the solution processes and the errors made. The results showed that the AI language model achieved an accuracy rate of 81.08%, with errors in problem comprehension, equation formulation, and calculation. Based on this analysis of solution processes and error types, the study suggests implications for the didactical application of AI language models in education. (A sketch of this kind of testing follows below.)
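
An illustrative sketch of this kind of testing: each word problem is posed to a chat model through the OpenAI API and the final number in the reply is compared with the textbook answer. The model name, prompt wording, and answer-extraction heuristic are assumptions, not the study's protocol.

```python
# Pose word problems to a chat model and tally accuracy against known answers.
import re
from openai import OpenAI

client = OpenAI()  # requires the OPENAI_API_KEY environment variable

problems = [
    {"question": "A class has 4 rows of 6 students. How many students are there?",
     "answer": 24},
]

correct = 0
for p in problems:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": p["question"] + " Give only the final number."}],
    )
    numbers = re.findall(r"-?\d+\.?\d*", resp.choices[0].message.content)
    if numbers and float(numbers[-1]) == p["answer"]:
        correct += 1

print(f"accuracy: {correct / len(problems):.2%}")
```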

Enhancing LoRA Fine-tuning Performance Using Curriculum Learning

  • Daegeon Kim;Namgyu Kim
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.43-54 / 2024
  • Recently, there has been extensive research on utilizing language models, and Large Language Models have achieved innovative results in various tasks. However, their practical application is limited by the resources and costs required to use them. Consequently, there has been growing attention to methods for using models effectively within given resources. Curriculum Learning, a methodology that orders training data by difficulty and learns it sequentially, has been attracting attention, but existing ways of measuring difficulty are often complex or not universally applicable. Therefore, in this study, we propose a data heterogeneity-based Curriculum Learning methodology that measures the difficulty of data using reliable prior information and is easy to apply across various tasks. To evaluate the proposed methodology, experiments were conducted using 5,000 specialized documents in the field of information and communication technology and 4,917 documents in the field of healthcare. The results confirm that the proposed methodology outperforms traditional fine-tuning in classification accuracy under both LoRA fine-tuning and full fine-tuning. (A curriculum-ordered LoRA sketch follows below.)
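
A minimal sketch of curriculum-ordered LoRA fine-tuning, assuming the Hugging Face Transformers and PEFT libraries. The difficulty score used here (sentence length) is only a stand-in for the paper's data heterogeneity-based measure, and the model and examples are placeholders.

```python
# Curriculum Learning + LoRA: sort examples easy-to-hard, train only low-rank adapters.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
base = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Wrap the base model with low-rank adapters; only adapter (and head) weights train.
model = get_peft_model(base, LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16))

examples = [
    ("Routers forward packets.", 1),
    ("A border gateway protocol session exchanges routing prefixes between autonomous systems.", 1),
    ("Nice weather today.", 0),
]

# Curriculum: order training examples from easy to hard by the proxy difficulty score.
examples.sort(key=lambda ex: len(ex[0].split()))

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for text, label in examples:  # one pass, easiest first
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```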

Enhancement of a language model using two separate corpora of distinct characteristics

  • Cho, Sehyeong;Chung, Tae-Sun
    • Journal of the Korean Institute of Intelligent Systems / v.14 no.3 / pp.357-362 / 2004
  • Language models are essential for predicting the next word in a spoken sentence, thereby enhancing speech recognition accuracy, among other things. However, spoken language domains are too numerous, and developers therefore suffer from a lack of corpora of sufficient size. This paper proposes a method of combining two n-gram language models, one constructed from a very small corpus in the domain of interest and the other from a large but less adequate corpus, resulting in a significantly enhanced language model. The method is based on the observation that a small corpus from the right domain has high-quality n-grams but a serious sparseness problem, while a large corpus from a different domain has richer n-gram statistics but is biased toward the wrong domain. In our approach, the two sets of n-gram statistics are combined by extending the idea of Katz's backoff, and the method is therefore called dual-source backoff. We ran experiments with 3-gram language models constructed from newspaper corpora of several million to tens of millions of words, together with models from smaller broadcast news corpora; the target domain was broadcast news. We obtained a significant improvement (30%) by incorporating a small corpus about one-thirtieth the size of the newspaper corpus. (A simplified backoff sketch appears below.)
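
A simplified illustration of the dual-source idea: bigram statistics from the small in-domain corpus are preferred, with backoff first to the large out-of-domain corpus and then to an in-domain unigram. Katz backoff also discounts counts to reserve probability mass; that step is omitted here, and the counts are toy values.

```python
# Simplified dual-source backoff over bigram counts from two corpora (no discounting).
from collections import Counter

small_bigrams = Counter({("breaking", "news"): 4})           # in-domain (broadcast news)
small_unigrams = Counter({"breaking": 5, "news": 7, "the": 20})
large_bigrams = Counter({("breaking", "news"): 30, ("breaking", "ground"): 15})
large_unigrams = Counter({"breaking": 60, "ground": 25, "news": 40})

def dual_source_prob(prev: str, word: str) -> float:
    """P(word | prev), preferring in-domain counts and backing off to the large corpus."""
    if small_bigrams[(prev, word)] > 0:
        return small_bigrams[(prev, word)] / small_unigrams[prev]
    if large_bigrams[(prev, word)] > 0:                        # back off to out-of-domain bigrams
        return large_bigrams[(prev, word)] / large_unigrams[prev]
    return small_unigrams[word] / sum(small_unigrams.values()) # final backoff: in-domain unigram

print(dual_source_prob("breaking", "news"))    # served by the small in-domain corpus
print(dual_source_prob("breaking", "ground"))  # backs off to the large corpus
```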

Multi-level Product Information Modeling for Managing Long-term Life-cycle Product Information (수명주기가 긴 제품의 설계정보관리를 위한 다층 제품정보 모델링 방안)

  • Lee, Jae-Hyun;Suh, Hyo-Won
    • Korean Journal of Computational Design and Engineering / v.17 no.4 / pp.234-245 / 2012
  • This paper proposes a multi-level product modeling framework for products with long lifecycles. The framework helps engineers define product models and relate them to physical instances. It is defined at three levels: data, design model, and modeling language. The data level represents real-world products; the design model level describes design models of those products; and the modeling language level defines the concepts and relationships used to describe product design models. The concepts and relationships at the modeling language level enable engineers to express the semantics of product models in an engineering-friendly way. The interactions between the three levels are explained to show how the framework can manage product information over a long lifecycle, and a prototype system is provided for further understanding of the framework. (A rough structural sketch follows below.)
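
A rough structural sketch of the three levels described above, with class and attribute names invented for illustration: the modeling-language level defines concepts, the design-model level instantiates them as product design models, and the data level links physical instances back to those models.

```python
# Three-level product information structure: language -> design model -> data.
from dataclasses import dataclass, field

@dataclass
class Concept:                      # modeling-language level: concepts and their attributes
    name: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class DesignModel:                  # design-model level: a product design described by a concept
    concept: Concept
    name: str
    values: dict[str, str] = field(default_factory=dict)

@dataclass
class PhysicalInstance:             # data level: a real-world product tied to its design model
    model: DesignModel
    serial_number: str

pump_concept = Concept("RotatingEquipment", ["rated_power", "material"])
pump_design = DesignModel(pump_concept, "Pump-A200", {"rated_power": "15kW", "material": "steel"})
pump_unit = PhysicalInstance(pump_design, serial_number="SN-0001")

print(pump_unit.model.concept.name)  # trace a physical instance back to its concept
```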

Generation of 3D Sign Language Animation Using Spline Interpolation (스플라인 보간법을 이용한 3차원 수화 애니메이션의 생성)

  • ;吳芝英
    • Proceedings of the IEEK Conference / 1998.10a / pp.931-934 / 1998
  • We previously implemented sign language animation systems using 2D and 3D models. From those studies, we found that both models have several limitations stemming from linear interpolation and a fixed number of frames, which result in inaccurate actions and unnatural movements. To solve these problems, this paper proposes a sign language animation system that uses a spline interpolation method and a variable number of frames. Experimental results show that the proposed method generates animation more accurately and rapidly than the previous methods. (A keyframe spline sketch follows below.)
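
A minimal sketch of the proposed idea, assuming SciPy's CubicSpline and invented keyframe values: keyframe joint positions are interpolated with a cubic spline, and the number of in-between frames varies per segment instead of being fixed.

```python
# Cubic-spline interpolation of sign keyframes with a variable per-segment frame count.
import numpy as np
from scipy.interpolate import CubicSpline

key_times = np.array([0.0, 0.4, 1.0, 1.5])            # seconds at which keyframes occur
key_positions = np.array([                              # (x, y, z) of one hand joint per keyframe
    [0.0, 0.0, 0.0],
    [0.2, 0.5, 0.1],
    [0.6, 0.4, 0.3],
    [0.7, 0.1, 0.2],
])

spline = CubicSpline(key_times, key_positions, axis=0)  # smooth, non-linear trajectory

frames = []
for start, end in zip(key_times[:-1], key_times[1:]):
    # Variable number of frames: longer segments receive more in-between frames.
    n_frames = max(2, int((end - start) * 30))          # e.g. a 30 fps budget per segment
    frames.append(spline(np.linspace(start, end, n_frames, endpoint=False)))

trajectory = np.vstack(frames + [key_positions[-1:]])
print(trajectory.shape)                                  # interpolated hand positions per frame
```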
