• Title/Summary/Keyword: large language model

Search Result 300, Processing Time 0.031 seconds

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.125-144
    • /
    • 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in various fields such as interactive commerce and mobile financial services. However, LMMs, which are mainly created by learning existing documents, can also learn various human biases inherent in documents. Nevertheless, there have been few comparative studies on the aspects of bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (Age, Disability status, Gender identity, Nationality, Physical appearance, Race ethnicity, Religion, Socio-economic status, Sexual orientation) in LLMs and suggest ways to improve them. For this purpose, we utilized BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three large-scale language models including ChatGPT, GPT-3, and Bing Chat. As a result of the evaluation, a large number of discriminatory responses were observed in the mega-language models, and the patterns differed depending on the mega-language model. In particular, problems were exposed in elder discrimination and disability discrimination, which are not traditional AI ethics issues such as sexism, racism, and economic inequality, and a new perspective on AI ethics was found. Based on the results of the comparison, this paper describes how to improve and develop large-scale language models in the future.

Korean broadcast news transcription system with out-of-vocabulary(OOV) update module (한국어 방송 뉴스 인식 시스템을 위한 OOV update module)

  • Jung Eui-Jung;Yun Seung
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • spring
    • /
    • pp.33-36
    • /
    • 2002
  • We implemented a robust Korean broadcast news transcription system for out-of-vocabulary (OOV), tested its performance. The occurrence of OOV words in the input speech is inevitable in large vocabulary continuous speech recognition (LVCSR). The known vocabulary will never be complete due to the existence of for instance neologisms, proper names, and compounds in some languages. The fixed vocabulary and language model of LVCSR system directly face with these OOV words. Therefore our Broadcast news recognition system has an offline OOV update module of language model and vocabulary to solve OOV problem and selects morpheme-based recognition unit (so called, pseudo-morpheme) for OOV robustness.

  • PDF

Zero-shot Dialogue System Grounded in Multiple Documents (Zero-shot 기반 다중 문서 그라운딩된 대화 시스템)

  • Jun-Bum Park;Beomseok Hong;Wonseok Choi;Youngsub Han;Byoung-Ki Jeon;Seung-Hoon Na
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.399-403
    • /
    • 2023
  • 본 논문에서는 다중 문서 기반의 대화 시스템을 통한 효율적인 정보 검색과 응답 생성에 중점을 둡니다. 대규모 데이터 집합에서 정확한 문서를 선택하는 데 필요한 검색의 중요성을 강조하며, 현재 검색 방법의 한계와 문제점을 지적합니다. 또한 더 자연스러운 답변을 생성하기 위해 대규모 언어 모델을 사용하게 되면서 fine-tuning 시에 발생하는 제약과 낭비를 모델의 제로샷 생성 능력을 활용하여 개선하려는 방안을 제안하며, 모델의 크기와 자원의 효율성에 대한 고려사항을 논의합니다. 우리의 접근 방식은 대규모 언어 모델을 프롬프트와 함께 다중 문서로 학습 없이 정보를 검색하고 응답을 생성하는 방향으로 접근하여 대화 시스템의 효율성과 유용성을 향상시킬 수 있음을 제시합니다.

  • PDF

Exploring the feasibility of fine-tuning large-scale speech recognition models for domain-specific applications: A case study on Whisper model and KsponSpeech dataset

  • Jungwon Chang;Hosung Nam
    • Phonetics and Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.83-88
    • /
    • 2023
  • This study investigates the fine-tuning of large-scale Automatic Speech Recognition (ASR) models, specifically OpenAI's Whisper model, for domain-specific applications using the KsponSpeech dataset. The primary research questions address the effectiveness of targeted lexical item emphasis during fine-tuning, its impact on domain-specific performance, and whether the fine-tuned model can maintain generalization capabilities across different languages and environments. Experiments were conducted using two fine-tuning datasets: Set A, a small subset emphasizing specific lexical items, and Set B, consisting of the entire KsponSpeech dataset. Results showed that fine-tuning with targeted lexical items increased recognition accuracy and improved domain-specific performance, with generalization capabilities maintained when fine-tuned with a smaller dataset. For noisier environments, a trade-off between specificity and generalization capabilities was observed. This study highlights the potential of fine-tuning using minimal domain-specific data to achieve satisfactory results, emphasizing the importance of balancing specialization and generalization for ASR models. Future research could explore different fine-tuning strategies and novel technologies such as prompting to further enhance large-scale ASR models' domain-specific performance.

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong;Namgyu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.165-184
    • /
    • 2023
  • Recently, Deep learning analysis of unstructured text data using language models, such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models are used to learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, some concerns have been raised that privacy may be violated in the process of using these language models, i.e., data privacy may be violated when data owner provides large amounts of data to the model owner to perform fine-tuning of the language model. Conversely, when the model owner discloses the entire model to the data owner, the structure and weights of the model are disclosed, which may violate the privacy of the model. The concept of offsite tuning has been recently proposed to perform fine-tuning of language models while protecting privacy in such situations. But the study has a limitation that it does not provide a concrete way to apply the proposed methodology to text classification models. In this study, we propose a concrete method to apply offsite tuning with an additional classifier to protect the privacy of the model and data when performing multi-classification fine-tuning on Korean documents. To evaluate the performance of the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields, ICT, electrical, electronic, mechanical, and medical, provided by AIHub, and found that the proposed plug-in model outperforms the zero-shot model and the offsite model in terms of classification accuracy.

Large-scale Language-image Model-based Bag-of-Objects Extraction for Visual Place Recognition (영상 기반 위치 인식을 위한 대규모 언어-이미지 모델 기반의 Bag-of-Objects 표현)

  • Seung Won Jung;Byungjae Park
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.2
    • /
    • pp.78-85
    • /
    • 2024
  • We proposed a method for visual place recognition that represents images using objects as visual words. Visual words represent the various objects present in urban environments. To detect various objects within the images, we implemented and used a zero-shot detector based on a large-scale image language model. This zero-shot detector enables the detection of various objects in urban environments without additional training. In the process of creating histograms using the proposed method, frequency-based weighting was applied to consider the importance of each object. Through experiments with open datasets, the potential of the proposed method was demonstrated by comparing it with another method, even in situations involving environmental or viewpoint changes.

Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals

  • Kiduk Kim;Kyungjin Cho;Ryoungwoo Jang;Sunggu Kyung;Soyoung Lee;Sungwon Ham;Edward Choi;Gil-Sun Hong;Namkug Kim
    • Korean Journal of Radiology
    • /
    • v.25 no.3
    • /
    • pp.224-242
    • /
    • 2024
  • The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in the field of medicine and explores the evolving landscape of Generative Adversarial Networks and diffusion models since the introduction of generative AI models. These models have made valuable contributions to the field of radiology. Furthermore, this review also explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, in addition to emphasizing the role of inversion in the investigation of generative models and outlining an approach to replicate this process. We provide an overview of Large Language Models, such as GPTs and bidirectional encoder representations (BERTs), that focus on prominent representatives and discuss recent initiatives involving language-vision models in radiology, including innovative large language and vision assistant for biomedicine (LLaVa-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.

Hypernetwork Memory-Based Model for Infant's Language Learning (유아 언어학습에 대한 하이퍼망 메모리 기반 모델)

  • Lee, Ji-Hoon;Lee, Eun-Seok;Zhang, Byoung-Tak
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.12
    • /
    • pp.983-987
    • /
    • 2009
  • One of the critical themes in the language acquisition is its exposure to linguistic environments. Linguistic environments, which interact with infants, include not only human beings such as its parents but also artificially crafted linguistic media as their functioning elements. An infant learns a language by exploring these extensive language environments around it. Based on such large linguistic data exposure, we propose a machine learning based method on the cognitive mechanism that simulate flexibly and appropriately infant's language learning. The infant's initial stage of language learning comes with sentence learning and creation, which can be simulated by exposing it to a language corpus. The core of the simulation is a memory-based learning model which has language hypernetwork structure. The language hypernetwork simulates developmental and progressive language learning using the structure of new data stream through making it representing of high level connection between language components possible. In this paper, we simulates an infant's gradual and developmental learning progress by training language hypernetwork gradually using 32,744 sentences extracted from video scripts of commercial animation movies for children.

Inducing Harmful Speech in Large Language Models through Korean Malicious Prompt Injection Attacks (한국어 악성 프롬프트 주입 공격을 통한 거대 언어 모델의 유해 표현 유도)

  • Ji-Min Suh;Jin-Woo Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.3
    • /
    • pp.451-461
    • /
    • 2024
  • Recently, various AI chatbots based on large language models have been released. Chatbots have the advantage of providing users with quick and easy information through interactive prompts, making them useful in various fields such as question answering, writing, and programming. However, a vulnerability in chatbots called "prompt injection attacks" has been proposed. This attack involves injecting instructions into the chatbot to violate predefined guidelines. Such attacks can be critical as they may lead to the leakage of confidential information within large language models or trigger other malicious activities. However, the vulnerability of Korean prompts has not been adequately validated. Therefore, in this paper, we aim to generate malicious Korean prompts and perform attacks on the popular chatbot to analyze their feasibility. To achieve this, we propose a system that automatically generates malicious Korean prompts by analyzing existing prompt injection attacks. Specifically, we focus on generating malicious prompts that induce harmful expressions from large language models and validate their effectiveness in practice.

Korean Contextual Information Extraction System using BERT and Knowledge Graph (BERT와 지식 그래프를 이용한 한국어 문맥 정보 추출 시스템)

  • Yoo, SoYeop;Jeong, OkRan
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • Along with the rapid development of artificial intelligence technology, natural language processing, which deals with human language, is also actively studied. In particular, BERT, a language model recently proposed by Google, has been performing well in many areas of natural language processing by providing pre-trained model using a large number of corpus. Although BERT supports multilingual model, we should use the pre-trained model using large amounts of Korean corpus because there are limitations when we apply the original pre-trained BERT model directly to Korean. Also, text contains not only vocabulary, grammar, but contextual meanings such as the relation between the front and the rear, and situation. In the existing natural language processing field, research has been conducted mainly on vocabulary or grammatical meaning. Accurate identification of contextual information embedded in text plays an important role in understanding context. Knowledge graphs, which are linked using the relationship of words, have the advantage of being able to learn context easily from computer. In this paper, we propose a system to extract Korean contextual information using pre-trained BERT model with Korean language corpus and knowledge graph. We build models that can extract person, relationship, emotion, space, and time information that is important in the text and validate the proposed system through experiments.