• Title/Summary/Keyword: Korean dataset (한국어 데이터 세트)

Search results: 19

Building robust Korean speech recognition model by fine-tuning large pretrained model (대형 사전훈련 모델의 파인튜닝을 통한 강건한 한국어 음성인식 모델 구축)

  • Changhan Oh;Cheongbin Kim;Kiyoung Park
    • Phonetics and Speech Sciences / v.15 no.3 / pp.75-82 / 2023
  • Automatic speech recognition (ASR) has been revolutionized by deep learning-based approaches, among which self-supervised learning methods have proven particularly effective. In this study, we aim to enhance the performance of OpenAI's Whisper model, a multilingual ASR system, on the Korean language. Whisper was pretrained on a large corpus (around 680,000 hours) of web speech data and has demonstrated strong recognition performance for major languages. However, it faces challenges in recognizing languages such as Korean, which was not a major language in its training data. We address this issue by fine-tuning the Whisper model with an additional dataset comprising about 1,000 hours of Korean speech. We also compare its performance against a Transformer model trained from scratch on the same dataset. Our results indicate that fine-tuning the Whisper model significantly improves its Korean speech recognition capability in terms of character error rate (CER), and that performance improves with increasing model size. However, the Whisper model's performance on English deteriorated after fine-tuning, emphasizing the need for further research to develop robust multilingual models. Our study demonstrates the potential of a fine-tuned Whisper model for Korean ASR applications. Future work will focus on multilingual recognition and optimization for real-time inference.
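The character error rate (CER) used above can be computed with a character-level Levenshtein distance. A minimal sketch, assuming the common Korean ASR convention of removing spaces before comparison (the paper's exact text normalization is not specified):

```python
def edit_distance(ref: str, hyp: str) -> int:
    # Standard single-row dynamic-programming Levenshtein distance.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))   # substitution
            prev = cur
    return dp[n]

def cer(reference: str, hypothesis: str) -> float:
    # CER = (S + D + I) / N, where N is the reference length in characters.
    ref = reference.replace(" ", "")
    hyp = hypothesis.replace(" ", "")
    return edit_distance(ref, hyp) / max(len(ref), 1)
```

For real evaluations a maintained implementation such as the jiwer package is preferable; this sketch only illustrates the metric itself.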

Speech Data Collection for Korean Speech Recognition (한국어 음성인식을 위한 음성 데이터 수집)

  • Park, Jong-Ryeal;Kwon, Oh-Wook;Kim, Do-Yeong;Choi, In-Jeong;Jeong, Ho-Young;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.14 no.4 / pp.74-81 / 1995
  • This paper describes the development of speech databases for the Korean language constructed at the Communications Research Laboratory at KAIST. The procedure and environment used to construct the speech databases are presented in detail, along with their phonetic and linguistic properties. The databases were intended for use in designing and evaluating speech recognition algorithms. They consist of five different sets of speech content: trade-related continuous speech with 3,000 words, variable-length connected digits, 75 phoneme-balanced isolated words, 500 isolated Korean provincial names, and Korean A-set words.

Unsupervised Abstractive Summarization Method that Suitable for Documents with Flows (흐름이 있는 문서에 적합한 비지도학습 추상 요약 방법)

  • Lee, Hoon-suk;An, Soon-hong;Kim, Seung-hoon
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.501-512 / 2021
  • Recently, encoder-decoder Transformer techniques have produced breakthroughs in NLP. However, these techniques can only be used for mainstream languages such as English and Chinese, where datasets with millions of samples are well established; they cannot be used for non-mainstream languages where such datasets do not exist. In addition, mechanical summarization suffers from a deflection problem: it focuses on the beginning of the document. These methods are therefore not suitable for documents with flows, such as fairy tales and novels. In this paper, we propose a hybrid summarization method that does not require a dataset and mitigates the deflection problem using a GAN with two adaptive discriminators. We evaluate our model on the CNN/Daily Mail dataset to verify its objective validity, and we show that the model also performs well in Korean, one of the non-mainstream languages.

Comparison of Korean Classification Models' Korean Essay Score Range Prediction Performance (한국어 학습 모델별 한국어 쓰기 답안지 점수 구간 예측 성능 비교)

  • Cho, Heeryon;Im, Hyeonyeol;Yi, Yumi;Cha, Junwoo
    • KIPS Transactions on Software and Data Engineering / v.11 no.3 / pp.133-140 / 2022
  • We investigate the performance of deep learning-based Korean language models on the task of predicting the score range of Korean essays written by foreign students. We construct a data set containing a total of 304 essays, which discuss the criteria for choosing a job ('job'), the conditions of a happy life ('happ'), the relationship between money and happiness ('econ'), and the definition of success ('succ'). These essays were labeled with four letter grades (A, B, C, and D), and a total of eleven essay score range prediction experiments were conducted (five for predicting the score range of 'job' essays, five for 'happ' essays, and one for mixed-topic essays). Three deep learning-based Korean language models, KoBERT, KcBERT, and KR-BERT, were fine-tuned using various training data. Two traditional probabilistic machine learning classifiers, naive Bayes and logistic regression, were also evaluated. Experimental results show that the deep learning-based Korean language models outperformed the two traditional classifiers, with KR-BERT performing best at 55.83% overall average prediction accuracy, followed closely by KcBERT (55.77%) and KoBERT (54.91%). The naive Bayes and logistic regression classifiers achieved 52.52% and 50.28%, respectively. Due to the scarcity of training data and the imbalance in class distribution, overall prediction performance was not high for any classifier. Moreover, the classifiers' vocabulary did not explicitly capture the error features that would help grade the Korean essays correctly. By overcoming these two limitations, we expect the score range prediction performance to improve.
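The naive Bayes baseline mentioned above can be sketched in a few lines of pure Python; the tiny corpus and grade labels used here are illustrative stand-ins, not the study's 304-essay dataset:

```python
import math
from collections import Counter

class NaiveBayes:
    """Multinomial naive Bayes with add-one smoothing over whitespace tokens."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.prior = Counter(labels)          # class document counts
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for text, y in zip(texts, labels):
            toks = text.split()
            self.word_counts[y].update(toks)
            self.vocab.update(toks)
        self.total = {c: sum(self.word_counts[c].values()) for c in self.classes}
        self.n_docs = len(labels)
        return self

    def predict(self, text):
        V = len(self.vocab)
        best, best_lp = None, -math.inf
        for c in self.classes:
            # log P(c) + sum of log P(token | c) with add-one smoothing
            lp = math.log(self.prior[c] / self.n_docs)
            for tok in text.split():
                lp += math.log((self.word_counts[c][tok] + 1) / (self.total[c] + V))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

Training on tokenized essay texts labeled A through D, `predict` returns the grade with the highest posterior; add-one smoothing keeps unseen tokens from zeroing out a class.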

A Survey on Open Source based Large Language Models (오픈 소스 기반의 거대 언어 모델 연구 동향: 서베이)

  • Ha-Young Joo;Hyeontaek Oh;Jinhong Yang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.4 / pp.193-202 / 2023
  • In recent years, the outstanding performance of large language models (LLMs) trained on extensive datasets has become a hot topic. Since studies on LLMs increasingly follow open-source approaches, the ecosystem is expanding rapidly. Task-specific, lightweight, high-performing models are being actively disseminated, built with additional training techniques that use pre-trained LLMs as foundation models. On the other hand, the performance of LLMs on Korean is subpar because English comprises a significant proportion of the training data of existing LLMs. Therefore, research is being carried out on Korean-specific LLMs that are further trained on Korean language data. This paper identifies trends in open source based LLMs, introduces research on Korean-specific large language models, and describes the applications and limitations of large language models.

A Study on Korean Poetry Generation System Based on Artificial Intelligence (인공지능 기반 한국어 시 생성 시스템 개발 연구)

  • Myung-sun Kim;Woo-Hyuk Jung;Jihwan Woo
    • Information Systems Review / v.25 no.3 / pp.43-57 / 2023
  • In this study, we developed an AI-based system to generate sentences that assist in creating Korean poetry. Instead of replacing the creative aspect of composition, which is considered a unique domain of humans, the focus was on generating foundational sentences to enhance human imagination efficiently. By conducting interviews with poets, the researchers extracted sentences from eight distinct datasets, enabling the generation of poetry across eight different genres. This study stands out for its innovation in developing a method for crafting literary works in Korean. Its significance lies in its potential to facilitate the creation of diverse literary forms such as essays, prose, or novels.

Named Entity Detection Using Generative Al for Personal Information-Specific Named Entity Annotation Conversation Dataset (개인정보 특화 개체명 주석 대화 데이터셋 기반 생성AI 활용 개체명 탐지)

  • Yejee Kang;Li Fei;Yeonji Jang;Seoyoon Park;Hansaem Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.499-504 / 2023
  • In this study, with the risk of leakage and misuse of sensitive personal information on the rise, we developed a named-entity scheme specialized for personal-information items in order to improve the efficiency of accurate personal-information detection and de-identification. We constructed 4,981 sets of conversation data annotated with the personal-information tagset and conducted personal-information named-entity detection experiments using a generative AI model. For the experiments, we designed optimal prompts and evaluated the detection results via few-shot learning. Comparing the constructed dataset with an English-based personal-information annotation dataset, the dataset built in this study showed higher detection performance on unique identification numbers, demonstrating its necessity and strength.
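Few-shot prompting for named-entity detection, as described above, amounts to concatenating annotated examples ahead of the query. A minimal sketch; the tag set and the inline `[이름]`-style annotation format are hypothetical stand-ins, not the paper's actual annotation scheme:

```python
def build_fewshot_prompt(examples, query, tagset=("이름", "전화번호", "주소")):
    """Assemble a few-shot prompt for personal-information NER.

    `examples` is a list of (utterance, annotated_output) pairs that show the
    model the expected tagging format; `query` is the sentence to tag.
    """
    lines = [f"다음 문장에서 개인정보 개체명({', '.join(tagset)})을 찾아 태깅하세요.", ""]
    for utt, ann in examples:
        lines.append(f"문장: {utt}")
        lines.append(f"태깅: {ann}")
        lines.append("")
    # The prompt ends mid-pattern so the model completes the tagging line.
    lines.append(f"문장: {query}")
    lines.append("태깅:")
    return "\n".join(lines)
```

The prompt deliberately ends at "태깅:" so a generative model continues the established pattern and emits the tagged version of the query sentence.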

A Clustering-based Undersampling Method to Prevent Information Loss from Text Data (텍스트 데이터의 정보 손실을 방지하기 위한 군집화 기반 언더샘플링 기법)

  • Jong-Hwi Kim;Saim Shin;Jin Yea Jang
    • Annual Conference on Human and Language Technology / 2022.10a / pp.251-256 / 2022
  • Class imbalance causes a classification model to be trained with a bias toward the majority class, degrading classification performance on the minority class. Undersampling, which reduces the number of majority-class samples to balance them with the minority class, is a representative remedy; however, prior undersampling work in the text domain has applied only relatively simple techniques such as word embeddings and random sampling. In this paper, we propose an undersampling method that minimizes the information loss of text data through Transformer-based sentence embeddings and clustering-based sampling. To validate the proposed method, we trained sentiment-analysis models on training sets drawn by the proposed method and by random sampling, and compared their performance. Models trained with the proposed method achieved 0.2% to 2.0% higher classification accuracy than models trained with random sampling, confirming the effectiveness of the proposed clustering-based undersampling technique.
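The clustering-based undersampling idea can be sketched in pure Python. The toy 2-D points below stand in for Transformer sentence embeddings, and the deterministic farthest-point initialization is an assumption made for reproducibility, not necessarily the paper's setup:

```python
import random

def dist2(p, q):
    # Squared Euclidean distance between two vectors.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=20):
    # Deterministic farthest-point initialization, then standard Lloyd updates.
    centroids = [points[0]]
    while len(centroids) < k:
        centroids.append(max(points, key=lambda p: min(dist2(p, c) for c in centroids)))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist2(p, centroids[i]))].append(p)
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

def cluster_undersample(majority, k, per_cluster, seed=0):
    # Keep up to `per_cluster` items from each cluster so the reduced
    # majority set still covers every region of the embedding space.
    rng = random.Random(seed)
    kept = []
    for cluster in kmeans(majority, k):
        rng.shuffle(cluster)
        kept.extend(cluster[:per_cluster])
    return kept
```

Sampling evenly across clusters preserves coverage of the embedding space, unlike plain random undersampling, which may drop whole regions and lose information.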

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys analyze the subjectivity of text data posted by users, and market research quantifies users' opinions of a target product by analyzing their review posts. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' carries a negative meaning in most domains but not necessarily in movie reviews. To perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building such a lexicon is time-consuming, and without a general-purpose sentiment lexicon as seed data many sentiment vocabularies are missed. To address this problem, several studies have constructed sentiment lexicons for specific domains based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer serviced, and SentiWordNet works poorly because of language differences in the process of converting Korean words into English words. Such general-purpose sentiment lexicons are therefore of limited use as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to let researchers quickly construct a sentiment dictionary for a target domain. Specifically, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information about frequently used coined words and emoticons found mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not tied to particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, one recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis with higher accuracy (Teng, Z., 2016). This result indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features that improve the accuracy of deep learning models.
The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as a source of features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
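Dictionary-based scoring with mixed-length entries, as in KNU-KSL's 1-grams, 2-grams, and phrases, can be sketched as a longest-match lookup. The matching strategy and the toy entries here are illustrative assumptions, not the actual KNU-KSL contents or scoring rules:

```python
def sentiment_score(text: str, lexicon: dict) -> int:
    """Score a sentence by summing the polarity of matched lexicon entries.

    `lexicon` maps entries (1-grams, 2-grams, phrases) to polarity values.
    Entries are matched longest-first so that a multi-word phrase wins over
    its component words, and each match is consumed so it is counted once.
    """
    score, remaining = 0, text
    for entry in sorted(lexicon, key=len, reverse=True):
        while entry in remaining:
            score += lexicon[entry]
            remaining = remaining.replace(entry, " ", 1)
    return score
```

A positive total suggests positive sentiment and a negative total the opposite; real lexicon-based systems typically add negation handling and intensity weighting on top of this lookup.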