• Title/Summary/Keyword: Large language models

Search Results: 164

Korean Word Segmentation and Compound-noun Decomposition Using Markov Chain and Syllable N-gram (마코프 체인 및 음절 N-그램을 이용한 한국어 띄어쓰기 및 복합명사 분리)

  • 권오욱
    • The Journal of the Acoustical Society of Korea, v.21 no.3, pp.274-284, 2002
  • Word segmentation errors occurring in text preprocessing often insert incorrect words into the recognition vocabulary and degrade language models for Korean large-vocabulary continuous speech recognition. We propose an automatic word segmentation algorithm using Markov chains and syllable-based n-gram language models to correct word segmentation errors in text corpora. We assume that a sentence is generated from a Markov chain, with spaces generated on self-transitions and non-space characters on the other transitions. Word segmentation of the sentence is then obtained by finding the maximum-likelihood path using syllable n-gram scores. In experiments, the algorithm showed 91.58% word accuracy and 96.69% syllable accuracy when segmenting 254 newspaper-column sentences with all spaces removed. The algorithm improved word accuracy from 91.00% to 96.27% for word segmentation correction at line breaks and yielded a decomposition accuracy of 96.22% for compound-noun decomposition.
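The maximum-likelihood segmentation idea above can be sketched with a small dynamic program. This is an illustrative toy, not the paper's model: the word scores below are invented stand-ins for its syllable n-gram scores.

```python
import math

# Toy maximum-likelihood word segmentation: pick the split of an unspaced
# string that maximizes the sum of log word scores (hypothetical values).
WORD_LOGPROB = {
    "the": math.log(0.04), "cat": math.log(0.01), "sat": math.log(0.008),
}
UNKNOWN = math.log(1e-12)  # penalty for out-of-vocabulary chunks

def segment(text, max_word_len=6):
    # best[i] = (score, split point) for the best segmentation of text[:i]
    best = [(0.0, 0)] + [(-math.inf, 0)] * len(text)
    for i in range(1, len(text) + 1):
        for j in range(max(0, i - max_word_len), i):
            score = best[j][0] + WORD_LOGPROB.get(text[j:i], UNKNOWN)
            if score > best[i][0]:
                best[i] = (score, j)
    # backtrack the maximum-likelihood path
    words, i = [], len(text)
    while i > 0:
        j = best[i][1]
        words.append(text[j:i])
        i = j
    return list(reversed(words))
```

The paper scores paths with syllable n-grams rather than a whole-word dictionary, but the Viterbi-style search over split points is the same shape.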

Proposal for the Utilization and Refinement Techniques of LLMs for Automated Research Generation (관련 연구 자동 생성을 위한 LLM의 활용 및 정제 기법 제안)

  • Seung-min Choi;Yu-chul Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.17 no.4, pp.275-287, 2024
  • Research on the integration of Knowledge Graphs (KGs) and Language Models (LMs) has been explored consistently over the years. However, studies focusing on automatically generating text from the structured knowledge in KGs have not been as widely developed. In this study, we propose a methodology for automatically generating domain-specific Related Work sections at a level comparable to existing papers. The methodology involves: 1) selecting optimal prompts, 2) extracting triples through a four-step refinement process, 3) constructing a knowledge graph, and 4) automatically generating the related work. The proposed approach utilizes GPT-4, one of the large language models (LLMs), and is designed to generate related work automatically by applying the four-step refinement process. The model demonstrated performance of 17.3, 14.1, and 4.2 in triple extraction on #Supp, #Cont, and Fluency, respectively. According to the GPT-4 automatic evaluation criteria, the model's performance improved from 88.5 points before refinement to 96.5 points (out of 100) after refinement, indicating a significant capability to generate related work automatically at a level similar to that of existing papers.
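The abstract does not spell out the four refinement steps; as a hypothetical stand-in, a refinement pass over extracted (subject, relation, object) triples might chain simple cleaning stages like these:

```python
# Hypothetical four-stage triple refinement; the paper's actual GPT-4-based
# steps are not reproduced here. Each stage filters or normalizes triples.
def refine_triples(triples):
    # Stage 1: drop malformed triples (wrong arity or empty fields)
    triples = [t for t in triples if len(t) == 3 and all(x.strip() for x in t)]
    # Stage 2: normalize whitespace and lowercase the relation
    triples = [(s.strip(), r.strip().lower(), o.strip()) for s, r, o in triples]
    # Stage 3: remove exact duplicates while preserving order
    seen, unique = set(), []
    for t in triples:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    # Stage 4: drop self-referential triples (subject == object)
    return [t for t in unique if t[0] != t[2]]
```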

A Knowledge Graph-based Chatbot to Prevent the Leakage of LLM User's Sensitive Information (LLM 사용자의 민감정보 유출 방지를 위한 지식그래프 기반 챗봇)

  • Keedong Yoo
    • Knowledge Management Research, v.25 no.2, pp.1-18, 2024
  • With the increasing demand for and utilization of large language models (LLMs), the risk that sensitive user information is inputted into LLMs and leaked also escalates. Knowledge graphs, typically recognized as a tool for mitigating the hallucination issues of LLMs, are constructed independently of them and can therefore store and manage sensitive user information separately, minimizing the potential for data breaches. This study therefore presents a knowledge graph-based chatbot that uses LLMs to transform user-inputted natural language questions into queries appropriate for the knowledge graph, then executes those queries and extracts the results. Furthermore, to evaluate the functional validity of the developed chatbot, performance tests are conducted to assess its comprehension of and adaptability to existing knowledge graphs, its capability to create new entity classes, and the accessibility of LLMs to knowledge graph content.
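The question-to-query-to-answer flow described above can be sketched as follows, with the LLM translation step replaced by a hypothetical pattern table (a real system would prompt an LLM to emit SPARQL or Cypher); the graph contents are invented:

```python
# Minimal sketch of the question -> graph query -> answer pipeline.
GRAPH = {  # (subject, relation) -> object
    ("Alice", "works_at"): "Acme",
    ("Acme", "located_in"): "Seoul",
}

def question_to_query(question):
    # stand-in for the LLM: map one known question shape to a graph lookup
    if question.startswith("Where does ") and question.endswith(" work?"):
        return (question[len("Where does "):-len(" work?")], "works_at")
    raise ValueError("unsupported question shape in this sketch")

def answer(question):
    # execute the "query" against the graph; sensitive facts stay in the
    # graph rather than in the LLM's prompt history
    return GRAPH.get(question_to_query(question), "unknown")
```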

Sentence Filtering Dataset Construction Method about Web Corpus (웹 말뭉치에 대한 문장 필터링 데이터 셋 구축 방법)

  • Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.11, pp.1505-1511, 2021
  • Pre-trained models that perform well across natural language processing tasks have the advantage of learning the linguistic patterns of sentences from a large corpus during training, allowing each token in an input sentence to be represented with appropriate feature vectors. One method of constructing the corpus required for pre-trained model training is collection with a web crawler. However, sentences collected from the web have varied patterns and may contain unnecessary words in part or all of the sentence. In this paper, we propose a method for constructing a dataset for filtering sentences containing unnecessary words, using neural network models, from a corpus collected from the web. As a result, we construct a dataset containing a total of 2,330 sentences. We also evaluated the performance of neural network models on the constructed dataset, and the BERT model showed the highest performance with an accuracy of 93.75%.
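The filtering step itself reduces to scoring each sentence and keeping the clean ones. In this sketch the neural classifier is replaced by a keyword stub (the marker list is invented); in the paper a model such as BERT makes the per-sentence decision:

```python
# Hypothetical noise markers standing in for a trained sentence classifier.
NOISE_MARKERS = {"click here", "subscribe", "advertisement"}

def is_clean(sentence):
    # stand-in for the classifier's keep/discard decision
    s = sentence.lower()
    return not any(m in s for m in NOISE_MARKERS)

def filter_corpus(sentences):
    # keep only sentences judged free of unnecessary words
    return [s for s in sentences if is_clean(s)]
```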

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee;Sangwoo Kang
    • Korean Journal of Cognitive Science, v.34 no.3, pp.197-226, 2023
  • Generative text summarization, one of the natural language processing tasks, produces a short summary while preserving the content of a long text. ROUGE, a lexical-overlap based metric, is widely used for text summarization models in generative summarization benchmarks. Although models score highly on it, studies report that about 30% of generated summaries are still inconsistent with their source texts. This paper proposes a methodology for evaluating the performance of a summarization model without using a reference summary. AggreFACT is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all the test candidates, the two cases in which errors occurred throughout the generated summary showed the highest correlation results. We observed that the proposed evaluation score correlated highly for models fine-tuned from BART and PEGASUS, which are pre-trained large-scale Transformer architectures.
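Validating a reference-free score comes down to correlating it with human annotations; a plain Pearson correlation, computed as below, is one such measure (the inputs in the test are illustrative, not the paper's data):

```python
import math

# Pearson correlation between two score lists, e.g. a proposed
# reference-free metric vs. human error annotations per summary.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```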

Korean Contextual Information Extraction System using BERT and Knowledge Graph (BERT와 지식 그래프를 이용한 한국어 문맥 정보 추출 시스템)

  • Yoo, SoYeop;Jeong, OkRan
    • Journal of Internet Computing and Services, v.21 no.3, pp.123-131, 2020
  • Along with the rapid development of artificial intelligence technology, natural language processing, which deals with human language, is also actively studied. In particular, BERT, a language model recently proposed by Google, has performed well in many areas of natural language processing by providing models pre-trained on large corpora. Although BERT offers a multilingual model, a model pre-trained on a large Korean corpus should be used, because there are limitations when the original pre-trained BERT model is applied directly to Korean. Moreover, text contains not only vocabulary and grammar but also contextual meaning, such as the relations between preceding and following passages and the situation. Existing natural language processing research has focused mainly on lexical or grammatical meaning, yet accurate identification of the contextual information embedded in text plays an important role in understanding it. Knowledge graphs, which link words by their relationships, have the advantage that computers can easily learn context from them. In this paper, we propose a system that extracts Korean contextual information using a BERT model pre-trained on a Korean corpus together with a knowledge graph. We build models that extract person, relationship, emotion, space, and time information important in the text, and validate the proposed system through experiments.

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.28 no.4, pp.251-273, 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, research has been actively conducted to understand the meanings of words and to perform tasks such as summarization and sentiment classification through pre-trained language models that learn from large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. Therefore, in recent years, research has shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow the model to understand the knowledge of a particular domain better and yield performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive because corpus data for the target domain must be acquired, and many cases have been reported in which the performance improvement after further pre-training is insignificant in some domains. It is therefore difficult to decide to develop a domain-specific pre-trained language model when it is not clear whether performance will improve substantially. In this paper, we present a way to proactively estimate the performance improvement expected from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved through further pre-training in each domain. We also developed and presented a new indicator that estimates the specificity of a domain based on the normalized frequency of the keywords used in it. Finally, we conducted classification using a general pre-trained language model and domain-specific pre-trained language models for the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement achieved through further pre-training.
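The abstract does not give the indicator's formula; one plausible sketch of a specificity score based on normalized keyword frequencies (comparing a domain corpus against a general one, with all data here invented) is:

```python
from collections import Counter

# One plausible specificity score: the average ratio of a keyword's
# normalized frequency in the domain corpus to that in a general corpus.
# This is an illustration, not the paper's actual indicator.
def normalized_freq(tokens):
    total = len(tokens)
    return {w: c / total for w, c in Counter(tokens).items()}

def specificity(domain_tokens, general_tokens, keywords, eps=1e-9):
    d, g = normalized_freq(domain_tokens), normalized_freq(general_tokens)
    ratios = [d.get(k, 0.0) / (g.get(k, 0.0) + eps) for k in keywords]
    return sum(ratios) / len(ratios)
```

A score well above 1 suggests the keywords are concentrated in the domain, i.e. high domain specificity.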

TAGS: Text Augmentation with Generation and Selection (생성-선정을 통한 텍스트 증강 프레임워크)

  • Kim Kyung Min;Dong Hwan Kim;Seongung Jo;Heung-Seon Oh;Myeong-Ha Hwang
    • KIPS Transactions on Software and Data Engineering, v.12 no.10, pp.455-460, 2023
  • Text augmentation is a methodology that creates new augmented texts by transforming or generating original texts, with the aim of improving the performance of NLP models. However, existing text augmentation techniques have limitations such as a lack of expressive diversity, semantic distortion, and a limited number of augmented texts. Text augmentation using large language models and few-shot learning can overcome these limitations, but it also carries a risk of noise due to incorrect generation. In this paper, we propose a text augmentation method called TAGS that generates multiple candidate texts and selects the appropriate ones as augmented texts. TAGS generates varied expressions using few-shot learning while effectively selecting suitable data, even from a small amount of original text, by using contrastive learning and similarity comparison. We applied this method to task-oriented chatbot data and achieved a more than sixtyfold quantitative increase in data. We also analyzed the generated texts and confirmed that they are semantically and expressively more diverse than the original texts. Moreover, we trained and evaluated a classification model on the augmented texts and showed that it improved performance by more than 0.1915, confirming that the method helps improve actual model performance.
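The "select" half of the generate-and-select scheme can be illustrated with a bag-of-words cosine filter: candidates too dissimilar to the original are off-topic, while near-identical ones add no diversity. TAGS itself uses contrastive learning; the similarity measure and thresholds below are invented for illustration:

```python
import math
from collections import Counter

# Cosine similarity over bag-of-words vectors (toy stand-in for learned
# sentence embeddings).
def bow_cosine(a, b):
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def select(original, candidates, low=0.2, high=0.9):
    # keep candidates on-topic (similarity >= low) but not near-copies
    # (similarity <= high); candidates would come from an LLM in TAGS
    return [c for c in candidates if low <= bow_cosine(original, c) <= high]
```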

Class Language Model based on Word Embedding and POS Tagging (워드 임베딩과 품사 태깅을 이용한 클래스 언어모델 연구)

  • Chung, Euisok;Park, Jeon-Gue
    • KIISE Transactions on Computing Practices, v.22 no.7, pp.315-319, 2016
  • Recurrent neural network based language models (RNN LMs) have shown improved results in language modeling research. However, RNN LMs are limited to post-processing stages, such as the N-best rescoring step of wFST-based speech recognition, and their large vocabularies demand considerable computing power for LM training. In this paper, we investigate a first-pass N-gram model built using word embedding, a simplified deep neural network. A class-based language model (LM) is one way to approach this issue. We built a class-based vocabulary through word embedding and combined the class LM with a word N-gram LM to evaluate LM performance. In addition, we show that a part-of-speech (POS) tagging based LM improves perplexity in all types of LM tests.
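A class-based bigram factorizes a word transition as P(w2 | w1) ≈ P(class(w2) | class(w1)) · P(w2 | class(w2)), which can then be interpolated with a word n-gram as the abstract describes. The probability tables below are invented for illustration:

```python
# Toy class-bigram LM: word -> class assignments plus two probability tables.
WORD_CLASS = {"paris": "CITY", "london": "CITY", "eats": "VERB"}
P_CLASS = {("VERB", "CITY"): 0.3}                             # P(c2 | c1)
P_WORD_IN_CLASS = {"paris": 0.5, "london": 0.5, "eats": 1.0}  # P(w | class(w))

def class_bigram_prob(w1, w2):
    # P(w2 | w1) via the class factorization
    c1, c2 = WORD_CLASS[w1], WORD_CLASS[w2]
    return P_CLASS.get((c1, c2), 0.0) * P_WORD_IN_CLASS[w2]

def interpolated_prob(w1, w2, word_bigram_prob, lam=0.5):
    # linear interpolation of a word n-gram LM with the class LM
    return lam * word_bigram_prob + (1 - lam) * class_bigram_prob(w1, w2)
```

The class factorization shrinks the transition table from |V|² word pairs to |C|² class pairs, which is what makes a first-pass model tractable for large vocabularies.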

Determination of Thematic Roles according to Syntactic Relations Using Rules and Statistical Models in Korean Language Processing (한국어 전산처리에서 규칙과 확률을 이용한 구문관계에 따른 의미역 결정)

  • 강신재;박정혜
    • Journal of Korea Society of Industrial Information Systems, v.8 no.1, pp.33-42, 2003
  • This paper presents an efficient method for determining thematic roles from syntactic relations using rules and a statistical model in Korean language processing. This process is one of the cores of semantic analysis and an important issue to be solved in natural language processing. It is problematic to describe rules for determining thematic roles using only general linguistic knowledge and experience, since the final result may differ according to the subjective views of researchers, and it is impossible to construct rules covering all cases. Our hybrid method, however, is objective and efficient because it draws on large corpora, which contain practical usage of Korean, and on the case frames in the Sejong Electronic Lexicon of Korean, which is being developed by dozens of Korean linguistics researchers. To determine thematic roles more accurately, our system uses syntactic relations, semantic classes, morpheme information, and the position of double subjects. In particular, by using semantic classes, we can increase the applicability of the system.
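The rule-then-statistics backoff described above can be sketched in a few lines; the rules and counts below are invented for illustration, whereas the paper draws on Sejong case frames and real corpora:

```python
# Hand-written rules keyed on (syntactic relation, semantic class), with
# corpus counts as the statistical backoff when no rule fires.
RULES = {("subject", "PERSON"): "Agent"}
STATS = {"object": {"Theme": 70, "Goal": 30}}  # counts per relation

def thematic_role(relation, sem_class):
    # rule first: deterministic assignment when a rule matches
    if (relation, sem_class) in RULES:
        return RULES[(relation, sem_class)]
    # statistical backoff: most frequent role for this relation
    counts = STATS.get(relation, {})
    return max(counts, key=counts.get) if counts else "Unknown"
```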
