• Title/Summary/Keyword: Large language models

Search Result 155

Deep Learning Model Parallelism (딥러닝 모델 병렬 처리)

  • Park, Y.M.; Ahn, S.Y.; Lim, E.J.; Choi, Y.S.; Woo, Y.C.; Choi, W.
    • Electronics and Telecommunications Trends / v.33 no.4 / pp.1-13 / 2018
  • Deep learning (DL) models have been widely applied, together with big data, to AI applications such as image recognition and language translation. Recently, DL models have become larger and more complicated, and have been merged together. For the accelerated training of a large-scale deep learning model, model parallelism, which partitions the model parameters for non-shared parallel access and update across multiple machines, is provided by a few distributed deep learning frameworks. As a training acceleration method, however, model parallelism is not as commonly used as data parallelism owing to the difficulty of implementing it efficiently. This paper provides a comprehensive survey of the state of the art in model parallelism by comparing the implementation technologies in several deep learning frameworks that support it, and suggests future research directions for improving model parallelism technology.
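For concreteness, the following is a minimal sketch of layer-wise model parallelism in the general sense the abstract describes: two halves of a toy network hold their parameters on different devices, and activations are moved between them. The framework (PyTorch), device names, and layer sizes are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

# Layer-wise model parallelism sketch: the two halves of the network hold
# their parameters on different devices, and activations cross the device
# boundary during the forward pass. Requires two CUDA devices.
class TwoDeviceMLP(nn.Module):
    def __init__(self, dev0="cuda:0", dev1="cuda:1"):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        self.part0 = nn.Linear(1024, 4096).to(dev0)  # parameters live on device 0
        self.part1 = nn.Linear(4096, 10).to(dev1)    # parameters live on device 1

    def forward(self, x):
        h = torch.relu(self.part0(x.to(self.dev0)))
        return self.part1(h.to(self.dev1))           # activation moved to device 1

model = TwoDeviceMLP()
out = model(torch.randn(8, 1024))
loss = out.sum()
loss.backward()  # gradients flow back across the same device boundary
```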

A study on the development of customized intensive in-service teacher training program models for elementary/secondary school teachers of English (초.중등 영어교사를 위한 맞춤형 심화 연수 모형 개발 연구)

  • Lee, Moon-Bok; Lee, Noh-Shin; Cho, Min-Chul
    • English Language & Literature Teaching / v.16 no.3 / pp.269-289 / 2010
  • The present study reports on the development of customized intensive in-service English teacher training programs (IIETTP) reflecting the demands of elementary/secondary school English teachers. For the purpose of the study, a survey was conducted with 1,033 English teachers at elementary/secondary schools across the country. The results showed, by and large, no significant differences by school level, although slight differences were revealed in training times, training methods, the percentage of teaching English in English (TEE), and other factors. Since the two IIETTP models are presented as basic formats, they can be modified and applied according to the contexts of schools and the demands of trainees.


A Model Management Framework for Supporting Departmental Collaborative Work (부서간 협동적 작업을 지원하는 모형관리 체계의 개발)

  • Huh, Soon-Young; Kim, Hyung-Min
    • Asia Pacific Journal of Information Systems / v.10 no.2 / pp.51-69 / 2000
  • Recently, as business problems become more complicated and require more precise quantitative results, large-scale model management systems are increasingly in demand to support decision-making activities. In addition, as distributed computing over networks gains popularity, departmental computing systems are gradually being adopted in organizations to facilitate collaboration among geographically dispersed departments. In departmental collaborative model management systems, multiple departments share common models but approach them with different user-views depending on their departmental needs. Moreover, the shared models evolve as their structures and the corresponding data sets change owing to the dynamic nature of the operating environment and the inherent uncertainty associated with the problems. In this context, providing the multiple departmental users with synchronized and consistent views of the models is important for improving overall productivity. In this paper, we propose a collaborative model management framework for coordinating model change and automatic user-view update in a departmental computing environment. To do so, we describe the changes in a model and their effects in departmental model management environments, and identify the constructs and processes for maintaining consistency between a shared model and its departmental user-views. In this framework, a generic model concept is adopted to accommodate diverse mathematical models in a uniform way in a modelbase, and an object-oriented database management system (ODBMS) is used to combine the model management constructs and the automatic user-view update mechanism in a single formalism. A prototype object-oriented modeling environment was developed using an ODBMS called ObjectStore and the C++ programming language on Windows NT.
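As a rough illustration of the synchronization idea described in the abstract (not the paper's ODBMS-based implementation), the sketch below keeps hypothetical departmental user-views consistent with a shared model by refreshing every registered view whenever the shared model changes; all class names, parameters, and values are invented for the example.

```python
# Toy observer-style sketch: departmental user-views register with a shared
# model and are refreshed whenever the shared model is updated, so every
# department keeps a consistent view of the parameters it is allowed to see.
class SharedModel:
    def __init__(self, parameters):
        self.parameters = dict(parameters)
        self._views = []

    def register(self, view):
        self._views.append(view)
        view.refresh(self.parameters)

    def update(self, name, value):
        self.parameters[name] = value
        for view in self._views:  # propagate the change to every user-view
            view.refresh(self.parameters)

class DepartmentView:
    def __init__(self, department, visible):
        self.department, self.visible = department, set(visible)
        self.snapshot = {}

    def refresh(self, parameters):
        # each department sees only the parameters relevant to it
        self.snapshot = {k: v for k, v in parameters.items() if k in self.visible}

shared = SharedModel({"demand": 100, "capacity": 80, "cost": 1.5})
sales = DepartmentView("sales", ["demand", "cost"])
production = DepartmentView("production", ["capacity", "cost"])
shared.register(sales)
shared.register(production)
shared.update("cost", 1.7)  # both views now reflect the new cost
```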


Korean Word Segmentation and Compound-noun Decomposition Using Markov Chain and Syllable N-gram (마코프 체인 및 음절 N-그램을 이용한 한국어 띄어쓰기 및 복합명사 분리)

  • 권오욱
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.274-284 / 2002
  • Word segmentation errors occurring in text preprocessing often insert incorrect words into the recognition vocabulary and cause poor language models for Korean large vocabulary continuous speech recognition. We propose an automatic word segmentation algorithm using Markov chains and syllable-based n-gram language models in order to correct word segmentation errors in text corpora. We assume that a sentence is generated from a Markov chain: spaces and non-space characters are generated on self-transitions and other transitions of the chain, respectively. The word segmentation of the sentence is then obtained by finding the maximum likelihood path using syllable n-gram scores. In experiments, the algorithm showed 91.58% word accuracy and 96.69% syllable accuracy for word segmentation of 254 sentences from newspaper columns without any spaces. The algorithm improved the word accuracy from 91.00% to 96.27% for word segmentation correction at line breaks and yielded a decomposition accuracy of 96.22% for compound-noun decomposition.
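The following is a simplified dynamic-programming sketch in the spirit of the maximum-likelihood segmentation search the abstract describes; the word-level scoring function and the toy probabilities are stand-ins, not the paper's syllable n-gram Markov-chain model.

```python
import math

# Maximum-likelihood word segmentation by dynamic programming: choose the
# split points that maximize the total log score. The scoring function is a
# stand-in for a trained syllable n-gram model.
def word_score(word, unigram_probs, floor=1e-8):
    # log probability of a candidate word (hypothetical unigram with a floor)
    return math.log(unigram_probs.get(word, floor))

def segment(text, unigram_probs, max_word_len=6):
    n = len(text)
    best = [(-math.inf, -1)] * (n + 1)  # (best score, backpointer) per position
    best[0] = (0.0, -1)
    for end in range(1, n + 1):
        for start in range(max(0, end - max_word_len), end):
            score = best[start][0] + word_score(text[start:end], unigram_probs)
            if score > best[end][0]:
                best[end] = (score, start)
    # backtrack to recover the maximum-likelihood segmentation
    words, pos = [], n
    while pos > 0:
        start = best[pos][1]
        words.append(text[start:pos])
        pos = start
    return list(reversed(words))

probs = {"한국어": 0.02, "띄어쓰기": 0.01, "한국": 0.015, "어": 0.005}
print(segment("한국어띄어쓰기", probs))  # -> ['한국어', '띄어쓰기']
```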

A Knowledge Graph-based Chatbot to Prevent the Leakage of LLM User's Sensitive Information (LLM 사용자의 민감정보 유출 방지를 위한 지식그래프 기반 챗봇)

  • Keedong Yoo
    • Knowledge Management Research / v.25 no.2 / pp.1-18 / 2024
  • With the increasing demand for and utilization of large language models (LLMs), the risk that users' sensitive information is entered and leaked during the use of LLMs also escalates. Typically recognized as a tool for mitigating the hallucination issues of LLMs, knowledge graphs, constructed independently of LLMs, can store and manage sensitive user information separately, thereby minimizing the potential for data breaches. This study therefore presents a knowledge graph-based chatbot that uses an LLM to transform user-entered natural language questions into queries appropriate for the knowledge graph, then executes these queries and extracts the results. Furthermore, to evaluate the functional validity of the developed chatbot, performance tests are conducted to assess its comprehension of and adaptability to existing knowledge graphs, its capability to create new entity classes, and the accessibility of the LLM to the knowledge graph content.
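A minimal sketch of the question-to-query step described above might look like the following: a placeholder LLM call rewrites the user's question as a Cypher query, which is then executed against a Neo4j knowledge graph. The ask_llm function, the schema hint, and the connection details are hypothetical and are not the paper's implementation.

```python
from neo4j import GraphDatabase

# Hypothetical graph schema given to the LLM so it can write a valid query.
SCHEMA_HINT = ("Nodes: (:Person {name}), (:Department {name}); "
               "Rel: (:Person)-[:WORKS_IN]->(:Department)")

def ask_llm(prompt: str) -> str:
    # placeholder for an LLM call that returns a Cypher query string
    raise NotImplementedError

def answer(question: str, driver) -> list:
    prompt = (f"Graph schema: {SCHEMA_HINT}\n"
              f"Write a Cypher query answering: {question}\n"
              "Return only the query.")
    cypher = ask_llm(prompt)
    with driver.session() as session:
        # run the generated query against the knowledge graph and collect rows
        return [record.data() for record in session.run(cypher)]

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
# answer("Which department does Alice work in?", driver)
```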

Sentence Filtering Dataset Construction Method about Web Corpus (웹 말뭉치에 대한 문장 필터링 데이터 셋 구축 방법)

  • Nam, Chung-Hyeon; Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1505-1511 / 2021
  • Pre-trained models that perform well on various natural language processing tasks have the advantage of learning the linguistic patterns of sentences from a large corpus during training, allowing each token in an input sentence to be represented with appropriate feature vectors. One method of constructing the corpus required to train a pre-trained model is collection using a web crawler. However, sentences collected from the web come in various patterns and may contain unnecessary words in part or in whole. In this paper, we propose a dataset construction method for filtering sentences containing unnecessary words, using neural network models, for a corpus collected from the web. As a result, we construct a dataset containing a total of 2,330 sentences. We also evaluated the performance of neural network models on the constructed dataset, and the BERT model showed the highest performance with an accuracy of 93.75%.
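A rough sketch of the filtering step, assuming a BERT sequence classifier and the Hugging Face transformers API, could look like this; the checkpoint and the keep/discard label convention are assumptions, and in practice the classifier would first be fine-tuned on the filtering dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sentence filtering sketch: a BERT sequence classifier labels each crawled
# sentence as keep (1) or discard (0). An untrained classification head is
# loaded here only to show the inference loop; fine-tuning is required first.
CHECKPOINT = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)
model.eval()

def filter_sentences(sentences, keep_label=1):
    kept = []
    for sent in sentences:
        inputs = tokenizer(sent, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        if logits.argmax(dim=-1).item() == keep_label:
            kept.append(sent)  # keep sentences the classifier marks as clean
    return kept
```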

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee; Sangwoo Kang
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.197-226 / 2023
  • Generative text summarization is a natural language processing task that generates a short summary while preserving the content of a long text. ROUGE, a lexical-overlap based metric, is widely used for text summarization models in generative summarization benchmarks. Although models achieve very high ROUGE scores, studies report that about 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without using a reference summary. AggreFACT is a human-annotated dataset that classifies the types of errors made by neural text summarization models. Among all the tested candidates, the two cases in which errors occurred in the generated summary and throughout the summary showed the highest correlation results. We observed that the proposed evaluation score showed a high correlation for models fine-tuned from BART and PEGASUS, which are pre-trained large-scale Transformer models.
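As an illustration of how a reference-free score can be checked against human error annotations, the snippet below computes a rank correlation between automatic scores and consistency labels; the scores, labels, and correlation protocol are dummy stand-ins, not the paper's data or method.

```python
from scipy.stats import spearmanr

# Given a reference-free consistency score for each document-summary pair
# and a human error annotation, measure how well the score tracks the
# annotation with a rank correlation. Values below are dummy placeholders.
auto_scores = [0.91, 0.35, 0.78, 0.12, 0.66]      # e.g. NLI-style consistency scores
human_consistency = [1, 0, 1, 0, 1]               # 1 = consistent, 0 = contains errors

rho, p_value = spearmanr(auto_scores, human_consistency)
print(f"Spearman correlation: {rho:.3f} (p={p_value:.3f})")
```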

Korean Contextual Information Extraction System using BERT and Knowledge Graph (BERT와 지식 그래프를 이용한 한국어 문맥 정보 추출 시스템)

  • Yoo, SoYeop; Jeong, OkRan
    • Journal of Internet Computing and Services / v.21 no.3 / pp.123-131 / 2020
  • Along with the rapid development of artificial intelligence technology, natural language processing, which deals with human language, is also being actively studied. In particular, BERT, a language model proposed by Google, has performed well in many areas of natural language processing by providing a model pre-trained on a very large corpus. Although BERT offers a multilingual model, a model pre-trained on a large Korean corpus should be used, because there are limitations when the original pre-trained BERT model is applied directly to Korean. Also, text contains not only vocabulary and grammar but also contextual meanings, such as the relation between preceding and following content and the situation. In the existing natural language processing field, research has mainly focused on lexical or grammatical meaning. Accurate identification of the contextual information embedded in text plays an important role in understanding context. Knowledge graphs, which link words by their relationships, have the advantage that a computer can easily learn context from them. In this paper, we propose a system that extracts Korean contextual information using a BERT model pre-trained on a Korean corpus together with a knowledge graph. We build models that extract the person, relationship, emotion, space, and time information that is important in the text, and validate the proposed system through experiments.
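One simplified way to realize the extract-then-graph idea, assuming a Korean BERT NER checkpoint and the Hugging Face pipeline API, is sketched below; the checkpoint path is hypothetical, and relation, emotion, space, and time extraction are omitted for brevity.

```python
import networkx as nx
from transformers import pipeline

# NER-to-graph sketch: a Korean BERT NER pipeline tags entities in a sentence,
# and the tagged spans are added as nodes linked to the sentence node.
# The model path below is a placeholder for a Korean BERT NER checkpoint.
ner = pipeline("ner", model="path/to/korean-bert-ner",
               aggregation_strategy="simple")

def add_sentence_to_graph(graph, sentence):
    graph.add_node(sentence, kind="sentence")
    for ent in ner(sentence):
        graph.add_node(ent["word"], kind=ent["entity_group"])
        graph.add_edge(sentence, ent["word"], relation="mentions")

g = nx.DiGraph()
add_sentence_to_graph(g, "철수는 어제 서울에서 영희를 만났다.")
```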

The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah; Kim, Younha; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.251-273 / 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, studies have actively sought to understand the meaning of words and to perform tasks such as summarization and sentiment classification through pre-trained language models trained on large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. Therefore, in recent years, the flow of research has shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow the model to understand the knowledge of a particular domain better and show performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive, because corpus data of the target domain must be acquired. Furthermore, many cases have been reported in which the performance improvement after further pre-training is insignificant. As such, it is difficult to decide whether to develop a domain-specific pre-trained language model when it is not clear whether the performance will improve dramatically. In this paper, we present a way to proactively estimate the expected performance improvement from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved through further pre-training in each domain. We also developed and present a new indicator that estimates the specificity of a domain based on the normalized frequency of the keywords used in that domain. Finally, we conducted classification using a general pre-trained language model and the domain-specific pre-trained language models of the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement obtained through further pre-training.
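The abstract does not give the exact formula for its specificity indicator; the sketch below shows one plausible way to turn normalized keyword frequencies into a single score, namely the share of domain-corpus probability mass carried by tokens that are much more frequent in the domain corpus than in a general reference corpus. The threshold and smoothing are assumptions, not the paper's definition.

```python
from collections import Counter

# Illustrative domain-specificity score: the fraction of domain-corpus tokens
# whose relative frequency is far higher than in a general reference corpus.
def specificity_index(domain_tokens, general_tokens, ratio_threshold=5.0):
    d, g = Counter(domain_tokens), Counter(general_tokens)
    d_total, g_total = sum(d.values()), sum(g.values())
    specific_mass = 0.0
    for token, count in d.items():
        d_freq = count / d_total
        g_freq = g.get(token, 0.5) / g_total  # light smoothing for unseen tokens
        if d_freq / g_freq >= ratio_threshold:
            specific_mass += d_freq
    return specific_mass  # in [0, 1]; higher means more domain-specific vocabulary
```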

TAGS: Text Augmentation with Generation and Selection (생성-선정을 통한 텍스트 증강 프레임워크)

  • Kim Kyung Min; Dong Hwan Kim; Seongung Jo; Heung-Seon Oh; Myeong-Ha Hwang
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.455-460 / 2023
  • Text augmentation is a methodology that creates new augmented texts by transforming or generating original texts in order to improve the performance of NLP models. However, existing text augmentation techniques have limitations such as a lack of expressive diversity, semantic distortion, and a limited number of augmented texts. Recent text augmentation using large language models and few-shot learning can overcome these limitations, but it also carries a risk of noise due to incorrect generation. In this paper, we propose a text augmentation method called TAGS, which generates multiple candidate texts and selects appropriate ones as the augmented texts. TAGS generates diverse expressions using few-shot learning while effectively selecting suitable data even from a small amount of original text, by using contrastive learning and similarity comparison. We applied this method to task-oriented chatbot data and achieved a more than sixty-fold quantitative increase. We also analyzed the generated texts and confirmed that they were semantically and expressively more diverse than the original texts. Moreover, we trained and evaluated a classification model using the augmented texts and showed that it improved performance by more than 0.1915, confirming that the method helps improve actual model performance.
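A simplified generate-then-select loop in the spirit of TAGS might look like the following: candidates come from a placeholder few-shot LLM call, and only those whose sentence-embedding similarity to the original falls within a chosen band are kept. The embedding model, the similarity band, and generate_candidates are assumptions, and the paper's contrastive-learning component is not reproduced here.

```python
from sentence_transformers import SentenceTransformer, util

# Generate-then-select augmentation sketch: keep only candidate paraphrases
# that stay on-topic (similar enough) without being near-duplicates.
encoder = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

def generate_candidates(text: str, k: int = 5) -> list:
    # placeholder for a few-shot LLM paraphrase call returning k candidates
    raise NotImplementedError

def augment(text: str, min_sim: float = 0.7, max_sim: float = 0.95) -> list:
    candidates = generate_candidates(text)
    src = encoder.encode(text, convert_to_tensor=True)
    selected = []
    for cand in candidates:
        sim = util.cos_sim(src, encoder.encode(cand, convert_to_tensor=True)).item()
        if min_sim <= sim <= max_sim:  # on-topic but not a near-duplicate
            selected.append(cand)
    return selected
```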