• Title/Summary/Keyword: Language Models


Relation Extraction using Generative Language Models (생성형 언어모델을 이용한 관계추출)

  • Jeong Heo;Jong-Hun Shin;Soo-Jong Lim;Oh-Woog Kwon
    • Annual Conference on Human and Language Technology / 2023.10a / pp.707-710 / 2023
  • Relation extraction is a natural language analysis task that infers the semantic relation between two entities in a sentence. With the advance of deep learning, relation extraction has relied on BERT-family encoder (understanding-oriented) language models. However, with the groundbreaking arrival of ChatGPT, research on GPT-family generative language models has become active. In this paper, we propose a prompt construction and chain-of-thought (CoT) training method for improving relation extraction performance using a small generative language model (Kebyt5). Experimental results show that when CoT training is performed with the Kebyt5-large model, performance improves by 3.05% over the Klue-RoBERTa-base model.

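As a rough illustration of the prompting recipe described in the abstract, the snippet below builds a chain-of-thought style prompt for relation extraction and feeds it to a small encoder-decoder model from the Hugging Face transformers library; the `google/byt5-small` checkpoint, prompt wording, and example sentence are stand-in assumptions, not the Kebyt5 setup used in the paper.

```python
# Hypothetical sketch of CoT-style relation extraction with a small seq2seq LM.
# The checkpoint, prompt template, and example are illustrative only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/byt5-small"  # stand-in; the paper uses Kebyt5
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def build_prompt(sentence: str, subj: str, obj: str) -> str:
    # Ask the model to reason step by step before naming the relation label.
    return (
        f"Sentence: {sentence}\n"
        f"Subject entity: {subj}\nObject entity: {obj}\n"
        "Explain the connection between the two entities step by step, "
        "then output the relation label."
    )

prompt = build_prompt(
    "Barack Obama was born in Honolulu.", "Barack Obama", "Honolulu")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

During fine-tuning, the target sequence would pair a short rationale with the gold relation label, so the model learns to emit its reasoning chain before the answer.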

An Interpretative Study on the Nam-Sa Village Space by Shamanistic Space Model (무속 공간모형에 의한 남사마을 공간 해석에 관한 연구)

  • 김동찬;이윤수;임상재
    • Journal of the Korean Institute of Landscape Architecture / v.27 no.2 / pp.95-107 / 1999
  • Shamanism is an ancient culture that most people also regard as a religious rite. As such, shamanism is an important part of Korean tradition and should be a significant basis for theories of Korean exterior space organization. However, in the field of landscape architecture the shamanistic principles of exterior space have not yet been clearly identified. We therefore believe that this study can present a model for the study of shamanistic space language and its application to the Korean village of Namsa. The results of this study are summarized below. 1. The extracted models are unspecialized, circular, and continuous space, analyzed on the basis of the shamanistic space language; these space languages rest on the common Korean belief in eternal human identity and a circular view of the world. 2. Applying the shamanistic space models to Namsa village shows that they follow the Korean space organization principle. Some areas of the village do not conform, because they were built on the structure of the social hierarchy between families or the distinction between head and collateral households. 3. Applying the shamanistic space model to Namsa village shows that the model follows the Korean space organization principle; we can therefore say that Namsa village was built by a shamanistic system that pursued eternal human identity.


An Object Model Verification System based on the Constraint Language (제약 언어를 이용한 객체 모델 검증시스템)

  • Kim, Jin-Soo;Kang, Kwon;Lee, Kyung-Hwan
    • The Transactions of the Korea Information Processing Society / v.3 no.6 / pp.1453-1467 / 1996
  • A software development process can be regarded as the process of building a series of models. Developers, however, have had no method to verify these subjectively created models. This paper applies verification tools to the early analysis phase, whereas existing verification tools have targeted implementations. We define a constraint language that expresses the suggested modeling disciplines, and a verification system has been built upon that constraint language; the system is shown to enhance the quality and consistency of models constructed with the object model editor.

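The abstract does not give the syntax of the constraint language, so the toy sketch below only illustrates the general idea of evaluating declarative constraints against an object model; the dictionary-based model and the two constraints are invented for illustration, not taken from the paper.

```python
# Illustrative only: a toy constraint checker over a dictionary-based object model.
object_model = {
    "classes": {
        "Order": {"attributes": ["id", "total"], "associations": ["LineItem"]},
        "LineItem": {"attributes": ["qty", "price"], "associations": []},
    }
}

# Each constraint is a (description, predicate) pair evaluated against the model.
constraints = [
    ("every class has at least one attribute",
     lambda m: all(c["attributes"] for c in m["classes"].values())),
    ("association targets must be defined classes",
     lambda m: all(t in m["classes"]
                   for c in m["classes"].values() for t in c["associations"])),
]

for description, check in constraints:
    status = "OK" if check(object_model) else "VIOLATED"
    print(f"{status}: {description}")
```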

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho;Jin-Xia Huang;Oh-Woog Kwon
    • ETRI Journal / v.46 no.1 / pp.82-95 / 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, trait-specific assessment remains a challenge because earlier methods lacked the depth to model dual assessments for holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic score. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
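
A minimal sketch of the multi-task scoring idea, assuming a document-level representation is already available: one regression head predicts the holistic score and one head per trait predicts trait scores, trained with a joint loss. The hidden size, trait count, and loss weighting are placeholders, not the DualBERT-Trans-CNN configuration.

```python
# Minimal multi-task scoring head sketch (PyTorch); dimensions and trait count
# are placeholder assumptions, not the DualBERT-Trans-CNN configuration.
import torch
import torch.nn as nn

class MultiTraitScorer(nn.Module):
    def __init__(self, hidden_dim: int = 768, num_traits: int = 4):
        super().__init__()
        # One regression head for the holistic score, one per trait.
        self.holistic_head = nn.Linear(hidden_dim, 1)
        self.trait_heads = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(num_traits))

    def forward(self, doc_repr: torch.Tensor):
        holistic = self.holistic_head(doc_repr).squeeze(-1)
        traits = torch.cat([h(doc_repr) for h in self.trait_heads], dim=-1)
        return holistic, traits

model = MultiTraitScorer()
doc_repr = torch.randn(8, 768)            # stand-in for document-level BERT output
holistic, traits = model(doc_repr)
target_h, target_t = torch.rand(8), torch.rand(8, 4)
loss = nn.functional.mse_loss(holistic, target_h) + \
       nn.functional.mse_loss(traits, target_t)    # joint MTL objective
loss.backward()
```

Sharing the encoder while summing the per-task losses is what lets the holistic and trait predictions inform each other, which is the interrelation the abstract emphasizes.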

Multi-source information integration framework using self-supervised learning-based language model (자기 지도 학습 기반의 언어 모델을 활용한 다출처 정보 통합 프레임워크)

  • Kim, Hanmin;Lee, Jeongbin;Park, Gyudong;Sohn, Mye
    • Journal of Internet Computing and Services / v.22 no.6 / pp.141-150 / 2021
  • AI-enabled warfare, grounded in artificial intelligence technology, is expected to become the main issue of future warfare. Natural language processing is a core AI technology and can significantly reduce the burden on commanders and staff of understanding reports, information objects, and intelligence written in natural language. In this paper, we propose a Language model-based Multi-source Information Integration (LAMII) framework to reduce the information overload of commanders and support rapid decision-making. The proposed LAMII framework consists of two key steps: representation learning based on language models in a self-supervised way, and document integration using autoencoders. In the first step, representation learning that can identify similarity relationships between two heterogeneous sentences is performed using a self-supervised learning technique. In the second step, the learned model is used to find and integrate documents from multiple sources that convey similar contents or topics. At this stage, an autoencoder is used to measure the information redundancy of sentences in order to remove duplicate sentences. To demonstrate the superiority of the proposed approach, we conducted comparison experiments using existing language models and the benchmark sets used to evaluate their performance. The experiments demonstrated that the proposed LAMII framework predicts similarity relationships between heterogeneous sentences more effectively than the other language models.
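
One plausible reading of the two LAMII steps, sketched with stand-in components: a pretrained sentence encoder (here `all-MiniLM-L6-v2` from sentence-transformers, not the encoder used in the paper) supplies the similarity judgments, and a small autoencoder over the embeddings supplies a per-sentence reconstruction-error signal of the kind the paper uses as a redundancy measure; the threshold and sizes are arbitrary assumptions.

```python
# Stand-in sketch of the two LAMII steps; encoder checkpoint, autoencoder size,
# and the 0.8 threshold are arbitrary assumptions, not the paper's configuration.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in sentence encoder

# Step 1: score similarity between sentences coming from different sources.
sentences = ["Unit A has moved into sector 3.",
             "Sector 3 is now occupied by unit A.",
             "Resupply is scheduled for tomorrow."]
emb = encoder.encode(sentences, convert_to_tensor=True)
sim = util.cos_sim(emb, emb)                       # pairwise cosine similarity

# Step 2: an autoencoder over sentence embeddings; its reconstruction error is
# the kind of redundancy signal described in the abstract (untrained here, so
# the numbers only show the plumbing).
class SentenceAE(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.enc = nn.Linear(dim, bottleneck)
        self.dec = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.dec(torch.relu(self.enc(x)))

ae = SentenceAE(emb.shape[1])
recon_error = nn.functional.mse_loss(ae(emb), emb, reduction="none").mean(dim=1)

# Candidate duplicate pairs to drop before integrating the documents.
duplicates = [(i, j) for i in range(len(sentences))
              for j in range(i + 1, len(sentences)) if sim[i, j] > 0.8]
print(duplicates, recon_error.detach())
```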

A Comparative Study of Second Language Acquisition Models: Focusing on Vowel Acquisition by Chinese Learners of Korean (중국인 학습자의 한국어 모음 습득에 대한 제2언어 습득 모델 비교 연구)

  • Kim, Jooyeon
    • Phonetics and Speech Sciences / v.6 no.4 / pp.27-36 / 2014
  • This study provided a longitudinal examination of Chinese learners' acquisition of Korean vowels. Specifically, I examined the Korean monophthongs /i, e, ɨ, ʌ, a, u, o/ produced by Chinese learners at 1 month and at 12 months of learning, and attempted to verify empirically how the learners acquire Korean vowels in relation to their mother tongue through the patterns of the Perceptual Assimilation Model (henceforth PAM) of Best (Best, 1993; 1994; Best & Tyler, 2007) and the Speech Learning Model (henceforth SLM) of Flege (Flege, 1987; Bohn & Flege, 1992; Flege, 1995). As a result, most of the present findings are shown to be explained similarly by the PAM and the SLM, and the only discrepancy between the two models is found in the 'similar' category of sounds between the learners' native language and the target language. Specifically, the acquisition pattern of /u/ and /o/ in Korean is well accounted for by the PAM, but not by the SLM. The SLM does not explain why the Chinese learners had difficulty acquiring the Korean vowel /u/, because according to the SLM the vowel /u/ in Chinese (the native language) is matched either to the vowel /u/ or /o/ in Korean (the target language); that is, there is only a one-to-one matching relationship between the native language and the target language. In contrast, the Chinese learners' difficulty with the Korean vowel /u/ is well accounted for by the PAM, in that the Chinese vowel /u/ is matched to the vowel pair /o, u/ in Korean rather than to the single vowel /o/ or /u/.

Utilizing AI Foundation Models for Language-Driven Zero-Shot Object Navigation Tasks (언어-기반 제로-샷 물체 목표 탐색 이동 작업들을 위한 인공지능 기저 모델들의 활용)

  • Jeong-Hyun Choi;Ho-Jun Baek;Chan-Sol Park;Incheol Kim
    • The Journal of Korea Robotics Society / v.19 no.3 / pp.293-310 / 2024
  • In this paper, we propose an agent model for Language-Driven Zero-Shot Object Navigation (L-ZSON) tasks, which takes a free-form language description of an unseen target object and navigates to find it in an unexplored environment. In general, an L-ZSON agent should be able to visually ground the target object by understanding its free-form language description and recognizing the corresponding visual object in camera images. Moreover, the agent should also be able to build a rich spatial context map of the unknown environment and decide on efficient exploration actions based on that map until the target object enters the field of view. To address these challenging issues, we propose AML (Agent Model for L-ZSON), a novel L-ZSON agent model that makes effective use of AI foundation models such as large language models (LLM) and vision-language models (VLM). To tackle the visual grounding of the target object description, our agent model employs GLEE, a VLM pretrained to locate and identify arbitrary objects in images and videos in open-world scenarios. To address the exploration policy, the proposed agent model leverages the commonsense knowledge of an LLM to make sequential navigational decisions. Through various quantitative and qualitative experiments on the RoboTHOR 3D simulation platform and the PASTURE L-ZSON benchmark dataset, we show the superior performance of the proposed agent model.
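
A schematic sketch of the exploration-decision loop described above, with a stubbed LLM call so it runs end to end; the detection format, frontier names, and prompt are illustrative assumptions rather than the AML implementation, and GLEE's actual interface is not modeled here.

```python
# Schematic sketch: VLM detections feed an LLM prompt that picks the next frontier.
# The query_llm stub and data formats are illustrative assumptions only.
from typing import Dict, List

def query_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call (API or local model)."""
    return "frontier_2"  # canned answer so the sketch runs end to end

def choose_next_frontier(target_desc: str,
                         detections: List[Dict],
                         frontiers: List[str]) -> str:
    # Summarize what has been visually grounded near each frontier, then let the
    # LLM use commonsense to pick where the target is most likely to be found.
    context = "\n".join(
        f"- {d['label']} seen near {d['frontier']}" for d in detections)
    prompt = (
        f"Target object: {target_desc}\n"
        f"Objects observed so far:\n{context}\n"
        f"Unexplored frontiers: {', '.join(frontiers)}\n"
        "Which frontier should the robot explore next? Answer with its name."
    )
    answer = query_llm(prompt)
    return answer if answer in frontiers else frontiers[0]

detections = [{"label": "sofa", "frontier": "frontier_1"},
              {"label": "refrigerator", "frontier": "frontier_2"}]
print(choose_next_frontier("a carton of milk", detections,
                           ["frontier_1", "frontier_2", "frontier_3"]))
```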

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit composing Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing, the dataset contained 74 unique characters including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. All optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, but its validation loss and perplexity did not improve significantly and even worsened under some conditions. On the other hand, when comparing automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were grammatically almost perfect. The results of this study are expected to be widely used for Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
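
A minimal Keras sketch matching the setup described in the abstract (74-symbol vocabulary, 20-symbol context window predicting the 21st symbol, stacked LSTM layers); the layer width, the optimizer shown, and the dummy batch are placeholder assumptions, not the paper's exact configuration.

```python
# Sketch of a phoneme-level LSTM language model following the abstract's setup;
# layer width, optimizer, and the dummy data are placeholder assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 74   # unique phonemes/characters after pre-processing
SEQ_LEN = 20      # 20 consecutive symbols predict the 21st

model = keras.Sequential([
    keras.Input(shape=(SEQ_LEN, VOCAB_SIZE)),   # one-hot encoded context window
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),                             # 3 stacked LSTM layers
    layers.Dense(VOCAB_SIZE, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Dummy batch in place of the Old Testament corpus used in the paper.
x = np.random.rand(32, SEQ_LEN, VOCAB_SIZE)
y = keras.utils.to_categorical(
    np.random.randint(VOCAB_SIZE, size=32), num_classes=VOCAB_SIZE)
model.fit(x, y, epochs=1, verbose=0)
```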