• Title/Summary/Keyword: large language model


A Study About Verification Model for Cooperation of Software Components of AUML Base (AUML기반의 소프트웨어 컴포넌트들의 협력성을 위한 검증 모델에 관한 연구)

  • Gawn, Han-Hyoun;Park, Jae-Bock
    • Journal of the Korea Computer Industry Society
    • /
    • v.6 no.3
    • /
    • pp.529-538
    • /
    • 2005
  • AUML (Agent Unified Modeling Language) is a language for the specification, visualization, and documentation of agent-based software systems. As software grows larger and more sophisticated, this study applies AUML to an agent's BDI model within an automated application-generation system and investigates interoperation between component systems. Based on the FIPA (Foundation for Intelligent Physical Agents) standard specification, components of mutually different types can still exchange data and cooperate with each other through ACL messages and protocols. Through object-oriented modeling, the study examines methods for minimizing errors and ensuring accuracy and consistency when implementing this on a meta-model basis.
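The cooperation the abstract describes rests on a shared message structure. A minimal sketch of a FIPA-ACL-style exchange, assuming a toy request/inform interaction; the component names and the `handle` responder are invented for illustration, and the field names follow the FIPA ACL message structure:

```python
from dataclasses import dataclass

# Minimal sketch of a FIPA-ACL-style message; the fields mirror the
# FIPA ACL message structure (performative, sender, receiver, content,
# content language, ontology).
@dataclass
class ACLMessage:
    performative: str          # e.g. "request", "inform", "not-understood"
    sender: str
    receiver: str
    content: str
    language: str = "fipa-sl"  # content language
    ontology: str = ""         # shared vocabulary both components agree on

def handle(msg: ACLMessage) -> ACLMessage:
    """Toy responder: a component of a different type can still cooperate
    because it depends only on the message structure, not on the sender."""
    if msg.performative == "request":
        return ACLMessage("inform", msg.receiver, msg.sender,
                          f"done: {msg.content}", msg.language, msg.ontology)
    return ACLMessage("not-understood", msg.receiver, msg.sender, msg.content)

req = ACLMessage("request", "component-A", "component-B", "open valve")
rep = handle(req)
print(rep.performative, rep.content)  # inform done: open valve
```

Because both sides agree only on the envelope, the implementation behind each component can differ freely, which is the interoperability property the paper verifies.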


A Fuzzy-AHP-based Movie Recommendation System with the Bidirectional Recurrent Neural Network Language Model (양방향 순환 신경망 언어 모델을 이용한 Fuzzy-AHP 기반 영화 추천 시스템)

  • Oh, Jae-Taek;Lee, Sang-Yong
    • Journal of Digital Convergence
    • /
    • v.18 no.12
    • /
    • pp.525-531
    • /
    • 2020
  • In today's IT environment, where large volumes of diverse information are distributed, recommendation systems capable of quickly figuring out users' needs and helping them with their decisions are in the spotlight. Current recommendation systems, however, have a couple of problems: user preference may not be reflected in the system right away as users' tastes or interests change, and items with no relation to users' preferences may be recommended, induced by advertising. In an effort to solve these problems, this study proposes a Fuzzy-AHP-based movie recommendation system that applies the BRNN (Bidirectional Recurrent Neural Network) language model. Fuzzy-AHP was applied to reflect users' tastes and interests in a clear and objective way, and the BRNN language model was adopted to analyze movie-related data collected in real time and predict movies preferred by users. The system's performance was assessed with grid searches to examine the fitness of the learning model over the entire word-set size. The results show that the learning model recorded a mean cross-validation index of 97.9% over the entire word-set size, proving its fitness. The model recorded RMSEs of 0.66 and 0.805 against the movie ratings on Naver and against the LSTM language model, respectively, demonstrating the system's superior performance in predicting movie ratings.
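The AHP step turns pairwise comparisons of criteria into preference weights. A minimal crisp-AHP sketch (the paper uses a fuzzy extension, which replaces the crisp values below with fuzzy numbers); the criteria names and the 1-9 scale comparison values are illustrative only:

```python
# Crisp-AHP sketch: derive priority weights from a pairwise comparison
# matrix by normalizing each column and averaging across rows.
def ahp_weights(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    return [sum(normalized[i]) / n for i in range(n)]

# Hypothetical movie criteria: genre vs. actor vs. rating, compared on
# the 1-9 Saaty scale (matrix[i][j] = how much criterion i dominates j).
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(pairwise)
print([round(w, 3) for w in weights])  # weights sum to 1, genre largest
```

The resulting priority vector is what a recommender of this kind would use to weight per-criterion movie scores before ranking.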

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependency between objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To learn a neural language model, texts need to be decomposed into words or morphemes. Since a training set of sentences generally includes a huge number of words or morphemes, however, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that makes up Korean text. We constructed the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was done on Old Testament texts using the deep learning package Keras, based on Theano.
After pre-processing the texts, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 proportion. All the simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all the optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm. The stochastic gradient algorithm also took the longest time to train for both the 3- and 4-layer LSTM models. On average, the 4-LSTM-layer model took 69% longer to train than the 3-LSTM-layer model, yet its validation loss and perplexity were not improved significantly, and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-LSTM-layer model tended to generate sentences closer to natural language than the 3-LSTM-layer model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence-generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for the processing of Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
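The dataset construction described above (20-character windows, the 21st character as the target, a 70:15:15 split) can be sketched as follows; the sample sentence stands in for the pre-processed corpus:

```python
# Slide a window of `length` characters over the text and pair each
# window with the character that immediately follows it.
def make_windows(text, length=20):
    return [(text[i:i + length], text[i + length])
            for i in range(len(text) - length)]

# Split the ordered pairs into training, validation, and test sets.
def split(pairs, train=0.70, val=0.15):
    n = len(pairs)
    a, b = int(n * train), int(n * (train + val))
    return pairs[:a], pairs[a:b], pairs[b:]

text = "in the beginning god created the heaven and the earth"
pairs = make_windows(text)
train_set, val_set, test_set = split(pairs)
print(len(pairs), len(train_set), len(val_set), len(test_set))
```

For the actual corpus, each character in a window would additionally be one-hot encoded over the 74-symbol alphabet before being fed to the LSTM layers.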

DESIGN AND IMPLEMENTATION OF METADATA MODEL FOR SENSOR DATA STREAM

  • Lee, Yang-Koo;Jung, Young-Jin;Ryu, Keun-Ho;Kim, Kwang-Deuk
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.768-771
    • /
    • 2006
  • In a WSN (Wireless Sensor Network) environment, a large number of small, heterogeneous sensors continuously generate data streams in physical space. These sensors produce both measured data and metadata; the metadata includes various features such as location, sampling time, measurement unit, and sensor type. Until now, wireless sensors have been managed with individual specifications rather than an explicit metadata standard, so it has been difficult to collect data from, and communicate between, heterogeneous sensors. To solve this problem, the OGC (Open Geospatial Consortium) has proposed SensorML (Sensor Model Language), which can manage the metadata of heterogeneous sensors in a uniform format. In this paper, we introduce a metadata model that uses the SensorML specification to manage various sensors distributed over a wide area. In addition, we implement a metadata management module applied to a sensor data stream management system. It provides functions for generating metadata files and for registering and storing them according to the SensorML definition.
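A minimal sketch of generating such a metadata record: the element names below are simplified stand-ins for the actual OGC SensorML schema, and the sensor values are invented, but the idea of describing every sensor in one uniform XML format is the same:

```python
import xml.etree.ElementTree as ET

# Build a uniform XML metadata record for one sensor. The element names
# (SensorDescription, location, measurementUnit, samplingTime) are
# illustrative simplifications, not the real SensorML vocabulary.
def sensor_metadata(sensor_id, location, unit, sampling_time):
    root = ET.Element("SensorDescription", id=sensor_id)
    ET.SubElement(root, "location").text = location
    ET.SubElement(root, "measurementUnit").text = unit
    ET.SubElement(root, "samplingTime").text = sampling_time
    return root

doc = sensor_metadata("temp-001", "37.54N 126.97E", "celsius", "PT10S")
xml_text = ET.tostring(doc, encoding="unicode")
print(xml_text)
```

Because every sensor, whatever its vendor, is described by the same record structure, a stream management system can register and query heterogeneous sensors through one interface.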


Deep Learning Model Parallelism (딥러닝 모델 병렬 처리)

  • Park, Y.M.;Ahn, S.Y.;Lim, E.J.;Choi, Y.S.;Woo, Y.C.;Choi, W.
    • Electronics and Telecommunications Trends
    • /
    • v.33 no.4
    • /
    • pp.1-13
    • /
    • 2018
  • Deep learning (DL) models have been widely applied with big data to AI applications such as image recognition and language translation. Recently, DL models have become larger and more complicated, and have been merged together. To accelerate the training of large-scale deep learning models, a few distributed deep learning frameworks provide model parallelism, which partitions the model parameters for non-shared parallel access and updates across multiple machines. As a training acceleration method, however, model parallelism is not as commonly used as data parallelism, owing to the difficulty of implementing it efficiently. This paper provides a comprehensive survey of the state of the art in model parallelism by comparing the implementation technologies in several deep learning frameworks that support it, and suggests future research directions for improving model parallelism technology.
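A toy illustration of the partitioning idea, assuming a two-layer linear model split across two simulated devices; real frameworks place the partitions on separate GPUs and transfer activations over an interconnect, whereas here the "devices" are plain dicts:

```python
# Model parallelism in miniature: each device holds only its own layer
# parameters (non-shared), and activations are handed off at the
# partition boundary between the two devices.
def linear(w, b, xs):
    return [sum(wi * xi for wi, xi in zip(w_row, xs)) + bi
            for w_row, bi in zip(w, b)]

device0 = {"w": [[1.0, 0.0], [0.0, 1.0]], "b": [0.5, 0.5]}  # first partition
device1 = {"w": [[1.0, 1.0]], "b": [0.0]}                   # second partition

def forward(xs):
    hidden = linear(device0["w"], device0["b"], xs)   # runs on device 0
    # activation transfer between devices happens at this boundary
    return linear(device1["w"], device1["b"], hidden)  # runs on device 1

print(forward([1.0, 2.0]))  # [4.0]
```

The difficulty the abstract alludes to shows up even here: the second partition is idle until the first finishes, which is why efficient model parallelism needs careful pipelining and partition balancing.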

Development of Active Data Mining Component for Web Database Applications (웹 데이터베이스 응용을 위한 액티브데이터마이닝 컴포넌트 개발)

  • Choi, Yong-Goo
    • Journal of Information Technology Applications and Management
    • /
    • v.15 no.2
    • /
    • pp.1-14
    • /
    • 2008
  • The prosperity of information technologies arising from the great progress of e-business during the last decade has made it necessary to develop software for active data mining that discovers hidden predictive information about business trends and behavior from very large databases. This paper therefore develops an active data mining object (ADMO) component, which provides real-time predictive information from web databases. The ADMO component extends the ADO (ActiveX Data Object) component into an active data mining component based on COM (Component Object Model), exposed through an application program interface (API). The development of the ADMO component made use of Windows Script Components (WSC) based on XML (eXtensible Markup Language). To investigate the application environments and practical uses of the ADMO component, experiments on diverse practical applications were performed. As a result, the ADMO component was confirmed to effectively extract analytic information for classification and aggregation from very large databases for web services.
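The kind of aggregation and classification the component extracts can be shown in miniature. This sketch is not the COM/ADO implementation the paper describes; the records, the group-by aggregation, and the threshold rule are all invented for illustration:

```python
# Toy of extracting analytic information from database records:
# aggregate measures per group, then classify each group with a rule.
records = [
    {"customer": "a", "amount": 120},
    {"customer": "b", "amount": 30},
    {"customer": "a", "amount": 80},
]

def aggregate(rows, key, field):
    """Group-by sum, the aggregation half of the extraction."""
    totals = {}
    for row in rows:
        totals[row[key]] = totals.get(row[key], 0) + row[field]
    return totals

def classify(total, threshold=100):
    """Threshold rule, a stand-in for the classification half."""
    return "high-value" if total >= threshold else "regular"

totals = aggregate(records, "customer", "amount")
labels = {c: classify(t) for c, t in totals.items()}
print(totals, labels)
```

An "active" component would run this extraction continuously as new rows arrive, rather than on demand.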


Phonetic Question Set Generation Algorithm (음소 질의어 집합 생성 알고리즘)

  • 김성아;육동석;권오일
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.2
    • /
    • pp.173-179
    • /
    • 2004
  • Due to the insufficiency of training data in large vocabulary continuous speech recognition, similar context dependent phones can be clustered by decision trees to share the data. When the decision trees are built and used to predict unseen triphones, a phonetic question set is required. The phonetic question set, which contains categories of the phones with similar co-articulation effects, is usually generated by phonetic or linguistic experts. This knowledge-based approach for generating phonetic question set, however, may reduce the homogeneity of the clusters. Moreover, the experts must adjust the question sets whenever the language or the PLU (phone-like unit) of a recognition system is changed. Therefore, we propose a data-driven method to automatically generate phonetic question set. Since the proposed method generates the phone categories using speech data distribution, it is not dependent on the language or the PLU, and may enhance the homogeneity of the clusters. In large vocabulary speech recognition experiments, the proposed algorithm has been found to reduce the error rate by 14.3%.
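The data-driven grouping can be sketched as bottom-up clustering of phones by the similarity of their acoustic statistics. In this toy version each phone is summarized by a single scalar mean (a stand-in for real spectral statistics), and the phone labels and values are hypothetical:

```python
# Data-driven phone grouping: repeatedly merge the two clusters whose
# acoustic centroids are closest, until the desired number of groups
# remains. Each group then acts as one phonetic question category.
def cluster_phones(means, n_groups):
    groups = [[p] for p in means]            # start with one phone per group
    while len(groups) > n_groups:
        best = None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                ci = sum(means[p] for p in groups[i]) / len(groups[i])
                cj = sum(means[p] for p in groups[j]) / len(groups[j])
                d = abs(ci - cj)
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        groups[i] += groups.pop(j)           # merge the closest pair
    return [sorted(g) for g in groups]

# Hypothetical phones with toy acoustic means:
phone_means = {"p": 1.0, "b": 1.2, "t": 3.0, "d": 3.1, "a": 8.0}
print(cluster_phones(phone_means, 3))
```

Because the groups come from the speech data itself, regenerating them for a new language or PLU set requires no expert intervention, which is the portability claim of the paper.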

Developing a Conceptual ERP Model by using "4+1 View" ("4+1 뷰"를 적용한 ERP 개념 모델 개발)

  • 허분애;정기원;이남용
    • The Journal of Society for e-Business Studies
    • /
    • v.5 no.2
    • /
    • pp.81-99
    • /
    • 2000
  • Nowadays, many commercial ERP products, such as Oracle, SAP, and Baan, are designed for large-scale companies. It is difficult for small and medium-sized companies, with their limited budgets and resources (e.g., human, organizational, and technical), to use them as they are, so a new ERP system needs to be provided for small and medium-sized companies. In this paper, we model and provide a conceptual ERP model for small and medium-sized companies by using the "4+1 View" architecture model of the Unified Modeling Language (UML). The conceptual ERP model consists of five subsystems: Manufacturing, Sales, HumanResource and Payroll, Accounting, and Trading. In particular, we describe the conceptual ERP model with a focus on the "Manufacturing" subsystem, using several UML diagrams. With this conceptual ERP model, ERP system developers at small and medium-sized companies can obtain many benefits: it improves the efficiency of the software development process and helps with gathering user requirements and describing both the functional and nonfunctional aspects of the ERP system.


The Effect of Domain Specificity on the Performance of Domain-Specific Pre-Trained Language Models (도메인 특수성이 도메인 특화 사전학습 언어모델의 성능에 미치는 영향)

  • Han, Minah;Kim, Younha;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.251-273
    • /
    • 2022
  • Recently, research on applying deep learning to text analysis has steadily continued. In particular, research has been actively conducted on understanding the meaning of words and performing tasks such as summarization and sentiment classification with pre-trained language models that learn from large datasets. However, existing pre-trained language models show limitations in that they do not understand specific domains well. Therefore, in recent years, the flow of research has shifted toward creating language models specialized for particular domains. Domain-specific pre-trained language models allow a model to better understand the knowledge of a particular domain and yield performance improvements on various tasks in that field. However, domain-specific further pre-training is expensive, as corpus data for the target domain must be acquired, and many cases have been reported in which the performance improvement after further pre-training is insignificant in some domains. It is thus difficult to decide whether to develop a domain-specific pre-trained language model when it is not clear whether the performance will improve dramatically. In this paper, we present a way to proactively estimate the performance improvement expected from further pre-training in a domain before actually performing it. Specifically, after selecting three domains, we measured the increase in classification accuracy achieved through further pre-training in each domain. We also developed and presented new indicators to estimate the specificity of a domain based on the normalized frequency of the keywords used in it. Finally, we conducted classification using both a general pre-trained language model and a domain-specific pre-trained language model for each of the three domains. As a result, we confirmed that the higher the domain specificity index, the greater the performance improvement through further pre-training.
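One way such a specificity indicator can be built is to compare normalized keyword frequencies in the domain corpus against a general corpus; the exact indicator in the paper may differ, and the toy corpora below are invented:

```python
from collections import Counter

# Domain specificity sketch: sum, over the domain's keywords, the
# probability mass they carry in the domain corpus beyond what they
# carry in a general corpus. A higher score means more domain-unique
# vocabulary, i.e. higher expected benefit from further pre-training.
def specificity_index(domain_tokens, general_tokens):
    d, g = Counter(domain_tokens), Counter(general_tokens)
    nd, ng = len(domain_tokens), len(general_tokens)
    score = 0.0
    for word, count in d.items():
        p_domain = count / nd
        p_general = g.get(word, 0) / ng
        score += max(p_domain - p_general, 0.0)
    return score

legal = "plaintiff court appeal court verdict plaintiff".split()
news = "court city weather game election city".split()
print(round(specificity_index(legal, news), 3))  # 0.833
```

Under this construction the index lies in [0, 1]: it is 0 when the domain's word distribution is contained in the general one, and approaches 1 when the domain uses almost entirely its own vocabulary.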

LLM-based chatbot system to improve worker efficiency and prevent safety incidents (작업자의 업무 능률 향상과 안전 사고 방지를 위한 LLM 기반 챗봇 시스템)

  • Doohwan Kim;Yohan Han;Inhyuk Jeong;Yeongseok Hwang;Jinju Park;Nahyeon Lee;Yujin Lee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.321-324
    • /
    • 2024
  • This paper proposes an STT-integrated chatbot system based on LLMs (Large Language Models). In manufacturing plants, the lack of safety training and the growing number of foreign workers have emerged as new challenges in safety-critical work environments. This study aims to solve these problems through an innovative chatbot system that combines a language model with speech recognition (Speech-to-Text, STT) technology. The proposed system helps workers easily access equipment manuals and safety guidelines, and enables fast and accurate responses in emergency situations. In this system, the LLM identifies the worker's intent, while the STT technology processes voice commands effectively. Experimental results confirmed that the system is effective in increasing workers' efficiency and removing language barriers. This research is expected to contribute to improving worker safety and efficiency in manufacturing environments.
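The pipeline can be sketched as speech, to transcript, to intent, to manual lookup. Everything below is a placeholder: `transcribe` stands in for an STT service, the keyword-matching `detect_intent` stands in for the LLM's intent recognition, and the manual entries are invented:

```python
# Stub of the proposed worker-assistance pipeline. In a real system,
# transcribe() would call an STT service and detect_intent() would
# query an LLM; here both are trivial placeholders.
SAFETY_MANUAL = {
    "press": "Keep both hands outside the guard while the press is running.",
    "forklift": "Sound the horn at every blind corner.",
}

def transcribe(audio):
    return audio  # placeholder: assume STT already produced this text

def detect_intent(text):
    for equipment in SAFETY_MANUAL:
        if equipment in text.lower():
            return equipment
    return None

def chatbot(audio):
    text = transcribe(audio)
    intent = detect_intent(text)
    if intent is None:
        return "Please ask about a specific piece of equipment."
    return SAFETY_MANUAL[intent]

print(chatbot("How do I operate the press safely?"))
```

Keeping the guideline text in a lookup table rather than in the model also means safety answers stay exactly as written in the manual, which matters in a safety-critical setting.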
