• Title/Summary/Keyword: large language model


Bilinear Graph Neural Network-Based Reasoning for Multi-Hop Question Answering (다중 홉 질문 응답을 위한 쌍 선형 그래프 신경망 기반 추론)

  • Lee, Sangui;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.9 no.8 / pp.243-250 / 2020
  • Knowledge graph-based question answering not only requires a deep understanding of the given natural language questions, but also effective reasoning to find the correct answers on a large knowledge graph. In this paper, we propose a deep neural network model for effective reasoning on a knowledge graph, which can find correct answers to complex questions requiring multi-hop inference. The proposed model makes use of a highly expressive bilinear graph neural network (BGNN), which can utilize context information between a pair of neighboring nodes and allows bidirectional feature propagation between each entity node and one of its neighboring nodes on the knowledge graph. Through experiments with an open-domain knowledge base (Freebase) and two natural-language question answering benchmark datasets (WebQuestionsSP and MetaQA), we demonstrate the effectiveness and performance of the proposed model.
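
The bilinear aggregation idea mentioned in this abstract can be illustrated with a small sketch. This is a minimal NumPy illustration of pairwise neighbor interactions, not the authors' implementation; the shared weight matrix `W`, the mean over neighbor pairs, and the toy graph are assumptions made for illustration only.

```python
import numpy as np
from itertools import combinations

def bilinear_aggregate(h, neighbors, W):
    """Sketch of a bilinear aggregator: interactions between pairs of
    neighboring nodes are captured by element-wise products of their
    linearly transformed features, averaged over all neighbor pairs."""
    pairs = list(combinations(neighbors, 2))
    if not pairs:
        return np.zeros(W.shape[1])
    interactions = [(h[i] @ W) * (h[j] @ W) for i, j in pairs]
    return np.mean(interactions, axis=0)

# toy example: 4 nodes with 8-dim features, node 0 has neighbors {1, 2, 3}
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))   # node feature matrix (placeholder)
W = rng.normal(size=(8, 8))   # shared weight matrix (assumed)
agg = bilinear_aggregate(h, [1, 2, 3], W)
print(agg.shape)              # (8,) -- aggregated context for node 0
```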

The Unsupervised Learning-based Language Modeling of Word Comprehension in Korean

  • Kim, Euhee
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.41-49 / 2019
  • We build an unsupervised machine learning-based language model that estimates the amount of information needed to process words consisting of subword-level morphemes and syllables. We then investigate whether the reading times of words, which reflect their morphemic and syllabic structures, are predicted by an information-theoretic measure such as surprisal. Specifically, the proposed Morfessor-based unsupervised machine learning model is first trained on the large set of sentences in the Sejong Corpus and is then applied to estimate the information-theoretic measure for each word in the test data of Korean words. The reading times of the words in the test data are taken from the Korean Lexicon Project (KLP) database. A comparison between the information-theoretic measures of the words and the corresponding reading times using a linear mixed-effects model reveals a reliable correlation between surprisal and reading time. We conclude that surprisal is positively related to processing effort (i.e., reading time), confirming the surprisal hypothesis.
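
For reference, surprisal here is the standard information-theoretic quantity: the negative log probability of a unit given its context. The sketch below computes it for a hypothetical subword segmentation; the segments and probabilities are placeholders, not estimates from the Sejong Corpus or the paper.

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 P(unit | context)."""
    return -math.log2(prob)

# hypothetical segmentation of a Korean word into morpheme-level units,
# each paired with a placeholder conditional probability
segments = [("먹", 0.05), ("었", 0.30), ("다", 0.45)]
word_surprisal = sum(surprisal(p) for _, p in segments)
print(f"word surprisal = {word_surprisal:.2f} bits")
```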

Can ChatGPT Pass the National Korean Occupational Therapy Licensure Examination? (ChatGPT는 한국작업치료사면허시험에 합격할 수 있을까?)

  • Hong, Junhwa;Kim, Nayeon;Min, Hyemin;Yang, Hamin;Lee, Sihyun;Choi, Seojin;Park, Jin-Hyuck
    • Therapeutic Science for Rehabilitation / v.13 no.1 / pp.65-74 / 2024
  • Objective : This study assessed ChatGPT, an artificial intelligence system based on a large language model, for its ability to pass the National Korean Occupational Therapy Licensure Examination (NKOTLE). Methods : Using NKOTLE questions from 2018 to 2022, provided by the Korea Health and Medical Personnel Examination Institute, this study employed English prompts to determine the accuracy of ChatGPT in providing correct answers. Two researchers independently conducted the entire process, and the average accuracy of the two researchers was used to determine whether ChatGPT passed in each of the five years. The degree of agreement between the ChatGPT answers obtained by the two researchers was also assessed. Results : ChatGPT passed the 2020 examination but failed the examinations of the other four years. Specifically, its accuracy on questions related to medical regulations ranged from 25% to 57%, whereas its accuracy on the other questions exceeded 60%. ChatGPT exhibited strong agreement between researchers, except on medical regulation questions, and this agreement was significantly correlated with accuracy. Conclusion : There are still limitations to applying ChatGPT to questions influenced by language or culture. Future studies should explore its potential as an educational tool for students majoring in occupational therapy through optimized prompts and continuous learning from the data.
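
The evaluation procedure (averaging per-year accuracy over two independent runs and checking agreement between them) can be sketched as follows. The graded answers, the use of simple percent agreement, and the 60% pass threshold are assumptions for illustration; the abstract does not state the exact statistic or criterion used.

```python
def accuracy(graded):
    """Fraction of items graded as correct (True)."""
    return sum(graded) / len(graded)

def percent_agreement(a, b):
    """Share of items where the two runs produced the same verdict."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# hypothetical grading of one exam year by the two researchers (True = correct)
run1 = [True, True, False, True, False, True, True, False, True, True]
run2 = [True, True, False, True, True, True, True, False, True, False]

mean_acc = (accuracy(run1) + accuracy(run2)) / 2
print(f"mean accuracy = {mean_acc:.0%}, agreement = {percent_agreement(run1, run2):.0%}")
print("pass" if mean_acc >= 0.60 else "fail")   # assumed 60% pass threshold
```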

A Study on Speaker Adaptation of Large Continuous Spoken Language Using Back-off Bigram (Back-off bigram을 이용한 대용량 연속어의 화자적응에 관한 연구)

  • 최학윤
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.9C / pp.884-890 / 2003
  • In this paper, we studied speaker adaptation methods that improve a speaker-independent recognition system. For independent speakers, we compared the results of bigram versus back-off bigram language models and of MAP versus MLLR adaptation. Because the back-off bigram applies the unigram probability and a back-off weight in place of an unseen bigram probability, it has the effect of adding a small weight to the bigram probability value. We conducted experiments using a total of 39 feature vectors as voice parameters: 12 MFCCs, log energy, and their delta and delta-delta parameters. For the recognition experiments, we constructed a system based on CHMMs with triphone recognition units and bigram and back-off bigram language models.
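
The back-off behaviour described here (fall back to a weighted unigram when a bigram is unseen) can be sketched as below. The counts and the fixed back-off weight are placeholders rather than the paper's actual estimates, which would normally come from a properly discounted scheme such as Katz back-off.

```python
from collections import Counter

# toy counts (placeholders, not the paper's training data)
unigram = Counter({"나는": 4, "학교에": 3, "간다": 3})
bigram = Counter({("나는", "학교에"): 2, ("학교에", "간다"): 3})
total_unigrams = sum(unigram.values())

def backoff_bigram(w1, w2, alpha=0.4):
    """P(w2 | w1): relative frequency if the bigram was seen,
    otherwise back off to the unigram probability scaled by a weight."""
    if bigram[(w1, w2)] > 0:
        return bigram[(w1, w2)] / unigram[w1]
    return alpha * unigram[w2] / total_unigrams   # back-off term

print(backoff_bigram("나는", "학교에"))   # seen bigram -> relative frequency
print(backoff_bigram("나는", "간다"))     # unseen bigram -> weighted unigram
```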

Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim Dong-Sung;Choe Jae-Woong
    • Language and Information / v.10 no.1 / pp.21-46 / 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous works on WSD typically make use of collocation frequency and its probability. Such probability-based WSD approaches tend to ignore the lowest-frequency words, which could be meaningful in context. In this paper, we show an algorithm to extract and make use of the meaningful lowest-frequency words in WSD. The learning method is adopted from the Find-Specific algorithm of Mitchell (1997), according to which the search proceeds from the specific predefined hypothesis spaces to the general ones. In our model, this algorithm is used to find contexts with the most specific classifiers and then moves to the more general ones. We build up a small seed data set and apply it to the relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively included in the seed data, thus expanding the seed data. However, this might introduce a great deal of noise into the seed data, so we adopt the 'maximum a posteriori hypothesis' based on Bayes' assumption to validate the noise status of the new seed data. We use the Naive Bayes Classifier and show that the application of the Find-Specific algorithm enhances the correctness of WSD.
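
A minimal sketch of Naive Bayes sense scoring of the kind used in this line of work is shown below. The senses, context features, and add-one smoothing are illustrative assumptions; the Find-Specific search and the Yarowsky-style seed expansion with MAP validation are omitted.

```python
import math
from collections import Counter, defaultdict

# toy labelled seed data: (context words, sense of the ambiguous word "bank")
seed = [
    (["money", "loan", "deposit"], "finance"),
    (["interest", "account", "cash"], "finance"),
    (["river", "water", "shore"], "geography"),
]

sense_counts = Counter(sense for _, sense in seed)
feat_counts = defaultdict(Counter)
vocab = set()
for ctx, sense in seed:
    feat_counts[sense].update(ctx)
    vocab.update(ctx)

def score(sense, context):
    """log P(sense) + sum over features of log P(f | sense), add-one smoothed."""
    logp = math.log(sense_counts[sense] / len(seed))
    denom = sum(feat_counts[sense].values()) + len(vocab)
    for f in context:
        logp += math.log((feat_counts[sense][f] + 1) / denom)
    return logp

context = ["loan", "interest", "water"]
best = max(sense_counts, key=lambda s: score(s, context))
print(best)   # most probable sense under the toy model
```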


Trends and Future of Digital Personal Assistant (디지털 개인비서 동향과 미래)

  • Kwon, O.W.;Lee, K.Y.;Lee, Y.H.;Roh, Y.H.;Cho, M.S.;Huang, J.X.;Lim, S.J.;Choi, S.K.;Kim, Y.K.
    • Electronics and Telecommunications Trends / v.36 no.1 / pp.1-11 / 2021
  • In this study, we introduce trends in and the future of digital personal assistants. Recently, digital personal assistants have begun to handle many tasks like humans by communicating with users in human language on smart devices such as smartphones, smart speakers, and smart cars. Their capabilities range from simple voice commands and chitchat to complex tasks such as device control, reservation, ordering, and scheduling. The digital personal assistants of the future will speak like a person, have a person-like personality, see, hear, and analyze situations like a person, and become more human. Dialogue processing technology that makes them more human-like has developed into end-to-end learning models based on deep neural networks in recent years. In addition, language models pre-trained on large corpora make dialogue processing more natural and better understood. Advances in artificial intelligence such as dialogue processing technology will enable digital personal assistants to serve in various areas with greater familiarity and better performance.

ICLAL: In-Context Learning-Based Audio-Language Multi-Modal Deep Learning Models (ICLAL: 인 컨텍스트 러닝 기반 오디오-언어 멀티 모달 딥러닝 모델)

  • Jun Yeong Park;Jinyoung Yeo;Go-Eun Lee;Chang Hwan Choi;Sang-Il Choi
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.514-517 / 2023
  • This study addresses a multi-modal deep learning model for applying in-context learning to audio-language tasks. The goal is to develop a multi-modal deep learning model that, during training, learns representations through which audio and text can communicate and that can perform various audio-text tasks. The model has a structure in which an audio encoder is connected to a language encoder; the language model is an autoregressive large language model with 6.7B or 30B parameters, and the audio encoder is an audio feature extraction model pre-trained with self-supervised learning. Because the language model is relatively large, training follows a frozen approach in which the language model parameters are fixed and only the audio encoder parameters are updated. The training tasks are automatic speech recognition and abstractive summarization. After training, the model was tested on a question answering task. The results indicated that additional training is needed to generate exact answer sentences, but the model pre-trained with speech recognition was confirmed to generate grammatically correct sentences that use keywords similar to the correct answers.
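
The frozen training scheme described above (language model fixed, audio encoder updated) can be sketched in PyTorch as follows. The module definitions and shapes are placeholders standing in for the real audio encoder and LLM, not the actual ICLAL architecture.

```python
import torch
import torch.nn as nn

# placeholder modules standing in for the real audio encoder and LLM
audio_encoder = nn.Sequential(nn.Linear(80, 512), nn.ReLU(), nn.Linear(512, 1024))
language_model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 32000))

# freeze the (relatively large) language model; only the audio encoder is trained
for p in language_model.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(
    (p for p in audio_encoder.parameters() if p.requires_grad), lr=1e-4
)

features = torch.randn(8, 80)            # a batch of audio features (placeholder)
targets = torch.randint(0, 32000, (8,))  # target token ids (placeholder)

logits = language_model(audio_encoder(features))
loss = nn.functional.cross_entropy(logits, targets)
loss.backward()
optimizer.step()   # gradients flow only into the audio encoder
```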

Development of Finite Element Model for Dynamic Characteristics of MEMS Piezo Actuator in Consideration of Semiconductor Process (반도체 공정을 고려한 유한요소해석에 의한 MEMS 압전 작동기의 동특성 해석)

  • Kim, Dong Woohn;Song, Jonghyeong;An, Seungdo;Woo, Kisuk
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / 2013.04a / pp.454-459 / 2013
  • For the purpose of rapid development and superior design quality assurance, a sophisticated finite element model for the SOM (Spatial Optical Modulator) piezo actuator of a MOEMS device has been developed and evaluated for the accuracy of its dynamics and residual stress analysis. A parametric finite element model is constructed using the ANSYS APDL language to increase design and analysis performance. Geometric dimensions and mechanical material properties of each thin film layer are input parameters of the FE model, and the residual stresses in all thin film layers are simulated by a thermal expansion method with a pseudo process temperature. Samples of the 6th mask design were manufactured, and the 1st natural frequency and the PZT driving displacement at 10 V were measured with an LDV. The experimental results are compared with those of the simulation and show good agreement in the 1st natural frequency, within a 5% error. However, an error of over 30% occurred in the 10 V PZT driving displacement because of insufficient measurement technology for the PZT constant d31.
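
The pseudo process temperature trick mentioned here (imposing a temperature offset so that the resulting thermal strain reproduces a target residual stress) can be illustrated with the standard equibiaxial thin-film relation. The material values below are placeholders, not the layer properties used in the paper.

```python
# Pseudo process temperature for a target residual stress in a thin film,
# using the equibiaxial relation sigma = E * alpha * dT / (1 - nu).
# Material values are placeholders, not the layers in the paper.
E = 100e9     # Young's modulus [Pa]
nu = 0.3      # Poisson's ratio
alpha = 3e-6  # thermal expansion coefficient [1/K]
sigma = 200e6 # target residual stress [Pa]

dT = sigma * (1 - nu) / (E * alpha)
print(f"pseudo temperature offset: {dT:.1f} K")   # ~466.7 K for these values
```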


Service-centric Object Fragmentation Model for Efficient Retrieval and Management of XML Documents (XML 문서의 효율적인 검색과 관리를 위한 SCOF 모델)

  • Jeong, Chang-Hoo
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.595-598 / 2007
  • The vast amount of XML documents raises interest in how they will be used and how far their usage can be expanded. This paper has two central goals: 1) easy and fast retrieval of XML documents or relevant elements; and 2) efficient and stable management of large XML documents. The keys to developing such a practical system are how to segment a large XML document into smaller fragments and how to store them. To achieve these goals, we designed the SCOF (Service-centric Object Fragmentation) model, a semi-decomposition method based on conversion rules provided by XML database managers. Keyword-based search using the SCOF model then retrieves the specific elements or attributes of XML documents, just as a typical XML query language does. Even though this approach requires the expertise of managers of the XML document collection, the SCOF model makes both retrieval and management of massive XML documents efficient.


Service-centric Object Fragmentation Model for Efficient Retrieval and Management of Huge XML Documents (대용량 XML 문서의 효율적인 검색과 관리를 위한 SCOF 모델)

  • Jeong, Chang-Hoo;Choi, Yun-Soo;Jin, Du-Seok;Kim, Jin-Suk;Yoon, Hwa-Mook
    • Journal of Internet Computing and Services / v.9 no.1 / pp.103-113 / 2008
  • The vast amount of XML documents raises interest in how they will be used and how far their usage can be expanded. This paper has two central goals: 1) easy and fast retrieval of XML documents or relevant elements; and 2) efficient and stable management of large XML documents. The keys to developing such a practical system are how to segment a large XML document into smaller fragments and how to store them. To achieve these goals, we designed the SCOF (Service-centric Object Fragmentation) model, a semi-decomposition method based on conversion rules provided by XML database managers. Keyword-based search using the SCOF model then retrieves the specific elements or attributes of XML documents, just as a typical XML query language does. Even though this approach requires the expertise of managers of the XML document collection, the SCOF model makes both retrieval and management of massive XML documents efficient.
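
A rough sketch of the kind of rule-driven fragmentation the SCOF model describes is given below. The element tags treated as fragment roots are hypothetical conversion rules, not the rules used by the authors, and storage of the fragments is reduced to printing path/fragment pairs.

```python
import xml.etree.ElementTree as ET

# hypothetical conversion rule: elements with these tags become fragment roots
FRAGMENT_TAGS = {"section", "chapter"}

def fragment(elem, path=""):
    """Walk the XML tree and emit (path, serialized fragment) pairs
    for every element whose tag matches a conversion rule."""
    path = f"{path}/{elem.tag}"
    fragments = []
    if elem.tag in FRAGMENT_TAGS:
        fragments.append((path, ET.tostring(elem, encoding="unicode")))
    for child in elem:
        fragments.extend(fragment(child, path))
    return fragments

doc = ET.fromstring(
    "<book><chapter><title>A</title><section><p>text</p></section></chapter></book>"
)
for path, frag in fragment(doc):
    print(path, "->", frag[:40])   # store each fragment keyed by its path
```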
