• Title/Summary/Keyword: language models


Research Trends in Large Language Models and Mathematical Reasoning (초거대 언어모델과 수학추론 연구 동향)

  • O.W. Kwon;J.H. Shin;Y.A. Seo;S.J. Lim;J. Heo;K.Y. Lee
    • Electronics and Telecommunications Trends / v.38 no.6 / pp.1-11 / 2023
  • Large language models seem promising for handling reasoning problems, but their underlying solving mechanisms remain unclear. Large language models will establish a new paradigm in artificial intelligence and in society as a whole. However, a major challenge of large language models is the massive resources required for training and operation. To address this issue, researchers are actively exploring compact large language models that retain the capabilities of large language models while notably reducing the model size. These research efforts are mainly focused on improving pretraining, instruction tuning, and alignment. On the other hand, chain-of-thought prompting is a technique aimed at enhancing the reasoning ability of large language models. It provides an answer through a series of intermediate reasoning steps when given a problem. By guiding the model through a multistep problem-solving process, chain-of-thought prompting may improve the model's reasoning skills. Mathematical reasoning, which is a fundamental aspect of human intelligence, has played a crucial role in advancing large language models toward human-level performance. As a result, mathematical reasoning is being widely explored in the context of large language models. This type of research extends to various domains such as geometry problem solving, tabular mathematical reasoning, visual question answering, and other areas.
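To make the chain-of-thought idea concrete, the following is a minimal sketch of a few-shot chain-of-thought prompt for a mathematical word problem. The exemplar text and the query_llm placeholder are illustrative assumptions, not material from the article.

```python
# A few-shot chain-of-thought prompt: the worked example shows intermediate
# reasoning steps, which the model is expected to imitate for the new problem.
COT_EXEMPLAR = (
    "Q: A library had 27 books and bought 3 boxes of 12 books each. "
    "How many books does it have now?\n"
    "A: The 3 boxes hold 3 * 12 = 36 books. 27 + 36 = 63. The answer is 63.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model answers step by step."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

def query_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM API is available."""
    raise NotImplementedError("plug in your own model call here")

if __name__ == "__main__":
    prompt = build_cot_prompt(
        "A farmer has 15 apples and gives away 4 baskets of 2 apples each. "
        "How many apples remain?"
    )
    print(prompt)                 # inspect the few-shot prompt
    # answer = query_llm(prompt)  # expected to contain steps like 4 * 2 = 8, 15 - 8 = 7
```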

Towards a small language model powered chain-of-reasoning for open-domain question answering

  • Jihyeon Roh;Minho Kim;Kyoungman Bae
    • ETRI Journal / v.46 no.1 / pp.11-21 / 2024
  • We focus on open-domain question-answering tasks that involve a chain-of-reasoning, which are primarily implemented using large language models. With an emphasis on cost-effectiveness, we designed EffiChainQA, an architecture centered on the use of small language models. We employed a retrieval-based language model to address the limitations of large language models, such as the hallucination issue and the lack of updated knowledge. To enhance reasoning capabilities, we introduced a question decomposer that leverages a generative language model and serves as a key component in the chain-of-reasoning process. To generate training data for our question decomposer, we leveraged ChatGPT, which is known for its data augmentation ability. Comprehensive experiments were conducted using the HotpotQA dataset. Our method outperformed several established approaches, including the Chain-of-Thoughts approach, which is based on large language models. Moreover, our results are on par with those of state-of-the-art Retrieve-then-Read methods that utilize large language models.
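The abstract outlines a decomposer-retriever-reader chain. Below is a minimal sketch of how such a chain could be wired together; every function name is a hypothetical placeholder, and the code is not the authors' EffiChainQA implementation.

```python
from typing import List

def decompose_question(question: str) -> List[str]:
    """Hypothetical generative small LM that splits a multi-hop question
    into single-hop sub-questions."""
    raise NotImplementedError

def retrieve_passages(query: str, k: int = 5) -> List[str]:
    """Hypothetical retriever over a text corpus; grounding answers in
    retrieved text mitigates hallucination and stale knowledge."""
    raise NotImplementedError

def read_answer(question: str, passages: List[str]) -> str:
    """Hypothetical reader model that extracts an answer from passages."""
    raise NotImplementedError

def answer_with_chain_of_reasoning(question: str) -> str:
    """Answer each sub-question in turn, feeding earlier answers forward."""
    context: List[str] = []
    answer = ""
    for sub_q in decompose_question(question):
        passages = retrieve_passages(sub_q) + context
        answer = read_answer(sub_q, passages)
        context.append(f"{sub_q} -> {answer}")  # carry intermediate results forward
    return answer
```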

Technical Trends in Artificial Intelligence for Robotics Based on Large Language Models (거대언어모델 기반 로봇 인공지능 기술 동향)

  • J. Lee;S. Park;N.W. Kim;E. Kim;S.K. Ko
    • Electronics and Telecommunications Trends / v.39 no.1 / pp.95-105 / 2024
  • In natural language processing, large language models such as GPT-4 have recently been in the spotlight. The performance of natural language processing has advanced dramatically, driven by increases in model size and in the number of input tokens the models can accept. Research on multimodal models that can simultaneously process natural language and image data is being actively conducted. Moreover, the natural-language and image-based reasoning capabilities of large language models are being explored in robot artificial intelligence technology. We discuss research and related patent trends in robot task planning and code generation for robot control using large language models.
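As a rough illustration of LLM-based task planning for robot control, the following sketch prompts a model with an available skill API and asks it to emit a plan. The skill names and the llm placeholder are assumptions, not any specific system covered in the article.

```python
# A minimal sketch of LLM-based robot task planning: the model is prompted with
# an available skill API and asked to emit a plan that calls only those skills.
SKILL_API = """
Available skills:
  move_to(location: str)
  pick(object: str)
  place(object: str, location: str)
"""

def build_planning_prompt(instruction: str) -> str:
    return (
        f"{SKILL_API}\n"
        f"Instruction: {instruction}\n"
        "Write a plan using only the skills above:\n"
    )

def llm(prompt: str) -> str:
    """Hypothetical call to a code-generating large language model."""
    raise NotImplementedError

if __name__ == "__main__":
    plan_prompt = build_planning_prompt("Put the red cup on the kitchen table.")
    print(plan_prompt)
    # generated = llm(plan_prompt)
    # e.g. expected output:
    #   move_to("red cup"); pick("red cup"); move_to("kitchen table"); place("red cup", "kitchen table")
```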

An Analysis of the Applications of the Language Models for Information Retrieval (정보검색에서의 언어모델 적용에 관한 분석)

  • Kim Heesop;Jung Youngmi
    • Journal of Korean Library and Information Science Society / v.36 no.2 / pp.49-68 / 2005
  • The purpose of this study is to examine the research trends and experimental results on the application of language models to information retrieval. We reviewed previous studies in the following categories: (1) the first generation of language modeling information retrieval (LMIR) experiments, which mainly focused on comparing language modeling information retrieval with traditional retrieval models in terms of retrieval performance, and (2) the second generation of LMIR experiments, which focused on comparing expanded language models with the basic language models in terms of retrieval performance. Through the analysis of the previous experimental results, we found that (1) language models outperformed the probabilistic and vector space model approaches, and (2) the expanded language models demonstrated better retrieval performance than the basic language models.
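For readers unfamiliar with language modeling information retrieval, the following sketch shows the standard query-likelihood model with Jelinek-Mercer smoothing, representative of the "basic language models" these experiments evaluate; the exact formulations used in the reviewed studies may differ.

```python
import math
from collections import Counter
from typing import Dict, List

def query_likelihood(query: List[str], doc: List[str],
                     collection_tf: Dict[str, int], collection_len: int,
                     lam: float = 0.5) -> float:
    """Score a document by log P(query | document language model),
    with Jelinek-Mercer smoothing against the collection model."""
    doc_tf = Counter(doc)
    doc_len = len(doc)
    score = 0.0
    for term in query:
        p_doc = doc_tf[term] / doc_len if doc_len else 0.0
        p_coll = collection_tf.get(term, 0) / collection_len
        p = lam * p_doc + (1 - lam) * p_coll   # smoothed term probability
        score += math.log(p) if p > 0 else math.log(1e-12)  # floor for unseen terms
    return score

# Usage: rank documents by query_likelihood(query_terms, doc_terms, ctf, clen)
```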


Current Status and Direction of Generative Large Language Model Applications in Medicine - Focusing on East Asian Medicine - (생성형 거대언어모델의 의학 적용 현황과 방향 - 동아시아 의학을 중심으로 -)

  • Bongsu Kang;SangYeon Lee;Hyojin Bae;Chang-Eop Kim
    • Journal of Physiology & Pathology in Korean Medicine / v.38 no.2 / pp.49-58 / 2024
  • The rapid advancement of generative large language models has revolutionized various real-life domains, emphasizing the importance of exploring their applications in healthcare. This study examines how generative large language models are being implemented in the medical domain, with the specific objective of exploring the possibility and potential of integrating generative large language models with East Asian medicine. Through a comprehensive analysis of the current state, we identified limitations in the deployment of generative large language models within East Asian medicine and proposed directions for future research. Our findings highlight the essential need to accumulate and generate structured data to improve the capabilities of generative large language models in East Asian medicine. Additionally, we address the issue of hallucination and the need for a robust model evaluation framework. Despite these challenges, the application of generative large language models in East Asian medicine has demonstrated promising results. Techniques such as model augmentation, multimodal structures, and knowledge distillation have the potential to significantly enhance accuracy, efficiency, and accessibility. In conclusion, we expect generative large language models to play a pivotal role in facilitating precise diagnostics and personalized treatment in clinical fields, and in fostering innovation in education and research within East Asian medicine.

Comparative study of text representation and learning for Persian named entity recognition

  • Pour, Mohammad Mahdi Abdollah;Momtazi, Saeedeh
    • ETRI Journal / v.44 no.5 / pp.794-804 / 2022
  • Transformer models have had a great impact on natural language processing (NLP) in recent years by realizing outstanding and efficient contextualized language models. Recent studies have used transformer-based language models for various NLP tasks, including Persian named entity recognition (NER). However, in complex tasks, for example, NER, it is difficult to determine which contextualized embedding will produce the best representation for the tasks. Considering the lack of comparative studies to investigate the use of different contextualized pretrained models with sequence modeling classifiers, we conducted a comparative study about using different classifiers and embedding models. In this paper, we use different transformer-based language models tuned with different classifiers, and we evaluate these models on the Persian NER task. We perform a comparative analysis to assess the impact of text representation and text classification methods on Persian NER performance. We train and evaluate the models on three different Persian NER datasets, that is, MoNa, Peyma, and Arman. Experimental results demonstrate that XLM-R with a linear layer and conditional random field (CRF) layer exhibited the best performance. This model achieved phrase-based F-measures of 70.04, 86.37, and 79.25 and word-based F scores of 78, 84.02, and 89.73 on the MoNa, Peyma, and Arman datasets, respectively. These results represent state-of-the-art performance on the Persian NER task.
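A minimal sketch of the kind of setup described above: a pretrained XLM-R encoder with a linear tag classifier. For simplicity, the CRF layer reported in the paper is replaced here by per-token argmax, and the tag-set size is illustrative.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class XlmrTagger(nn.Module):
    """XLM-R encoder followed by a linear per-token tag classifier."""
    def __init__(self, num_tags: int, model_name: str = "xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_tags)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)          # per-token tag logits

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XlmrTagger(num_tags=9)                  # tag-set size is illustrative
enc = tokenizer(["نمونه جمله"], return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
pred_tags = logits.argmax(dim=-1)               # CRF decoding would replace this argmax
```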

Transformer-based reranking for improving Korean morphological analysis systems

  • Jihee Ryu;Soojong Lim;Oh-Woog Kwon;Seung-Hoon Na
    • ETRI Journal / v.46 no.1 / pp.137-153 / 2024
  • This study introduces a new approach in Korean morphological analysis combining dictionary-based techniques with Transformer-based deep learning models. The key innovation is the use of a BERT-based reranking system, significantly enhancing the accuracy of traditional morphological analysis. The method generates multiple suboptimal paths, then employs BERT models for reranking, leveraging their advanced language comprehension. Results show remarkable performance improvements, with the first-stage reranking achieving over 20% improvement in error reduction rate compared with existing models. The second stage, using another BERT variant, further increases this improvement to over 30%. This indicates a significant leap in accuracy, validating the effectiveness of merging dictionary-based analysis with contemporary deep learning. The study suggests future exploration in refined integrations of dictionary and deep learning methods as well as using probabilistic models for enhanced morphological analysis. This hybrid approach sets a new benchmark in the field and offers insights for similar challenges in language processing applications.
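The reranking idea can be sketched as follows: a dictionary-based analyzer proposes several candidate analyses, and a BERT-style scorer picks the most plausible one. Both components below are hypothetical placeholders, not the paper's models.

```python
from typing import List, Tuple

def nbest_analyses(sentence: str, n: int = 5) -> List[str]:
    """Hypothetical dictionary-based analyzer returning n candidate
    morpheme/POS sequences for the sentence."""
    raise NotImplementedError

def bert_score(sentence: str, analysis: str) -> float:
    """Hypothetical BERT-based scorer: higher means the analysis is a more
    plausible reading of the sentence."""
    raise NotImplementedError

def rerank(sentence: str) -> Tuple[str, float]:
    """Return the candidate analysis with the highest reranker score."""
    candidates = nbest_analyses(sentence)
    scored = [(cand, bert_score(sentence, cand)) for cand in candidates]
    return max(scored, key=lambda pair: pair[1])
```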

An XML-Based Modeling Language for the Open Trading of Decision Models

  • Kim, Hyoung-Do
    • Korean Management Science Review / v.17 no.3 / pp.147-160 / 2000
  • These days, a modeling tool or environment has to know about the others on the market and build bridges to them so that customers can share models and data across tools. When it is based on a closed architecture, a tangle of point-to-point import/export translators is required. Using an exchange standard, we can design an open architecture for the interchange of models and data. XML (Extensible Markup Language) provides a framework for describing the syntax for creating and exchanging data structures. The explosive growth of XML-based business proposals and standards reflects these urgent requirements and its strength. This paper proposes an XML-based language for sharing decision models within the MSOR/DSS community. The language allows applications and on-line analytic processing tools to access models obtained from multiple sources without having to deal with individual differences between those sources. It is expected to be a medium for B2B integration by supporting flexible interchange of decision models.
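To illustrate what exchanging a decision model as XML might look like, here is a small sketch that serializes a toy linear program; the element and attribute names are invented for illustration and are not the vocabulary defined in the paper.

```python
import xml.etree.ElementTree as ET

# Build a toy linear-programming model as an XML document.
model = ET.Element("decisionModel", name="productionPlan", type="linearProgram")
obj = ET.SubElement(model, "objective", sense="maximize")
obj.text = "3*x1 + 5*x2"                       # objective expressed as text
cons = ET.SubElement(model, "constraints")
ET.SubElement(cons, "constraint", name="capacity").text = "x1 + 2*x2 <= 14"
ET.SubElement(cons, "constraint", name="demand").text = "x1 <= 8"
variables = ET.SubElement(model, "variables")
for v in ("x1", "x2"):
    ET.SubElement(variables, "variable", name=v, lower="0")

print(ET.tostring(model, encoding="unicode"))  # XML that any compliant tool can parse
```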


A Study of Fine Tuning Pre-Trained Korean BERT for Question Answering Performance Development (사전 학습된 한국어 BERT의 전이학습을 통한 한국어 기계독해 성능개선에 관한 연구)

  • Lee, Chi Hoon;Lee, Yeon Ji;Lee, Dong Hee
    • Journal of Information Technology Services / v.19 no.5 / pp.83-91 / 2020
  • Language models such as BERT have become an important factor in deep learning-based natural language processing. Pre-training transformer-based language models is computationally expensive, since they consist of deep and broad layers using an attention mechanism and require a huge amount of training data. Hence, it has become standard practice to fine-tune large pre-trained language models released by Google or other organizations that can afford the resources and cost. There are various techniques for fine-tuning language models, and this paper examines three of them: data augmentation, hyperparameter tuning, and partly reconstructing the neural network. For data augmentation, we use no-answer augmentation and a back-translation method. Useful combinations of hyperparameters are also identified by conducting a number of experiments. Finally, we add GRU and LSTM networks to the pre-trained BERT model to boost its performance. By fine-tuning the pre-trained Korean language model with the methods mentioned above, we push the F1 score from the baseline up to 89.66. Moreover, some failed attempts provide important lessons and point to promising directions for further work.
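A minimal sketch of the "partly reconstructing the network" idea: a recurrent layer inserted between a pre-trained Korean BERT encoder and the span-prediction head used for machine reading comprehension. The model name and layer sizes are illustrative, not the paper's exact configuration.

```python
import torch
from torch import nn
from transformers import AutoModel

class BertGruQA(nn.Module):
    """Pre-trained BERT encoder + GRU layer + start/end span classifier."""
    def __init__(self, model_name: str = "klue/bert-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.gru = nn.GRU(hidden, hidden // 2, batch_first=True,
                          bidirectional=True)   # recurrent layer added on top of BERT
        self.qa_head = nn.Linear(hidden, 2)     # start and end logits per token

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        hidden, _ = self.gru(hidden)
        start_logits, end_logits = self.qa_head(hidden).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)
```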

Generating Premise-Hypothesis-Label Triplet Using Chain-of-Thought and Program-aided Language Models (Chain-of-Thought와 Program-aided Language Models을 이용한 전제-가설-라벨 삼중항 자동 생성)

  • Hee-jin Cho;Changki Lee;Kyoungman Bae
    • Annual Conference on Human and Language Technology / 2023.10a / pp.352-357 / 2023
  • Natural language inference (NLI) aims to understand and infer the relationship between two sentences (a premise and a hypothesis), classifying it into one of three categories: entailment, contradiction, or neutral. NLI models are trained on premise-hypothesis-label (PHL) datasets. However, when applying natural language inference to a new domain, training data may not exist, and constructing it requires considerable time and resources. In this paper, instead of the sentence transformation rules proposed in [1], we propose a method that automatically generates premise-hypothesis-label triplets using a large language model together with prompting methods such as Chain-of-Thought (CoT) and Program-aided Language Models (PaL). Experimental results show that the quality of the data generated automatically with CoT and PaL prompting is superior to that produced by the existing rules or by basic prompting.
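A minimal sketch of the triplet-generation step: prompt a large language model to produce a hypothesis and label for a given premise, with a step-by-step instruction in the spirit of CoT. The prompt wording and the query_llm placeholder are assumptions, not the paper's actual prompts.

```python
import json
from typing import Dict, List

def build_phl_prompt(premise: str) -> str:
    """Ask the model for one hypothesis per label, reasoning step by step."""
    return (
        "Given the premise below, write one hypothesis for each label "
        "(entailment, contradiction, neutral). Think step by step about why "
        "each hypothesis receives its label, then output a JSON list of "
        "objects with fields premise, hypothesis, label.\n"
        f"Premise: {premise}\n"
    )

def query_llm(prompt: str) -> str:
    """Hypothetical call to a large language model."""
    raise NotImplementedError

def generate_triplets(premises: List[str]) -> List[Dict[str, str]]:
    triplets: List[Dict[str, str]] = []
    for premise in premises:
        raw = query_llm(build_phl_prompt(premise))
        triplets.extend(json.loads(raw))   # assumes the model returns a JSON list
    return triplets
```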
