• Title/Summary/Keyword: large language model

Search results: 282

Evaluating ChatGPT's Competency in BIM Related Knowledge via the Korean BIM Expertise Exam (BIM 운용 전문가 시험을 통한 ChatGPT의 BIM 분야 전문 지식 수준 평가)

  • Choi, Jiwon;Koo, Bonsang;Yu, Youngsu;Jeong, Yujeong;Ham, Namhyuk
    • Journal of KIBIM / v.13 no.3 / pp.21-29 / 2023
  • ChatGPT, a chatbot based on GPT large language models, has gained immense popularity among the general public as well as domain professionals. To assess its proficiency in specialized fields, ChatGPT was tested on mainstream exams like the bar exam and medical licensing tests. This study evaluated ChatGPT's ability to answer questions related to Building Information Modeling (BIM) by testing it on Korea's BIM expertise exam, focusing primarily on multiple-choice problems. Both GPT-3.5 and GPT-4 were tested by prompting them to provide the correct answers to three years' worth of exams, totaling 150 questions. The results showed that both versions passed the test with average scores of 68 and 85, respectively. GPT-4 performed particularly well in categories related to 'BIM software' and 'Smart Construction technology'. However, it did not fare well in 'BIM applications'. Both versions were more proficient with short-answer choices than with sentence-length answers. Additionally, GPT-4 struggled with questions related to BIM policies and regulations specific to the Korean industry. Such limitations might be addressed by using tools like LangChain, which allow for feeding domain-specific documents to customize ChatGPT's responses. These advancements are anticipated to enhance ChatGPT's utility as a virtual assistant for BIM education and modeling automation.
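
The exam-scoring setup described above can be sketched with a small helper that grades a model's multiple-choice answers against a key; the answer key and responses below are hypothetical placeholders, not items from the actual Korean BIM expertise exam.

```python
# Hypothetical sketch: grade a model's multiple-choice answers against an
# answer key on the 100-point scale the study reports (the real evaluation
# covered 150 questions over three years of exams).

def score_exam(model_answers, answer_key):
    """Return the score out of 100 for one exam's multiple-choice answers."""
    if len(model_answers) != len(answer_key):
        raise ValueError("answer sheets must have the same length")
    correct = sum(m == k for m, k in zip(model_answers, answer_key))
    return 100.0 * correct / len(answer_key)

key = ["A", "C", "B", "D", "A"]          # hypothetical answer key
gpt_answers = ["A", "C", "D", "D", "A"]  # hypothetical model responses
print(score_exam(gpt_answers, key))      # -> 80.0
```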

F_MixBERT: Sentiment Analysis Model using Focal Loss for Imbalanced E-commerce Reviews

  • Fengqian Pang;Xi Chen;Letong Li;Xin Xu;Zhiqiang Xing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.2 / pp.263-283 / 2024
  • Users' comments after online shopping are critical to product reputation and business improvement. These comments, often called e-commerce reviews, influence other customers' purchasing decisions. To cope with the large volume of e-commerce reviews, automatic analysis based on machine learning and deep learning is drawing more and more attention, and a core task therein is sentiment analysis. E-commerce reviews, however, exhibit the following characteristics: (1) inconsistency between comment content and the star rating; (2) a large amount of unlabeled data, i.e., comments without a star rating; and (3) data imbalance caused by sparse negative comments. This paper employs Bidirectional Encoder Representations from Transformers (BERT), one of the best natural language processing models, as the base model. To address these data characteristics, we propose the F_MixBERT framework, which makes more effective use of inconsistent, low-quality, and unlabeled data and resolves the problem of data imbalance. In the framework, the proposed MixBERT incorporates the MixMatch approach into BERT's high-dimensional vectors to train on the unlabeled and low-quality data with generated pseudo-labels. Meanwhile, data imbalance is resolved by Focal loss, which down-weights the contribution of abundant and easily identifiable data to the total loss. Comparative experiments demonstrate that the proposed framework outperforms BERT and MixBERT for sentiment analysis of e-commerce comments.
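
The Focal loss the framework relies on can be sketched framework-free; the `alpha` and `gamma` values below are common defaults from the focal loss literature, not necessarily the paper's settings.

```python
import math

# Minimal binary focal loss (Lin et al., 2017), the imbalance remedy the
# paper adopts: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction.
    p: predicted probability of the positive class; y: true label (0 or 1)."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    # (1 - p_t)^gamma down-weights easy, well-classified examples
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy positive (p=0.9) contributes far less than a hard one (p=0.3):
print(focal_loss(0.9, 1) < focal_loss(0.3, 1))  # -> True
```

With `gamma=0` and `alpha=1` the expression reduces to the ordinary cross-entropy, which is a quick sanity check on the implementation.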

Korean Word Segmentation and Compound-noun Decomposition Using Markov Chain and Syllable N-gram (마코프 체인 및 음절 N-그램을 이용한 한국어 띄어쓰기 및 복합명사 분리)

  • Kwon, Oh-Wook (권오욱)
    • The Journal of the Acoustical Society of Korea / v.21 no.3 / pp.274-284 / 2002
  • Word segmentation errors occurring in text preprocessing often insert incorrect words into the recognition vocabulary and degrade the language models used for Korean large-vocabulary continuous speech recognition. We propose an automatic word segmentation algorithm using Markov chains and syllable-based n-gram language models to correct word segmentation errors in text corpora. We assume that a sentence is generated from a Markov chain, with spaces generated on self-transitions and non-space characters on the other transitions. Word segmentation of the sentence is then obtained by finding the maximum-likelihood path using syllable n-gram scores. In experiments, the algorithm achieved 91.58% word accuracy and 96.69% syllable accuracy when segmenting 254 sentences from newspaper columns with all spaces removed. It improved word accuracy from 91.00% to 96.27% when correcting word segmentation at line breaks, and yielded 96.22% accuracy for compound-noun decomposition.
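
A much-simplified sketch of the maximum-likelihood segmentation idea: the paper scores paths through a Markov chain with syllable n-grams, while the toy below substitutes a hypothetical word-unigram table so the dynamic program stays short.

```python
import math

# Dynamic-programming search for the highest-scoring segmentation of a
# spaceless string. best[i] holds the best (log-score, words) for text[:i].

WORD_LOGPROB = {  # hypothetical log-probabilities standing in for n-gram scores
    "the": math.log(0.3), "cat": math.log(0.2),
    "sat": math.log(0.2), "theca": math.log(0.01), "tsat": math.log(0.01),
}

def segment(text):
    """Return the maximum-likelihood segmentation of a spaceless string."""
    n = len(text)
    best = [(-math.inf, [])] * (n + 1)
    best[0] = (0.0, [])
    for i in range(1, n + 1):
        for j in range(max(0, i - 8), i):  # cap candidate word length
            word = text[j:i]
            if word in WORD_LOGPROB and best[j][0] > -math.inf:
                score = best[j][0] + WORD_LOGPROB[word]
                if score > best[i][0]:
                    best[i] = (score, best[j][1] + [word])
    return best[n][1]

print(segment("thecatsat"))  # -> ['the', 'cat', 'sat']
```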

The Strength of the Relationship between Semantic Similarity and the Subcategorization Frames of the English Verbs: a Stochastic Test based on the ICE-GB and WordNet (영어 동사의 의미적 유사도와 논항 선택 사이의 연관성 : ICE-GB와 WordNet을 이용한 통계적 검증)

  • Song, Sang-Houn;Choe, Jae-Woong
    • Language and Information / v.14 no.1 / pp.113-144 / 2010
  • The primary goal of this paper is to find a feasible way to answer the question: does the similarity in meaning between verbs relate to the similarity in their subcategorization? In order to answer this question in a concrete way on the basis of a large set of English verbs, this study made use of various language resources, tools, and statistical methodologies. We first compiled a list of 678 verbs that were selected from the most and second most frequent word lists of the Collins Cobuild English Dictionary and that also appeared in WordNet 3.0. We calculated similarity measures between all pairs of the words based on the 'jcn' algorithm (Jiang and Conrath, 1997) implemented in the WordNet::Similarity module (Pedersen, Patwardhan, and Michelizzi, 2004). The clustering process followed: first building similarity matrices out of the similarity measure values, next drawing dendrograms on the basis of the matrices, then finally obtaining 177 meaningful clusters (covering 437 verbs) that passed a threshold set by z-score. The subcategorization frames and their frequency values were taken from the ICE-GB. To calculate the Selectional Preference Strength (SPS) of the relationship between a verb and its subcategorizations, we relied on the Kullback-Leibler divergence model (Resnik, 1996). The SPS values of the verbs in the same cluster were compared with each other, which served to give statistical values indicating how much the SPS values overlap between the subcategorization frames of the verbs. Our final analysis shows that the degree of overlap, or the relationship between semantic similarity and the subcategorization frames of English verbs, is spread evenly from 'very strongly related' to 'very weakly related': some semantically similar verbs share a lot in terms of their subcategorization frames, some indicate an average degree of strength in the relationship, while others, though still semantically similar, tend to share little in their subcategorization frames.
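
The SPS computation can be sketched directly from its Kullback-Leibler formulation; the frame labels and probabilities below are hypothetical illustrations, not ICE-GB counts.

```python
import math

# Selectional Preference Strength in the Resnik (1996) sense: the KL
# divergence between the frame distribution conditioned on a verb,
# P(frame | verb), and the prior frame distribution P(frame).

def sps(frame_given_verb, frame_prior):
    """KL divergence D(P(frame|verb) || P(frame)) over the verb's frames."""
    return sum(p * math.log(p / frame_prior[f])
               for f, p in frame_given_verb.items() if p > 0.0)

prior = {"NP": 0.5, "NP-NP": 0.3, "S": 0.2}  # hypothetical prior over frames
give  = {"NP": 0.2, "NP-NP": 0.7, "S": 0.1}  # hypothetical frames for 'give'
print(round(sps(give, prior), 4))
```

A verb whose frame distribution matches the prior has SPS 0; the further it skews toward particular frames, the larger the value.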

A Comparative Study on Korean Zero-shot Relation Extraction using a Large Language Model (거대 언어 모델을 활용한 한국어 제로샷 관계 추출 비교 연구)

  • Jinsung Kim;Gyeongmin Kim;Kinam Park;Heuiseok Lim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.648-653 / 2023
  • Relation extraction is the task of inferring the appropriate relation between two entities in a given text, and it underpins downstream applications such as knowledge base construction and question answering. With generative large language models recently achieving outstanding performance across natural language processing by drawing on their internalized knowledge, it is worth actively exploring how to leverage them for relation extraction, a representative information extraction task. In particular, given the importance of relation extraction in low-resource and especially zero-shot settings, which resemble real-world inference environments, many prior studies have demonstrated that effective prompting techniques are beneficial. This study therefore conducts a comparative study of zero-shot inference for Korean relation extraction, applying a range of prompting techniques to large language models, in order to provide a foundation for future in-depth research on optimal LLM prompting for Korean relation extraction. Specifically, we apply three prompting techniques to Korean relation extraction, including Chain-of-Thought and Self-Refine, which have shown large performance gains on challenging tasks such as commonsense reasoning, and provide quantitative and qualitative comparative analyses. According to the experimental results, prompting that includes general task instructions quantitatively achieves the best zero-shot performance, ahead of Chain-of-Thought and Self-Refine. Rather than pointing to inherent limitations of those two methods, this can be interpreted as indicating the need to optimize them for the Korean relation extraction task, and we expect them to be improved by subsequent experimental research that develops these methodologies.
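
A hypothetical example of the instruction-style zero-shot prompt of the kind that fared best in the experiments; the relation label set and the wording are illustrative, not the ones used in the study.

```python
# Compose an instruction-style zero-shot relation-extraction prompt.
# The relation labels here are hypothetical placeholders.

RELATIONS = ["per:employee_of", "org:founded_by", "no_relation"]

def build_zero_shot_prompt(sentence, subj, obj, relations=RELATIONS):
    """Ask an LLM to pick exactly one relation label for an entity pair."""
    options = "\n".join(f"- {r}" for r in relations)
    return (
        "Task: choose the relation between the two entities in the sentence.\n"
        f"Sentence: {sentence}\n"
        f"Subject entity: {subj}\n"
        f"Object entity: {obj}\n"
        f"Candidate relations:\n{options}\n"
        "Answer with exactly one relation label."
    )

prompt = build_zero_shot_prompt(
    "Kim joined Acme Corp in 2021.", "Kim", "Acme Corp")
print(prompt)
```

Chain-of-Thought and Self-Refine variants would extend this template with a reasoning instruction or a critique-and-revise round trip, respectively.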

FubaoLM : Automatic Evaluation based on Chain-of-Thought Distillation with Ensemble Learning (FubaoLM : 연쇄적 사고 증류와 앙상블 학습에 의한 대규모 언어 모델 자동 평가)

  • Huiju Kim;Donghyeon Jeon;Ohjoon Kwon;Soonhwan Kwon;Hansu Kim;Inkwon Lee;Dohyeon Kim;Inho Kang
    • Annual Conference on Human and Language Technology / 2023.10a / pp.448-453 / 2023
  • Evaluating large language models (LLMs) from the perspective of human preference is a challenging task, distinct from conventional benchmark evaluation. Prior studies have approached it by using a strong LLM as the evaluator, but the high cost has emerged as a problem. Moreover, the subjective scoring criteria an LLM evaluator applies are ambiguous, undermining the reliability of the results, and evaluation by a single model risks bias. This paper proposes an evaluation framework and an evaluator model, 'FubaoLM', that performs unbiased evaluation using strict criteria. Our framework uses multiple strong Korean LLMs to perform Chain-of-Thought-based evaluation against in-depth evaluation criteria. These results are aggregated by majority vote to produce unbiased evaluations, and through instruction tuning, FubaoLM distills evaluation knowledge from the multiple LLMs. Furthermore, we construct an expert-based evaluation dataset to demonstrate FubaoLM's effectiveness. In our experiments, the ensembled FubaoLM improves absolute evaluation performance by 16% to 23% over GPT-3.5 and produces preference judgments similar to humans' in pairwise evaluation. FubaoLM thus performs unbiased evaluation with high reliability at comparatively low cost.
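
The majority-vote aggregation step can be sketched as follows; the tie-breaking rule is this sketch's assumption, not something the abstract specifies.

```python
from collections import Counter

# Aggregate the scores emitted by several evaluator LLMs: the most common
# score wins. Ties are broken toward the lower score here (a conservative
# choice made by this sketch).

def majority_vote(scores):
    """Return the most frequent score; the lower score wins ties."""
    counts = Counter(scores)
    best = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))
    return best[0]

print(majority_vote([4, 5, 4, 3, 4]))  # -> 4
print(majority_vote([2, 3]))           # -> 2 (tie broken low)
```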

Analysis of Topics Related to Population Aging Using Natural Language Processing Techniques (자연어 처리 기술을 활용한 인구 고령화 관련 토픽 분석)

  • Hyunjung Park;Taemin Lee;Heuiseok Lim
    • Journal of Information Technology Services / v.23 no.1 / pp.55-79 / 2024
  • Korea, which is expected to become a super-aged society in 2025, faces one of the most serious population-aging crises in the world, and efforts to examine the problems and countermeasures from various angles are urgently required. In this regard, we take a new viewpoint and derive useful implications by applying recent natural language processing techniques to online articles. More specifically, we pose three research questions: First, what topics are being reported in the online media, and what is the public's response to them? Second, what is the relationship between these aging-related topics and individual happiness factors? Third, what strategic directions and benchmarking implications are discussed for solving the problem of population aging? To answer these, we collect Naver portal articles related to population aging, along with their classification categories, comments, comment counts, and other numerical data. From the data, we first derive 33 topics with semi-supervised BERTopic, incorporating article classification information not used in previous studies, and conduct sentiment analysis of the comments with a current open-source large language model. We also examine the relationship between the derived topics and personal happiness factors extended to Alderfer's ERG dimensions, carrying out additional 3~4-gram keyword frequency analysis, trend analysis, and text network analysis based on the 3~4-gram keywords. Through this multifaceted approach, we present fresh insights from both practical and theoretical perspectives.
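
The 3~4-gram keyword frequency step can be sketched as a sliding-window count; the tokens below are illustrative placeholders for processed article text.

```python
from collections import Counter

# Count 3- and 4-token sequences in a token list, the raw material for the
# n-gram frequency, trend, and network analyses described above.

def ngram_counts(tokens, n_values=(3, 4)):
    """Count all n-grams of the requested sizes in a token list."""
    counts = Counter()
    for n in n_values:
        for i in range(len(tokens) - n + 1):
            counts[tuple(tokens[i:i + n])] += 1
    return counts

tokens = ["aging", "population", "welfare", "aging", "population", "welfare"]
print(ngram_counts(tokens).most_common(1))  # most frequent 3- or 4-gram
```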

A novel modeling of settlement of foundations in permafrost regions

  • Wang, Songhe;Qi, Jilin;Yu, Fan;Liu, Fengyin
    • Geomechanics and Engineering / v.10 no.2 / pp.225-245 / 2016
  • Settlement of foundations in permafrost regions primarily results from three physical and mechanical processes: thaw consolidation of the permafrost layer, creep of warm frozen soils, and the additional deformation of the seasonal active layer induced by freeze-thaw cycling. This paper first establishes theoretical models for the three sources of settlement: a statistical damage model for soils that experience cyclic freeze-thaw, a large-strain thaw consolidation theory incorporating a modified Richards' equation and a Drucker-Prager yield criterion, and a simple rheological-element-based creep model for frozen soils. A novel numerical method is proposed for the live computation of thaw consolidation, creep, and freeze-thaw cycling in the corresponding domains, which vary with the heat budget in frozen ground. The method was implemented in the FISH language on the FLAC platform and verified against freeze-thaw tests on sandy clay; the calculated results agree well with the measured data. Finally, a laboratory model test on a half embankment was simulated.

Algorithmic GPGPU Memory Optimization

  • Jang, Byunghyun;Choi, Minsu;Kim, Kyung Ki
    • JSTS:Journal of Semiconductor Technology and Science / v.14 no.4 / pp.391-406 / 2014
  • The performance of General-Purpose computation on Graphics Processing Units (GPGPU) is heavily dependent on memory access behavior. This sensitivity is due to a combination of the underlying Massively Parallel Processing (MPP) execution model present on GPUs and the lack of architectural support for handling irregular memory access patterns. Application performance can be significantly improved by applying memory-access-pattern-aware optimizations that exploit knowledge of the characteristics of each access pattern. In this paper, we present an algorithmic methodology to semi-automatically find the best mapping of the memory accesses present in serial loop nests to underlying data-parallel architectures, based on a comprehensive static memory access pattern analysis. To that end, we present a simple yet powerful mathematical model that captures all memory access pattern information present in serial data-parallel loop nests. We then show how this model is used in practice to select the most appropriate memory space for data and to search a large design space for an appropriate thread mapping and work-group size. To evaluate the effectiveness of our methodology, we report execution speedups using selected benchmark kernels that cover a wide range of memory access patterns commonly found in GPGPU workloads. Our experimental results are reported using the industry-standard heterogeneous programming language OpenCL, targeting the NVIDIA GT200 architecture.
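
The flavor of static access-pattern classification described above can be sketched for a single linear index expression; the two-coefficient model and the labels are this sketch's simplification, not the paper's full mathematical model.

```python
# Model an array access A[ci*i + cj*j] inside a data-parallel loop nest,
# with consecutive threads mapped to consecutive j values, and classify
# the pattern by the stride cj between neighbouring threads.

def classify_access(ci, cj):
    """Classify A[ci*i + cj*j] when consecutive threads vary j by 1."""
    if cj == 1:
        return "coalesced"  # neighbouring threads hit neighbouring words
    if cj == 0:
        return "broadcast"  # all threads in a row read the same address
    return "strided"        # stride cj between neighbouring threads

# Row-major walk A[i*N + j] with N = 64: coalesced.
print(classify_access(64, 1))   # -> coalesced
# Transposed walk A[j*N + i]: strided, a candidate for local-memory tiling.
print(classify_access(1, 64))   # -> strided
```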

An Open Standard-based Terrain Tile Production Chain for Geo-referenced Simulation

  • Yoo, Byoung-Hyun
    • Korean Journal of Remote Sensing / v.24 no.5 / pp.497-506 / 2008
  • The need for digital models of real environments, such as 3D terrain or cyber city models, is increasing. Most modeling and simulation applications require a virtual environment constructed from geospatial information of the real world in order to guarantee the reliability and accuracy of the simulation. The most fundamental data for building such virtual environments, terrain elevation and orthogonal imagery, are acquired from the optical sensors of satellites or airplanes. Providing interoperable and reusable digital models is important for promoting the practical application of high-resolution satellite imagery. This paper presents new research on the representation of geospatial information, especially the 3D shape and appearance of virtual terrain, and describes a framework for constructing real-time 3D models of large terrains based on high-resolution satellite imagery, providing an infrastructure for 3D simulation with geographical context. Web architecture, XML languages, and open protocols for building standards-based 3D terrain are presented, along with details of the standards-based approach to providing an infrastructure for real-time 3D simulation using high-resolution satellite imagery. This work would facilitate interchange and interoperability across diverse systems and be usable by governments, industry, scientists, and the general public.