• Title/Summary/Keyword: Representations from Transformers (BERT)

A BERT-based Transfer Learning Model for Bidirectional HR Matching (양방향 인재매칭을 위한 BERT 기반의 전이학습 모델)

  • Oh, Sojin;Jang, Moonkyoung;Song, Hee Seok
    • Journal of Information Technology Applications and Management / v.28 no.4 / pp.33-43 / 2021
  • While youth unemployment has remained high since the global COVID-19 pandemic began, SMEs (small and medium-sized enterprises) are still struggling to fill vacancies. Because of information mismatch, it is difficult for SMEs to find good candidates and for job seekers to find appropriate openings. To overcome this mismatch, this study proposes a fine-tuning model for bidirectional HR matching based on the pre-trained language model BERT (Bidirectional Encoder Representations from Transformers). Having been sufficiently pre-trained on domain terms, including technical jargon, the proposed model can recommend job openings suitable for an applicant, or applicants suitable for a job. Experimental results demonstrate the superior performance of our model in terms of precision, recall, and F1-score compared to an existing content-based metric-learning model. This study provides insights for developing practical job-recommendation models and offers suggestions for future research.
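
As an illustration of the kind of pair-wise scoring such a model performs, the sketch below encodes a resume/job-posting pair for a BERT sequence-pair classifier. The checkpoint name, the "label 1 = match" convention, and the example texts are assumptions of this sketch, not the authors' released model, and the head is untrained until fine-tuned on labeled applicant-job pairs.

```python
# Minimal sketch: score a (resume, job posting) pair with a BERT
# sequence-pair classifier. Checkpoint and label semantics are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

resume = "Five years of backend development with Java and Spring."
job_post = "Hiring a backend engineer for Java-based web services."

# Encode as one sentence pair: [CLS] resume [SEP] job_post [SEP]
inputs = tokenizer(resume, job_post, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
match_prob = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"match probability: {match_prob:.3f}")  # meaningful only after fine-tuning
```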

A Recommendation System by Extracting Scholarship Information with a BERT's Q&A Model (BERT Q&A 모델을 활용한 장학금 정보 추출 및 추천 시스템)

  • Byeongjun Kang;Kyujin Kim;Jinah Park;Ijun Jang;Jaehyun Joo;Hyungjoon Koo
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.288-289 / 2023
  • Given that scholarships are becoming increasingly important amid inflation driven by global issues and concerns over rising university tuition, this paper proposes a system that collects existing scholarship announcements and recommends individually tailored notices using a BERT Q&A (Bidirectional Encoder Representations from Transformers Question & Answering) model. Scholarship information is first collected via web crawling, and key fields are extracted with the BERT Q&A model combined with predefined rules. After a classification step, the extracted fields are matched against user-supplied information, and we implemented an application that recommends scholarship postings meeting the user's conditions.
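
As a rough illustration of the extraction step, the sketch below poses field-specific questions to an off-the-shelf question-answering pipeline. The checkpoint (an English SQuAD model) and the notice text are stand-in assumptions, since the paper pairs a Korean BERT Q&A model with hand-written rules.

```python
# Minimal sketch: extract scholarship fields by asking a QA model
# field-specific questions over the announcement text.
from transformers import pipeline

# English SQuAD model as a stand-in; the paper uses a Korean BERT Q&A model.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

notice = ("The Merit Scholarship pays 2,000,000 KRW per semester. "
          "Applications are accepted until March 15 through the portal.")

questions = {
    "amount": "How much does the scholarship pay?",
    "deadline": "When is the application deadline?",
}
for field, question in questions.items():
    ans = qa(question=question, context=notice)
    print(f"{field}: {ans['answer']} (score {ans['score']:.2f})")
```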

BERT-Based Logits Ensemble Model for Gender Bias and Hate Speech Detection

  • Sanggeon Yun;Seungshik Kang;Hyeokman Kim
    • Journal of Information Processing Systems / v.19 no.5 / pp.641-651 / 2023
  • Malicious hate speech and gender-biased comments are common in online communities and cause social problems. Detecting gender bias and hate speech has been studied, but it remains difficult because they can be expressed in many different ways. To address this, we detect malicious comments in a Korean hate speech dataset constructed in 2020. We explore BERT (bidirectional encoder representations from transformers)-based deep learning models using hyperparameter tuning, data sampling, and logits ensembles weighted by the label distribution. We evaluated our models in Kaggle competitions for gender bias, general bias, and hate speech detection. For gender bias detection, an F1-score of 0.7711 was achieved with an ensemble of the Soongsil-BERT and KcELECTRA models. For the general bias task, which subsumes the gender bias task, the ensemble model achieved the best F1-score of 0.7166.
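
The core ensembling step can be sketched with plain arrays: each fine-tuned model emits per-class logits, and the ensemble sums them before the argmax. The numbers below are invented for illustration, and the label-distribution weighting the paper adds on top is omitted.

```python
# Minimal sketch of a logits ensemble over two fine-tuned classifiers.
import numpy as np

# Per-model logits for three comments, shape (n_samples, n_classes);
# in practice these come from the fine-tuned Soongsil-BERT and KcELECTRA.
logits_model_a = np.array([[2.1, -0.3], [0.2, 1.7], [-1.0, 0.4]])
logits_model_b = np.array([[1.5, 0.1], [-0.4, 2.2], [0.3, -0.2]])

# Sum (equivalently, average) the raw logits, then take the argmax.
ensemble_logits = logits_model_a + logits_model_b
predictions = ensemble_logits.argmax(axis=1)
print(predictions)  # [0 1 1]
```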

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems / v.28 no.5 / pp.15-30 / 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has been applied to various neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers). This paper proposes a method to build a binary text classification model that determines whether a Korean movie-review comment is positive or negative. To this end, a pre-trained multilingual BERT model is fine-tuned and transfer-learned on a new Korean training dataset. The pre-trained multilingual BERT-Base model covers 104 languages and has 12 layers, a hidden size of 768, 12 attention heads, and 110M parameters. To turn it into a text classification model, the input and output layers were fine-tuned, resulting in a new model with 178 million parameters. Using the fine-tuned model with a maximum sequence length of 128 tokens, a batch size of 16, and 5 epochs, transfer learning was conducted on 10,000 training and 5,000 test samples, yielding a binary sentiment classification model for Korean movie reviews with an accuracy of 0.9582, a loss of 0.1177, and an F1-score of 0.81. Transfer learning on a dataset five times larger produced a model with an accuracy of 0.9562, a loss of 0.1202, and an F1-score of 0.86.
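
A minimal sketch of this fine-tuning setup, using the hyperparameters reported above (multilingual BERT-Base, maximum length 128, batch size 16, 5 epochs). The two-example toy dataset is a placeholder assumption standing in for the paper's 10,000/5,000 Korean movie-review split.

```python
# Minimal sketch: fine-tune multilingual BERT-Base for binary sentiment
# classification with the reported hyperparameters.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

name = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

train_ds = Dataset.from_dict({          # placeholder for the 10,000 samples
    "text": ["정말 재미있는 영화였다", "시간 낭비였다"],
    "label": [1, 0],
})

def encode(batch):
    # Maximum sequence length of 128 tokens, as in the paper
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

args = TrainingArguments(output_dir="mbert-sentiment",
                         per_device_train_batch_size=16,  # batch size 16
                         num_train_epochs=5)              # 5 epochs
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds.map(encode, batched=True))
trainer.train()
```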

Ontology Matching Method for Solving Ontology Heterogeneity Issue (온톨로지 이질성 문제를 해결하기 위한 온톨로지 매칭 방법)

  • Hongzhou Duan;Yongju Lee
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.3 / pp.571-576 / 2024
  • Ontologies are created by domain experts, but the same content may be expressed differently by each expert owing to differing understandings of domain knowledge. Since ontology standardization is still lacking, multiple ontologies can exist within the same domain, a phenomenon called ontology heterogeneity. We therefore propose a novel ontology matching method that combines SCBOW (Siamese Continuous Bag Of Words) and BERT (Bidirectional Encoder Representations from Transformers) models to address the ontology heterogeneity issue. Ontologies are expressed as graphs, and the SimRank algorithm is used to solve the one-to-many problem that can occur in ontology matching. Experimental results show that our approach improves performance by about 8% over a traditional matching algorithm. The proposed method can enhance and refine the alignment technology used in ontology matching.
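
The sketch below illustrates the two similarity signals such a method can combine: lexical similarity from BERT embeddings of class labels, and structural similarity from SimRank over an ontology graph (here via networkx). The checkpoint, the toy graph, and the omission of the SCBOW component are assumptions of this sketch.

```python
# Minimal sketch: lexical similarity from BERT label embeddings plus
# structural similarity from SimRank over a toy ontology graph.
import networkx as nx
import torch
from transformers import AutoTokenizer, AutoModel

name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
bert = AutoModel.from_pretrained(name)

def embed(label: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of an ontology class label."""
    inputs = tokenizer(label, return_tensors="pt")
    with torch.no_grad():
        return bert(**inputs).last_hidden_state.mean(dim=1).squeeze(0)

# Lexical signal: cosine similarity of label embeddings across ontologies.
lexical = torch.cosine_similarity(embed("Author"), embed("Writer"), dim=0)

# Structural signal: SimRank over one ontology's class graph.
g = nx.DiGraph([("Document", "Book"), ("Document", "Article"),
                ("Person", "Author")])
structural = nx.simrank_similarity(g, source="Book", target="Article")
print(float(lexical), structural)
```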

Robust Sentiment Classification of Metaverse Services Using a Pre-trained Language Model with Soft Voting

  • Haein Lee;Hae Sun Jung;Seon Hong Lee;Jang Hyun Kim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.9 / pp.2334-2347 / 2023
  • Metaverse services generate text data, a form of ubiquitous-computing data, in real time, and analyzing user emotions from this data is an important task. This study classifies user sentiment using deep learning and pre-trained language models based on the Transformer architecture. Whereas previous studies collected data from a single platform, the current study gathers review data retrieved with the keyword "Metaverse" from both YouTube and the Google Play Store for broader generality. As a result, the Bidirectional Encoder Representations from Transformers (BERT) and Robustly optimized BERT approach (RoBERTa) models combined via a soft-voting mechanism achieved the highest accuracy, 88.57%. In addition, the area under the curve (AUC) score of the ensemble comprising RoBERTa, BERT, and A Lite BERT (ALBERT) was 0.9458. The results demonstrate that ensembles built around the RoBERTa model perform well, so the RoBERTa model can be applied on platforms that provide metaverse services. The findings contribute to the advancement of natural language processing techniques in metaverse services, which are increasingly important in digital platforms and virtual environments. Overall, this study provides empirical evidence that sentiment analysis with deep learning and pre-trained language models is a promising approach to improving user experience in metaverse services.
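
Soft voting itself is a small computation: average the per-class probabilities of the member models and take the argmax. The sketch below uses made-up logits for the three models; real usage would substitute each fine-tuned model's outputs.

```python
# Minimal sketch of soft voting over three fine-tuned classifiers.
import numpy as np
from sklearn.metrics import roc_auc_score

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Logits from three models (BERT, RoBERTa, ALBERT) for two reviews.
logits = [np.array([[1.9, -0.4], [0.1, 0.8]]),   # BERT
          np.array([[2.3, 0.2], [-0.5, 1.6]]),   # RoBERTa
          np.array([[1.1, -0.9], [0.4, 0.3]])]   # ALBERT

avg_prob = np.mean([softmax(z) for z in logits], axis=0)
predictions = avg_prob.argmax(axis=1)            # [0 1]

y_true = np.array([0, 1])                        # toy ground truth
print(roc_auc_score(y_true, avg_prob[:, 1]))     # 1.0 on this toy pair
```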

BERT-based Hateful Text Filtering System - Focused on University Petition System (BERT 기반 혐오성 텍스트 필터링 시스템 - 대학 청원 시스템을 중심으로)

  • Taejin Moon;Hynebin Bae;Hyunsu Lee;Sanguk Park;Youngjong Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.714-715 / 2023
  • Petition systems have recently emerged as an important channel for reflecting and responding to people's diverse opinions. However, manually classifying large volumes of petition posts is very time-consuming and prone to human error. Solving this requires a petition classification system built on natural language processing (NLP). This study proposes a text filtering system based on BERT (Bidirectional Encoder Representations from Transformers) [1]. BERT is among the best-performing models in text classification; we use it to classify petition posts and, based on the result, decide whether a post should be exposed. This paper introduces the theoretical background and architecture of BERT and its fine-tuning procedure, and shows how to implement a petition classification system with it. The proposed BERT-based text filtering system is expected to automate petition classification and improve both response speed and accuracy. It is also applicable across diverse domains and suited to large-scale data processing. In a university petition system, it can block inappropriate content such as hate speech in advance while collecting student opinions efficiently.
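
A minimal sketch of the exposure decision, assuming a Korean BERT checkpoint with a randomly initialized two-class head and a 0.5 threshold; the paper fine-tunes its own model on hate-speech labels before making this decision.

```python
# Minimal sketch: classify a petition post and decide whether to expose it.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Korean BERT as a base; the head would be fine-tuned on hate-speech labels.
name = "beomi/kcbert-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

def should_expose(post: str, threshold: float = 0.5) -> bool:
    """Return True when the post may be shown on the petition board."""
    inputs = tokenizer(post, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)
    return probs[0, 1].item() < threshold  # index 1 = "hateful" (assumed)
```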

HTML Tag Depth Embedding: An Input Embedding Method of the BERT Model for Improving Web Document Reading Comprehension Performance (HTML 태그 깊이 임베딩: 웹 문서 기계 독해 성능 개선을 위한 BERT 모델의 입력 임베딩 기법)

  • Mok, Jin-Wang;Jang, Hyun Jae;Lee, Hyun-Seob
    • Journal of Internet of Things and Convergence / v.8 no.5 / pp.17-25 / 2022
  • Recently, massive amounts of data have been generated as the number of edge devices increases; in particular, the number of raw, unstructured HTML documents has grown. MRC (Machine Reading Comprehension), in which a natural language processing model finds the important information within an HTML document, is therefore becoming more important. In this paper, we propose HTDE (HTML Tag Depth Embedding), which allows BERT to learn the depth of the HTML document structure. HTDE builds a tag stack over the HTML document, extracts the depth for each input token, and adds an embedding layer that takes each token's depth as input at BERT's input-embedding step. Since tokenization with HTDE captures the HTML document structure through the relationships of surrounding tokens, HTDE improves BERT's accuracy on HTML documents. Finally, we demonstrate that the proposed method achieves higher accuracy than BERT with conventional embeddings.
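
The idea can be sketched in two parts: a tag stack that records each text fragment's depth while parsing, and a learned depth embedding added on top of BERT's usual token/segment/position sum. The layer names, the depth cap, and the per-token depth tensor below are assumptions of this sketch.

```python
# Minimal sketch of HTDE: per-fragment tag depth, then a depth embedding.
import torch
import torch.nn as nn
from html.parser import HTMLParser

class DepthTagger(HTMLParser):
    """Records the tag-stack depth of every text fragment."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.spans = []
    def handle_starttag(self, tag, attrs):
        self.depth += 1
    def handle_endtag(self, tag):
        self.depth -= 1
    def handle_data(self, data):
        if data.strip():
            self.spans.append((data.strip(), self.depth))

tagger = DepthTagger()
tagger.feed("<html><body><div><p>BERT reads web pages</p></div></body></html>")
print(tagger.spans)  # [('BERT reads web pages', 4)]

# Extra embedding summed with BERT's token/segment/position embeddings.
depth_embedding = nn.Embedding(num_embeddings=32, embedding_dim=768)
token_depths = torch.tensor([[4, 4, 4, 4]])   # one depth per wordpiece
extra = depth_embedding(token_depths)         # shape (1, 4, 768)
# inputs_embeds = bert.embeddings.word_embeddings(input_ids) + extra
```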

Probing Semantic Relations between Words in Pre-trained Language Model (사전학습 언어모델의 단어간 의미관계 이해도 평가)

  • Oh, Dongsuk;Kwon, Sunjae;Lee, Chanhee;Lim, Heuiseok
    • Annual Conference on Human and Language Technology / 2020.10a / pp.237-240 / 2020
  • Pre-trained language models have shown strong performance on a variety of natural language processing tasks. However, because they learn only contextual information within sentences, their ability to infer semantic relations between words is limited. Recently, various probing tests have been conducted to assess how well pre-trained language models understand inter-word semantic relations. Such tests are an efficient way to analyze a model's strengths and weaknesses, and they suggest new directions for building models that understand human language more accurately. In this paper, we evaluate how well BERT (Bidirectional Encoder Representations from Transformers), a representative pre-trained language model, understands semantic relations between words through three tasks. First, we analyze the IsA relation, which captures hypernym/hyponym pairs. Second, we analyze the PartOf relation, as between 'car' and 'transmission'. Finally, we analyze the HasA relation, as between 'bird' and 'wing'. Overall, the BERT-base model performs poorly on most of the inference tasks, while the BERT-large model outperforms BERT-base.
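
One common probing style can be sketched as a fill-mask query per relation type, as below; the English probes and the bert-base-uncased checkpoint are illustrative stand-ins for the paper's Korean probing setup.

```python
# Minimal sketch: probe a masked LM for IsA / PartOf / HasA completions.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

probes = ["A sparrow is a kind of [MASK].",  # IsA (hypernymy)
          "A gearbox is part of a [MASK].",  # PartOf
          "A bird has a [MASK]."]            # HasA
for probe in probes:
    top = fill(probe, top_k=1)[0]
    print(f"{probe} -> {top['token_str']} (p={top['score']:.2f})")
```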

Discovering AI-enabled convergences based on BERT and topic network

  • Ji Min Kim;Seo Yeon Lee;Won Sang Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.1022-1034 / 2023
  • Various aspects of artificial intelligence (AI) have recently become of significant interest to academia and industry. Satisfying these interests requires a comprehensive investigation of AI-related trends across diverse areas. In this study, we identified and predicted emerging convergences using AI-related research abstracts collected from the SCOPUS database. A topic discovery technique based on Bidirectional Encoder Representations from Transformers (BERT) was deployed to identify emerging AI-related topics, which concern edge computing, biomedical algorithms, predictive defect maintenance, medical applications, fake news detection with blockchain, explainable AI, and COVID-19 applications. Convergences among these topics were then analyzed via shortest paths between topics to predict emerging convergences. Our findings indicate emerging AI convergences toward healthcare, manufacturing, legal applications, and marketing. These findings are expected to have policy implications for facilitating such convergences across industries, and the study could contribute to the exploitation and adoption of AI-enabled convergences from a practical perspective.
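
A rough sketch of the two stages: BERT-based topic discovery (indicated here with the BERTopic library as a stand-in for the paper's own pipeline) followed by shortest-path analysis over a topic network. The topic names and edge weights below are invented for illustration.

```python
# Minimal sketch: topic network + shortest paths to flag convergences.
import networkx as nx

# Topic discovery (requires the bertopic package; shown for orientation only):
# from bertopic import BERTopic
# topics, probs = BERTopic().fit_transform(abstracts)  # abstracts: list[str]

# Toy topic network: nodes are discovered topics, edge weights are
# inter-topic distances (smaller = semantically closer).
g = nx.Graph()
g.add_weighted_edges_from([
    ("predictive_maintenance", "edge_computing", 0.4),
    ("edge_computing", "medical_applications", 0.7),
    ("medical_applications", "explainable_ai", 0.3),
    ("explainable_ai", "fake_news_detection", 0.5),
])

# Short weighted paths between topic pairs hint at emerging convergences.
path = nx.shortest_path(g, "predictive_maintenance", "explainable_ai",
                        weight="weight")
print(path, nx.path_weight(g, path, weight="weight"))
```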