• Title/Summary/Keyword: BERT

390 search results

BERT-Based Logits Ensemble Model for Gender Bias and Hate Speech Detection

  • Sanggeon Yun;Seungshik Kang;Hyeokman Kim
    • Journal of Information Processing Systems
    • /
    • v.19 no.5
    • /
    • pp.641-651
    • /
    • 2023
  • Malicious hate speech and gender-biased comments are common in online communities and cause social problems. Detecting gender bias and hate speech has been investigated, but it remains difficult because such content can be expressed in many different ways. To address this problem, we attempted to detect malicious comments in a Korean hate speech dataset constructed in 2020. We explored bidirectional encoder representations from transformers (BERT)-based deep learning models that use hyperparameter tuning, data sampling, and logits ensembles with a label distribution. We evaluated our models in Kaggle competitions for gender bias, general bias, and hate speech detection. For gender bias detection, an F1-score of 0.7711 was achieved with an ensemble of the Soongsil-BERT and KcELECTRA models. For general bias detection, which subsumes the gender bias task, the ensemble model achieved the best F1-score of 0.7166.
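
A minimal sketch of the logits-ensemble step described in this abstract, in Python with Hugging Face transformers. The checkpoint paths are placeholders for classifiers already fine-tuned on the task, not the authors' released models, and the label mapping is an assumption:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder paths to two classifiers already fine-tuned on the task.
MODELS = ["path/to/soongsil-bert-finetuned", "path/to/kcelectra-finetuned"]

def ensemble_predict(text: str) -> int:
    summed = None
    for name in MODELS:
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name).eval()
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits        # shape: (1, num_labels)
        summed = logits if summed is None else summed + logits
    return int(summed.argmax(dim=-1))              # ensemble label id

print(ensemble_predict("예시 댓글"))  # assumed mapping: 0 = none, 1 = biased/hateful
```

Combining at the logit level rather than the predicted-label level lets a confidently correct model outweigh a marginally wrong one, which is the usual motivation for this style of ensembling.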

Design of Real-Time Voice Phishing Detection Techniques using KoBERT (KoBERT를 활용한 실시간 보이스피싱 탐지기법 개념설계)

  • Yeong Jin Kim;Byoung-Yup Lee;Ah Reum Kang
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2024.01a
    • /
    • pp.95-96
    • /
    • 2024
  • This paper proposes a detection technique for preventing voice phishing, a type of financial crime, in real time. The proposed model records the voice played through the receiver, converts it to a text file with Naver CSR (Cloud Speech Recognition), and trains the deep-learning-based KoBERT on various voice phishing patterns. By processing real call data appropriately for fast and accurate detection in a real-time environment, the technique is expected to contribute to effective voice phishing prevention.

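A conceptual sketch of the proposed pipeline (recorded call audio → speech-to-text → KoBERT classifier). The STT step is stubbed out because the Naver CSR request format is not given in the abstract, and the fine-tuned KoBERT path is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def transcribe(audio_path: str) -> str:
    # Stub for the Naver CSR (Cloud Speech Recognition) step; the real
    # implementation would send the recorded audio to the CSR service.
    return "계좌가 범죄에 연루되어 안전 계좌로 이체해야 합니다"  # placeholder transcript

def is_phishing(audio_path: str,
                model_dir: str = "path/to/kobert-phishing") -> bool:
    text = transcribe(audio_path)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForSequenceClassification.from_pretrained(model_dir).eval()
    inputs = tokenizer(text, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return bool(logits.argmax(dim=-1).item())      # assumed: 1 = phishing

print(is_phishing("call_segment.wav"))
```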

BERT-based Classification Model for Korean Documents (한국어 기술문서 분석을 위한 BERT 기반의 분류모델)

  • Hwang, Sangheum;Kim, Dohyun
    • The Journal of Society for e-Business Studies
    • /
    • v.25 no.1
    • /
    • pp.203-214
    • /
    • 2020
  • Classifying technical documents such as patents and R&D project reports is necessary for understanding trends in technology convergence, interdisciplinary joint research, and technology development. Text mining techniques have mainly been used for this task, but they have the disadvantage that the features representing each document must be engineered by hand. In this study, we propose a BERT-based document classification model that automatically extracts document features from the text of national R&D projects and classifies them. We then verify the applicability and performance of the proposed model.
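
A hedged sketch of such a BERT-based document classifier, fine-tuned with the Hugging Face Trainer; the multilingual checkpoint, toy dataset, and hyperparameters are illustrative assumptions, not the paper's exact setup:

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

# Toy stand-in for labeled national R&D project texts.
ds = Dataset.from_dict({"text": ["과제 요약문 예시 하나", "과제 요약문 예시 둘"],
                        "label": [0, 1]})

name = "bert-base-multilingual-cased"  # assumed Korean-capable encoder
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
            batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=ds,
    tokenizer=tokenizer,   # enables dynamic padding of each batch
)
trainer.train()            # BERT learns the document features end to end
```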

A BERT-Based Deep Learning Approach for Vulnerability Detection (BERT를 이용한 딥러닝 기반 소스코드 취약점 탐지 방법 연구)

  • Jin, Wenhui;Oh, Heekuck
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.6
    • /
    • pp.1139-1150
    • /
    • 2022
  • With the rapid development of the software industry, software is everywhere in our daily lives, and the large amount of newly developed code brings a growing number of vulnerabilities. Vulnerabilities can be exploited by attackers, resulting in the disclosure of private data and threats to property and personal safety. In particular, with the volume of code growing so quickly, manual analysis by experts is no longer sufficient. Machine learning has shown high performance in identification and classification tasks, and vulnerability detection is also well suited to it; as a result, many studies have used RNN-based models to detect vulnerabilities. However, RNN models have the limitation that the longer the code, the less well its earlier parts are learned. In this paper, we propose a novel method that applies BERT to vulnerability detection. Its accuracy was 97.5%, an increase of 1.5%, and its efficiency was 69% higher than that of VulDeePecker.
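
An illustrative sketch of the inference side of this approach, treating a source-code snippet as an ordinary token sequence for a fine-tuned binary classifier; the checkpoint path and label order are assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

snippet = """
char buf[8];
strcpy(buf, user_input);   /* classic unbounded copy */
"""

model_dir = "path/to/bert-vuln-detector"  # placeholder fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForSequenceClassification.from_pretrained(model_dir).eval()

# Self-attention sees the whole (truncated) window at once, which is the
# advantage over RNNs that the abstract points to for longer code.
inputs = tokenizer(snippet, return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(f"P(vulnerable) = {probs[0, 1]:.3f}")  # assumed label order: [safe, vulnerable]
```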

Comparison and Analysis of Unsupervised Contrastive Learning Approaches for Korean Sentence Representations (한국어 문장 표현을 위한 비지도 대조 학습 방법론의 비교 및 분석)

  • Young Hyun Yoo;Kyumin Lee;Minjin Jeon;Jii Cha;Kangsan Kim;Taeuk Kim
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.360-365
    • /
    • 2022
  • Sentence representation is one of the major tools that can be usefully exploited for solving various problems and building applications in natural language processing. However, sentence representations derived from the recently popular pre-trained language models are known to underperform on tasks such as semantic textual similarity (STS) measurement because of inherent properties such as pronounced anisotropy. To address this problem, applying contrastive learning to pre-trained language models has been actively studied in the literature, and unsupervised contrastive learning methods that use unlabeled data have drawn particular attention. However, most existing studies have focused on improving English sentence representations, and research on the corresponding Korean sentence representations is comparatively scarce. In this paper, we therefore apply representative unsupervised contrastive learning methods (ConSERT, SimCSE) to various Korean pre-trained language models (KoBERT, KR-BERT, KLUE-BERT) and evaluate them on sentence similarity tasks (KorSTS, KLUE-STS). The results confirm that Korean generally follows trends similar to English, and we additionally observe the following. First, with both unsupervised contrastive learning methods, KLUE-BERT was more stable and performed better than KoBERT and KR-BERT. Second, among the data augmentation methods introduced in ConSERT, token shuffling generally performed best. Third, both unsupervised contrastive learning methods overfit the KLUE-STS training data used for validation. In conclusion, this study verifies that Korean sentence representations, like English ones, can be improved through unsupervised contrastive learning, and we hope these results serve as a foundation for future research on Korean sentence representations.

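The core of unsupervised SimCSE, one of the two methods compared above, is small enough to sketch: the same batch is encoded twice with dropout active, and the two dropout-noised views form the positive pairs of an InfoNCE loss. The sketch below uses KLUE-BERT ("klue/bert-base" on the Hugging Face Hub); the [CLS] pooling and temperature are assumptions:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModel.from_pretrained("klue/bert-base")
model.train()  # keep dropout active: dropout noise is the only augmentation

sentences = ["문장 표현 예시 하나", "문장 표현 예시 둘"]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

def embed():
    return model(**batch).last_hidden_state[:, 0]  # [CLS] pooling (assumed)

z1, z2 = embed(), embed()                # two dropout-noised views per sentence
sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / 0.05
loss = F.cross_entropy(sim, torch.arange(len(sentences)))  # InfoNCE objective
loss.backward()  # an optimizer step would follow in a real training loop
```

Each sentence's second view is its positive; every other sentence in the batch is a negative, so larger batches give a stronger training signal.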

Analysis of trends in deep learning and reinforcement learning

  • Dong-In Choi;Chungsoo Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.55-65
    • /
    • 2023
  • In this paper, we apply KeyBERT (keyword extraction with bidirectional encoder representations from transformers)-driven topic extraction and topic frequency analysis to deep learning and reinforcement learning research to discover the rapidly changing trends in these fields. First, we crawled abstracts of research papers on deep learning and reinforcement learning and divided them temporally into two groups. After pre-processing the crawled data, we extracted topics with the KeyBERT algorithm and then analyzed them in terms of occurrence frequency. This analysis reveals distinct trends for all of the analyzed algorithms and applications and clearly shows which topics are gaining interest. It also demonstrates the effectiveness of topic extraction and frequency analysis for studying research trends, and the scheme is expected to be applicable to trend analysis in other research fields. In addition, the analysis offers insight into how deep learning will evolve in the near future and can guide the selection of research topics and methodologies by informing researchers which topics and methods have recently been attracting attention.
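
A short sketch of the extraction-and-counting step using the keybert package's public API; the two abstract lists are placeholders for the crawled time periods:

```python
from collections import Counter
from keybert import KeyBERT

abstracts_early = ["Convolutional networks for large-scale image recognition ..."]
abstracts_late = ["Transformer-based agents for offline reinforcement learning ..."]

kw_model = KeyBERT()  # defaults to a sentence-transformers backbone

def topic_counts(abstracts, top_n=5):
    counts = Counter()
    for doc in abstracts:
        for phrase, _score in kw_model.extract_keywords(
                doc, keyphrase_ngram_range=(1, 2),
                stop_words="english", top_n=top_n):
            counts[phrase] += 1
    return counts

# Comparing the two periods' frequency tables exposes the trend shift.
print(topic_counts(abstracts_early).most_common(10))
print(topic_counts(abstracts_late).most_common(10))
```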

Fine-tuning BERT-based NLP Models for Sentiment Analysis of Korean Reviews: Optimizing the sequence length (BERT 기반 자연어처리 모델의 미세 조정을 통한 한국어 리뷰 감성 분석: 입력 시퀀스 길이 최적화)

  • Sunga Hwang;Seyeon Park;Beakcheol Jang
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.47-56
    • /
    • 2024
  • This paper proposes a method for fine-tuning BERT-based natural language processing models to perform sentiment analysis on Korean review data. By varying the input sequence length during this process and comparing the resulting performance, we explore the optimal input sequence length. For this purpose, text reviews collected by web scraping from the clothing shopping platform M were used. During data preprocessing, positive and negative satisfaction scores were recalibrated to improve the accuracy of the analysis: the GPT-4 API was used to reset the labels to reflect the actual sentiment of the review texts, and class imbalance was addressed by adjusting the data to a 6:4 ratio. Reviews on the clothing shopping platform averaged about 12 tokens in length, and to find the model best suited to this, five BERT-based pre-trained models were compared in the modeling stage with a focus on input sequence length and memory usage. The experimental results indicated that an input sequence length of 64 generally gave the best balance of performance and memory usage. In particular, the KcELECTRA model performed best at an input sequence length of 64, achieving over 92% accuracy and reliability in sentiment analysis of Korean review data. Furthermore, using BERTopic, we provide a Korean review sentiment analysis process that classifies newly incoming review data by category and extracts sentiment scores for each category with the final model.
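
A hedged sketch of the sequence-length sweep, using the public KcELECTRA checkpoint ("beomi/KcELECTRA-base"); the evaluation is reduced to a single forward pass and the review batch is a placeholder:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "beomi/KcELECTRA-base"  # public KcELECTRA checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

reviews = ["배송 빠르고 옷 질도 좋아요"] * 32  # placeholder review batch

for max_len in (32, 64, 128, 256):
    inputs = tokenizer(reviews, padding="max_length", truncation=True,
                       max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # On a GPU, torch.cuda.max_memory_allocated() would give the memory
    # side of the trade-off; accuracy would come from a labeled dev set.
    print(max_len, logits.shape)
```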

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal
    • /
    • v.43 no.2
    • /
    • pp.299-312
    • /
    • 2021
  • High performance on the task of zero-anaphora resolution (ZAR) is necessary for completely understanding texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed to build ZAR systems owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even with these models. To enhance current ZAR techniques, we fine-tuned pretrained bidirectional encoder representations from transformers (BERT). Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text, and it extensively exploits the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach significantly improves ZAR performance.
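
A sketch of this per-word antecedent decision cast as standard BERT token classification with two labels (antecedent / not antecedent). The encoder choice and the freshly initialized head are illustrative assumptions, not the authors' exact architecture:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

name = "bert-base-multilingual-cased"  # assumed Korean-capable encoder
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)
model.eval()  # the head here is untrained; fine-tuning supplies the signal

text = "철수는 밥을 먹었다. 그리고 학교에 갔다."  # second clause drops its subject
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # (1, seq_len, 2)
flags = logits.argmax(dim=-1)[0]                 # one decision per word piece
for tok, flag in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]),
                     flags.tolist()):
    print(tok, flag)                             # assumed: 1 = antecedent
```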

Phases of Alienation in Le Torrent by Anne Hébert (안느 에베르의 중·단편집 『격류』에 드러나는 소외의 시대상)

  • Kang, Choung-Kwon
    • Cross-Cultural Studies
    • /
    • v.39
    • /
    • pp.7-32
    • /
    • 2015
  • In 1950, Anne Hébert published Le Torrent, a collection of seven short stories. These stories, containing many shocking themes and expressions, placed her among the pioneers of the modern novel in Quebec. This paper analyzes several phases of alienation described in the stories and the reactions of the alienated characters to their situations. Examples of alienated and mentally or physically deformed characters in Le Torrent include François, Stéphanie, and Stella. Although the author invited readers to interpret these characters on an individual level, this paper interprets them differently. The result of this study is as follows. Alienation comes not from one's interior but from one's exterior; society and history are its major agents. The injustice of life imposed on the characters results from political and religious underdevelopment, cultural backwardness, and the absence of a social security system and of universal education at the time. The conquest of Quebec by England left a deep historical wound on French Canadians; this fact is, in my opinion, one of the essential themes of Anne Hébert's novels. In spite of all these alienating situations, the reactions shown by the characters are limited to escapist illusion, self-destruction, mistaken revenge, eternal submission, and the like. In conclusion, Le Torrent by Anne Hébert, which deeply explores the themes of violence and alienation, can be called an authentic landscape of the inner world of the Québécois before the Révolution tranquille.

Sentiment Analysis and Data Visualization of U.S. Public Companies' Disclosures using BERT (BERT를 활용한 미국 기업 공시에 대한 감성 분석 및 시각화)

  • Kim, Hyo Gon;Yoo, Dong Hee
    • The Journal of Information Systems
    • /
    • v.31 no.3
    • /
    • pp.67-87
    • /
    • 2022
  • Purpose This study quantified companies' views on the COVID-19 pandemic through sentiment analysis of U.S. public companies' disclosures. It aims to provide timely insights to shareholders, investors, and consumers by analyzing and visualizing sentiment changes over time as well as similarities and differences by industry. Design/methodology/approach From more than fifty thousand Form 10-K and Form 10-Q filings published between 2020 and 2021, we extracted over one million passages related to the COVID-19 pandemic. Using the FinBERT language model, fine-tuned for the finance domain, we conducted sentiment analysis of the texts and classified them as positive, negative, or neutral. In addition, we illustrated the results with various visualization techniques for easier understanding. Findings The analysis indicated that the overall sentiment of U.S. public companies changed over time as the pandemic progressed: positive sentiment gradually increased and negative sentiment tended to decrease, while neutral sentiment showed no clear trend. When comparing sentiment by industry, the patterns and time-series changes of positive and negative sentiment were similar across all industries, whereas neutral sentiment differed among them.
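
A minimal sketch of the FinBERT scoring step on filing passages. "ProsusAI/finbert" is a publicly available finance-domain FinBERT; the fine-tuned variant actually used in the paper may differ, and the passages are placeholders:

```python
from collections import Counter
from transformers import pipeline

clf = pipeline("text-classification", model="ProsusAI/finbert")

passages = [
    "The COVID-19 pandemic materially disrupted our supply chain.",
    "Demand recovered strongly as pandemic-related restrictions eased.",
]

# Tallying labels per period or per industry yields the trends reported above.
tally = Counter(result["label"] for result in clf(passages, truncation=True))
print(tally)  # e.g. Counter({'negative': 1, 'positive': 1})
```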