• Title/Summary/Keyword: NLP (Natural Language Processing)

Automated Construction Activities Extraction from Accident Reports Using Deep Neural Network and Natural Language Processing Techniques

  • Do, Quan; Le, Tuyen; Le, Chau
    • International conference on construction engineering and project management / 2022.06a / pp.744-751 / 2022
  • Construction is among the most dangerous industries, with numerous accidents occurring at job sites. Following an accident, an investigation report is issued containing all of the specifics. Analyzing the text in construction accident reports can deepen our understanding of historical data and be utilized for accident prevention. However, the conventional approach requires a significant amount of time and effort to read the reports and identify crucial information, and previous studies primarily focused on analyzing the objects and causes involved in accidents rather than the construction activities. This study aims to extract the construction activities taken by workers that are associated with accidents. It presents an automated framework that adopts a deep learning-based approach and natural language processing (NLP) techniques to automatically classify sentences from past construction accident reports into predefined categories: TRADE (a construction activity before an accident), EVENT (an accident), and CONSEQUENCE (the outcome of an accident). The classification model, developed using a Convolutional Neural Network (CNN), achieved a robust accuracy of 88.7%, indicating that the proposed model can investigate the occurrence of accidents with minimal manual involvement and engineering effort. This study is also expected to support safety assessments and the construction of risk management systems.
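
As a rough illustration of the sentence-classification setup described above, the following is a minimal Keras sketch of a CNN text classifier over integer-encoded sentences. The vocabulary size, filter width, and layer dimensions are assumptions, not the paper's published configuration.

```python
# Hedged sketch of a CNN sentence classifier for the three report
# categories (TRADE / EVENT / CONSEQUENCE). All hyperparameters are
# illustrative assumptions, not the paper's configuration.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10_000  # assumed vocabulary size
NUM_CLASSES = 3      # TRADE, EVENT, CONSEQUENCE

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),               # word embeddings
    layers.Conv1D(128, 3, activation="relu"),        # 3-gram convolution filters
    layers.GlobalMaxPooling1D(),                     # strongest response per filter
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10)  # x_train: padded token-id arrays
```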

A Study on Applying Novel Reverse N-Gram for Construction of Natural Language Processing Dictionary for Healthcare Big Data Analysis (헬스케어 분야 빅데이터 분석을 위한 개체명 사전구축에 새로운 역 N-Gram 적용 연구)

  • KyungHyun Lee; RackJune Baek; WooSu Kim
    • The Journal of the Convergence on Culture Technology / v.10 no.3 / pp.391-396 / 2024
  • This study proposes a novel reverse N-Gram approach to overcome the limitations of traditional N-Gram methods and to enhance performance in building an entity dictionary specialized for the healthcare sector. The proposed reverse N-Gram technique allows more precise analysis and processing of the complex linguistic features of healthcare-related big data. To verify the efficiency of the proposed method, big data on healthcare and digital health announced during the Consumer Electronics Show (CES), held each January, was collected. Using the Python programming language, 2,185 news titles and summaries published between January 1 and 31 of 2010 and of 2024 were preprocessed with the new reverse N-Gram method. This resulted in the stable construction of a dictionary for natural language processing in the healthcare field.
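
The abstract does not spell out the exact formulation of the reverse N-Gram. The sketch below assumes it enumerates N-grams from the tail of the token sequence toward the head (plausible for head-final Korean compound terms) and contrasts it with the conventional forward scan.

```python
# Hedged sketch contrasting conventional and "reverse" N-gram extraction.
# The reverse variant is assumed to walk from the end of the sequence
# backwards; this is an interpretation, not the paper's definition.
def forward_ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def reverse_ngrams(tokens, n):
    # Same windows, enumerated tail-first.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n, -1, -1)]

tokens = "wearable digital health monitoring device".split()
print(forward_ngrams(tokens, 2))
print(reverse_ngrams(tokens, 2))
```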

A Survey on Deep Learning-based Pre-Trained Language Models (딥러닝 기반 사전학습 언어모델에 대한 이해와 현황)

  • Sangun Park
    • The Journal of Bigdata / v.7 no.2 / pp.11-29 / 2022
  • Pre-trained language models are the most important and widely used tools in natural language processing tasks. Because they have been pre-trained on a large corpus, high performance can be expected even when fine-tuning with a small amount of data. Since the elements necessary for implementation, such as a pre-trained tokenizer and a deep learning model with pre-trained weights, are distributed together, the cost and time required for natural language processing have been greatly reduced. Transformer variants are the most representative pre-trained language models that provide these advantages, and they are also being actively used in other fields such as computer vision and audio processing. To make it easier for researchers to understand pre-trained language models and apply them to natural language processing tasks, this paper describes the definitions of the language model and the pre-trained language model, and discusses the development process of pre-trained language models, focusing on representative Transformer variants.
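
The distribution pattern the survey describes, a tokenizer and pre-trained weights shipped together and then fine-tuned, looks roughly like the following Hugging Face sketch. The model name and label count are placeholders.

```python
# Minimal sketch of the workflow the survey describes: tokenizer and
# pre-trained weights are downloaded together, then fine-tuned on a
# small labeled set. Model name and label count are placeholders.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # any Transformer variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

batch = tokenizer(["an example sentence"], padding=True,
                  truncation=True, return_tensors="pt")
outputs = model(**batch)  # logits from the (untrained) classification head
# Fine-tuning would proceed with a standard training loop or the Trainer API.
```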

Development of UML Tool using WPF Framework and Forced-Directionality Graph Algorithm

  • Utama, Ahmad Zulfiana; Jang, Duk-Sung
    • Journal of Korea Multimedia Society / v.22 no.6 / pp.706-715 / 2019
  • This research implemented grammatical rules for extracting relationships among class diagram candidates. A problem statement is processed by our algorithm to yield class diagram candidates and relationship candidates. The relationships of class diagrams are extracted automatically from the problem statement using Natural Language Processing (NLP). The extraction uses grammatical rules obtained from various sources and translated into our algorithm. The performance evaluation of the extraction algorithm used ATM problem statements. The application captures the problem statement and automatically draws the class diagram relationships using the Forced-Directionality Graph algorithm. The performance evaluations show that the refining methods for class diagram and relationship extraction improve the recall score.
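
For the layout step, a force-directed algorithm iteratively positions nodes so that connected classes attract and all nodes repel. The paper's WPF implementation is not reproduced here; networkx's spring_layout, a standard force-directed method, stands in for it in this sketch, and the class names are invented.

```python
# Hedged sketch of force-directed placement of extracted classes.
# networkx's spring_layout is a standard force-directed algorithm,
# used here only as a stand-in for the paper's WPF implementation.
import networkx as nx

g = nx.Graph()
g.add_edges_from([("Customer", "Account"), ("Account", "ATM"),
                  ("ATM", "Transaction")])
pos = nx.spring_layout(g, seed=42)  # node -> (x, y) coordinates
for node, (x, y) in pos.items():
    print(f"{node}: ({x:.2f}, {y:.2f})")
```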

A Study on the Performance Analysis of Entity Name Recognition Techniques Using Korean Patent Literature

  • Gim, Jangwon
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.139-151 / 2020
  • Entity name recognition is a part of information extraction that extracts entity names from documents and classifies their types. Entity name recognition technologies are widely used in natural language processing tasks such as information retrieval, machine translation, and question answering. Various deep learning-based models exist to improve entity name recognition performance, but few studies have compared and analyzed these models on Korean data. In this paper, we compare and analyze the performance of CRF, LSTM-CRF, BiLSTM-CRF, and BERT, which are actively used to identify entity names, using Korean data. We also evaluate whether the embedding models widely used in recent natural language processing tasks affect the performance of entity name recognition models. Experiments on patent data and a Korean corpus confirmed that the BiLSTM-CRF model with FastText embeddings showed the highest performance.
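
One reason FastText may pair well with Korean NER is its subword (character n-gram) vectors, which produce embeddings even for unseen morphological variants. The gensim sketch below illustrates this on toy data; the corpus and dimensions are fabricated for illustration.

```python
# Sketch of FastText's out-of-vocabulary behavior, which is useful for
# Korean NER inputs: character n-gram vectors yield a representation
# even for forms absent from training. Toy corpus, assumed settings.
from gensim.models import FastText

sentences = [["특허", "문헌", "개체명", "인식"],
             ["특허권", "출원", "문헌", "분석"]]
model = FastText(sentences, vector_size=100, window=3,
                 min_count=1, epochs=10)

# An unseen compound still gets a vector from its character n-grams.
vec = model.wv["특허문헌"]
print(vec.shape)  # (100,) -- would feed the BiLSTM-CRF input layer
```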

Grammatical Structure Oriented Automated Approach for Surface Knowledge Extraction from Open Domain Unstructured Text

  • Tissera, Muditha; Weerasinghe, Ruvan
    • Journal of information and communication convergence engineering / v.20 no.2 / pp.113-124 / 2022
  • News in the form of web data generates increasingly large amounts of information as unstructured text. The capability of understanding the meaning of news is limited to humans, which causes information overload and hinders the effective use of the knowledge embedded in such texts. Automatic Knowledge Extraction (AKE) has therefore become an integral part of the Semantic Web and Natural Language Processing (NLP). Although recent literature shows that AKE has progressed, the results are still behind expectations. This study proposes a method to automatically extract surface knowledge from English news into a machine-interpretable semantic format (triples). The proposed technique was designed around the grammatical structure of the sentence, and 11 original rules were discovered. The initial experiment extracted triples from a Sri Lankan news corpus, of which 83.5% were meaningful. The experiment was extended to the British Broadcasting Corporation (BBC) news dataset to demonstrate its generic nature, yielding a higher meaningful-triple extraction rate of 92.6%. These results were validated using the inter-rater agreement method, which confirmed their high reliability.
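
The paper's 11 grammatical rules are not reproduced in the abstract. As a stand-in, the sketch below applies a single subject-verb-object pattern over a spaCy dependency parse to emit one surface triple; the sentence is invented.

```python
# Hedged sketch: one illustrative subject-verb-object rule standing in
# for the paper's 11 grammatical rules. Requires the small English
# model (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The central bank raised interest rates on Friday.")

for token in doc:
    if token.pos_ == "VERB":
        subj = next((c for c in token.children if c.dep_ == "nsubj"), None)
        obj = next((c for c in token.children if c.dep_ == "dobj"), None)
        if subj is not None and obj is not None:
            # Expand each argument to its full phrase (dependency subtree).
            s = " ".join(t.text for t in subj.subtree)
            o = " ".join(t.text for t in obj.subtree)
            print((s, token.lemma_, o))
# -> ('The central bank', 'raise', 'interest rates')
```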

Utilizing Various Natural Language Processing Techniques for Biomedical Interaction Extraction

  • Park, Kyung-Mi; Cho, Han-Cheol; Rim, Hae-Chang
    • Journal of Information Processing Systems / v.7 no.3 / pp.459-472 / 2011
  • The vast body of biomedical literature is an important source for discovering biomedical interaction information. However, obtaining interaction information from it is complicated because most of it is not easily machine-readable. In this paper, we present a method for extracting biomedical interaction information, assuming that the biomedical Named Entities (NEs) are already identified. The proposed method labels all possible pairs of given biomedical NEs as INTERACTION or NO-INTERACTION using a Maximum Entropy (ME) classifier. The features used by the classifier are obtained by applying various NLP techniques such as POS tagging, base phrase recognition, parsing, and predicate-argument recognition. In particular, specific verb predicates (e.g., activate, inhibit, diminish) and their biomedical NE arguments are very useful features for identifying interactive NE pairs. Based on this, we devised a two-step method: 1) an interaction verb extraction step to find biomedically salient verbs, and 2) an argument relation identification step to generate partial predicate-argument structures between extracted interaction verbs and their NE arguments. In the experiments, we analyzed how much each applied NLP technique improves performance. The proposed method improves performance by more than 2% over the baseline method, and the use of external contextual features, obtained from outside the NEs, is crucial for this improvement. We also compare the performance of the proposed method against co-occurrence-based and rule-based methods; the results demonstrate that the proposed method considerably improves performance.
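
A maximum-entropy classifier over sparse NLP-derived features is, in practice, multinomial logistic regression. The sketch below uses scikit-learn with a few fabricated toy features (verb between the NE pair, token distance, shared phrase) only to show the shape of such a pipeline.

```python
# Hedged sketch of the pair-classification setup: each candidate NE pair
# becomes a feature dict fed to a MaxEnt classifier (LogisticRegression
# is its standard equivalent). Features and labels are fabricated.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

train_features = [
    {"verb_between": "activate", "distance": 3, "same_phrase": True},
    {"verb_between": "inhibit", "distance": 5, "same_phrase": False},
    {"verb_between": "locate", "distance": 12, "same_phrase": False},
]
train_labels = ["INTERACTION", "INTERACTION", "NO-INTERACTION"]

vec = DictVectorizer()
X = vec.fit_transform(train_features)  # one-hot strings, numeric passthrough
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

test = vec.transform([{"verb_between": "diminish", "distance": 4,
                       "same_phrase": True}])
print(clf.predict(test))
```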

Improving Korean Part-of-Speech Tagging Using The Lexical Specific Classifier (어휘별 분류기를 이용한 한국어 품사 부착의 성능 향상)

  • Choi, Won-Jong; Lee, Do-Gil; Rim, Hae-Chang
    • Annual Conference on Human and Language Technology / 2006.10e / pp.133-139 / 2006
  • Various models have been proposed for Korean morphological analysis and part-of-speech tagging, and automatic taggers with eojeol-level accuracies above 95% have been reported. However, because morphological analysis and part-of-speech tagging strongly affect the performance of every natural language processing system, even small errors matter. This study proposes a lexical-specific classifier that post-processes the output of an automatic tagger: for each target eojeol, a classifier is trained on the lexical and part-of-speech features of the surrounding morphemes as well as eojeol features. Experiments show that post-processing with the lexical classifier alone reduces errors by 6.86% in eojeol-level evaluation (95.251% → 95.577%), and that sequentially combining it with the previously proposed post-processing method based on per-POS features reduces errors by 16.91% (95.251% → 96.054%). In particular, because the proposed method can also correct morpheme lexemes, it can further improve the performance of the per-POS post-processing method.
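
The abstract gives no implementation details, so the sketch below is only a conceptual rendering of the post-processing loop: a "lexical classifier" is any callable from a context-feature dict to a corrected tag, and the feature set and toy rule are assumptions.

```python
# Conceptual sketch of per-lexeme post-processing: eojeols that have a
# dedicated lexical classifier get their tag re-decided from neighboring
# word/POS features. A "classifier" here is any callable from the
# feature dict to a tag -- a stand-in for the paper's trained models.
def postprocess(tagged, lexical_classifiers):
    corrected = list(tagged)  # [(eojeol, tag), ...] from the base tagger
    for i, (eojeol, tag) in enumerate(tagged):
        clf = lexical_classifiers.get(eojeol)
        if clf is None:
            continue  # no dedicated classifier for this lexeme
        features = {
            "prev": tagged[i - 1] if i > 0 else ("<s>", "<s>"),
            "next": tagged[i + 1] if i + 1 < len(tagged) else ("</s>", "</s>"),
            "base_tag": tag,
        }
        corrected[i] = (eojeol, clf(features))
    return corrected

# Toy usage: an assumed rule fixing one known confusion for one lexeme.
fix = {"감기": lambda f: "NNG" if f["next"][1] == "JKS" else f["base_tag"]}
print(postprocess([("감기", "VV"), ("가", "JKS")], fix))
```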

Using Syntax and Shallow Semantic Analysis for Vietnamese Question Generation

  • Phuoc Tran; Duy Khanh Nguyen; Tram Tran; Bay Vo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.10 / pp.2718-2731 / 2023
  • This paper presents a method that uses syntax and shallow semantic analysis for Vietnamese question generation (QG). Specifically, our proposed technique investigates both the syntactic and the shallow semantic structure of each sentence. The main goal of our method is to generate questions from a single sentence; these generated questions are factoid questions, which require short, fact-based answers. Syntax-based analysis is one of the most popular approaches in the QG field, but it requires linguistic expert knowledge as well as a deep understanding of the syntax rules of the Vietnamese language, and is thus considered a high-cost and inefficient solution because of the significant human effort required to obtain qualified syntax rules. To deal with this problem, we collected the syntax rules of Vietnamese from a Vietnamese language textbook. Moreover, we used different natural language processing (NLP) techniques to analyze Vietnamese shallow syntax and semantics for the QG task, including sentence segmentation, word segmentation, part-of-speech tagging, chunking, dependency parsing, and named entity recognition. We used human evaluation to assess the credibility of our model: we manually generated questions from the corpus and compared them with the automatically generated questions. The empirical evidence demonstrates that our proposed technique performs well, with generated questions very similar to those created by humans.
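
The paper's Vietnamese rules and pipeline are not reproduced here. The toy sketch below shows the rule-application idea on English with spaCy, turning a declarative sentence into a who-question by replacing the subject subtree; the rule and example sentence are illustrative assumptions.

```python
# Toy illustration of syntax-rule question generation: one assumed rule
# ("SUBJ VERB OBJ" -> who-question about the subject), not the paper's
# Vietnamese rule set. Requires en_core_web_sm.
import spacy

nlp = spacy.load("en_core_web_sm")

def who_question(sentence):
    doc = nlp(sentence)
    root = next(t for t in doc if t.dep_ == "ROOT")
    subj = next((c for c in root.children if c.dep_ == "nsubj"), None)
    if subj is None:
        return None  # rule does not apply
    subject_tokens = set(subj.subtree)
    rest = [t.text for t in doc if t not in subject_tokens and not t.is_punct]
    return "Who " + " ".join(rest) + "?"

print(who_question("Marie Curie discovered radium."))
# -> "Who discovered radium?"
```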

Korean Morphological Analysis Method Based on BERT-Fused Transformer Model (BERT-Fused Transformer 모델에 기반한 한국어 형태소 분석 기법)

  • Lee, Changjae; Ra, Dongyul
    • KIPS Transactions on Software and Data Engineering / v.11 no.4 / pp.169-178 / 2022
  • Morphemes are the most primitive units in a language; they lose their original meaning when segmented into smaller parts. In Korean, a sentence is a sequence of eojeols (words) separated by spaces, and each eojeol comprises one or more morphemes. Korean morphological analysis (KMA) is the task of dividing the eojeols in a given Korean sentence into morpheme units and assigning appropriate part-of-speech (POS) tags to the resulting morphemes. KMA is one of the most important tasks in Korean natural language processing (NLP), and improving its performance is closely tied to improving the performance of Korean NLP tasks in general. Recent research on KMA has begun to adopt the approach of machine translation (MT) models. MT converts a sequence (sentence) of units in one domain into a sequence of units in another domain, and neural machine translation (NMT) refers to MT approaches that exploit neural network models. From the MT perspective, KMA transforms an input sequence of units in the eojeol domain into a sequence of units in the morpheme domain. In this paper, we propose a deep learning model for KMA whose backbone is the BERT-fused model, which was shown to achieve high performance on NMT. The BERT-fused model combines Transformer, the representative NMT model, with BERT, a language representation model that has enabled significant advances in NLP. The experimental results show that our model achieves an F1 score of 98.24.
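
The MT framing is easiest to see with a concrete pair of domains. In the sketch below the source is an eojeol sequence and the target a morpheme/POS sequence; the analysis shown is a plausible hand-made example, not the model's actual output.

```python
# Illustration of the paper's MT framing of KMA: an eojeol-domain source
# sequence maps to a morpheme/POS-domain target sequence. The analysis
# below is a hand-made example for illustration, not model output.
src = "나는 학교에 갔다"                               # eojeols (source domain)
tgt = "나/NP 는/JX 학교/NNG 에/JKB 가/VV 았/EP 다/EF"  # morphemes+POS (target)

# A seq2seq model learns src -> tgt. In the BERT-fused variant, BERT's
# representations of the source are fused, via attention, into the
# encoder and decoder layers of the Transformer.
for unit in tgt.split():
    morpheme, pos = unit.split("/")
    print(f"{morpheme}\t{pos}")
```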