• Title/Summary/Keyword: Triples Extraction

Grammatical Structure Oriented Automated Approach for Surface Knowledge Extraction from Open Domain Unstructured Text

  • Tissera, Muditha; Weerasinghe, Ruvan
    • Journal of Information and Communication Convergence Engineering / v.20 no.2 / pp.113-124 / 2022
  • News in the form of web data generates increasingly large amounts of information as unstructured text. The ability to understand the meaning of news is limited to humans, which causes information overload and hinders the effective use of the knowledge embedded in such texts. Automatic Knowledge Extraction (AKE) has therefore become an integral part of the Semantic Web and Natural Language Processing (NLP). Although recent literature shows that AKE has progressed, the results still fall short of expectations. This study proposes a method to automatically extract surface knowledge from English news into a machine-interpretable semantic format (triples). The proposed technique was designed around the grammatical structure of the sentence, and 11 original rules were discovered. An initial experiment extracted triples from a Sri Lankan news corpus, of which 83.5% were meaningful. The experiment was extended to the British Broadcasting Corporation (BBC) news dataset to demonstrate its generic nature, which yielded a higher meaningful-triple extraction rate of 92.6%. These results were validated using the inter-rater agreement method, which confirmed their high reliability.
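The abstract does not reproduce the 11 grammar rules, but the general approach, mining subject-verb-object patterns from a sentence's grammatical structure, can be illustrated with a minimal sketch. The use of spaCy and the single dependency rule below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of grammar-driven surface-triple extraction.
# spaCy's dependency parser stands in for the paper's 11 hand-crafted rules.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_svo_triples(text):
    """Return (subject, predicate, object) triples, one per verb with both roles."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr", "dative")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_svo_triples("The president visited the northern province on Monday."))
# [('president', 'visit', 'province')]
```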

A Comparative Research on End-to-End Clinical Entity and Relation Extraction using Deep Neural Networks: Pipeline vs. Joint Models

  • Sung-Pil Choi
    • Journal of the Korean Society for Library and Information Science / v.57 no.1 / pp.93-114 / 2023
  • Information extraction can facilitate the intensive analysis of documents by providing semantic triples that consist of named entities and the relations recognized between them in the text. However, most research so far has treated named entity recognition and relation extraction as separate studies, and as a result, complete information extraction systems have not been evaluated properly end to end. This paper introduces two models of end-to-end information extraction that can extract various entity names in clinical records and their relationships in the form of semantic triples, namely a pipeline model and a joint model, and compares their performance in depth. The pipeline model consists of an entity recognition sub-system based on a bidirectional GRU-CRF and a relation extraction module using a multiple-encoding scheme, whereas the joint model was implemented as a single bidirectional GRU-CRF equipped with a multi-head labeling method. In experiments on the i2b2/VA 2010 dataset, the performance of the pipeline model was 5.5% higher (F-measure). In addition, a comparative experiment against existing state-of-the-art systems that use large-scale neural language models and manually constructed features established the objective performance level of the end-to-end models implemented in this paper.
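As a rough sketch of the shared building block of both models, the snippet below implements a bidirectional GRU tagger in PyTorch that produces per-token emission scores. The CRF layer, the multiple-encoding relation module, and the multi-head labeling scheme described in the paper are omitted, and all dimensions are illustrative assumptions.

```python
# Bidirectional GRU sequence tagger: the entity-recognition backbone of a
# pipeline like the one described above (CRF decoding layer omitted).
import torch
import torch.nn as nn

class BiGRUTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)  # 2x: both directions

    def forward(self, token_ids):              # (batch, seq_len)
        hidden, _ = self.gru(self.embed(token_ids))
        return self.proj(hidden)               # (batch, seq_len, num_tags)

# Emission scores per token; a CRF would decode these into a tag sequence.
model = BiGRUTagger(vocab_size=10_000, num_tags=7)
scores = model(torch.randint(0, 10_000, (2, 16)))
print(scores.shape)  # torch.Size([2, 16, 7])
```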

A Usability Evaluation on the Visualization of Information Extraction Output

  • Lee Jee-Yeon
    • Journal of the Korean Society for Library and Information Science / v.39 no.2 / pp.287-304 / 2005
  • The goal of this research is to evaluate the usability of visually browsing automatically extracted information. A domain-independent information extraction system was used to extract information from news-type texts to populate a visually browsable knowledge base. The information extraction system automatically generated Concept-Relation-Concept triples by applying various Natural Language Processing techniques to the text portion of the news articles. To visualize the information stored in the knowledge base, we used PersonalBrain to develop the visualization portion of the user interface. PersonalBrain is a hyperbolic information visualization system, which enables users to link information into a network of logical associations. To understand the usability of the visually browsable knowledge base, 15 test subjects were observed while they used the visual interface and were interviewed afterward. By applying a qualitative test-data analysis method, a number of usability problems and further research directions were identified.
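The Concept-Relation-Concept triples described here map naturally onto a labeled directed graph, which is what a hyperbolic browser such as PersonalBrain renders. The sketch below uses networkx as a stand-in for that interface; the triples are invented examples.

```python
# Load Concept-Relation-Concept triples into a graph for associative browsing.
import networkx as nx

triples = [
    ("Company A", "acquired", "Company B"),       # invented example triples
    ("Company B", "is located in", "Seoul"),
]

g = nx.DiGraph()
for subj, rel, obj in triples:
    g.add_edge(subj, obj, relation=rel)           # relation kept as edge label

# Browsing = following logical associations outward from a focused concept.
for _, neighbor, data in g.edges("Company A", data=True):
    print(f"Company A --{data['relation']}--> {neighbor}")
```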

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base

  • Kim, JaeHun; Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of technologies in artificial intelligence has accelerated rapidly with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems, such as learning and problem solving, related to human intelligence. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from the complex and informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires much expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy, because the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema, learned from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training dataset from a Wikipedia dump by adding BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
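The BIO-tagging step used to build the training set can be sketched concisely: every token span that matches an infobox attribute value receives B-/I- tags for that attribute, and everything else is O. The tokenizer and example below are simplifying assumptions, not the authors' exact procedure.

```python
# Tag sentence tokens with BIO labels wherever an infobox value occurs.
def bio_tag(tokens, value_tokens, label):
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = f"B-{label}"                        # span start
            tags[i + 1:i + n] = [f"I-{label}"] * (n - 1)  # span continuation
    return tags

sentence = "Barack Obama was born in Honolulu".split()
print(bio_tag(sentence, ["Honolulu"], "birthPlace"))
# ['O', 'O', 'O', 'O', 'O', 'B-birthPlace']
```

Token/tag pairs in this form can then be fed to a CRF or Bi-LSTM-CRF sequence labeler, the two models compared in the paper.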

Worker Symptom-based Chemical Substance Estimation System Design Using Knowledge Base

  • Ju, Yongtaek; Lee, Donghoon; Shin, Eunji; Yoo, Sangwoo; Shin, Dongil
    • Journal of the Korean Institute of Gas / v.25 no.3 / pp.9-15 / 2021
  • This paper presents the construction of a knowledge base based on natural language processing and the design of a chemical substance estimation system, developed as a knowledge service for a real-time sensor-information fusion detection system and for the symptoms of contact with chemical substances at industrial sites. Information on the contact symptoms of 499 chemical substances from the Wireless Information System for Emergency Responders (WISER) program provided by the National Institutes of Health (NIH) in the United States was used as a reference. AllegroGraph 7.0.1 was used, and the input triples cover CAS No., synonyms, symptoms, SMILES, InChI, and formula. After building the knowledge base, it was confirmed that the 39 symptoms recorded for ammonia (CAS No. 7664-41-7) were the same as those in the WISER program. On this basis, a method of building a knowledge base for the symptom extraction process of a chemical substance estimation system was proposed.
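The triple layout described above can be sketched with rdflib standing in for AllegroGraph (both expose the same RDF model). The namespace and property names are illustrative assumptions, not the authors' schema; the ammonia CAS number comes from the abstract.

```python
# Chemical-substance knowledge base keyed by CAS No., as RDF triples.
from rdflib import Graph, Literal, Namespace

CHEM = Namespace("http://example.org/chem/")   # assumed namespace

g = Graph()
ammonia = CHEM["7664-41-7"]
g.add((ammonia, CHEM.casNo,   Literal("7664-41-7")))
g.add((ammonia, CHEM.synonym, Literal("ammonia")))
g.add((ammonia, CHEM.formula, Literal("NH3")))
g.add((ammonia, CHEM.symptom, Literal("eye irritation")))  # one of 39 in WISER

# Estimation reduces to asking which substances share the reported symptoms.
for substance in g.subjects(CHEM.symptom, Literal("eye irritation")):
    print(substance)
```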

Integration of Extended IFC-BIM and Ontology for Information Management of Bridge Inspection

  • Erdene, Khuvilai; Kwon, Tae Ho; Lee, Sang-Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.33 no.6 / pp.411-417 / 2020
  • To utilize building information modeling (BIM) technology at the bridge maintenance stage, it is necessary to integrate large quantities of bridge inspection data and model data for object-oriented information management. This research aims to establish the benefits of utilizing the extended Industry Foundation Classes (IFC)-based BIM and an ontology for bridge inspection information management. The IFC entities were extended to represent bridge objects, and a method of generating the extended IFC-based information model was proposed. A bridge inspection ontology was also developed by extracting and classifying inspection concepts from the AASHTO standard. The classified concepts and their relationships were mapped to the ontology based on the semantic-triples approach. Finally, the extended IFC-based BIM model was integrated with the ontology for bridge inspection data management. The effectiveness of the proposed framework was tested and verified by extracting bridge inspection data via SPARQL queries.
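The final verification step, pulling inspection records out of the integrated model with SPARQL, might look like the sketch below. The ontology IRI, property names, and file name are illustrative assumptions, not the paper's actual schema.

```python
# Query bridge inspection records from an RDF export of the integrated model.
from rdflib import Graph

g = Graph()
g.parse("bridge_inspection.ttl")   # assumed Turtle export of the ontology data

query = """
PREFIX insp: <http://example.org/bridge-inspection#>
SELECT ?element ?defect ?severity
WHERE {
    ?record insp:inspectedElement ?element ;
            insp:observedDefect   ?defect ;
            insp:severity         ?severity .
}
"""
for element, defect, severity in g.query(query):
    print(element, defect, severity)
```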