• Title/Summary/Keyword: Automatic Information Extraction

Automatic Extraction of Land Cover information By Using KOMPSAT-2 Imagery (KOMPSAT-2 영상을 이용한 토지피복정보 자동 추출)

  • Lee, Hyun-Jik;Ru, Ji-Ho;Yu, Young-Geol
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2010.04a / pp.277-280 / 2010
  • Thematic mapping needs to move from old low- and medium-resolution satellite imagery to high-resolution imagery of GSD 1 m or better, and medium- and large-scale thematic maps at scales of 1:5,000 and larger need to be generated. In this study, a DEM and an orthoimage were generated from a KOMPSAT-2 stereo image of Yuseong-gu, Daejeon Metropolitan City. Using the orthoimage, automatic extraction experiments of land cover information were conducted for buildings, roads and urban areas, raw land (agricultural land), mountains and forests, hydrosphere, grassland, and shadow. The results show that natural features such as the hydrosphere, mountains and forests, grassland, shadow, and raw land can be classified in detail. Artificial features such as roads, buildings, and urban areas can be readily classified by automatic extraction, but detailed classification along their boundaries remains difficult. Further research should address automation methods that use conventional thematic maps and other geo-spatial information and mapping techniques to classify thematic information in detail.


Automatic Building Extraction from Airborne Laser Scanning Data using TIN

  • Jeong Jae-Wook;Chang Hwi-Jeong;Cho Woosug;Kim Kyoung-ok
    • Proceedings of the KSRS Conference / 2004.10a / pp.132-135 / 2004
  • Building information plays a key role in diverse applications such as urban planning, telecommunication, and environmental monitoring. Automatic building extraction has been a prime interest in the fields of GIS and photogrammetry. In this paper, we present an automatic approach to building extraction from lidar data. The proposed approach is divided into four processes: pre-processing, filtering, segmentation, and building extraction. Experimental results showed that the proposed method detected most buildings with few commission and omission errors.

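
The four-stage pipeline named in the abstract above (pre-processing, filtering, segmentation, building extraction) can be sketched in miniature. This toy version is only an illustration of the general idea, not the authors' TIN-based method: a hypothetical height threshold stands in for ground filtering, and a grid-based flood fill stands in for segmentation.

```python
# Toy lidar building-extraction pipeline: filter ground points by height,
# group the remaining points into segments of adjacent grid cells, then
# keep segments large enough to plausibly be buildings.
# All thresholds are hypothetical.

def filter_ground(points, ground_height=2.0):
    """Split (x, y, z) points into ground and non-ground by a height threshold."""
    ground = [p for p in points if p[2] <= ground_height]
    nonground = [p for p in points if p[2] > ground_height]
    return ground, nonground

def segment(points, cell=1.0, min_pts=2):
    """Group non-ground points via flood fill over 4-connected occupied grid cells."""
    cells = {}
    for x, y, z in points:
        cells.setdefault((int(x // cell), int(y // cell)), []).append((x, y, z))
    seen, segments = set(), []
    for start in cells:
        if start in seen:
            continue
        stack, seg = [start], []
        seen.add(start)
        while stack:
            cx, cy = stack.pop()
            seg.extend(cells[(cx, cy)])
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in cells and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if len(seg) >= min_pts:
            segments.append(seg)
    return segments

def extract_buildings(segments, min_points=4):
    """Keep segments with enough points to be buildings (small clutter is dropped)."""
    return [s for s in segments if len(s) >= min_points]
```

A real implementation would operate on a TIN rather than a grid and would separate vegetation from buildings by surface roughness, but the staged structure is the same.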

Grammatical Structure Oriented Automated Approach for Surface Knowledge Extraction from Open Domain Unstructured Text

  • Tissera, Muditha;Weerasinghe, Ruvan
    • Journal of information and communication convergence engineering / v.20 no.2 / pp.113-124 / 2022
  • News in the form of web data generates increasingly large amounts of information as unstructured text. The capability of understanding the meaning of news is limited to humans; this causes information overload and hinders the effective use of the knowledge embedded in such texts. Automatic Knowledge Extraction (AKE) has therefore become an integral part of the Semantic Web and Natural Language Processing (NLP). Although recent literature shows that AKE has progressed, the results still fall short of expectations. This study proposes a method to automatically extract surface knowledge from English news into a machine-interpretable semantic format (triples). The proposed technique was designed around the grammatical structure of the sentence, and 11 original rules were discovered. The initial experiment extracted triples from a Sri Lankan news corpus, of which 83.5% were meaningful. The experiment was extended to the British Broadcasting Corporation (BBC) news dataset to prove its generic nature, demonstrating a higher meaningful-triple extraction rate of 92.6%. These results were validated using the inter-rater agreement method, which confirmed their high reliability.
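
The general idea of grammar-driven triple extraction can be illustrated with a single toy rule. The paper's 11 rules rest on real grammatical analysis; this sketch only shows the target output shape, and the verb lexicon is invented:

```python
# Minimal illustration of surface-triple extraction: split a simple
# "subject verb object" sentence at a known verb and emit a
# (subject, predicate, object) triple.  The verb list is a toy stand-in
# for genuine grammatical structure analysis.

VERBS = {"acquired", "announced", "opened", "visited"}

def extract_triple(sentence):
    """Return a (subject, predicate, object) triple, or None if no rule fires."""
    words = sentence.rstrip(".").split()
    for i, w in enumerate(words):
        if w.lower() in VERBS and 0 < i < len(words) - 1:
            return (" ".join(words[:i]), w, " ".join(words[i + 1:]))
    return None
```

For example, `extract_triple("Google acquired YouTube.")` yields the machine-interpretable triple `("Google", "acquired", "YouTube")`; sentences matching no rule yield `None`.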

Implementation of an Automatic Observation System for Cloud Observations

  • Kwon, Jung Jang
    • Journal of the Korea Society of Computer and Information / v.21 no.2 / pp.79-88 / 2016
  • In this paper, we propose an efficient automatic observation system for cloud amount and cloud height. The system consists of cloud observation hardware, operational programs, a cloud-amount extraction program, a cloud-height extraction program, and expert support programs. Experiments were conducted at Daegwallyeong and Busan, and the experimental observations confirmed the usefulness of the proposed system.

An Ontology-based Knowledge Management System - Integrated System of Web Information Extraction and Structuring Knowledge -

  • Mima, Hideki;Matsushima, Katsumori
    • Proceedings of the CALSEC Conference / 2005.03a / pp.55-61 / 2005
  • We introduce a new web-based knowledge management system in progress, in which XML-based web information extraction and our structuring-knowledge technologies are combined using ontology-based natural language processing. Our aim is to provide efficient access to heterogeneous information on the web, enabling users to draw effortlessly on a wide range of textual and non-textual resources, such as newspapers and databases, to accelerate knowledge acquisition from such sources. To achieve efficient knowledge management, we first propose an XML-based web information extraction component that provides a sophisticated control language for extracting data from web pages. By using standard XML technologies, our approach makes information extraction easy because it a) detaches rules from processing, b) restricts the target for processing, and c) supports interactive operations for developing extraction rules. We then propose a structuring-knowledge component that includes 1) automatic term recognition, 2) domain-oriented automatic term clustering, 3) similarity-based document retrieval, 4) real-time document clustering, and 5) visualization. The system supports integrating different types of databases (textual and non-textual) and retrieving different types of information simultaneously. Through further explanation of the specification and implementation of the system, we demonstrate how it can accelerate knowledge acquisition on the web, even for novices in the field.

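
The "rules detached from processing" idea can be sketched as follows: extraction rules live in plain data, and a single generic engine applies them to any page. The XML snippet, element paths, and rule names below are invented for illustration and are not the system's actual control language:

```python
# Extraction rules as data: each rule maps a field name to an element
# path, and one generic function applies whatever rule set it is given.
import xml.etree.ElementTree as ET

# Hypothetical rule set for a hypothetical news-article format.
RULES = {"title": ".//headline", "date": ".//pubdate"}

def apply_rules(xml_text, rules):
    """Apply a rule set to an XML document; missing elements yield None."""
    root = ET.fromstring(xml_text)
    result = {}
    for name, path in rules.items():
        node = root.find(path)
        result[name] = node.text if node is not None else None
    return result
```

Because the engine never hard-codes a page layout, adapting to a new source means editing only the rule data, which is the maintainability benefit the abstract claims for detaching rules from processing.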

Automatic Extraction of the Interest Organization from Full-Color Continuous Images for a Biological Sample

  • Takemoto, Satoko;Yokota, Hideo;Shimai, Hiroyuki;Makinouchi, Akitake;Mishima, Taketoshi
    • Proceedings of the IEEK Conference / 2002.07a / pp.196-199 / 2002
  • We present an automatic extraction technique for a biological internal organ from full-color serial images. It was implemented using the localized homogeneity of color intensity and the continuity between neighboring images. Moreover, we set a "four-level status value" for the area condition as a value expressing "area possibility", which played an important role in preventing misjudgment of the area definition. This approach proved beneficial for tracking color and shape changes of the area of interest during continuous extraction. As a result, we succeeded in extracting a mouse's stomach from 50 serial images.

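
Extraction by localized color homogeneity, as described above, is commonly realized as region growing: start from a seed pixel and admit neighbors whose intensity stays close to the seed's. This generic sketch uses a toy grayscale grid and a hypothetical tolerance; it does not reproduce the paper's four-level status values or inter-slice continuity:

```python
# Region growing by local homogeneity: flood fill from a seed, accepting
# 4-connected neighbours whose intensity is within `tol` of the seed's.

def grow_region(image, seed, tol=10):
    """Return the set of (row, col) pixels in the grown region."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, stack = {seed}, [seed]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                stack.append((nr, nc))
    return region
```

Continuity between neighboring slices can then be exploited by seeding each slice with the region found in the previous one, which is how serial-image extraction tracks gradual color and shape change.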

Automatic Generation of Information Extraction Rules Through User-interface Agents (사용자 인터페이스 에이전트를 통한 정보추출 규칙의 자동 생성)

  • 김용기;양재영;최중민
    • Journal of KIISE:Software and Applications / v.31 no.4 / pp.447-456 / 2004
  • Information extraction is a process of recognizing and fetching particular information fragments from a document. In order to extract information uniformly from many heterogeneous information sources, it is necessary to produce information extraction rules, called a wrapper, for each source. Previous methods of information extraction can be categorized into manual and automatic wrapper generation. In the manual method, since the wrapper is generated by a human expert who analyzes documents and writes rules, its precision is very high, but the method has problems with scalability and efficiency. In the automatic method, an agent program analyzes a set of example documents and produces a wrapper through learning. Although very scalable, this method has difficulty generating correct rules per se, and the generated rules are sometimes unreliable. This paper combines the manual and automatic methods by proposing a new method of learning information extraction rules. We adopt a supervised learning scheme in which a user-interface agent is designed to obtain information from the user about what to extract from a document, and XML-based information extraction rules are then generated through learning according to these inputs. The interface agent is used not only to generate new extraction rules but also to modify and extend existing ones, enhancing the precision and recall of the extraction system. We have run a series of experiments to test the system, and the results are very promising. We hope that our system can be applied to practical systems such as information-mediator agents.

Extraction of Informative Features for Automatic Indexation of Human Sensibility Ergonomic Documents (감성공학 문서 데이터의 지표 자동화를 위한 코퍼스 분석 기반 특성정보 추출)

  • 배희숙;곽현민;채균식;이상태
    • Science of Emotion and Sensibility / v.7 no.2 / pp.133-140 / 2004
  • A large number of indices are produced from human sensibility ergonomic data accumulated by the project "Study on the Development of a Web-Based Database System of Human Sensibility and its Support". Since research in this field will increase rapidly, it is necessary to automate the index processing of human sensibility ergonomic data. Based on the similarity between indexation and summarization, we propose automating this process. In this paper, we study the extraction of keywords, information types, and expression features, which are basic elements of the following techniques for automatic summarization: document classification, extraction of information types, and extraction of linguistic features. This study can be applied to automatic summarization systems and knowledge management systems in the domain of human sensibility ergonomics.

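
The keyword-extraction building block mentioned above can be illustrated with the simplest corpus-based approach: rank a document's terms by frequency after removing function words. The stopword list here is a tiny invented stand-in for a real corpus-derived one:

```python
# Frequency-based keyword extraction: strip punctuation and stopwords,
# count the remaining terms, and return the top k as index keywords.
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "to", "in", "is", "for"}

def keywords(text, k=3):
    """Return the k most frequent non-stopword terms of a document."""
    words = [w.strip(".,").lower() for w in text.split()]
    counts = Counter(w for w in words if w and w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]
```

Real indexation pipelines weight terms against corpus statistics (e.g. document frequency) rather than raw counts, but the extract-filter-rank shape is the same.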

Automatic Road Extraction by Gradient Direction Profile Algorithm (GDPA) using High-Resolution Satellite Imagery: Experiment Study

  • Lee, Ki-Won;Yu, Young-Chul;Lee, Bong-Gyu
    • Korean Journal of Remote Sensing / v.19 no.5 / pp.393-402 / 2003
  • With the civil use of commercial high-resolution satellite imagery, remote sensing applications have extended to new fields and to problem solving beyond the traditional application domains. Transportation applications of this sensor data, related to automatic or semi-automatic road extraction, are regarded as among the important issues in the use of remote sensing imagery. In line with these trends, this study focuses on automatic road extraction using the Gradient Direction Profile Algorithm (GDPA) with IKONOS panchromatic imagery at 1-meter resolution. The GDPA scheme and its main modules were reviewed, along with their processing steps, and implemented as prototype software. Using the extracted bi-level image and ground truth from an actual GIS layer, overall accuracy evaluation and ranking error-assessment were performed. The results show that road information can be extracted automatically; however, some user-defined variables must be carefully determined when using high-resolution satellite imagery in dense or low-contrast areas. The GDPA method also needs additional post-processing, because its direct results do not achieve high overall accuracy or ranking values. The main advantages of the GDPA scheme for road feature extraction are its performance and further applicability. This experimental study can be extended to practical application fields of remote sensing.
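
The gradient-direction quantity that GDPA profiles can be illustrated with a generic building block: per-pixel gradient magnitude and direction from central differences. This is not the full profile-matching algorithm, only the kind of input it works from:

```python
# Per-pixel gradient magnitude and direction (degrees) for a toy
# grayscale image, using central differences; border pixels are left 0.
import math

def gradient_direction(image):
    """Return (magnitude, angle) grids for the interior of the image."""
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = (image[r][c + 1] - image[r][c - 1]) / 2.0
            gy = (image[r + 1][c] - image[r - 1][c]) / 2.0
            mag[r][c] = math.hypot(gx, gy)
            ang[r][c] = math.degrees(math.atan2(gy, gx))
    return mag, ang
```

Roads in panchromatic imagery show up as elongated runs where gradient directions on the two edges point toward each other across a roughly constant width, which is the cue profile-based algorithms exploit.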

Defect Cell Extraction for TFT-LCD Auto-Repair System (TFT-LCD 자동 수선시스템에서 결함이 있는 셀을 자동으로 추출하는 방법)

  • Cho, Jae-Soo;Ha, Gwang-Sung;Lee, Jin-Wook;Kim, Dong-Hyun;Jeon, Edward
    • Journal of Institute of Control, Robotics and Systems / v.14 no.5 / pp.432-437 / 2008
  • This paper proposes a defect cell extraction algorithm for a TFT-LCD auto-repair system. An automatic defect search algorithm and automatic defect cell extraction are both essential to such a system. In previous work [1], we proposed an automatic visual inspection algorithm for TFT-LCD. Based on the information produced by the automatic search algorithm (defect size and defect axis, if a defect exists), defect cells must be extracted from the input image for the auto-repair system. For automatic extraction of defect cells, we use a novel block matching algorithm and a simple filtering process to find a given reference point in the LCD cell. The proposed defect cell extraction algorithm can be used with all kinds of TFT-LCD devices by changing the stored template that contains the reference point. Various experimental results show the effectiveness of the proposed method.
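
Locating a stored reference template in an input image by block matching can be sketched with the standard sum-of-absolute-differences (SAD) criterion. This is a generic illustration on toy grayscale grids, not the paper's novel matching algorithm:

```python
# Exhaustive block matching: slide the template over every placement in
# the image and return the position with the smallest SAD score.

def find_template(image, template):
    """Return the (row, col) of the best-matching template placement."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

Once the reference point is localized this way, the cell boundaries can be read off at fixed offsets from it, which is why swapping the stored template is enough to retarget the extractor to a different TFT-LCD device.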