• Title/Summary/Keyword: Automatic Information Extraction


Automatic Recognition Algorithm for Linearly Modulated Signals Under Non-coherent Asynchronous Condition (넌코히어런트 비동기하에서의 선형 변조신호 자동인식 알고리즘)

  • Sim, Kyuhong;Yoon, Wonsik
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.10 / pp.2409-2416 / 2014
  • In this paper, an automatic recognition algorithm for linearly modulated signals such as PSK and QAM under non-coherent asynchronous conditions is proposed. The frequency, phase, and amplitude characteristics of digitally modulated signals change periodically, and features based on cyclic moments and higher-order cumulants exploit these characteristics for modulation recognition. A hierarchical decision tree is used for high-speed signal processing, and a total of four feature extraction parameters are used for recognition. With 4,096 symbols, the recognition accuracy of the proposed algorithm exceeds 95% at an SNR of 15 dB. The proposed algorithm is also effective at classifying signals with carrier frequency and phase offsets.
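
The abstract does not spell out the four features; as a hedged illustration, the sketch below computes one cumulant feature that is standard in this literature, the normalized fourth-order cumulant C42, whose theoretical value separates BPSK (-2) from QPSK (-1). The modulations and symbol count are illustrative, not the paper's exact feature set.

```python
import numpy as np

def c42_feature(x):
    """Normalized fourth-order cumulant C42 of a complex baseband signal.

    C42 = E|x|^4 - |E[x^2]|^2 - 2*(E|x|^2)^2, normalized by the squared
    signal power so the feature is amplitude-invariant.
    """
    x = x - np.mean(x)                 # remove DC
    p = np.mean(np.abs(x) ** 2)        # signal power
    m20 = np.mean(x ** 2)              # second moment M20
    m42 = np.mean(np.abs(x) ** 4)      # fourth moment M42
    return (m42 - np.abs(m20) ** 2 - 2 * p ** 2) / p ** 2

# C42 differs by modulation, so a decision-tree node can threshold it.
rng = np.random.default_rng(0)
n = 4096
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
bpsk = 2.0 * rng.integers(0, 2, n) - 1.0
print(c42_feature(qpsk))   # ~ -1.0 for QPSK
print(c42_feature(bpsk))   # ~ -2.0 for BPSK
```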

Frontal Face Region Extraction & Features Extraction for Ocular Inspection (망진을 위한 정면 얼굴 영역 및 특징 요소 추출)

  • Cho Dong-Uk;Kim Sun-Young
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.585-592 / 2005
  • One of the most important principles in research on disease is to place more importance on prevention and the preservation of health than on treatment, and on foods rather than medicines. In this context, the most significant concern in examining a patient is to determine the presence of disease and, if present, to diagnose its type, after which pharmacotherapy follows. In this paper, various diagnostic methods of Oriental medicine are discussed, and ocular inspection, the most important of the four diagnostic methods of Oriental medicine, is studied. Observing a person's shape and color has been the major method of ocular inspection, and to this day it has usually depended on the doctor's intuition. We are developing an automatic system that provides objective basic data for ocular inspection. As a first stage, we applied signal processing techniques to the automatic extraction of facial features for ocular inspection. Frontal face regions are extracted first, followed by the extraction of their features. An experiment on 20 persons showed that frontal face regions, as well as features such as eyes, eyebrows, noses, and mouths, were extracted perfectly. Future work will address morphological operations for a few unfinished extraction results, such as combined hair and eyebrows.
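
The paper's own pipeline is not reproduced here; as a rough sketch of the first stage (frontal face region extraction), the snippet below applies a conventional YCrCb skin-color range with OpenCV and keeps the largest skin blob. The threshold values are a common heuristic, not the authors' method.

```python
import cv2
import numpy as np

def extract_face_region(bgr):
    """Rough frontal face candidate via skin color in YCrCb.

    The Cr/Cb range below is a widely used skin heuristic; the largest
    connected skin-colored blob is taken as the face region.
    """
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return bgr[y:y + h, x:x + w]
```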

Development of Automatic Rule Extraction Method in Data Mining : An Approach based on Hierarchical Clustering Algorithm and Rough Set Theory (데이터마이닝의 자동 데이터 규칙 추출 방법론 개발 : 계층적 클러스터링 알고리듬과 러프 셋 이론을 중심으로)

  • Oh, Seung-Joon;Park, Chan-Woong
    • Journal of the Korea Society of Computer and Information / v.14 no.6 / pp.135-142 / 2009
  • Data mining is an emerging area of computational intelligence that offers new theories, techniques, and tools for the analysis of large data sets. The major techniques used in data mining are association rule mining, classification, and clustering. Since these techniques are typically used individually, a methodology is needed for rule extraction that integrates them. Rule extraction techniques assist humans in analyzing large data sets and in turning the meaningful information they contain into successful decision making. This paper proposes an autonomous method of rule extraction using clustering and rough set theory. Experiments are carried out on data sets from the UCI KDD archive, and decision rules produced by the proposed method are presented. These rules can be used successfully for decision making.
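
As a hedged sketch of how the two stages can be combined, the toy example below clusters records hierarchically with SciPy and then searches, rough-set style, for a minimal attribute subset (a reduct) whose indiscernibility classes are consistent with the cluster labels. The data table and the brute-force reduct search are illustrative only, not the paper's algorithm.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

# Toy data: rows are records, columns are discretized condition attributes.
X = np.array([[1, 0, 2], [1, 0, 2], [0, 1, 0],
              [0, 1, 1], [2, 2, 0], [2, 2, 1]])

# Stage 1: hierarchical clustering supplies the decision attribute.
labels = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

# Stage 2: brute-force reduct search in the rough-set sense -- find the
# smallest attribute subsets whose indiscernibility classes never mix
# records from different clusters.
def consistent(attrs):
    seen = {}
    for row, lab in zip(X[:, attrs], labels):
        if seen.setdefault(tuple(row), lab) != lab:
            return False
    return True

for k in range(1, X.shape[1] + 1):
    reducts = [c for c in combinations(range(X.shape[1]), k)
               if consistent(list(c))]
    if reducts:
        print("minimal reducts:", reducts)  # rules read off these attributes
        break
```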

Automatic Extraction of the Facial Feature Points Using Moving Color (색상 움직임을 이용한 얼굴 특징점 자동 추출)

  • Kim, Nam-Ho;Kim, Hyoung-Gon;Ko, Sung-Jea
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.8 / pp.55-67 / 1998
  • This paper presents an automatic facial feature point extraction algorithm for sequential color images. To extract the facial region in a video sequence, a moving color detection technique is proposed that emphasizes moving skin-color regions by applying a motion detection algorithm to skin-color transformed images. The threshold value for pixel difference detection is also decided according to the transformed pixel value, which represents the probability of the desired color information. Eye candidate regions are selected using both the black/white color information inside the skin-color region and the valley information of the moving skin region detected with morphological operators. The eye region is finally decided from the geometrical relationship of the eyes and a color histogram. To locate the exact feature points, PCA (Principal Component Analysis) is applied to each eye and mouth region. Experimental results show that the feature points of the eyes and mouth can be obtained correctly irrespective of the background and the direction and size of the face.
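
A minimal sketch of the moving-color idea, under the assumption that the skin-color transform is a Gaussian probability map in (Cr, Cb): motion is detected on the probability maps rather than raw intensities, so the gate adapts to the transformed values, as the abstract describes. All constants are illustrative.

```python
import cv2
import numpy as np

def moving_skin_map(prev_bgr, curr_bgr, cr0=150.0, cb0=105.0, sigma=12.0):
    """Emphasize skin-colored pixels that also moved between frames.

    Skin probability is a Gaussian in (Cr, Cb); the motion gate is the
    per-pixel difference of the probability maps, so the threshold adapts
    to the transformed (probability) values rather than raw intensity.
    """
    def skin_prob(bgr):
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
        d2 = (ycrcb[..., 1] - cr0) ** 2 + (ycrcb[..., 2] - cb0) ** 2
        return np.exp(-d2 / (2.0 * sigma ** 2))

    p_prev, p_curr = skin_prob(prev_bgr), skin_prob(curr_bgr)
    motion = np.abs(p_curr - p_prev)
    return p_curr * (motion > 0.1 * p_curr.max())  # gate skin map by motion
```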


BIM-Based Generation of Free-form Building Panelization Model (BIM 기반 비정형 건축물 패널화 모델 생성 방법에 관한 연구)

  • Kim, Yang-Gil;Lee, Yun-Gu;Ham, Nam-Hyuk;Kim, Jae-Jun
    • Journal of KIBIM / v.12 no.4 / pp.19-31 / 2022
  • With the development of 3D-based CAD (Computer Aided Design), attempts at free-form building design have expanded to small and medium-sized buildings in Korea. However, a standardized system for the continuous utilization of shape data and the BIM conversion process implemented with 3D-based NURBS is still immature. Without accurate review and management throughout a free-form building project, interference between members occurs and the cost of the project increases, which is very detrimental to the project. To solve this problem, we propose a process for the continuous utilization of 3D shape information based on BIM parameters. Our process includes algorithms such as Auto Split, Panel Optimization, Excel extraction based on shape information, BIM modeling through Adaptive Components, and a BIM model utilization method using ID codes. The optimal cutting reference points were calculated and the optimal material specification was derived using the Panel Optimization algorithm. With the Adaptive Component design methodology, a BIM model conforming to the standard cross-section details and specifications was uniformly established. The automatic BIM conversion algorithm for shape data through Excel extraction created a BIM model without loss of data, based on the optimized panel cutting reference points and cutting lines. Finally, we analyzed how the BIM model built for automatic conversion can be used. In addition to BIM uses in the general construction stage, such as visualization, interference review, quantity take-off, and construction simulation, an individual management plan for unit panels was derived through ID data input. This study suggests an improved process linking existing research on free-form panel optimization with work on parameter-based BIM information management, and shows that it can solve the problems of existing free-form building projects.
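
The Excel extraction step can be pictured as a flat export of per-panel cutting reference points keyed by ID code, one row per placement point of an adaptive component. The sketch below writes such a table as CSV; the field names, record layout, and panel data are invented for illustration and are not the paper's schema.

```python
import csv

# Illustrative panel records: an ID code plus cutting reference points (x, y, z).
panels = [
    {"id": "PNL-A-001",
     "pts": [(0.0, 0.0, 0.0), (1.2, 0.0, 0.1), (1.2, 0.9, 0.1), (0.0, 0.9, 0.0)]},
    {"id": "PNL-A-002",
     "pts": [(1.2, 0.0, 0.1), (2.4, 0.0, 0.3), (2.4, 0.9, 0.3), (1.2, 0.9, 0.1)]},
]

with open("panels.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["panel_id", "point_index", "x", "y", "z"])  # one row per point
    for p in panels:
        for i, (x, y, z) in enumerate(p["pts"]):
            w.writerow([p["id"], i, x, y, z])
# Each row can drive one placement point of an adaptive component, and
# panel_id is the key for per-panel management downstream.
```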

Automatic Sputum Color Image Segmentation for Lung Cancer Diagnosis

  • Taher, Fatma;Werghi, Naoufel;Al-Ahmad, Hussain
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.1 / pp.68-80 / 2013
  • Lung cancer is considered the leading cause of cancer death worldwide. A commonly used technique consists of analyzing sputum images to detect lung cancer cells. However, the analysis of sputum is time-consuming and requires highly trained personnel to avoid errors. Manual screening of sputum samples can be improved by using image processing techniques. In this paper we present a Computer Aided Diagnosis (CAD) system for the early detection and diagnosis of lung cancer based on the analysis of sputum color images, with the aim of attaining a high accuracy rate and reducing the time consumed in analyzing such samples. In order to form general diagnostic rules, we present a framework for the segmentation and extraction of sputum cells in sputum images using, respectively, a Bayesian classification method followed by region detection and feature extraction techniques to determine the shape of the nuclei inside the sputum cells. The final results will be used in a CAD system for the early detection of lung cancer. We analyzed the performance of the Bayesian classification with respect to color space representation and quantization. Our methods were validated via a series of experiments conducted on a data set of 100 images. Our evaluation criteria were sensitivity, specificity, and accuracy.
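
A minimal sketch of the per-pixel Bayesian classification stage, assuming two classes (cell vs. background) with Gaussian class-conditional densities estimated from labeled pixels; the two-class setup and the 3-channel color representation are assumptions, not the paper's exact configuration.

```python
import numpy as np

class PixelBayes:
    """Two-class Bayesian pixel classifier with Gaussian likelihoods."""

    def fit(self, fg_pixels, bg_pixels):
        # fg_pixels, bg_pixels: (N, 3) color vectors from labeled regions.
        self.stats = []
        total = len(fg_pixels) + len(bg_pixels)
        for pix in (fg_pixels, bg_pixels):
            mu = pix.mean(axis=0)
            cov = np.cov(pix.T) + 1e-6 * np.eye(3)  # regularized covariance
            const = np.log(len(pix) / total) - 0.5 * np.log(np.linalg.det(cov))
            self.stats.append((mu, np.linalg.inv(cov), const))
        return self

    def predict(self, pixels):
        # Posterior comparison: pick the class maximizing log p(x|c) + log p(c).
        scores = []
        for mu, icov, const in self.stats:
            d = pixels - mu
            scores.append(const - 0.5 * np.einsum("ij,jk,ik->i", d, icov, d))
        return scores[0] > scores[1]  # True where the 'cell' class wins
```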

Urban Road Extraction from Aerial Photo by Linking Method

  • Yang, Sung-Chul;Han, Dong-Yeo;Kim, Min-Suk;Kim, Yong-Il
    • Korean Journal of Geomatics / v.3 no.1 / pp.67-72 / 2003
  • Road systems and networks in urban areas have changed rapidly due to fast urbanization and increased traffic demands. As a result, many researchers have placed greater importance on the extraction, correction, and updating of information about road systems. Using various data on road systems and their condition, roads can be managed more efficiently and economically, and such information can serve as input for digital maps and GIS analysis. In this research, we used a high-resolution aerial photo of roads in the Seongnam area. First, we applied a top-hat filter to the area of interest so that road markings could be extracted efficiently. Lane separation lines were then selected, considering the shape similarity between each candidate line and reference data. Next, we extracted the roads in the urban area using the aforementioned road markings. With this technique, we could easily extract urban roads in a semi-automatic way.
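
The top-hat step is standard morphology: opening removes bright structures wider than the structuring element, so subtracting the opening keeps thin bright features such as lane markings. A minimal OpenCV sketch, with an illustrative kernel size tied to marking width in pixels:

```python
import cv2
import numpy as np

def road_markings(gray, kernel_size=15, thresh=40):
    """White top-hat on an 8-bit grayscale aerial image.

    Top-hat = original minus morphological opening; thresholding the
    result keeps bright, narrow structures such as lane markings.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT,
                                       (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    return cv2.threshold(tophat, thresh, 255, cv2.THRESH_BINARY)[1]
```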


Icefex: Protocol Format Extraction from IL-based Concolic Execution

  • Pan, Fan;Wu, Li-Fa;Hong, Zheng;Li, Hua-Bo;Lai, Hai-Guang;Zheng, Chen-Hui
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.3 / pp.576-599 / 2013
  • Protocol reverse engineering is useful for many security applications, including intelligent fuzzing, intrusion detection, and fingerprint generation. Since manual reverse engineering is a time-consuming and tedious process, a number of automatic techniques have been proposed. However, the accuracy of these techniques is limited by the complexity of binary instructions, and the derived formats miss constraints that are critical for security applications. In this paper, we propose a new approach to protocol format extraction. Our approach reasons only about the evaluation behavior of a program on the input message, obtained from concolic execution, and enables field identification and constraint inference with high accuracy. Moreover, it performs binary analysis with low complexity by reducing modern instruction sets to BIL, a small, well-specified, architecture-independent language. We have implemented our approach in a system called Icefex and evaluated it on real-world implementations of the DNS, eDonkey, FTP, HTTP, and McAfee ePO protocols. Experimental results show that our approach is more accurate and effective at extracting protocol formats than other approaches.
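
Icefex's IL machinery is not reproduced here; as a toy illustration of the field-identification idea, the sketch below logs which input byte offsets each comparison in a hand-written parser touches and merges offsets consumed by one evaluation into a field. The protocol, parser, and message are invented stand-ins.

```python
# Watch which input offsets each evaluation touches; offsets used together
# in one comparison are grouped into one field with its constraint.

def parse_and_trace(msg: bytes):
    ops = []                                 # (offsets, constraint) log
    ops.append(((0, 1), "magic == b'MQ'"))   # 2-byte magic compared at once
    assert msg[0:2] == b"MQ"
    ops.append(((2,), "version >= 1"))
    assert msg[2] >= 1
    length = int.from_bytes(msg[3:5], "big")
    ops.append(((3, 4), "length == len(payload)"))
    assert length == len(msg) - 5
    return ops

def fields_from_trace(ops):
    # One field per evaluation: the byte span plus its inferred constraint.
    return [(min(offs), max(offs), c) for offs, c in ops]

trace = parse_and_trace(b"MQ\x01\x00\x03abc")
for lo, hi, c in fields_from_trace(trace):
    print(f"field bytes [{lo}..{hi}]: {c}")
```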

Minimally Supervised Relation Identification from Wikipedia Articles

  • Oh, Heung-Seon;Jung, Yuchul
    • Journal of Information Science Theory and Practice / v.6 no.4 / pp.28-38 / 2018
  • Wikipedia is composed of millions of articles, each of which explains a particular real-world entity in various languages. Since the articles are contributed and edited by a large population of diverse experts with no specific authority, Wikipedia can be seen as a naturally occurring body of human knowledge. In this paper, we propose a method to automatically identify key entities and relations in Wikipedia articles, which can be used for automatic ontology construction. Compared to previous approaches to entity and relation extraction and/or identification from text, our goal is to capture naturally occurring entities and relations from Wikipedia while minimizing the artificiality often introduced when constructing training and testing data. The titles of the articles and anchored phrases in their text are regarded as entities, and their types are automatically classified with minimal training. We attempt to automatically detect and identify possible relations among the entities based on clustering without training data, as opposed to relation extraction approaches that focus on improving the accuracy of selecting one of several target relations for a given pair of entities. While relation extraction with supervised learning requires significant annotation effort for a predefined set of relations, our approach attempts to discover relations as they occur naturally. Unlike other unsupervised relation identification work, where automatically identified relations are evaluated against correct relations determined a priori by human judges, we evaluated the appropriateness of the naturally occurring clusters of relations involving person-artifact and person-organization entities, together with their relation names.
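
As a rough sketch of the unsupervised relation-identification idea, the snippet below clusters short "relation contexts" (the text linking two entities) with TF-IDF and k-means. The contexts, vectorizer, and cluster count are illustrative stand-ins for the paper's clustering setup, not its actual features.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative relation contexts: text between an article's title entity
# and an anchored entity. Real input would come from Wikipedia sentences.
contexts = [
    "was born in", "grew up in", "is located in",
    "composed the opera", "wrote the novel", "painted the portrait",
    "founded the company", "served as president of", "is the CEO of",
]

vec = TfidfVectorizer()
X = vec.fit_transform(contexts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for lab, ctx in sorted(zip(labels, contexts)):
    print(lab, ctx)   # each cluster approximates one unnamed relation type
```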

An intelligent system for automatic data extraction in E-Commerce Applications

  • Cardenosa, Jesus;Iraola, Luis;Tovar, Edmundo
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.202-208 / 2001
  • One of the most frequent uses of the Internet is data gathering. The data can concern many themes, but perhaps one of the most demanded fields is tourist information. Normally, the databases that support these systems are maintained manually. However, there is another approach: to extract data automatically, for instance from textual public information existing on the Web. This approach consists of extracting data from textual sources (public or not) and serving them, totally or partially, to the user in the form he/she wants. The obtained data can automatically maintain databases that support different systems, such as WAP mobile telephones or commercial systems accessed through natural language interfaces. This process has three main actors: the information itself, which is present in a particular context; the information supplier, who extracts data from the existing information; and the user or information searcher. This value-added chain reuses and gives value to existing data, even when the data were not originally intended for their final use. The main advantage of this approach is that it makes the information source independent of the information user, meaning that the original information belongs to a particular context that is not necessarily the context of the user. This paper describes an application based on this approach, developed by the authors in the FLEX ESPRIT IV project no. EP29158, in the work package "Knowledge Extraction & Data Mining", where information captured from digital newspapers is extracted and reused in a tourist information context.
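
As a toy illustration of extracting structured records from newspaper-style text, the sketch below pulls (event, place, date) triples with a regular expression; the pattern, fields, and sentences are invented for the sketch and do not reflect the FLEX system's extraction machinery.

```python
import re

# Invented newspaper-style text; a real system would ingest article feeds.
text = ("The jazz festival opens in Granada on 12 May. "
        "A crafts fair will be held in Toledo on 3 June.")

pattern = re.compile(r"(?P<event>[A-Z][\w ]+?) (?:opens|will be held) in "
                     r"(?P<place>[A-Z]\w+) on (?P<date>\d{1,2} \w+)")
records = [m.groupdict() for m in pattern.finditer(text)]
print(records)  # rows ready to load into the database behind a WAP/NLI front end
```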
