• Title/Summary/Keyword: domain ontology

A Study on Ontology-based Keywords Structuring for Efficient Information Retrieval (연구.학술정보 효율적 검색을 위한 온톨로지 기반의 주제 색인어 구조화 방안 연구)

  • Song, In-Seok
    • Journal of Information Management, v.39 no.4, pp.121-154, 2008
  • In this paper, an ontology-based keyword structuring method is proposed to represent the knowledge structure of scholarly documents and to make inferences from the semantic relationships holding among them. The characteristics of the thesaurus as a knowledge organization system (KOS) for subject headings are critically reviewed from the information retrieval point of view. Domain concepts are identified and classified by analyzing the information activities that occur in a general research process, based on a scholarly sensemaking model. The ontological structure of a keyword set is defined in terms of the semantic relationships of the canonical concepts that constitute scholarly documents such as journal articles. As a result, each ontologically structured keyword set represents the knowledge structure of the corresponding document as a semantic index. By means of the axioms and inference rules defined for information needs, users can analytically explore the scholarly communication network built on the semantic relationships among documents, based on the scholarly sensemaking model, in order to efficiently retrieve the information relevant to problem solving.
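
A minimal Python sketch (not the authors' implementation) of the core idea: keywords are typed by the role they play in the research process, and a simple inference rule links documents through the semantic relationships of those concepts. The role names and the rule itself are illustrative assumptions.

```python
# Hypothetical sketch: structured keyword sets and one inference rule.
from dataclasses import dataclass, field

@dataclass
class StructuredKeywords:
    """Keyword set of one document, structured by canonical concept roles."""
    doc_id: str
    problem: set[str] = field(default_factory=set)  # what the paper studies
    method: set[str] = field(default_factory=set)   # how it studies it

def applies_concept_studied_by(a: StructuredKeywords, b: StructuredKeywords) -> bool:
    """Illustrative rule: document `a` applies as a method a concept that
    document `b` studies as its research problem, placing `a` downstream of
    `b` in the scholarly communication network."""
    return bool(a.method & b.problem)

docs = [
    StructuredKeywords("D1", problem={"keyword structuring"}, method={"ontology"}),
    StructuredKeywords("D2", problem={"ontology"}, method={"description logic"}),
]
for a in docs:
    for b in docs:
        if a is not b and applies_concept_studied_by(a, b):
            print(f"{a.doc_id} --appliesConceptStudiedBy--> {b.doc_id}")  # D1 --> D2
```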

A Design of the Ontology-based Situation Recognition System to Detect Risk Factors in a Semiconductor Manufacturing Process (반도체 공정의 위험요소 판단을 위한 온톨로지 기반의 상황인지 시스템 설계)

  • Baek, Seung-Min;Jeon, Min-Ho;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology, v.17 no.6, pp.804-809, 2013
  • The current state-monitoring system for semiconductor manufacturing processes is based on manually collected sensor data, which limits complex malfunction detection and real-time monitoring. This study designs a situation recognition algorithm that forms a network over time by creating a domain ontology, and proposes a system that serves users by generating events when risk factors are found in the semiconductor process. To this end, a multiple sensor node for situational inference was designed and tested. In the experiment, events governed by the temporal inference rule were generated for situations that formed over time in the collected data, whereas events related to malfunctions and external time factors produced log data only.
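
As an illustration of the temporal inference described above, the Python sketch below raises an event only when a risk condition persists over a window of readings, and merely logs transient readings; the sensor, window size, and threshold are invented placeholders, not the paper's configuration.

```python
# Hypothetical sketch: a temporal rule over a stream of sensor readings.
from collections import deque

WINDOW = 5          # readings the temporal rule inspects
TEMP_LIMIT = 80.0   # assumed risk threshold for some process sensor

history: deque = deque(maxlen=WINDOW)

def on_reading(value: float) -> None:
    history.append(value)
    # Temporal rule: an event fires only when the condition holds across the
    # whole window, i.e., a situation that has formed over time.
    if len(history) == WINDOW and all(v > TEMP_LIMIT for v in history):
        print(f"EVENT: sustained over-limit readings {list(history)}")
    elif value > TEMP_LIMIT:
        print(f"LOG: transient over-limit reading {value} (no event)")

for v in [75.0, 82.0, 83.0, 85.0, 84.0, 86.0, 88.0]:
    on_reading(v)
```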

Livestock Telemedicine System Prediction Model for Human Healthy Life (인간의 건강한 삶을 위한 가축원격 진료 예측 모델)

  • Kang, Yun-Jeong;Lee, Kwang-Jae;Choi, Dong-Oun
    • Journal of Korea Entertainment Industry Association, v.13 no.8, pp.335-343, 2019
  • Healthy living is an essential element of human happiness. Quality food provides the basis for life, and the health of livestock, which provide meat and dairy products, has a direct impact on human health. In the case of calves, diarrhea is the cause of most diseases. In this paper, we use sensors to measure a calf's biometric data in order to diagnose calf diarrhea. The collected biometric data undergo preprocessing so that they can be used as meaningful information. We record calf birth history and measure calf biometrics. An ontology is constructed from housing environment information together with biochemical, immunity, and body measurement information for disease management. We build a knowledge base that predicts calf diarrhea through logical reasoning. Diarrhea is predicted using knowledge about the names, causes, timing, and symptoms of livestock diseases. These knowledge bases can be expressed as domain ontologies under a parent ontology for prediction, and, as a result, treatment and prevention methods can be suggested.
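
A minimal Python sketch of what such a rule-based prediction step could look like; the thresholds and rules below are invented placeholders, not the paper's knowledge base or veterinary guidance.

```python
# Hypothetical sketch: combining biometric facts with simple knowledge-base rules.
def predict_diarrhea_risk(temp_c: float, feed_intake_ratio: float, age_days: int) -> str:
    risk = 0
    if temp_c >= 39.5:            # assumed fever threshold
        risk += 1
    if feed_intake_ratio < 0.7:   # assumed reduced-intake threshold
        risk += 1
    if age_days <= 30:            # younger calves assumed more susceptible
        risk += 1
    return {0: "low", 1: "watch"}.get(risk, "high")

print(predict_diarrhea_risk(temp_c=40.1, feed_intake_ratio=0.5, age_days=14))  # high
```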

Intelligent Army Tactical Command Information System based on National Defense Ontology (국방온톨로지 기반의 지능형 육군전술지휘정보체계)

  • Yoo, Donghee;Ra, Minyoung;Han, Changhee;Shin, Jinhee;No, Sungchun
    • Journal of the Korea Society of Computer and Information, v.18 no.3, pp.79-89, 2013
  • ATCIS (Army Tactical Command Information System) provides commanders and staff officers with battlefield information reported by tactical echelons under an army corps, and the commanders make decisions based on this information using their experience and expertise in the military domain. If ATCIS could automatically understand the information reported from a rapidly changing battlefield and provide new knowledge to support decision making, commanders would be able to make faster and more accurate decisions. In this paper, therefore, we propose an intelligent ATCIS using a national defense ontology. To this end, we built the national defense ontology by analyzing the electronic field manuals and the ATCIS database, and we defined military knowledge for decision making in the form of rules by interviewing staff officers from different fields. To show how the ontology and rules can support the commanders' decision making, we implemented a decision support service that estimates the possibility of an enemy provocation using semantic web technologies.
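
The rdflib-based Python sketch below illustrates the general semantic-web pattern of stating battlefield facts as triples and querying them with a rule-like SPARQL pattern; the namespace, classes, and rule are illustrative assumptions, not the paper's national defense ontology.

```python
# Hypothetical sketch: facts as RDF triples, a rule as a SPARQL pattern.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/defense#")  # assumed namespace
g = Graph()
g.add((EX.Unit1, RDF.type, EX.EnemyArtillery))
g.add((EX.Unit1, EX.observedMovingToward, EX.Border))

# Illustrative rule: enemy artillery moving toward the border is treated as a
# provocation indicator.
rows = g.query("""
    PREFIX ex: <http://example.org/defense#>
    SELECT ?u WHERE {
        ?u a ex:EnemyArtillery ;
           ex:observedMovingToward ex:Border .
    }""")
for (u,) in rows:
    print(f"possible provocation indicator: {u}")
```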

The Design and Implementation of Ontology for Simulation based Architecture Framework (ONT-SAF) in Military Domain (SBA AF의 구축을 지원하는 온톨로지의 설계 및 구현(ONT-SAF))

  • Kwon, Youngmin;Sohn, Mye;Lee, Wookey
    • Journal of Information Technology and Architecture, v.9 no.3, pp.233-241, 2012
  • An architecture framework (AF) is a guideline that defines the components needed to develop and operate an enterprise architecture (EA) and the relationships among those components. Many architecture frameworks exist for operating the EA of governments and businesses, such as the Zachman framework, DoDAF, TOGAF, FEAF, and TEAF. DoDAF is the most representative AF supporting EA development in the military domain; it is composed of eight viewpoints and 40 views affiliated with those viewpoints. To develop an AF for a specific goal, system architects select a set of views and determine the data needed for modeling each view. However, the views and data in DoDAF are structurally inter-related, explicitly and/or implicitly, so developing an AF for a specific goal becomes a long-term undertaking. To reduce this development burden, in this paper we develop ONT-SAF (Ontology for DoDAF), which can infer inter-relationships such as referential and transitive relationships and the sequences among the views. Furthermore, to promote reusability and consistency of the views and data within an AF, we adopt a view-data separation strategy. ONT-SAF contains classes such as 'viewpoint', 'view', 'data', 'expression method', and 'reference model', and 11 properties including 'hasView'. To prove the effectiveness of ONT-SAF, we perform a case study.
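
A small Python sketch of the transitive inference described above: if view A references view B and B references C, a dependency from A to C is inferred. The 'references' pairs are placeholders (OV-1, OV-2, and SV-1 are genuine DoDAF view names).

```python
# Hypothetical sketch: transitive closure over a 'references' relation between views.
def transitive_closure(pairs: set) -> set:
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

references = {("OV-1", "OV-2"), ("OV-2", "SV-1")}  # assumed view dependencies
print(sorted(transitive_closure(references)))
# [('OV-1', 'OV-2'), ('OV-1', 'SV-1'), ('OV-2', 'SV-1')]
```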

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.43-61, 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving, and the field has achieved more technological advances than ever thanks to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases support intelligent processing in many areas of artificial intelligence, for example the question-answering system of a smart speaker. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. Much recent research on knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from the Wikipedia infobox, which presents a user-created summary of an article's unifying aspects. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. By generating knowledge from semi-structured, user-created infobox data, DBpedia can expect high reliability in terms of knowledge accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the proper sentences from which to extract triples, and selecting values and transforming them into RDF triple structures. Wikipedia infobox structures are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from the Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction process. Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
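
To make the final pipeline step concrete, here is a minimal Python sketch that collects a BIO-tagged value span from an already-classified sentence and emits a triple; the tag names, subject, and predicate are illustrative assumptions, and in the paper the tags would come from the trained CRF or Bi-LSTM-CRF model.

```python
# Hypothetical sketch: BIO-tagged tokens -> one RDF-style triple.
from typing import List, Optional

def bio_to_value(tokens: List[str], tags: List[str]) -> Optional[str]:
    """Collect the token span labeled B-VAL / I-VAL into a single object value."""
    span = [tok for tok, tag in zip(tokens, tags) if tag in ("B-VAL", "I-VAL")]
    return " ".join(span) or None

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["B-VAL", "O",  "O",   "O",       "O",  "O",     "O"]

value = bio_to_value(tokens, tags)
if value:
    # Subject comes from the article, predicate from the classified attribute.
    print(("dbr:South_Korea", "dbo:capital", value))  # -> (..., ..., 'Seoul')
```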

Full-Length Enriched cDNA Library Construction from Tissues Related to Energy Metabolism in Pigs

  • Lee, Kyung-Tai;Byun, Mi-Jeong;Lim, Dajeong;Kang, Kyung-Soo;Kim, Nam-Soon;Oh, Jung-Hwa;Chung, Chung-Soo;Park, Hae-Suk;Shin, Younhee;Kim, Tae-Hun
    • Molecules and Cells, v.28 no.6, pp.529-536, 2009
  • Genome sequencing of the pig is being accelerated because of its importance as an evolutionary and biomedical model animal as well as a major livestock animal. However, information on expressed porcine genes is insufficient for annotating and using the genomic information. A series of expressed sequence tags (ESTs) from the 5' ends of five full-length enriched cDNA libraries (SUSFLECKs) was functionally characterized. The SUSFLECKs were constructed from porcine abdominal fat, induced fat cells, loin muscle, liver, and pituitary gland, and comprised non-normalized and normalized libraries. A total of 55,658 ESTs sequenced once from the 5' ends of clones were produced and assembled into 17,684 unique sequences, comprising 7,736 contigs and 9,948 singletons. Gene Ontology analysis found two significant biological-process leaf nodes: gluconeogenesis and translation elongation. In functional domain analysis based on the Pfam database, the beta-transducin repeat domain of the WD40 protein was the most frequently occurring domain. Twelve genes, including SLC25A6, EEF1G, EEF1A1, COX1, ACTA1, SLA, and ANXA2, were significantly more abundant in fat tissues than in loin muscle, liver, and pituitary gland in the SUSFLECKs. These characteristics of the SUSFLECKs, determined by EST analysis, can provide important insight for discovering functional pathways in gene networks and expanding our understanding of energy metabolism in the pig.

Web-Based Computational System for Protein-Protein Interaction Inference

  • Kim, Ki-Bong
    • Journal of Information Processing Systems, v.8 no.3, pp.459-470, 2012
  • Recently, high-throughput technologies such as the two-hybrid system, protein chips, mass spectrometry, and phage display have furnished a great deal of data on protein-protein interactions (PPIs), but these data have so far been limited in both accuracy and quantity. Accordingly, computational techniques for predicting and validating PPIs have been developed. Existing computational methods, however, do not take into account the fact that a PPI actually originates from interactions between the domains that each protein contains. In this work, therefore, information on the domain modules of individual proteins is employed to infer protein interaction relationships. The system developed here, WASPI (Web-based Assistant System for Protein-protein interaction Inference), provides many functional insights into protein interactions and their domains. To achieve this, several preprocessing steps were taken. First, the domain-module information of interacting proteins was extracted using the InterPro database, which covers protein families, domains, and functional sites; the InterProScan program was used in this step. Second, homology comparison against GO (Gene Ontology) and COG (Clusters of Orthologous Groups), with E-value thresholds of $10^{-5}$ and $10^{-3}$ respectively, was employed to obtain the function and annotation of each interacting protein in a secondary PPI database within WASPI. The BLAST program was utilized for the homology comparison.
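
A minimal Python sketch of the general domain-based inference technique that systems like WASPI build on (not its exact algorithm): two proteins are predicted to interact when they carry a domain pair known to interact. The domain names and interaction set are toy data.

```python
# Hypothetical sketch: domain-module-based PPI inference.
KNOWN_DOMAIN_INTERACTIONS = {("SH3", "proline-rich"), ("kinase", "SH2")}  # toy data

def predict_ppi(domains_a: set, domains_b: set) -> bool:
    """Predict an interaction if any cross-protein domain pair is known to interact."""
    return any(
        (da, db) in KNOWN_DOMAIN_INTERACTIONS or (db, da) in KNOWN_DOMAIN_INTERACTIONS
        for da in domains_a for db in domains_b
    )

protein_x = {"SH3", "kinase"}   # domain modules, e.g., as found by InterProScan
protein_y = {"proline-rich"}
print(predict_ppi(protein_x, protein_y))  # True
```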

A Context Model Comparison Methodology for Developing Generic Context Model used in Ubiquitous Multi-Services (유비쿼터스 멀티 서비스 개발에서의 일반적 상황모형 구축을 위한 상황모형 비교 평가방법론)

  • Park, Tae-Hwan;Kwon, Oh-Hyung
    • Journal of Intelligence and Information Systems, v.13 no.1, pp.29-47, 2007
  • Acquiring context data in a timely and correct way is now regarded as one of the crucial characteristics of proactive services running in ubiquitous computing environments. Moreover, the context model must be well designed to support a solid context-aware system. Since ubiquitous computing systems aim to provide context-aware services everywhere, with any available device, legacy services that use context models assuming a single or limited domain should be extended to remain useful for multi-domain multi-services. This motivates building a generic context model of an appropriate model type. The purpose of this paper is therefore to propose a generic context model by assessing a variety of model types against a set of evaluation measures.

A Multi-Strategic Mapping Approach for Distributed Topic Maps (분산 토픽맵의 다중 전략 매핑 기법)

  • Kim, Jung-Min;Shin, Hyo-phil;Kim, Hyoung-Joo
    • Journal of KIISE: Software and Applications, v.33 no.1, pp.114-129, 2006
  • Ontology mapping is the task of finding semantic correspondences between two ontologies. To improve the effectiveness of ontology mapping, we need to consider the characteristics and constraints of the data models used to implement ontologies. Earlier research on ontology mapping, however, has proven inefficient because it transforms the input ontologies into graphs and considers all nodes and edges of those graphs, which requires a great amount of processing time. In this paper, we propose a multi-strategic mapping approach that finds correspondences between ontologies based on the syntactic and semantic characteristics and constraints of topic maps. Our multi-strategic approach includes topic name-based, topic property-based, hierarchy-based, and association-based mapping, and it uses a hybrid method in which a combined similarity is derived from the results of the individual mapping approaches. In addition, we do not need to generate all cross-pairs of topics from the ontologies, because unmatched topic pairs can be removed using the characteristics and constraints of the topic maps. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Yahoo German literature dictionary as input ontologies. Our experiments show that the automatically generated mapping results conform to the outputs generated manually by domain experts, which is very promising for further work.
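
A minimal Python sketch of the hybrid combination step: each strategy returns a similarity in [0, 1] for a topic pair, and a weighted sum yields the combined score. The weights, the acceptance threshold, and the crude name measure are illustrative assumptions.

```python
# Hypothetical sketch: combining per-strategy similarities into one score.
def name_sim(a: str, b: str) -> float:
    """Crude token-overlap name similarity (placeholder measure)."""
    ta = set(a.lower().replace(",", "").split())
    tb = set(b.lower().replace(",", "").split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def combined_similarity(scores: dict, weights: dict) -> float:
    return sum(weights[k] * scores.get(k, 0.0) for k in weights)

scores = {
    "name": name_sim("Immanuel Kant", "Kant, Immanuel"),  # 1.0
    "property": 0.9,     # stand-ins for the property-, hierarchy-, and
    "hierarchy": 0.7,    # association-based strategy outputs
    "association": 0.6,
}
weights = {"name": 0.4, "property": 0.3, "hierarchy": 0.2, "association": 0.1}
print(combined_similarity(scores, weights) >= 0.8)  # True: accept as a mapping candidate
```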