• Title/Summary/Keyword: 분산 추론 (Distributed Reasoning)


Large Scale Incremental Reasoning using SWRL Rules in a Distributed Framework (분산 처리 환경에서 SWRL 규칙을 이용한 대용량 점증적 추론 방법)

  • Lee, Wan-Gon;Bang, Sung-Hyuk;Park, Young-Tack
    • Journal of KIISE / v.44 no.4 / pp.383-391 / 2017
  • As we enter a new era of Big Data, the amount of semantic data has rapidly increased. In order to derive meaningful information from this large volume of semantic data, studies that utilize SWRL (Semantic Web Rule Language) are being actively conducted. SWRL rules are based on data extracted from a user's empirical knowledge. However, conventional reasoning systems developed on single machines cannot process large-scale data. Similarly, multi-node reasoning systems suffer performance degradation due to network shuffling. This paper therefore overcomes the limitations of existing systems and proposes more efficient distributed inference methods. It also introduces data partitioning strategies that minimize network shuffling. In addition, it describes a method for optimizing the incremental reasoning process through data selection and rule ordering. To evaluate the proposed methods, experiments were conducted using WiseKB, which consists of 200 million triples, with 83 user-defined rules; the overall reasoning task was completed in 32.7 minutes. Experiments on the LUBM benchmark datasets also showed that our approach performs reasoning twice as fast as MapReduce-based reasoning systems.
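The join-based evaluation of a rule over partitioned triples can be pictured with the minimal PySpark sketch below. It is not the paper's implementation: the sample rule, predicate names, and the co-partitioning step are illustrative assumptions of how a SWRL-style rule can be applied while limiting network shuffling.

```python
# Minimal PySpark sketch (not the paper's code) of evaluating one SWRL-style rule:
#   worksFor(x, y) ^ subOrganizationOf(y, z) -> memberOf(x, z)
from pyspark import SparkContext

sc = SparkContext("local[*]", "swrl-rule-sketch")

triples = sc.parallelize([
    ("alice", "worksFor", "deptCS"),
    ("deptCS", "subOrganizationOf", "univA"),
])

# Key both antecedent patterns on the shared variable y; hash-partitioning the two
# RDDs the same way keeps the join mostly local and reduces shuffling.
works_for = (triples.filter(lambda t: t[1] == "worksFor")
                    .map(lambda t: (t[2], t[0]))        # (y, x)
                    .partitionBy(8))
sub_org = (triples.filter(lambda t: t[1] == "subOrganizationOf")
                  .map(lambda t: (t[0], t[2]))          # (y, z)
                  .partitionBy(8))

inferred = (works_for.join(sub_org)                     # (y, (x, z))
                     .map(lambda kv: (kv[1][0], "memberOf", kv[1][1]))
                     .distinct())
print(inferred.collect())   # [('alice', 'memberOf', 'univA')]
```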

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.10 / pp.1195-1206 / 2015
  • For the management of a knowledge system, systems that automatically infer and manage scalable knowledge are required. Most of these systems use ontologies in order to exchange knowledge between machines and infer new knowledge. Therefore, approaches are needed that infer new knowledge over scalable ontologies. In this paper, we propose an approach to rule-based reasoning for scalable SHIF ontologies in the Spark framework, which works similarly to MapReduce over distributed memory on a cluster. For efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure for splitting scalable ontology triples into small sets according to each reasoning rule and loading these triple sets into distributed memory. Second, a rule execution order and iteration conditions based on the dependencies and correlations among the SHIF rules are defined. Finally, we explain the operations adapted to execute the rules, which are based on reasoning algorithms. To evaluate the suggested methods, we perform an experiment against WebPIE, a representative cluster-based ontology reasoner, using LUBM, a standard dataset for evaluating ontology inference and search speed. Consequently, the proposed approach improves throughput by 28,400% (157k triples/sec) over WebPIE (553 triples/sec) on LUBM.
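The "split triples per rule and keep them in distributed memory" idea can be sketched as follows in PySpark. The rule shown (a subclass rule standing in for one of the SHIF rules), the predicate names, and the fixpoint loop are assumptions for illustration, not the paper's data structures.

```python
# Sketch: split triples by the predicates each rule needs, keep the splits cached
# in cluster memory, and iterate one rule to a fixpoint without re-reading input.
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local[*]", "shif-split-sketch")

triples = sc.parallelize([
    ("Student", "rdfs:subClassOf", "Person"),
    ("bob", "rdf:type", "Student"),
])

# One cached pair-RDD per rule-relevant predicate.
subclass = (triples.filter(lambda t: t[1] == "rdfs:subClassOf")
                   .map(lambda t: (t[0], t[2]))              # (C, D)
                   .persist(StorageLevel.MEMORY_ONLY))
rdf_type = (triples.filter(lambda t: t[1] == "rdf:type")
                   .map(lambda t: (t[2], t[0]))              # (C, x)
                   .persist(StorageLevel.MEMORY_ONLY))

# Example rule: type(x, C) ^ subClassOf(C, D) -> type(x, D), iterated to fixpoint.
new = rdf_type
while True:
    derived = (new.join(subclass)                            # (C, (x, D))
                  .map(lambda kv: (kv[1][1], kv[1][0]))      # (D, x)
                  .subtract(rdf_type))
    if derived.isEmpty():
        break
    rdf_type = rdf_type.union(derived).persist(StorageLevel.MEMORY_ONLY)
    new = derived

print(rdf_type.map(lambda kv: (kv[1], "rdf:type", kv[0])).collect())
```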

Design and Implementation of a Large-Scale Spatial Reasoner Using MapReduce Framework (맵리듀스 프레임워크를 이용한 대용량 공간 추론기의 설계 및 구현)

  • Nam, Sang Ha;Kim, In Cheol
    • KIPS Transactions on Software and Data Engineering / v.3 no.10 / pp.397-406 / 2014
  • In order to answer questions successfully on behalf of a human in DeepQA environments such as the American quiz show Jeopardy!, a computer is required to perform fast temporal and spatial reasoning over a large-scale commonsense knowledge base. In this paper, we present a scalable spatial reasoning algorithm for efficiently deriving new directional and topological relations using the MapReduce framework, one of the well-known parallel distributed computing environments. The proposed reasoning algorithm takes as input a large-scale spatial knowledge base including CSD-9 directional relations and RCC-8 topological relations. To infer new directional and topological relations from the given spatial knowledge base, it performs cross-consistency checks as well as path-consistency checks on the knowledge base. To maximize the parallelism of the reasoning computations according to the principles of the MapReduce framework, the algorithm partitions the large knowledge base into smaller ones and distributes them over multiple computing nodes in the map phase. Then, in the reduce phase, the algorithm infers new knowledge from the distributed spatial knowledge bases. Through experiments on a sample knowledge base with a MapReduce-based implementation of our algorithm, we demonstrated the high performance of our large-scale spatial reasoner.
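The map/reduce shape of one composition (path-consistency propagation) round might look like the pure-Python sketch below. The tiny composition table and relation names are placeholders, not the actual CSD-9/RCC-8 tables from the paper, and the driver is a single-machine stand-in for Hadoop.

```python
# Pure-Python sketch of one composition round in map/reduce style.
from itertools import product

# Placeholder composition table: compose(r1, r2) for "a r1 b" and "b r2 c".
COMPOSE = {
    ("NTPP", "NTPP"): {"NTPP"},      # RCC-8 flavor: inside of inside is inside
    ("TPP", "NTPP"): {"NTPP"},
}

def map_phase(triple):
    a, rel, b = triple
    # Emit under both endpoints so relations sharing a middle object meet at one reducer.
    yield (b, ("left", a, rel))      # "... a rel b": b may be a middle node
    yield (a, ("right", rel, b))     # "a rel b ...": a may be a middle node

def reduce_phase(key, values):
    lefts = [(x, r) for tag, x, r in values if tag == "left"]    # (a, r1)
    rights = [(r, y) for tag, r, y in values if tag == "right"]  # (r2, c)
    for (a, r1), (r2, c) in product(lefts, rights):
        for r in COMPOSE.get((r1, r2), ()):
            yield (a, r, c)

# Driver: group map output by key, then reduce.
kb = [("x", "TPP", "y"), ("y", "NTPP", "z")]
groups = {}
for t in kb:
    for k, v in map_phase(t):
        groups.setdefault(k, []).append(v)
derived = [t for k, vs in groups.items() for t in reduce_phase(k, vs)]
print(derived)   # [('x', 'NTPP', 'z')]
```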

Scalable Ontology Reasoning Using GPU Cluster Approach (GPU 클러스터 기반 대용량 온톨로지 추론)

  • Hong, JinYung;Jeon, MyungJoong;Park, YoungTack
    • Journal of KIISE / v.43 no.1 / pp.61-70 / 2016
  • In recent years, there has been a need for techniques for large-scale ontology inference in order to infer new knowledge from existing knowledge at high speed and to support a diversity of semantic services. With the recent advances in distributed computing, ontology inference engines have mostly been developed on top of Hadoop or Spark frameworks on large clusters. Parallel programming techniques using GPGPU, which offers many more cores than a CPU, are also used for ontology inference. In this paper, by combining the advantages of both techniques, we propose a new method for reasoning over large RDFS ontology data using the Spark in-memory framework and inferring over the distributed data at high speed using GPGPU. With GPGPU, ontology reasoning over high-capacity data can be performed at low cost and with higher efficiency than conventional inference methods. In addition, we show that GPGPU can reduce the data workload on each node in the Spark cluster. To evaluate our approach, we used LUBM datasets ranging from LUBM10 to LUBM120. Our experimental results showed that the proposed reasoning engine performs 7 times faster than a conventional approach that uses a Spark in-memory inference engine.
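As a rough picture of how triples can be dictionary-encoded into dense integer arrays and processed in bulk on an accelerator, the sketch below uses NumPy as a stand-in for a GPU array library such as CuPy; the rule, the toy term dictionary, and the array layout are assumptions, not the paper's actual kernels.

```python
# Sketch: encode triples as int arrays so a rule step becomes a vectorized
# operation that could run on a GPU (NumPy stands in for CuPy here).
import numpy as np

terms = ["alice", "hasBoss", "managedBy", "bob"]          # toy term dictionary
tid = {t: i for i, t in enumerate(terms)}

# Triple table as three parallel int columns (subject, predicate, object).
s = np.array([tid["alice"]])
p = np.array([tid["hasBoss"]])
o = np.array([tid["bob"]])

# rdfs7-style rule: (p rdfs:subPropertyOf q) ^ (s p o) -> (s q o)
subprop = {tid["hasBoss"]: tid["managedBy"]}              # p -> q map from the schema

mask = np.isin(p, np.fromiter(subprop.keys(), dtype=p.dtype))   # bulk predicate test
new_p = np.vectorize(subprop.get)(p[mask])                      # rewrite predicates
inferred = np.stack([s[mask], new_p, o[mask]], axis=1)
print([[terms[i] for i in row] for row in inferred])   # [['alice', 'managedBy', 'bob']]
```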

Distributed Table Join for Scalable RDFS Reasoning on Cloud Computing Environment (클라우드 컴퓨팅 환경에서의 대용량 RDFS 추론을 위한 분산 테이블 조인 기법)

  • Lee, Wan-Gon;Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.41 no.9 / pp.674-685 / 2014
  • A knowledge service system needs to infer new knowledge from asserted knowledge in order to provide effective services, and most knowledge service systems express their knowledge as ontologies. Since the volume of knowledge information in the real world is becoming massive, effective techniques for handling massive ontology data are drawing attention. This paper provides methods to infer over massive ontology data, up to the expressiveness of RDFS, on a cloud computing environment, and evaluates their performance. The RDFS inference methods suggested in this paper are, first, a method applying MapReduce based on an RDFS meta table and, second, a method that uses only the distributed in-memory layer of the cloud environment without MapReduce. Accordingly, this paper explains the inference system structure of each technique, the meta table set-up according to the RDFS inference rules, and the algorithm of each inference strategy. To evaluate the suggested methods, we performed experiments with LUBM, a standard dataset for evaluating ontology inference and search speed. For LUBM6000, the meta-table-based RDFS inference technique required 13.75 minutes for the total inference (1,042 triples per second), whereas the method using the distributed in-memory layer needed 7.24 minutes (1,979 triples per second), roughly twice as fast.
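One way to read the "RDFS meta table" idea is that the small schema portion of the ontology is shipped to every node so instance triples can be expanded without a shuffled join. The PySpark sketch below illustrates this with a broadcast variable; the sample data and the single rule are illustrative only.

```python
# Sketch: broadcast a small RDFS "meta table" (schema entries) to all workers and
# apply an rdfs9-style rule with a flatMap instead of a shuffled join.
from pyspark import SparkContext

sc = SparkContext("local[*]", "rdfs-meta-table-sketch")

# C -> [D, ...] from rdfs:subClassOf; kept tiny so it fits in every executor.
meta = sc.broadcast({"Student": ["Person"]})

instances = sc.parallelize([("bob", "rdf:type", "Student")])

def apply_rdfs9(triple):
    s, p, o = triple
    if p == "rdf:type":
        for d in meta.value.get(o, []):
            yield (s, "rdf:type", d)

inferred = instances.flatMap(apply_rdfs9).distinct()
print(inferred.collect())   # [('bob', 'rdf:type', 'Person')]
```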

Confidence Value based Large Scale OWL Horst Ontology Reasoning (신뢰 값 기반의 대용량 OWL Horst 온톨로지 추론)

  • Lee, Wan-Gon;Park, Hyun-Kyu;Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE / v.43 no.5 / pp.553-561 / 2016
  • Several machine learning techniques are able to automatically populate ontology data from web sources, and interest in large-scale ontology reasoning is increasing. However, data obtained from the web carries uncertainty, which can lead to speculative reasoning results, so the reliability of such data must be taken into account. Large-scale ontology reasoning methods based on confidence (trust) values are therefore required, because purely qualitative ontology reasoning cannot capture this reliability. In this study, we propose a large-scale OWL Horst reasoning method based on confidence values using Spark, a distributed in-memory framework. We describe a method for integrating the confidence values of duplicated data. In addition, we present a distributed parallel heuristic algorithm that addresses the performance degradation of the inference process. To evaluate the performance of the confidence-based reasoning method, experiments were conducted using LUBM3000. The results showed that our approach performs reasoning twice as fast as existing reasoning systems such as WebPIE.
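To picture the confidence handling, the PySpark sketch below merges duplicated triples by keeping the highest confidence and gives each derived triple the minimum confidence of its antecedents. The max-merge and min-propagation choices are assumptions for illustration, not the paper's exact integration formula.

```python
# Sketch: carry a confidence value per triple, merge duplicates, and propagate
# confidence through a rule as the minimum of the antecedent confidences.
from pyspark import SparkContext

sc = SparkContext("local[*]", "confidence-owl-horst-sketch")

# ((s, p, o), confidence) pairs; one triple is duplicated from two web sources.
triples = sc.parallelize([
    (("alice", "worksFor", "acme"), 0.9),
    (("alice", "worksFor", "acme"), 0.6),
    (("worksFor", "rdfs:domain", "Employee"), 1.0),
])

# Integrate duplicates: here simply keep the highest confidence per triple.
merged = triples.reduceByKey(max)

# OWL Horst / RDFS flavor rule: (p rdfs:domain C) ^ (s p o) -> (s rdf:type C),
# carrying min(conf_schema, conf_instance) as the derived confidence.
domains = dict(merged.filter(lambda kv: kv[0][1] == "rdfs:domain")
                     .map(lambda kv: (kv[0][0], (kv[0][2], kv[1]))).collect())

def derive(kv):
    (s, p, o), c = kv
    if p in domains:
        cls, c_schema = domains[p]
        yield ((s, "rdf:type", cls), min(c, c_schema))

print(merged.flatMap(derive).reduceByKey(max).collect())
# [(('alice', 'rdf:type', 'Employee'), 0.9)]
```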

SWAT: A Study on the Efficient Integration of SWRL and ATMS based on a Distributed In-Memory System (SWAT: 분산 인-메모리 시스템 기반 SWRL과 ATMS의 효율적 결합 연구)

  • Jeon, Myung-Joong;Lee, Wan-Gon;Jagvaral, Batselem;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE / v.45 no.2 / pp.113-125 / 2018
  • Recently, with the advent of the Big Data era, we have gained the capability of acquiring vast amounts of knowledge from various fields. The collected knowledge is expressed as well-formed formulas; in particular, OWL, the standard ontology language, is a typical such form. Symbolic reasoning over large amounts of ontology data is being actively studied for extracting implicit information. However, most studies of such reasoning support only restricted rule expressions based on Description Logic, which limits their applicability to the real world. Moreover, knowledge management for inaccurate information is required, since knowledge inferred from wrong information will in turn generate more incorrect information through the dependencies between inference rules. Therefore, this paper proposes SWAT, a knowledge management system that combines SWRL (Semantic Web Rule Language) reasoning with an ATMS (Assumption-based Truth Maintenance System). The system couples SWRL reasoning and the ATMS for managing large ontology data on a distributed in-memory framework. On top of this, an ATMS monitoring tool allows users to easily detect and correct wrong knowledge. We used the LUBM (Lehigh University Benchmark) dataset to evaluate the suggested method, which manages knowledge by retracting wrong SWRL inference results over large data.
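The dependency bookkeeping that makes retraction possible can be sketched with a small in-memory justification store. This pure-Python toy only illustrates the ATMS idea (justifications linking antecedents to consequents, and cascading retraction); it is not the SWAT system, and all facts shown are made up.

```python
# Toy justification store: each derived fact remembers which facts justified it,
# so retracting a wrong fact also retracts everything that depended only on it.
from collections import defaultdict

class JustificationStore:
    def __init__(self):
        self.facts = set()
        self.justified_by = defaultdict(list)   # fact -> list of antecedent sets
        self.supports = defaultdict(set)        # fact -> facts it helps justify

    def assert_fact(self, fact, antecedents=()):
        self.facts.add(fact)
        if antecedents:
            self.justified_by[fact].append(set(antecedents))
            for a in antecedents:
                self.supports[a].add(fact)

    def retract(self, fact):
        if fact not in self.facts:
            return
        self.facts.discard(fact)
        for dependent in list(self.supports.pop(fact, ())):
            # Drop justifications that used the retracted fact; if none remain,
            # the dependent loses its support and is retracted too.
            remaining = [j for j in self.justified_by[dependent] if fact not in j]
            self.justified_by[dependent] = remaining
            if not remaining:
                self.retract(dependent)

store = JustificationStore()
store.assert_fact(("alice", "worksFor", "acme"))                      # base fact
store.assert_fact(("alice", "type", "Employee"),
                  antecedents=[("alice", "worksFor", "acme")])         # SWRL-derived
store.retract(("alice", "worksFor", "acme"))
print(("alice", "type", "Employee") in store.facts)                   # False
```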

Word Recognition Using Multi-section Equi-segmentation and Fuzzy Inference (다구간 등분할법과 퍼지추론을 이용한 단어인식)

  • 최승호;최갑석
    • The Journal of the Acoustical Society of Korea / v.12 no.4 / pp.47-56 / 1993
  • This paper proposes a pattern matching method that performs word recognition using multi-section equi-segmentation and fuzzy inference. Temporal variation arising during pattern matching is handled by dividing the utterance into equally spaced sections in order of articulation, and frequency variation is absorbed by performing fuzzy inference between patterns using fuzzy relations defined for each section order. The center and spread of the triangular membership functions used for inference are set to correspond to the mean and variance of the pattern, respectively. In recognition experiments on 28 DDD area names spoken by two males in their twenties, varying the number of sections and the spread, the proposed method achieved 92% recognition with 8 sections and a spread factor of 4.
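The triangular membership construction described above (center from a section's mean, width tied to its spread, with a tunable spread factor) can be illustrated with the small sketch below; the feature values, the width scaling, and the averaging score are illustrative assumptions rather than the paper's exact matching procedure.

```python
# Sketch: triangular membership per section, centered on the reference pattern's
# mean with a width tied to its spread, scored against a test utterance.
def triangular_membership(x, center, half_width):
    """Degree in [0, 1]; 1 at the center, falling to 0 at center +/- half_width."""
    if half_width <= 0:
        return 1.0 if x == center else 0.0
    return max(0.0, 1.0 - abs(x - center) / half_width)

def section_score(test_value, ref_mean, ref_std, width_factor=4.0):
    # width_factor plays the role of the spread multiple varied in the experiments.
    return triangular_membership(test_value, ref_mean, width_factor * ref_std)

def word_score(test_sections, ref_sections, width_factor=4.0):
    """Average membership over the equal-length section sequences of one word."""
    scores = [section_score(x, m, s, width_factor)
              for x, (m, s) in zip(test_sections, ref_sections)]
    return sum(scores) / len(scores)

# Reference word: per-section (mean, std) of some spectral feature; test utterance below.
reference = [(1.0, 0.2), (2.5, 0.3), (1.8, 0.1)]
test = [1.1, 2.4, 2.0]
print(round(word_score(test, reference), 3))   # ~0.764
```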


An Activity Recognition Algorithm using a Distributed Inference based on the Hidden Markov Model in Wireless Sensor Networks (WSN 환경에서 은닉 마코프 모델 기반의 분산추론 기법을 적용한 행위인지 알고리즘)

  • Kim, Hong-Sop;Han, Man-Hyung;Yim, Geo-Su
    • Proceedings of the Korean Society of Computer Information Conference / 2009.01a / pp.231-236 / 2009
  • This study presents a distributed model for recognizing human activities of daily living (ADL) that occur in everyday spaces such as homes and offices. A user's environmental, location, and activity information is collected and analyzed over a wireless sensor network through household appliances, furniture, and tableware equipped with simple sensors. However, the use of such diverse devices, together with data that have not been sufficiently analyzed, makes it difficult to build a high-level ADL model for the everyday environments considered in this paper. Nevertheless, the sensor data generated by ADLs, and the order in which those data arrive, help identify which activity is taking place. We therefore use these sensor data sequences to analyze specific activity patterns and, based on this, propose a distributed linear-time inference algorithm. The algorithm is well suited to recognizing activities on small-scale systems such as sensor networks.
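The linear-time HMM inference underlying this kind of approach can be illustrated by the standard forward (filtering) algorithm below; the two activities, the sensor observations, and all probabilities are made-up values, and the distributed deployment across sensor nodes is left out of the sketch.

```python
# Sketch: forward-algorithm filtering for a tiny activity HMM. Runtime is linear
# in the number of sensor events, which is what suits small WSN nodes.
states = ["cooking", "washing_dishes"]
start = {"cooking": 0.6, "washing_dishes": 0.4}
trans = {"cooking": {"cooking": 0.7, "washing_dishes": 0.3},
         "washing_dishes": {"cooking": 0.2, "washing_dishes": 0.8}}
emit = {"cooking": {"stove_on": 0.6, "cupboard": 0.3, "sink_tap": 0.1},
        "washing_dishes": {"stove_on": 0.05, "cupboard": 0.25, "sink_tap": 0.7}}

def forward(observations):
    """Return P(state | observations so far), updated one sensor event at a time."""
    belief = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        belief = {s: emit[s][obs] * sum(belief[r] * trans[r][s] for r in states)
                  for s in states}
        total = sum(belief.values())                 # normalize to avoid underflow
        belief = {s: v / total for s, v in belief.items()}
    return belief

events = ["stove_on", "cupboard", "sink_tap", "sink_tap"]
belief = forward(events)
print(max(belief, key=belief.get))   # most likely current activity
```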


SSQUSAR : A Large-Scale Qualitative Spatial Reasoner Using Apache Spark SQL (SSQUSAR : Apache Spark SQL을 이용한 대용량 정성 공간 추론기)

  • Kim, Jonghoon;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering / v.6 no.2 / pp.103-116 / 2017
  • In this paper, we present the design and implementation of a large-scale qualitative spatial reasoner that can efficiently derive new qualitative spatial knowledge representing both topological and directional relationships between two arbitrary spatial objects using Apache Spark SQL. Apache Spark SQL is well known as a distributed parallel programming environment that provides both efficient join operations and query processing over a variety of data on Hadoop cluster computer systems. In our spatial reasoner, the overall reasoning process is divided into six jobs: knowledge encoding, inverse reasoning, equal reasoning, transitive reasoning, relation refining, and knowledge decoding; the execution order of the reasoning jobs is determined in consideration of both logical causal relationships and computational efficiency. The knowledge encoding job reduces the size of the knowledge base to reason over by transforming the input knowledge from XML/RDF form into a more compact internal form. Repeated execution of the transitive reasoning and relation refining jobs usually consumes most of the computation time and storage of the overall reasoning process. To improve these jobs, our reasoner finds the minimal disjunctive relations for qualitative spatial reasoning and, based on them, not only reduces the composition table used in the transitive reasoning job but also optimizes the relation refining job. Through experiments using a large-scale benchmark spatial knowledge base, the proposed reasoner showed high performance and scalability.
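The join-heavy jobs (for instance, one transitive reasoning pass) map naturally onto Spark SQL. Below is a minimal PySpark SQL sketch under assumed table and column names, with a toy composition table standing in for the reduced table the paper derives.

```python
# Sketch: one transitive-reasoning pass expressed as Spark SQL joins over a
# relation table and a (toy) composition table.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")
         .appName("qsr-sparksql-sketch")
         .getOrCreate())

relations = spark.createDataFrame(
    [("a", "NTPP", "b"), ("b", "NTPP", "c")], ["subj", "rel", "obj"])
composition = spark.createDataFrame(
    [("NTPP", "NTPP", "NTPP")], ["rel1", "rel2", "result"])  # compose(r1, r2) -> result

relations.createOrReplaceTempView("relations")
composition.createOrReplaceTempView("composition")

derived = spark.sql("""
    SELECT DISTINCT l.subj AS subj, c.result AS rel, r.obj AS obj
    FROM relations l
    JOIN relations r ON l.obj = r.subj            -- pairs sharing a middle object
    JOIN composition c ON l.rel = c.rel1 AND r.rel = c.rel2
""")
derived.show()   # expect one row: (a, NTPP, c)
```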