• Title/Summary/Keyword: Distributed Inference

Search results: 78

Scalable RDFS Reasoning Using the Graph Structure of In-Memory based Parallel Computing (인메모리 기반 병렬 컴퓨팅 그래프 구조를 이용한 대용량 RDFS 추론)

  • Jeon, MyungJoong; So, ChiSeoung; Jagvaral, Batselem; Kim, KangPil; Kim, Jin; Hong, JinYoung; Park, YoungTack
    • Journal of KIISE / v.42 no.8 / pp.998-1009 / 2015
  • In recent years, there has been growing interest in RDFS inference for building rich knowledge bases. However, it is difficult to achieve good inference performance over large data on a single machine, so researchers have been developing RDFS inference engines for distributed computing environments. Existing engines, however, cannot process data in real time, are difficult to implement, and handle repetitive (iterative) tasks poorly. To overcome these problems, we propose a method for constructing an in-memory distributed inference engine that uses a parallel graph structure. An ontology based on the triple model naturally forms a graph, so it is intuitive to design a graph structure-based inference engine; moreover, the RDFS inference rules can be implemented with the operators of the graph structure, which lets us design the engine around the graph rather than around data tables. We evaluate the inference speed of the proposed engine on the LUBM1000 and LUBM3000 datasets. The results of our experiment indicate that the proposed in-memory distributed inference engine performs about 10 times faster than a storage-based inference engine.
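
The rule layer this abstract describes can be pictured with a minimal, framework-free sketch. The fixpoint below applies two representative RDFS rules (rdfs11 subclass transitivity and rdfs9 type propagation) to a toy triple set; the paper executes such rules as parallel graph operators on an in-memory cluster, so the names and data here are illustrative only.

```python
# Minimal sketch of rule-based RDFS reasoning over triples: rdfs11
# (transitivity of rdfs:subClassOf) and rdfs9 (type propagation).
# The paper applies such rules with parallel graph operators on an
# in-memory cluster; this single-machine fixpoint only shows the logic.

RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def rdfs_closure(triples):
    """Apply rdfs9/rdfs11 until no new triples appear (fixpoint)."""
    closed = set(triples)
    while True:
        sub = {(s, o) for s, p, o in closed if p == SUBCLASS}
        typ = {(s, o) for s, p, o in closed if p == RDF_TYPE}
        new = {(a, SUBCLASS, c) for a, b in sub for b2, c in sub if b == b2}
        new |= {(x, RDF_TYPE, d) for x, c in typ for c2, d in sub if c == c2}
        if new <= closed:
            return closed
        closed |= new

triples = {("alice", RDF_TYPE, "GraduateStudent"),
           ("GraduateStudent", SUBCLASS, "Student"),
           ("Student", SUBCLASS, "Person")}
print(sorted(rdfs_closure(triples)))  # infers alice's Student/Person types
```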

AN INTERPOLATIVE FUZZY INFERENCE METHOD AND ITS APPLICATION

  • SHIMAKAWA, Manabu; MURAKAMI, Shuta
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.556-561 / 1998
  • This paper deals with our proposed fuzzy inference method, in which the fuzzy relation is represented directly by the membership functions of the antecedent and consequent parts, so no fuzzy composition is used. The strong point of this method is that the membership function of an inferred conclusion has a simple shape, and its meaning can thus be interpreted easily. First, the proposed method is explained; it is then applied to the fuzzy modeling of distributed data.
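
The abstract does not give the method's formulas, so the following is only a plausible sketch of composition-free interpolative inference: the conclusion is interpolated from the rules' consequent centers, weighted by how well a crisp input matches each triangular antecedent membership function. All functions and numbers are assumptions for illustration.

```python
# A plausible sketch (not the paper's exact formulation) of composition-
# free interpolative fuzzy inference: the crisp conclusion is interpolated
# from the rules' consequent centers, weighted by the degree to which the
# input matches each triangular antecedent membership function.

def tri(x, a, b, c):
    """Triangular membership rising on [a, b], falling on [b, c]."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 1.0 if x == b else 0.0

# Hypothetical rules: (antecedent triangle, consequent center).
rules = [((0.0, 2.0, 4.0), 10.0),
         ((2.0, 4.0, 6.0), 20.0),
         ((4.0, 6.0, 8.0), 30.0)]

def infer(x):
    """Interpolate a conclusion from the matching degrees of all rules."""
    weights = [tri(x, *ant) for ant, _ in rules]
    if sum(weights) == 0:
        raise ValueError("input matches no rule")
    return sum(w * c for w, (_, c) in zip(weights, rules)) / sum(weights)

print(infer(3.0))  # halfway between rules 1 and 2 -> 15.0
```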


Parallel Fuzzy Inference Method for Large Volumes of Satellite Images

  • Lee, Sang-Gu
    • International Journal of Fuzzy Logic and Intelligent Systems / v.1 no.1 / pp.119-124 / 2001
  • In pattern recognition over large volumes of remote-sensing satellite images, the inference time increases greatly. For the remote-sensing data [5], which has 4 wavebands, 778 training patterns are learned, and each land-cover pattern is classified using 159,900 patterns, including the trained patterns. For the fuzzy classification, 778 fuzzy rules are generated, each with 4 fuzzy variables in its condition part. A high-performance parallel fuzzy inference system is therefore needed. In this paper, we propose a novel parallel fuzzy inference system on the T3E parallel computer, in which fuzzy rules are distributed and executed simultaneously. The ONE_TO_ALL algorithm is used to broadcast the fuzzy input to all nodes, and the results of the MIN/MAX operations are transferred to the output processor by the ALL_TO_ONE algorithm. By processing the fuzzy rules in parallel, the algorithm exploits the parallelism of the matching phase and achieves a good speedup. The system can be used in large expert systems that have many inference variables in the condition and consequent parts.
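
The broadcast/compute/reduce pattern the abstract names can be sketched with mpi4py standing in for the Cray T3E's message-passing primitives (a stand-in, not the paper's code). Each rank evaluates its share of rules with a MIN over the condition part, and rank 0 gathers the global MAX; the rule base below is a toy assumption.

```python
# Sketch of the ONE_TO_ALL / ALL_TO_ONE pattern with mpi4py as a stand-in
# for the T3E primitives. Run with: mpiexec -n 4 python this_file.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# ONE_TO_ALL: rank 0 broadcasts the fuzzy input vector to every node.
fuzzy_input = comm.bcast([0.8, 0.4, 0.9, 0.6] if rank == 0 else None, root=0)

# Toy rule base: each rule has one membership function per fuzzy variable;
# rules are dealt round-robin across the ranks.
rules = [[lambda v, s=s: max(0.0, 1.0 - abs(v - s)) for _ in range(4)]
         for s in (0.2, 0.4, 0.6, 0.8)]
my_rules = rules[rank::size]

# MIN over the condition part of each local rule, then a local MAX.
local_max = max((min(mf(v) for mf, v in zip(rule, fuzzy_input))
                 for rule in my_rules), default=0.0)

# ALL_TO_ONE: MAX-reduce all local results to the output processor.
result = comm.reduce(local_max, op=MPI.MAX, root=0)
if rank == 0:
    print("aggregated inference degree:", result)
```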


Communication Failure Resilient Improvement of Distributed Neural Network Partitioning and Inference Accuracy (통신 실패에 강인한 분산 뉴럴 네트워크 분할 및 추론 정확도 개선 기법)

  • Jeong, Jonghun; Yang, Hoeseok
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.1 / pp.9-15 / 2021
  • Recently, it has become increasingly necessary to run high-end neural network applications, with their huge computational overhead, on resource-constrained embedded systems such as wearable devices. While this overhead can be alleviated by distributed neural networks running on multiple separate devices, existing distributed neural network techniques suffer from heavy traffic between the devices and are thus very vulnerable to communication failures. These drawbacks make them inapplicable to wearable devices, which are connected to each other through unstable, low-data-rate communication media such as human body communication. Therefore, in this paper, we propose a distributed neural network partitioning technique that is resilient to communication failures. Furthermore, we show that the proposed technique also improves inference accuracy even when no communication failure occurs, thanks to the improved network partitioning. We verify through comparative experiments with a real-life neural network application that the proposed technique outperforms the existing state-of-the-art distributed neural network technique in terms of accuracy and resilience to communication failures.
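
The paper's partitioning algorithm is not reproduced in the abstract; the numpy toy below only illustrates the resilience idea it implies. A layer's neurons are split across two devices, and the head device substitutes neutral (zero) activations when the other device's share is lost to a communication failure, so inference degrades instead of breaking. All shapes and weights are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): a hidden layer is
# partitioned across two devices; on a failed transfer, the head device
# zero-fills the missing activations and still finishes inference.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))        # one hidden layer with 8 units
W_head, W_remote = W[:4], W[4:]        # partitioned over two devices
V = rng.standard_normal((3, 8))        # classifier on the head device

def remote_part(x, link_up):
    """Remote device's share; returns None if the transfer fails."""
    return np.maximum(W_remote @ x, 0.0) if link_up else None

def infer(x, link_up=True):
    h_local = np.maximum(W_head @ x, 0.0)
    h_remote = remote_part(x, link_up)
    if h_remote is None:               # communication failure:
        h_remote = np.zeros(4)         # substitute neutral activations
    return int(np.argmax(V @ np.concatenate([h_local, h_remote])))

x = rng.standard_normal(4)
print(infer(x, link_up=True), infer(x, link_up=False))
```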

A Case-Based Reasoning Approach to Ontology Inference Engine Selection for Robust Context-Aware Services (상황인식 서비스의 안정적 운영을 위한 온톨로지 추론 엔진 선택을 위한 사례기반추론 접근법)

  • Shim, Jae-Moon; Kwon, Oh-Byung
    • Journal of the Korean Operations Research and Management Science Society / v.33 no.2 / pp.27-44 / 2008
  • OWL-based ontologies are useful for realizing context-aware services composed of distributed, self-configuring modules. Many ontology-based inference engines have been developed to infer useful information from ontologies. Since these engines differ in speed and in the richness of the information they return, it is difficult to ensure stable operation when providing dynamic context-aware services, especially over complex, large ontologies. To provide the best inference service, this paper proposes a novel methodology for selecting an inference engine in a contextually prompt manner. Case-based reasoning is applied to identify the causality between the context and the inference engine to be selected. Finally, a series of experiments with a novel evaluation methodology shows to what extent the methodology works better than competitive methods on an actual context-aware service.
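
The abstract's core mechanism, retrieving the stored case whose context is most similar and reusing its engine choice, can be sketched in a few lines; the context features, engine names, and normalization below are hypothetical.

```python
# Hedged sketch of case-based engine selection: match a new context to
# the nearest past case and reuse the engine that served it best.
import math

# Hypothetical cases:
# (triple_count, max_latency_s, expressivity_level) -> best engine.
cases = [((5e4, 2.0, 1), "EngineA"),
         ((5e6, 30.0, 3), "EngineB"),
         ((1e6, 5.0, 2), "EngineC")]

def normalize(f):
    """Bring the heterogeneous features onto comparable scales."""
    return (math.log10(f[0]), f[1] / 30.0, f[2] / 3.0)

def select_engine(context):
    """Retrieve the most similar case and reuse its engine choice."""
    q = normalize(context)
    return min(cases, key=lambda c: math.dist(q, normalize(c[0])))[1]

print(select_engine((2e6, 10.0, 3)))
```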

An elastic distributed parallel Hadoop system for bigdata platform and distributed inference engines (동적 분산병렬 하둡시스템 및 분산추론기에 응용한 서버가상화 빅데이터 플랫폼)

  • Song, Dong Ho; Shin, Ji Ae; In, Yean Jin; Lee, Wan Gon; Lee, Kang Se
    • Journal of the Korean Data and Information Science Society / v.26 no.5 / pp.1129-1139 / 2015
  • The inference process generates additional triples from knowledge represented as RDF triples in semantic web technology. Tens of millions of triples as the initial big data, together with the additionally inferred triples, become a knowledge base for applications such as QA (question answering) systems. The inference engine requires more computing resources to process the triples generated while inferencing, and the additional computing resources supplied by an underlying resource pool in cloud computing can shorten the execution time. This paper addresses an algorithm that allocates the number of computing nodes elastically at runtime on Hadoop, depending on the size of the knowledge data fed in. The proposed model has a layered architecture: a top layer for applications, a middle layer for the distributed parallel inference engine that processes the triples, and a lower layer for elastic Hadoop and server virtualization. System algorithms and test data are analyzed and discussed. The model has the benefit that rich legacy Hadoop applications can run faster on this system without any modification.
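
The elasticity policy itself is not specified in the abstract; below is a minimal sketch of one consistent reading, sizing the node count from the volume of triples fed in and clamping it to the resource pool's bounds. All constants are assumptions.

```python
# Sketch of a runtime elasticity policy for the Hadoop layer: lease as
# many nodes as the triple volume demands, within the pool's limits.
import math

def nodes_needed(triple_count, triples_per_node=10_000_000,
                 min_nodes=2, max_nodes=32):
    """Return how many compute nodes to allocate for this run."""
    wanted = math.ceil(triple_count / triples_per_node)
    return max(min_nodes, min(max_nodes, wanted))

for n in (5e6, 8e7, 1e9):
    print(f"{int(n):>13,} triples -> {nodes_needed(n)} nodes")
```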

Scalable Ontology Reasoning Using GPU Cluster Approach (GPU 클러스터 기반 대용량 온톨로지 추론)

  • Hong, JinYung; Jeon, MyungJoong; Park, YoungTack
    • Journal of KIISE / v.43 no.1 / pp.61-70 / 2016
  • In recent years, there has been a need for large-scale ontology inference techniques that infer new knowledge from existing knowledge at high speed and support a diversity of semantic services. With recent advances in distributed computing, ontology inference engines have mostly been developed on Hadoop or Spark frameworks running on large clusters. Parallel programming with GPGPU, which offers many more cores than a CPU, is also used for ontology inference. In this paper, combining the advantages of both techniques, we propose a new method for reasoning over large RDFS ontology data using the Spark in-memory framework and inferring over the distributed data at high speed using GPGPU. With GPGPU, ontology reasoning over high-volume data can be performed at low cost and with higher efficiency than conventional inference methods. In addition, we show that GPGPU can reduce the data workload on each node of the Spark cluster. To evaluate our approach, we used LUBM datasets ranging from LUBM10 to LUBM120. Our experimental results show that the proposed reasoning engine performs about 7 times faster than a conventional approach that uses a Spark in-memory inference engine.
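
As an illustration of why this workload suits a GPU (not the paper's actual kernels): an RDFS rule such as subclass transitivity can be recast as bulk boolean matrix products, exactly the kind of dense operation a Spark partition could hand off to GPGPU. The numpy sketch below stands in for such a kernel.

```python
# GPU-friendly reformulation (illustrative): encode rdfs:subClassOf as a
# boolean adjacency matrix and reach its transitive closure by iterating
# matrix products, a bulk operation well suited to GPU offload.
import numpy as np

classes = ["GraduateStudent", "Student", "Person"]
A = np.zeros((len(classes), len(classes)), dtype=bool)
A[0, 1] = A[1, 2] = True   # GraduateStudent < Student < Person

closure = A.copy()
while True:                # iterate closure | closure@closure to fixpoint
    step = (closure.astype(np.int8) @ closure.astype(np.int8)) > 0
    nxt = closure | step
    if (nxt == closure).all():
        break
    closure = nxt

for i, j in zip(*np.nonzero(closure)):
    print(classes[i], "rdfs:subClassOf", classes[j])
```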

Fuzzy Inference of Large Volumes in Parallel Computing Environment (병렬컴퓨팅 환경에서의 대용량 퍼지 추론)

  • Kim, Jin-Il; Park, Chan-Ryang; Lee, Dong-Cheol; Lee, Sang-Gu
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2000.05a / pp.13-16 / 2000
  • In fuzzy expert systems or database systems that hold huge volumes of fuzzy data or large fuzzy rule bases, the inference time increases greatly, so a high-performance parallel fuzzy computing environment is needed. In this paper, we propose a parallel fuzzy inference mechanism for a parallel computing environment, in which fuzzy rules are distributed and executed simultaneously. The ONE_TO_ALL algorithm is used to broadcast the fuzzy input vector to all nodes, and the results of the MIN/MAX operations are transferred to the output processor by the ALL_TO_ONE algorithm. By processing fuzzy rules and data in parallel, the parallel fuzzy inference algorithm extracts effective parallelism and achieves a good speedup.
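
Complementing the mpi4py communication sketch under the satellite-image entry above, the node-local arithmetic behind the MIN/MAX steps is simply MIN within a rule's condition part and MAX across the node's rules; the degrees below are made-up inputs.

```python
# Node-local MIN/MAX step of parallel fuzzy inference: MIN over each
# rule's antecedent membership degrees, MAX over this node's rules,
# before the ALL_TO_ONE reduction ships the value to the output node.

def rule_strength(memberships):
    """MIN over the degrees of one rule's condition part."""
    return min(memberships)

def node_output(rule_membership_lists):
    """MAX over the firing strengths of this node's rules."""
    return max(map(rule_strength, rule_membership_lists), default=0.0)

# Degrees to which one fuzzy input matches two local rules' conditions:
print(node_output([[0.7, 0.9, 0.4], [0.8, 0.6, 0.5]]))  # -> 0.5
```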


Performance analysis of local exit for distributed deep neural networks over cloud and edge computing

  • Lee, Changsik; Hong, Seungwoo; Hong, Sungback; Kim, Taeyeon
    • ETRI Journal / v.42 no.5 / pp.658-668 / 2020
  • In edge computing, most procedures, including data collection, data processing, and service provision, are handled at edge nodes rather than in the central cloud. This decreases the processing burden on the central cloud, enabling fast responses to end-device service requests and reducing bandwidth consumption. However, edge nodes have restricted computing, storage, and energy resources with which to support computation-intensive tasks such as deep neural network (DNN) inference. In this study, we analyze the effect of models with single and multiple local exits on DNN inference in an edge-computing environment. Our test results show that a single-exit model performs better than a multi-exit model at all exit points with respect to the number of locally exited samples, inference accuracy, and inference latency. These results signify that higher accuracy can be achieved with less computation when a single-exit model is adopted. In edge-computing infrastructure, it is therefore more efficient to adopt a DNN model with only one or a few exit points to provide a fast and reliable inference service.
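
A minimal sketch of the single-exit inference the study favors: run the backbone to the (single) local exit, and fall through to the full network only when the exit's softmax confidence is below a threshold. The stage functions, heads, and threshold below are all hypothetical.

```python
# Single local-exit inference: exit early on a confident prediction,
# otherwise continue through the remaining stages to the final head.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def infer(x, stages, exit_head, final_head, threshold=0.5):
    """stages: feature blocks; exit_head sits after stages[0]."""
    h = stages[0](x)
    p = softmax(exit_head(h))
    if p.max() >= threshold:           # confident: exit locally
        return int(p.argmax()), "local exit"
    for stage in stages[1:]:           # otherwise run the full network
        h = stage(h)
    return int(softmax(final_head(h)).argmax()), "full network"

rng = np.random.default_rng(1)
stages = [lambda v: np.tanh(v), lambda v: np.tanh(v * 2)]
exit_head = lambda h: h[:3]            # toy 3-class heads
final_head = lambda h: h[:3] * 3
print(infer(rng.standard_normal(8), stages, exit_head, final_head))
```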

A Cooperative Spectrum Sensing Scheme Using Fuzzy Logic for Cognitive Radio Networks

  • Thuc, Kieu-Xuan; Koo, In-Soo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.3 / pp.289-304 / 2010
  • This paper proposes a novel scheme for cooperative spectrum sensing in distributed cognitive radio networks. A fuzzy-logic rule-based inference system is proposed to estimate the possibility that the licensed user's signal is present, based on the energy observed at each cognitive radio terminal. The estimated results are aggregated to make the final sensing decision at the fusion center. Simulation results show that the proposed scheme significantly improves spectrum sensing accuracy.
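
The membership functions, rule base, and fusion operator are not given in the abstract, so the sketch below is only one plausible shape of the scheme: each terminal fuzzifies its observed energy into a presence possibility, and the fusion center averages and thresholds the reports.

```python
# Hedged sketch of fuzzy cooperative spectrum sensing (membership ramp,
# thresholds, and fusion rule are assumptions, not the paper's): each CR
# terminal maps its observed energy to a presence possibility, and the
# fusion center aggregates the possibilities into a final decision.

def presence_possibility(energy, low=1.0, high=3.0):
    """Map observed energy to [0, 1] with a simple ramp membership."""
    return min(1.0, max(0.0, (energy - low) / (high - low)))

def fusion_decision(energies, threshold=0.5):
    """Fusion center: average the terminals' possibilities, then decide."""
    possibilities = [presence_possibility(e) for e in energies]
    score = sum(possibilities) / len(possibilities)
    return ("licensed user present" if score >= threshold
            else "spectrum free"), round(score, 3)

print(fusion_decision([2.6, 2.9, 1.4, 3.2]))  # reports from 4 terminals
```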