Title/Summary/Keyword: Inference Engines

Scalable Ontology Reasoning Using GPU Cluster Approach (GPU 클러스터 기반 대용량 온톨로지 추론)

  • Hong, JinYung; Jeon, MyungJoong; Park, YoungTack
    • Journal of KIISE / v.43 no.1 / pp.61-70 / 2016
  • In recent years, there has been a growing need for large-scale ontology inference techniques that derive new knowledge from existing knowledge at high speed and support a diversity of semantic services. With the recent advances in distributed computing, ontology inference engines have mostly been developed on Hadoop or Spark frameworks running on large clusters. Parallel programming with GPGPU, which provides many more cores than a CPU, is also used for ontology inference. In this paper, by combining the advantages of both techniques, we propose a new method that reasons over large RDFS ontology data using the Spark in-memory framework and performs inference on the distributed data at high speed using GPGPU. With GPGPU, ontology reasoning over high-capacity data can be performed at low cost and with higher efficiency than conventional inference methods. In addition, we show that distributing the data through the Spark cluster reduces the workload on each GPGPU node. To evaluate our approach, we used LUBM benchmark datasets ranging from LUBM10 to LUBM120. Our experimental results show that the proposed reasoning engine performs 7 times faster than a conventional approach using a Spark in-memory inference engine.
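
To make the distributed reasoning step concrete, the sketch below applies one RDFS entailment rule (rdfs9, class-membership propagation along rdfs:subClassOf) as a Spark join in Java. This is a minimal CPU-only illustration over assumed example triples; the paper's GPGPU kernels and its full RDFS rule set are not described in the abstract.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class Rdfs9Sketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("rdfs9-sketch").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // (instance, class) pairs taken from rdf:type triples
            JavaPairRDD<String, String> types = sc.parallelizePairs(Arrays.asList(
                    new Tuple2<>("ex:student1", "ex:GraduateStudent")));
            // (subclass, superclass) pairs taken from rdfs:subClassOf triples
            JavaPairRDD<String, String> subClassOf = sc.parallelizePairs(Arrays.asList(
                    new Tuple2<>("ex:GraduateStudent", "ex:Student")));

            // rdfs9: (?x rdf:type ?c1), (?c1 rdfs:subClassOf ?c2) -> (?x rdf:type ?c2)
            JavaPairRDD<String, String> inferred = types
                    .mapToPair(t -> new Tuple2<>(t._2(), t._1())) // re-key by the class ?c1
                    .join(subClassOf)                             // (?c1, (?x, ?c2))
                    .mapToPair(j -> new Tuple2<>(j._2()._1(), j._2()._2()))
                    .subtract(types)                              // keep newly derived triples only
                    .distinct();

            inferred.collect().forEach(t ->
                    System.out.println(t._1() + " rdf:type " + t._2()));
        }
    }
}
```

Joining on the class term is the standard way to express this rule over distributed triples; per the abstract, the paper's contribution is accelerating this kind of distributed rule application with GPGPU.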

The Research for Ontology Repository Management (온톨로지 저장소 관리에 관한 연구)

  • Lee D.H.; Yang J.J.
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.10a / pp.124-127 / 2005
  • The increased use of ontologies for knowledge sharing is evident in many applications where knowledge applicability plays a critical role. This trend demands an infrastructure that allows management tools such as ontology editors, storage systems, integration tools, and inference engines to use ontologies more easily, moving toward comprehensive ontology-based solutions. We call such an infrastructure an ontology repository. This paper presents the design of an ontology repository for scalable ontology data.


Heterogeneous Lifelog Mining Model in Health Big-data Platform (헬스 빅데이터 플랫폼에서 이기종 라이프로그 마이닝 모델)

  • Kang, JI-Soo; Chung, Kyungyong
    • Journal of the Korea Convergence Society / v.9 no.10 / pp.75-80 / 2018
  • In this paper, we propose a heterogeneous lifelog mining model for a health big-data platform. It is an ontology-based mining model that collects users' lifelogs in real time and provides healthcare services. The proposed method distributes heterogeneous lifelog data and processes them in real time in a cloud computing environment. The knowledge base is reconstructed with an upper-ontology method suited to an environment built from heterogeneous ontologies. Inference rules are generated over the restructured knowledge base using the Jena 4.0 inference engine, and real-time healthcare services are provided through rule-based inference. Lifelog mining uncovers hidden relationships and builds a predictive model for time-series bio-signals. This enables real-time preventive healthcare services that detect changes in users' bio-signals by exploring negative or positive correlations not captured by the existing relationships or inference rules. The performance evaluation shows that the proposed heterogeneous lifelog mining model is superior to other models, with an accuracy of 0.734 and a precision of 0.752.
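
Since the abstract names the Jena inference engine for rule-based inference, the following minimal Java sketch shows the general Jena pattern of parsing a rule and wrapping data in an inference model. The namespace, the heartRate/status properties, and the threshold rule are hypothetical; the paper's actual rule set is not given in the abstract.

```java
import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;

public class LifelogRuleSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/lifelog#"; // hypothetical namespace
        Model data = ModelFactory.createDefaultModel();
        Resource user = data.createResource(ns + "user1");
        Property heartRate = data.createProperty(ns + "heartRate");
        data.addLiteral(user, heartRate, 135); // hypothetical bio-signal reading

        // Hypothetical rule: a heart rate above 120 bpm raises an alert status.
        String rule = "[tachycardia: (?u <" + ns + "heartRate> ?r) greaterThan(?r, 120)"
                    + " -> (?u <" + ns + "status> <" + ns + "Alert>)]";
        GenericRuleReasoner reasoner = new GenericRuleReasoner(Rule.parseRules(rule));
        InfModel inf = ModelFactory.createInfModel(reasoner, data);

        // List the status triples derived by the rule for this user.
        StmtIterator it = inf.listStatements(user, data.createProperty(ns + "status"), (RDFNode) null);
        while (it.hasNext()) {
            System.out.println(it.nextStatement());
        }
    }
}
```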

Index for Efficient Ontology Retrieval and Inference (효율적인 온톨로지 검색과 추론을 위한 인덱스)

  • Song, Seungjae; Kim, Insung; Chun, Jonghoon
    • The Journal of Society for e-Business Studies / v.18 no.2 / pp.153-173 / 2013
  • Ontologies have been gaining increasing interest with the recent rise of the semantic web and related technologies. The focus is mostly on inference query processing, which requires sophisticated techniques for storing and searching ontologies efficiently and has been actively studied in the area of semantic search. The W3C's recommendation is to use RDFS and OWL for representing ontologies. However, memory-based editors, inference engines, and triple stores all store an ontology as a simple set of triples. Naturally, performance is limited, especially when a large-scale ontology needs to be processed. A variety of studies have proposed algorithms for efficient inference query processing, many of them based on proven relational database technology. However, none of them has succeeded in obtaining the complete set of inference results reflecting the five characteristics of ontology properties. In this paper, we propose a new index structure, called the hypercube index, to process inference queries efficiently. Our approach is based on the intuition that an index can speed up query processing when extensive inference is required.
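
The abstract does not describe the hypercube index layout itself, so the sketch below illustrates only the underlying intuition with a plain subject-keyed lookup map: inference issues many repeated lookups (here, a transitive rdfs:subClassOf walk), and an index turns each one into a hash probe instead of a scan over the full triple set. All names are illustrative and this is not the paper's structure.

```java
import java.util.*;

// Minimal in-memory triple index sketch (not the paper's hypercube index):
// a lookup map keyed by subject, so repeated lookups during inference
// avoid scanning the whole triple set.
public class TripleIndexSketch {
    record Triple(String s, String p, String o) {}

    private final Map<String, List<Triple>> bySubject = new HashMap<>();

    void add(String s, String p, String o) {
        bySubject.computeIfAbsent(s, k -> new ArrayList<>()).add(new Triple(s, p, o));
    }

    // Transitive closure over one property (e.g. rdfs:subClassOf): the kind
    // of repeated indexed lookup that inference queries depend on.
    Set<String> ancestors(String cls, String property) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>(List.of(cls));
        while (!stack.isEmpty()) {
            for (Triple t : bySubject.getOrDefault(stack.pop(), List.of())) {
                if (t.p().equals(property) && seen.add(t.o())) {
                    stack.push(t.o());
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        TripleIndexSketch idx = new TripleIndexSketch();
        idx.add("ex:GraduateStudent", "rdfs:subClassOf", "ex:Student");
        idx.add("ex:Student", "rdfs:subClassOf", "ex:Person");
        System.out.println(idx.ancestors("ex:GraduateStudent", "rdfs:subClassOf"));
        // -> [ex:Student, ex:Person]
    }
}
```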

Intelligent Healthcare Service Provisioning Using Ontology with Low-Level Sensory Data

  • Khattak, Asad Masood; Pervez, Zeeshan; Lee, Sung-Young; Lee, Young-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.5 no.11 / pp.2016-2034 / 2011
  • Ubiquitous Healthcare (u-Healthcare) is the intelligent delivery of healthcare services to users anytime and anywhere. To provide robust healthcare services, recognition of patients' daily life activities is required. Context information combined with users' real-time daily life activities can help provide more personalized services, service suggestions, and changes in system behavior based on the user profile for better healthcare services. In this paper, we focus on the intelligent manipulation of activities using the Context-aware Activity Manipulation Engine (CAME), the core of the Human Activity Recognition Engine (HARE). Activities are recognized using video-based, wearable-sensor-based, and location-based activity recognition engines. Ontology-based fusion of activities with subject profile information is performed for a personalized system response. CAME receives low-level activities in real time, infers higher-level activities, performs situation analysis, suggests personalized services, and makes appropriate decisions. A two-phase filtering technique is applied for intelligent processing of information (represented in an ontology) and for making appropriate decisions based on rules (incorporating expert knowledge). The experimental results for intelligent processing of activity information showed comparatively good accuracy. Moreover, CAME was extended with activity filters and T-Box inference, which resulted in better accuracy and response time compared with the initial results.
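
The abstract describes the two-phase filtering only at a high level, so the following Java sketch is a hypothetical reading of it: phase one screens incoming low-level activities, and phase two maps the survivors to decisions through expert-style rules. The thresholds, activity names, and rules are all invented for illustration.

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical two-phase filtering sketch; CAME's actual API and rules
// are not given in the abstract, so every name here is illustrative.
public class TwoPhaseFilterSketch {
    record Activity(String subject, String name, double confidence) {}

    // Phase 1: discard low-confidence or profile-irrelevant activities.
    static List<Activity> phase1(List<Activity> in, Predicate<Activity> relevantToProfile) {
        return in.stream()
                 .filter(a -> a.confidence() >= 0.5) // assumed threshold
                 .filter(relevantToProfile)
                 .toList();
    }

    // Phase 2: map surviving activities to decisions via expert-style rules.
    static String phase2(Activity a) {
        return switch (a.name()) {
            case "fall"     -> "notify caregiver";    // illustrative rule
            case "exercise" -> "log workout session"; // illustrative rule
            default         -> "no action";
        };
    }

    public static void main(String[] args) {
        var activities = List.of(new Activity("patient1", "fall", 0.9),
                                 new Activity("patient1", "noise", 0.2));
        for (Activity a : phase1(activities, x -> true)) {
            System.out.println(a.name() + " -> " + phase2(a));
        }
    }
}
```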

Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok; Yoo, Seungmok; Yoon, Seokjin; Lee, Kyunghee; Cho, Changsik
    • ETRI Journal / v.41 no.6 / pp.760-770 / 2019
  • Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, and there is a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among the various deep learning frameworks. The proposed technique standardizes the graph expression grammar and the training data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. The NNEF parser converts neural network information into a parse tree and quantizes the data. To validate the proposed system, we verified that MNIST runs immediately after importing AlexNet's neural network and learned data. This study therefore contributes an efficient design technique for a converter that can execute a neural network and learned data in various frameworks, regardless of each framework's storage format.
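
As a flavor of the converter's front end, the sketch below tokenizes a toy network-description line. It is not the NNEF grammar, only an illustration of the lexical-analysis stage that precedes parse-tree construction in the pipeline the abstract describes; the syntax and token kinds are invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy lexer sketch for a made-up network-description line such as
//   conv1 = conv(input, filter=64);
// Not the NNEF grammar: it only illustrates the lexical-analysis stage.
public class ToyLexer {
    enum Kind { IDENT, NUMBER, SYMBOL }
    record Token(Kind kind, String text) {}

    private static final Pattern TOKEN = Pattern.compile(
        "\\s*(?:(?<ident>[A-Za-z_]\\w*)|(?<number>\\d+(?:\\.\\d+)?)|(?<symbol>[=(),;]))");

    static List<Token> lex(String src) {
        List<Token> out = new ArrayList<>();
        Matcher m = TOKEN.matcher(src);
        int pos = 0;
        while (pos < src.length() && m.find(pos) && m.start() == pos) {
            if (m.group("ident") != null)       out.add(new Token(Kind.IDENT,  m.group("ident")));
            else if (m.group("number") != null) out.add(new Token(Kind.NUMBER, m.group("number")));
            else                                out.add(new Token(Kind.SYMBOL, m.group("symbol")));
            pos = m.end(); // advance past the token (and any leading whitespace)
        }
        return out;
    }

    public static void main(String[] args) {
        lex("conv1 = conv(input, filter=64);").forEach(System.out::println);
    }
}
```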

Trends of Compiler Development for AI Processor (인공지능 프로세서 컴파일러 개발 동향)

  • Kim, J.K.; Kim, H.J.; Cho, Y.C.P.; Kim, H.M.; Lyuh, C.G.; Han, J.; Kwon, Y.
    • Electronics and Telecommunications Trends / v.36 no.2 / pp.32-42 / 2021
  • The rapid growth of deep-learning applications has spurred the R&D of artificial intelligence (AI) processors. A dedicated software framework, such as a compiler and runtime APIs, is required to achieve maximum processor performance. There are various compilers and frameworks for AI training and inference. In this study, we present the features and characteristics of AI compilers, training frameworks, and inference engines. In addition, we focus on the internals of compiler frameworks, which are based on either basic linear algebra subprograms or an intermediate representation. For an in-depth insight, we present the compiler infrastructure, internal components, and operation flow of ETRI's "AI-Ware." The software framework's significant role is evidenced by the optimized neural processing unit code produced by the compiler after various optimization passes, such as scheduling, architecture-aware optimization, schedule selection, and power optimization. We conclude the study with thoughts about the future of state-of-the-art AI compilers.
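
The abstract lists a sequence of optimization passes, so the sketch below shows a generic pass-pipeline shape in Java, where each pass rewrites an intermediate representation in order. It is not AI-Ware's actual API; the pass names follow the abstract, and everything else (the IR type, pass bodies) is illustrative.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Generic compiler pass-pipeline sketch (not AI-Ware's internals): each
// pass transforms an intermediate representation and hands it to the next.
public class PassPipelineSketch {
    record IR(String description) {} // stand-in for a real intermediate representation

    interface Pass extends UnaryOperator<IR> { String name(); }

    static Pass pass(String name, UnaryOperator<IR> body) {
        return new Pass() {
            public String name() { return name; }
            public IR apply(IR ir) { return body.apply(ir); }
        };
    }

    public static void main(String[] args) {
        // Pass names taken from the abstract; bodies are placeholders.
        List<Pass> pipeline = List.of(
            pass("scheduling",         ir -> new IR(ir.description() + " +scheduled")),
            pass("arch-aware-opt",     ir -> new IR(ir.description() + " +arch-tuned")),
            pass("schedule-selection", ir -> new IR(ir.description() + " +schedule-picked")),
            pass("power-optimization", ir -> new IR(ir.description() + " +power-opt")));

        IR ir = new IR("graph");
        for (Pass p : pipeline) {
            ir = p.apply(ir);
            System.out.println("after " + p.name() + ": " + ir.description());
        }
    }
}
```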

An elastic distributed parallel Hadoop system for bigdata platform and distributed inference engines (동적 분산병렬 하둡시스템 및 분산추론기에 응용한 서버가상화 빅데이터 플랫폼)

  • Song, Dong Ho; Shin, Ji Ae; In, Yean Jin; Lee, Wan Gon; Lee, Kang Se
    • Journal of the Korean Data and Information Science Society / v.26 no.5 / pp.1129-1139 / 2015
  • The inference process generates additional triples from knowledge represented as RDF triples in semantic web technology. Tens of millions of initial triples, together with the additionally inferred triples, become a knowledge base for applications such as QA (question and answer) systems. The inference engine requires more computing resources to process the triples generated during inference, and the additional computing resources supplied by the underlying resource pool in cloud computing can shorten the execution time. This paper presents an algorithm for allocating the number of computing nodes "elastically" at runtime on Hadoop, depending on the size of the knowledge data fed in. The model proposed in this paper is composed of a layered architecture: the top layer for applications, the middle layer for the distributed parallel inference engine that processes the triples, and the lower layer for elastic Hadoop and server virtualization. System algorithms and test data are analyzed and discussed in this paper. The model has the benefit that rich legacy Hadoop applications can run faster on this system without any modification.
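
The allocation algorithm itself is not published in the abstract, so the sketch below only illustrates the shape of an elastic sizing rule: derive a node count from the size of the input knowledge base and clamp it to the resource pool's bounds. The per-node capacity and pool limits are invented constants, not the paper's values.

```java
// Hypothetical elastic sizing rule: map triple count to a Hadoop node
// count, clamped to assumed resource-pool bounds.
public class ElasticAllocatorSketch {
    static final long TRIPLES_PER_NODE = 10_000_000L; // assumed node capacity
    static final int MIN_NODES = 2, MAX_NODES = 32;   // assumed pool bounds

    static int nodesFor(long tripleCount) {
        // Ceiling division, then clamp to the pool's limits.
        long needed = (tripleCount + TRIPLES_PER_NODE - 1) / TRIPLES_PER_NODE;
        return (int) Math.max(MIN_NODES, Math.min(MAX_NODES, needed));
    }

    public static void main(String[] args) {
        for (long n : new long[]{5_000_000L, 40_000_000L, 500_000_000L}) {
            System.out.printf("%,d triples -> %d nodes%n", n, nodesFor(n));
        }
    }
}
```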

Framework Design for Wine Knowledge-based Semantic Web Services (시맨틱 웹 기반 와인 지식 검색을 위한 웹 서비스 설계)

  • Jeon Hyun-Joo; Youn Ho-Chang; Choi Gwang-Ung
    • Proceedings of the Korea Contents Association Conference / 2005.05a / pp.237-243 / 2005
  • As well-being and quality of life have become common concerns, many people are interested in wine and are willing to share wine knowledge with other wine experts on the web. Therefore, studies of information retrieval systems and inference engines are needed to obtain relevant search results about wine types and about wines suitable for given foods. This paper discusses an approach to the architecture of agent-based semantic web services built on a wine ontology.


An Efficient Web Ontology Storage Considering Hierarchical Knowledge for Jena-based Applications

  • Jeong, Dong-Won; Shin, Hee-Young; Baik, Doo-Kwon; Jeong, Young-Sik
    • Journal of Information Processing Systems / v.5 no.1 / pp.11-18 / 2009
  • As well as providing various APIs for the development of inference engines and storage models, Jena is widely used in the development of systems and tools related to Web ontology management. However, Jena still has several problems with regard to the development of real applications, one of the most important being that its query processing performance is unacceptable. This paper proposes a storage model to improve the query processing performance of the original Jena storage. The proposed storage model semantically classifies OWL elements and stores an ontology in separate tables according to that classification. In particular, hierarchical knowledge is managed explicitly, which enhances the processing performance of queries that require inference. An experimental evaluation was conducted, the results of which show that the proposed storage model provides improved performance compared with the original Jena storage.
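
The abstract says OWL elements are semantically classified into separate tables but does not give the schema, so the following sketch shows one hypothetical classification by predicate, with hierarchy-bearing predicates routed to dedicated tables. The table names and categories are assumptions, not the paper's schema.

```java
import java.util.Map;

// Hypothetical predicate-to-table routing: hierarchy-bearing predicates get
// dedicated tables so inference-heavy queries touch only the rows they need.
public class OwlTableClassifierSketch {
    static final Map<String, String> PREDICATE_TO_TABLE = Map.of(
        "rdfs:subClassOf",     "class_hierarchy",    // hierarchical knowledge
        "rdfs:subPropertyOf",  "property_hierarchy",
        "rdf:type",            "instance_of",
        "owl:equivalentClass", "class_equivalence");

    static String tableFor(String predicate) {
        return PREDICATE_TO_TABLE.getOrDefault(predicate, "general_triples");
    }

    public static void main(String[] args) {
        System.out.println(tableFor("rdfs:subClassOf")); // -> class_hierarchy
        System.out.println(tableFor("ex:hasAuthor"));    // -> general_triples
    }
}
```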