• Title/Summary/Keyword: OWL-S


A development of an ontology model and an ontology based design system for the excavator design (굴삭기 설계 영역에 대한 온톨로지 모델 및 온톨로지 기반 설계 시스템 개발)

  • Bae I.J.;Lee S.H.;Jeon C.M.;Chang J.H.
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2006.05a
    • /
    • pp.613-614
    • /
    • 2006
  • Design data, information, and knowledge have complex associations with each other. The systems that manage such data, information, and knowledge are varied, and their representations are numerous, so it is difficult to construct a knowledge-based design system that captures the full association knowledge needed to support design tasks. In this research, an OWL-based ontology model for excavator design is developed to represent these relationships, and the ontology model is then used to build a knowledge-based excavator design system. An illustrative ontology sketch follows this entry.

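As a rough illustration of what an OWL ontology for a design domain like this can look like, here is a minimal sketch built with rdflib in Python. The class and property names (Excavator, hasComponent, maxDiggingDepth, and so on) are invented for the example and are not the vocabulary used in the paper.

```python
# A hypothetical OWL fragment for an excavator design ontology, built with rdflib.
# Class and property names are illustrative; the paper's actual vocabulary is not shown here.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.org/excavator#")
g = Graph()
g.bind("ex", EX)

# Classes for design artifacts
for cls in (EX.Excavator, EX.Boom, EX.Arm, EX.Bucket, EX.DesignRequirement):
    g.add((cls, RDF.type, OWL.Class))

# An object property linking an excavator to one of its components
g.add((EX.hasComponent, RDF.type, OWL.ObjectProperty))
g.add((EX.hasComponent, RDFS.domain, EX.Excavator))
g.add((EX.hasComponent, RDFS.range, EX.Boom))

# A datatype property for a design parameter
g.add((EX.maxDiggingDepth, RDF.type, OWL.DatatypeProperty))

# One individual tying the pieces together
g.add((EX.Boom_A, RDF.type, EX.Boom))
g.add((EX.EX200, RDF.type, EX.Excavator))
g.add((EX.EX200, EX.hasComponent, EX.Boom_A))
g.add((EX.EX200, EX.maxDiggingDepth, Literal(6.7)))

print(g.serialize(format="turtle"))
```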

Constraints based Semantic Web Services Discovery for an Intelligent Agent (지능형 에이전트를 위한 제약조건 기반의 시맨틱 웹 서비스 검색)

  • Namgoong Hyun;Jung Seungwoo;Kim Hyung-il;Chung Moonyoung;Cho HyeonSung
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.11b
    • /
    • pp.505-507
    • /
    • 2005
  • A semantic Web service discovery system retrieves, in response to a user's request, the semantic Web service descriptions supplied by providers and returns them to the user. This retrieval is performed through semantic comparison: the inputs, outputs, preconditions, and effects (IOPE) of Web services described with a standard such as OWL-S [4] are matched against those of the user's request. Such retrieval, however, is sometimes unsuitable for an intelligent agent that serves users by invoking Web services. Because the agent can convert inputs before executing a Web service and select from the returned outputs, it does not require an exact match on the form of the IOPE. This paper describes a constraint-based semantic Web service discovery system that provides more flexible retrieval for intelligent agents. A minimal matching sketch follows this entry.

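The contrast between strict and relaxed IOPE matching can be sketched in a few lines of Python. The profile structure and the specific relaxation rule below are assumptions made for illustration; they are not the constraint model of the cited system.

```python
# A hypothetical sketch of relaxed, constraint-based IOPE matching for service discovery.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    inputs: set
    outputs: set
    preconditions: set = field(default_factory=set)
    effects: set = field(default_factory=set)

def exact_match(service, request):
    """Strict IOPE equality, as a plain discovery engine might require."""
    return (service.inputs == request.inputs and service.outputs == request.outputs
            and service.preconditions == request.preconditions
            and service.effects == request.effects)

def relaxed_match(service, request, agent_supplies):
    """Relaxed matching for an agent that can convert inputs and filter outputs:
    the service must need only inputs the agent can supply, and must produce at
    least the outputs (and effects) the request asks for."""
    return (service.inputs <= agent_supplies
            and request.outputs <= service.outputs
            and request.effects <= service.effects)

services = [
    ServiceProfile("WeatherByCity", {"City"}, {"Temperature", "Humidity"}),
    ServiceProfile("WeatherByCoords", {"Latitude", "Longitude"}, {"Temperature"}),
]
request = ServiceProfile("query", set(), {"Temperature"})
agent_supplies = {"City"}

print([s.name for s in services if exact_match(s, request)])                     # [] - too strict
print([s.name for s in services if relaxed_match(s, request, agent_supplies)])   # ['WeatherByCity']
```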

Lifting a Metadata Model to the Semantic Multimedia World

  • Martens, Gaetan;Verborgh, Ruben;Poppe, Chris;Van De Walle, Rik
    • Journal of Information Processing Systems
    • /
    • v.7 no.1
    • /
    • pp.199-208
    • /
    • 2011
  • This paper describes best practices for lifting an image metadata standard to the Semantic Web. We provide guidelines on how an XML-based metadata format can be converted into an OWL ontology and discuss how this ontology can be mapped to the W3C's Media Ontology, a W3C standardization effort to provide a core vocabulary for multimedia annotations. The approach presented here can be applied to other XML-based metadata standards. A minimal lifting sketch follows this entry.
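
A minimal sketch of the lifting idea, assuming a toy XML photo-metadata fragment and an invented target vocabulary; the actual metadata standard and the W3C Media Ontology mapping discussed in the paper are not reproduced here.

```python
# A hypothetical sketch of "lifting" XML metadata into RDF/OWL with rdflib.
import xml.etree.ElementTree as ET
from rdflib import Graph, Namespace, RDF, Literal, URIRef

xml_metadata = """
<photo id="img42">
  <title>Harbour at dusk</title>
  <creator>J. Doe</creator>
  <keyword>harbour</keyword>
  <keyword>dusk</keyword>
</photo>
"""

META = Namespace("http://example.org/photo-meta#")
g = Graph()
g.bind("meta", META)

root = ET.fromstring(xml_metadata)
subject = URIRef("http://example.org/images/" + root.attrib["id"])
g.add((subject, RDF.type, META.Photo))

# Map simple XML elements to datatype properties of the (invented) ontology
for tag, prop in (("title", META.title), ("creator", META.creator), ("keyword", META.keyword)):
    for elem in root.findall(tag):
        g.add((subject, prop, Literal(elem.text)))

print(g.serialize(format="turtle"))
```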

A Study of Automatic Web Ontology for Comparison-Shopping Agent in e-Business (전자상거래에서 비교구매 에이전트를 위한 웹 온토롤지에 대한 연구)

  • Kim Su-Kyoung;Kim Young-Geun;Ahn Kee-Hong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2004.11a
    • /
    • pp.547-550
    • /
    • 2004
  • Existing e-commerce platforms and content are built on presentation-oriented HTML, which was not designed with data extension or integration in mind, and each e-commerce site uses a different classification scheme for product information. Buyers therefore waste a great deal of time comparing and searching for products, and because the technology makes information sharing difficult, the needs of neither sellers nor buyers are met. In this paper, we use RDF/RDFS, the Semantic Web technology attracting attention as a next-generation Web technology, to generate RDF documents from the product information published by heterogeneous shops, build a product knowledge ontology in OWL, and design and propose a system that compares and searches the products of those heterogeneous shops in real time by analyzing and matching the ontology against the RDF documents. An illustrative comparison sketch follows this entry.

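Once products from different shops are expressed against a shared RDF/OWL vocabulary, comparison becomes a query over one graph. The sketch below uses rdflib and an invented shop vocabulary; it illustrates the idea, not the paper's system.

```python
# A hypothetical sketch of comparing products from different shops once they share one vocabulary.
from rdflib import Graph, Namespace, RDF, Literal

SHOP = Namespace("http://example.org/shop#")
g = Graph()
g.bind("shop", SHOP)

# Products from two heterogeneous sites, lifted to one shared class
for uri, label, price in ((SHOP.itemA, "DSLR Camera X100", 799.0),
                          (SHOP.itemB, "Camera X100 DSLR body", 749.0)):
    g.add((uri, RDF.type, SHOP.DigitalCamera))
    g.add((uri, SHOP.label, Literal(label)))
    g.add((uri, SHOP.price, Literal(price)))

# Compare offers of the same ontology class across shops, cheapest first
query = """
SELECT ?item ?label ?price WHERE {
  ?item a shop:DigitalCamera ;
        shop:label ?label ;
        shop:price ?price .
} ORDER BY ?price
"""
for row in g.query(query, initNs={"shop": SHOP}):
    print(row.item, row.label, row.price)
```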

A Study on Method for Extraction of Emotion in Newspaper (신문기사의 감정추출 방법에 관한 연구)

  • Baek, Sun-Kyoung;Kim, Pan-Koo
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2005.07b
    • /
    • pp.562-564
    • /
    • 2005
  • User queries in information retrieval are broadening from objective keywords to vocabularies that carry the emotions people subjectively think and feel. For emotion-based retrieval of newspaper articles, this paper extracts verbs from an article through syntactic analysis and part-of-speech tagging, and then extracts the article's emotion by exploiting the relations among the verbs that convey emotion. To represent the relations among emotion verbs, we build an ontology of them using OWL/RDF(S) and propose an edge-based similarity measure. Because the proposed method can extract several emotions and measure their intensity, it should be useful for emotion-based newspaper article retrieval in the future. An illustrative similarity sketch follows this entry.

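An edge-based similarity measure over a concept hierarchy can be illustrated with a tiny toy taxonomy and a shortest-path computation. The emotion hierarchy and the 1/(1 + distance) formula below are assumptions for the example, not the ontology or measure defined in the paper.

```python
# A hypothetical edge-based similarity sketch over a tiny emotion hierarchy.
from collections import deque

# parent -> children edges of a toy emotion hierarchy
edges = {
    "emotion": ["joy", "sadness", "anger"],
    "joy": ["delight", "relief"],
    "sadness": ["grief", "disappointment"],
    "anger": ["rage"],
}

def neighbours(node):
    nbrs = list(edges.get(node, []))
    nbrs += [parent for parent, kids in edges.items() if node in kids]
    return nbrs

def path_length(a, b):
    """Shortest number of edges between two concepts (BFS over the hierarchy)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for n in neighbours(node):
            if n not in seen:
                seen.add(n)
                queue.append((n, dist + 1))
    return None

def edge_similarity(a, b):
    d = path_length(a, b)
    return 0.0 if d is None else 1.0 / (1.0 + d)

print(edge_similarity("delight", "relief"))   # siblings, 2 edges apart -> 0.333...
print(edge_similarity("delight", "grief"))    # 4 edges apart -> 0.2
```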

A Study on a Dynamic Web Ontology System for a Semantic Web-based Comparison-Shopping Agent (시멘틱 웹 기반의 비교구매 에이전트를 위한 동적 웹 온톨로지 시스템에 대한 연구)

  • Kim, Su-Gyeong;An, Gi-Hong
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2005.05a
    • /
    • pp.306-315
    • /
    • 2005
  • Existing e-commerce platforms and content are built on presentation-oriented HTML, which was not designed with data extension or integration in mind, and each site uses a different classification scheme for product information, so buyers waste a great deal of time comparing and searching for products. Efficient information sharing among e-commerce sites is therefore needed; because the current technology makes such sharing difficult, the diverse needs of sellers and buyers are not being met. In this paper, we use RDF/RDFS, the Semantic Web technology attracting attention as a next-generation Web technology: wrapper techniques extract only the necessary product information from existing shops, and RDF triples and documents are generated from the extracted data. After designing an ontology for the product information, we build a product knowledge ontology using the Web Ontology Language (OWL) and design and propose a Web ontology system that compares and searches the products presented by heterogeneous shops in real time by analyzing and matching them against the RDF triples and documents, dynamically generating the product knowledge ontology in the process. A sketch of the extraction step follows this entry.

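The wrapper-then-triples step can be sketched as follows: a toy HTML fragment is scraped for product fields, which are then emitted as RDF triples with rdflib. The HTML layout, regular expression, and vocabulary are invented for the example and do not reflect the paper's wrapper rules.

```python
# A hypothetical wrapper sketch: scrape product fields from toy HTML and emit RDF triples.
import re
from rdflib import Graph, Namespace, RDF, Literal

html = """
<div class="item"><span class="name">MP3 Player Z</span><span class="price">89000</span></div>
<div class="item"><span class="name">USB Memory 32GB</span><span class="price">15000</span></div>
"""

PROD = Namespace("http://example.org/product#")
g = Graph()
g.bind("prod", PROD)

pattern = re.compile(r'class="name">([^<]+)</span><span class="price">(\d+)')
for i, (name, price) in enumerate(pattern.findall(html)):
    item = PROD[f"item{i}"]
    g.add((item, RDF.type, PROD.Product))
    g.add((item, PROD.name, Literal(name)))
    g.add((item, PROD.price, Literal(int(price))))

print(g.serialize(format="turtle"))
```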

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia Pacific Journal of Information Systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying an OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers from the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes. We use simple similarity algorithms for text retrieval, such as TF-IDF and Levenshtein edit distance, and utilize a tree edit distance measure because semantic processes have a graph-like structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions; since we can identify relationships between a semantic process and its subcomponents, this information can be used when calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance show better performance than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show a greater coefficient than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. We generate semantic process data and a retrieval dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with other datasets from other domains, and since there are many similarity values from diverse measures, we may find better ways to identify relevant processes by applying these values simultaneously. A small sketch of a few of the text- and set-based similarity measures follows this entry.
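
A few of the text- and set-based measures the study compares (Levenshtein edit distance, Jaccard, Dice) are easy to sketch in Python. The toy processes and the equal weighting used to combine the scores are assumptions for illustration, not the paper's Lev-TFIDF-JaccardAll configuration.

```python
# Hypothetical similarity measures over process names and subcomponent sets.
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def name_similarity(a, b):
    return 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)

def jaccard(s, t):
    return len(s & t) / len(s | t) if s | t else 1.0

def dice(s, t):
    return 2 * len(s & t) / (len(s) + len(t)) if s or t else 1.0

# A target process and a mutated variant: similar name, overlapping subparts
target = {"name": "Approve purchase order", "parts": {"check budget", "notify vendor", "sign order"}}
variant = {"name": "Approve purchase request", "parts": {"check budget", "sign order"}}

# Equal weighting of name and structure similarity (an illustrative choice)
score = 0.5 * name_similarity(target["name"], variant["name"]) \
      + 0.5 * jaccard(target["parts"], variant["parts"])
print(round(score, 3))
```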

A SOA-based Dynamic Service Composition Framework using Web Services and OpenAPIs (웹 서비스와 OpenAPI를 사용한 SOA 기반 동적 서비스 합성 프레임워크)

  • Kim, Jin-Han;Lee, Byung-Jeong
    • Journal of KIISE: Software and Applications
    • /
    • v.36 no.3
    • /
    • pp.187-199
    • /
    • 2009
  • With the advent of Web 2.0, OpenAPIs have become an increasingly popular way to treat the Web as a platform; they are combined by mashup to create new services. However, because no standard documents exist for OpenAPIs, their use can be restricted. Previous studies of OpenAPI mashups have been limited to tool design or language definitions for service combination rather than dynamic composition. Web services, on the other hand, are a software technology implementing SOA that provides standard documents such as WSDL to describe each service, UDDI to register it, and SOAP to transfer messages, so Web applications can interpret and execute services using these technologies. Recent work has also aimed to provide semantic features and dynamic composition for SOA. If a dynamic and systematic approach to combining Web services and OpenAPIs is provided, Web applications can offer users diverse services. In this study, we present an SOA-based framework for mashing up OpenAPIs and Web services. The framework supports dynamic composition of OpenAPIs and Web services, where the process of a composite service is described in OWL-S. A prototype is provided to validate the framework, which is expected to add diversity to typical Web services. A minimal composition sketch follows this entry.
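
Dynamic composition by chaining service outputs to the inputs of other services can be sketched with a small catalogue and a greedy planner. The catalogue and strategy below are invented for illustration; the paper describes its composite processes in OWL-S rather than in this form.

```python
# A hypothetical sketch of dynamic service composition by chaining outputs to inputs.
def compose(services, available, goal):
    """Greedily pick services whose inputs are already available until the goal
    output can be produced; return the ordered plan or None."""
    plan, available = [], set(available)
    changed = True
    while goal not in available and changed:
        changed = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= available:
                plan.append(name)
                available |= outputs
                changed = True
                break
    return plan if goal in available else None

# name -> (required inputs, produced outputs); a mix of a Web service and OpenAPIs
services = {
    "GeocodeAPI":     ({"address"}, {"lat", "lon"}),
    "WeatherService": ({"lat", "lon"}, {"forecast"}),
    "ChartAPI":       ({"forecast"}, {"chart_url"}),
}

print(compose(services, available={"address"}, goal="chart_url"))
# -> ['GeocodeAPI', 'WeatherService', 'ChartAPI']
```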

SWAT: A Study on the Efficient Integration of SWRL and ATMS based on a Distributed In-Memory System (SWAT: 분산 인-메모리 시스템 기반 SWRL과 ATMS의 효율적 결합 연구)

  • Jeon, Myung-Joong;Lee, Wan-Gon;Jagvaral, Batselem;Park, Hyun-Kyu;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.45 no.2
    • /
    • pp.113-125
    • /
    • 2018
  • With the advent of the Big Data era, vast amounts of knowledge can be acquired from various fields. The collected knowledge is expressed as well-formed formulas, and OWL, the standard ontology language, is a typical such form. Symbolic reasoning over large amounts of ontology data is being actively studied to extract the intrinsic information it contains. However, most such reasoners support only restricted rule expressions based on Description Logic and therefore have limited applicability to the real world. Moreover, knowledge management for inaccurate information is required, since knowledge inferred from wrong information will in turn generate more incorrect information through the dependencies between inference rules. This paper therefore proposes SWAT, a knowledge management system that combines SWRL (Semantic Web Rule Language) reasoning with an ATMS (Assumption-based Truth Maintenance System). The system combines SWRL reasoning and the ATMS on a distributed in-memory framework to manage large ontology data, and an ATMS monitoring component allows users to easily detect and correct wrong knowledge. We evaluated the proposed method, which manages knowledge by retracting wrong SWRL inference results over large data, using the LUBM (Lehigh University Benchmark) dataset. A minimal justification-and-retraction sketch follows this entry.
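
The core idea, recording which asserted facts justify each rule-derived fact so that retracting a wrong assertion also withdraws everything that depends on it, can be sketched on a single machine as follows. The rule and data are invented for the example; the paper's system runs SWRL reasoning with an ATMS on a distributed in-memory framework.

```python
# A hypothetical, single-machine sketch of justification recording and retraction.
asserted = {
    ("tom", "hasParent", "bob"),
    ("bob", "hasBrother", "sam"),
}
derived, justifications = set(), {}   # derived triple -> asserted triples supporting it

def rule_uncle(facts):
    """SWRL-style rule: hasParent(x, y) ^ hasBrother(y, z) -> hasUncle(x, z)."""
    for (x, p, y) in facts:
        if p != "hasParent":
            continue
        for (y2, q, z) in facts:
            if q == "hasBrother" and y2 == y:
                yield (x, "hasUncle", z), {(x, p, y), (y2, q, z)}

def run_rules():
    for fact, support in rule_uncle(asserted):
        derived.add(fact)
        justifications.setdefault(fact, set()).update(support)

def retract(triple):
    """Remove an asserted triple and every derived fact whose justification uses it."""
    asserted.discard(triple)
    for fact in [f for f, support in justifications.items() if triple in support]:
        derived.discard(fact)
        del justifications[fact]

run_rules()
print(derived)                        # {('tom', 'hasUncle', 'sam')}
retract(("bob", "hasBrother", "sam"))
print(derived)                        # set() - the dependent inference is withdrawn
```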

Distributed Assumption-Based Truth Maintenance System for Scalable Reasoning (대용량 추론을 위한 분산환경에서의 가정기반진리관리시스템)

  • Jagvaral, Batselem;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.43 no.10
    • /
    • pp.1115-1123
    • /
    • 2016
  • An assumption-based truth maintenance system (ATMS) is a tool that maintains the reasoning process of an inference engine and supports non-monotonic reasoning based on dependency-directed backtracking. By bookkeeping all reasoning steps, it can quickly check and retract beliefs and efficiently provide solutions to problems with large search spaces. However, the amount of data has grown exponentially in recent years, making it impossible to solve large-scale problems on a single machine, and the maintenance process for such problems incurs high computational cost due to a large memory overhead. To overcome this drawback, this paper presents an approach to incrementally maintaining the reasoning process of an inference engine on a cluster using Spark. It maintains data dependencies such as assumptions, labels, environments, and justifications in parallel on a cluster of machines and efficiently updates changes over large amounts of inferred data. We deployed the proposed ATMS on a cluster of 5 machines, conducted OWL/RDFS reasoning over the Lehigh University Benchmark data (LUBM), and evaluated the system in terms of its performance and functionality, including assertion, explanation, and retraction. In our experiments, the proposed system performed these operations in a reasonably short time on an inferred LUBM2000 dataset of over 80 GB. A minimal label-computation sketch follows this entry.
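
A central ATMS operation is computing a node's label: the minimal, consistent sets of assumptions under which it holds. The single-machine sketch below illustrates that step with invented data; the paper performs this maintenance in parallel on a Spark cluster.

```python
# A hypothetical sketch of ATMS label computation for a derived node.
def combine(label_a, label_b, nogoods):
    """Union every pair of environments, then keep only consistent, minimal ones."""
    candidates = {ea | eb for ea in label_a for eb in label_b}
    candidates = {e for e in candidates if not any(ng <= e for ng in nogoods)}
    return {e for e in candidates
            if not any(other < e for other in candidates)}   # drop subsumed environments

# Assumptions are single letters; labels are sets of frozenset environments
label_p = {frozenset("A"), frozenset("B")}      # p holds under {A} or {B}
label_q = {frozenset("C")}                      # q holds under {C}
nogoods = [frozenset({"B", "C"})]               # {B, C} is known to be contradictory

# Justification: p ^ q -> r
label_r = combine(label_p, label_q, nogoods)
print(label_r)   # only the environment {A, C} survives
```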