• Title/Abstract/Keyword: Semantic Computing

Search results: 254 items (processing time: 0.031 seconds)

An Analysis of Existing Studies on Parallel and Distributed Processing of the Rete Algorithm (Rete 알고리즘의 병렬 및 분산 처리에 관한 기존 연구 분석)

  • Kim, Jaehoon
    • The Journal of Korean Institute of Information Technology / Vol. 17, No. 7 / pp. 31-45 / 2019
  • The core technologies for intelligent services today are deep learning (that is, neural networks) and parallel and distributed processing technologies such as GPU parallel computing and big data. However, for future intelligent services and knowledge sharing services through globally shared ontologies, there is a technology better suited than neural networks for representing and reasoning over knowledge. It is the IF-THEN knowledge representation of RIF and SWRL, the standard rule languages of the Semantic Web, over which inference can be performed efficiently using the Rete algorithm. However, when the Rete algorithm running on a single computer processes 100,000 rules, its performance degrades severely, taking several tens of minutes, which is an obvious limitation. Therefore, in this paper, we analyze past and current studies on parallel and distributed processing of the Rete algorithm and examine what aspects should be considered to implement an efficient Rete algorithm.
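
The IF-THEN matching that the abstract describes can be illustrated with a minimal sketch of the Rete idea: alpha memories cache which facts satisfy each condition, so a condition shared by many rules is evaluated against working memory only once. The rule and fact formats below are hypothetical, and the beta (join) network is simplified away; real Rete also joins variable bindings across conditions.

```python
# Alpha-memory sketch of Rete-style rule matching. A condition is an
# (attribute, value) pair and a fact is a dict; both formats are
# illustrative, not RIF/SWRL syntax.

def matches(fact, cond):
    attr, value = cond
    return fact.get(attr) == value

def fire_rules(rules, facts):
    # Alpha memories: condition -> facts satisfying it, computed once and
    # shared across all rules that use the same condition.
    memories = {}
    for rule in rules:
        for cond in rule["if"]:
            if cond not in memories:
                memories[cond] = [f for f in facts if matches(f, cond)]
    # Simplified beta step: a rule fires when every condition has a match.
    return [rule["then"] for rule in rules
            if all(memories[cond] for cond in rule["if"])]

facts = [{"type": "person", "age": "adult"}, {"type": "ticket"}]
rules = [
    {"if": [("type", "person"), ("age", "adult")], "then": "offer-adult-fare"},
    {"if": [("type", "ticket")], "then": "validate-ticket"},
]
fired = fire_rules(rules, facts)
```

The caching is what makes Rete attractive for large rule sets, and it is also what the surveyed parallel and distributed variants must partition across machines.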

A Study on the Evaluation Differences of Korean and Chinese Users in Smart Home App Services through Text Mining based on the Two-Factor Theory: Focus on Trustness (이요인 이론 기반 텍스트 마이닝을 통한 한·중 스마트홈 앱 서비스 사용자 평가 차이에 대한 연구: 신뢰성 중심)

  • Yuning Zhao;Gyoo Gun Lim
    • Journal of Information Technology Services / Vol. 22, No. 3 / pp. 141-165 / 2023
  • With the advent of the Fourth Industrial Revolution, technologies such as the Internet of Things, artificial intelligence, and cloud computing are developing rapidly, and smart homes enabled by these technologies are quickly gaining popularity. To gain a competitive advantage in the global market, companies must understand the differences in consumer needs across countries and cultures and develop corresponding business strategies. Therefore, this study conducts a comparative analysis of consumer reviews of smart homes in South Korea and China. We collected online reviews of SmartThings, ThinQ, Msmarthom, and MiHome, the four most commonly used smart home apps in Korea and China. The collected review data are divided into satisfied and dissatisfied reviews according to their ratings, and topics are extracted from each review dataset using LDA topic modeling. Next, the extracted topics are classified according to five evaluation factors proposed by previous studies: Perceived Usefulness, Reachability, Interoperability, Trustness, and Product Brand. Then, by comparing the importance of each evaluation factor in the satisfied and dissatisfied datasets, we identify the factors that affect consumer satisfaction and dissatisfaction and compare the differences between users in Korea and China. We found that Trustness and Reachability are very important factors. Finally, through language network analysis, the relationships among the dissatisfaction factors are analyzed at a more microscopic level, and improvement plans are proposed to the companies according to the analysis results.
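
The step that maps extracted topics onto the five evaluation factors and compares factor importance between the two review sets can be sketched as follows. The keyword lists and topic word lists are hypothetical placeholders, not the paper's actual LDA output.

```python
# Sketch of the factor-classification step: each topic's words are matched
# against per-factor keyword sets, and keyword hits are accumulated as a
# rough importance score per factor. All keywords/topics are illustrative.
from collections import Counter

FACTOR_KEYWORDS = {
    "Perceived Usefulness": {"useful", "convenient", "automation"},
    "Reachability": {"connect", "connection", "offline"},
    "Interoperability": {"device", "integration", "compatible"},
    "Trustness": {"privacy", "secure", "reliable"},
    "Product Brand": {"brand", "samsung", "xiaomi"},
}

def factor_importance(topics):
    """Count how many topic keywords fall under each evaluation factor."""
    counts = Counter()
    for topic_words in topics:
        for factor, keywords in FACTOR_KEYWORDS.items():
            counts[factor] += len(set(topic_words) & keywords)
    return counts

satisfied = [["useful", "automation"], ["device", "integration"]]
dissatisfied = [["connection", "offline"], ["privacy", "reliable"]]
# Comparing the two Counters shows which factors drive (dis)satisfaction.
```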

A Novel Two-Stage Training Method for Unbiased Scene Graph Generation via Distribution Alignment

  • Dongdong Jia;Meili Zhou;Wei WEI;Dong Wang;Zongwen Bai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 12 / pp. 3383-3397 / 2023
  • Scene graphs serve as semantic abstractions of images and play a crucial role in enhancing visual comprehension and reasoning. However, the performance of Scene Graph Generation is often compromised when working with biased data in real-world situations. While many existing systems focus on a single stage of learning for both feature extraction and classification, some employ Class-Balancing strategies, such as Re-weighting, Data Resampling, and Transfer Learning from head to tail. In this paper, we propose a novel approach that decouples the feature extraction and classification phases of the scene graph generation process. For feature extraction, we leverage a transformer-based architecture and design an adaptive calibration function specifically for predicate classification. This function enables us to dynamically adjust the classification scores for each predicate category. Additionally, we introduce a Distribution Alignment technique that effectively balances the class distribution after the feature extraction phase reaches a stable state, thereby facilitating the retraining of the classification head. Importantly, our Distribution Alignment strategy is model-independent and does not require additional supervision, making it applicable to a wide range of SGG models. Using the scene graph diagnostic toolkit on Visual Genome and several popular models, we achieved significant improvements over the previous state-of-the-art methods with our model. Compared to the TDE model, our model improved mR@100 by 70.5% for PredCls, by 84.0% for SGCls, and by 97.6% for SGDet tasks.
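
One generic way to realize the score-adjustment idea the abstract describes (rebalancing predicate classification scores against a long-tailed class distribution) is logit adjustment by class priors. This is a sketch of that general technique, not the paper's exact adaptive calibration function.

```python
# Generic logit-adjustment sketch for long-tailed classification: subtract
# tau * log(prior) from each class logit so rare (tail) predicate classes
# are no longer dominated by frequent (head) classes. Not the paper's
# actual calibration function.
import math

def align_scores(logits, class_counts, tau=1.0):
    total = sum(class_counts)
    priors = [c / total for c in class_counts]
    return [z - tau * math.log(p) for z, p in zip(logits, priors)]

# A head class (count 900) and a tail class (count 100) with equal logits:
adjusted = align_scores([2.0, 2.0], [900, 100])
# After alignment, the tail class receives the higher score.
```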

A Distributed SPARQL Query Processing Scheme Considering Data Locality and Query Execution Path (데이터 지역성 및 질의 수행 경로를 고려한 분산 SPARQL 질의 처리 기법)

  • Kim, Byounghoon;Kim, Daeyun;Ko, Geonsik;Noh, Yeonwoo;Lim, Jongtae;Bok, Kyoungsoo;Lee, Byoungyup;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices / Vol. 23, No. 5 / pp. 275-283 / 2017
  • A large amount of RDF data has been generated along with the growth of semantic web services. Various distributed storage and query processing schemes have been studied to use these massive amounts of RDF data efficiently. In this paper, we propose a distributed SPARQL query processing scheme that considers the data locality and query execution path of large RDF data in order to reduce join and communication costs. In a distributed environment, a SPARQL query is divided into several sub-queries according to the conditions of its WHERE clause, taking data locality into account. The proposed scheme reduces data communication costs by grouping and processing the sub-queries through an index based on associated nodes. In addition, to reduce unnecessary joins and latency when processing the query, it creates an efficient query execution path that considers the data parsing cost, the amount of each node's data communication, and latency. Various performance evaluations show that the proposed scheme outperforms the existing scheme.
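
The division of a WHERE clause into sub-queries can be sketched with one simple grouping rule: triple patterns that share a subject form a star-shaped sub-query that can be evaluated together on the node holding that subject's triples. The pattern format is illustrative, and the paper's actual grouping also uses an index of associated nodes.

```python
# Sketch of locality-aware sub-query grouping: WHERE-clause triple
# patterns sharing a subject variable are grouped into one star-shaped
# sub-query. Pattern tuples are a hypothetical (s, p, o) representation.
from collections import defaultdict

def split_into_subqueries(triple_patterns):
    groups = defaultdict(list)
    for s, p, o in triple_patterns:
        groups[s].append((s, p, o))
    return dict(groups)

patterns = [
    ("?person", "foaf:name", "?name"),
    ("?person", "foaf:knows", "?friend"),
    ("?friend", "foaf:name", "?friendName"),
]
subqueries = split_into_subqueries(patterns)
# Two star-shaped sub-queries: one around ?person, one around ?friend;
# only their join variable (?friend) needs to cross node boundaries.
```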

A Data Taxonomy Methodology based on Their Origin (데이터 본질 기반의 데이터 분류 방법론)

  • Choi, Mi-Young;Moon, Chang-Joo;Baik, Doo-Kwon;Kwon, Ju-Hum;Lee, Young-Moo
    • Journal of KIISE: Computing Practices and Letters / Vol. 16, No. 2 / pp. 163-176 / 2010
  • A representative method of managing an organization's data efficiently is to avoid data duplication by promoting the sharing and reuse of existing data. Systematic structuring of existing data and efficient searching should be supported in order to promote this sharing and reuse. Without regard for these points, data for system development is duplicated, which deteriorates data quality. Data taxonomy provides methods that enable needed data elements to be found quickly within a systematic order for managing data. This paper proposes the Origin data taxonomy method, which maximizes data sharing, reuse, and consolidation, and which can be used efficiently for Metadata Registries (MDR) and the Semantic Web. The Origin data taxonomy method builds the taxonomy structure upon the intrinsic nature of the data, so it can classify data independently of any business classification. The paper also shows a deployment method for data elements used in various areas according to the Origin data taxonomy structure, along with a data taxonomic procedure that supports the proposed taxonomy. Through a case study, it is shown that the proposed data taxonomy and taxonomic procedure can be applied to real-world data efficiently.
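
The key property of origin-based classification, that an element is placed by the intrinsic nature of what it describes rather than by the business unit that uses it, can be sketched as follows. The category tree and element metadata are hypothetical examples, not the paper's taxonomy.

```python
# Sketch of origin-based classification: the taxonomy path depends only on
# the element's declared origin, never on the owning business system.
# Categories and fields below are hypothetical.
ORIGIN_TAXONOMY = {
    "person": ["party", "person"],
    "organization": ["party", "organization"],
    "address": ["place", "address"],
    "invoice": ["document", "invoice"],
}

def classify(element):
    """Return the taxonomy path for a data element's declared origin."""
    return ORIGIN_TAXONOMY.get(element["origin"], ["unclassified"])

# The same 'customer name' element classifies identically whether it comes
# from the sales system or the support system:
sales_elem = {"name": "customerName", "origin": "person", "system": "sales"}
support_elem = {"name": "clientName", "origin": "person", "system": "support"}
```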

Unsupervised Noun Sense Disambiguation using Local Context and Co-occurrence (국소 문맥과 공기 정보를 이용한 비교사 학습 방식의 명사 의미 중의성 해소)

  • Lee, Seung-Woo;Lee, Geun-Bae
    • Journal of KIISE: Software and Applications / Vol. 27, No. 7 / pp. 769-783 / 2000
  • In this paper, in order to disambiguate Korean noun word senses, we define a local context and explain how to extract it from a raw corpus. Following the intuition that two different nouns are likely to have similar meanings if they occur in the same local context, we use, as a clue, the words that occur in the same local context as the target noun. This method increases the usability of the extracted knowledge and makes it possible to disambiguate the senses of infrequent words. We can also overcome the data sparseness problem by extending the verbs in a local context. The sense of a target noun is decided by the maximum similarity to the clues learned previously. The similarity between two words is computed by their concept distance in the sense hierarchy borrowed from WordNet. By gradually reducing the multiplicity of clues in the process of computing the maximum similarity, we can speed up subsequent calculations. When a target noun has more than two local contexts, we assign a weight according to the type of each local context to reflect the differences in the strength of the semantic restriction of local contexts. As another knowledge source, we extract co-occurrence information from dictionary definitions and example sentences about the target noun. This is used to support local contexts and helps to select the most appropriate sense of the target noun. Through experiments with the proposed method, we found that the applicability of local contexts is very high and that the co-occurrence information can supplement the local contexts to improve precision. In spite of the high sense ambiguity of the target nouns used in our experiments, we achieve higher performance (89.8%) than supervised methods that use a sense-tagged corpus.
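
The sense-selection step, choosing the target sense with maximum similarity to a clue word, where similarity is a concept distance in a sense hierarchy, can be sketched with a toy hierarchy. The hierarchy and the path-length similarity below are stand-ins, not WordNet or the paper's exact measure.

```python
# Toy sense hierarchy (child -> parent) standing in for WordNet; the
# similarity is a simple inverse path-length measure through the lowest
# common ancestor, illustrative only.
PARENT = {
    "sparrow": "bird", "bird": "animal", "animal": "entity",
    "bank_river": "land", "land": "entity",
    "bank_money": "institution", "institution": "entity",
}

def path_to_root(node):
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def similarity(a, b):
    """1 / (1 + path length from a to b via their lowest common ancestor)."""
    pa, pb = path_to_root(a), path_to_root(b)
    for i, n in enumerate(pa):
        if n in pb:
            return 1.0 / (1 + i + pb.index(n))
    return 0.0

def disambiguate(senses, clue):
    """Pick the target-noun sense most similar to a clue word's sense."""
    return max(senses, key=lambda s: similarity(s, clue))

# 'bank' near the clue 'land' resolves to the river-bank sense:
best = disambiguate(["bank_river", "bank_money"], "land")
```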


A Policy-driven RFID Data Management Event Definition Language (정책기반 RFID 데이터 관리 이벤트 정의 언어)

  • Song, Ji-Hye;Kim, Kwang-Hoon
    • Journal of Internet Computing and Services / Vol. 12, No. 1 / pp. 55-70 / 2011
  • In this paper, we propose a policy-driven RFID data management event definition language, which is potentially applicable as a partial standard for SSI (Software System Infrastructure) Part 4 (Application Interface, 24791-4) defined by ISO/IEC JTC 1/SC 31/WG 4 (RFID for Item Management). The SSI's RFID application interface part was originally defined to provide a unified interface to the RFID middleware functionality: data management, device management, device interface, and security functions. However, the current specifications are too detailed to be understood by application developers, who typically lack professional and technological backgrounds in the RFID middleware functionality. As a solution, we use the concept of an event-constraint policy that not only represents the semantic content of RFID domains but also provides transparency through higher-level abstractions for RFID applications, and that can specify event constraints for filtering the huge number of raw data events captured from the associated RF readers. Finally, we embody the proposed concept by defining an XML-based RFID event policy definition language, abbreviated rXPDL. We expect the rXPDL specification proposed in this paper to become a technological basis for domestic as well as international standards that can be extensively applied to RFID and ubiquitous sensor networks.
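
The event-constraint filtering idea can be sketched with an XML policy applied to raw tag reads. The element and attribute names below are hypothetical placeholders, not the actual rXPDL schema defined in the paper.

```python
# Filtering raw RFID reads against an XML event-constraint policy. The
# <eventPolicy>/<constraint> vocabulary is invented for illustration and
# is not the rXPDL schema itself.
import xml.etree.ElementTree as ET

POLICY = """
<eventPolicy name="dock-door">
  <constraint field="reader" equals="door-1"/>
  <constraint field="epcPrefix" equals="urn:epc:id:sgtin:0614141"/>
</eventPolicy>
"""

def accepts(policy_xml, read):
    """Return True if a raw read satisfies every constraint in the policy."""
    root = ET.fromstring(policy_xml)
    for c in root.iter("constraint"):
        field, expected = c.get("field"), c.get("equals")
        if field == "epcPrefix":
            if not read.get("epc", "").startswith(expected):
                return False
        elif read.get(field, "") != expected:
            return False
    return True

read_ok = {"reader": "door-1", "epc": "urn:epc:id:sgtin:0614141.100.1"}
read_bad = {"reader": "door-2", "epc": "urn:epc:id:sgtin:0614141.100.2"}
```

A middleware layer evaluating such policies close to the readers is what keeps the flood of raw reads from reaching the application untouched.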

Implementation of Policy based In-depth Searching for Identical Entities and Cleansing System in LOD Cloud (LOD 클라우드에서의 연결정책 기반 동일개체 심층검색 및 정제 시스템 구현)

  • Kim, Kwangmin;Sohn, Yonglak
    • Journal of Internet Computing and Services / Vol. 19, No. 3 / pp. 67-77 / 2018
  • This paper suggests that each LOD establish its own link policy and publish it to the LOD cloud to provide identity among entities in different LODs. For specifying the link policy, we also propose a vocabulary set founded on the RDF model. We implemented a Policy-based In-depth Searching and Cleansing (PISC for short) system that performs in-depth searching across LODs by referencing the link policies. PISC has been published on GitHub. Because LODs participate in the LOD cloud voluntarily, the degree of entity identity needs to be evaluated. PISC therefore evaluates the identities and cleanses the searched entities to confine them to those that exceed the user's criterion for the entity identity level. As for searching results, PISC provides an entity's detailed contents, collected from diverse LODs, together with an ontology customized to the contents. A simulation of PISC was performed on five LODs of DBpedia. We found that an object similarity of 0.9 between source and target RDF triples provided an appropriate expansion ratio and inclusion ratio of searching results. For sufficient identity of searched entities, three or more target LODs need to be specified in the link policy.
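
The identity check at the 0.9 object-similarity threshold can be sketched with a generic string similarity over the objects of source and target RDF triples. difflib's ratio is used here as a stand-in; the paper's actual similarity measure may differ.

```python
# Thresholded identity check between the objects of source and target RDF
# triples, using difflib's sequence ratio as a generic string similarity.
from difflib import SequenceMatcher

def object_similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def is_identical(source_obj, target_obj, threshold=0.9):
    """Treat two entities as identical when object similarity meets the
    0.9 threshold the paper found effective."""
    return object_similarity(source_obj, target_obj) >= threshold

same = is_identical("Seoul, South Korea", "Seoul, South Korea")
different = is_identical("Berlin", "Tokyo")
```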

Performance Tests of 3D Data Models for Laser Radar Simulation (레이저레이더 시뮬레이션을 위한 3차원 데이터 모델의 성능 테스트)

  • Kim, Geun-Han;Kim, Hye-Young;Jun, Chul-Min
    • Journal of Korean Society for Geospatial Information Science / Vol. 17, No. 3 / pp. 97-107 / 2009
  • Experiments using real guided weapons for the development of LADAR (laser radar) are not practical. Therefore, we need a computing environment that can simulate 3D detection by LADAR. Such simulations require dealing with large data sets representing buildings and terrain over a large area. They also need information about 3D target objects, for example, the material and echo rate of building walls. However, currently used 3D models are mostly focused on visualization, are maintained in file-based formats, and do not contain such semantic information. In this study, as a solution to these problems, a method using a spatial DBMS and a 3D model suitable for LADAR simulation is suggested. The 3D models found in previous studies were developed to serve different purposes; thus, it is not easy to choose one among them that is optimized for LADAR simulation. In this study, four representative 3D models are first defined, each of which is tested under different performance scenarios. As a result, one model, "Body-Face", is selected as the most suitable model for the simulation. Using this model, a test simulation is carried out.
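
The difference from visualization-only file formats can be sketched by storing per-face geometry together with the semantic attributes the simulation needs (material, echo rate) in a DBMS and querying them back. sqlite3 stands in for a spatial DBMS here, and the "Body-Face"-style schema is a hypothetical simplification.

```python
# Hypothetical 'Body-Face' style schema: each building body owns faces,
# and each face carries the semantic attributes a LADAR simulation needs.
# sqlite3 is a stand-in for the spatial DBMS used in the paper.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE face (
    body_id INTEGER, face_id INTEGER,
    wkt TEXT,           -- face geometry as well-known text
    material TEXT, echo_rate REAL)""")
faces = [
    (1, 1, "POLYGON((0 0 0, 10 0 0, 10 0 8, 0 0 8, 0 0 0))", "concrete", 0.55),
    (1, 2, "POLYGON((0 0 8, 10 0 8, 10 10 8, 0 10 8, 0 0 8))", "glass", 0.12),
]
db.executemany("INSERT INTO face VALUES (?, ?, ?, ?, ?)", faces)

# The simulation can then query echo properties per face:
rows = db.execute(
    "SELECT face_id, echo_rate FROM face "
    "WHERE body_id = 1 AND material = 'concrete'"
).fetchall()
```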


An Ontology-based Collaboration System for Service Interoperability (온톨로지 기반의 서비스 상호운용을 위한 협업 시스템)

  • Hwang, Chi-Gon;Moon, Seok-Jae;Jung, Kye-Dong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering / Vol. 17, No. 1 / pp. 210-217 / 2013
  • The growth of collaboration among information systems in accordance with changes in enterprises' business environments brings about duplication of existing business services and increased maintenance costs. Accordingly, Web services have been suggested as a standard for distributed computing to prevent the duplication of services within the same business domain and to reuse services that are already in use. However, since the data needed for Web services are not standardized, it is difficult for users to find services that meet diverse business purposes. In this paper, we construct an ontology-based collaboration system for service interoperability. The ontology supports fusion services by finding services that exist interdependently in the distributed environment for collaborative processing. The roles of the collaboration system include the development, registration, and invocation of services based on the ontology. A local system requests collaboration support through a service profile. The collaboration system supports the development of services using the service profiles, represents the semantic associations between real data through a system ontology, and infers the relationships between instances contained in the services. Based on this, we applied the collaboration system to travel booking services. As a result, services can be managed effectively, preventing collisions in the collaboration system, and we verify that the mapping effort between systems is reduced.
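
The role of the system ontology, recording semantic associations between the data of different services and inferring relationships from them, can be sketched with a minimal triple store and a transitive closure over an equivalence property. The property and field names are hypothetical, not the paper's ontology vocabulary.

```python
# Minimal triple-store sketch: triples assert that fields of different
# services denote the same real-world data, and a transitive closure over
# 'sameAs' infers indirect associations. All names are illustrative.
TRIPLES = [
    ("bookingService:customer", "sameAs", "paymentService:payer"),
    ("paymentService:payer", "sameAs", "hotelService:guest"),
]

def same_as(term):
    """All terms reachable from `term` through 'sameAs' links (transitive)."""
    found, frontier = {term}, [term]
    while frontier:
        cur = frontier.pop()
        for s, p, o in TRIPLES:
            if p != "sameAs":
                continue
            for nxt in ((o,) if s == cur else (s,) if o == cur else ()):
                if nxt not in found:
                    found.add(nxt)
                    frontier.append(nxt)
    return found

# The booking service's customer field is inferred to match the hotel
# service's guest field even though no direct triple links them:
linked = same_as("bookingService:customer")
```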