• Title/Summary/Keyword: Query process


A Study on XMDR-DSM System Design for Cooperation (협업을 위한 XMDR-DSM 시스템 설계에 관한 연구)

  • Moon, Seok-Jae;Jung, Kye-Dong;Choi, Young-Keun
    • The KIPS Transactions:PartD / v.16D no.5 / pp.701-714 / 2009
  • As the enterprise environment changes rapidly, service-based data integration requires integrated data management. Cooperation among enterprises is accomplished by accessing distributed databases through business processes. Since this approach is performed with global queries, problems such as data heterogeneity, schema heterogeneity, and verification of validity have to be solved in advance to achieve interoperability among heterogeneous systems; cooperation therefore requires a dynamic and reliable construction. In this paper, we propose the XMDR-DSM (eXtended MetaData Registry-Data Service Mediator) system for cooperation. XMDR-DSM, which comprises XMDR-DS, XMDR-DQP, and XMDR-DAI, supports the mapping between the global schema and local schemas and provides data access and integration services. It thereby enables the mutual support of business operations among heterogeneous databases. In addition, it can secure information as a reusable asset and standardize interchange. Because it provides business processes based on workflow, it can also manage unified information, which should increase the life span of information and reduce cost.
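
To make the mediator idea concrete, here is a minimal, hypothetical sketch of global-to-local schema mapping of the kind a component such as XMDR-DS/XMDR-DAI might perform. The attribute names, source names, and mapping structure are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical mapping registry: global schema attribute -> attribute name per local database.
SCHEMA_MAP = {
    "order_id":   {"erp_db": "ORD_NO",   "legacy_db": "order_number"},
    "customer":   {"erp_db": "CUST_NM",  "legacy_db": "client_name"},
    "order_date": {"erp_db": "ORD_DT",   "legacy_db": "created_on"},
}

def rewrite_global_query(attributes, source):
    """Translate global attribute names into the local schema of one data source."""
    local_attrs = []
    for attr in attributes:
        local = SCHEMA_MAP.get(attr, {}).get(source)
        if local is None:
            raise KeyError(f"attribute '{attr}' is not mapped for source '{source}'")
        local_attrs.append(local)
    return f"SELECT {', '.join(local_attrs)} FROM orders"

# A global query over (order_id, customer) becomes two source-specific queries
# that a data access/integration layer could execute and merge.
for src in ("erp_db", "legacy_db"):
    print(src, "->", rewrite_global_query(["order_id", "customer"], src))
```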

Lost and Found Registration and Inquiry Management System for User-dependent Interface using Automatic Image Classification and Ranking System based on Deep Learning (딥 러닝 기반 이미지 자동 분류 및 랭킹 시스템을 이용한 사용자 편의 중심의 유실물 등록 및 조회 관리 시스템)

  • Jeong, Hamin;Yoo, Hyunsoo;You, Taewoo;Kim, Yunuk;Ahn, Yonghak
    • Convergence Security Journal / v.18 no.4 / pp.19-25 / 2018
  • In this paper, we propose a user-centered integrated lost-goods management system built on a weight-based ranking system and a hierarchical, deep-learning-based image classification system. The proposed system consists of a hierarchical image classification module that automatically classifies images through deep learning and a ranking module that lists the lost-property records registered in the system in order of weight, for the convenience of the query process. During registration, various information such as category, brand, and related tags is recognized automatically from a single photograph, minimizing the user's effort in the registration process. The ranking system increases the efficiency of searching for lost items by placing frequently viewed items at the top. Experimental results show that the proposed system allows users to use the system easily and conveniently.
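
The following is a minimal sketch of the weight-based ranking idea described above: records are ordered by a combined score so that frequently viewed items surface first. The weighting factors and their coefficients are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class LostItem:
    title: str
    category_confidence: float  # confidence from a (hypothetical) image classifier
    view_count: int             # how often users have opened this record

def rank_items(items, w_conf=0.3, w_views=0.7):
    """Sort items by a weighted score; higher scores are listed first."""
    max_views = max((it.view_count for it in items), default=1) or 1
    def score(it):
        return w_conf * it.category_confidence + w_views * (it.view_count / max_views)
    return sorted(items, key=score, reverse=True)

items = [
    LostItem("black wallet", 0.92, 340),
    LostItem("blue umbrella", 0.75, 1210),
    LostItem("student ID card", 0.88, 95),
]
for it in rank_items(items):
    print(it.title)
```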


Hazelcast Vs. Ignite: Opportunities for Java Programmers

  • Bartkov, Maxim; Katkova, Tetiana; Kruglyk, Vladyslav S.; Murtaziev, Ernest G.; Kotova, Olha V.
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.406-412 / 2022
  • Storing large amounts of data has been a major problem since the beginning of computing history. Big Data has greatly improved business processes, for example by predicting customers' needs from web and social media search. The main purpose of Big Data stream processing frameworks is to let programmers query a continuous stream directly without dealing with lower-level mechanisms; programmers write stream processing code against these runtime libraries (also called Stream Processing Engines), which take large volumes of data and analyze them. Streaming platforms are an emerging technology for handling continuous streams of data, and several Big Data streaming platforms are freely available on the Internet, but selecting the most appropriate one is not easy for programmers. In this paper, we present a detailed description of two state-of-the-art and popular streaming frameworks, Apache Ignite and Hazelcast, and compare their performance using selected attributes. Conventional databases are commonly used to store data, but for today's large-scale distributed applications handling enormous amounts of data they are no longer viable; consequently, Big Data technologies were introduced to store, process, and analyze data at high speed and to cope with the daily growth of users and data, and data streaming technologies were developed to process data continuously in real time.
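
As a framework-agnostic illustration of "querying a continuous stream", here is a small sketch of a filter plus tumbling-window count, the kind of pipeline that engines such as Hazelcast Jet or Apache Ignite express declaratively. It uses only plain Python generators and does not reproduce the API of either product.

```python
import itertools
import random

def sensor_stream():
    """Unbounded stream of (timestamp, value) events."""
    for ts in itertools.count():
        yield ts, random.uniform(0.0, 100.0)

def windowed_count(stream, window=10, predicate=lambda v: v > 80.0):
    """Count events matching `predicate` in consecutive tumbling windows of `window` events."""
    count = 0
    for i, (_ts, value) in enumerate(stream, start=1):
        if predicate(value):
            count += 1
        if i % window == 0:
            yield count
            count = 0

# Consume a few windows from the otherwise unbounded pipeline.
for result in itertools.islice(windowed_count(sensor_stream()), 3):
    print("events above threshold in window:", result)
```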

Metadata Ontology Design for B2B Business Process Registries (기업간 비즈니스 프로세스 등록저장소를 위한 메타데이터 온톨로지 설계)

  • Kim, Jong-Woo;Kim, Hyoung-Do;Yun, Jung-Hee;Jung, Hyun-Chul
    • The KIPS Transactions:PartD / v.14D no.4 s.114 / pp.435-446 / 2007
  • B2B registries are information systems that register B2B-related business information such as company profiles, business documents, business processes, and services, and provide query facilities to find information about potential business partners. Focusing on the design of a registry for B2B business processes, this paper designs a metadata ontology for registering B2B business processes. In practice, there are several competing business process definition languages, such as ebXML BPSS (Business Process Specification Schema), WSBPEL (Web Service Business Process Execution Language), and BPMN (Business Process Modeling Notation). In order to register heterogeneous business processes based on different representation frameworks, the proposed metadata ontology consists of three layers: common metadata, language-specific metadata, and interrelationship metadata. To show its usefulness, two examples, represented in ebXML BPSS and WSBPEL respectively, illustrate how the proposed metadata ontology is used to register B2B business processes. To implement the proposed metadata ontology on an ebXML registry, a metadata mapping scheme to ebRIM (ebXML Registry Information Model) is also suggested.
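
The sketch below only illustrates the three-layer structure named above (common, language-specific, and interrelationship metadata) as simple data types. The field names and sample values are assumptions and do not reproduce the paper's actual ontology or its ebRIM mapping.

```python
from dataclasses import dataclass, field

@dataclass
class CommonMetadata:
    process_id: str
    name: str
    description: str
    definition_language: str      # e.g. "ebXML BPSS", "WSBPEL", "BPMN"

@dataclass
class LanguageSpecificMetadata:
    process_id: str
    language: str
    elements: dict = field(default_factory=dict)   # language-specific constructs

@dataclass
class InterrelationshipMetadata:
    source_process: str
    target_process: str
    relation: str                 # e.g. "equivalentTo" (illustrative relation name)

registry = {
    "common": [CommonMetadata("bp-001", "PurchaseOrder", "PO exchange process", "ebXML BPSS")],
    "language_specific": [LanguageSpecificMetadata("bp-001", "ebXML BPSS",
                                                   {"BusinessTransaction": "SubmitPO"})],
    "interrelationship": [InterrelationshipMetadata("bp-001", "bp-002", "equivalentTo")],
}
print(registry["common"][0].name)
```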

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters / v.15 no.3 / pp.154-166 / 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced by business operations and provides enterprise users with access to the results. Traditional reporting tools can create sophisticated dynamic reports, including SQL query result sets, that look like documents produced by word processors, and can publish those reports to the Web, but their data sources are limited to RDBMSs. OLAP tools and data mining tools, on the other hand, each provide powerful information analysis functions in their own way, but their built-in visualization components for analysis results are limited to tables and some charts. This paper therefore presents a system that integrates the three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools only have a query editor for generating SQL statements against an RDBMS; the reporting tool presented here can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added. Traditional systems produce all documents on the server side, which lets reporting tools avoid regenerating the same dynamic document for many clients; because this system targets a small number of users generating documents for data analysis, however, it generates documents on the client side. The tool therefore has a processing mechanism to handle large amounts of data despite the limited memory capacity of the client-side report viewer, as well as a data structure for integrating data from the three kinds of data sources into one document. Finally, most traditional BI front-end tools are tied to a specific vendor's data source architecture; to overcome this, the system uses XMLA, a web-service-based protocol, to access OLAP and data mining data sources from various vendors.
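
For readers unfamiliar with XMLA, here is a rough sketch of how a client such as the reporting tool above might send an MDX query to an OLAP server over XMLA's SOAP-based Execute method. The endpoint URL, catalog name, cube, and MDX statement are placeholder assumptions; real servers may require additional properties or authentication, and this is not the paper's implementation.

```python
import urllib.request

XMLA_EXECUTE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
      <Command>
        <Statement>
          SELECT [Measures].[Sales Amount] ON COLUMNS,
                 [Date].[Year].MEMBERS ON ROWS
          FROM [SalesCube]
        </Statement>
      </Command>
      <Properties>
        <PropertyList>
          <Catalog>SalesDW</Catalog>
          <Format>Multidimensional</Format>
        </PropertyList>
      </Properties>
    </Execute>
  </soap:Body>
</soap:Envelope>"""

def send_xmla(endpoint="http://olap.example.com/xmla"):
    """POST an XMLA Execute request; the response is a SOAP envelope containing a cell set."""
    req = urllib.request.Request(
        endpoint,
        data=XMLA_EXECUTE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Execute"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```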

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.161-177 / 2019
  • In this paper, we study how to improve answer extraction in a Question-Answering (QA) system by using the results of sentence dependency parsing. A QA system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents, and various studies have been conducted on both. To improve the performance of answer extraction, the grammatical information of sentences must be reflected accurately. Because Korean has free word order and frequently omits sentence components, dependency parsing is a good way to analyze Korean syntax. In this study, we therefore improved answer extraction performance by adding features generated from dependency parsing to the inputs of the answer extraction model (Bidirectional LSTM-CRF). We compared the performance of the model when given only basic word features generated without dependency parsing against its performance when the Eojeol tag feature and the dependency graph embedding feature were added. Since dependency parsing operates on the Eojeol, the sentence component delimited by spaces, the tag of each Eojeol is obtained as a result of parsing; the Eojeol tag feature is this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and learning an embedding of that graph. From the parsing result, a graph is built with Eojeols as nodes, dependencies between Eojeols as edges, and Eojeol tags as node labels; depending on whether the direction of the dependency relation is considered, the graph is directed or undirected. To obtain the embedding of the graph, we used Graph2Vec, which derives the embedding from the subgraphs constituting a graph; the maximum path length between nodes can be specified when finding these subgraphs. If the maximum path length is 1, the graph embedding is generated only from direct dependencies between Eojeols, and larger values include increasingly indirect dependencies. In the experiments, the maximum path length was varied from 1 to 3, with and without dependency direction, and answer extraction performance was measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding was generated with a maximum path length of 1 in the Graph2Vec subgraph extraction step. From these experiments we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words.
    The significance of this study is as follows. First, we improved answer extraction performance by adding features derived from dependency parsing, reflecting the characteristics of Korean, which has free word order and frequent omission of sentence components. Second, we generated the dependency parsing feature with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows: in this study, the features generated from dependency parsing were applied only to the answer extraction model; if their benefit is also confirmed on other natural language processing models such as sentiment analysis or named entity recognition, their validity can be verified more accurately.
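
The sketch below shows only the graph-construction step described above: Eojeols become nodes, dependency relations become (optionally directed) edges, and Eojeol tags become node labels. The toy parse is invented for illustration, and the Graph2Vec embedding step is only indicated in a comment, since the paper's exact configuration is not reproduced here.

```python
import networkx as nx

# (dependent index, head index, dependent tag) triples from a hypothetical parse of a
# four-Eojeol sentence; head index -1 marks the root.
parse = [
    (0, 2, "NP_SBJ"),   # subject Eojeol depends on the predicate
    (1, 2, "NP_OBJ"),   # object Eojeol depends on the predicate
    (2, -1, "VP"),      # predicate Eojeol is the root
    (3, 2, "AP"),       # adverbial Eojeol depends on the predicate
]

def build_dependency_graph(parse, directed=True):
    """Build a dependency graph with Eojeol tags as node labels."""
    g = nx.DiGraph() if directed else nx.Graph()
    for dep, head, tag in parse:
        g.add_node(dep, label=tag)
        if head >= 0:
            g.add_edge(dep, head)
    return g

graph = build_dependency_graph(parse, directed=True)
print(graph.nodes(data=True))
print(graph.edges())

# A whole-graph embedding (e.g. a Graph2Vec implementation with the subgraph depth set
# to 1, mirroring the "maximum path length of 1" setting above) could then be learned
# from a corpus of such graphs and concatenated with the word features that feed the
# Bidirectional LSTM-CRF answer extraction model.
```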

Tracing the Development and Spread Patterns of OSS using the Method of Netnography - The Case of JavaScript Frameworks - (네트노그라피를 이용한 공개 소프트웨어의 개발 및 확산 패턴 분석에 관한 연구 - 자바스크립트 프레임워크 사례를 중심으로 -)

  • Kang, Heesuk;Yoon, Inhwan;Lee, Heesan
    • Management & Information Systems Review / v.36 no.3 / pp.131-150 / 2017
  • The purpose of this study is to observe the spread pattern of open source software (OSS) and its relations with surrounding actors over its operation period. To investigate how participants in an OSS change over time, we use netnography on the basis of online data, which can trace such change patterns over the passage of time. The cases of three JavaScript-framework OSS projects (jQuery, MooTools, and YUI) were compared, with data collected from the open application programming interface (API) of GitHub as well as blog and web searches. This research applies the translation process of actor-network theory to categorize the stages of change in the OSS translation process. In the project commencement stage, we identified three types of OSS-related actors and defined the relationships among them. The period after the maintainer first launches the project, during which the source code is maintained together with interested parties, constitutes the project growth stage. The subsequent period, in which users are exposed to promotion activities and to using the code, go through observation and learning, and become active participants, is regarded as the 'leap of participants' stage. Our results emphasize the importance of promotion processes in participants' selection of an OSS for participation, and confirm a crowding-out effect in which the rapid speed of OSS development retarded the emergence of participants.


Min-Distance Hop Count based Multi-Hop Clustering In Non-uniform Wireless Sensor Networks

  • Kim, Eun-Ju;Kim, Dong-Joo;Park, Jun-Ho;Seong, Dong-Ook;Lee, Byung-Yup;Yoo, Jae-Soo
    • International Journal of Contents / v.8 no.2 / pp.13-18 / 2012
  • In wireless sensor networks, an energy-efficient data gathering scheme is one of the core technologies for processing a query. Cluster-based data gathering methods minimize the energy consumption of sensor nodes by maximizing the efficiency of data aggregation. However, because existing clustering methods consider only uniform network environments, they are not suitable for real-world applications in which sensor nodes may be distributed unevenly. To solve this problem, we propose a balanced multi-hop clustering scheme for non-uniform wireless sensor networks. The proposed scheme constructs a cluster based on the logical distance to the cluster head, using a min-distance hop count. To show the superiority of the proposed scheme, we compare it with existing clustering schemes for sensor networks. Our experimental results show that the proposed scheme prolongs the network lifetime by about 48% on average over the existing methods.
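
The following is a simplified sketch in the spirit of hop-count-based cluster formation: every node joins the cluster head reachable in the fewest hops, computed by breadth-first search from each head. The topology and head choice are toy assumptions, and the paper's min-distance hop count metric is only approximated here by plain hop counts.

```python
from collections import deque

def hop_counts(adjacency, source):
    """BFS hop count from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def form_clusters(adjacency, heads):
    """Assign each node to the cluster head with the minimum hop count."""
    all_dists = {h: hop_counts(adjacency, h) for h in heads}
    assignment = {}
    for node in adjacency:
        best = min(heads, key=lambda h: all_dists[h].get(node, float("inf")))
        assignment[node] = best
    return assignment

# Toy non-uniform topology: node -> neighbours.
topology = {
    "A": ["B"], "B": ["A", "C"], "C": ["B", "D", "E"],
    "D": ["C"], "E": ["C", "F"], "F": ["E"],
}
print(form_clusters(topology, heads=["A", "E"]))
```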

Algorithm for Finding K-Nearest Object Pairs in Circular Search Spaces (순환검색공간에서 K-최근접객체 쌍을 찾는 알고리즘에 관한 연구)

  • Seon, Hwi-Joon;Kim, Hong-Ki
    • Spatial Information Research / v.20 no.2 / pp.165-172 / 2012
  • Queries for the K closest object pairs between two object sets occur frequently in recent retrieval systems. The circular location property of objects should be considered in order to process such K-nearest object pair queries efficiently. In this paper, we propose an optimal algorithm for finding the K object pairs that are closest to each other in a search space with a circular domain, and show its performance through experiments. The proposed algorithm optimizes the cost of finding the K nearest object pairs by using circular search distances, which fully exploit the circular location property.
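
As a minimal illustration of the circular search distance: in a domain that wraps around (e.g. angles in [0, 360)), the distance between two positions is the shorter way around the circle, and a brute-force search then yields the K closest pairs between two sets. This is only a baseline sketch; the paper's optimized algorithm is not reproduced here.

```python
import heapq

def circular_distance(a, b, domain=360.0):
    """Shorter of the two ways around a circular domain."""
    d = abs(a - b) % domain
    return min(d, domain - d)

def k_nearest_pairs(set_a, set_b, k, domain=360.0):
    """Return the k pairs (one object from each set) with the smallest circular distance."""
    pairs = ((circular_distance(a, b, domain), a, b) for a in set_a for b in set_b)
    return heapq.nsmallest(k, pairs)

positions_a = [10.0, 170.0, 355.0]
positions_b = [2.0, 180.0, 90.0]
for dist, a, b in k_nearest_pairs(positions_a, positions_b, k=2):
    print(f"pair ({a}, {b}) at circular distance {dist}")
```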

A Study of a USN Routing Algorithm for Telemedicine (원격의료지원을 위한 USN 라우팅 알고리즘에 대한 연구)

  • Yun, Chan-Young
    • Proceedings of the Korea Contents Association Conference / 2006.11a / pp.716-720 / 2006
  • In this paper, we design and propose a new routing algorithm that supports the traffic characteristics of various vital signs and is applicable to a USN for telemedicine, using adaptive transmission power levels and an adjusted frequency of routing request messages. In the proposed algorithm, when emergency vital-sign traffic arrives, a high transmission power level is used to reduce the route query response time and the traffic is given priority in the routing process. For non-emergency vital-sign traffic, which is insensitive to delay, a low transmission power level is used and the frequency of routing request messages is adaptively decreased. The proposed scheme should provide better QoS performance in a complex USN than the conventional method, which operates with a uniform transmission power level.
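
Below is a conceptual sketch of the adaptive policy described above: emergency vital-sign traffic gets a high transmission power level and frequent route requests (to shorten route discovery and gain priority), while delay-insensitive traffic uses low power and fewer route request messages. The power levels, intervals, and priority values are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RoutingPolicy:
    tx_power_dbm: float          # transmission power level
    rreq_interval_s: float       # how often route request messages are issued
    priority: int                # smaller value = handled earlier in the route process

def select_policy(vital_sign_emergency: bool) -> RoutingPolicy:
    if vital_sign_emergency:
        # High power and frequent RREQs reduce route query response time.
        return RoutingPolicy(tx_power_dbm=0.0, rreq_interval_s=1.0, priority=0)
    # Non-emergency traffic tolerates delay, so save energy instead.
    return RoutingPolicy(tx_power_dbm=-10.0, rreq_interval_s=10.0, priority=5)

print(select_policy(vital_sign_emergency=True))
print(select_policy(vital_sign_emergency=False))
```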
