• Title/Summary/Keyword: XML data

An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1702-1721 / 2019
  • A workflow process (or business process) management system helps to define, execute, monitor, and manage workflow models deployed in a workflow-supported enterprise, and the system is generally compartmentalized into a modeling subsystem and an enacting subsystem. The modeling subsystem's functionality is to discover and analyze workflow models via a theoretical modeling methodology like ICN, to define them graphically via a representation notation like BPMN, and to deploy those graphically defined models onto the enacting subsystem by transforming them into textual models represented in a standardized workflow process definition language like XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactical correctness as well as their structural properness, to minimize the loss of effectiveness and the depreciation of efficiency in managing the corresponding workflow models. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, and so we need a sophisticated analyzer that automatically analyzes these specialized and complex styles of workflow models. The analyzer devised in this paper is able to analyze not only the structural complexity but also the data-sequence complexity. The structural complexity is based upon combinational usages of control-structure constructs such as subprocess, exclusive-OR, parallel-AND, and iterative-LOOP primitives while preserving the matched-pairing and proper-nesting properties, whereas the data-sequence complexity is based upon combinational usages of the relevant data repositories, such as data definition sequences and data use sequences. Through the analyzer devised and implemented in this paper, we are eventually able to achieve systematic verification of the syntactical correctness as well as effective validation of the structural properness of such complicated and large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
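
The matched-pairing and proper-nesting properties mentioned above amount to a stack-based validation of the control-structure primitives. Below is a minimal sketch of such a check, assuming hypothetical token names for the split/join primitives; the paper's analyzer works on full XPDL documents rather than flat token lists.

```python
# Minimal nesting check for workflow control-structure primitives.
# Token names (XOR_SPLIT, AND_JOIN, ...) are illustrative assumptions.
OPENERS = {"XOR_SPLIT": "XOR_JOIN", "AND_SPLIT": "AND_JOIN", "LOOP_BEGIN": "LOOP_END"}
CLOSERS = set(OPENERS.values())

def is_properly_nested(constructs):
    """True iff every split/begin has a matching join/end in LIFO order."""
    stack = []
    for token in constructs:
        if token in OPENERS:                       # opening primitive
            stack.append(OPENERS[token])           # remember the join we expect
        elif token in CLOSERS:                     # closing primitive
            if not stack or stack.pop() != token:  # mismatched or crossing pair
                return False
    return not stack                               # nothing left unclosed

print(is_properly_nested(["XOR_SPLIT", "AND_SPLIT", "AND_JOIN", "XOR_JOIN"]))  # True
print(is_properly_nested(["XOR_SPLIT", "AND_SPLIT", "XOR_JOIN", "AND_JOIN"]))  # False
```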

Managing and Modeling Strategy of Geo-features in Web-based 3D GIS

  • Kim, Kyong-Ho;Choe, Seung-Keol;Lee, Jong-Hun;Yang, Young-Kyu
    • Proceedings of the KSRS Conference / 1999.11a / pp.75-79 / 1999
  • Geo-features play a key role in object-oriented or feature-based geo-processing systems, so the strategy for how to model and how to manage geo-features builds the main architecture of the entire system and also supports the efficiency and functionality of the system. Unlike in conventional 2D geo-processing systems, geo-features in 3D GIS involve many modeling considerations regarding efficient manipulation, analysis, and visualization. When the system runs on the Web, one should also consider how to leverage the level of detail and the level of automation of modeling, in addition to supporting client-side data interoperability. We built a set of 3D geo-features, and each geo-feature contains a set of aspatial data and 3D geo-primitives. The 3D geo-primitives contain the fundamental modeling data, such as the height of a building or the burial depth of a gas pipeline. We separated the additional modeling data on the geometry and appearance of the model from the fundamental modeling data, to make the tables in the database more concise and to allow users more freedom in representing geo-objects. To enable users to build and exchange their own data, we devised a file format called VGFF 2.0, which stands for Virtual GIS File Format. It describes three-dimensional geo-information in XML(eXtensible Markup Language). The DTD(Document Type Definition) of VGFF 2.0 is parsed using the DOM(Document Object Model). We also developed authoring tools with which users can create their own 3D geo-features and models and save the data in the VGFF 2.0 format. We now expect VGFF 2.0 to evolve into a 3D counterpart of SVG(Scalable Vector Graphics), especially for 3D GIS on the Web.
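
Since the abstract does not reproduce the VGFF 2.0 DTD, the following is only a hypothetical sketch of parsing a VGFF-like document with the DOM, as the paper describes; all element and attribute names are invented for illustration.

```python
# Parse a hypothetical VGFF 2.0-style document with the DOM (xml.dom.minidom).
from xml.dom.minidom import parseString

VGFF_SAMPLE = """<vgff version="2.0">
  <geoFeature id="bldg-001" type="building">
    <aspatial name="usage" value="office"/>
    <geoPrimitive kind="extrusion" height="24.5"/>
  </geoFeature>
</vgff>"""

doc = parseString(VGFF_SAMPLE)
for feature in doc.getElementsByTagName("geoFeature"):
    print(feature.getAttribute("id"), feature.getAttribute("type"))
    for prim in feature.getElementsByTagName("geoPrimitive"):
        # Fundamental modeling data, e.g. the height of a building.
        print("  primitive:", prim.getAttribute("kind"), prim.getAttribute("height"))
```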

Decision Method of Importance of E-Mail based on User Profiles (사용자 프로파일에 기반한 전자 메일의 중요도 결정)

  • Lee, Samuel Sang-Kon
    • The KIPS Transactions:PartB / v.15B no.5 / pp.493-500 / 2008
  • Although modern-day users gather a great deal of data from the network, they want only the information they need, and filtering technology lets them extract only the data that satisfy a query. Because previous studies use a single attribute of the document, such as word frequency, they cannot be considered effective data clustering methods. What is needed is an effective clustering technology that can process electronic network documents, such as e-mail or XML, that contain tags of various formats. This paper describes a study on extracting information from user queries based on multiple attributes. It proposes a method of extracting data such as the sender, text type, time-limit syntax in the text, and title from an e-mail and using such data for filtering, and it describes an experiment verifying that the multi-attribute-based clustering method is more accurate than existing clustering methods that use only word frequency.
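
A hedged sketch of the multi-attribute idea follows: several attributes of a mail (sender, title, time-limit syntax in the body) are scored against a user profile instead of relying on word frequency alone. The weights and patterns are illustrative assumptions, not the paper's actual parameters.

```python
# Toy multi-attribute importance score for an e-mail against a user profile.
import re

def importance(mail, profile):
    score = 0.0
    if mail["sender"] in profile["trusted_senders"]:
        score += 0.4                                        # sender attribute
    if any(k in mail["title"].lower() for k in profile["keywords"]):
        score += 0.3                                        # title attribute
    if re.search(r"\b(by|due|deadline)\b", mail["body"], re.IGNORECASE):
        score += 0.3                                        # time-limit syntax
    return score

profile = {"trusted_senders": {"boss@example.com"}, "keywords": {"report", "xml"}}
mail = {"sender": "boss@example.com",
        "title": "XML report", "body": "Please finish by Friday."}
print(importance(mail, profile))  # 1.0 -> high importance
```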

A Design of TopicMap System based on XMDR for Efficient Data Retrieve in Distributed Environment (분산환경에서 효율적인 데이터 검색을 위한 XMDR 기반의 토픽맵 시스템 설계)

  • Hwang, Chi-Gon;Jung, Kye-Dong;Kang, Seok-Joong;Choi, Young-Keun
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.3 / pp.586-593 / 2009
  • Because most data in distributed environments is configured in a tree structure following a hierarchical classification, associative data retrieval is limited, and data stored in databases is particularly difficult to integrate and retrieve efficiently. Accordingly, we suggest a system that uses XMDR for distributed database integration and links the XMDR to a TopicMap for efficient retrieval of hierarchically expressed knowledge. We propose a plan for efficient integrated retrieval in which the XMDR, composed of a Meta Semantic Ontology, an Instance Semantic Ontology, and a Meta Location, resolves the data heterogeneity and metadata heterogeneity problems and integrates the sources, and in which the occurrences of the TopicMap are replaced with the Meta Location of the XMDR, which expresses the resource locations of the TopicMap by linking the Meta Semantic Ontology and Instance Semantic Ontology of the XMDR to the TopicMap.
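
The occurrence-replacement idea can be pictured with a small sketch: a topic holds a key instead of a concrete resource address, and the XMDR's Meta Location resolves that key to the distributed source. The names and locations below are hypothetical.

```python
# Resolving a TopicMap occurrence through an XMDR-style Meta Location.
meta_location = {                      # Meta Location: key -> resource location
    "customer": "db://site-a/crm/customers",
    "order":    "db://site-b/sales/orders",
}

topic_map = {                          # occurrences reference Meta Location keys
    "Customer Topic": {"occurrence": "customer"},
    "Order Topic":    {"occurrence": "order"},
}

def resolve(topic_name):
    """Follow a topic's occurrence through the Meta Location to the real source."""
    key = topic_map[topic_name]["occurrence"]
    return meta_location[key]

print(resolve("Customer Topic"))  # db://site-a/crm/customers
```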

Design of Efficient Storage Exploiting Structural Similarity in Microarray Data (마이크로어레이 데이터의 구조적 유사성을 이용한 효율적인 저장 구조의 설계)

  • Yun, Jong-Han;Shin, Dong-Kyu;Shin, Dong-Il
    • The KIPS Transactions:PartD / v.16D no.5 / pp.643-650 / 2009
  • As one of the typical techniques for acquiring bio-information, the microarray has contributed greatly to the development of bioinformatics. Although it is established as a core technology in bioinformatics, sharing and storing its data is difficult because experimental data is huge and structurally complex. In this paper, we propose a new method that exploits the fact that MAGE-ML, a standard format for exchanging microarray data, contains frequent structurally similar patterns. The method constructs a compact database by simplifying the MAGE-ML schema, using inlining techniques together with a newly proposed classification technique based on the structural similarity of elements. With this method, the structure of the database becomes simpler, the number of table joins is reduced, and performance is enhanced.
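
The classification idea can be approximated in a few lines: elements with the same structural signature (tag plus child-tag sequence) can share one relational table, simplifying the schema and reducing joins. This is a toy approximation under invented element names, not the paper's actual algorithm.

```python
# Group XML elements by structural signature as a stand-in for table assignment.
import xml.etree.ElementTree as ET
from collections import defaultdict

SAMPLE = """<MAGE-ML>
  <BioMaterial><Name/><Characteristics/></BioMaterial>
  <BioMaterial><Name/><Characteristics/></BioMaterial>
  <Protocol><Name/><Text/></Protocol>
</MAGE-ML>"""

def signature(elem):
    return (elem.tag, tuple(sorted(child.tag for child in elem)))

tables = defaultdict(list)
for elem in ET.fromstring(SAMPLE):
    tables[signature(elem)].append(elem)    # same signature -> same table

for sig, rows in tables.items():
    print(sig, "->", len(rows), "row(s)")
```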

MATERIAL MATCHING PROCESS FOR ENERGY PERFORMANCE ANALYSIS

  • Jung-Ho Yu;Ka-Ram Kim;Me-Yeon Jeon
    • International conference on construction engineering and project management / 2011.02a / pp.213-220 / 2011
  • In the current construction industry, where various stakeholders take part, BIM data exchange using a standard format can provide a more efficient working environment for the staff involved during the life-cycle of a building. Currently, the formats used to exchange data from 3D-CAD applications to energy analysis at the design stage are IFC, the international standard format provided by the IAI, and gbXML, developed by Autodesk. However, because of insufficient data compatibility, the BIM data produced in a 3D-CAD application cannot be used directly in the energy analysis, so additional data entry is needed. The reasons for this are as follows. First, an IFC file cannot contain all the data required for energy simulation. Second, architects sometimes write material names on the drawings that do not match those in the standard material library used by energy analysis tools. DOE-2.2 and EnergyPlus, the most popular energy analysis engines, each have their own material libraries; however, our investigation revealed that the two libraries are not compatible: the types and units of the properties differ, and the material names and material codes used in the libraries differ. Furthermore, there is no material library in the Korean language. Thus, by comparing the basic construction-material libraries of DOE-2, the most commonly used energy analysis engine worldwide, and EnergyPlus, this study analyzes the material data required for energy analysis and proposes a way to enter these data effectively using a Semantic Web ontology. This study is meaningful in that it enhances the objective credibility of energy analysis results and serves as a conceptual study on the use of ontology in the construction industry.
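
As a rough illustration of the matching problem (not the paper's ontology-based method), material names from a drawing can be compared with a library by token-set similarity; names that fall below a cutoff would need the kind of manual or ontology-backed mapping the study proposes. The library entries below are invented.

```python
# Match a drawing's material name to a library entry by Jaccard token similarity.
import re

LIBRARY = ["Concrete Cast In Situ", "Gypsum Board", "Mineral Wool Insulation"]

def tokens(name):
    return set(re.split(r"[^a-z0-9]+", name.lower())) - {""}

def match_material(drawing_name, library=LIBRARY, cutoff=0.5):
    best, best_score = None, 0.0
    for lib_name in library:
        a, b = tokens(drawing_name), tokens(lib_name)
        score = len(a & b) / len(a | b)        # Jaccard similarity of name tokens
        if score > best_score:
            best, best_score = lib_name, score
    return best if best_score >= cutoff else None  # None -> needs ontology mapping

print(match_material("cast-in-situ concrete"))  # -> "Concrete Cast In Situ"
```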

Building an Integrated Protein Data Management System Using the XPath Query Process (XPath 질의 처리를 적용한 단백질 데이터 통합 관리시스템 구축)

  • 차효성;정광수;정영진;류근호
    • Proceedings of the Korean Information Science Society Conference / 2004.10b / pp.103-105 / 2004
  • With recent advances in bioinformatics, research on vast amounts of genomic data is under way, and various types of files and databases are used to handle such data efficiently. However, the lack of standardization makes it difficult to manage and convert the data. This paper therefore aims to implement a sequence-information management system that performs editing, storage, and retrieval of sequence information as well as sequence file-format conversion, for the integrated storage and management of genome and protein sequence data produced by sequencing. To satisfy these requirements, we adopt BSML(Bioinformatic Sequence Markup Language) as the standard for handling bioinformatics data, and heterogeneous flat files are integrated and stored in the BSML schema based on its DTD. Applying the characteristics of an object-relational database, we developed a system that stores and manages XML documents more easily and processes XPath queries efficiently for range and structural queries.
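
A minimal sketch of the kind of XPath query processing described above, run against a BSML-like fragment; the fragment itself is invented for illustration, though Sequences/Sequence follows BSML's general style.

```python
# Simple structural and predicate XPath queries over a BSML-like document.
import xml.etree.ElementTree as ET

BSML_LIKE = """<Bsml><Definitions><Sequences>
  <Sequence id="seq1" molecule="dna" length="42"/>
  <Sequence id="seq2" molecule="aa"  length="128"/>
</Sequences></Definitions></Bsml>"""

root = ET.fromstring(BSML_LIKE)
for seq in root.findall(".//Sequence"):                 # structural query
    print(seq.get("id"), seq.get("molecule"), seq.get("length"))
for seq in root.findall(".//Sequence[@molecule='aa']"): # predicate query
    print("protein:", seq.get("id"))
```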

Outpatient Clinical Summary System Design for Continuity of Care (계속 진료를 위한 외래환자 Clinical Summary 시스템 설계)

  • Lee, Hyo-Jung;Song, Jin-Tae;Kim, Il-Kon;Cho, Hune;Kwak, Yun-Sik
    • Proceedings of the Korean Information Science Society Conference / 2005.11a / pp.859-861 / 2005
  • As the medical environment gradually changes, new hospital information systems are being actively developed for efficient health care and for reducing medical accidents. In this paper, we propose an outpatient Clinical Summary system for continuous health care and the sharing of medical information. The core data set needed for patient care is defined as an XML-based structured document, and the system is designed to improve continuity of care by making the document available to multiple medical institutions through Web Services.
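
As a hedged illustration of a core data set expressed as an XML-based structured document, a tiny example is given below; the element names and fields are hypothetical, since the abstract does not specify the actual core data set.

```python
# Build a hypothetical outpatient Clinical Summary document as XML.
import xml.etree.ElementTree as ET

summary = ET.Element("ClinicalSummary")
patient = ET.SubElement(summary, "Patient", id="P-0001")
ET.SubElement(patient, "Name").text = "Hong Gil-dong"
visit = ET.SubElement(summary, "Visit", date="2005-11-01", department="IM")
ET.SubElement(visit, "Diagnosis", code="E11").text = "Type 2 diabetes mellitus"
ET.SubElement(visit, "Medication").text = "Metformin 500mg bid"

# A document like this would be served over a Web Service so that other
# institutions can continue the patient's care.
print(ET.tostring(summary, encoding="unicode"))
```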

QoS Based Enhanced Collaboration System Using JMF in MDO

  • Kim Jong-Sung
    • Proceedings of the IEEK Conference / 2004.06a / pp.281-284 / 2004
  • This paper presents the design and implementation of a QoS based enhanced collaboration system in MDO, an efficient distributed communication tool for designers. It supports text communication, audio/video communication, file transfer, and XML data sending/receiving. In particular, the system supports dynamic QoS self-adaptation by using an improved direct adjustment algorithm (DAA+). The original direct adjustment algorithm adjusts the transmission rate according to the congestion level of the network, based on the end-to-end Real-time Transport Protocol (RTP), and controls the transmission rate by using the loss-ratio information in the Real-time Transport Control Protocol (RTCP). However, the direct adjustment algorithm does not consider the case in which RTCP packets themselves are lost. We suggest an improved direct adjustment algorithm to solve this problem, apply it to our QoS (Quality of Service) [1] based collaboration system, and show the improved performance in transmission rate and loss ratio.
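
The rate-control idea can be sketched as follows: the sender adapts its rate from the RTCP loss ratio and, in the spirit of the paper's improvement, also treats a missing RTCP report as a congestion signal. The thresholds and factors are illustrative assumptions, not the constants of DAA/DAA+.

```python
# Toy sender-side rate controller driven by RTCP receiver reports.
import time

class RateController:
    def __init__(self, rate_kbps=512.0, report_timeout=5.0):
        self.rate = rate_kbps
        self.timeout = report_timeout
        self.last_report = time.monotonic()

    def on_rtcp_report(self, loss_ratio):
        self.last_report = time.monotonic()
        if loss_ratio > 0.05:            # congestion: back off multiplicatively
            self.rate *= (1.0 - loss_ratio)
        else:                            # headroom: probe additively
            self.rate += 16.0

    def tick(self):
        # Improvement in the spirit of DAA+: a missing RTCP report is itself
        # treated as a congestion signal instead of being ignored.
        if time.monotonic() - self.last_report > self.timeout:
            self.rate *= 0.5
            self.last_report = time.monotonic()

ctrl = RateController()
ctrl.on_rtcp_report(loss_ratio=0.10)     # reported loss -> reduce rate
print(round(ctrl.rate, 1))               # 460.8
```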

Forgery Detection Mechanism with Abnormal Structure Analysis on Office Open XML based MS-Word File

  • Lee, HanSeong;Lee, Hyung-Woo
    • International journal of advanced smart convergence / v.8 no.4 / pp.47-57 / 2019
  • We examine the weaknesses of the existing OOXML-based MS-Word file structure and analyze how data concealment and forgery are performed in MS-Word digital documents. When a document is forged by embedding hidden information in an MS-Word digital document, there is no visible difference when the file is opened with the MS-Word processor. However, the computer system may malfunction due to malware or shellcode hidden in the digital document. If a malicious image file or ZIP file is hidden in the document by exploiting the structural vulnerability of the MS-Word document format, the system may be infected by ransomware that encrypts the entire disk even though the MS-Word file opens normally. Therefore, it is necessary to analyze the forgery and alteration of digital documents through internal structure analysis of the MS-Word file. In this paper, we designed and implemented a mechanism and automatic detection software to detect such forgery efficiently, and we present a method to respond proactively to attacks, such as ransomware, that exploit MS-Word security vulnerabilities.
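
One detection heuristic implied above can be sketched directly: since an OOXML .docx file is a ZIP archive, archive entries outside the expected OOXML parts (for example a hidden ZIP or executable) can be flagged. This is a simplified, assumption-laden sketch; the paper's mechanism also analyzes the internal structure in more depth.

```python
# Flag unexpected entries inside an OOXML (.docx) container.
import zipfile

EXPECTED_PREFIXES = ("word/", "docProps/", "_rels/", "customXml/")
EXPECTED_FILES = ("[Content_Types].xml",)

def suspicious_entries(docx_path):
    with zipfile.ZipFile(docx_path) as zf:
        for info in zf.infolist():
            name = info.filename
            if name in EXPECTED_FILES or name.startswith(EXPECTED_PREFIXES):
                continue
            yield name                   # unexpected part: possible concealment

# Example usage (the path is hypothetical):
# for entry in suspicious_entries("report.docx"):
#     print("suspicious part:", entry)
```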