• Title/Summary/Keyword: Schemas


Component-Z: A Formal Specification Language Extended Object-Z for Designing Components (Component-Z: Object-Z를 확장한 컴포넌트 정형 명세 언어)

  • 이종국;신숙경;김수동
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.677-696
    • /
    • 2004
  • Component-based software engineering (CBSE) composes reusable components and develops applications from them. CBSE is widely regarded as a paradigm that reduces the cost and time of developing software systems. High-quality component designs can be assured if the consistency and correctness among the elements of a component are verified with formal specifications. Current formal languages for components cover only some aspects of contracts between interfaces, the structural and behavioral aspects of components, component-based systems, component composition, and variability. They are therefore inadequate for use in all steps of a component design process. In this paper, we propose Component-Z, a formal language for specifying component designs. Component-Z extends Object-Z with new notations for specifying components. With Component-Z it is possible to specify interfaces, the inner structure of a component, inner workflows, and workflows among interfaces. In addition, Component-Z provides notations and semantics to specify variability with variation points, variants, and required interfaces. The relation between interfaces and components is defined with mapping schemas, and a parallel operator is used to specify component composition. Deployed components can be described with the specifications of component-based systems. The proposed formal specification language can therefore represent all the elements needed to design components. In a case study, we specify a bank account management system to show that Component-Z can be used in all steps of component design.
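
Object-Z specifications are written as named schemas, so a component specification in this style can be pictured as a schema plus a mapping from interface operations to inner operations and a parallel composition of components. The fragment below is a hedged sketch in plain LaTeX; the names, the mapping function, and the exact notation are hypothetical, not Component-Z's published syntax:

```latex
% Illustrative only: hypothetical names and notation, not Component-Z syntax.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A component $C$ exposes interfaces $I_1$ and $I_2$. A mapping schema ties
interface operations to the operations of $C$'s inner classes, and a
component-based system is composed with a parallel operator:
\[
  \mathit{Map}_C : \mathit{ops}(I_1) \cup \mathit{ops}(I_2)
     \rightarrow \mathit{ops}(\mathit{Inner}_C)
\qquad
  \mathit{System} \mathrel{\widehat{=}} \mathit{Account} \parallel \mathit{Customer}
\]
\end{document}
```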

XML Schema Evolution Approach Assuring the Automatic Propagation to XML Documents (XML 문서에 자동 전파하는 XML 스키마 변경 접근법)

  • Ra, Young-Gook
    • The KIPS Transactions:PartD
    • /
    • v.13D no.5 s.108
    • /
    • pp.641-650
    • /
    • 2006
  • XML is self-describing and uses a DTD or an XML schema to constrain its structure. Even though XML Schema is still only at the recommendation stage, it will be widely used because a DTD is not itself XML and is limited in expressive power. The structure defined by an XML schema, as well as the data of XML documents, can change for complex reasons: errors in the XML schema design, new requirements from new applications, and so on. We therefore propose XML schema evolution operators extracted from an analysis of XML schema updates. These evolution operators enable schema updates that would be impractical without tool support when a large number of XML documents comply with the schema. In addition, the operators automatically find the places to update in XML documents registered with the XSE system and keep those documents valid against the XML schema rather than merely well-formed. This paper is the first attempt to update the XML schemas of XML documents, and it provides a comprehensive set of schema-updating operations. Our work supports XML application development and maintenance in that it helps to update both the structure and the data of XML documents easily and precisely.
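
As a concrete illustration of an evolution operator of this kind, here is a minimal sketch in Python with lxml; the operator, file names, and schema structure are assumptions, since the paper's XSE system is not reproduced here:

```python
# Minimal sketch of one schema-evolution operator: adding an *optional*
# element to an XML Schema so existing documents remain valid unchanged.
# Names and files are illustrative, not the paper's XSE system.
from lxml import etree

XS = "{http://www.w3.org/2001/XMLSchema}"

def add_optional_element(schema_root, type_name, elem_name, elem_type="xs:string"):
    """Append an optional <xs:element> to the sequence of a named complexType."""
    for ct in schema_root.iter(XS + "complexType"):
        if ct.get("name") == type_name:
            seq = ct.find(XS + "sequence")
            new = etree.SubElement(seq, XS + "element")
            new.set("name", elem_name)
            new.set("type", elem_type)
            new.set("minOccurs", "0")  # optional, so old documents stay valid
            return True
    return False

schema_root = etree.parse("order.xsd").getroot()   # hypothetical schema file
add_optional_element(schema_root, "OrderType", "note")
doc = etree.parse("order.xml")                     # a registered document
assert etree.XMLSchema(schema_root).validate(doc)  # valid, not merely well-formed
```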

The Effect of Un-tact Emotional Schema Group Counseling Program on the Improvement of Emotional Ability of Unmarried Couples with Relationship Conflict Experiences (비대면 정서도식 커플 집단상담 프로그램이 관계갈등경험이 있는 미혼 커플의 정서능력 향상에 미치는 효과)

  • Jang, Sung-Ho;Park, Jae-Seo;Hwang, Boon-Hong;Shin, Sung-Man
    • Journal of Digital Convergence
    • /
    • v.19 no.9
    • /
    • pp.373-383
    • /
    • 2021
  • The purpose of this research was to develop an un-tact (non-face-to-face) group counseling program to address relationship conflicts of unmarried couples and to verify the program's effect on improving their emotional cognition and emotion regulation. The research procedure consisted of a program development step and a program verification step; in detail, an eight-step program development process was established. The subjects were 11 unmarried couples, organized into four groups. The group counseling program was conducted once a week (two sessions), for a total of 10 sessions over five weeks. The effect was analyzed quantitatively and qualitatively. The quantitative analysis showed that participants' relationship satisfaction improved significantly and that there were significant changes in some of their relationship emotional schemas. The qualitative analysis of participants' program reviews showed that emotional understanding and the couple relationship improved, and a group effect of the couple group counseling was found. Finally, the significance and limitations of this study are presented.

A Study on Preservation Metadata for Long Term Preservation of Electronic Records (전자기록의 장기적 보존을 위한 보존메타데이터 요소 분석)

  • Lee, Kyung-Nam
    • The Korean Journal of Archival Studies
    • /
    • no.14
    • /
    • pp.191-240
    • /
    • 2006
  • For the long-term preservation of electronic records, information on the whole management process, from the moment the electronic information is created, should be captured and managed together with the records. Such information is carried by preservation metadata, so implementing preservation metadata is essential for preserving electronic records while maintaining their record-ness. Preservation metadata is the information that supports the digital preservation process and serves to maintain the long-term viability, renderability, understandability, authenticity, and identity of digital resources. Preservation metadata should be developed by applying the international standard Reference Model for an Open Archival Information System (OAIS) so as to achieve international interoperability for exchange and reuse. Early international preservation metadata schemas were developed by standardizing on the OAIS Reference Model, but the preservation metadata schema of the Victorian Electronic Records Strategy (VERS) and the recently published Data Dictionary of the PREMIS Working Group depart from that framework: they advanced from a conceptual model to practical ones. Comparing these two cases, this study proposes the elements of an integrated preservation metadata set for the long-term preservation of electronic records. The significance of this thesis is that it suggests a direction for the future development of preservation metadata elements by putting past discussions of preservation metadata in order and proposing integrated preservation metadata elements for the long-term preservation of electronic records.
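
To make the kind of element set under discussion concrete, here is a hedged sketch of a few PREMIS-style semantic units for one record, written as a Python dict; the field names abbreviate units from the PREMIS Data Dictionary, the values are invented, and this is not the element set the paper proposes:

```python
# Hedged sketch of PREMIS-style preservation metadata for one record.
# Field names abbreviate PREMIS semantic units; values are invented.
record = {
    "object": {
        "objectIdentifier": {"type": "local", "value": "rec-2006-0001"},
        "fixity": {"messageDigestAlgorithm": "SHA-256",
                   "messageDigest": "9f2b..."},   # supports authenticity checks
        "format": {"formatName": "PDF/A-1b"},     # supports renderability
    },
    "events": [  # the event history documents the preservation process
        {"eventType": "ingestion", "eventDateTime": "2006-03-01T09:00:00Z"},
        {"eventType": "fixity check", "eventDateTime": "2006-09-01T09:00:00Z"},
    ],
}
```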

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the wealth of information created while computer systems operate, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data produced by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate log data processing system is needed to gather, store, categorize, and analyze them. However, in existing computing environments it is difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the analysis tools and management systems of existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that let it continue operating after recovering from a malfunction. Finally, by establishing a distributed database with the NoSQL-based MongoDB, the proposed system processes unstructured log data effectively. Relational databases such as MySQL have strict schemas that are inappropriate for unstructured log data, and such schemas prevent node expansion when rapidly growing data must be distributed across nodes. NoSQL does not provide the complex computations of relational databases, but it can easily expand through node dispersion when data grow rapidly; it is a non-relational database with a structure appropriate for unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB is adopted because its flexible schema makes unstructured log data easy to process, it facilitates node expansion when data grow rapidly, and its Auto-Sharding function automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module renders the analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is confirmed through a MongoDB insert performance evaluation over various chunk sizes.
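
As a small illustration of the schema-free storage and per-type, per-unit-time aggregation described above, here is a minimal sketch assuming a local MongoDB instance and the PyMongo driver; the database, collection, and field names are illustrative, not those of the paper's system:

```python
# Minimal sketch: schema-free log storage and per-type/per-hour aggregation.
# Assumes a local MongoDB; names below are illustrative, not the paper's.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client["bankdb"]["logs"]  # hypothetical database and collection

# Documents may carry different fields per log type; no fixed schema needed.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "login", "user": "u1", "branch": "A"},
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 5000, "currency": "KRW"},
])

# Count log records per type and hour, analogous to the per-unit-time graphs.
pipeline = [
    {"$group": {
        "_id": {"type": "$type", "hour": {"$hour": "$ts"}},
        "count": {"$sum": 1},
    }}
]
for row in logs.aggregate(pipeline):
    print(row["_id"], row["count"])
```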

Branching Path Query Processing for XML Documents using the Prefix Match Join (프리픽스 매취 조인을 이용한 XML 문서에 대한 분기 경로 질의 처리)

  • Park Young-Ho;Han Wook-Shin;Whang Kyu-Young
    • Journal of KIISE:Databases
    • /
    • v.32 no.4
    • /
    • pp.452-472
    • /
    • 2005
  • We propose XIR-Branching, a novel method for processing partial match queries on heterogeneous XML documents using information retrieval (IR) techniques and novel instance join techniques. A partial match query is one whose path expression contains the descendant-or-self axis '//'; in its general form it has branch predicates forming branching paths. The objective of XIR-Branching is to support this type of query efficiently over large collections of documents with heterogeneous schemas. XIR-Branching builds on the conventional schema-level methods that use relational tables (e.g., XRel, XParent, XIR-Linear [21]) and significantly improves their efficiency and scalability with two techniques: an inverted index and a novel prefix match join. The former supports linear path expressions, as in XIR-Linear [21]; the latter supports branching path expressions and finds result nodes more efficiently than the containment joins used in the conventional methods. XIR-Linear is efficient for linear path expressions but does not handle branching path expressions, which are needed for more detailed and general queries; this paper presents a novel method for handling them. XIR-Branching reduces the candidate set for a query at the schema level and then efficiently finds the final result set with the prefix match join at the instance level. We compare the efficiency and scalability of XIR-Branching with those of XRel and XParent using XML documents crawled from the Internet. The results show that XIR-Branching outperforms both XRel and XParent by several orders of magnitude for linear path expressions and by several factors for branching path expressions.
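
The paper's join algorithm itself is not reproduced here, but the prefix-match idea can be illustrated on Dewey-style path identifiers, where the ancestor test behind a branching step reduces to a tuple-prefix check. A minimal sketch follows; the encoding and function names are assumptions, not XIR-Branching's actual structures:

```python
# Illustrative sketch of the prefix-match idea on Dewey-style path IDs;
# not XIR-Branching's actual join. A node's ID is the path of child
# positions from the root, so ancestorship is a tuple-prefix test.
def is_ancestor(a, d):
    """True iff the node with Dewey ID a is a proper ancestor of d."""
    return len(a) < len(d) and d[:len(a)] == a

def prefix_match_join(ancestors, descendants):
    """Pair each descendant with its matching ancestors. A plain nested
    scan for clarity; a real system would exploit the sorted ID order."""
    return [(a, d) for d in descendants for a in ancestors if is_ancestor(a, d)]

# e.g. the branching query //book[author]//title, with hypothetical IDs:
books = [(1,), (1, 4)]
titles = [(1, 2), (1, 4, 1), (2, 5)]
print(prefix_match_join(books, titles))
# [((1,), (1, 2)), ((1,), (1, 4, 1)), ((1, 4), (1, 4, 1))]
```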

A Product Model Centered Integration Methodology for Design and Construction Information (프로덕트 모델 중심의 설계, 시공 정보 통합 방법론)

  • Lee Keun-Hyoung;Kim Jae-Jun
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • autumn
    • /
    • pp.99-106
    • /
    • 2002
  • Early research on integrating design and construction information focused on conceptual data models. The development and widespread use of commercial database management systems led many researchers to design database schemas that clarify the relationships between non-graphic data items. Although this research became the foundation for subsequent work, it did not utilize the graphic data available from CAD systems, which were already widely used. The 4D CAD concept suggests a way of integrating graphic data with schedule data. Although this integration opened a new possibility, it remains limited by its dependency on a specific application. This research suggests a new approach to integrating design and construction information, the 'Product Model Centered Integration Methodology', developed through a preliminary study of existing 4D CAD approaches and then applied. A 'Design Component' can be converted into digital format by an object-based CAD system, and 'Unified Object-based Graphic Modeling' shows how to model a graphic product model with such a CAD system. Because the possibility of reusing design information in later stages depends on how the CAD model is created, modeling guidelines and specifications are suggested. A prototype system for integration, management, and exchange is then presented, built on a 'Product Frameworker' and a 'Product Database' that supports multiple viewpoints. A 'Product Data Model' is designed, and the main data workflows are represented with activity diagrams, one of the UML diagram types. These can be used to write program code and develop prototypes that automatically create activity items in an actual schedule management system; a sketch of this link appears below. Through validation, the 'Product Model Centered Integration Methodology' is suggested as a new approach to integrating design and construction information.
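
As a rough illustration of how a product-model-centered link between design components and schedule activities might look in code, here is a hedged sketch; the classes and the one-activity-per-component rule are hypothetical, not the paper's Product Data Model:

```python
# Hypothetical sketch: linking object-based design components to schedule
# activities, in the spirit of the product-model-centered integration above.
from dataclasses import dataclass

@dataclass
class DesignComponent:
    component_id: str
    geometry_ref: str  # reference into the object-based CAD model

@dataclass
class Activity:
    activity_id: str
    component: DesignComponent
    duration_days: int

def activities_from_components(components, duration_days=1):
    """Auto-generate one schedule activity per design component, mirroring
    the automatic creation of activity items described above."""
    return [Activity(f"ACT-{c.component_id}", c, duration_days)
            for c in components]

model = [DesignComponent("C01", "wall_01"), DesignComponent("C02", "slab_01")]
for act in activities_from_components(model):
    print(act.activity_id, act.component.geometry_ref)
```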
