• Title/Abstract/Keyword: Management Information Base (관리정보베이스)

Search Results: 765

Improving Fault Tolerance for High-capacity Shared Distributed File Systems using the Rotational Lease Under Network Partitioning (대용량 공유 분산 화일 시스템에서 망 분할 시 순환 리스를 사용한 고장 감내성 향상)

  • Tak, Byung-Chul;Chung, Yon-Dohn;Kim, Myoung-Ho
    • Journal of KIISE:Databases, v.32 no.6, pp.616-627, 2005
  • In a shared-storage file system, systems access the shared storage device directly through a specialized data-only subnetwork, unlike in network-attached file server systems. In this shared-storage architecture, data consistency is maintained by a designated set of lock servers that use a control network to send and receive lock information, and a lease mechanism is introduced to cope with control-network failure. When the control network is partitioned, however, participating systems can make no further progress once the lease term expires, until the network recovers. This paper addresses this limitation and proposes a method that allows partitioned systems to make progress while the control network remains partitioned. In the proposed method, each participating system is periodically granted a predefined lease term in rotation. It is also shown that the proposed mechanism always preserves data consistency.
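
A minimal sketch of the rotational-lease idea described above: because every participating system can derive the current lease holder from a shared clock alone, mutual exclusion can survive a control-network partition. The node list, lease term, epoch, and skew margin are illustrative assumptions, not the paper's actual parameters.

```python
import time

LEASE_TERM = 5.0            # seconds each system holds the lease (assumed)
NODES = ["n0", "n1", "n2"]  # participating systems, fixed and known in advance
EPOCH = 0.0                 # agreed-upon time origin

def current_lease_holder(now: float) -> str:
    """Return the system holding the lease at wall-clock time `now`."""
    slot = int((now - EPOCH) // LEASE_TERM)  # which lease term we are in
    return NODES[slot % len(NODES)]          # rotate round-robin

def may_write(node: str, now: float, skew: float = 0.1) -> bool:
    """Write only while holding the lease, with a safety margin for clock skew."""
    into_term = (now - EPOCH) % LEASE_TERM
    return (current_lease_holder(now) == node
            and skew < into_term < LEASE_TERM - skew)

if __name__ == "__main__":
    now = time.time()
    print(current_lease_holder(now), may_write("n1", now))
```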

Mobile Transaction Processing in Hybrid Broadcasting Environment (복합 브로드캐스팅 환경에서 이동 트랜잭션 처리)

  • 김성석;양순옥
    • Journal of KIISE:Databases, v.31 no.4, pp.422-431, 2004
  • In recent years, different data delivery models have been explored for mobile computing systems. In particular, much research has addressed the periodic push model, in which the server repeatedly disseminates information without explicit requests. However, the average waiting time per data operation depends heavily on the length of the broadcast cycle, and differing access patterns among clients can degrade response time considerably. In such cases, clients prefer to send data requests to the server explicitly through a backchannel to obtain better response times. We call a broadcast model that supports such a backchannel hybrid broadcast. In this paper, we devise a new transaction processing algorithm (O-PreH) for hybrid broadcast environments. The data objects maintained by the server are divided into Push_Data, for periodic broadcasting, and Pull_Data, for on-demand processing. Clients tune in to the broadcast channel or request the data of interest, depending on the data type. Periodic invalidation reports from the server help maintain transactional consistency. If one or more conflicts are found, conflict orders are determined so as not to violate consistency (pre-reordering), and the remaining operations are then executed pessimistically. Through extensive simulations, we demonstrate the improved throughput of the proposed algorithm.
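
A toy sketch of the hybrid-broadcast access decision described above: hot items are read from the periodic broadcast (push) while the rest are requested over the backchannel (pull). The data split, item names, and request callback are invented for illustration; the O-PreH conflict handling itself is not shown.

```python
PUSH_DATA = {"stock_A", "stock_B"}  # broadcast by the server every cycle
PULL_DATA = {"stock_X", "stock_Y"}  # served on demand via the backchannel

def read_item(item, broadcast_cycle, send_request):
    """Read from the broadcast if the item is pushed, else pull explicitly."""
    if item in PUSH_DATA:
        return broadcast_cycle[item]  # tune in and wait for the item
    if item in PULL_DATA:
        return send_request(item)     # explicit request over the backchannel
    raise KeyError(item)

# usage with a stubbed backchannel:
print(read_item("stock_X", {"stock_A": "10.5"}, lambda i: f"value-of-{i}"))
```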

Taxonomy of XML Document Types (XML 문서 타입의 분류)

  • Lee Jung-Won;Park Seung-Soo
    • Journal of KIISE:Databases, v.32 no.2, pp.161-176, 2005
  • … developing and applying XML techniques. One key aspect of our taxonomy is that it supports the credibility of results by evaluating which XML document types a method can process. Another key aspect is that it provides a basis for determining which method is best for the target XML document types. An application to preparing for XML document mining shows that our taxonomy can indicate which XML document types to consider during the preparation process and which to use as targets in experiments.

Incremental Clustering of XML Documents based on Similar Structures (유사 구조 기반 XML 문서의 점진적 클러스터링)

  • Hwang Jeong Hee;Ryu Keun Ho
    • Journal of KIISE:Databases, v.31 no.6, pp.699-709, 2004
  • XML is increasingly important in data exchange and information management. A starting point for retrieving document structure and integrating documents efficiently is to cluster documents that have similar structure, because such clusters can be retrieved more flexibly and quickly than a whole collection of structurally heterogeneous documents. In this paper, we therefore propose an incremental clustering method based on similar structures that is useful for retrieving the structure of XML documents and integrating them. Unlike existing methods, which measure similarity between documents using vectors, we use a clustering algorithm for transactional data that handles large numbers of documents. We first extract the representative structures of XML documents using a sequential pattern algorithm, and then perform similar-structure-based document clustering, treating each document as a transaction and the document's representative structures as the transaction's items. In addition, we define cluster cohesion and inter-cluster similarity, and analyze the efficiency of the proposed method through experimental comparison with an existing method.
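
A minimal sketch of the transaction-style incremental clustering described above: each document is reduced to a set of representative structure paths (its transaction items) and joins the cluster with the highest overlap, or starts a new one. The Jaccard measure and the similarity threshold are assumptions; the paper defines its own cohesion and inter-cluster similarity measures.

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def incremental_cluster(docs, threshold=0.5):
    clusters = []  # each cluster: {"items": set of structures, "members": [ids]}
    for doc_id, items in docs:
        best, best_sim = None, 0.0
        for c in clusters:
            sim = jaccard(items, c["items"])
            if sim > best_sim:
                best, best_sim = c, sim
        if best is not None and best_sim >= threshold:
            best["members"].append(doc_id)
            best["items"] |= items  # grow the cluster's item set
        else:
            clusters.append({"items": set(items), "members": [doc_id]})
    return clusters

docs = [("d1", {"/book/title", "/book/author"}),
        ("d2", {"/book/title", "/book/price"}),
        ("d3", {"/movie/title", "/movie/cast"})]
print(incremental_cluster(docs))
```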

A Message Conversion System based on MDR for Resolving Metadata Heterogeneity (메타데이타 이질성 해결을 위한 MDR 기반의 메시지 변환 시스템)

  • 김진관;김중일;정동원;백두권
    • Journal of KIISE:Databases, v.31 no.3, pp.232-242, 2004
  • Metadata is, generally, data about data: it improves data sharing and exchange by explicitly describing the meaning and representation of data. However, metadata has been created in various ways, and this has caused another kind of heterogeneity, the metadata heterogeneity problem. Recently, research on the metadata gateway approach, which tolerates metadata heterogeneity, has progressed actively. However, existing commercial systems implemented with the metadata gateway approach depend on a particular metadata schema. In this paper, we propose a message conversion system that separates mapping information from the mapping rules between heterogeneous metadata schemas. The proposed system dynamically manages standardized data elements by applying ISO/IEC 11179. It therefore provides a set of standard data elements for creating the metadata of new databases consistently, and offers a fundamental resolution to the metadata heterogeneity problem.
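
A hedged sketch of the conversion idea above: local field names from heterogeneous schemas are mapped onto shared standard data elements (in the style of an ISO/IEC 11179 registry), so a message can be rewritten for any target schema without hard-coding pairwise rules. The registry contents and field names are invented for illustration.

```python
REGISTRY = {  # local field name -> standard data element identifier
    "cust_nm": "PersonName", "customer": "PersonName",
    "tel_no": "TelephoneNumber", "phone": "TelephoneNumber",
}

def to_standard(message: dict) -> dict:
    """Lift a message's local field names to standard data elements."""
    return {REGISTRY[k]: v for k, v in message.items() if k in REGISTRY}

def convert(message: dict, target_schema: dict) -> dict:
    """target_schema maps the receiver's local names to standard elements."""
    std = to_standard(message)
    return {local: std[elem] for local, elem in target_schema.items() if elem in std}

msg = {"cust_nm": "Kim", "tel_no": "02-123-4567"}
print(convert(msg, {"customer": "PersonName", "phone": "TelephoneNumber"}))
```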

MMJoin: An Optimization Technique for Multiple Continuous MJoins over Data Streams (데이타 스트림 상에서 다중 연속 복수 조인 질의 처리 최적화 기법)

  • Byun, Chang-Woo;Lee, Hun-Zu;Park, Seog
    • Journal of KIISE:Databases, v.35 no.1, pp.1-16, 2008
  • Join queries, though costly, are essential to a data stream management system in a sensor network, where large numbers of small data items are generated. Because a data stream is unbounded, it is reasonable for each join operator to carry a sliding-window constraint to prevent disk I/O. In addition, a join operator should accept multiple inputs to produce overall results; the MJoin operator with sliding windows satisfies both requirements. In this paper, we consider a data stream environment in which multiple MJoin operators are registered, and propose MMJoin, which addresses building and processing a globally shared query plan in light of the characteristics of the sliding-window MJoin operator. First, we propose a solution for building the globally shared query execution plan. Second, we solve the problems of updating window sizes and routing join results. This study can serve as a foundation for optimizing multiple continuous joins in data stream environments.
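
A compact sketch of a sliding-window multi-way join of the kind MMJoin builds on: each arriving tuple is appended to its stream's window, expired tuples are evicted, and the new tuple probes the other windows. The window size, key layout, pairwise-only probe, and non-decreasing timestamps are simplifying assumptions; MMJoin's shared-plan construction is not shown.

```python
from collections import deque

class SlidingMJoin:
    def __init__(self, n_streams: int, window: float):
        self.window = window
        self.windows = [deque() for _ in range(n_streams)]  # (ts, key, value)

    def insert(self, stream: int, ts: float, key, value):
        # evict expired tuples, assuming timestamps arrive non-decreasing
        for w in self.windows:
            while w and w[0][0] < ts - self.window:
                w.popleft()
        self.windows[stream].append((ts, key, value))
        # probe the other windows for matching keys (pairwise results here;
        # a full MJoin would keep probing to produce complete n-way matches)
        results = []
        for i, w in enumerate(self.windows):
            if i == stream:
                continue
            results += [(value, v) for (_, k, v) in w if k == key]
        return results

j = SlidingMJoin(n_streams=3, window=10.0)
j.insert(0, 1.0, "k1", "a")
print(j.insert(1, 2.0, "k1", "b"))  # [('b', 'a')]
```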

Metadata Management for E-Commerce Transactions in Digital Library (디지털 도서관에서 전자상거래 트랜잭션을 위한 메타데이타 관리 기법)

  • Choe, Il-Hwan;Park, Seog
    • Journal of KIISE:Databases, v.29 no.1, pp.34-43, 2002
  • Since a traditional static metadata set such as Dublin Core has static metadata attributes describing bibliographic information, applying it to new environments requires considering the integration of diverse metadata and the standardization and extension of metadata. In particular, as event-driven metadata writing methods embracing the notion of e-commerce have emerged for interoperability in digital libraries, traditional metadata management, which cannot distinguish between different kinds of update operations on newly extended metadata sets, causes unsuitable waiting for update operations and thus needs improvement. In this paper, we examine whether relaxed transaction consistency can be applied to digital libraries. We divide newer metadata into static metadata attributes, accessed by read operations within user read-only transactions, and dynamic metadata attributes, accessed by update operations within dynamic (e-commerce) update transactions. We propose a metadata management algorithm that takes this classification of metadata attributes and dynamic update transactions into account. Using two versions for minimal maintenance cost and an ARU (Appended Refresh Unit) for dynamic update transactions to minimize conflicts between read and write operations yields fast response times and a high recency ratio. Performance evaluation shows that our algorithm outperforms other algorithms in newer metadata environments.
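
A minimal two-version sketch of the mechanism described above: read-only transactions always see the last committed version of a dynamic metadata attribute, while an update transaction prepares the next version without blocking readers. The class and attribute are hypothetical, and the paper's ARU bookkeeping is omitted.

```python
class TwoVersionAttr:
    def __init__(self, value):
        self.committed = value  # version visible to read-only transactions
        self.pending = None     # version being built by an update transaction

    def read(self):
        return self.committed   # readers never wait on writers

    def write(self, value):
        self.pending = value    # writer works on its own version

    def commit(self):
        if self.pending is not None:
            self.committed, self.pending = self.pending, None

price = TwoVersionAttr(10_000)   # hypothetical e-commerce attribute
price.write(12_000)
print(price.read())  # still 10000: readers see the committed version
price.commit()
print(price.read())  # now 12000
```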

Efficient Transmission Structure and Key Management Mechanism Using Key Provisioning on Medical Sensor Networks (의료 센서 네트워크에서의 효율적인 전송 구조 및 Key Provisioning을 사용한 키 관리 기법 연구)

  • Seo, Jae-Won;Kim, Mi-Hui;Chae, Ki-Joon
    • The KIPS Transactions:PartC, v.16C no.3, pp.285-298, 2009
  • With the development of ubiquitous technologies, sensor networks are used in various areas. The medical field in particular is a significant application area for sensor networks, and it has recently become more important with the standardization of body sensor network technology. Medical sensor networks have special characteristics of their own, different from those of sensor networks for general applications or environments. In this paper, we propose a hierarchical medical sensor network structure that takes the properties of medical applications into account, and introduce a transmission mechanism based on this hierarchical structure. Our mechanism assigns priorities and threshold values to medical sensor nodes according to the patient's needs and health condition, so that a cluster head can transmit emergency data to the base station rapidly. We also present a new key establishment mechanism for the proposed structure and transmission mechanism, based on the key management scheme proposed by L. Eschenauer and V. Gligor. We use key provisioning for emergency nodes that have high priority based on the patient's health condition; by preparing key establishment through provisioning, an emergency node can establish a key with a new cluster head and transmit urgent messages more rapidly. We analyze the efficiency of our mechanism by comparing traffic volume and energy consumption through analysis and simulation with the QualNet simulator. We also implemented our key management mechanism on Tmote Sky sensor boards using TinyOS 2.0, and these experiments showed that the new mechanism can be used in actual network designs.
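
A hedged sketch of random key predistribution in the style of Eschenauer and Gligor, with the key-provisioning twist described above: emergency (high-priority) nodes draw a larger key ring, which raises the probability of already sharing a key with a new cluster head. The pool size, ring sizes, and key-selection rule are illustrative assumptions, not the paper's values.

```python
import random

KEY_POOL = list(range(1000))  # global pool of key identifiers

def key_ring(size: int) -> set:
    """Predistribute a random ring of keys drawn from the shared pool."""
    return set(random.sample(KEY_POOL, size))

def shared_key(ring_a: set, ring_b: set):
    """Two nodes can communicate if their rings intersect."""
    common = ring_a & ring_b
    return min(common) if common else None  # any agreed rule for picking one

normal_node = key_ring(50)
emergency_node = key_ring(100)  # provisioned with extra keys (higher priority)
cluster_head = key_ring(50)
print(shared_key(emergency_node, cluster_head))
```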

An Efficient Incremental Maintenance Method for Data Cubes in Data Warehouses (데이타 웨어하우스에서 데이타 큐브를 위한 효율적인 점진적 관리 기법)

  • Lee, Ki-Yong;Park, Chang-Sup;Kim, Myoung-Ho
    • Journal of KIISE:Databases, v.33 no.2, pp.175-187, 2006
  • The data cube is an aggregation operator that computes group-bys for all possible combinations of dimension attributes. When the number of dimension attributes is $n$, a data cube computes $2^n$ group-bys, each of which is called a cuboid. Data cubes are often precomputed and stored as materialized views in data warehouses, and they need to be updated when the source relations change. Incremental maintenance of a data cube computes and propagates only its changes. To compute the change of a data cube with $2^n$ cuboids, previous works compute a delta cube that has the same number of cuboids as the original data cube; each cuboid in the delta cube is called a delta cuboid. Thus, as the number of dimension attributes increases, the cost of computing the delta cube increases significantly. In this paper, we propose an incremental cube maintenance method that maintains a data cube using only $\binom{n}{\lceil n/2 \rceil}$ delta cuboids, so the cost of computing the delta cube is substantially reduced. Through various experiments, we show the performance advantages of our method over previous methods.
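
A quick numeric check of the cost claim above: the number of delta cuboids drops from $2^n$ (one per cuboid) to $\binom{n}{\lceil n/2 \rceil}$ under the proposed method.

```python
from math import ceil, comb

for n in range(2, 11):
    print(f"n={n}: 2^n = {2**n:5d} vs C(n, ceil(n/2)) = {comb(n, ceil(n / 2)):4d}")
# e.g. n=10: 1024 cuboids in the full delta cube vs. 252 delta cuboids
```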

Discovering Interdisciplinary Convergence Technologies Using Content Analysis Technique Based on Topic Modeling (토픽 모델링 기반 내용 분석을 통한 학제 간 융합기술 도출 방법)

  • Jeong, Do-Heon;Joo, Hwang-Soo
    • Journal of the Korean Society for information Management, v.35 no.3, pp.77-100, 2018
  • The objective of this study is to present a process for discovering interdisciplinary convergence technologies using text mining of big data. For convergence research on biotechnology (BT) and information and communications technology (ICT), the following steps were performed: (1) collecting sufficient metadata from research articles based on a BT terminology list; (2) generating an intellectual structure of emerging technologies using a Pathfinder network scaling algorithm; (3) analyzing content with topic modeling. Three further steps were used to derive items of BT-ICT convergence technology: (4) expanding the BT terminology list into superordinate technology concepts to obtain ICT-related information from BT; (5) automatically collecting article metadata for the two fields using an OpenAPI service; (6) analyzing the content of BT-ICT topic models. The study offers the following findings. First, a terminology list can be an important knowledge base for discovering convergence technologies. Second, analyzing a large quantity of literature requires text mining, which facilitates the analysis by reducing the dimensionality of the data. The methodology suggested here for processing and analyzing the data is efficient for discovering technologies with a high likelihood of interdisciplinary convergence.
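
A minimal topic-modeling sketch in the spirit of steps (3) and (6) above, using gensim's LdaModel as one possible implementation; the paper does not name its tooling, and the toy token lists stand in for the collected BT/ICT article metadata.

```python
from gensim import corpora, models

# toy stand-ins for tokenized BT/ICT article metadata
texts = [["gene", "sequencing", "sensor"],
         ["wireless", "sensor", "network", "protocol"],
         ["gene", "expression", "cell", "pathway"]]

dictionary = corpora.Dictionary(texts)           # token -> id mapping
corpus = [dictionary.doc2bow(t) for t in texts]  # bag-of-words vectors
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)

for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```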