• Title/Summary/Keyword: Server Replication


Implementation and Performance Measuring of Erasure Coding of Distributed File System (분산 파일시스템의 소거 코딩 구현 및 성능 비교)

  • Kim, Cheiyol;Kim, Youngchul;Kim, Dongoh;Kim, Hongyeon;Kim, Youngkyun;Seo, Daewha
    • The Journal of Korean Institute of Communications and Information Sciences / v.41 no.11 / pp.1515-1527 / 2016
  • With the growth of big data, machine learning, and cloud computing, the importance of storage that can hold large amounts of unstructured data has been growing. Commodity-hardware-based distributed file systems such as MAHA-FS, GlusterFS, and the Ceph file system have therefore received much attention for their scale-out capability and low cost. For data fault tolerance, most of these file systems initially used replication, but as storage sizes grow to tens or hundreds of petabytes, the low space efficiency of replication has become a problem. This paper applies an erasure-coding fault-tolerance policy to MAHA-FS for higher space efficiency and introduces the VDelta technique to solve the data consistency problem. We compare the performance of two file systems with different I/O processing architectures: MAHA-FS, which is server-centric, and GlusterFS, which is client-centric. We found that the erasure-coding performance of MAHA-FS is better than that of GlusterFS.
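
The space-efficiency argument above can be illustrated with a toy single-parity code. The abstract does not give MAHA-FS's actual coding parameters; the values of k and m and the XOR-parity scheme below are generic illustrations, not the paper's implementation:

```python
# Generic illustration of why erasure coding is more space-efficient
# than n-way replication (illustrative parameters, not MAHA-FS's).

def storage_overhead(replicas=None, k=None, m=None):
    """Raw bytes stored per byte of user data."""
    if replicas is not None:
        return float(replicas)          # n-way replication
    return (k + m) / k                  # (k data + m parity) erasure code

# Single-parity XOR code: any one lost block is recoverable.
def xor_parity(blocks):
    parity = bytes(len(blocks[0]))
    for b in blocks:
        parity = bytes(x ^ y for x, y in zip(parity, b))
    return parity

def recover(surviving, parity):
    """Rebuild the single missing data block from survivors + parity."""
    missing = parity
    for b in surviving:
        missing = bytes(x ^ y for x, y in zip(missing, b))
    return missing

data = [b"ab", b"cd", b"ef"]            # k = 3 data blocks
p = xor_parity(data)                    # m = 1 parity block
assert recover([data[0], data[2]], p) == data[1]

print(storage_overhead(replicas=3))     # 3x overhead for 3-way replication
print(storage_overhead(k=3, m=1))       # ~1.33x for a (3+1) code
```

Both configurations tolerate the loss of one block, but the erasure-coded layout stores roughly 1.33 bytes per user byte instead of 3.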

Implementation of the Large-scale Data Signature System Using Hash Tree Replication Approach (해시 트리 기반의 대규모 데이터 서명 시스템 구현)

  • Park, Seung Kyu
    • Convergence Security Journal / v.18 no.1 / pp.19-31 / 2018
  • As ICT technologies advance, an unprecedentedly large amount of digital data is created, transferred, stored, and utilized in every industry. With the growth in data scale and the advancement of the applied technologies, new services emerging from the use of large-scale data make our lives more convenient. However, cybercrimes such as data forgery and falsification of data generation times are also increasing, so technologies for verifying data integrity and generation time are necessary. Today, public-key-based signature technology is the most commonly used, but the costly system resources and the additional infrastructure required to manage certificates and keys make it impractical in large-scale data environments. This research introduces a new signature technology for large-scale data, based on hash functions and the Merkle tree, that consumes far fewer system resources. An improved method for processing distributed hash trees is also suggested to mitigate disruptions caused by server failures. A prototype system was implemented and its performance evaluated. The results show that the technology can be used effectively in areas that produce large-scale data, such as cloud computing, IoT, big data, and fin-tech.

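
The hash-tree idea the paper builds on can be sketched as follows: only the Merkle root needs to be signed or timestamped, and any single document is then verified against it with a logarithmic-size audit path. This is a generic Merkle-tree sketch; the paper's distributed tree processing and failure-mitigation method are not reproduced here.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def audit_path(leaves, index):
    """Sibling hashes needed to verify one leaf against the root."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, path, root):
    node = h(leaf)
    for sibling, node_is_left in path:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root

docs = [b"doc0", b"doc1", b"doc2", b"doc3"]
root = merkle_root(docs)                # only the root needs signing
assert verify(b"doc2", audit_path(docs, 2), root)
```

Signing one 32-byte root covers all documents, which is why this approach scales far better than issuing a public-key signature per document.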

Data Synchronization Among Mobile Servers in Wireless Communication (무선통신 환경에서 이동 서버간의 데이터 동기화 기법)

  • Kim, Eun-Hee;Choi, Byung-Kab;Lee, Eung-Jae;Ryu, Keun-Ho
    • The KIPS Transactions:PartD / v.13D no.7 s.110 / pp.901-908 / 2006
  • With the development of wireless communication techniques and mobile environments, we can transmit data between mobile systems without restrictions of time and space. Previous research on data communication between mobile systems has focused on sending or receiving small amounts of data and on data synchronization between a fixed server and mobile clients. However, in special environments such as a battlefield, two or more servers must be able to move independently of each other, share information with other systems, and synchronize data. We therefore propose a data synchronization method between independently moving systems in a mobile environment. The proposed method optimizes the data propagation path between servers, considering limited bandwidth and data processing under disconnected communication. In addition, we propose a data reduction method that considers the importance and sharing of information in order to reduce the data transmitted between servers. We verified the accuracy of the data after running our synchronization method in a real-world environment, and showed that it completes data synchronization within an allowed tolerance when the data propagation delay caused by server extension is taken into account.

Implementation of Mobile Agent Multicast Migration Model for Minimizing Network Required Time (네트워크 소요시간 최소화를 위한 이동 에이전트의 멀티캐스트 이주 모델 구현)

  • Kim Kwang-jong;Ko Hyun;Kim Young Ja;Lee Yon-sik
    • The KIPS Transactions:PartD / v.12D no.2 s.98 / pp.289-302 / 2005
  • The performance of a mobile agent varies widely according to factors such as the number of communications between hosts, the amount of transmitted data, the agent's size, and network conditions. In particular, the migration method strongly affects the overall performance of a distributed system. Most existing migration methods have a simple structure in which the agent visits hosts in a fixed order, continuously accumulating results after performing each task. In situations such as faults, obstacles, or service absence, this can be inefficient because the mobile agent's required network time increases. In this paper, we design and implement a Multicast Migration Model that minimizes the required network time by solving these problems. The model includes components such as a mobile agent with a call module and a naming agent, which provides object replication information and location transparency for distributed servers. We evaluate the implemented migration model and compare it with existing migration methods by applying it to a prototype system.
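
The contrast between fixed-order sequential visiting and multicast-style dispatch can be sketched with a toy concurrent dispatch. The host list, task timing, and thread-based "cloning" below are illustrative assumptions, not the paper's agent platform:

```python
import concurrent.futures
import time

HOSTS = ["h0", "h1", "h2", "h3"]        # illustrative host list
TASK_TIME = 0.05                        # simulated per-host work

def visit(host):
    """Stand-in for an agent performing its task on one host."""
    time.sleep(TASK_TIME)
    return f"result@{host}"

# Sequential migration: total time grows with the number of hosts.
start = time.perf_counter()
seq = [visit(h) for h in HOSTS]
seq_time = time.perf_counter() - start

# Multicast-style migration: agent clones are dispatched to all hosts
# at once and results are gathered afterwards, so one slow host does
# not delay visits to the others.
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    par = list(pool.map(visit, HOSTS))
par_time = time.perf_counter() - start

assert par_time < seq_time              # same results, less wall time
```

With four hosts the sequential pass takes roughly four task-times while the multicast pass takes roughly one, which mirrors the network-time reduction the model targets.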

The mitochondrial proteome analysis in wheat roots

  • Kim, Da-Eun;Roy, Swapan Kumar;Kamal, Abu Hena Mostafa;Kwon, Soo Jeong;Cho, Kun;Cho, Seong-Woo;Park, Chul-Soo;Woo, Sun-Hee
    • Proceedings of the Korean Society of Crop Science Conference / 2017.06a / pp.126-126 / 2017
  • Mitochondria are important in wheat, as in all crops, as the main source of ATP for cell maintenance and growth, including vitamin synthesis, amino acid metabolism, and photorespiration. To investigate the mitochondrial proteome of wheat seedling roots, a systematic and targeted analysis was carried out on root material from 15-day-old wheat seedlings. Mitochondria were isolated by Percoll gradient centrifugation, and extracted proteins were separated and analyzed by Tricine SDS-PAGE along with LTQ-FTICR mass spectrometry. From the isolated sample, 184 proteins were identified, of which 140 were predicted as mitochondrial and 44 as other subcellular proteins by a freeware subcellular predictor. The identified mitochondrial proteins were functionally classified into 12 classes using the ProtFun 2.2 server based on biological processes. Proteins were shown to be involved in amino acid biosynthesis (17.1%), biosynthesis of cofactors (6.4%), cell envelope (11.4%), central intermediary metabolism (10%), energy metabolism (20%), fatty acid metabolism (0.7%), purines and pyrimidines (5.7%), regulatory functions (0.7%), replication and transcription (1.4%), translation (22.1%), transport and binding (1.4%), and unknown (2.8%). These results indicate that many of the protein components and functions identified are common to other mitochondrial protein profiles reported to date. This dataset provides, to our knowledge, the first extensive picture of mitochondrial proteins from wheat roots. Future research should quantitatively analyze the wheat mitochondrial proteome at the spatial and developmental levels.


Data Replication and Migration Scheme for Load Balancing in Distributed Memory Environments (분산 인-메모리 환경에서 부하 분산을 위한 데이터 복제와 이주 기법)

  • Choi, Kitae;Yoon, Sangwon;Park, Jaeyeol;Lim, Jongtae;Bok, Kyoungsoo;Yoo, Jaesoo
    • KIISE Transactions on Computing Practices / v.22 no.1 / pp.44-49 / 2016
  • Recently, data has grown dramatically along with the growth of social media and digital devices, and distributed in-memory processing systems have been used to process large amounts of data efficiently. However, if load concentrates on a certain node in a distributed environment, that node's performance degrades significantly. In this paper, we propose a load balancing scheme that distributes load in a distributed memory environment. The proposed scheme replicates hot data to multiple nodes to manage node load, and migrates data, considering the load of the nodes, when nodes are added or removed. The client reduces the number of accesses to the central server by accessing the data node directly through the metadata of the hot data. To show the superiority of the proposed scheme, we compare it with an existing load balancing scheme through performance evaluation.
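
The hot-data replication and client-side metadata caching described above can be sketched as follows. The hot threshold, placement policy, and class names are illustrative assumptions; the paper does not specify them:

```python
import random
from collections import defaultdict

HOT_THRESHOLD = 3   # illustrative; the paper does not give a value

class MetadataServer:
    """Central server tracking access counts and replica placement."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.hits = defaultdict(int)
        self.placement = {}             # key -> list of node indices

    def locate(self, key):
        self.hits[key] += 1
        if key not in self.placement:
            self.placement[key] = [hash(key) % len(self.nodes)]
        # Replicate hot data to an extra node to spread read load.
        if self.hits[key] >= HOT_THRESHOLD and len(self.placement[key]) == 1:
            extra = (self.placement[key][0] + 1) % len(self.nodes)
            self.placement[key].append(extra)
        return [self.nodes[i] for i in self.placement[key]]

class Client:
    """Caches hot-key metadata so later reads skip the central server."""
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def read(self, key):
        if key not in self.cache:
            self.cache[key] = self.server.locate(key)
        return random.choice(self.cache[key])   # any replica serves reads

server = MetadataServer(nodes=["n0", "n1", "n2"])
for _ in range(5):
    server.locate("hot-key")            # repeated access marks it hot
assert len(server.placement["hot-key"]) == 2

client = Client(server)
node = client.read("hot-key")           # cached metadata, direct access
assert node in server.nodes
```

Once the metadata is cached, every subsequent read goes straight to a data node, which is how the scheme cuts central-server traffic.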

BeanFS: A Distributed File System for Large-scale E-mail Services (BeanFS: 대규모 이메일 서비스를 위한 분산 파일 시스템)

  • Jung, Wook;Lee, Dae-Woo;Park, Eun-Ji;Lee, Young-Jae;Kim, Sang-Hoon;Kim, Jin-Soo;Kim, Tae-Woong;Jun, Sung-Won
    • Journal of KIISE:Computer Systems and Theory / v.36 no.4 / pp.247-258 / 2009
  • Distributed file systems running on clusters of inexpensive commodity hardware are recognized as an effective solution to the explosive growth of storage demand at large-scale Internet service companies. This paper presents the design and implementation of BeanFS, a distributed file system for large-scale e-mail services. BeanFS is adapted to e-mail services as follows. First, a volume-based replication scheme alleviates the metadata management overhead of the central metadata server in dealing with a very large number of small files. Second, BeanFS employs a lightweight consistency maintenance protocol tailored to the simple access patterns of e-mail messages. Third, transient and permanent failures are treated separately, and recovery from transient failures is quick and low-overhead.
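
The volume-based replication idea can be sketched as follows: the metadata server maps each file to a volume and replicates whole volumes, so its state scales with the number of volumes rather than the number of small mail files. The volume count, replica factor, and hash-based mapping below are illustrative assumptions, not BeanFS internals:

```python
NUM_VOLUMES = 4    # illustrative volume count
REPLICAS = 2       # illustrative replica factor

class VolumeMetadata:
    """Central server keeps per-volume state, not per-file state,
    so metadata stays small even with millions of small mail files."""
    def __init__(self, nodes):
        # volume id -> storage nodes holding a replica of that volume
        self.volume_nodes = {
            v: [nodes[(v + r) % len(nodes)] for r in range(REPLICAS)]
            for v in range(NUM_VOLUMES)
        }

    def volume_of(self, path: str) -> int:
        return hash(path) % NUM_VOLUMES      # file -> volume mapping

    def replicas_of(self, path: str):
        return self.volume_nodes[self.volume_of(path)]

md = VolumeMetadata(nodes=["s0", "s1", "s2"])
nodes = md.replicas_of("/mail/user1/inbox/42.eml")
assert len(nodes) == REPLICAS
# All files in one volume share a replica list: the server tracks
# NUM_VOLUMES entries instead of one entry per file.
```

Replication, recovery, and placement decisions then operate on a handful of volumes rather than on each individual message file.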

Data Sharing Technique between Heterogeneous based on Cloud Service (클라우드 서비스 기반 이기종간의 데이터 공유 기법)

  • Seo, Jung-Hee;Park, Hung-Bog
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.3 / pp.391-398 / 2018
  • Data sharing between heterogeneous digital devices causes many problems because of their various interfaces. To solve this, this paper proposes heterogeneous data sharing through a cloud service and mobile D2D communication, which supports communication between different devices. The proposed technique reduces the load on the server that performs data synchronization. To minimize the data latency caused by data replication between different devices, write speed is improved by copying only the modified parts of the chunk list, and a cloud service model integrated with the mobile environment minimizes the network bandwidth consumed by synchronization for data sharing. The technique thus enables efficient data sharing across different spaces while maintaining data integrity and minimizing latency.
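
Copying only the modified parts of a chunk list can be sketched with fixed-size chunk hashing. The chunk size, function names, and the fixed-size chunking itself are illustrative assumptions about the abstract's technique:

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real systems use KB-MB chunks

def chunk_list(data: bytes):
    """Split data into fixed-size chunks and hash each one."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [hashlib.sha256(c).hexdigest() for c in chunks], chunks

def delta(old: bytes, new: bytes):
    """Indices and contents of changed chunks (only these are sent)."""
    old_hashes, _ = chunk_list(old)
    new_hashes, new_chunks = chunk_list(new)
    return [(i, new_chunks[i]) for i, hsh in enumerate(new_hashes)
            if i >= len(old_hashes) or hsh != old_hashes[i]]

def apply_delta(old: bytes, changes):
    """Rebuild the new version from the old copy plus changed chunks."""
    chunks = [old[i:i + CHUNK] for i in range(0, len(old), CHUNK)]
    for i, c in changes:
        if i < len(chunks):
            chunks[i] = c
        else:
            chunks.append(c)
    return b"".join(chunks)

old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
d = delta(old, new)
assert d == [(1, b"XXXX")]              # only the modified chunk is sent
assert apply_delta(old, d) == new
```

Only one of three chunks crosses the network here; unchanged chunks are detected by hash comparison, which is what keeps synchronization bandwidth low.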

NextAuction: A DID-based Robust Auction Service for Digital Contents

  • Lee, Young-Eun;Kim, Hye-Won;Lee, Myung-Joon
    • Journal of the Korea Society of Computer and Information / v.27 no.2 / pp.115-124 / 2022
  • In this paper, we present a next-generation NFT auction service, named NextAuction, which can reliably trade ownership of individual content using DID technology. Recently, as the types and sizes of tradable digital assets have expanded, the number of NFT transactions has increased, and a significant number of marketplaces are in operation. However, current NFT marketplaces authenticate users only through their associated blockchain wallets; it is desirable that ownership transfers through NFT transactions be transparently managed with a more reliable identity authentication service. NextAuction increases the reliability of auction participants by transparently and consistently authenticating users with the DID technique on the Klaytn blockchain. In addition, in preparation for server failures that may occur during the auction of individual content, it provides a robust auction service using the BR2K technique, which maintains consistent service through replication of the target service. The NextAuction service is developed by extending BCON, a blockchain-based content management service.