• Title/Summary/Keyword: Data Copy

346 search results

Modeling of a controlled retransmission scheme for loss recovery in optical burst switching networks

  • Duong, Phuoc Dat; Nguyen, Hong Quoc; Dang, Thanh Chuong; Vo, Viet Minh Nhat
    • ETRI Journal / v.44 no.2 / pp.274-285 / 2022
  • Retransmission in optical burst switching networks recovers data loss by retransmitting dropped bursts. The ingress node temporarily stores a copy of each complete burst and resends it whenever it receives a retransmission request from a core node. Several retransmission schemes have been suggested, but uncontrolled retransmission often increases the network load, consumes more bandwidth, and consequently raises the probability of contention. Controlled retransmission is therefore essential. This paper proposes a new controlled retransmission scheme for loss recovery, in which the available bandwidth of wavelength channels and the burst lifetime serve as the network conditions that determine whether to retransmit a dropped burst. A retrial queue-based analysis model is also constructed to validate the proposed scheme. Simulation and analysis results show that the controlled retransmission scheme outperforms previously suggested schemes in byte loss probability, successful retransmission rate, and network throughput.
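The gating idea in this abstract can be sketched as a simple predicate: retransmit a dropped burst only if it can still arrive within its lifetime and a wavelength channel is free. The function name, parameters, and thresholds below are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of a lifetime/bandwidth-gated retransmission decision.
# Names and thresholds are illustrative, not taken from the paper.

def should_retransmit(time_now, burst_deadline, retx_delay,
                      available_channels, min_free_channels=1):
    """Retransmit a dropped burst only if it can still arrive in time
    and at least `min_free_channels` wavelength channels are free."""
    arrives_in_time = time_now + retx_delay <= burst_deadline
    has_bandwidth = available_channels >= min_free_channels
    return arrives_in_time and has_bandwidth

# A burst with lifetime to spare and 2 free channels is retried;
# one whose deadline would be missed is dropped for good.
print(should_retransmit(10.0, 15.0, 3.0, 2))  # True
print(should_retransmit(10.0, 12.0, 3.0, 2))  # False
```

An uncontrolled scheme corresponds to this predicate always returning true, which is what inflates load and contention.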

A Study on Formative Characteristics of Digital Images: Focused on NFT Arts (디지털 이미지의 조형적 특성 연구: NFT 예술작품을 중심으로)

  • Kim, Hyesung
    • Journal of Information Technology Applications and Management / v.29 no.2 / pp.1-15 / 2022
  • Images hold more power in the 21st century than ever before. With advances in technology and media, digital images can be created and received easily and promptly, which long made it hard for digital images to gain value as works of art. After NFT technology was developed and applied to digital images, however, it became possible to distinguish an original from its copies. People are exploring various ways for digital images to be acknowledged as art, and the changing nature of digital images is creating new paradigms in the history of art. Through NFT artworks rising as a new style, this paper analyzes the new properties and changing characteristics of digital images in order to anticipate the characteristics of artworks that the public produces, enjoys, and consumes. The study is grounded in experimental analysis methods, and the author expects it to contribute, from various perspectives, to understanding which kinds of visual culture and artworks digital images are influencing today.

Complete chloroplast genome sequence of Clematis calcicola (Ranunculaceae), a species endemic to Korea

  • Beom Kyun PARK; Young-Jong JANG; Dong Chan SON; Hee-Young GIL; Sang-Chul KIM
    • Korean Journal of Plant Taxonomy / v.52 no.4 / pp.262-268 / 2022
  • The complete chloroplast genome (cp genome) sequence of Clematis calcicola J. S. Kim (Ranunculaceae) is 159,655 bp in length. It consists of large (79,451 bp) and small (18,126 bp) single-copy regions and a pair of identical inverted repeats (31,039 bp). The genome contains 92 protein-coding genes, 36 transfer RNA genes, eight ribosomal RNA genes, and two pseudogenes. A phylogenetic analysis based on the cp genome of 19 taxa showed high similarity between our cp genome and data published for C. calcicola, which is recognized as a species endemic to the Korean Peninsula. The complete cp genome sequence of C. calcicola reported here provides important information for future phylogenetic and evolutionary studies of Ranunculaceae.
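The reported region lengths are internally consistent, which is worth checking: the large and small single-copy regions plus two identical inverted repeats must sum to the total genome length.

```python
# Consistency check of the reported chloroplast genome structure:
# large single-copy + small single-copy + 2 x inverted repeat = total.
lsc, ssc, ir = 79_451, 18_126, 31_039
total = lsc + ssc + 2 * ir
print(total)  # 159655, matching the reported genome length of 159,655 bp
```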

Manchu Script Letters Dataset Creation and Labeling

  • Aaron Daniel Snowberger; Choong Ho Lee
    • Journal of information and communication convergence engineering / v.22 no.1 / pp.80-87 / 2024
  • The Manchu language holds historical significance, but a complete dataset of Manchu script letters for training optical character recognition machine-learning models is currently unavailable. Therefore, this paper describes the process of creating a robust dataset of extracted Manchu script letters. Rather than performing automatic letter segmentation based on whitespace or the thickness of the central word stem, an image of the Manchu script was manually inspected, and one copy of the desired letter was selected as a region of interest. This selected region of interest was used as a template to match all other occurrences of the same letter within the Manchu script image. Although the dataset in this study contained only 4,000 images of five Manchu script letters, these letters were collected from twenty-eight writing styles. A full dataset of Manchu letters is expected to be obtained through this process. The collected dataset was normalized and trained using a simple convolutional neural network to verify its effectiveness.
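The template-matching step described above can be sketched in miniature: slide a manually chosen letter template over a binary page image and record every location where the pixels match within an error budget. The toy image, template, and threshold below are stand-ins, not the paper's data or exact matching criterion.

```python
# Minimal sketch of template matching for letter extraction: a manually
# selected region of interest (the template) is matched against every
# position on the page, collecting all occurrences of the same letter.

def match_template(page, template, max_mismatch=0):
    ph, pw = len(page), len(page[0])
    th, tw = len(template), len(template[0])
    hits = []
    for y in range(ph - th + 1):
        for x in range(pw - tw + 1):
            mismatch = sum(
                page[y + dy][x + dx] != template[dy][dx]
                for dy in range(th) for dx in range(tw))
            if mismatch <= max_mismatch:
                hits.append((y, x))  # top-left corner of a match
    return hits

# A 1-pixel "letter" template found at two places on a toy binary page.
page = [[0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
template = [[1]]
print(match_template(page, template))  # [(0, 1), (1, 3)]
```

In practice a normalized correlation score (e.g., OpenCV's `matchTemplate`) would replace the raw mismatch count to tolerate variation across the twenty-eight writing styles.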

Secure and Efficient Client-side Deduplication for Cloud Storage (안전하고 효율적인 클라이언트 사이드 중복 제거 기술)

  • Park, Kyungsu; Eom, Ji Eun; Park, Jeongsu; Lee, Dong Hoon
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.1 / pp.83-94 / 2015
  • Deduplication, a technique that eliminates redundant data by storing only a single copy of each item, lets clients and a cloud server manage stored data efficiently. Since the data are kept on an untrusted public cloud server, however, both invasion of data privacy and data loss can occur. Although many studies in recent years have proposed secure deduplication schemes, security problems causing serious damage, as well as inefficiency, still remain. In this paper, we propose a secure and efficient client-side deduplication scheme with a key server, based on Bellare et al.'s scheme and a challenge-response method. Furthermore, we point out potential risks of client-side deduplication and show that our scheme is secure against various attacks and provides high efficiency when uploading large data.
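The challenge-response element mentioned in the abstract can be sketched as follows: a client first offers only a fingerprint of its file; if the server already holds that data, it challenges the client to hash a slice of the content, so knowing the fingerprint alone is not enough to claim ownership. The protocol details below are illustrative assumptions, not the scheme from the paper.

```python
# Hedged sketch of client-side deduplication with a possession challenge.
import hashlib

class DedupServer:
    def __init__(self):
        self.store = {}          # fingerprint -> data

    def offer(self, fingerprint):
        if fingerprint not in self.store:
            return None          # unknown data: ask for a full upload
        data = self.store[fingerprint]
        start = len(data) // 3   # a fixed slice stands in for a random one
        end = 2 * len(data) // 3
        return (start, end)      # challenge: prove you hold this slice

    def verify(self, fingerprint, challenge, response):
        start, end = challenge
        expected = hashlib.sha256(self.store[fingerprint][start:end]).hexdigest()
        return response == expected

server = DedupServer()
data = b"a large file that two clients both hold"
fp = hashlib.sha256(data).hexdigest()
server.store[fp] = data                      # first client uploaded in full

challenge = server.offer(fp)                 # second client offers the hash
response = hashlib.sha256(data[challenge[0]:challenge[1]]).hexdigest()
print(server.verify(fp, challenge, response))  # True: upload can be skipped
```

A real scheme would randomize the challenged slice per request and combine this with convergent encryption so the server never sees plaintext.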

Data Central Network Technology Trend Analysis using SDN/NFV/Edge-Computing (SDN, NFV, Edge-Computing을 이용한 데이터 중심 네트워크 기술 동향 분석)

  • Kim, Ki-Hyeon; Choi, Mi-Jung
    • KNOM Review / v.22 no.3 / pp.1-12 / 2019
  • Recently, research using big data and AI has emerged as a major issue in the ICT field, but the size of the big data used for research is growing exponentially. Users also point out that, with existing network methods, sending and receiving big data can take longer than copying it to a hard disk and shipping the disk. Researchers therefore require dynamic, flexible network technology that can transmit data at high speed and accommodate various network structures. SDN/NFV technologies make a network programmable, so a network suited to users' needs can be provided and the network's flexibility and security problems can be solved more easily. Another problem when running AI workloads is that centralized data processing cannot guarantee real-time performance, and network delay occurs as traffic increases. To solve this, edge computing, which moves away from the centralized approach, should be used. In this paper, we investigate the concepts and research trends of SDN, NFV, and edge computing, and analyze the trends in data-centric network technologies that combine these three technologies.

Vector Data Hashing Using Line Curve Curvature (라인 곡선 곡률 기반의 벡터 데이터 해싱)

  • Lee, Suk-Hwan; Kwon, Ki-Ryong
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.2C / pp.65-77 / 2011
  • With the rapid expansion of application fields for vector data models, such as CAD design drawings and GIS digital maps, security techniques for vector data models have become an issue. This paper presents vector data hashing for the authentication and copy protection of vector data models. The proposed hashing groups the polylines in the main layers of a vector data model and generates group coefficients from the line curve curvatures of the first and second type of all polylines. We then calculate feature coefficients by projecting the group coefficients onto a random pattern and finally generate a binary hash by binarizing the feature coefficients. Experimental results using a number of CAD drawings and GIS digital maps verify that the proposed hashing is robust against various attacks and achieves uniqueness and security through the random key.
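The pipeline described above can be sketched end to end: extract curvature features from each polyline (here simplified to turning angles at interior vertices), project them onto key-seeded random patterns, and binarize the projections into hash bits. The feature choice and key handling below are simplified assumptions, not the paper's exact formulation.

```python
# Sketch of a curvature-based vector data hash: features -> keyed random
# projection -> sign binarization. Illustrative, not the published scheme.
import math, random

def turning_angles(polyline):
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(polyline, polyline[1:], polyline[2:]):
        a = math.atan2(y1 - y0, x1 - x0)
        b = math.atan2(y2 - y1, x2 - x1)
        angles.append(b - a)          # a crude stand-in for curvature
    return angles

def vector_hash(polylines, key, n_bits=8):
    feats = [a for pl in polylines for a in turning_angles(pl)]
    rng = random.Random(key)          # key-seeded random patterns
    bits = []
    for _ in range(n_bits):
        pattern = [rng.uniform(-1, 1) for _ in feats]
        proj = sum(f * p for f, p in zip(feats, pattern))
        bits.append(1 if proj >= 0 else 0)
    return bits

lines = [[(0, 0), (1, 0), (1, 1), (0, 1)]]   # a toy drawing layer
h1 = vector_hash(lines, key=42)
h2 = vector_hash(lines, key=42)
print(h1 == h2)  # True: the same data and key give the same hash
```

Sign binarization of random projections is what gives robustness: small perturbations of the coordinates rarely flip the sign of a projection.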

A Plagiarism Detection Technique for Source Codes Considering Data Structures (데이터 구조를 고려한 소스코드 표절 검사 기법)

  • Lee, Kihwa; Kim, Yeoneo; Woo, Gyun
    • KIPS Transactions on Computer and Communication Systems / v.3 no.6 / pp.189-196 / 2014
  • Though plagiarism is illegal and should be avoided, it still occurs frequently. Source code, in particular, is plagiarized more often than other material because its digital nature makes it easy to copy. A variety of studies on preventing code plagiarism have been reported. However, previous plagiarism detection techniques for source code do not consider data structures, even though a source code consists of both data structures and algorithms. In this paper, a plagiarism detection technique for source code that considers data structures is proposed. Specifically, the data structures of two source codes are represented as sets of trees and compared with each other using the Hungarian method. To show the usefulness of this technique, an experiment was performed on 126 source codes submitted as homework in an object-oriented programming course. When both the data structures and the algorithms of the source codes are considered, the precision and F-measure improve by 22.6% and 19.3%, respectively, over the case where only the algorithms are considered.
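The matching step this abstract relies on is an assignment problem: given pairwise similarities between the data-structure trees of two programs, pick the one-to-one pairing with the maximum total similarity. The paper uses the Hungarian method; for tiny inputs, a brute-force search over permutations (below, stdlib only) finds the same optimum, and the similarity matrix is a made-up example.

```python
# Sketch of the tree-matching step as an assignment problem. For n trees
# the Hungarian method solves this in O(n^3); brute force is O(n!) and is
# used here only to keep the example self-contained.
from itertools import permutations

def best_assignment(sim):
    n = len(sim)
    best_score, best_perm = -1.0, None
    for perm in permutations(range(n)):
        score = sum(sim[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_score, best_perm

# Toy similarity matrix between 3 trees of program A and 3 of program B.
sim = [[0.9, 0.1, 0.2],
       [0.2, 0.8, 0.3],
       [0.1, 0.4, 0.7]]
score, perm = best_assignment(sim)
print(round(score, 1), perm)  # 2.4 (0, 1, 2)
```

A high optimal total similarity between the two tree sets is then evidence that the data structures, not just the algorithms, were copied.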

Primary Copy based Data Replication Scheme for Ensuring Data Consistency in Mobile Ad-hoc Networks (이동적응망에서 데이터 일관성 보장을 위한 주사본 기반 데이터 중복 기법)

  • Moon, Ae-Kyung
    • Proceedings of the Korean Information Science Society Conference / 2005.11a / pp.334-336 / 2005
  • A mobile ad-hoc network (MANET) is a network composed of wireless terminals that requires no network infrastructure. This characteristic raises the likelihood of network partitions, which lowers the data access rate of mobile terminals. To address this, mobile nodes keep replicated copies of data, and maintaining consistency among these replicas requires a dedicated replication-management scheme. However, because the mobile nodes composing a MANET generally have limited power and are prone to disconnection, guaranteeing replica consistency is known to be a hard problem. Existing data replication-management schemes for MANETs focus on raising the access rate by computing data access frequencies and, because of the difficulty of guaranteeing consistency for updated data, mainly consider only read operations. When update transactions are supported, most schemes do not guarantee data consistency, citing high communication costs. In addition, because a mobile node executes update operations through multiple servers, communication overhead causes large power consumption. This paper proposes a data replication scheme that maintains data consistency by allowing updates only through a primary-copy node. Considering the energy characteristics of mobile nodes, the proposed scheme delegates update propagation and consistency maintenance to nodes with more energy, thereby improving the energy efficiency of mobile nodes with relatively low energy.
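The core mechanism can be sketched in a few lines: all updates funnel through one primary node, which propagates the new value to the replicas, so replicas never diverge through concurrent writes. Electing the highest-energy node as primary mirrors the energy-aware delegation described above; the class names and energy values are illustrative.

```python
# Minimal sketch of primary-copy replication with energy-aware delegation.
# Node/energy details are illustrative stand-ins for the paper's scheme.

class Node:
    def __init__(self, name, energy):
        self.name, self.energy, self.data = name, energy, {}

class PrimaryCopyGroup:
    def __init__(self, nodes):
        # delegate the primary role to the node with the most energy
        self.primary = max(nodes, key=lambda n: n.energy)
        self.replicas = [n for n in nodes if n is not self.primary]

    def update(self, key, value):
        self.primary.data[key] = value       # single writer: the primary
        for r in self.replicas:              # primary propagates the update
            r.data[key] = value

nodes = [Node("a", 30), Node("b", 80), Node("c", 50)]
group = PrimaryCopyGroup(nodes)
group.update("sensor", 42)
print(group.primary.name)                          # b: highest-energy node
print(all(n.data["sensor"] == 42 for n in nodes))  # True: replicas agree
```

A real MANET scheme must also handle the primary disconnecting mid-propagation, which is where most of the protocol's complexity lives.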


An Avoidance-Based Cache Consistency Algorithm without Unnecessary Callback (불필요한 콜백을 제거한 회피 기반의 캐쉬 일관성 알고리즘)

  • Kim, Chi-Yeon
    • Journal of Advanced Navigation Technology / v.10 no.2 / pp.120-127 / 2006
  • Client-side data caching is an important technology for interaction with a server: data are cached and operated on at the client node, which reduces network latency and increases resource utilization at the client. In a client-server environment, without a cache consistency algorithm we cannot guarantee the correctness of client applications. In this paper, we propose a new asynchronous avoidance-based cache consistency algorithm that removes the additional callback caused by lock escalation in AACC. Through a comprehensive performance analysis, we show that the proposed algorithm exchanges fewer messages than AACC. One-copy serializability is used to prove the correctness of the proposed algorithm.
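The general callback mechanism the paper optimizes can be sketched as follows: the server tracks which clients cache each item and calls them back to invalidate stale copies before a write commits. This illustrates the baseline mechanism only, not the proposed AACC variant; all names are illustrative.

```python
# Sketch of callback-style cache consistency: count the callback messages
# a write costs, which is the quantity the proposed algorithm reduces.

class Server:
    def __init__(self):
        self.data, self.cachers = {}, {}     # key -> value, key -> clients

    def read(self, client, key):
        self.cachers.setdefault(key, set()).add(client)
        client.cache[key] = self.data[key]
        return client.cache[key]

    def write(self, writer, key, value):
        callbacks = 0
        for c in self.cachers.get(key, set()) - {writer}:
            c.cache.pop(key, None)           # callback: drop stale copy
            callbacks += 1
        self.cachers[key] = {writer}
        self.data[key] = value
        writer.cache[key] = value
        return callbacks                     # messages needed for this write

class Client:
    def __init__(self):
        self.cache = {}

server, c1, c2 = Server(), Client(), Client()
server.data["x"] = 1
server.read(c1, "x"); server.read(c2, "x")
print(server.write(c1, "x", 2))  # 1: only c2 needed a callback
print("x" in c2.cache)           # False: the stale copy was invalidated
```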
