• Title/Summary/Keyword: Data Copy


Design and Performance Evaluation of Software RAID for Video-on-Demand Servers (주문형 비디오 서버를 위한 소프트웨어 RAID의 설계 및 성능 분석)

  • Koh, Jeong-Gook
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.3 no.2
    • /
    • pp.167-178
    • /
    • 2000
  • Software RAID (Redundant Arrays of Inexpensive Disks) is a storage system that provides the capabilities of hardware RAID in software, guaranteeing high reliability as well as high performance. In this paper, we propose an enhanced disk scheduling algorithm and a scheme to guarantee data reliability, and we design and implement a software RAID using these mechanisms to build a storage system for multimedia applications. Because the proposed algorithm corrects a defect of the traditional GSS algorithm, in which disk I/O requests are served in a fixed order, it minimizes buffer consumption and reduces the number of deadline misses through service-group exchange. The software RAID also alleviates data-copy overhead during disk service by sharing kernel memory. Although the implemented software RAID uses a parity approach to guarantee data reliability, it adopts a different data allocation scheme, which reduces the disk accesses needed for the logical XOR operations that compute new parity data on every write. The performance evaluation shows that a software RAID built with the proposed schemes can serve as a storage system for small-sized video-on-demand servers.
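The parity maintenance the abstract refers to is conventionally done with a read-modify-write XOR update, which reads only the old data block and the old parity block instead of every block in the stripe. A minimal sketch of that classic update (a generic illustration, not the paper's specific allocation scheme):

```python
def rmw_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Read-modify-write parity update for a single-parity stripe.

    new_parity = old_parity XOR old_data XOR new_data, so only two blocks
    must be read from disk rather than the whole stripe.
    """
    return bytes(p ^ d0 ^ d1 for p, d0, d1 in zip(old_parity, old_data, new_data))


# Verify against a full recompute over a two-data-block stripe:
block_a = bytes([1, 2, 3])      # block being overwritten
block_b = bytes([5, 5, 5])      # untouched block in the same stripe
parity = bytes(a ^ b for a, b in zip(block_a, block_b))
new_a = bytes([9, 8, 7])
updated = rmw_parity(block_a, new_a, parity)
```

The update is equivalent to XOR-ing all current data blocks, but its disk cost is constant in the stripe width, which is exactly the saving such schemes target.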


Garbage Collection Technique for Reduction of Migration Overhead and Lifetime Prolongment of NAND Flash Memory (낸드 플래시 메모리의 이주 오버헤드 감소 및 수명연장을 위한 가비지 컬렉션 기법)

  • Hwang, Sang-Ho;Kwak, Jong Wook
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.11 no.2
    • /
    • pp.125-134
    • /
    • 2016
  • NAND flash memory has unique characteristics, such as out-of-place update and a limited lifetime, compared with traditional storage systems. Under the out-of-place update scheme, a large number of invalid (so-called dead) pages are generated, and garbage collection is needed to reclaim them. Because garbage collection incurs not only erase operations but also copy operations that move valid (so-called live) pages to other blocks, many garbage collection techniques have been proposed to reduce this overhead and to increase the lifetime of NAND flash systems. These techniques sometimes select victim blocks containing cold data for wear leveling, but most of them overlook the cost of selecting such victim blocks. In this paper, we propose a garbage collection technique named CAPi (Cost Age with Proportion of invalid pages). By accounting for the additional overhead of selecting victim blocks that contain cold data, CAPi improves garbage collection response time and increases the lifetime of the memory system. The proposed scheme also improves garbage collection efficiency by separating cold data from hot data among the valid pages. In our experimental evaluation, CAPi yields up to a 73% improvement in lifetime compared with existing garbage collection techniques.
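Cost-age-style victim selection, the family CAPi belongs to, scores each block by how cheap it is to reclaim (few valid pages to copy) and how old its data is. The abstract does not give CAPi's exact weighting, so the scoring function below is a generic cost-age sketch, not the paper's formula:

```python
def select_victim(blocks):
    """Pick the block to erase during garbage collection.

    Each block is a dict with 'valid' and 'invalid' page counts and an 'age'.
    The score rewards low utilization (little valid data to migrate) and old
    data; the weighting here is illustrative, not CAPi's actual formula.
    """
    def score(b):
        total = b['valid'] + b['invalid']
        u = b['valid'] / total          # utilization: cost of copying live pages
        return (1 - u) / (1 + u) * b['age']
    return max(blocks, key=score)


blocks = [
    {'valid': 10, 'invalid': 54, 'age': 5},   # mostly dead pages: cheap victim
    {'valid': 60, 'invalid': 4,  'age': 5},   # mostly live pages: expensive
]
victim = select_victim(blocks)
```

Separating hot from cold valid pages during the migration step, as the paper proposes, would then decide *where* the copied-out live pages land.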

Analysis of copy number abnormality (CNA) and loss of heterozygosity (LOH) in the whole genome using single nucleotide polymorphism (SNP) genotyping arrays in tongue squamous cell carcinoma (설편평상피암에 있어서의 고밀도 SNP Genotyping 어레이를 이용한 전게놈북제수와 헤테로접합성 소실의 분석)

  • Kuroiwa, Tsukasa;Yamamoto, Nobuharu;Onda, Takeshi;Bessyo, Hiroki;Yakushiji, Takashi;Katakura, Akira;Takano, Nobuo;Shibahara, Takahiko
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.37 no.6
    • /
    • pp.550-555
    • /
    • 2011
  • Chromosomal loss of heterozygosity (LOH) is a common mechanism for the inactivation of tumor suppressor genes in human epithelial cancers. LOH patterns can be generated through allelotyping with polymorphic microsatellite markers; however, owing to the limited number of available microsatellite markers and the large amounts of DNA required, only a modest number of markers can be screened. Hybridization to single nucleotide polymorphism (SNP) arrays using the Affymetrix GeneChip Mapping 10K 2.0 Array is an efficient method for detecting genome-wide cancer LOH. We determined the presence of LOH in oral SCCs using these arrays. DNA was extracted from tissue samples obtained from 10 patients with tongue SCCs who presented at the Hospital of Tokyo Dental College, and we examined 3 of the 10 patients with the arrays; at loci showing LOH, we confirmed the finding with microsatellite markers. LOH analysis using the Affymetrix GeneChip Mapping 10K Array showed LOH at 1q31.1 in all examined patients. The LOH regions were detected and demarcated by a copy number of 1 across a series of three SNP probes. LOH analysis of 1q31.1 using microsatellite markers (D1S1189, D1S2151, D1S2595) showed LOH in all 10 patients (100%). Our data suggest that a putative tumor suppressor gene is located in the 1q31.1 region, and inactivation of such a gene may play a role in tongue tumorigenesis.
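The demarcation step described above (an LOH region flagged when the copy number stays at 1 across at least three consecutive SNP probes) amounts to run detection over the per-probe copy-number calls. A minimal sketch of that logic, with the probe input and threshold as assumptions drawn only from the abstract:

```python
def loh_regions(copy_numbers, min_run=3):
    """Return (start, end) probe-index spans where the copy number stays at 1
    for at least `min_run` consecutive SNP probes, i.e. candidate LOH regions
    from hemizygous deletion. Indices refer to probe order along the genome.
    """
    regions, start = [], None
    for i, cn in enumerate(copy_numbers):
        if cn == 1 and start is None:
            start = i                       # run of copy-number-1 begins
        elif cn != 1 and start is not None:
            if i - start >= min_run:
                regions.append((start, i - 1))
            start = None                    # run ended
    if start is not None and len(copy_numbers) - start >= min_run:
        regions.append((start, len(copy_numbers) - 1))
    return regions
```

A run shorter than the threshold (e.g. two probes) is ignored as likely noise, which is why the three-probe series matters in the abstract.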

Cache-Filter: A Cache Permission Policy for Information-Centric Networking

  • Feng, Bohao;Zhou, Huachun;Zhang, Mingchuan;Zhang, Hongke
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.12
    • /
    • pp.4912-4933
    • /
    • 2015
  • Information-Centric Networking (ICN) has recently attracted great attention. It names content, decoupling it from its location, and introduces in-network caching, allowing content to be cached anywhere within the network. The benefits of such a design are obvious; however, many challenges remain. Among them, the local caching policy is widely discussed. It can be divided into two parts: the cache permission policy, which decides whether an incoming content should be cached, and the cache replacement policy, which evicts a cached content when required. The Internet is a user-oriented network, and popular contents always receive many more requests than unpopular ones. Caching popular contents closer to the user's location improves network performance, so the local caching policy must identify popular contents. However, given the line-speed requirement of ICN routers, a local caching policy with complexity greater than O(1) cannot be applied. As the replacement policy, Least Recently Used (LRU) is the default for ICN because of its low complexity, although its ability to identify popular content is poor; hence, identifying popular contents falls to the cache permission policy. In this paper, a cache permission policy called Cache-Filter, whose complexity is O(1), is proposed, aiming to store popular contents closer to users. Cache-Filter takes content popularity into account and achieves this goal through the collaboration of on-path nodes. Extensive simulations were conducted to evaluate Cache-Filter against Leave Copy Down (LCD), Move Copy Down (MCD), Betw, ProbCache, ProbCache+, Prob(p), and Probabilistic Caching with Secondary List (PCSL). The results show that Cache-Filter performs well: for example, in terms of the distance to access contents, Cache-Filter saves over 17% of the hops compared with Leave Copy Everywhere (LCE), the permission policy used by Named Data Networking (NDN).
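To make the O(1) permission-policy constraint concrete, here is one well-known admission scheme that meets it: a "doorkeeper" that caches a content only on its second request, filtering out one-hit wonders. This is a generic sketch of an O(1) cache permission policy, not Cache-Filter's actual mechanism (which additionally coordinates on-path nodes):

```python
class Doorkeeper:
    """O(1) cache admission filter: admit a content only after it has been
    requested at least twice, so unpopular one-off contents are never cached.
    A production router would replace the set with a Bloom filter to bound
    memory while keeping O(1) lookups.
    """
    def __init__(self):
        self.seen = set()

    def admit(self, name: str) -> bool:
        if name in self.seen:
            return True      # seen before: popular enough to cache
        self.seen.add(name)
        return False         # first sighting: record it, do not cache yet


dk = Doorkeeper()
first = dk.admit("/videos/clip1")    # not cached yet
second = dk.admit("/videos/clip1")   # now admitted
```

The replacement policy (e.g. LRU) then only ever sees contents the permission policy has already judged popular, which is the division of labor the abstract describes.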

A Fairness Control Scheme in Multicast ATM Switches (멀티캐스트 ATM 스위치에서의 공정성 제어 방법)

  • 손동욱;손유익
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.1
    • /
    • pp.134-142
    • /
    • 2003
  • We present an ATM switch architecture based on the multistage interconnection network (MIN) for efficient multicast traffic control. Many applications require multicast connections as well as point-to-point connections. A multicast connection, in which the same message is delivered from a source to an arbitrary number of destinations, is fundamental in areas such as teleconferencing, VOD (video on demand), and distributed data processing. In designing multicast ATM switches to support those services, we must consider fairness and priority control, in addition to the overflow problem, the processing of cells with large copy numbers, and the blocking problem. In particular, the fairness problem of distributing incoming cells smoothly across input ports arises when a cell with a large copy number enters an upper input port: because of the way the running sum is calculated, the upper input port is served before the lower ones, so a cell arriving at a lower input port is delayed to the next cycle and its transmission delay grows. In this paper, we propose cell-splitting and group-splitting algorithms, together with a fairness scheme based on the nonblocking characteristic, for issuing an appropriate copy number depending on the number of input cells in demand. We evaluate the performance of the proposed schemes in terms of throughput, cell loss rate, and cell delay.
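The running-sum and cell-splitting ideas can be sketched as follows: walk the input ports in order, accumulate requested copies until the switch's per-cycle copy capacity is reached, and split the cell at the boundary so the remainder is served first in the next cycle. This is a generic illustration of the mechanism, not the paper's exact algorithm (which adds group splitting and the fairness scheme on top):

```python
def schedule_copies(requests, capacity):
    """Running-sum copy admission with cell splitting.

    requests: list of (port, copy_count) pairs in input-port order.
    capacity: maximum total copies the copy network can make per cycle.
    Returns a list of cycles; each cycle lists the (port, copies) it serves.
    A cell whose copies overflow the cycle is split, and its remainder is
    served first in the following cycle.
    """
    cycles, current, used = [], [], 0
    pending = list(requests)
    while pending:
        port, copies = pending.pop(0)
        room = capacity - used
        if copies <= room:
            current.append((port, copies))
            used += copies
        else:
            if room > 0:
                current.append((port, room))          # partial service
            pending.insert(0, (port, copies - room))  # remainder carries over
            cycles.append(current)
            current, used = [], 0
        if used == capacity and pending:
            cycles.append(current)
            current, used = [], 0
    if current:
        cycles.append(current)
    return cycles
```

Note the unfairness the abstract targets: port 0's large request is always admitted first, while later ports wait, which is what the proposed service-group exchange redresses.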

Design and Implementation of Autonomic De-fragmentation for File System Aging (파일 시스템 노화를 해소하기 위한 자동적인 단편화 해결 시스템의 설계와 구현)

  • Lee, Jun-Seok;Park, Hyun-Chan;Yoo, Chuck
    • The KIPS Transactions:PartA
    • /
    • v.16A no.2
    • /
    • pp.101-112
    • /
    • 2009
  • Existing techniques for defragmenting a file system, such as disk defragmentation programs, require intensive disk operations over some period at a specific time. In this paper, to solve this problem, we design and implement an automatic, continuous defragmentation system that spreads the disk operations out over time. We propose the Automatic Layout Scoring (ALS) mechanism for measuring the degree of fragmentation, and a Lazy Copy mechanism that copies fragmented data at idle time to scatter the disk operations. The system finds fragmented files with the Automatic Layout Scoring mechanism and then locates empty space for each such file. After lazily copying the file to the empty space, so that the file cannot be lost, it resolves the fragmentation by updating the file's i-node. We implement these algorithms in Linux and evaluate them on small, fragmented files. Our system outperforms the Linux EXT2 file system by 2.4%~10.4% in layout scoring, and read/write performance for various file sizes is also better than EXT2: by 1%~8.5% for writes and 1.2%~7.5% for reads. This system solves the fragmentation problem automatically, without disturbing I/O tasks or requiring manual management.
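A layout score of the kind ALS computes is commonly defined as the fraction of a file's blocks that sit immediately after their predecessor on disk. The abstract does not give the ALS formula, so the sketch below uses that common definition as an assumption:

```python
def layout_score(block_addresses):
    """Fraction of a file's data blocks that are physically contiguous with
    the previous block: 1.0 means fully contiguous, values near 0.0 mean the
    file is badly fragmented. (A common layout-score definition; the paper's
    ALS metric may weight things differently.)
    """
    if len(block_addresses) < 2:
        return 1.0                       # zero or one block: trivially contiguous
    contiguous = sum(1 for a, b in zip(block_addresses, block_addresses[1:])
                     if b == a + 1)
    return contiguous / (len(block_addresses) - 1)


score = layout_score([10, 11, 12, 20])   # one discontinuity out of three gaps
```

Files whose score falls below a threshold become candidates for the lazy copy into a contiguous free extent, after which the i-node is updated to point at the new blocks.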

Development and Validation of Figure-Copy Test for Dementia Screening (치매 선별을 위한 도형모사검사 개발 및 타당화)

  • Kim, Chobok;Heo, Juyeon;Hong, Jiyun;Yi, Kyongmyon;Park, Jungkyu;Shin, Changhwan
    • 한국노년학
    • /
    • v.40 no.2
    • /
    • pp.325-340
    • /
    • 2020
  • Early diagnosis of and intervention in dementia are critical to minimize future risk and cost for patients and their families. The purpose of this study was to develop and validate the Figure-Copy Test (FCT), a new dementia screening test that can measure neurological damage and cognitive impairment, and to examine whether the grading process for screening can be automated through machine learning on FCT images. To this end, the FCT, the Korean version of the MMSE for Dementia Screening (MMSE-DS), and the Clock Drawing Test were administered to a total of 270 participants from normal and impaired elderly groups. FCT scores showed high internal consistency and significant correlations with the other two test scores. Discriminant analyses classified the normal and impaired groups with accuracies of 90.8% and 77.1%, respectively, relatively higher than the other two tests. Importantly, participants who scored above the cutoff on the MMSE-DS but scored low on the FCT were successfully screened out through clinical diagnosis. Finally, machine learning on the FCT image data achieved an accuracy of 73.70%. In conclusion, our results suggest that the FCT, a newly developed drawing test, can be easily implemented for efficient dementia screening.

The Design and Implementation of Automatic Communication System using Mobile Instant Messenger (모바일 인스턴스 메신저를 활용한 자동화 커뮤니케이션 시스템 설계 및 구현)

  • Kim, Tae Yeol;Lee, Dae Sik
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.10 no.3
    • /
    • pp.11-21
    • /
    • 2014
  • In this paper, concerning election advertising and policy advertising that must deliver a message to a large number of people, we design and implement an automated system that sends text messages from the server to clients and enables fast feedback by utilizing a number of operational programs connected to the server. Rather than delivering messages from the server directly to mobile instant messenger users, the system delivers messages to each user's mobile terminal from a plurality of relay mobile terminals via the mobile instant messenger. A comparative analysis of the number of data transmissions shows that this automated communication system can transmit five times per minute, because it copies and pastes within the automation regardless of the size of the loaded data, whereas with manual transmission the number of transmissions falls as the data grows larger.

Design and Implementation of a Backup System for Digital Contents (디지털콘텐츠의 특성을 고려한 백업 시스템의 설계 및 구현)

  • Lee Seok Jae;Yun Jong Hyun;Hwang Sok Choel;Yoo Jae Soo
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.2
    • /
    • pp.105-116
    • /
    • 2006
  • With the development of IT technology, the amount of digital content used in various wired/wireless network environments has increased hugely and rapidly. Continuous data backup is required to protect digital content from loss in a sudden accident. In this paper, we design and implement a backup system that stores digital content in backup storage by objectifying the content in units of the I/O size and giving each object a unique ID derived from the properties of the digital content, so that the same data is never stored twice. The backup system efficiently reduces the amount of backup data by keeping only one copy of duplicated data. As a result, it can back up digital content more efficiently in a constrained storage space.
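Chunk-level deduplication of this kind is usually implemented by keying each I/O-sized chunk on a digest of its content, so identical chunks are stored once and a backup is just a list of chunk keys. A minimal sketch using a plain content hash as the unique ID (the paper derives its IDs from digital-content properties, so the hashing here is an assumption):

```python
import hashlib


class DedupStore:
    """Chunk-level deduplicating backup store: each I/O-sized chunk is keyed
    by its SHA-256 digest, so a chunk that appears many times across backups
    occupies backup storage exactly once.
    """
    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}                 # digest -> chunk bytes

    def backup(self, data: bytes) -> list:
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            key = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(key, chunk)   # duplicates stored only once
            recipe.append(key)
        return recipe                    # ordered keys reconstruct the file

    def restore(self, recipe: list) -> bytes:
        return b"".join(self.chunks[k] for k in recipe)


store = DedupStore(chunk_size=4)
recipe = store.backup(b"abcdabcdabcd")   # three identical 4-byte chunks
```

Backing up the same content again adds no new chunks, only another (tiny) recipe, which is where the storage saving comes from.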


Genome-Wide Association Studies of the Korea Association REsource (KARE) Consortium

  • Hong, Kyung-Won;Kim, Hyung-Lae;Oh, Berm-Seok
    • Genomics & Informatics
    • /
    • v.8 no.3
    • /
    • pp.101-102
    • /
    • 2010
  • During the last decade, large community cohorts have been established by the Korea National Institutes of Health (KNIH), and enormous amounts of epidemiological and clinical data have been accumulated. Using this information and the cohort samples, KNIH set out to conduct a large-scale genome-wide association study (GWAS) in 2007, and the Korea Association REsource (KARE) consortium was launched to analyze the data and identify the genetic risk factors underlying diseases and diverse health indexes, such as blood pressure, obesity, bone density, and blood biochemical traits. The consortium consisted of 6 research divisions formed by 25 principal investigators in 19 organizations, including 18 universities, 2 institutes, and 1 company. Each division focused on one of the following subjects: the identification of genetic factors, the statistical analysis of gene-gene interactions, the genetic epidemiology of gene-environment interactions, copy number variation, the bioinformatics related to a GWAS, and a GWAS of nutrigenomics. In this special issue, the study results of the KARE consortium are presented as 9 articles. We hope that this special issue will encourage the genomics community to share data, and scientists, including clinicians, to analyze the valuable Korean data of KARE.