• Title/Summary/Keyword: write

Search Results: 1,576

Tips to Write a Medical Paper More Effectively (효과적인 의학 논문 작성을 위한 요령)

  • Hwang, Jin-Bok
    • Pediatric Gastroenterology, Hepatology & Nutrition / v.13 no.2 / pp.117-127 / 2010
  • This paper gives beginners an introductory course on how to write a medical paper more effectively. Bear in mind that the reviewer and the reader will be reading your paper for the first time, so you should write it in a way that is easy to follow. Everything in your paper must be coherent. Using the active voice usually makes sentences shorter and clearer. Organize your story carefully and logically so that you can avoid unnecessary repetition across sections. Think hard, because research is made by the mind, not by the hands. Write technically and powerfully. Above all, you must follow the submission regulations of the target journal exactly.

A Locality-Aware Write Filter Cache for Energy Reduction of STTRAM-Based L1 Data Cache

  • Kong, Joonho
    • JSTS:Journal of Semiconductor Technology and Science / v.16 no.1 / pp.80-90 / 2016
  • Thanks to their superior leakage energy efficiency compared to SRAM cells, STTRAM cells are considered a promising alternative memory element for on-chip caches. However, the main disadvantage of STTRAM cells is their high write energy and latency. In this paper, we propose a low-cost write filter (WF) cache that resides between the load/store queue and the STTRAM-based L1 data cache. To maximize the efficiency of the WF cache, the line allocation and access policies are optimized for reducing the energy consumption of the STTRAM-based L1 data cache. By efficiently filtering write operations to the STTRAM-based L1 data cache, our proposed WF cache reduces the energy consumption of the STTRAM-based L1 data cache by up to 43.0% compared to the case without the WF cache. In addition, thanks to the fast hit latency of the WF cache, it also slightly improves performance, by 0.2%.
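
As a rough illustration of the general idea, the following Python sketch models a tiny fully associative write-filter buffer sitting in front of an STTRAM L1 data cache; the buffer size, line size, and counters are illustrative assumptions, not the paper's actual design or parameters.

```python
# Tiny fully associative write-filter (WF) buffer in front of an
# STTRAM-based L1 data cache (illustrative sketch, assumed parameters).
from collections import OrderedDict

LINE_SIZE = 64       # bytes per cache line (assumed)
WF_ENTRIES = 8       # write-filter capacity in lines (assumed)

class WriteFilterCache:
    def __init__(self):
        self.lines = OrderedDict()   # line address -> buffered data, LRU order
        self.l1_writes = 0           # expensive writes that reached the STTRAM L1
        self.filtered = 0            # store operations absorbed by the WF buffer

    def store(self, addr, data):
        line = addr // LINE_SIZE
        if line in self.lines:
            self.lines.move_to_end(line)      # coalesce into the existing entry
            self.filtered += 1
        elif len(self.lines) >= WF_ENTRIES:
            self.lines.popitem(last=False)    # evict the LRU line ...
            self.l1_writes += 1               # ... costing one write-back to the L1
        self.lines[line] = data

    def load(self, addr):
        line = addr // LINE_SIZE
        if line in self.lines:                # fast WF hit, no STTRAM access
            self.lines.move_to_end(line)
            return self.lines[line]
        return None                           # miss: would fall through to the L1

# Repeated stores to a hot line are merged in the WF buffer, so only the
# eviction generates an expensive STTRAM write.
wf = WriteFilterCache()
for i in range(100):
    wf.store(0x1000, i)                       # same line, coalesced
for j in range(16):
    wf.store(0x9000 + j * LINE_SIZE, j)       # distinct lines overflow the buffer
print(wf.filtered, wf.l1_writes)              # 99 filtered stores, 9 L1 write-backs
```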

Bit Flip Reduction Schemes to Improve PCM Lifetime: A Survey

  • Han, Miseon;Han, Youngsun
    • IEIE Transactions on Smart Processing and Computing / v.5 no.5 / pp.337-345 / 2016
  • Recently, as the number of cores in computer systems has increased, the need for larger memory capacity has also increased. Unfortunately, dynamic random access memory (DRAM), popularly used as main memory for decades, now faces a scalability limitation. Phase change memory (PCM) is considered one of the strong alternatives to DRAM due to its advantages, such as high scalability, non-volatility, low idle power, and so on. However, since PCM suffers from limited write endurance, using it directly as main memory leads to a significant lifetime problem. To address this limitation, many studies have focused on reducing the number of bit flips per write request. In this paper, we describe the PCM operating principles in detail and explore various bit flip reduction schemes. We also compare their performance in terms of bit flip reduction rate and lifetime improvement.
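
The survey covers several schemes; as one concrete example of the bit-flip-per-write idea, here is a sketch of the well-known Flip-N-Write approach, where a word is stored inverted (with a one-bit flag) whenever that flips fewer cells. The word width and the exact flip accounting are assumptions for illustration, not necessarily what the paper evaluates.

```python
# Flip-N-Write sketch: store the complement of a word (plus a one-bit flag)
# whenever that changes fewer PCM cells than storing the word directly.
WORD_BITS = 32                      # assumed word width

def popcount(x: int) -> int:
    return bin(x).count("1")

def flip_n_write(cells: int, old_flag: int, new_word: int):
    """Decide how to store new_word over the bits currently in the cells.

    cells    -- the WORD_BITS data bits currently stored in the PCM cells
    old_flag -- current value of the one-bit inversion flag cell
    Returns (stored_bits, new_flag, total_bit_flips).
    """
    mask = (1 << WORD_BITS) - 1
    candidates = []
    for flag, stored in ((0, new_word & mask), (1, (~new_word) & mask)):
        flips = popcount(cells ^ stored) + (1 if flag != old_flag else 0)
        candidates.append((flips, stored, flag))
    flips, stored, flag = min(candidates)     # pick the cheaper encoding
    return stored, flag, flips

# Overwriting all-zero cells with 0xFFFFFFFF would flip 32 cells directly,
# but storing the complement flips only the flag cell.
print(flip_n_write(0x00000000, 0, 0xFFFFFFFF))   # -> (0, 1, 1)
```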

Mirror-Switching Scheme for High-Speed Embedded Storage Systems (고속 임베디드 저장 시스템을 위한 복제전환 기법)

  • Byun, Si-Woo;Jang, Seok-Woo
    • Transactions of the Society of Information Storage Systems / v.7 no.1 / pp.7-12 / 2011
  • Flash memory has been regarded as the next-generation medium for the storage devices of portable and desktop computers. Its features include non-volatility, low power consumption, and fast read access times, which make flash memory suitable as a major data storage component for desktops and servers. The purpose of our study is to improve the traditional mirroring scheme for SSD-based storage, because write operations are relatively slow and can freeze the device compared to its fast read operations. For this work, we propose a new storage management scheme, called Memory Mirror-Switching, based on the traditional mirroring scheme. Our Mirror-Switching scheme improves flash operation performance by switching write workloads from flash memory to RAM and delaying write operations to avoid freezing. Our test results show that our scheme significantly reduces write operation delays and storage freezing.
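
The following sketch captures the mirror-switching behavior at a very high level: while the flash mirror is busy or frozen, writes are redirected to a RAM mirror and replayed later. The class and callback names are hypothetical, and the sketch ignores durability concerns the real scheme would have to handle.

```python
# Mirror-switching sketch: writes are redirected from the flash mirror to a
# RAM mirror while the flash device is busy/frozen, then replayed later.
import collections

class MirrorSwitchingStore:
    def __init__(self, flash_busy_fn):
        self.flash = {}                     # stands in for the SSD mirror
        self.ram = {}                       # RAM mirror used while switched
        self.pending = collections.deque()  # delayed writes awaiting replay
        self.flash_busy = flash_busy_fn     # callback: is the flash frozen/busy?

    def write(self, key, value):
        if self.flash_busy():
            # Switch the write workload to RAM and delay the flash write.
            self.ram[key] = value
            self.pending.append(key)
        else:
            self.flash[key] = value
            self.ram[key] = value           # keep both mirrors consistent

    def read(self, key):
        # The RAM mirror holds the newest copy while writes are delayed.
        return self.ram.get(key, self.flash.get(key))

    def flush(self):
        # Replay the delayed writes once the flash device is responsive again.
        while self.pending and not self.flash_busy():
            key = self.pending.popleft()
            self.flash[key] = self.ram[key]
```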

A Study on NFS and iSCSI in Small Random Write Intensive Applications (Small Random Write 환경에서의 NFS와 iSCSI에 대한 연구)

  • Choi, Chanho;Kim, Shin-gyu;Eom, Hyeonsang;Yeom, HY.
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.139-141 / 2012
  • In a cloud system, the storage server handles a mix of many workloads, so a large fraction of its write operations end up being small and random. To achieve higher performance, the storage server needs to be built to suit this characteristic. To that end, this paper compares NFS and iSCSI, two commonly used storage protocols, and examines experimentally which one is better suited to small random writes. The results show that small random writes are strongly affected by how well they are coalesced in the cache, and that iSCSI exploits this cache effect more efficiently.
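
A toy sketch of the coalescing effect the experiment measures (this is not the authors' benchmark setup): small random writes that land on the same cache page are merged into a single device write before being flushed.

```python
# Toy model of cache coalescing: small random writes that dirty the same
# 4 KiB page collapse into a single device write at flush time.
import random

PAGE = 4096                       # block-cache page size (assumed)
FILE_SIZE = 64 * 1024 * 1024      # 64 MiB working set (assumed)
N_WRITES = 10000                  # number of small application writes

random.seed(0)
dirty_pages = set()
for _ in range(N_WRITES):
    offset = random.randrange(0, FILE_SIZE, 512)   # small random write
    dirty_pages.add(offset // PAGE)                # absorbed by the cache

print(f"{N_WRITES} application writes -> {len(dirty_pages)} device writes "
      "after coalescing in the cache")
```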

A Study on Design and Cache Replacement Policy for Cascaded Cache Based on Non-Volatile Memories (비휘발성 메모리 시스템을 위한 저전력 연쇄 캐시 구조 및 최적화된 캐시 교체 정책에 대한 연구)

  • Juhee Choi
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.106-111 / 2023
  • The importance of load-to-use latency has been highlighted as state-of-the-art computing cores adopt deep pipelines and high clock frequencies. The cascaded cache was recently proposed to reduce the access cycle of the L1 cache by exploiting differences in latency among the banks of the cache structure. However, that study assumes the cache is composed of SRAM, making it unsuitable for direct application to non-volatile memory-based systems. This paper proposes a novel mechanism and structure for lowering dynamic energy consumption. It inserts monitoring logic to keep track of swap operations and write counts. If the ratio of swap operations to total write counts surpasses a set threshold, the cache controller skips the swap of cache blocks, which reduces write operations. To validate this approach, experiments are conducted on a non-volatile memory-based cascaded cache. The results show an average reduction of 16.7% in write operations with a negligible increase in latency.
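
A minimal sketch of the swap-skipping mechanism described above; the threshold value, counter granularity, and names are assumptions made for illustration.

```python
# Swap-skipping sketch: count how many writes to the non-volatile cascaded
# cache come from bank-to-bank swaps and stop swapping past a threshold.
SWAP_RATIO_THRESHOLD = 0.5        # assumed threshold value

class SwapMonitor:
    def __init__(self, threshold=SWAP_RATIO_THRESHOLD):
        self.threshold = threshold
        self.swap_writes = 0
        self.total_writes = 0

    def record_write(self, is_swap: bool):
        self.total_writes += 1
        if is_swap:
            self.swap_writes += 1

    def allow_swap(self) -> bool:
        """Return False once swap traffic dominates the write count."""
        if self.total_writes == 0:
            return True
        return (self.swap_writes / self.total_writes) <= self.threshold

# In the controller, a hit in a slow bank would normally trigger a swap with
# a fast-bank block; once the ratio passes the threshold, the swap (and its
# extra non-volatile writes) is skipped.
monitor = SwapMonitor()

def on_slow_bank_hit(do_swap):
    if monitor.allow_swap():
        do_swap()                             # move the block to a fast bank
        monitor.record_write(is_swap=True)    # the swap costs extra writes
    monitor.record_write(is_swap=False)       # the demand write itself
```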

An Optimized Cache Coherence Protocol in Multiprocessor System Connected by Slotted Ring (슬롯링으로 연결된 다중처리기 시스템에서 최적화된 캐쉬일관성 프로토콜)

  • Min, Jun-Sik;Chang, Tae-Mu
    • The Transactions of the Korea Information Processing Society / v.7 no.12 / pp.3964-3975 / 2000
  • There are two policies for maintaining consistency among the multiple processor caches in a multiprocessor system: write invalidate and write update. In the write invalidate policy, whenever a processor attempts to write to a cached block, it must invalidate all other copies of that block in the system. As a result of these frequent invalidations, this policy leads to a high cache miss ratio. On the other hand, the write update policy updates the other copies instead of invalidating them. This policy has to transfer the updated contents through the interconnection network whether the updated block is private or not, so the network suffers from heavy transaction traffic. In this paper we present an efficient cache coherence protocol for a shared memory multiprocessor system connected by a slotted ring. This protocol is based on the write update policy, but the updated contents are transferred only when a shared block is updated; if the updated block is private, the updated contents are not transferred. We analyze the proposed protocol and run simulations to compare it with previous versions.
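
The following sketch shows the per-write decision such an optimized protocol makes: updates are broadcast on the slotted ring only for shared blocks, while private blocks are updated locally with no ring traffic. The states and the ring interface are illustrative placeholders, not the paper's exact protocol specification.

```python
# Per-write decision of the optimized write-update protocol: only shared
# blocks generate update traffic on the slotted ring.
from enum import Enum, auto

class State(Enum):
    INVALID = auto()
    PRIVATE = auto()   # only this cache holds the block
    SHARED = auto()    # other caches may hold copies

class CacheLine:
    def __init__(self, state=State.INVALID, data=0):
        self.state = state
        self.data = data

def on_processor_write(line: CacheLine, new_data: int, ring):
    """ring is a placeholder object with a broadcast_update() method."""
    line.data = new_data
    if line.state == State.SHARED:
        # Update, rather than invalidate, the other copies: this avoids the
        # extra misses caused by write-invalidate ...
        ring.broadcast_update(line, new_data)
    else:
        # ... while a private block is updated locally with no ring traffic,
        # avoiding the blanket broadcasts of plain write-update.
        line.state = State.PRIVATE
```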

Partial Garbage Collection Technique for Improving Write Performance of Log-Structured File Systems (부분 가비지 컬렉션을 이용한 로그 구조 파일시스템의 쓰기 성능 개선)

  • Gwak, Hyunho;Shin, Dongkun
    • Journal of KIISE / v.41 no.12 / pp.1026-1034 / 2014
  • Recently, flash storage devices have become popular. Log-structured file systems (LFS) are suitable for flash storage since they provide high write performance by generating only sequential writes to the flash device. However, an LFS must perform garbage collection (GC) in order to reclaim obsolete space. Recently, a slack space recycling (SSR) technique was proposed to reduce the GC overhead. However, since SSR generates random writes, write performance can be negatively impacted if the random write performance of the target device is significantly lower than its sequential write performance. This paper proposes a partial garbage collection technique that copies only a part of the valid blocks in a victim segment in order to increase the size of the contiguous invalid space available to SSR. The experiments performed in this study show that the write performance on an SD card improves significantly as a result of the partial GC technique.
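
A simplified sketch of the partial GC idea, under the assumption that a victim segment is an array of blocks and that only a fixed-size window at its head is cleaned per pass; the window size and layout are illustrative, not the paper's parameters.

```python
# Partial GC sketch: clean only a fixed window at the head of the victim
# segment so SSR gains a contiguous run of invalid blocks cheaply.
PARTIAL_WINDOW = 3                # blocks cleaned per partial pass (assumed)

def partial_gc(victim, copy_block):
    """victim: list of blocks, each either None (invalid) or live data.

    copy_block(data) relocates one valid block (e.g. appends it to the log).
    Returns the number of blocks copied in this partial pass.
    """
    copied = 0
    for i in range(min(PARTIAL_WINDOW, len(victim))):
        if victim[i] is not None:
            copy_block(victim[i])   # relocate the valid block
            victim[i] = None        # its old slot becomes invalid space
            copied += 1
    return copied

# Only the first three slots are cleaned, which already gives SSR a
# contiguous invalid region at the head of the segment without paying for
# a full segment cleaning.
segment = ["A", None, "B", "C", None, "D", None, "E"]
moved = partial_gc(segment, copy_block=lambda blk: None)
print(segment, moved)   # [None, None, None, 'C', None, 'D', None, 'E'] 2
```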

Efficiently Managing the B-tree using Write Pattern Conversion on NAND Flash Memory (낸드 플래시 메모리 상에서 쓰기 패턴 변환을 통한 효율적인 B-트리 관리)

  • Park, Bong-Joo;Choi, Hae-Gi
    • Journal of KIISE:Computer Systems and Theory / v.36 no.6 / pp.521-531 / 2009
  • Flash memory has physical characteristics different from those of a hard disk: the costs of read and write operations differ from each other, and in-place overwrites are impossible. To work around these restrictions in software, storage systems equipped with flash memory deploy FTL (Flash Translation Layer) software. Several FTL algorithms have been suggested so far, and most of them prefer sequential write patterns to random write patterns. In this paper, we provide a new technique to efficiently store and maintain the B-tree index on flash memory. B-tree operations such as key insertions, deletions, and updates generate random writes rather than sequential writes on flash memory, making B-tree maintenance inefficient. In our technique, we convert the random writes generated by the B-tree into sequential writes and store them in a write buffer on flash memory. When the buffer becomes full, the buffered sequential writes are issued to the FTL. Our diverse experimental results show that our technique outperforms existing ones with respect to the I/O cost on flash memory.
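
A minimal sketch of the write-pattern conversion, assuming B-tree modifications are turned into log records appended to a buffer and handed to the FTL as one sequential write when the buffer fills; the record format and buffer size are illustrative assumptions.

```python
# Write-pattern conversion sketch: B-tree key operations become log records
# that are appended to a buffer and issued to the FTL as sequential writes.
BUFFER_CAPACITY = 64              # buffered records before a flush (assumed)

class SequentialWriteBuffer:
    def __init__(self, issue_sequential_write):
        self.records = []
        self.issue = issue_sequential_write   # e.g. a thin wrapper over the FTL

    def log_btree_op(self, op, key, value=None):
        # A random in-place B-tree node update becomes an appended record.
        self.records.append((op, key, value))
        if len(self.records) >= BUFFER_CAPACITY:
            self.flush()

    def flush(self):
        if self.records:
            # One large sequential write instead of many scattered node writes.
            self.issue(self.records)
            self.records = []

# Keys arriving in arbitrary order still reach flash as sequential writes.
buf = SequentialWriteBuffer(issue_sequential_write=print)
for key in (42, 7, 99, 3):
    buf.log_btree_op("insert", key, value="payload")
buf.flush()
```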