• Title/Summary/Keyword: Buffer (Memory)


Implementation of a Shared Buffer ATM Switch Embedded Scalable Pipelined Buffer Memory (가변형 파이프라인방식 메모리를 내장한 공유버퍼 ATM 스위치의 구현)

  • 정갑중
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.703-717 / 2002
  • This paper describes the implementation of a scalable shared buffer asynchronous transfer mode (ATM) switch. The designed switch has a shared buffer built from a pipelined memory with an access time of 4 ns. This high-speed buffer access time makes it possible to implement a shared buffer ATM switch with a large switching capacity. The switch architecture provides flexible switching performance and port-size scalability by keeping queue address control independent of buffer memory control. The switch size and buffer size can be reconfigured without serious circuit redesign. The prototype chip has a 128-cell shared buffer and a 4 × 4 switch size, and is integrated in 0.6 um, double-metal, single-poly CMOS technology. It operates at 80 MHz and supports 640 Mbps per port.
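
As a rough illustration of the shared-buffer organization described above (queue address control kept separate from cell memory control), the following Python sketch models a 4-port switch whose output queues hold only slot addresses into a shared 128-cell memory. It is a behavioral toy, not the paper's hardware design; all names and sizes are illustrative.

```python
# Behavioral sketch of a shared-buffer switch: a fixed pool of cell slots is
# shared by all output ports, and per-port queues hold only slot addresses,
# so queue (address) control is separate from the cell memory itself.
from collections import deque

class SharedBufferSwitch:
    def __init__(self, num_ports=4, buffer_cells=128):
        self.cell_memory = [None] * buffer_cells           # shared cell storage
        self.free_slots = deque(range(buffer_cells))       # free-address list
        self.port_queues = [deque() for _ in range(num_ports)]  # addresses only

    def enqueue(self, out_port, cell):
        """Store a cell for out_port; drop it if the shared buffer is full."""
        if not self.free_slots:
            return False                                   # buffer overflow -> cell loss
        slot = self.free_slots.popleft()
        self.cell_memory[slot] = cell
        self.port_queues[out_port].append(slot)
        return True

    def dequeue(self, out_port):
        """Read out one cell for out_port and release its slot."""
        if not self.port_queues[out_port]:
            return None
        slot = self.port_queues[out_port].popleft()
        cell = self.cell_memory[slot]
        self.cell_memory[slot] = None
        self.free_slots.append(slot)
        return cell

if __name__ == "__main__":
    sw = SharedBufferSwitch(num_ports=4, buffer_cells=128)
    sw.enqueue(2, "cell-A")
    sw.enqueue(2, "cell-B")
    print(sw.dequeue(2), sw.dequeue(2), sw.dequeue(2))     # cell-A cell-B None
```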

Language performance analysis based on multi-dimensional verbal short-term memories in patients with conduction aphasia (다차원 구어 단기기억에 따른 전도 실어증 환자의 언어수행력 분석)

  • Ha, Ji-Wan;Hwang, Yu Mi;Pyun, Sung-Bom
    • Korean Journal of Cognitive Science / v.23 no.4 / pp.425-455 / 2012
  • Multi-dimensional verbal short-term memory mechanisms are largely divided into the phonological channel and the lexical-semantic channel. The former is called phonological short-term memory and the latter semantic short-term memory. Phonological short-term memory is further segmented into the phonological input buffer and the phonological output buffer. In this study, the language performance of three patients with similar levels of conduction aphasia was analyzed in terms of multi-dimensional verbal short-term memory. To this end, the three patients were instructed to perform four language tasks, namely spontaneous speaking, repetition, spontaneous writing, and dictation, at both the word and sentence levels. Moreover, the patients' phonological and semantic short-term memories were evaluated using digit span tests and verbal learning tests. As a result, the three subjects exhibited different patterns of performance and error responses across the four language tasks, and the short-term memory tests likewise did not produce identical results. The language performance of the three patients with conduction aphasia can be explained according to whether the deficits occurred in the semantic short-term memory, the phonological input buffer, and/or the phonological output buffer. Based on the results of the language and short-term memory tests, the relations between language and multi-dimensional verbal short-term memory are discussed.


Research on the Main Memory Access Count According to the On-Chip Memory Size of an Artificial Neural Network (인공 신경망 가속기 온칩 메모리 크기에 따른 주메모리 접근 횟수 추정에 대한 연구)

  • Cho, Seok-Jae;Park, Sungkyung;Park, Chester Sungchung
    • Journal of IKEEE / v.25 no.1 / pp.180-192 / 2021
  • One widely used algorithm for image recognition and pattern detection is the convolutional neural network (CNN). To efficiently handle convolution operations, which account for the majority of computations in the CNN, hardware accelerators are used to improve the performance of CNN applications. When using these accelerators, the CNN must fetch data from off-chip DRAM, since the massive volume of data makes it difficult to obtain performance improvements using only the memory inside the accelerator. In other words, data communication between off-chip DRAM and the memory inside the accelerator has a significant impact on the performance of CNN applications. In this paper, a simulator for the CNN is developed to analyze main memory (DRAM) accesses with respect to the size of the on-chip memory, or global buffer, inside the CNN accelerator. For AlexNet, one of the CNN architectures, simulations with increasing global buffer size show that a global buffer larger than 100 kB incurs about 0.8x the DRAM access count of a global buffer smaller than 100 kB.
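
The qualitative finding above (DRAM traffic shrinking as the global buffer grows) can be illustrated with a toy analytical model. The reuse rule, layer footprint, and buffer sizes below are assumptions for demonstration only; this is not the paper's simulator.

```python
# Toy estimate of off-chip DRAM traffic for one convolutional layer as a
# function of the on-chip global-buffer size. If all operands fit, each
# crosses the DRAM interface once; otherwise part of the input feature map
# stays resident and the rest is re-read once per weight tile.
import math

def dram_traffic_kb(buf_kb, ifmap_kb, weights_kb, ofmap_kb):
    if ifmap_kb + weights_kb <= buf_kb:
        # full on-chip reuse: every operand is moved across DRAM exactly once
        return ifmap_kb + weights_kb + ofmap_kb
    resident = min(ifmap_kb, buf_kb / 2.0)          # ifmap portion kept on chip
    tile = buf_kb - resident                        # buffer left for weight tiles
    n_tiles = math.ceil(weights_kb / tile)
    refetch = (ifmap_kb - resident) * n_tiles       # non-resident ifmap re-read per tile
    return resident + refetch + weights_kb + ofmap_kb

if __name__ == "__main__":
    ifmap, weights, ofmap = 140.0, 300.0, 70.0      # kB, made-up layer footprint
    for buf in (32, 64, 100, 128, 256, 512):
        traffic = dram_traffic_kb(buf, ifmap, weights, ofmap)
        print(f"global buffer {buf:3d} kB -> estimated DRAM traffic {traffic:8.1f} kB")
```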

Effect of the MgO buffer layer for MFIS structure using the BLT thin film (BLT 박막을 이용한 MFIS 구조에서 MgO buffer layer의 영향)

  • Lee, Jung-Mi;Kim, Kyoung-Tae;Kim, Chang-Il
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 2003.11a / pp.23-26 / 2003
  • The BLT thin film and MgO buffer layer were fabricated using a metalorganic decomposition method and the DC sputtering technique. The MgO thin film was deposited as a buffer layer on SiO₂/Si, and BLT thin films were used as the ferroelectric layer. The electrical properties of the MFIS structure were investigated by varying the MgO layer thickness. TEM showed no interdiffusion or reaction, which were suppressed by using the MgO film as a buffer layer. The width of the memory window in the C-V curves of the MFIS structure decreased with increasing MgO layer thickness, and the leakage current density decreased by about three orders of magnitude after inserting the MgO buffer layer. The results show that the BLT and MgO-based MFIS structure is suitable for non-volatile memory FETs with a large memory window.


PCM Main Memory for Low Power Embedded System (저전력 내장형 시스템을 위한 PCM 메인 메모리)

  • Lee, Jung-Hoon
    • IEMEK Journal of Embedded Systems and Applications / v.10 no.6 / pp.391-397 / 2015
  • Nonvolatile memories have been investigated for the memory hierarchy to reduce energy consumption, because nonvolatile memory cells consume no leakage power. One of the difficulties, however, is that the endurance of most nonvolatile memory technologies is much shorter than that of conventional SRAM and DRAM. This has limited their use to the lower levels of the memory hierarchy, e.g., disks, far from the CPU. In this paper, we study the use of a new type of nonvolatile memory, the Phase Change Memory (PCM), together with a DRAM buffer system, as the main memory. Our design reduces the total energy of a DRAM main memory of the same capacity by 80%. These results indicate that it is feasible to use PCM technology in place of DRAM in the main memory for better energy efficiency.
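
The following toy simulation illustrates the general idea of fronting PCM with a small DRAM write-back buffer so that only dirty evictions reach PCM; it is not the paper's design, and the per-access energy numbers and the synthetic trace are placeholders.

```python
# Count PCM line writes when a small LRU-managed DRAM buffer absorbs repeated
# writes; only evicted (dirty) lines are written back to PCM.
from collections import OrderedDict
import random

PCM_WRITE_ENERGY = 10.0    # arbitrary units per line write (assumption)
DRAM_WRITE_ENERGY = 1.0    # arbitrary units per line write (assumption)

def run(trace, buffer_lines):
    buf = OrderedDict()                    # line address -> True (dirty), LRU order
    pcm_writes = dram_writes = 0
    for addr in trace:
        dram_writes += 1                   # every write lands in the DRAM buffer first
        if addr in buf:
            buf.move_to_end(addr)          # write hit: refresh LRU position
        elif len(buf) >= buffer_lines:
            buf.popitem(last=False)        # evict LRU line; it is dirty, so it goes to PCM
            pcm_writes += 1
        buf[addr] = True
    pcm_writes += len(buf)                 # final flush of remaining dirty lines
    energy = dram_writes * DRAM_WRITE_ENERGY + pcm_writes * PCM_WRITE_ENERGY
    return pcm_writes, energy

if __name__ == "__main__":
    random.seed(0)
    # skewed synthetic write trace: 90% of writes hit a small hot set of lines
    trace = [random.randint(0, 63) if random.random() < 0.9 else random.randint(64, 9999)
             for _ in range(50_000)]
    for lines in (16, 64, 256, 1024):
        writes, energy = run(trace, lines)
        print(f"DRAM buffer {lines:4d} lines -> PCM line writes {writes:6d}, energy {energy:9.0f}")
```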

Dual Write Buffer Algorithm for Improving Performance and Lifetime of SSDs (이중 쓰기 버퍼를 활용한 SSD의 성능 향상 및 수명 연장 기법)

  • Han, Se Jun;Kang, Dong Hyun;Eom, Young Ik
    • Journal of KIISE / v.43 no.2 / pp.177-185 / 2016
  • In this paper, we propose a hybrid write buffer architecture composed of DRAM and NVRAM on an SSD, together with a write buffer algorithm for this architecture. Unlike other write buffer algorithms, the proposed algorithm considers read pages as well as write pages to improve the performance of the storage device, because most real workloads mix reads and writes. By effectively managing NVRAM pages, the proposed algorithm also extends the endurance of the SSD by reducing the number of erase operations on NAND flash memory. Our experimental results show that the algorithm improves the buffer hit ratio by up to 116.51% and reduces the number of erase operations on NAND flash memory by up to 56.66%.
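
Since the abstract does not give the algorithm's details, the sketch below is only a guess at the general shape of such a policy: clean (read) pages cached in DRAM, dirty (write) pages kept in NVRAM, both managed with LRU, counting the dirty evictions that would reach NAND flash. The class name, sizes, and trace are invented for illustration.

```python
# Toy hybrid DRAM+NVRAM write buffer: reads are cached in DRAM, writes in
# NVRAM; only dirty pages evicted from NVRAM cause NAND flash writes.
from collections import OrderedDict
import random

class HybridWriteBuffer:
    def __init__(self, dram_pages, nvram_pages):
        self.dram = OrderedDict()     # clean pages (reads), LRU order
        self.nvram = OrderedDict()    # dirty pages (writes), LRU order
        self.dram_pages, self.nvram_pages = dram_pages, nvram_pages
        self.flash_writes = 0

    def access(self, page, is_write):
        if is_write:
            self.dram.pop(page, None)            # page becomes dirty: move it to NVRAM
            self._touch(self.nvram, page, self.nvram_pages, dirty=True)
        elif page in self.nvram:
            self.nvram.move_to_end(page)         # read hit on a dirty page stays in NVRAM
        else:
            self._touch(self.dram, page, self.dram_pages, dirty=False)

    def _touch(self, cache, page, capacity, dirty):
        if page in cache:
            cache.move_to_end(page)
            return
        if len(cache) >= capacity:
            cache.popitem(last=False)            # evict LRU page of this region
            if dirty:
                self.flash_writes += 1           # evicted dirty page is flushed to NAND
        cache[page] = True

if __name__ == "__main__":
    random.seed(1)
    buf = HybridWriteBuffer(dram_pages=256, nvram_pages=256)
    for _ in range(100_000):                     # 70% reads, 30% writes, skewed page set
        page = random.randint(0, 127) if random.random() < 0.8 else random.randint(128, 9999)
        buf.access(page, is_write=random.random() < 0.3)
    print("flash page writes caused by evictions:", buf.flash_writes)
```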

An Efficient Buffer Page Replacement Strategy for System Software on Flash Memory (플래시 메모리상에서 시스템 소프트웨어의 효율적인 버퍼 페이지 교체 기법)

  • Park, Jong-Min;Park, Dong-Joo
    • Journal of KIISE:Databases / v.34 no.2 / pp.133-140 / 2007
  • Flash memory has penetrated our lives in various forms. For example, flash memory is an important storage component of ubiquitous computing and mobile products such as cell phones, MP3 players, PDAs, and portable storage kits. Behind this wide acceptance are the many advantages of flash memory: for instance, low power consumption, nonvolatility, stability, and portability. In addition to these strengths, the recent development of gigabyte-capacity flash memory suggests that flash memory may replace some of the storage area currently dominated by hard disks. However, to overwrite data, a block must first be erased, which leads to different costs for reading, writing, and erasing in flash memory [1][5][6]. Since this difference has not been considered in the traditional buffer replacement techniques adopted in system software such as OSs and DBMSs, a new buffer replacement strategy is necessary. In this paper, a new buffer replacement strategy that reflects this difference in I/O cost and is applicable to flash memory is suggested and compared with other buffer replacement strategies, using both workloads following a Zipfian distribution and real data.
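
To make the cost asymmetry concrete, here is a minimal "clean-first" victim-selection sketch in the spirit of flash-aware replacement; the paper's exact policy is not described in the abstract, so the window parameter and the rule below are assumptions.

```python
# Cost-aware buffer replacement sketch: within a small window at the LRU end,
# prefer evicting a clean page (cheap on flash) over a dirty one (whose
# write-back eventually costs a flash program/erase).
from collections import OrderedDict

class CostAwareBuffer:
    def __init__(self, capacity, window=4):
        self.pages = OrderedDict()   # page -> dirty flag, LRU order (oldest first)
        self.capacity = capacity
        self.window = window         # how deep into the LRU end to look for a clean victim
        self.flash_writes = 0

    def access(self, page, is_write):
        dirty = self.pages.pop(page, False) or is_write
        if len(self.pages) >= self.capacity:
            self._evict()
        self.pages[page] = dirty     # (re)insert at the MRU end

    def _evict(self):
        candidates = list(self.pages.items())[: self.window]
        # least-recently-used clean page if any, else the true LRU page
        victim = next((p for p, d in candidates if not d), candidates[0][0])
        if self.pages.pop(victim):
            self.flash_writes += 1   # dirty victim must be written back to flash

if __name__ == "__main__":
    buf = CostAwareBuffer(capacity=4)
    for page, is_write in [(1, True), (2, True), (3, True), (4, True), (5, False), (6, False)]:
        buf.access(page, is_write)
    print("flash writes from dirty evictions:", buf.flash_writes)
```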

An Efficient Buffer Cache Management Scheme for Heterogeneous Storage Environments (이기종 저장 장치 환경을 위한 버퍼 캐시 관리 기법)

  • Lee, Se-Hwan;Koh, Kern;Bahn, Hyo-Kyung
    • Journal of KIISE:Computer Systems and Theory / v.37 no.5 / pp.285-291 / 2010
  • Flash memory has many good features, such as small size, shock resistance, and low power consumption, but its cost is still too high to substitute for hard disks entirely. Recently, some mobile devices, such as laptops, attempt to use flash memory and a hard disk together to take advantage of the merits of both. However, existing OSs (operating systems) are not optimized for such heterogeneous storage media. This paper presents a new buffer cache management scheme. First, we allocate buffer cache space according to the access patterns of block references and the characteristics of the storage media. Second, we prefetch data blocks selectively according to their location and access patterns. Third, we move destaged data from the buffer cache to the hard disk or flash memory considering the access patterns of block references. Trace-driven simulation shows that the proposed schemes enhance the buffer cache hit ratio by up to 29.9% and reduce the total I/O elapsed time by up to 49.5%.
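
A minimal sketch of the kind of device-aware decisions the abstract mentions: choosing a destage target and splitting buffer cache space between the two devices. The thresholds and per-miss penalties below are placeholders, not values from the paper.

```python
# Toy heuristics for a heterogeneous (flash + HDD) buffer cache: where to
# destage a block, and how to partition cache space between the devices.

MISS_PENALTY_MS = {"hdd": 10.0, "flash": 0.1}   # placeholder per-miss penalties (assumed)

def destage_target(seq_ratio, read_ratio):
    """Blocks that are mostly re-read at random benefit from flash; large
    sequential or write-heavy streams go to the hard disk."""
    if read_ratio > 0.7 and seq_ratio < 0.3:
        return "flash"
    return "hdd"

def partition_cache(total_pages, hdd_miss_rate, flash_miss_rate):
    """Split the buffer cache in proportion to each device's expected miss cost."""
    hdd_cost = hdd_miss_rate * MISS_PENALTY_MS["hdd"]
    flash_cost = flash_miss_rate * MISS_PENALTY_MS["flash"]
    hdd_pages = round(total_pages * hdd_cost / (hdd_cost + flash_cost))
    return hdd_pages, total_pages - hdd_pages

if __name__ == "__main__":
    print(destage_target(seq_ratio=0.1, read_ratio=0.9))        # -> flash
    print(partition_cache(total_pages=4096, hdd_miss_rate=0.2, flash_miss_rate=0.1))
```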

MLC-LFU : The Multi-Level Buffer Cache Management Policy for Flash Memory (MLC-LFU : 플래시 메모리를 위한 멀티레벨 버퍼 캐시 관리 정책)

  • Ok, Dong-Seok;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE:Computing Practices and Letters / v.15 no.1 / pp.14-20 / 2009
  • Recently, NAND flash memory has been used not only in portable devices but also in personal computers and servers. Buffer cache replacement policies designed for hard disks, such as LRU and LFU, are not suitable for NAND flash memory because they do not consider its characteristics. CFLRU and its variants, CFLRU/C, CFLRU/E, and DL-CFLRU/E (CFLRUs), are buffer cache replacement policies that do consider the characteristics of NAND flash memory, but their performance is not better than that of LRD. In this paper, we propose a new buffer cache replacement policy for NAND flash memory, which is based on LFU and takes the characteristics of NAND flash memory into account, and we evaluate its hit ratio and number of flush operations. The proposed policy shows a better hit ratio and fewer flush operations than the other policies.
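
The abstract states only that the proposed policy is LFU-based and flash-aware, so the toy version below simply ranks victims by (reference count, dirtiness), preferring clean pages among equally cold ones; treat it as an assumption-laden illustration rather than MLC-LFU itself.

```python
# LFU-style, flash-aware buffer cache sketch: the victim is the coldest page,
# and among equally cold pages a clean one is preferred, because flushing a
# dirty victim costs a NAND flash write.

class FlashAwareLFU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = {}              # page -> [reference count, dirty flag]
        self.flush_ops = 0

    def access(self, page, is_write=False):
        if page in self.pages:
            self.pages[page][0] += 1
            self.pages[page][1] |= is_write
            return
        if len(self.pages) >= self.capacity:
            # lowest reference count first; ties broken by preferring clean pages
            victim = min(self.pages, key=lambda p: (self.pages[p][0], self.pages[p][1]))
            if self.pages.pop(victim)[1]:
                self.flush_ops += 1  # dirty victim must be flushed to flash
        self.pages[page] = [1, is_write]

if __name__ == "__main__":
    cache = FlashAwareLFU(capacity=3)
    for page, is_write in [(1, True), (2, True), (3, True), (4, False), (2, False)]:
        cache.access(page, is_write)
    print("flush operations:", cache.flush_ops)   # -> 1
```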

Hybrid Main Memory based Buffer Cache Scheme by Using Characteristics of Mobile Applications (모바일 애플리케이션의 특성을 이용한 하이브리드 메모리 기반 버퍼 캐시 정책)

  • Oh, Chansoo;Kang, Dong Hyun;Lee, Minho;Eom, Young Ik
    • Journal of KIISE / v.42 no.11 / pp.1314-1321 / 2015
  • Mobile devices employ buffer cache mechanisms, just as in computer systems such as desktops or servers, to mitigate the performance gap between main memory and secondary storage. However, DRAM has a problem in that it accelerates battery consumption by performing refresh operations periodically to maintain the stored data. In this paper, we propose a novel buffer cache scheme to increase the battery lifecycle in mobile devices based on a hybrid main memory architecture consisting of DRAM and non-volatile PCM. We also suggest a new buffer cache policy that allocates buffers based on process states to optimize the performance and endurance of PCM. In particular, our algorithm allocates each page to the appropriate position corresponding to the state of the application that owns the page, and tries to ensure a rapid response of foreground applications even with a small amount of DRAM memory. The experimental results indicate that the proposed scheme reduces the elapsed time of foreground applications by 58% on average and power consumption by 23% on average without negatively impacting the performance of background applications.
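
As a loose illustration of allocating buffer cache pages by the owning application's state in a DRAM+PCM hybrid, the toy placement rule below keeps foreground pages in a small DRAM region and demotes evicted pages to PCM; the rule, names, and sizes are our assumptions, not the paper's algorithm.

```python
# Toy hybrid (DRAM + PCM) buffer cache: foreground pages are kept in the small
# DRAM region for responsiveness, background pages go to refresh-free PCM, and
# pages evicted from DRAM are demoted to PCM instead of being dropped.
from collections import OrderedDict

class HybridBufferCache:
    def __init__(self, dram_pages, pcm_pages):
        self.dram = OrderedDict()           # page -> owning app state, LRU order
        self.pcm = OrderedDict()
        self.dram_pages, self.pcm_pages = dram_pages, pcm_pages

    def cache_page(self, page, app_state):
        """app_state is 'foreground' or 'background' for the owning process."""
        if app_state == "foreground":
            self._insert(self.dram, page, app_state, self.dram_pages, spill=self.pcm)
        else:
            self._insert(self.pcm, page, app_state, self.pcm_pages, spill=None)

    def _insert(self, region, page, state, capacity, spill):
        if len(region) >= capacity:
            old_page, old_state = region.popitem(last=False)    # evict LRU page
            if spill is not None and len(spill) < self.pcm_pages:
                spill[old_page] = old_state                     # demote to PCM instead of dropping
        region[page] = state

if __name__ == "__main__":
    cache = HybridBufferCache(dram_pages=2, pcm_pages=4)
    cache.cache_page("fg-1", "foreground")
    cache.cache_page("bg-1", "background")
    cache.cache_page("fg-2", "foreground")
    cache.cache_page("fg-3", "foreground")        # DRAM full: fg-1 is demoted to PCM
    print("DRAM:", list(cache.dram), "PCM:", list(cache.pcm))
```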