• Title/Summary/Keyword: Mobile cache memory


Web-Based Distributed Visualization System for Large Scale Geographic Data (대용량 지형 데이터를 위한 웹 기반 분산 가시화 시스템)

  • Hwang, Gyu-Hyun;Yun, Seong-Min;Park, Sang-Hun
    • Journal of Korea Multimedia Society / v.14 no.6 / pp.835-848 / 2011
  • In this paper, we propose a client-server based distributed/parallel system for effectively visualizing huge geographic data. The system consists of a web-based client GUI program and a distributed/parallel server program that runs on multiple PC clusters. So that the client program can run on mobile devices as well as PCs, the graphical user interface was designed with JOGL, the Java-based OpenGL graphics library, and by sending the client's currently available memory space and maximum display resolution, the server can minimize the amount of work it performs. The PC clusters acting as the server read the requested geographic data from distributed disks, re-sample them appropriately, and send the results back to the client. To minimize the latency incurred by repeatedly accessing the distributed geographic data, cache data structures are maintained on every server node as well as on the client.
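
A minimal sketch of the caching idea described above (illustrative only, not the paper's actual data structure): each server node or the client could keep a small fixed-size, least-recently-used table keyed by a hypothetical tile identifier (level, row, column), so that repeatedly requested tiles are served from memory instead of the distributed disks. The names TileSlot, cache_lookup, and cache_insert are assumptions introduced for this sketch.

    /* Illustrative per-node tile cache with LRU eviction; not from the paper. */
    #include <stdlib.h>
    #include <string.h>

    #define CACHE_SLOTS 64

    typedef struct {
        int level, row, col;       /* tile key */
        unsigned char *pixels;     /* re-sampled tile data */
        size_t size;
        unsigned long last_used;   /* logical clock for LRU ordering */
        int valid;
    } TileSlot;

    static TileSlot cache[CACHE_SLOTS];
    static unsigned long clock_tick = 0;

    /* Returns cached tile data, or NULL so the caller can fetch the tile
     * from the distributed disks (or from the server, on the client side). */
    unsigned char *cache_lookup(int level, int row, int col)
    {
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].valid && cache[i].level == level &&
                cache[i].row == row && cache[i].col == col) {
                cache[i].last_used = ++clock_tick;
                return cache[i].pixels;
            }
        }
        return NULL;
    }

    /* Inserts a freshly fetched tile, evicting the least recently used slot. */
    void cache_insert(int level, int row, int col,
                      const unsigned char *pixels, size_t size)
    {
        int victim = 0;
        for (int i = 1; i < CACHE_SLOTS; i++)
            if (!cache[i].valid ||
                cache[i].last_used < cache[victim].last_used)
                victim = i;

        free(cache[victim].pixels);
        cache[victim].pixels = malloc(size);
        if (cache[victim].pixels == NULL) {   /* allocation failed: drop entry */
            cache[victim].valid = 0;
            return;
        }
        memcpy(cache[victim].pixels, pixels, size);
        cache[victim].level = level;
        cache[victim].row = row;
        cache[victim].col = col;
        cache[victim].size = size;
        cache[victim].last_used = ++clock_tick;
        cache[victim].valid = 1;
    }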

Real-time Implementation of MPEG-4 HVXC Encoder and Decoder on Floating Point DSP (부동 소수점 DSP를 이용한 MPEG-4 HVXC 인코더 및 디코더의 실시간 구현)

  • Kang, Kyeong-ok;Na, Hoon;Hong, Jin-Woo;Jeong, Dae-Gwon
    • The Journal of the Acoustical Society of Korea / v.19 no.4 / pp.37-44 / 2000
  • In this paper, we describe the real-time implementation of the MPEG-4 audio HVXC (Harmonic Vector eXcitation Coding) algorithm for very low bit rates, whose target applications range from mobile communications to Internet telephony, on the high-performance floating-point TMS320C6701 DSP. We adopted a hardware structure suited to real-time operation. For software optimization, we applied C- and assembly-language optimizations to the time-critical functions. By utilizing the internal program memory of the DSP as a program cache, an internal data memory overlap technique, and the DMA functionality, we achieved real-time operation of the HVXC codec at both 2 kbit/s and 4 kbit/s. For the encoder at 2 kbit/s, the optimization ratio relative to the original code is about 96 %. Finally, an informal quality test gave a subjective quality of MOS 2.45 at 2 kbit/s.
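
The overlap of DMA transfers with computation mentioned in the abstract is commonly realized as ping-pong (double) buffering in fast internal data memory. The sketch below is an illustration of that general pattern only; dma_copy_async(), dma_wait(), and process_frame() are hypothetical placeholders, not the TMS320C6701's real DMA or codec API.

    /* Ping-pong buffering sketch: while frame n is processed out of internal
     * data memory, frame n+1 is brought in by DMA from external memory. */
    #include <stddef.h>

    #define FRAME_SAMPLES 160          /* e.g. one 20 ms speech frame */

    extern void dma_copy_async(short *dst, const short *src, size_t n); /* assumed */
    extern void dma_wait(void);                                         /* assumed */
    extern void process_frame(short *frame, size_t n);                  /* codec work */

    void run_codec(const short *external_input, size_t num_frames)
    {
        /* Two buffers placed in fast internal data memory. */
        static short ping[FRAME_SAMPLES], pong[FRAME_SAMPLES];
        short *working = ping, *loading = pong;

        /* Prime the pipeline with the first frame. */
        dma_copy_async(working, external_input, FRAME_SAMPLES);
        dma_wait();

        for (size_t f = 0; f < num_frames; f++) {
            /* Start fetching the next frame while the current one is processed. */
            if (f + 1 < num_frames)
                dma_copy_async(loading,
                               external_input + (f + 1) * FRAME_SAMPLES,
                               FRAME_SAMPLES);

            process_frame(working, FRAME_SAMPLES);  /* compute overlaps the DMA */

            if (f + 1 < num_frames) {
                dma_wait();                         /* ensure next frame has landed */
                short *tmp = working;               /* swap ping and pong buffers */
                working = loading;
                loading = tmp;
            }
        }
    }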

Ultra-low-power DSP for Audio Signal Processing (오디오 신호 처리를 위한 초저전력 DSP 프로세서)

  • Kwon, Kiseok;Ahn, Minwook;Jo, Seokhwan;Lee, Yeonbok;Lee, Seungwon;Park, Young-Hwan;Kim, Sukjin;Kim, Do-Hyung;Kim, Jaehyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.06a / pp.157-159 / 2014
  • In this paper, we introduce SlimSRP, an ultra-low-power digital signal processor (DSP) solution for mobile audio and voice applications. So far, application processors (APs) have taken charge of all tasks in mobile devices; however, they suffer from short battery life when handling complex usage scenarios such as an always-on voice trigger running alongside continuous audio playback. Based on extensive analysis of audio and voice application characteristics, SlimSRP is designed to relieve the performance and power burden on APs. It employs a three-issue VLIW architecture, and its major low-power, high-performance techniques include: (1) an optimized register-file architecture that is friendly to constant generation, (2) a powerful instruction set that reduces the number of register-file accesses, and (3) a unique instruction compression scheme that reduces memory footprint and cache misses. An implementation of SlimSRP runs at up to 200 MHz, and the logic occupies 95K NAND2 gates in the Samsung 28LPP process. Experimental results demonstrate that an MP3 decoder application with a 128 kbps, 44.1 kHz input can run at 5.1 MHz while the logic consumes only 22 uW/MHz.
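
As an illustrative back-of-the-envelope check of the reported figures (not a result stated in the paper itself): at the quoted 5.1 MHz workload and 22 uW/MHz power efficiency, the core logic power for MP3 decoding comes out to roughly $5.1\,MHz \times 22\,{\mu}W/MHz \approx 112\,{\mu}W$, i.e. on the order of 0.1 mW.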

The Electrical Characteristics of SRAM Cell with Stacked Single Crystal Silicon TFT Cell (단결정 실리콘 TFT Cell의 적용에 따른 SRAM 셀의 전기적 특성)

  • Lee, Deok-Jin;Kang, Ey-Goo
    • Journal of the Korea Computer Industry Society / v.6 no.5 / pp.757-766 / 2005
  • There have been great demands for higher-density SRAM in all areas of SRAM application, such as mobile, network, cache, and embedded applications. Therefore, aggressive shrinkage of the 6T Full CMOS SRAM cell has continued as the technology advances. However, conventional 6T Full CMOS SRAM has a basic limitation in cell size because it needs six transistors on the silicon substrate, compared to one transistor in a DRAM cell. The typical cell area of 6T Full CMOS SRAM is $70{\sim}90F^{2}$, which is too large compared to the $8{\sim}9F^{2}$ of a DRAM cell. With an 80 nm design rule using 193 nm ArF lithography, the maximum density is 72 Mbits at most. Therefore, pseudo SRAM or 1T SRAM, whose memory cell is the same as a DRAM cell, is being adopted as the solution for high-density SRAM applications beyond 64 Mbits. However, the refresh time limits not only the maximum operating temperature but also nearly all critical electrical characteristics of such products, including standby current and random access time. To overcome both the size penalty of the conventional 6T Full CMOS SRAM cell and the poor characteristics of the TFT load cell, we have developed the $S^{3}$ cell. According to TEM and electron diffraction pattern analysis, the Load pMOS and the Pass nMOS on the ILD have a nearly single-crystal silicon channel. In this study, we present the $S^{3}$ SRAM cell technology with a 100 nm design rule in further detail, including the process integration and the basic characteristics of the stacked single crystal silicon TFT.
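
To put the $F^{2}$ figures above in perspective, an illustrative calculation (not from the paper): at an 80 nm design rule, one feature-squared is $F^{2} = (0.08\,{\mu}m)^{2} = 0.0064\,{\mu}m^{2}$, so an $80F^{2}$ 6T SRAM cell occupies about $0.51\,{\mu}m^{2}$, roughly ten times the $\approx 0.05{\sim}0.06\,{\mu}m^{2}$ of an $8{\sim}9F^{2}$ DRAM cell; a 72-Mbit array of such SRAM cells would already require on the order of $40\,mm^{2}$ of cell area alone.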

Electrical Characteristics of SRAM Cell with Stacked Single Crystal Silicon TFT Cell (Stacked Single Crystal Silicon TFT Cell의 적용에 의한 SRAM 셀의 전기적인 특성에 관한 연구)

  • Kang, Ey-Goo;Kim, Jin-Ho;Yu, Jang-Woo;Kim, Chang-Hun;Sung, Man-Young
    • Journal of the Korean Institute of Electrical and Electronic Material Engineers / v.19 no.4 / pp.314-321 / 2006
  • There have been great demands for higher-density SRAM in all areas of SRAM application, such as mobile, network, cache, and embedded applications. Therefore, aggressive shrinkage of the 6T Full CMOS SRAM cell has continued as the technology advances. However, conventional 6T Full CMOS SRAM has a basic limitation in cell size because it needs six transistors on the silicon substrate, compared to one transistor in a DRAM cell. The typical cell area of 6T Full CMOS SRAM is $70{\sim}90\;F^2$, which is too large compared to the $8{\sim}9\;F^2$ of a DRAM cell. With an 80 nm design rule using 193 nm ArF lithography, the maximum density is 72 Mbits at most. Therefore, pseudo SRAM or 1T SRAM, whose memory cell is the same as a DRAM cell, is being adopted as the solution for high-density SRAM applications beyond 64 Mbits. However, the refresh time limits not only the maximum operating temperature but also nearly all critical electrical characteristics of such products, including standby current and random access time. To overcome both the size penalty of the conventional 6T Full CMOS SRAM cell and the poor characteristics of the TFT load cell, we have developed the $S^3$ cell. According to TEM and electron diffraction pattern analysis, the Load pMOS and the Pass nMOS on the ILD have a nearly single-crystal silicon channel. In this study, we present the $S^3$ SRAM cell technology with a 100 nm design rule in further detail, including the process integration and the basic characteristics of the stacked single crystal silicon TFT.

Efficient DRAM Buffer Access Scheduling Techniques for SSD Storage System (SSD 스토리지 시스템을 위한 효율적인 DRAM 버퍼 액세스 스케줄링 기법)

  • Park, Jun-Su;Hwang, Yong-Joong;Han, Tae-Hee
    • Journal of the Institute of Electronics Engineers of Korea SD / v.48 no.7 / pp.48-56 / 2011
  • Recently, SSDs (Solid State Disks) based on NAND flash memory have been gradually replacing HDDs (Hard Disk Drives) in mobile devices, and a variety of research efforts are under way to find cost-effective ways of improving their performance. As the number of NAND flash channels is increased to enhance bandwidth through parallel processing, the DRAM buffer, which acts as a buffer cache between the host (PC) and the NAND flash, has become the bottleneck. To resolve this problem, this paper proposes an efficient, low-cost scheme that increases SSD performance by improving DRAM buffer bandwidth through scheduling techniques that exploit DRAM multi-banks. When the host and the NAND flash multi-channels request access to the DRAM buffer concurrently, the proposed technique checks their destinations and schedules the accesses appropriately, considering the properties of DRAM. This significantly reduces the overheads of bank-activation time and row latency and thus optimizes DRAM buffer bandwidth utilization. The results reveal that the proposed technique improves SSD performance by 47.4% for read and 47.7% for write operations, respectively, compared to conventional methods, with negligible changes and additions to the hardware.
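
The bank-aware scheduling idea can be sketched in simplified form as follows. This is an illustration under assumed names (Request, NUM_BANKS, pick_next, commit), not the paper's implementation: requests from the host and from the NAND channels share a small pending window, and the arbiter prefers a request whose target row is already open in its bank, so that bank-activate and row-latency overheads are avoided where possible.

    /* Simplified bank-aware arbitration for a multi-bank DRAM buffer. */
    #include <stdbool.h>
    #include <stdint.h>

    #define NUM_BANKS 8

    typedef struct {
        uint32_t bank;       /* target bank of the DRAM buffer            */
        uint32_t row;        /* target row within that bank               */
        bool     from_host;  /* true: host (PC) side, false: NAND channel */
        bool     valid;
    } Request;

    static bool     row_open[NUM_BANKS];   /* is some row open in this bank? */
    static uint32_t open_row[NUM_BANKS];   /* which row, if row_open is true */

    /* Pick the next request to issue from a window of pending requests.
     * Preference: (1) a row hit (its row is already open, so no activate and
     * no row latency), otherwise (2) the oldest valid request in the window. */
    int pick_next(const Request *pending, int count)
    {
        int fallback = -1;
        for (int i = 0; i < count; i++) {
            if (!pending[i].valid)
                continue;
            if (row_open[pending[i].bank] &&
                open_row[pending[i].bank] == pending[i].row)
                return i;                  /* row hit */
            if (fallback < 0)
                fallback = i;              /* oldest request needing an activate */
        }
        return fallback;                   /* -1 if nothing is pending */
    }

    /* Bookkeeping once a request has been issued to the DRAM. */
    void commit(const Request *req)
    {
        row_open[req->bank] = true;
        open_row[req->bank] = req->row;
    }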