• Title/Summary/Keyword: Embedded memory

An Explicit Free Method for the Garbage Objects in Java-based Embedded System (자바기반 내장형 시스템에서 쓰레기 객체의 명시적 자유화 방법)

  • Bae, Soo-kang;Lee, Sung-young
    • The KIPS Transactions:PartA
    • /
    • v.9A no.4
    • /
    • pp.441-450
    • /
    • 2002
  • As embedded system software grows larger and more complex, the use of dynamic memory management schemes such as garbage collection has also increased. Using a garbage collector, however, inherently leads to performance degradation. In order to resolve this performance problem in Java-based embedded systems, we introduce an explicit dynamic memory free method, performed by the programmer, into the automated dynamic memory management environment. In the worst case, the proposed scheme shows the same performance as when only the garbage collector is working, since any unclaimed garbage objects will eventually be collected by the garbage collector. In the best case, our method is free from any runtime overhead because applications can be implemented without any intervention of the garbage collector. Although the proposed method can be combined with any existing garbage collection algorithm, it performs best with the mark-and-sweep algorithm.
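A minimal C sketch of the coexistence described above: explicit freeing alongside a mark-and-sweep collector. The paper realizes this inside a Java virtual machine, so the toy allocator below and all of its names are illustrative assumptions, not the paper's implementation.

    #include <stdbool.h>
    #include <stddef.h>

    struct obj {
        bool marked;              /* set by the collector's mark phase */
        struct obj *next_free;
    };

    static struct obj *free_list;

    static void reclaim(struct obj *o)
    {
        o->next_free = free_list;
        free_list = o;
    }

    /* Explicit free: the programmer returns a known-garbage object to the
     * free list immediately and drops the reference, instead of waiting
     * for the next collection cycle. */
    void explicit_free(struct obj **slot)
    {
        reclaim(*slot);
        *slot = NULL;
    }

    /* Sweep phase of mark-and-sweep: objects the programmer missed are
     * still reclaimed here, so the worst case equals plain garbage
     * collection, as the abstract notes. */
    void sweep(struct obj **heap, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (heap[i] && !heap[i]->marked) {
                reclaim(heap[i]);
                heap[i] = NULL;
            } else if (heap[i]) {
                heap[i]->marked = false;   /* reset for the next mark phase */
            }
        }
    }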

Efficient Implementation of SVM-Based Speech/Music Classifier by Utilizing Temporal Locality (시간적 근접성 향상을 통한 효율적인 SVM 기반 음성/음악 분류기의 구현 방법)

  • Lim, Chung-Soo;Chang, Joon-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.2
    • /
    • pp.149-156
    • /
    • 2012
  • Support vector machines (SVMs) are well known for their pattern recognition capability, but proper care should be taken to alleviate their inherent implementation cost resulting from high computational intensity and memory requirements, especially in embedded systems where only limited resources are available. Since the memory requirement, determined by the dimensionality and the number of support vectors, is generally too high for a cache in embedded systems to accommodate, frequent accesses to the main memory occur inevitably whenever the cache is not able to provide requested data to the processor. These frequent accesses to the main memory result in overall performance degradation and increased energy consumption because a memory access typically takes longer and consumes more energy than a cache access or a register access. In this paper, we propose a technique that reduces the number of main memory accesses by optimizing the data access pattern of the SVM-based classifier in such a way that the temporal locality of the accesses increases, fully utilizing data loaded into the processor chip. With experiments, we confirm the enhancement made by the proposed technique in terms of the number of memory accesses, overall execution time, and energy consumption.
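The abstract does not spell out the reordering, so the C sketch below shows one generic way to raise temporal locality in SVM classification (not necessarily the paper's exact scheme): evaluate a small batch of input frames against each support vector while it is still cache-resident, so the large support-vector array is streamed from main memory once per batch instead of once per frame. All names are illustrative.

    #include <stddef.h>

    /* sv:  n_sv x n_feat support vectors, row-major
     * x:   n_frames x n_feat input frames, row-major
     * dot: n_frames x n_sv output dot products */
    void batched_sv_dot(const float *sv, const float *x, float *dot,
                        size_t n_sv, size_t n_frames, size_t n_feat)
    {
        for (size_t i = 0; i < n_sv; i++) {
            const float *s = sv + i * n_feat;        /* fetched from DRAM once */
            for (size_t j = 0; j < n_frames; j++) {  /* reused for every frame */
                const float *xf = x + j * n_feat;
                float acc = 0.0f;
                for (size_t f = 0; f < n_feat; f++)
                    acc += s[f] * xf[f];
                dot[j * n_sv + i] = acc;             /* fed to the kernel/decision step later */
            }
        }
    }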

Acceleration Techniques of Application Startup for Embedded Systems (임베디드 환경에서 응용프로그램 시작의 가속 기법)

  • Park, Eun-Byung;Lee, Yong-Jun;Kim, Seungkyun;Lee, Jaejin;Park, Kyungmin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.4 no.4
    • /
    • pp.174-179
    • /
    • 2009
  • Due to digital convergence, mobile embedded systems need more functionalities and a fully fledged OS. Applications for such embedded systems are linked with many shared libraries available in the OS and access a large data set at launch time. This results in increased application launch time. In this paper, we propose two techniques for reducing the application launch time: lazy-loading and pinning. Lazy-loading defers loading shared libraries that are not used in the application at launch time, whereas pinning guarantees the residence of shared libraries and data used at launch time in the main memory.

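A rough user-level illustration of the two ideas, assuming only POSIX APIs (dlopen, dlsym, mlockall); the abstract does not detail the paper's own mechanism, so this is a conceptual approximation, and libcodec.so is a hypothetical optional library.

    #include <dlfcn.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Lazy-loading: defer loading a shared library that is not needed at
     * launch until the first call that actually requires it. */
    static void *codec_lib;

    static void *get_codec_symbol(const char *name)
    {
        if (!codec_lib) {
            codec_lib = dlopen("libcodec.so", RTLD_NOW);   /* loaded on demand */
            if (!codec_lib) {
                fprintf(stderr, "dlopen: %s\n", dlerror());
                return NULL;
            }
        }
        return dlsym(codec_lib, name);
    }

    /* Pinning: keep the pages touched during launch resident in RAM so the
     * startup path never waits on demand paging. */
    static void pin_launch_pages(void)
    {
        if (mlockall(MCL_CURRENT) != 0)    /* locks currently mapped pages */
            perror("mlockall");
    }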

Duplication-Aware Garbage Collection for Flash Memory-Based Virtual Memory Systems (플래시 메모리 기반의 가상 메모리 시스템을 위한 중복성을 고려한 GC 기법)

  • Ji, Seung-Gu;Shin, Dong-Kun
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.3
    • /
    • pp.161-171
    • /
    • 2010
  • As embedded systems adopt monolithic kernels, NAND flash memory is used as the swap space of virtual memory systems. While flash memory has the advantages of low power consumption, shock resistance, and non-volatility, it requires garbage collection due to its erase-before-write characteristic. The efficiency of the garbage collection scheme largely affects the performance of flash memory. This paper proposes a novel garbage collection technique which exploits data redundancy between the main memory and flash memory in flash memory-based virtual memory systems. The proposed scheme takes the locality of data into consideration to minimize the garbage collection overhead. Experimental results demonstrate that the proposed garbage collection scheme improves performance by 37% on average compared to previous schemes.
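A conceptual C sketch of the duplication idea: while cleaning a swap block, a valid flash page whose identical copy still resides in main memory can simply be dropped instead of being copied to a free block, since it can be written again later if the RAM copy is evicted. The structures and helper functions below are hypothetical.

    #include <stdbool.h>
    #include <stddef.h>

    struct flash_page {
        bool     valid;               /* holds live swap data            */
        bool     duplicated_in_ram;   /* same data still in main memory  */
        unsigned swap_slot;
    };

    /* hypothetical low-level operations */
    void flash_copy_page(const struct flash_page *src, unsigned dst_block);
    void swap_slot_invalidate(unsigned slot);

    void clean_block(struct flash_page *pages, size_t n, unsigned free_block)
    {
        for (size_t i = 0; i < n; i++) {
            if (!pages[i].valid)
                continue;
            if (pages[i].duplicated_in_ram) {
                /* duplicated page: drop it instead of copying, and note
                 * that this swap slot no longer backs the RAM copy */
                swap_slot_invalidate(pages[i].swap_slot);
            } else {
                flash_copy_page(&pages[i], free_block);
            }
            pages[i].valid = false;
        }
        /* the block can now be erased */
    }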

The Improvement of the Data Overlapping Phenomenon with Memory Accessing Mode

  • Yang, Jin-Wook;Woo, Doo-Hyung;Kim, Dong-Hwan;Yi, Jun-Sin
    • Journal of Information Display
    • /
    • v.9 no.1
    • /
    • pp.6-13
    • /
    • 2008
  • Mobile phones use the embedded memory in the LDI (LCD driver IC). In memory accessing mode, a data overlapping phenomenon can occur. These days, various contents such as DMB, camera, and games are converging onto the phone; with the resulting increase in data transmission, the data overlapping phenomenon in memory accessing mode occurs more often. Human eyes perceive this data overlapping simply as horizontal line noise. This paper analyzes the cause of the data overlapping phenomenon, which varies with the speed of data transmission between the host and the LDI, and shows that an optimum memory accessing position can be defined. Based on this analysis, a new algorithm for avoiding data overlapping is proposed.
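A heavily simplified sketch of one standard avoidance pattern consistent with the abstract: confine frame-memory writes to the half of the panel the scan is not currently reading, so the write position never overlaps the read position. The accessors are hypothetical stand-ins for vendor-specific LDI commands, and the half-frame ping-pong is only an illustration, not the paper's algorithm.

    #include <stddef.h>
    #include <stdint.h>

    uint16_t ldi_read_scanline(void);                      /* hypothetical */
    void     ldi_write_lines(uint16_t first, uint16_t last,
                             const uint16_t *pixels);      /* hypothetical */

    #define PANEL_LINES 320u

    void update_frame(const uint16_t *frame, size_t words_per_line)
    {
        const uint16_t half = PANEL_LINES / 2;

        /* write the top half while the scan is in the bottom half */
        while (ldi_read_scanline() < half)
            ;                               /* busy-wait, for illustration only */
        ldi_write_lines(0, half - 1, frame);

        /* write the bottom half once the scan has wrapped back to the top */
        while (ldi_read_scanline() >= half)
            ;
        ldi_write_lines(half, PANEL_LINES - 1, frame + half * words_per_line);
    }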

A Study on Software-based Memory Testing of Embedded System (임베디드 시스템의 소프트웨어 기반 메모리 테스팅에 관한 연구)

  • Roh, Myong-Ki;Kim, Sang-Il;Rhew, Sung-Yul
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2004.05a
    • /
    • pp.309-312
    • /
    • 2004
  • An embedded system combines computer hardware and software to perform a specific purpose. Embedded systems run on smaller-scale hardware than general desktop systems and are constrained by various environmental factors such as power, space, and memory. Moreover, because embedded systems operate in real time, a software failure in an embedded system causes far more serious problems than on a general desktop. An embedded system therefore has to use its given resources efficiently, and its failure rate must be kept low. One of the causes of embedded system failures that can lead to fatal problems is memory-related defects. Given the characteristics of embedded systems, memory problems are broadly classified into hardware-based and software-based memory problems. Software-based memory problems include Memory Leak, Freeing Free Memory, Freeing Unallocated Memory, Memory Allocation Failed, Late Detect Array Bounds Write, and Late Detect Freed Memory Write. In this paper, we identify the memory-related problems of embedded systems, survey related tools, and incrementally study techniques that can resolve these problems efficiently.

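A toy C illustration of the kind of allocation tracking such tools build on, covering two of the problem classes listed above (freeing unallocated memory, including double frees) and reporting leaks at shutdown; it is not one of the surveyed tools.

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_TRACKED 1024

    static void  *live[MAX_TRACKED];
    static size_t live_size[MAX_TRACKED];

    void *dbg_malloc(size_t size)
    {
        void *p = malloc(size);
        if (!p)
            return NULL;                    /* the Memory Allocation Failed case */
        for (int i = 0; i < MAX_TRACKED; i++) {
            if (!live[i]) { live[i] = p; live_size[i] = size; break; }
        }
        return p;
    }

    void dbg_free(void *p)
    {
        for (int i = 0; i < MAX_TRACKED; i++) {
            if (live[i] == p) { live[i] = NULL; free(p); return; }
        }
        /* freeing memory we never handed out: unallocated or already freed */
        fprintf(stderr, "dbg_free: invalid or double free of %p\n", p);
    }

    void dbg_report_leaks(void)             /* call at shutdown */
    {
        for (int i = 0; i < MAX_TRACKED; i++)
            if (live[i])
                fprintf(stderr, "leak: %p (%zu bytes)\n", live[i], live_size[i]);
    }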

Performance Enhancement and Evaluation of a Deep Learning Framework on Embedded Systems using Unified Memory (통합메모리를 이용한 임베디드 환경에서의 딥러닝 프레임워크 성능 개선과 평가)

  • Lee, Minhak;Kang, Woochul
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.7
    • /
    • pp.417-423
    • /
    • 2017
  • Recently, many embedded devices that have the computing capability required for deep learning have become available; hence, many new applications using these devices are emerging. However, these embedded devices have an architecture different from that of PCs and high-performance servers. In this paper, we propose a method that improves the performance of deep-learning framework by considering the architecture of an embedded device that shares memory between the CPU and the GPU. The proposed method is implemented in Caffe, an open-source deep-learning framework, and is evaluated on an NVIDIA Jetson TK1 embedded device. In the experiment, we investigate the image recognition performance of several state-of-the-art deep-learning networks, including AlexNet, VGGNet, and GoogLeNet. Our results show that the proposed method can achieve significant performance gain. For instance, in AlexNet, we could reduce image recognition latency by about 33% and energy consumption by about 50%.
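A sketch of one zero-copy approach on a physically unified-memory SoC such as the Jetson TK1, using only the CUDA runtime's C API: a mapped, pinned host allocation is aliased into the GPU address space, so the usual host-to-device copy disappears. The paper implements its method inside Caffe; the standalone function below does not reproduce that modification, and its name is an assumption.

    #include <cuda_runtime.h>
    #include <stddef.h>

    int alloc_shared_buffer(float **host_ptr, float **dev_ptr, size_t n)
    {
        /* allow host allocations to be mapped into the GPU address space */
        if (cudaSetDeviceFlags(cudaDeviceMapHost) != cudaSuccess)
            return -1;

        /* one physical allocation, visible to both CPU and GPU */
        if (cudaHostAlloc((void **)host_ptr, n * sizeof(float),
                          cudaHostAllocMapped) != cudaSuccess)
            return -1;

        /* device-side alias of the same memory; no copy is ever made */
        if (cudaHostGetDevicePointer((void **)dev_ptr, *host_ptr, 0) != cudaSuccess) {
            cudaFreeHost(*host_ptr);
            return -1;
        }
        return 0;
    }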

Characteristic of the Class Library for Embedded Java System (내장형 자바 시스템을 위한 클래스 라이브러리의 특성)

  • Yang, Hee-Jae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.4
    • /
    • pp.788-797
    • /
    • 2003
  • The class library is one of the most crucial elements of the Java runtime environment, in addition to the Java virtual machine. In particular, an embedded Java system depends heavily on the class library because of the low-bandwidth communication link and small amount of memory that are common restrictions of embedded systems. It is therefore quite necessary to find the characteristics of the class library for embedded Java systems in order to build an efficient Java runtime environment. In this paper we analyze the characteristics of the class library for embedded systems. The analysis includes the sorts of classes in the library, the typical size of the file which contains a class, and the composition of the constant pool, which is a major part of the file. We also report the typical number of fields and methods a class contains, the sizes of the stack and local variable array each method requires, and the length of the bytecode in each method. The results of this study can be used to estimate the startup time for class loading and the size of memory needed to create an instance of a class, which is mandatory information for designing an efficient embedded Java virtual machine.
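Some of the measurements behind such an analysis can be taken directly from the class file header; the minimal C sketch below checks the magic number and reads constant_pool_count, the size of the constant pool that the abstract identifies as a major part of the file. It is only an illustration, not the paper's measurement tool.

    #include <stdint.h>
    #include <stdio.h>

    int print_constant_pool_count(const char *path)
    {
        unsigned char hdr[10];
        FILE *f = fopen(path, "rb");
        if (!f || fread(hdr, 1, sizeof hdr, f) != sizeof hdr) {
            if (f) fclose(f);
            return -1;
        }
        fclose(f);

        uint32_t magic = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                         ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
        if (magic != 0xCAFEBABEu)           /* not a Java class file */
            return -1;

        /* u2 constant_pool_count follows the 4-byte magic and the 2-byte
         * minor and major version fields (all big-endian). */
        unsigned cp_count = (unsigned)((hdr[8] << 8) | hdr[9]);
        printf("%s: constant_pool_count = %u (%u entries)\n",
               path, cp_count, cp_count - 1);
        return 0;
    }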

Development of a Flash Memory Drive for ATA bus (ATA 버스 방식을 위한 Flash Memory Drive 개발)

  • Kang, Kyung-Sik;Jang, Moon-Kee;Hwang, Yeon-Bum;Jung, Nam-Mo;Park, Jin-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.1
    • /
    • pp.547-550
    • /
    • 2005
  • This paper studies and develops a flash memory drive for the ATA bus that uses flash memory, a widely used semiconductor memory, to overcome the shortcomings of conventional hard disks on the existing ATA bus. While a general hard disk is sensitive to external impact and shock, a flash memory drive built from semiconductor memory elements rather than a disk is robust against external impact and enables low power consumption and light weight. It is therefore expected to find practical use in the future as a storage device for embedded systems and as a black box for vehicles or for military use.

Modeling and Analysis of High Speed Serial Links (SerDes) for Hybrid Memory Cube Systems (하이브리드 메모리 큐브 (HMC) 시스템의 고속 직렬 링크 (SerDes)를 위한 모델링 및 성능 분석)

  • Jeon, Dong-Ik;Chung, Ki-Seok
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.12 no.4
    • /
    • pp.193-204
    • /
    • 2017
  • Various 3D-stacked DRAMs have been proposed to overcome the memory wall problem. Hybrid Memory Cube (HMC) is a true 3D-stacked DRAM with stacked DRAM layers on top of a logic layer. The logic die is mainly used to implement a memory controller for the HMC, and it is connected through a high-speed serial link called SerDes with a host that is either a processor or another HMC. In HMC, the serial link is crucial for both performance and power consumption. Therefore, it is important that the link is configured properly so that the required performance is satisfied while the power consumption is minimized. In this paper, we propose an HMC system model that includes the high-speed serial link in order to estimate performance accurately. Since the link modeling strictly follows the link flow control mechanism defined in the HMC specification, the actual HMC performance can be estimated accurately for each link configuration. Various simulations are conducted in order to deduce the correlation between the HMC performance and the link configuration with regard to memory utilization. It is confirmed that there is a strong correlation between the achievable maximum performance of HMC and the link configuration in terms of both bandwidth and latency. Therefore, it is possible to find the best link configuration when the required HMC performance is known in advance, and finding the best configuration leads to significant power saving while the performance requirement is satisfied.
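A back-of-the-envelope C calculation of a single link's throughput under the packet format in the HMC specification (16-byte FLITs, with one FLIT of header-plus-tail overhead per packet). It ignores flow control and retransmission, so it gives an upper bound rather than the simulated figures reported in the paper.

    #include <stdio.h>

    int main(void)
    {
        double lanes     = 16.0;   /* full-width link; 8 for half-width    */
        double lane_gbps = 15.0;   /* 10, 12.5 or 15 Gb/s, per direction   */
        double payload   = 64.0;   /* data bytes carried per read response */

        double raw_GBps   = lanes * lane_gbps / 8.0;   /* per direction */
        double flits      = 1.0 + payload / 16.0;      /* header/tail FLIT + data FLITs */
        double efficiency = payload / (flits * 16.0);

        printf("raw %.1f GB/s, packet efficiency %.0f%%, effective %.1f GB/s\n",
               raw_GBps, efficiency * 100.0, raw_GBps * efficiency);
        return 0;
    }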