• Title/Summary/Keyword: in-memory computing

Search results: 766

An Implementation of Efficient M-tree based Indexing on Flash-Memory Storage System (플래시 메모리 저장장치에서 효율적인 M-트리 기반의 인덱싱 구현)

  • Yu, Jeong-Soo;Nang, Jong-Ho
    • Journal of KIISE: Computing Practices and Letters / v.16 no.1 / pp.70-74 / 2010
  • As the storage capacity of flash memories has increased, portable devices have begun to store massive amounts of multimedia data on flash memory. Therefore, an effective data management scheme based on an indexing structure is needed. Among many indexing schemes, the M-tree is well known for its suitability for multimedia data in high-dimensional metric space. Since flash memories restrict write operations, indexing schemes that issue frequent writes face a performance limitation. In this paper, a new node split method with reduced write operations for the M-tree indexing scheme on flash memory is proposed. According to the experiments, the proposed method reduces write operations to about 7% of the original method. The proposed method can effectively construct an indexing structure for multimedia data on flash memories.
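
The abstract does not detail the split policy itself, so the following is only a minimal C sketch of the general idea behind write-reducing node splits on flash: the split is computed entirely in RAM and each resulting node is flushed with a single page write, instead of rewriting a node page after every entry move. The types and the `flash_write_page` stub are hypothetical, not taken from the paper.

```c
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 8
#define DIM 4

typedef struct {
    float key[DIM];   /* feature vector of a multimedia object */
    int   object_id;
} Entry;

typedef struct {
    int   page_no;    /* flash page that backs this node */
    int   count;
    Entry entries[MAX_ENTRIES];
} Node;

/* Hypothetical flash driver call: one page write per call. */
static int write_count = 0;
static void flash_write_page(int page_no, const Node *n)
{
    (void)n;
    write_count++;
    printf("write page %d (total writes: %d)\n", page_no, write_count);
}

/* Split an overflowing node in RAM first, then flush each half exactly once. */
static void split_node(Node *full, Node *left, Node *right, int new_page)
{
    int half = full->count / 2;

    memset(left, 0, sizeof(*left));
    memset(right, 0, sizeof(*right));
    left->page_no  = full->page_no;   /* reuse the original page slot */
    right->page_no = new_page;

    /* Partition in memory; a real M-tree would pick routing objects and
       assign entries by distance, which is omitted here. */
    memcpy(left->entries,  full->entries,        half * sizeof(Entry));
    memcpy(right->entries, full->entries + half, (full->count - half) * sizeof(Entry));
    left->count  = half;
    right->count = full->count - half;

    /* Exactly two page writes for the whole split. */
    flash_write_page(left->page_no, left);
    flash_write_page(right->page_no, right);
}

int main(void)
{
    Node full = { .page_no = 10, .count = MAX_ENTRIES };
    Node left, right;
    split_node(&full, &left, &right, 11);
    return 0;
}
```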

Garbage Collection Method for NAND Flash Memory based on Analysis of Page Ratio (페이지 비율 분석 기반의 NAND 플래시 메모리를 위한 가비지 컬렉션 기법)

  • Lee, Seung-Hwan;Ok, Dong-Seok;Yoon, Chang-Bae;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE: Computing Practices and Letters / v.15 no.9 / pp.617-625 / 2009
  • NAND flash memory is widely used in embedded systems because of many attractive features, such as small size, light weight, low power consumption, and fast access speed. However, it requires garbage collection, which includes erase operations. The erase operation is very slow, and the number of erase operations allowed for each block is limited. The proposed garbage collection method focuses on minimizing the total number of erase operations, the deviation in erase counts among blocks, and the garbage collection time. NAND flash memory consists of three types of pages: valid, invalid, and free pages. To achieve the above goals, we use the page ratio to decide when to perform garbage collection and to select the victim block. Additionally, we implement an allocation method and a group management method. Simulation results show that the proposed policy performs better than the Greedy or CAT policies, with up to an 82% reduction in the deviation of erase counts and a 75% reduction in garbage collection time.
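
The exact ratio formulas and thresholds used in the paper are not given in the abstract; the C sketch below only illustrates the general page-ratio mechanism it describes: garbage collection is triggered when the overall free-page ratio falls below a threshold, and the victim is the block with the highest invalid-page ratio. All constants and structure names are illustrative assumptions.

```c
#include <stdio.h>

#define NUM_BLOCKS       8
#define PAGES_PER_BLOCK 64

typedef struct {
    int valid;    /* pages holding live data                     */
    int invalid;  /* pages made stale by out-of-place updates    */
    int free;     /* never-written pages                         */
    int erases;   /* wear counter                                */
} Block;

/* Trigger GC when the overall free-page ratio drops below `threshold`. */
static int gc_needed(const Block *blk, int n, double threshold)
{
    int free_pages = 0;
    for (int i = 0; i < n; i++)
        free_pages += blk[i].free;
    return (double)free_pages / (double)(n * PAGES_PER_BLOCK) < threshold;
}

/* Victim = block with the highest invalid-page ratio: the most space is
   reclaimed while the fewest valid pages must be copied. A wear-aware
   policy could additionally weight the erase counter. */
static int select_victim(const Block *blk, int n)
{
    int best = -1;
    double best_ratio = -1.0;
    for (int i = 0; i < n; i++) {
        double ratio = (double)blk[i].invalid / (double)PAGES_PER_BLOCK;
        if (ratio > best_ratio) {
            best_ratio = ratio;
            best = i;
        }
    }
    return best;
}

int main(void)
{
    Block blocks[NUM_BLOCKS] = {
        { 40, 20, 4, 3 }, { 10, 50, 4, 5 }, { 60,  2, 2, 1 }, { 30, 30, 4, 2 },
        { 20, 40, 4, 4 }, { 50, 10, 4, 3 }, {  5, 55, 4, 6 }, { 25, 35, 4, 2 },
    };
    if (gc_needed(blocks, NUM_BLOCKS, 0.2))
        printf("victim block: %d\n", select_victim(blocks, NUM_BLOCKS));
    return 0;
}
```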

Performance Comparison of Virtualization Domain in User Level Virtualization (사용자 레벨 가상화에서 가상화 영역 성능 비교)

  • Jeong, Chan-Joo;Kang, Tae-Geun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.8 no.11 / pp.1741-1748 / 2013
  • In this paper, we propose a new virtualization technology that is more convenient and stable in a local computing environment, and identify the technical elements needed for client-based desktop virtualization among various virtualization technologies. Running the Process Explorer utility under user-level virtualization and VMware to compare the private bytes of each process, we measured memory usage of 30.1 MB for VMware and 16.6 MB for user-level virtualization. We found no significant difference in CPU utilization between an application program executed in the local computing environment and one executed in a user domain under user-level virtualization. These results show that the proposed virtualization technology can minimize performance degradation in the local computing environment.

A memory protection method for application programs on the Android operating system (안드로이드에서 어플리케이션의 메모리 보호를 위한 연구)

  • Kim, Dong-ryul;Moon, Jong-sub
    • Journal of Internet Computing and Services / v.17 no.6 / pp.93-101 / 2016
  • As Android smartphones become more popular, applications that handle users' personal data, such as IDs and passwords, and applications that handle data directly related to companies' income, such as in-game items, are also increasing. Despite the need for such information to be protected, it can be modified by malicious users or leaked by attackers on Android. This happens because the debugging functions of Linux, the base of Android, are abused: an application that uses the debugging functions can access the virtual memory of other applications. To prevent such abuse, access control should be reinforced; however, these functions have been incorporated into the Android OS from its Linux base in unmodified form. In this paper, based on an analysis of both the existing memory access functions and the Android environment, we propose a function that verifies the thread group ID and then blocks illegal use to reinforce access control. We conducted experiments to verify that the proposed method effectively reinforces access control: we built a simple application and attempted to modify its data using well-established memory editing applications. Under the existing Android environment, the memory editor applications could modify our application's data, but after incorporating our changes into the same Android operating system, they could not.
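
The paper's actual kernel modification is not reproduced in the abstract; the self-contained C sketch below only illustrates the access rule it describes: a debugging-style access to another process's memory is allowed only when the requesting thread belongs to the same thread group as the target (or is explicitly privileged), and is rejected otherwise. The `Task` structure and `may_access_memory` name are hypothetical stand-ins, not kernel APIs.

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/types.h>

/* Minimal stand-ins for the caller and target task identities. */
typedef struct {
    pid_t tgid;        /* thread group ID (PID of the group leader) */
    bool  privileged;  /* e.g. holds a debugging capability         */
} Task;

/* Access-control rule sketched from the abstract: memory of another
 * application may be read or written only by threads of the same thread
 * group, unless the caller is privileged (e.g. a system debugger). */
static bool may_access_memory(const Task *caller, const Task *target)
{
    if (caller->privileged)
        return true;
    return caller->tgid == target->tgid;
}

int main(void)
{
    Task victim_app          = { .tgid = 1234, .privileged = false };
    Task memory_editor       = { .tgid = 5678, .privileged = false };
    Task same_process_thread = { .tgid = 1234, .privileged = false };

    printf("memory editor -> app : %s\n",
           may_access_memory(&memory_editor, &victim_app) ? "allowed" : "denied");
    printf("own thread    -> app : %s\n",
           may_access_memory(&same_process_thread, &victim_app) ? "allowed" : "denied");
    return 0;
}
```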

Design Considerations on Large-scale Parallel Finite Element Code in Shared Memory Architecture with Multi-Core CPU (멀티코어 CPU를 갖는 공유 메모리 구조의 대규모 병렬 유한요소 코드에 대한 설계 고려 사항)

  • Cho, Jeong-Rae;Cho, Keunhee
    • Journal of the Computational Structural Engineering Institute of Korea / v.30 no.2 / pp.127-135 / 2017
  • The computing environment has changed rapidly, enabling large-scale finite element models to be analyzed at the PC or workstation level thanks to multi-core CPUs, optimized math kernel libraries implementing BLAS and LAPACK, and the popularization of direct sparse solvers. In this paper, design considerations for a parallel finite element code on a shared-memory multi-core CPU system are proposed: (1) the use of optimized numerical libraries, (2) the use of the latest direct sparse solvers, (3) parallelism using OpenMP for computing element stiffness matrices, and (4) assembly techniques using triplets, a sparse matrix storage format. In addition, the parallelization effect on the time-consuming tasks is examined through a large-scale finite element model.
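
Items (3) and (4) can be sketched briefly. The C fragment below is only an illustration under assumed data structures (a dummy `element_stiffness` routine and a fixed two-DOF element, not the paper's): each thread computes element stiffness matrices independently and writes them into a preallocated triplet (row, column, value) array at a fixed per-element offset, so assembly needs no locking; the triplet list is later summed into a sparse matrix by the direct solver.

```c
#include <stdio.h>
#include <stdlib.h>

#define NUM_ELEMS 1000
#define EDOF 2              /* DOFs per element, assumed for brevity */

typedef struct { int row, col; double val; } Triplet;

/* Dummy element stiffness: a 2x2 bar-element-like matrix (illustrative only). */
static void element_stiffness(int e, double ke[EDOF][EDOF], int dofs[EDOF])
{
    double k = 1.0 + (double)(e % 7);   /* stand-in for material/geometry data */
    ke[0][0] =  k; ke[0][1] = -k;
    ke[1][0] = -k; ke[1][1] =  k;
    dofs[0] = e;                        /* element e connects DOFs e and e+1 */
    dofs[1] = e + 1;
}

int main(void)
{
    /* Each element owns EDOF*EDOF triplet slots at a known offset,
       so threads never write to the same slot: no critical section. */
    Triplet *trip = malloc(sizeof(Triplet) * NUM_ELEMS * EDOF * EDOF);

    #pragma omp parallel for schedule(static)
    for (int e = 0; e < NUM_ELEMS; e++) {
        double ke[EDOF][EDOF];
        int dofs[EDOF];
        element_stiffness(e, ke, dofs);

        size_t base = (size_t)e * EDOF * EDOF;
        for (int i = 0; i < EDOF; i++)
            for (int j = 0; j < EDOF; j++)
                trip[base + i * EDOF + j] =
                    (Triplet){ dofs[i], dofs[j], ke[i][j] };
    }

    /* The triplet list is then handed to a direct sparse solver, which
       sums duplicate (row, col) entries while converting to CSC/CSR. */
    printf("generated %d triplets\n", NUM_ELEMS * EDOF * EDOF);
    free(trip);
    return 0;
}
```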

A Neighbor Prefetching Scheme for a Hybrid Storage System (SSD 캐시를 위한 이웃 프리페칭 기법)

  • Baek, Sung Hoon
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.5 / pp.40-52 / 2018
  • Solid-state drive (SSD) cache technologies used as a second-tier cache between main memory and the hard disk drive (HDD) have been widely studied. The SSD cache requires a new prefetching scheme as well as cache replacement algorithms. This paper presents a prefetching scheme for a storage-class cache using an SSD. The scheme is designed for the storage-class cache and is based on long-term scheduling, in contrast to the short-term prefetching used for main memory. Traditional prefetching algorithms consider only reads, but the presented scheme considers both reads and writes. An experimental evaluation shows hit rates of 2.3% to 17.8% with a 64 GB SSD and a 4 GiB prefetching size, using an I/O trace of 14 days. The proposed prefetching scheme showed a significant improvement in cache hit rate and can easily be implemented in storage-class cache systems.
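
The abstract does not give the exact neighbor window or the long-term scheduling policy, so the C sketch below only illustrates the basic "neighbor" idea applied to both reads and writes: when a chunk misses in the SSD cache, the chunks adjacent to it on the HDD are queued as prefetch candidates to be staged into the SSD later. The window size and queue handling are assumptions for illustration.

```c
#include <stdio.h>

#define MAX_QUEUE      128
#define NEIGHBOR_RANGE 2    /* assumed window: +/- 2 chunks around the missed chunk */

typedef long long chunk_t;

/* Simple FIFO of prefetch candidates; a real implementation would
 * deduplicate, rate-limit, and schedule these over a long horizon. */
static chunk_t queue[MAX_QUEUE];
static int q_len = 0;

static void enqueue_prefetch(chunk_t chunk)
{
    if (q_len < MAX_QUEUE)
        queue[q_len++] = chunk;
}

/* Called on every read or write that misses in the SSD cache. */
static void on_cache_miss(chunk_t chunk)
{
    for (int d = -NEIGHBOR_RANGE; d <= NEIGHBOR_RANGE; d++) {
        if (d == 0)
            continue;               /* the missed chunk itself is fetched on demand */
        if (chunk + d >= 0)
            enqueue_prefetch(chunk + d);
    }
}

int main(void)
{
    on_cache_miss(100);   /* e.g. a read miss  */
    on_cache_miss(205);   /* e.g. a write miss */

    for (int i = 0; i < q_len; i++)
        printf("prefetch chunk %lld into SSD cache\n", queue[i]);
    return 0;
}
```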

Design of Fast Operation Method In NAND Flash Memory File System (NAND 플래시 메모리 파일 시스템에 빠른 연산을 위한 설계)

  • Jin, Jong-Won;Lee, Tae-Hoon;Chung, Ki-Dong
    • Journal of KIISE: Computing Practices and Letters / v.14 no.1 / pp.91-95 / 2008
  • Flash memory is widely used in embedded systems because of its benefits such as non-volatility, shock resistance, and low power consumption. However, NAND flash memory suffers from out-of-place updates, limited erase cycles, and page-based read/write operations. To address these problems, log-structured file systems such as YAFFS have been proposed. However, YAFFS sequentially scans the array of all block information to allocate a free block for a write operation, and before the write operation it reads the same array again to find an invalid block to erase. These scans can reduce the performance of the file system. This paper suggests a fast operation method for the NAND flash file system that solves the above-mentioned problems. We implemented the proposed methods in YAFFS and measured the performance compared with the original technique.
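
The abstract only states that the sequential scans of the block-information array are avoided. One straightforward way to do this, shown below as a C sketch and not necessarily the paper's exact implementation, is to keep blocks on free and dirty lists so that block allocation and erase-candidate lookup become O(1) list operations instead of full-array scans.

```c
#include <stdio.h>

#define NUM_BLOCKS 16

typedef enum { BLOCK_FREE, BLOCK_IN_USE, BLOCK_DIRTY } BlockState;

typedef struct BlockInfo {
    int id;
    BlockState state;
    struct BlockInfo *next;   /* link within the free or dirty list */
} BlockInfo;

static BlockInfo blocks[NUM_BLOCKS];
static BlockInfo *free_list  = NULL;   /* candidates for allocation */
static BlockInfo *dirty_list = NULL;   /* candidates for erase      */

static void push(BlockInfo **list, BlockInfo *b) { b->next = *list; *list = b; }

static BlockInfo *pop(BlockInfo **list)
{
    BlockInfo *b = *list;
    if (b) *list = b->next;
    return b;
}

/* O(1) allocation: no scan over the whole block-information array. */
static BlockInfo *allocate_block(void)
{
    BlockInfo *b = pop(&free_list);
    if (b) b->state = BLOCK_IN_USE;
    return b;
}

/* When all pages of a block become invalid, move it to the dirty list. */
static void retire_block(BlockInfo *b) { b->state = BLOCK_DIRTY; push(&dirty_list, b); }

/* O(1) lookup of an erase candidate for garbage collection. */
static BlockInfo *next_erase_candidate(void) { return pop(&dirty_list); }

int main(void)
{
    for (int i = NUM_BLOCKS - 1; i >= 0; i--) {
        blocks[i].id = i;
        blocks[i].state = BLOCK_FREE;
        push(&free_list, &blocks[i]);
    }

    BlockInfo *b = allocate_block();
    printf("allocated block %d\n", b->id);
    retire_block(b);
    printf("erase candidate: block %d\n", next_erase_candidate()->id);
    return 0;
}
```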

Optimization of Color Format Conversion of WebCam Images Using the CUDA (CUDA를 이용한 웹캠 영상의 색상 형식 변환 최적화)

  • Kim, Jin-Woo;Jung, Yun-Hye;Park, Jin-Hong;Park, Yong-Jin;Han, Tack-Don
    • Journal of Korea Game Society / v.11 no.1 / pp.147-157 / 2011
  • Webcams do not perform memory alignment, in order to reduce the transmission time of image data. Memory-unaligned image data is unsuitable for processing on the GPU, so we convert it to a usable color format for optimization in high-speed image processing. In this paper, we propose a technique that accelerates the webcam's color format conversion by using NVIDIA CUDA. We propose optimizations of memory accesses and thread composition, and evaluate memory and computing performance to verify the performance of the proposed architecture and the degree of optimization on a low-performance GPU. With the proposed optimization technique, we show performance improvements of up to 68 percent.
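
The webcam's source format is not stated in the abstract. Assuming a common packed YUYV (YUY2) stream, the C sketch below shows the per-pixel-pair conversion to RGBA that such a CUDA kernel would parallelize, typically one macro-pixel per thread. It is a scalar CPU reference using the standard BT.601 approximation, not the paper's kernel; the memory-access and thread-composition optimizations are the paper's contribution and are not reproduced here.

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t clamp8(float v)
{
    if (v < 0.0f)   return 0;
    if (v > 255.0f) return 255;
    return (uint8_t)v;
}

/* Convert one packed YUYV macro-pixel (2 pixels, 4 bytes) to 2 RGBA pixels.
 * In a CUDA version, each thread would typically handle one such pair. */
static void yuyv_to_rgba_pair(const uint8_t yuyv[4], uint8_t rgba[8])
{
    float u = (float)yuyv[1] - 128.0f;
    float v = (float)yuyv[3] - 128.0f;

    for (int p = 0; p < 2; p++) {
        float y = (float)yuyv[p * 2];                              /* Y0 or Y1 */
        rgba[p * 4 + 0] = clamp8(y + 1.402f * v);                  /* R */
        rgba[p * 4 + 1] = clamp8(y - 0.344f * u - 0.714f * v);     /* G */
        rgba[p * 4 + 2] = clamp8(y + 1.772f * u);                  /* B */
        rgba[p * 4 + 3] = 255;                                     /* A */
    }
}

int main(void)
{
    const uint8_t pair[4] = { 120, 100, 200, 150 };  /* Y0, U, Y1, V */
    uint8_t out[8];
    yuyv_to_rgba_pair(pair, out);
    for (int p = 0; p < 2; p++)
        printf("pixel %d: R=%u G=%u B=%u A=%u\n", p,
               out[p * 4], out[p * 4 + 1], out[p * 4 + 2], out[p * 4 + 3]);
    return 0;
}
```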

The Trace Analysis of SaaS from a Client's Perspective (클라이언트관점의 SaaS 사용 흔적 분석)

  • Kang, Sung-Lim;Park, Jung-Heum;Lee, Sang-Jin
    • The KIPS Transactions: Part C / v.19C no.1 / pp.1-8 / 2012
  • Recently, due to the development of broadband, there has been a significant increase in the use of on-demand SaaS (Software as a Service), which takes advantage of this technology. Nevertheless, the academic and practical foundations of digital forensics have not yet been established for the cloud computing environment. In addition, data on user behavior is not likely to be stored on the local system; the relevant data may be stored across various remote servers. Therefore, investigators may encounter problems in performing digital forensics in a cloud computing environment. Since SaaS basically uses the web to connect to the Internet service, it is important to analyze history files, cookie files, Temporary Internet Files, physical memory, and other artifacts from the client's viewpoint. In this paper, we propose a method for analyzing the usage traces of SaaS, one of the most popular cloud computing services.

On reducing the computing time of EFDC hydrodynamic model (EFDC 해수유동모형의 계산시간 효율화)

  • Jung, Tae-Sung;Choi, Jong-Hwa
    • Journal of the Korean Society for Marine Environment & Energy / v.14 no.2 / pp.121-129 / 2011
  • The EFDC model has been simplified to enhance computing performance in hydrodynamic modeling. The water quality module and unnecessary conditional statements were removed from the subroutine list and memory allocation. The performance of the enhanced model (EFDC-E) was checked by applying the EFDC and EFDC-E models to simulate the tidal flow in the Mokpo coastal zone. Both two-dimensional and three-dimensional models were applied and compared. The three-dimensional models agreed better with the observed currents than the two-dimensional models. The simulation results of the EFDC-E model agreed well with both the simulation results of the EFDC model and the observed data. The computing speed of the EFDC-E model was about three times faster than that of the EFDC model when simulating hydrodynamic flow for a real time of three days, in both two-dimensional and three-dimensional modeling. The EFDC-E model can be widely used for hydrodynamic modeling because of its improved simulation speed.