• Title/Summary/Keyword: data memory

Search Results: 3,343

Level Shifts and Long-term Memory in Stock Distribution Markets (주식유통시장의 층위이동과 장기기억과정)

  • Chung, Jin-Taek
    • Journal of Distribution Science / v.14 no.1 / pp.93-102 / 2016
  • Purpose - This paper studies the static and dynamic aspects of long-term memory properties and aims to increase the explanatory power of long-term memory analysis by examining the long-term memory attributes of the Korea Composite Stock Price Index (KOSPI). A modified GPH statistic is derived for Korea's stock market and used to investigate whether it follows a long-term memory process. Research design, data, and methodology - Level shifts were analyzed empirically by applying the GPH method, modified to account for the daily log returns of the KOSPI. To test whether the stock market's behavior is governed by a long-term memory process, the data used are the daily KOSPI index values and their log returns. Long-term memory estimators were derived using a semiparametric method. Chapter 2 reviews previous research; Chapter 3 describes long-term memory processes and estimation methods, derives the modified GPH statistic, and discusses the Whittle statistic; Chapter 4 estimates the long-term memory parameters for the KOSPI; and Chapter 6 presents conclusions and implications. Results - If a price series is generated by a non-stationary process, the series may appear to exhibit long-term memory. However, according to the results of the GPH test applied to the price series, the series does not follow a long-term memory or fractional differencing process. When a time series contains level shifts, the usual tests for a long-term memory process carry considerable bias, and a structural change exists in the stock distribution market; this structural change manifests as a level shift. If level shifts are not accounted for, they appear in the stock secondary market as bias in the test statistic of a process without long-term memory and can falsely indicate long-term memory. Conclusions - The changes in long-term memory characteristics associated with level shifts suggest two things. First, if an external shock persists over a long period, long-term memory processes revert gradually to the mean; investors should therefore consider the characteristics of long-term memory when making investment decisions, since ignoring them increases uncertainty and potential risk. Second, these characteristics must be examined separately for each time series; research on price-earnings ratios and investment risk should incorporate long-term memory characteristics to gain predictive power.
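
The GPH estimator at the heart of this study can be summarized in a few lines. The following is a minimal sketch of the classic log-periodogram regression of Geweke and Porter-Hudak; the bandwidth choice m = n^0.5 and the synthetic input are illustrative assumptions, not the paper's modified statistic.

```python
import numpy as np

def gph_d(x, power=0.5):
    """Estimate the long-memory parameter d by log-periodogram regression."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(n ** power)                          # number of low frequencies used
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n  # Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)
    regressor = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]                               # slope estimates d

# White noise has no long memory, so the estimate should be near 0;
# d > 0 would suggest a long-term memory process.
print(gph_d(np.random.default_rng(0).standard_normal(4000)))
```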

Accelerating Memory Access with Address Phase Skipping in LPDDR2-NVM

  • Park, Jaehyun;Shin, Donghwa;Chang, Naehyuck;Lee, Hyung Gyu
    • JSTS:Journal of Semiconductor Technology and Science / v.14 no.6 / pp.741-749 / 2014
  • Low power double data rate 2 non-volatile memory (LPDDR2-NVM) has been adopted as the standard interface for connecting non-volatile memory devices, such as phase-change memory (PCM), directly to the main memory bus. However, most of the previous literature overlooks this standard interface. In this paper, we propose address phase skipping, which reforms the way of interfacing with LPDDR2-NVM. To verify its effectiveness and functionality, we also develop a system-level prototype that includes our customized LPDDR2-NVM controller and commercial PCM devices. Extensive simulations and measurements demonstrate up to a 3.6% memory access time reduction with commercial PCM devices and a 31.7% reduction with optimistic parameters from industrial PCM research prototypes.
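
The abstract does not detail the interface change, but the general idea of skipping redundant address phases can be illustrated as follows. This is a hedged sketch under the assumption that the controller may omit the row-address phase when consecutive requests target the same row; the phase names and address bit split are hypothetical.

```python
class AddressPhaseSkippingController:
    """Toy model: emit bus phases for each access, skipping the row
    address phase when the row matches the previously latched one."""

    def __init__(self, col_bits=10):
        self.col_bits = col_bits
        self.latched_row = None

    def access(self, addr):
        row, col = addr >> self.col_bits, addr & ((1 << self.col_bits) - 1)
        phases = []
        if row != self.latched_row:              # full address phase needed
            phases.append(("ROW_ADDR", row))
            self.latched_row = row
        phases.append(("COL_ADDR", col))         # column phase always sent
        phases.append(("DATA", addr))
        return phases

ctrl = AddressPhaseSkippingController()
print(len(ctrl.access(0x1234)), len(ctrl.access(0x1238)))  # 3 phases, then 2
```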

Comparison of Traditional Workloads and Deep Learning Workloads in Memory Read and Write Operations

  • Jeongha Lee;Hyokyung Bahn
    • International journal of advanced smart convergence / v.12 no.4 / pp.164-170 / 2023
  • With recent advances in AI (artificial intelligence) and HPC (high-performance computing) technologies, deep learning has proliferated across various domains of the 4th industrial revolution. As the workload volume of deep learning grows, analyzing its memory reference characteristics becomes important. In this article, we analyze the memory reference traces of deep learning workloads in comparison with traditional workloads, focusing especially on read and write operations. Based on our analysis, we observe some unique characteristics of deep learning memory references that differ markedly from traditional workloads. First, comparing instruction and data references, instruction references account for only a small portion of references in deep learning workloads. Second, comparing reads and writes, write references account for the majority of memory references, which also differs from traditional workloads. Third, although write references are dominant, they exhibit low reference skewness compared to traditional workloads; specifically, the skew factor of write references is small. We expect that the analysis performed in this article will help in efficiently designing memory management systems for deep learning workloads.
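
A simple way to see the low write skewness the authors report is to measure what share of writes the hottest pages absorb. Below is a minimal sketch; the trace format, the 4 KiB page granularity, and the top-10% definition of the skew measure are assumptions for illustration, not the paper's exact metric.

```python
from collections import Counter

def write_skew(trace, top_frac=0.10, page_shift=12):
    """Fraction of write references that hit the hottest top_frac of pages."""
    counts = Counter(addr >> page_shift for op, addr in trace if op == "W")
    ranked = sorted(counts.values(), reverse=True)
    k = max(1, int(len(ranked) * top_frac))
    return sum(ranked[:k]) / sum(ranked)

trace = [("W", 0x1000), ("W", 0x1010), ("W", 0x5000), ("R", 0x9000)]
print(f"{write_skew(trace):.0%} of writes hit the top 10% of pages")
```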

Formal Analysis of Distributed Shared Memory Algorithms

  • Muhammad Atif;Muhammad Adnan Hashmi;Mudassar Naseer;Ahmad Salman Khan
    • International Journal of Computer Science & Network Security / v.24 no.4 / pp.192-196 / 2024
  • The memory coherence problem occurs when mapping shared virtual memory onto a loosely coupled multiprocessor setup. Memory is considered coherent if a read operation returns the data written by the most recent write operation. The problem has been addressed in the literature with different algorithms, and the big question is the correctness of such distributed algorithms. Formal verification is the umbrella term for a group of techniques that apply analyses grounded in mathematical transformations to establish the correctness of hardware or software behavior, in contrast to dynamic verification techniques. This paper uses the UPPAAL model checker to model the dynamic distributed algorithm for shared virtual memory given by K. Li and P. Hudak. We analyze the mechanism that keeps memory coherent across every read and write operation under this dynamic distributed algorithm. Our results show that the dynamic distributed algorithm for shared virtual memory only partially fulfils its functional requirements.
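
UPPAAL models are timed automata rather than program code, but the coherence property being checked can be stated compactly. The sketch below (in Python, as a stand-in for the UPPAAL model and query) asserts the property from the abstract, that every read returns the value of the most recent write; the two-node write-invalidate setup is a hypothetical reduction of Li and Hudak's algorithm.

```python
import random

class TinySVM:
    """Toy write-invalidate model of one shared page across two nodes."""
    def __init__(self):
        self.spec = 0                        # last value written (the oracle)
        self.copies = {0: 0, 1: 0}           # per-node cached copies
        self.valid = {0: True, 1: True}

    def write(self, node, value):
        self.spec = value
        self.copies[node] = value
        for other in self.copies:
            if other != node:
                self.valid[other] = False    # invalidate remote copies

    def read(self, node):
        if not self.valid[node]:             # page fault: refetch from owner
            self.copies[node] = self.spec
            self.valid[node] = True
        assert self.copies[node] == self.spec, "coherence violated"
        return self.copies[node]

svm, rng = TinySVM(), random.Random(1)
for step in range(1000):                     # random exploration of the model
    node = rng.randrange(2)
    svm.write(node, step) if rng.random() < 0.5 else svm.read(node)
```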

A lightweight technique for hot data identification considering the continuity of a Nand flash memory system (낸드 플래시 메모리 시스템 기반의 지속성을 고려한 핫 데이터 식별 경량 기법)

  • Lee, Seungwoo
    • Journal of Internet of Things and Convergence / v.8 no.5 / pp.77-83 / 2022
  • NAND flash memory structurally requires an erase-before-write operation. This overhead can be mitigated by identifying pages whose data are updated frequently (hot data pages) and storing them in separate blocks. The MHF (Multi Hash Function Framework) technique records the frequency of data update requests in system memory and judges an update request to be hot data when the recorded value exceeds a given threshold. However, simply counting the frequency of update requests is of limited accuracy for identifying hot data. Conversely, techniques that judge the persistence of update requests record each request sequentially over time intervals before classifying data as hot; such persistence-based methods are complicated to implement and operate, and their judgments are inaccurate when frequency is not also considered. This paper proposes a lightweight hot data identification technique that considers both the frequency and the persistence of data update requests.
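
For context, the multi-hash counting that MHF builds on can be sketched in a few lines; persistence is approximated here with a periodic aging (decay) step. The hash count, bucket count, threshold, and right-shift decay are assumptions for illustration, not the paper's exact parameters.

```python
import hashlib

K, BUCKETS, THRESHOLD = 3, 1024, 4
counters = [0] * BUCKETS                       # saturating 4-bit counters

def _slots(lba):
    """Map a logical block address to K counter slots via hashing."""
    h = hashlib.sha256(str(lba).encode()).digest()
    return [int.from_bytes(h[4 * i:4 * i + 4], "big") % BUCKETS for i in range(K)]

def record_update(lba):
    for s in _slots(lba):
        counters[s] = min(counters[s] + 1, 15)

def is_hot(lba):                               # hot only if every hashed counter is high
    return min(counters[s] for s in _slots(lba)) >= THRESHOLD

def decay():                                   # run periodically: halves all counts,
    for i in range(BUCKETS):                   # so only persistently updated pages stay hot
        counters[i] >>= 1
```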

Garbage Collection Technique for Reduction of Migration Overhead and Lifetime Prolongment of NAND Flash Memory (낸드 플래시 메모리의 이주 오버헤드 감소 및 수명연장을 위한 가비지 컬렉션 기법)

  • Hwang, Sang-Ho;Kwak, Jong Wook
    • IEMEK Journal of Embedded Systems and Applications / v.11 no.2 / pp.125-134 / 2016
  • NAND flash memory has unique characteristics, such as out-of-place updates and a limited lifetime, compared with traditional storage systems. Under the out-of-place update scheme, a number of invalid (dead) pages are generated, and garbage collection is needed to reclaim them. Because garbage collection incurs not only erase operations but also copy operations that move valid (live) pages to other blocks, many garbage collection techniques have been proposed to reduce this overhead and to increase the lifetime of NAND flash systems. These techniques sometimes select victim blocks containing cold data for wear leveling, but most of them overlook the cost of selecting such victim blocks. In this paper, we propose a garbage collection technique named CAPi (Cost Age with Proportion of invalid pages). By accounting for the additional overhead of selecting victim blocks that contain cold data, CAPi improves garbage collection response time and increases the lifetime of memory systems. The proposed scheme also improves the efficiency of garbage collection by separating cold data from hot data among valid pages. In our experimental evaluation, CAPi yielded up to a 73% improvement in lifetime compared with existing garbage collection techniques.
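
The abstract does not give CAPi's exact score, so the sketch below illustrates the family it belongs to: a cost-benefit victim selector extended with the proportion of invalid pages. The weighting and field names are assumptions.

```python
import time

PAGES_PER_BLOCK = 64

def victim_score(block, now):
    u = block["valid"] / PAGES_PER_BLOCK       # utilization: copy cost of live pages
    age = now - block["last_modified"]         # older blocks are better victims
    invalid_ratio = block["invalid"] / PAGES_PER_BLOCK
    return (1.0 - u) / (1.0 + u) * age * invalid_ratio

def pick_victim(blocks):
    now = time.time()
    return max(blocks, key=lambda b: victim_score(b, now))

blocks = [{"valid": 10, "invalid": 54, "last_modified": 0},
          {"valid": 60, "invalid": 4, "last_modified": 50}]
print(pick_victim(blocks))                     # mostly-invalid, old block wins
```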

Data Scrambling Scheme that Controls Code Density with Data Occurrence Frequency (데이터 출현 빈도를 이용하여 코드 밀도를 조절하는 데이터 스크램블링 기법)

  • Hyun, Choulseung;Jeong, Gwanil;You, Soowon;Lee, Donghee
    • KIPS Transactions on Computer and Communication Systems / v.10 no.9 / pp.235-242 / 2021
  • Most data scrambling schemes generate purely random codes. Unlike these schemes, we propose a variable density scrambling scheme (VDSC) that differentiates the densities of the generated codes. First, we describe conditions and methods for translating plain codes into cipher codes with different densities. We then apply the VDSC to flash memory so that preferred cell states occur more often than others. Specifically, to restrain the error rate, the VDSC controls code densities so as to increase the ratio of the center state among all possible cell states in flash memory. Scrambling experiments on data from Windows and Linux systems show that the VDSC increases the ratio of cells in near-center states in flash memory.
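
One simple way to realize density-controlled scrambling is to keep a small set of fixed keys and, per codeword, pick the key whose output density lies closest to the target, storing the key index as metadata. This is an assumed illustration of the VDSC idea, not the paper's exact construction.

```python
import os

KEYS = [os.urandom(16) for _ in range(8)]      # fixed key set for 16-byte codewords

def density(buf):
    """Fraction of 1-bits in a byte string."""
    return sum(bin(b).count("1") for b in buf) / (len(buf) * 8)

def scramble(plain, target=0.5):
    """Return (key index, cipher) with bit density nearest the target."""
    outs = [bytes(p ^ k for p, k in zip(plain, key)) for key in KEYS]
    best = min(range(len(outs)), key=lambda i: abs(density(outs[i]) - target))
    return best, outs[best]

def descramble(key_idx, cipher):
    return bytes(c ^ k for c, k in zip(cipher, KEYS[key_idx]))

idx, cipher = scramble(b"hello world 1234")
assert descramble(idx, cipher) == b"hello world 1234"
```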

The design of a 32-bit Microprocessor for a Sequence Control using an Application Specific Integrated Circuit (ASIC) (ICEIC'04)

  • Oh Yang
    • Proceedings of the IEEK Conference / 2004.08c / pp.486-490 / 2004
  • Programmable logic controllers (PLCs) are widely used in manufacturing systems and process control. This paper presents the design of a 32-bit microprocessor for sequence control using an Application Specific Integrated Circuit (ASIC). The 32-bit microprocessor was designed in VHDL with a top-down method; the program memory was separated from the data memory for high-speed execution of the 274 specified sequence instructions, making it possible to execute sequence instructions concurrently with the instruction fetch cycle. To reduce the instruction decoding time and the data memory interface time, the instruction code size was fixed at 32 bits, and real-time debugging features such as single-step and breakpoint execution were implemented. Pulse instructions, a step controller, master controllers, BIN- and BCD-type arithmetic instructions, and barrel shift instructions, all frequently used in PLC systems, were implemented. The designed microprocessor was synthesized on SEIKO EPSON's S1L50000 series, which provides 70,000 gates in 0.65 um technology. Finally, a benchmark was performed to show that the designed 32-bit microprocessor outperforms the Q4A PLC from Mitsubishi Corporation.
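
The performance argument, that a separate program memory lets the next instruction fetch overlap the current instruction's data access, can be visualized with a toy two-stage pipeline. The cycle model below is a hypothetical illustration, not the paper's microarchitecture.

```python
def run(program, data_mem):
    """Toy Harvard pipeline: the fetch and execute stages use separate
    memories, so both proceed in the same cycle."""
    fetched, cycles, pc = None, 0, 0
    while pc < len(program) or fetched is not None:
        cycles += 1
        executing = fetched                        # execute previously fetched op
        fetched = program[pc] if pc < len(program) else None  # fetch next op
        pc += 1
        if executing is not None:
            op, addr = executing
            if op == "LOAD":                       # data memory is accessed in the
                _ = data_mem[addr]                 # same cycle as the fetch above
    return cycles

print(run([("LOAD", 0), ("LOAD", 1)], [10, 20]))   # 3 cycles, not 4
```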


The Conceptual Design of Mass Memory Unit for High Speed Data Processing in the STSAT-3 (고속 데이터 처리를 위한 과학기술위성 3호 대용량 메모리 유닛의 개념 설계)

  • Seo, In-Ho;Oh, Dae-Soo;Myung, Noh-Hoon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.38 no.4 / pp.389-394 / 2010
  • This paper describes the conceptual design of a mass memory unit for high-speed data processing and mass memory management in STSAT-3, compared with that of STSAT-2. To satisfy these requirements, an FPGA directly controls data reception from two payloads at speeds of up to 100 Mbps and manages 32 Gb of mass memory. We used an SRAM-based FPGA from XILINX, which offers fast operating speed and a large number of logic cells; since SRAM-based FPGAs are vulnerable to radiation-induced bit flips, Triple Modular Redundancy (TMR) and configuration memory scrubbing techniques will also be used to protect the FPGA from Single Event Upsets (SEUs) in space.
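
Of the two SEU countermeasures mentioned, TMR is easy to show in miniature: triplicate the logic and take a bitwise 2-of-3 majority so a single upset copy is masked. The sketch below is a generic illustration, not the satellite's FPGA implementation (where triplication is applied to the netlist and scrubbing rewrites configuration frames).

```python
def vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: any single corrupted input is outvoted."""
    return (a & b) | (a & c) | (b & c)

result = 0b1011_0010                  # value each redundant copy should produce
upset = result ^ 0b0000_1000          # one copy suffers a single-event upset
assert vote(result, result, upset) == result
```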