• Title/Summary/Keyword: long-memory


A Survey on Neural Networks Using Memory Component (메모리 요소를 활용한 신경망 연구 동향)

  • Lee, Jihwan;Park, Jinuk;Kim, Jaehyung;Kim, Jaein;Roh, Hongchan;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.7 no.8 / pp.307-324 / 2018
  • Recently, recurrent neural networks have attracted attention for solving prediction problems on sequential data through structures that consider time dependency. However, as the number of time steps in sequential data increases, the vanishing-gradient problem occurs. Long short-term memory models have been proposed to solve this problem, but they are limited in how much data they can store and how long they can preserve it. Therefore, research on the memory-augmented neural network (MANN), a learning model combining recurrent neural networks with memory elements, has been actively conducted. In this paper, we describe the structure and characteristics of MANN models, which have emerged as a hot topic in the deep learning field, and present the latest techniques and future research directions that utilize MANN.
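
The core operation shared by MANN variants such as the NTM and DNC is content-based memory addressing: the controller emits a key, memory rows are weighted by similarity to that key, and a blended read vector is returned. A minimal sketch (names, sizes, and the sharpness value are illustrative, not from the paper):

```python
# Content-based read over an external memory matrix, as used by
# memory-augmented neural networks. Pure-Python illustration.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def read(memory, key, beta=10.0):
    """Blend memory rows, weighted by softmax(beta * cosine(row, key)).

    memory : list of equal-length float lists (the external memory)
    key    : query vector emitted by the controller
    beta   : sharpness of the attention distribution
    """
    scores = [beta * cosine(row, key) for row in memory]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]        # attention over rows
    width = len(memory[0])
    vec = [sum(w * row[i] for w, row in zip(weights, memory))
           for i in range(width)]
    return vec, weights

memory = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
vec, w = read(memory, [1.0, 0.0])   # attends mostly to the first row
```

Sharper `beta` makes the read more like a lookup; smaller `beta` blends rows more evenly.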

No Arbitrage Condition for Multi-Factor HJM Model under the Fractional Brownian Motion

  • Rhee, Joon-Hee;Kim, Yoon-Tae
    • Communications for Statistical Applications and Methods / v.16 no.4 / pp.639-645 / 2009
  • Fractional Brownian motion (fBm) exhibits long memory while remaining Gaussian. In particular, it is well known that interest rates show long memory and non-Markovian behavior. We present a no-arbitrage condition for the HJM model under multi-factor fBm, reflecting the long-range dependence in the interest rate model.
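
For reference, the long-memory property the abstract invokes follows from the standard covariance structure of fBm (textbook definitions, not notation from the paper):

```latex
A fractional Brownian motion $B_H$ with Hurst parameter $H \in (0,1)$ has
\[
\mathbb{E}\!\left[B_H(t)\,B_H(s)\right]
  = \tfrac{1}{2}\left(t^{2H} + s^{2H} - |t-s|^{2H}\right),
\]
and for $H > \tfrac{1}{2}$ the autocovariance of its increments decays as
\[
\rho(n) \sim H(2H-1)\,n^{2H-2}, \qquad \sum_{n} \rho(n) = \infty,
\]
i.e. the autocovariances are not summable, which is long-range dependence.
Taking $H = \tfrac{1}{2}$ recovers standard (Markovian) Brownian motion.
```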

Is it necessary to distinguish semantic memory from episodic memory? (의미기억과 일화기억의 구분은 필요한가)

  • 이정모;박희경
    • Korean Journal of Cognitive Science / v.11 no.3_4 / pp.33-43 / 2000
  • The distinction between the short-term store (STS) and the long-term store (LTS) has been made from the perspective of information processing. Memory system theorists have argued that memory could be conceived as multiple memory systems beyond the concept of a single LTS. Popular memory system models are Schacter & Tulving (1994)'s multiple memory systems and Squire (1987)'s taxonomy of long-term memory. These models agree that amnesic patients have an intact STS but an impaired LTS, and that they have preserved implicit memory. However, there is a debate about the nature of the long-term memory impairment. One model considers the amnesic deficit a selective episodic memory impairment, whereas the other sees it as an impairment of both episodic and semantic memory. At present, it remains unclear whether episodic memory should be distinguished from semantic memory in terms of retrieval operation. The distinction between declarative and nondeclarative memory would be an alternative way to reflect explicit and implicit memory. Research focused on the function of the frontal lobe might give clues to the debate about the nature of the LTS.


Empirical Study of the Long-Term Memory Effect of the KOSPI200 Earning rate volatility (KOSPI200 수익률 변동성의 장기기억과정탐색)

  • Choi, Sang-Kyu
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.12 / pp.7018-7024 / 2014
  • This study examined the squared returns and absolute returns of the KOSPI 200 with the GPH (Geweke and Porter-Hudak, 1983) estimator, which estimates the long-memory parameter d of a time series by linear regression. The GPH estimator depends on a bandwidth m, which was chosen by tracking the point estimate as a function of m and confirming the section over which it is stable. The results show that, with 0 < d < 0.5, the squared returns and absolute returns of the KOSPI 200 retain long-term memory.
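
The GPH estimator regresses the log-periodogram at low Fourier frequencies on log(4 sin²(λ/2)); the negated slope estimates d. A self-contained sketch (the series, bandwidth m = 64, and sample size are illustrative, not the paper's data):

```python
# GPH (Geweke & Porter-Hudak, 1983) log-periodogram estimator of the
# long-memory parameter d. Pure-Python illustration.
import cmath
import math
import random
from itertools import accumulate

def gph(x, m):
    """Estimate d from the first m Fourier frequencies of series x."""
    n = len(x)
    mean = sum(x) / n
    xs = [v - mean for v in x]
    logs_i, regs = [], []
    for j in range(1, m + 1):
        lam = 2.0 * math.pi * j / n
        dft = sum(v * cmath.exp(-1j * lam * t) for t, v in enumerate(xs))
        periodogram = abs(dft) ** 2 / (2.0 * math.pi * n)
        logs_i.append(math.log(periodogram))
        regs.append(math.log(4.0 * math.sin(lam / 2.0) ** 2))
    rbar = sum(regs) / m
    ybar = sum(logs_i) / m
    num = sum((r - rbar) * (y - ybar) for r, y in zip(regs, logs_i))
    den = sum((r - rbar) ** 2 for r in regs)
    return -num / den          # OLS slope is -d

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(512)]  # no long memory
walk = list(accumulate(noise))                         # d near 1
d_noise, d_walk = gph(noise, 64), gph(walk, 64)
```

White noise should give an estimate near 0, and an integrated series (random walk) an estimate near 1; the bandwidth trades bias against variance, which is why the paper scans m for a stable region.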

Emotion Classification based on EEG signals with LSTM deep learning method (어텐션 메커니즘 기반 Long-Short Term Memory Network를 이용한 EEG 신호 기반의 감정 분류 기법)

  • Kim, Youmin;Choi, Ahyoung
    • Journal of Korea Society of Industrial Information Systems / v.26 no.1 / pp.1-10 / 2021
  • This study proposed a long short-term memory network to consider changes in emotion over time, and applied an attention mechanism to give weight to the emotion states that appear at specific moments. We used 32-channel EEG data from the DEAP database. A 2-level classification (Low and High) experiment and a 3-level classification experiment (Low, Middle, and High) were performed on the Valence and Arousal emotion model. As a result, the accuracy of the 2-level classification experiment was 90.1% for Valence and 88.1% for Arousal. The accuracy of the 3-level classification was 83.5% for Valence and 82.5% for Arousal.
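
The attention step described here scores each time step's LSTM hidden state, softmax-normalizes the scores, and forms a weighted sum so that emotionally salient moments dominate. A sketch with illustrative stand-ins (the hidden states and scoring vector below are not real LSTM outputs on EEG):

```python
# Additive-style attention over a sequence of hidden states:
# score -> softmax -> weighted sum (context vector).
import math

def attend(hidden_states, score_vec):
    """Weight each hidden state by softmax of its dot-product score."""
    scores = [sum(h_i * s_i for h_i, s_i in zip(h, score_vec))
              for h in hidden_states]
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]             # attention weights
    dim = len(hidden_states[0])
    context = [sum(a * h[i] for a, h in zip(alphas, hidden_states))
               for i in range(dim)]
    return context, alphas

H = [[0.1, 0.2], [0.9, 0.1], [0.2, 0.8]]   # 3 time steps, 2-dim states
context, alphas = attend(H, [1.0, 1.0])
```

The context vector, rather than only the final hidden state, is then fed to the classifier, which is what lets specific moments in the EEG window carry more weight.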

Study of fall detection for the elderly based on long short-term memory(LSTM) (장단기 메모리 기반 노인 낙상감지에 대한 연구)

  • Jeong, Seung Su;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.249-251 / 2021
  • In this paper, we introduce a deep-learning system using TensorFlow for recognizing situations in which falls can occur while the elderly are moving or standing. Fall detection uses an LSTM (long short-term memory) trained with TensorFlow to determine whether a fall has occurred from data measured by a wearable acceleration sensor. Training is carried out for each of 7 behavioral patterns consisting of 4 types of activities of daily living (ADL) and 3 types of falls, using 3-axis acceleration sensor data. The test results were satisfactory except for the GDSVM (gravity differential SVM), and better results are expected if the data are mixed during training.
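
Threshold features like the GDSVM mentioned above are typically built on the acceleration sum-vector magnitude (SVM): a fall produces an impact spike well above the ~1 g of normal activity. A hedged baseline sketch (the threshold and sample values are illustrative, not the paper's):

```python
# Simple threshold fall-detection baseline on 3-axis acceleration data
# (in g). SVM = sqrt(ax^2 + ay^2 + az^2); a spike suggests an impact.
import math

def svm(ax, ay, az):
    """Sum vector magnitude of one 3-axis accelerometer sample."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples, threshold=2.5):
    """Flag a fall if any sample's SVM exceeds `threshold` (in g)."""
    return any(svm(*s) > threshold for s in samples)

walking = [(0.1, 0.2, 1.0), (0.0, 0.1, 1.1)]   # near 1 g: ordinary ADL
fall    = [(0.2, 0.1, 1.0), (2.5, 1.5, 2.0)]   # impact spike above 2.5 g
```

An LSTM replaces the fixed threshold with a learned temporal pattern, which is what lets it separate falls from fall-like ADL such as sitting down hard.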


Radar Quantitative Precipitation Estimation using Long Short-Term Memory Networks

  • Thi, Linh Dinh;Yoon, Seong-Sim;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.183-183 / 2020
  • Accurate quantitative precipitation estimation plays an important role in hydrological modelling and prediction. Instantaneous quantitative precipitation estimation (QPE) utilizing weather radar data has great applicability for operational hydrology in a catchment. Previously, a regression between reflectivity (Z) and rain intensity (R) was commonly used to obtain radar QPEs. A recent approach that can be applied to QPE in hydrology is the Long Short-Term Memory (LSTM) network. LSTM networks are a development of recurrent neural networks (RNNs) that overcomes the limited memory capacity of RNNs and allows learning of long-term input-output dependencies; the advantages of LSTM over the RNN technique have been demonstrated in previous work. In this study, LSTM networks are used to estimate quantitative precipitation from weather radar for an urban catchment in South Korea. Radar information and rain-gauge data are used to evaluate and verify the estimation. The results show that the LSTM approach is more accurate than the Z-R relationship method. This study demonstrates the high potential of LSTM and its applications in urban hydrology.
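
The conventional baseline the abstract compares against is the Z-R power law Z = a·R^b. A sketch using the classic Marshall-Palmer coefficients (a = 200, b = 1.6) for illustration; the study's own coefficients may differ:

```python
# Z-R relationship baseline for radar QPE: convert reflectivity in dBZ
# to rain intensity R (mm/h) via Z = a * R^b.
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain intensity (mm/h)."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity Z
    return (z / a) ** (1.0 / b)

# Z = 200 (about 23.0 dBZ) corresponds to R = 1 mm/h under these
# coefficients.
r = rain_rate_from_dbz(23.0103)
```

An LSTM-based QPE instead learns the radar-to-rainfall mapping from gauge data, including temporal structure that a fixed power law cannot capture.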


Study of Fall Detection System According to Number of Nodes of Hidden-Layer in Long Short-Term Memory Using 3-axis Acceleration Data (3축 가속도 데이터를 이용한 장단기 메모리의 노드수에 따른 낙상감지 시스템 연구)

  • Jeong, Seung Su;Kim, Nam Ho;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.516-518 / 2022
  • In this paper, we investigate how a fall detection system based on long short-term memory depends on the number of nodes in its hidden layer. Training uses the parameter theta (θ), the angle between the direction of gravity and the x, y, and z-axis data from a 3-axis acceleration sensor. The data are divided into training data and test data in a ratio of 8:2, with validation performed, and training is repeated while changing the number of nodes in the hidden layer to increase efficiency. With 128 nodes, the best performance is obtained: Accuracy = 99.82%, Specificity = 99.58%, and Sensitivity = 100%.
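
The input feature θ can be computed as the angle between the measured 3-axis acceleration vector and the gravity direction. A sketch; the gravity reference (0, 0, 1) is an assumption for a sensor whose z-axis aligns with gravity when upright, not a detail from the paper:

```python
# Angle between a 3-axis acceleration sample and the gravity direction,
# the theta feature used as LSTM input.
import math

def theta_deg(ax, ay, az, gravity=(0.0, 0.0, 1.0)):
    """Angle (degrees) between acceleration (ax, ay, az) and gravity."""
    gx, gy, gz = gravity
    dot = ax * gx + ay * gy + az * gz
    na = math.sqrt(ax * ax + ay * ay + az * az)
    ng = math.sqrt(gx * gx + gy * gy + gz * gz)
    cos_t = max(-1.0, min(1.0, dot / (na * ng)))   # clamp for acos
    return math.degrees(math.acos(cos_t))

upright = theta_deg(0.0, 0.0, 1.0)   # aligned with gravity -> 0 degrees
lying   = theta_deg(1.0, 0.0, 0.0)   # perpendicular -> 90 degrees
```

A sequence of such angles summarizes posture over time in a single scalar per sample, which keeps the LSTM input compact.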


Self-adaptive testing to determine sample size for flash memory solutions

  • Byun, Chul-Hoon;Jeon, Chang-Kyun;Lee, Taek;In, Hoh Peter
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.6 / pp.2139-2151 / 2014
  • Embedded system testing, especially long-term reliability testing, of flash memory solutions such as embedded multi-media card, secure digital card and solid-state drive involves strategic decision making related to test sample size to achieve high test coverage. The test sample size is the number of flash memory devices used in a test. Earlier, there were physical limitations on the testing period and the number of test devices that could be used. Hence, decisions regarding the sample size depended on the experience of human testers owing to the absence of well-defined standards. Moreover, a lack of understanding of the importance of the sample size resulted in field defects due to unexpected user scenarios. In worst cases, users finally detected these defects after several years. In this paper, we propose that a large number of potential field defects can be detected if an adequately large test sample size is used to target weak features during long-term reliability testing of flash memory solutions. In general, a larger test sample size yields better results. However, owing to the limited availability of physical resources, there is a limit on the test sample size that can be used. In this paper, we address this problem by proposing a self-adaptive reliability testing scheme to decide the sample size for effective long-term reliability testing.
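
One standard way to ground a sample-size decision of this kind (a generic sketch, not the paper's self-adaptive scheme): choose the smallest n such that a defect occurring with probability p per device is observed at least once with confidence C, i.e. 1 − (1 − p)^n ≥ C.

```python
# Minimum test sample size to observe a per-device defect of incidence
# p at least once with the given confidence. Generic reliability-test
# arithmetic, offered as an illustration only.
import math

def min_sample_size(p, confidence):
    """Smallest n satisfying 1 - (1 - p)^n >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

# A 1%-incidence defect needs roughly 300 devices for 95% detection
# confidence.
n = min_sample_size(0.01, 0.95)
```

The formula makes the paper's point concrete: rare field defects demand sample sizes far beyond what intuition suggests, which is why an adaptive scheme targeting weak features is attractive.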

Performance Analysis and Identifying Characteristics of Processing-in-Memory System with Polyhedral Benchmark Suite (프로세싱 인 메모리 시스템에서의 PolyBench 구동에 대한 동작 성능 및 특성 분석과 고찰)

  • Jeonggeun Kim
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.142-148 / 2023
  • In this paper, we identify performance issues in executing compute kernels from PolyBench, which includes the core computational units of various data-intensive workloads such as deep learning, on Processing-in-Memory (PIM) devices. Using our in-house simulator, we measured and compared various performance metrics of the workloads on traditional out-of-order and in-order processors against PIM-based systems. As a result, the PIM-based system improves performance compared to the other computing models thanks to the short-term data reuse characteristic of PolyBench's computational kernels. However, some kernels perform poorly in PIM-based systems without a multi-layer cache hierarchy because of their long-term data reuse characteristics. Our evaluation and analysis therefore suggest that further research should consider dynamic, workload-pattern-adaptive approaches to overcome the performance degradation from computational kernels with long-term data reuse characteristics and hidden data locality.
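
The short- vs. long-term data reuse distinction drawn above is commonly quantified with reuse distance: the number of distinct addresses touched between two accesses to the same address. Small distances favor cache-less PIM; large ones favor deep cache hierarchies. A sketch with an illustrative trace (not from the paper):

```python
# Reuse distance of each access in a memory trace: the count of
# distinct addresses between an access and the previous access to the
# same address (None for a first use). O(n^2) illustration.
def reuse_distances(trace):
    """Return per-access reuse distances for a list of addresses."""
    last_pos = {}
    out = []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            # distinct addresses strictly between the two accesses
            out.append(len(set(trace[last_pos[addr] + 1 : i])))
        else:
            out.append(None)
        last_pos[addr] = i
    return out

short_term = reuse_distances(["A", "A", "B", "B"])  # [None, 0, None, 0]
long_term  = reuse_distances(["A", "B", "C", "A"])  # [None, None, None, 2]
```

A kernel whose distances mostly stay below the cache capacity gains little from a cache hierarchy, which matches the paper's finding that short-reuse PolyBench kernels run well on PIM while long-reuse kernels do not.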
