• Title/Summary/Keyword: Memory-Based Learning

Comparison of Traditional Workloads and Deep Learning Workloads in Memory Read and Write Operations

  • Jeongha Lee;Hyokyung Bahn
    • International Journal of Advanced Smart Convergence / v.12 no.4 / pp.164-170 / 2023
  • With the recent advances in AI (artificial intelligence) and HPC (high-performance computing) technologies, deep learning has proliferated across various domains of the 4th industrial revolution. As the workload volume of deep learning grows, analyzing its memory reference characteristics becomes important. In this article, we analyze the memory reference traces of deep learning workloads in comparison with traditional workloads, focusing specifically on read and write operations. Based on our analysis, we observe some unique characteristics of deep learning memory references that differ markedly from traditional workloads. First, when comparing instruction and data references, instruction references account for only a small portion of deep learning workloads. Second, when comparing reads and writes, write references account for the majority of memory references, which also differs from traditional workloads. Third, although write references are dominant, they exhibit low reference skewness compared to traditional workloads; specifically, the skew factor of write references is small. We expect that the analysis performed in this article will be helpful in efficiently designing memory management systems for deep learning workloads.
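
The abstract does not define its skew factor, so the Python sketch below uses a stand-in measure: the write share of a trace and the fraction of write references that hit the most frequently written addresses. It is purely illustrative, not the paper's methodology.

```python
from collections import Counter

def rw_stats(trace, top_frac=0.1):
    """Summarize a memory reference trace given as (op, address) pairs.

    Returns the write share and a simple skew proxy for writes: the
    fraction of write references hitting the top_frac most frequently
    written addresses (a stand-in for the paper's skew factor, whose
    exact definition is not given in the abstract).
    """
    writes = Counter(addr for op, addr in trace if op == "W")
    reads = Counter(addr for op, addr in trace if op == "R")
    n_w, n_r = sum(writes.values()), sum(reads.values())
    write_share = n_w / (n_w + n_r)

    top_n = max(1, int(len(writes) * top_frac))
    hot = sum(cnt for _, cnt in writes.most_common(top_n))
    skew = hot / n_w  # a value near top_frac indicates low skewness
    return write_share, skew

# Toy trace mimicking the reported pattern: write-heavy, evenly spread.
trace = [("W", a % 100) for a in range(900)] + [("R", a % 100) for a in range(100)]
print(rw_stats(trace))  # (0.9, 0.1): high write share, low skew
```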

Electroencephalography of Learning and Memory (학습과 기억의 뇌파)

  • Jeon, Hyeonjin;Lee, Seung-Hwan
    • Korean Journal of Biological Psychiatry / v.23 no.3 / pp.102-107 / 2016
  • This review summarizes EEG studies of learning and memory based on frequency bands, including theta waves (4-7 Hz), gamma waves (> 30 Hz), and alpha waves (7-12 Hz). The authors searched and reviewed EEG papers from PubMed, focusing especially on learning and memory. Theta waves are associated with the acquisition of new information from stimuli. Gamma waves are involved in comparing and binding old information in preexisting memory with new information from stimuli. Alpha waves are linked with attention, which in turn mediates the learning and memory process. Although EEG studies of learning and memory still have controversial issues, future EEG studies promise clinical benefits as methods continue to develop.
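
As a minimal illustration of how the bands named above are isolated in practice, the sketch below bandpass-filters a synthetic signal with SciPy. The sampling rate, filter order, and the 80 Hz upper edge for gamma are assumptions; the band edges otherwise follow the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Band edges (Hz) from the abstract; the gamma upper edge is assumed.
BANDS = {"theta": (4, 7), "alpha": (7, 12), "gamma": (30, 80)}

def band_power(eeg, fs, lo, hi):
    """Mean power of a 1-D signal sampled at fs Hz in the [lo, hi] band."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg)
    return np.mean(filtered ** 2)

fs = 250  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic theta-dominant signal (6 Hz sine plus noise).
eeg = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.randn(t.size)

for name, (lo, hi) in BANDS.items():
    print(name, band_power(eeg, fs, lo, hi))  # theta power dominates
```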

Optical Pattern Recognition Based on Holographic Associative Memory (홀로그램 연상기억을 이용한 광학적 영상인식에 관한 연구)

  • 서호형;김병윤;이상수
    • Proceedings of the Optical Society of Korea Conference / 1991.07a / pp.33-39 / 1991
  • We have developed a new holographic associative memory (HAM) based on adaptive learning using the learning pattern method (LPM). The LPM utilizes a simple optical implementation of outer-product learning to perform adaptive learning. Simulation results are presented.
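
Outer-product learning is the classical Hebbian scheme behind such associative memories; the sketch below is its electronic (Hopfield-style) analogue, not a model of the paper's optical/holographic implementation.

```python
import numpy as np

def train(patterns):
    """Store bipolar (+1/-1) patterns via summed outer products."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-coupling
    return W

def recall(W, probe, steps=5):
    """Iteratively recover the stored pattern nearest to the probe."""
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])  # first pattern with one flipped bit
print(recall(W, noisy))  # recovers the first stored pattern
```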

Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn;Seokil Song
    • Journal of Platform Technology / v.11 no.4 / pp.71-79 / 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is computationally intensive. Performing machine learning workloads in a Kubernetes-based cloud environment in which the computing framework and storage are separated allows resources to be allocated effectively, but delays can occur because I/O must be performed over network communication. In this paper, we propose a distributed in-memory caching technique to improve the performance of machine learning workloads performed in such an environment. In particular, we propose a new method that precaches the data required by machine learning workloads into the distributed in-memory cache, taking into account Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool.
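
The sketch below illustrates the precaching idea only in outline: shards that a pipeline step will need are fetched into an in-memory cache ahead of time. The Cache class and fetch_from_storage are hypothetical stand-ins, not the paper's system or any real Kubeflow API.

```python
import concurrent.futures as cf

class Cache:
    """Toy stand-in for a distributed in-memory cache."""
    def __init__(self):
        self._store = {}
    def put(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

def fetch_from_storage(shard):
    # Placeholder for a network read from separated remote storage.
    return f"bytes-of-{shard}"

def precache(cache, shards):
    """Warm the cache in parallel before the training step starts."""
    with cf.ThreadPoolExecutor(max_workers=4) as pool:
        for shard, data in zip(shards, pool.map(fetch_from_storage, shards)):
            cache.put(shard, data)

cache = Cache()
precache(cache, ["train-000", "train-001", "train-002"])
print(cache.get("train-001"))  # served from memory, no network I/O
```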

Study on Memory Performance Improvement based on Machine Learning (머신러닝 기반 메모리 성능 개선 연구)

  • Cho, Doosan
    • The Journal of the Convergence on Culture Technology / v.7 no.1 / pp.615-619 / 2021
  • This study focuses on memory systems optimized to increase performance and energy efficiency in environments such as IoT, cloud computing, and edge computing, and proposes a performance improvement technique. The proposed technique improves memory system performance using machine learning algorithms that are already widely deployed in many applications. Through supervised learning, machine learning can be applied to the data classification task used to improve memory system performance. Data classification based on highly accurate machine learning techniques enables data to be placed appropriately according to data usage patterns, thereby improving overall system performance.
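
As one plausible instance of the classification task described (not the paper's actual model), the sketch below trains a small decision tree to label data blocks hot or cold from usage features, so a runtime could place hot blocks in faster memory. The features and training data are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Features per block: [accesses/sec, write fraction, reuse distance]
# (assumed features; labels are made up for the example).
X_train = [
    [500, 0.8, 10], [420, 0.6, 15], [15, 0.1, 900],
    [8, 0.2, 1200], [610, 0.7, 8],  [22, 0.05, 700],
]
y_train = ["hot", "hot", "cold", "cold", "hot", "cold"]

clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Classify newly observed blocks to decide their placement.
new_blocks = [[450, 0.75, 12], [10, 0.1, 1000]]
print(clf.predict(new_blocks))  # e.g. ['hot' 'cold']
```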

Lifetime Extension Method for Non-Volatile Memory based Deep Learning System by analyzing Data Write Pattern (데이터 쓰기 패턴 분석을 통한 비휘발성 메모리 기반 딥러닝 시스템의 수명 연장 기법)

  • Choi, Juhee
    • Journal of the Semiconductor & Display Technology / v.21 no.3 / pp.1-6 / 2022
  • Modern computer systems usually have special hardware for the operations used in deep learning workloads, even in edge computing environments. Non-volatile memories (NVMs) have been considered as alternative memory storage because they consume little static energy and occupy a small area. However, there is an obstacle to adopting NVMs directly: an NVM cell has limited write endurance, so the lifetime of an NVM-based memory system is much shorter than that of a conventional memory system. To overcome this problem for deep learning systems, this paper proposes a novel method to extend the lifetime based on an analysis of deep learning workloads. If an incoming block contains more than a predefined number of frequently used values, the cacheline is marked as a write-friendly block. During victim selection, such a cacheline has a lower probability of being chosen as the victim. The experimental results show that the lifetime is increased by about 50% and energy consumption is decreased by 3% with only a slight performance loss.
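
A minimal sketch of the victim-selection idea follows, with assumed details: the "frequent values" are taken to be 0x00 and 0xFF and the threshold is 32 of a 64-byte block, and write-friendly lines are skipped entirely rather than merely deprioritized.

```python
import random

FREQUENT_VALUES = {0x00, 0xFF}  # assumed set of frequently used values
THRESHOLD = 32                  # assumed cutoff per 64-byte block

def is_write_friendly(block):
    """A block with many frequent values is cheap to rewrite in place."""
    return sum(b in FREQUENT_VALUES for b in block) > THRESHOLD

def select_victim(cache_set):
    """cache_set: list of (tag, block). Prefer evicting lines that are
    NOT write-friendly, keeping write-friendly lines resident longer."""
    candidates = [i for i, (_, blk) in enumerate(cache_set)
                  if not is_write_friendly(blk)]
    if candidates:
        return random.choice(candidates)
    return random.randrange(len(cache_set))  # all write-friendly: fall back

ways = [("A", bytes(64)),           # all zeros: write-friendly
        ("B", bytes(range(64))),    # mixed values: normal
        ("C", bytes([0xFF]) * 64)]  # all 0xFF: write-friendly
print(select_victim(ways))  # picks way 1 ("B")
```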

Improvement of Track Tracking Performance Using Deep Learning-based LSTM Model (딥러닝 기반 LSTM 모형을 이용한 항적 추적성능 향상에 관한 연구)

  • Hwang, Jin-Ha;Lee, Jong-Min
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.189-192 / 2021
  • This study applies a deep learning-based long short-term memory (LSTM) model to track-tracking technology. Existing track-tracking technology uses LMIPDA, based on a Kalman filter, to automatically adjust the weights of flight characteristics such as constant velocity, constant acceleration, stiff turn, and circular (3D) flight while tracking a target in real time. In this process, the performance of changing the flight-characteristic weights needs improvement, because a change in flight characteristics, such as a stiff turn during constant-velocity flight, can cause track loss and degrade tracking performance. This study aims to improve track-tracking performance by predicting changes in flight characteristics in advance and switching the flight-characteristic weights quickly. To this end, a deep learning-based LSTM model is trained on the plot and target data of a simulator with a radar error model applied, and the tracking results obtained with the Kalman filter are compared with those of the LSTM model.
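
For orientation only, the PyTorch sketch below shows the general shape of such a model: an LSTM that predicts the next 2-D track position from a window of past positions. The architecture, window length, and synthetic data are assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

class TrackLSTM(nn.Module):
    """Predict the next (x, y) position from a window of past positions."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):             # x: (batch, window, 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the final step

# Synthetic noisy straight-line track: windows of 8 steps -> next step.
t = torch.arange(0, 100, dtype=torch.float32)
track = torch.stack([t, 0.5 * t], dim=1) + 0.1 * torch.randn(100, 2)
X = torch.stack([track[i:i + 8] for i in range(91)])
y = torch.stack([track[i + 8] for i in range(91)])

model = TrackLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(loss.item())  # training loss after 50 steps
```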

Smart device based short-term memory training system for interpretation (스마트 단말에서의 통역용 단기기억력 향상 훈련 시스템)

  • Pyo, Ji Hye;An, Donghyeok
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.9 no.3 / pp.747-756 / 2019
  • Students studying interpretation perform additional study and training beyond their regular classes. In simultaneous and consecutive interpreting, the interpreter must memorize the speaker's utterance because of differences in language structure. To improve short-term memory, students perform memory training that requires a pair of students, so they cannot train alone and study efficiency decreases. To resolve this problem, a computer-based short-term memory training system has been proposed, in which students can train alone because the system replaces words in a text with special characters. However, a computer's low portability limits study efficiency, and because only a small share of the words is switched into special characters, the learning difficulty is low. To resolve these problems, a smart-device-based short-term memory training system is proposed. Students can use the training system without space constraints, and since the proposed system increases the number of words changed into special characters, the learning difficulty increases. We implemented and evaluated the functionalities of the proposed training system.
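
The masking mechanism is simple enough to sketch directly: replace a fraction of the words in a text with a special character so the learner must recall them. The masking ratio and symbol below are assumed parameters; raising the ratio raises the difficulty, which is the change the smart-device version makes.

```python
import random

def mask_words(text, ratio=0.5, symbol="■", seed=None):
    """Replace a given fraction of words with a special character."""
    rng = random.Random(seed)
    words = text.split()
    k = max(1, int(len(words) * ratio))
    for i in rng.sample(range(len(words)), k):
        words[i] = symbol * len(words[i])  # preserve word length as a hint
    return " ".join(words)

sentence = "the interpreter must hold the speaker's sentence in short term memory"
print(mask_words(sentence, ratio=0.4, seed=1))
```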

A Study on the Storage Requirement and Incremental Learning of the k-NN Classifier (K_NN 분류기의 메모리 사용과 점진적 학습에 대한 연구)

  • 이형일;윤충화
    • The Journal of Information Technology / v.1 no.1 / pp.65-84 / 1998
  • MBR (Memory Based Reasoning) is a supervised learning method that utilizes the distances between the input and trained patterns in its classification, and is also called a distance-based learning algorithm. MBR is based on the k-NN classifier, in which learning is performed by simply storing training patterns in memory without any further processing. This paper proposes a new learning algorithm that is more efficient than the traditional k-NN classifier and has incremental learning capability. Furthermore, our proposed algorithm is insensitive to noisy patterns and guarantees more efficient memory usage.
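
The baseline the paper improves on is easy to state in code: "training" just stores patterns, classification is a majority vote among the k nearest, and incremental learning amounts to appending. The sketch below shows that baseline only; the paper's storage-reduction and noise-handling refinements are not reproduced.

```python
import heapq

class KNN:
    """Plain k-NN / MBR baseline: learning = storing patterns in memory."""
    def __init__(self, k=3):
        self.k = k
        self.patterns = []  # list of (features, label)

    def add(self, x, label):
        self.patterns.append((x, label))  # incremental learning = append

    def classify(self, x):
        dist = lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x))
        nearest = heapq.nsmallest(self.k, self.patterns, key=dist)
        labels = [lbl for _, lbl in nearest]
        return max(set(labels), key=labels.count)  # majority vote

knn = KNN(k=3)
for x, y in [((0, 0), "a"), ((0, 1), "a"), ((5, 5), "b"),
             ((6, 5), "b"), ((1, 0), "a")]:
    knn.add(x, y)
print(knn.classify((0.5, 0.5)))  # 'a'
```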

A Hybrid of Rule based Method and Memory based Learning for Korean Text Chunking (한국어 구 단위화를 위한 규칙 기반 방법과 기억 기반 학습의 결합)

  • 박성배;장병탁
    • Journal of KIISE:Software and Applications / v.31 no.3 / pp.369-378 / 2004
  • In partially free word order languages like Korean and Japanese, the rule-based method is effective for text chunking and, thanks to well-developed overt postpositions and endings, shows performance as high as machine learning methods even with only a few rules. However, it has no ability to handle exceptions to the rules. Exception handling is an important task in natural language processing, and exceptions can be processed efficiently with memory-based learning. In this paper, we propose a hybrid of the rule-based method and memory-based learning for Korean text chunking. The proposed method is primarily based on the rules, and the chunks estimated by the rules are then verified by a memory-based classifier. An evaluation of the proposed method on the Korean STEP 2000 corpus yields an improvement in F-score over the rules or various machine learning methods alone. The final F-score is 94.19, while those of the rules and SVMs, the best machine learning method for this task, are only 91.87 and 92.54, respectively.
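
A toy sketch of the hybrid structure follows: rules propose chunk tags from POS tags, and a memory-based (nearest-neighbor) component trained on contexts verifies or overrides them. The tag set, rules, and memory entries are invented for illustration and do not reflect the paper's actual grammar or corpus.

```python
# Rules: map a POS tag directly to a chunk tag (invented examples).
RULES = {"NOUN": "B-NP", "JOSA": "I-NP", "VERB": "B-VP", "ENDING": "I-VP"}

# Memory: (prev POS, POS, next POS) contexts with their gold chunk tags.
MEMORY = [
    (("<s>", "NOUN", "JOSA"), "B-NP"), (("NOUN", "JOSA", "VERB"), "I-NP"),
    (("JOSA", "VERB", "ENDING"), "B-VP"), (("VERB", "ENDING", "</s>"), "I-VP"),
    (("NOUN", "NOUN", "JOSA"), "I-NP"),  # exception: noun-noun compound
]

def nearest(ctx):
    """Return the tag of the memory entry with the most matching fields."""
    overlap = lambda m: sum(a == b for a, b in zip(m[0], ctx))
    return max(MEMORY, key=overlap)[1]

def chunk(pos_tags):
    """Rules propose a tag; the memory-based verifier may override it."""
    padded = ["<s>"] + pos_tags + ["</s>"]
    out = []
    for i, pos in enumerate(pos_tags):
        rule_tag = RULES.get(pos, "O")
        mem_tag = nearest((padded[i], pos, padded[i + 2]))
        out.append(mem_tag if mem_tag != rule_tag else rule_tag)
    return out

# The rule alone mislabels the second NOUN as B-NP; memory corrects it.
print(chunk(["NOUN", "NOUN", "JOSA", "VERB", "ENDING"]))
# ['B-NP', 'I-NP', 'I-NP', 'B-VP', 'I-VP']
```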