• Title/Summary/Keyword: Memory Augmented Neural Network

A Survey on Neural Networks Using Memory Component (메모리 요소를 활용한 신경망 연구 동향)

  • Lee, Jihwan;Park, Jinuk;Kim, Jaehyung;Kim, Jaein;Roh, Hongchan;Park, Sanghyun
    • KIPS Transactions on Software and Data Engineering / v.7 no.8 / pp.307-324 / 2018
  • Recently, recurrent neural networks have attracted attention for prediction problems on sequential data because their structure accounts for time dependency. However, as the number of time steps in a sequence grows, the vanishing gradient problem arises. Long short-term memory models have been proposed to address this, but they are still limited in how much data they can store and how long they can preserve it. Therefore, research on the memory-augmented neural network (MANN), a learning model that combines a recurrent neural network with a memory component, has been actively conducted. In this paper, we describe the structure and characteristics of MANN models, which have emerged as a hot topic in the deep learning field, and present the latest techniques and future research directions that utilize MANNs.
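
As a rough illustration of the memory mechanism this survey covers, the sketch below shows one content-addressed read and one erase/add write over an external memory matrix, in the spirit of NTM-style MANNs; all tensor names and sizes are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def content_read(memory, key, beta=5.0):
    """Read from memory by cosine-similarity (content-based) addressing.

    memory: (N, M) matrix of N slots with M features
    key:    (M,) query vector emitted by a controller
    beta:   key strength that sharpens the softmax
    """
    sim = F.cosine_similarity(memory, key.unsqueeze(0), dim=1)  # (N,)
    weights = F.softmax(beta * sim, dim=0)                      # (N,)
    return weights @ memory, weights                            # read vector, weights

def write(memory, weights, erase, add):
    """Erase-then-add write used by NTM-style external memories."""
    memory = memory * (1 - weights.unsqueeze(1) * erase.unsqueeze(0))
    return memory + weights.unsqueeze(1) * add.unsqueeze(0)

# toy usage: 8 slots of width 4, driven by an (assumed) controller output
mem = torch.zeros(8, 4)
key, erase, add = torch.randn(4), torch.sigmoid(torch.randn(4)), torch.randn(4)
read_vec, w = content_read(mem, key)
mem = write(mem, w, erase, add)
```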

Rare Malware Classification Using Memory Augmented Neural Networks (메모리 추가 신경망을 이용한 희소 악성코드 분류)

  • Kang, Min Chul;Kim, Huy Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.4 / pp.847-857 / 2018
  • As the number of malware samples increases steeply, the number of cyber attack victims among corporations, public institutions, financial institutions, and hospitals is also increasing. Accordingly, academia and the security industry are conducting various studies on malware detection. In recent years, many of these studies have used machine learning techniques, including deep learning. For malware classification using Convolutional Neural Networks, ResNet, and similar models, the performance improvement over existing classification methods is substantial. However, a characteristic of targeted attacks is that they use custom malware crafted to operate only against a specific company, so it does not spread widely to a large number of users. Since there are few malware samples of this kind, it is difficult to apply previously studied machine learning or deep learning techniques. In this paper, we propose a method to classify malware when the number of samples is insufficient, as with targeted malware. As a result of the study, we confirmed that an accuracy of 97% can be achieved even with a small amount of data by applying the Memory Augmented Neural Network model.
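
The abstract does not spell out the classification step, but memory-augmented few-shot classifiers commonly predict by comparing a query embedding against a small memory of stored labelled embeddings. The sketch below shows that generic pattern; the embedding dimension, class count, and function names are assumptions, not the authors' pipeline.

```python
import torch
import torch.nn.functional as F

def classify_from_memory(query_emb, memory_keys, memory_labels, num_classes):
    """Predict a label for a query by soft attention over stored (key, label) pairs.

    query_emb:     (D,) embedding of the sample to classify
    memory_keys:   (N, D) embeddings of the few labelled samples seen so far
    memory_labels: (N,) integer class labels for those samples
    """
    sim = F.cosine_similarity(memory_keys, query_emb.unsqueeze(0), dim=1)  # (N,)
    attn = F.softmax(sim, dim=0)                                           # (N,)
    one_hot = F.one_hot(memory_labels, num_classes).float()                # (N, C)
    return attn @ one_hot                                                  # soft class distribution (C,)

# toy usage: 5 stored samples, 3 classes, 16-dim embeddings (all hypothetical)
keys = torch.randn(5, 16)
labels = torch.tensor([0, 1, 2, 1, 0])
query = torch.randn(16)
probs = classify_from_memory(query, keys, labels, num_classes=3)
```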

Robustness of Differentiable Neural Computer Using Limited Retention Vector-based Memory Deallocation in Language Model

  • Lee, Donghyun;Park, Hosung;Seo, Soonshin;Son, Hyunsoo;Kim, Gyujin;Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.837-852 / 2021
  • Recurrent neural network (RNN) architectures have been used for language modeling (LM) tasks that require learning long-range word or character sequences. However, the RNN architecture still suffers from unstable gradients on long-range sequences. To address the issue of long-range sequences, attention mechanisms have been used, showing state-of-the-art (SOTA) performance in LM tasks. A differentiable neural computer (DNC) is a deep learning architecture that uses an attention mechanism. The DNC architecture is a neural network augmented with a content-addressable external memory. However, in the write operation, some information unrelated to the input word remains in memory. Moreover, DNCs have been found to perform poorly with low numbers of weight parameters. Therefore, we propose a robust memory deallocation method using a limited retention vector. The limited retention vector determines whether the network increases or decreases its usage of information in external memory according to a threshold. We experimentally evaluate the robustness of a DNC implementing the proposed approach with respect to the size of the controller and external memory on the enwik8 LM task. When we decreased the number of weight parameters by 32.47%, the proposed DNC showed a low bits-per-character (BPC) degradation of 4.30%, demonstrating the effectiveness of our approach for language modeling tasks.
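
A minimal sketch of how a thresholded ("limited") retention vector could plug into the standard DNC usage update is given below; the threshold rule is one reading of the abstract, and all shapes and names are illustrative rather than the paper's exact equations.

```python
import torch

def limited_retention_usage(usage, write_w, read_ws, free_gates, threshold=0.5):
    """One DNC-style usage update with a thresholded retention vector.

    usage:      (N,) previous memory-slot usage in [0, 1]
    write_w:    (N,) previous write weighting
    read_ws:    (R, N) previous read weightings for R read heads
    free_gates: (R,) gates saying how much each read head may free memory
    """
    # standard DNC retention: product over read heads of (1 - f_r * w_r)
    retention = torch.prod(1 - free_gates.unsqueeze(1) * read_ws, dim=0)  # (N,)
    # "limited" retention (assumed interpretation): slots whose retention
    # falls below the threshold are deallocated outright
    retention = torch.where(retention < threshold,
                            torch.zeros_like(retention), retention)
    # usage grows where we wrote, then decays by the retention vector
    return (usage + write_w - usage * write_w) * retention

# toy usage: 6 memory slots, 2 read heads (all values illustrative)
u = limited_retention_usage(torch.rand(6), torch.rand(6),
                            torch.rand(2, 6), torch.sigmoid(torch.randn(2)))
```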

Industrial Process Monitoring and Fault Diagnosis Based on Temporal Attention Augmented Deep Network

  • Mu, Ke;Luo, Lin;Wang, Qiao;Mao, Fushun
    • Journal of Information Processing Systems / v.17 no.2 / pp.242-252 / 2021
  • Following the intuition that local information at individual time instances is hardly incorporated into the posterior sequence in long short-term memory (LSTM), this paper proposes an attention-augmented mechanism for fault diagnosis of complex chemical process data. Unlike conventional fault diagnosis and classification methods, an attention-mechanism layer is introduced to detect and focus on local temporal information. The resulting augmented deep network preserves the importance and contribution of each local instance and allows interpretable feature representation and classification simultaneously. Comprehensive comparative analyses demonstrate that the developed model achieves a high-quality fault classification rate of 95.49% on average. The results are comparable to those obtained using various other techniques on the Tennessee Eastman benchmark process.
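
A temporal attention layer over LSTM hidden states, of the kind the abstract describes, can be sketched as follows; the hidden size, class count, and the 52 input variables are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TemporalAttentionClassifier(nn.Module):
    """LSTM encoder followed by a temporal attention layer (illustrative sizes)."""

    def __init__(self, n_features, hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)            # one attention score per time step
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                            # x: (batch, time, n_features)
        h, _ = self.lstm(x)                          # (batch, time, hidden)
        attn = torch.softmax(self.score(h), dim=1)   # (batch, time, 1)
        context = (attn * h).sum(dim=1)              # attention-weighted sum over time
        return self.out(context), attn.squeeze(-1)   # logits and per-step weights

# toy usage: 4 sequences, 100 time steps, 52 process variables (assumed)
model = TemporalAttentionClassifier(n_features=52)
logits, weights = model(torch.randn(4, 100, 52))
```

Returning the attention weights alongside the logits is what makes the per-instance contributions inspectable, which is the interpretability point the abstract emphasizes.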

Meta Learning based Global Relation Extraction trained by Traditional Korean data (전통 문화 데이터를 이용한 메타 러닝 기반 전역 관계 추출)

  • Kim, Kuekyeng;Kim, Gyeongmin;Jo, Jaechoon;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.9 no.11 / pp.23-28 / 2018
  • Recent approaches to relation extraction tend to be limited to mention-level extraction. These methods, while achieving high performance, can only extract relations confined to a single sentence or so, and the inability to go beyond that scope entails a substantial loss of information. To tackle this problem, this paper presents an Augmented External Memory Neural Network model that enables global relation extraction. The proposed model performs global relation extraction by first gathering and analyzing mention-level relation extractions with the augmented external memory. Additionally, the proposed model performs well for Korean because it can take the frequently omitted subjects and objects into consideration.
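
One generic way to turn mention-level predictions into a global (document-level) relation, loosely in the spirit of the external-memory idea described here, is to accumulate per-entity-pair scores in a keyed memory and read them back out. The sketch below is a hypothetical illustration, not the authors' architecture; the class names and entity strings are made up.

```python
from collections import defaultdict

import torch

class RelationMemory:
    """Accumulates mention-level relation scores per entity pair (illustrative)."""

    def __init__(self, n_relations):
        self.n_relations = n_relations
        self.slots = defaultdict(lambda: torch.zeros(n_relations))
        self.counts = defaultdict(int)

    def write(self, head, tail, mention_scores):
        """Store one sentence-level prediction for the (head, tail) pair."""
        self.slots[(head, tail)] += mention_scores
        self.counts[(head, tail)] += 1

    def read(self, head, tail):
        """Global relation distribution: softmax of the mean mention-level scores."""
        key = (head, tail)
        if self.counts[key] == 0:
            return torch.zeros(self.n_relations)
        return torch.softmax(self.slots[key] / self.counts[key], dim=0)

# toy usage: two mention-level predictions for the same entity pair
mem = RelationMemory(n_relations=5)
mem.write("King_Sejong", "Hangul", torch.randn(5))
mem.write("King_Sejong", "Hangul", torch.randn(5))
global_relation = mem.read("King_Sejong", "Hangul")
```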

A Korean speech recognition based on conformer (콘포머 기반 한국어 음성인식)

  • Koo, Myoung-Wan
    • The Journal of the Acoustical Society of Korea / v.40 no.5 / pp.488-495 / 2021
  • We propose a speech recognition system based on the conformer. The conformer is a convolution-augmented transformer, which combines a transformer model for capturing global information with a Convolutional Neural Network (CNN) for effectively exploiting local features. The baseline system is a transformer-based speech recognizer with a Long Short-Term Memory (LSTM)-based language model. The proposed system uses a conformer instead of a transformer, together with a transformer-based language model. When evaluated on the Electronics and Telecommunications Research Institute (ETRI) speech corpus in AI-Hub, the proposed system yields a Character Error Rate (CER) of 5.7%, while the baseline system results in a CER of 11.8%. Even when the speech corpus is extended to another AI-Hub domain, the NHNdiguest speech corpus, the proposed system remains robust across both domains. These experiments validate the proposed system.
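
For reference, a conformer block interleaves half-step feed-forward modules, multi-head self-attention, and a depthwise-convolution module. The sketch below is a minimal PyTorch version with illustrative dimensions; it omits relative positional encoding and is not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConformerBlock(nn.Module):
    """Minimal conformer block: FFN/2 -> self-attention -> conv module -> FFN/2."""

    def __init__(self, d_model=144, n_heads=4, kernel_size=31):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(d_model),
                                 nn.Linear(d_model, 4 * d_model), nn.SiLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm_attn = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_conv = nn.LayerNorm(d_model)
        self.conv = nn.Sequential(
            nn.Conv1d(d_model, 2 * d_model, 1), nn.GLU(dim=1),
            nn.Conv1d(d_model, d_model, kernel_size,
                      padding=kernel_size // 2, groups=d_model),  # depthwise conv
            nn.BatchNorm1d(d_model), nn.SiLU(),
            nn.Conv1d(d_model, d_model, 1))
        self.ff2 = nn.Sequential(nn.LayerNorm(d_model),
                                 nn.Linear(d_model, 4 * d_model), nn.SiLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm_out = nn.LayerNorm(d_model)

    def forward(self, x):                     # x: (batch, time, d_model)
        x = x + 0.5 * self.ff1(x)
        a = self.norm_attn(x)
        x = x + self.attn(a, a, a)[0]         # global context via self-attention
        c = self.norm_conv(x).transpose(1, 2)
        x = x + self.conv(c).transpose(1, 2)  # local features via convolutions
        x = x + 0.5 * self.ff2(x)
        return self.norm_out(x)

# toy usage: batch of 2, 50 frames, 144-dim acoustic features (illustrative)
out = ConformerBlock()(torch.randn(2, 50, 144))
```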

Utilizing Deep Learning for Early Diagnosis of Autism: Detecting Self-Stimulatory Behavior

  • Seongwoo Park;Sukbeom Chang;JooHee Oh
    • International Journal of Advanced Culture Technology / v.12 no.3 / pp.148-158 / 2024
  • We investigate Autism Spectrum Disorder (ASD), which is typified by deficits in social interaction, repetitive behaviors, limited vocabulary, and cognitive delays. Traditional diagnostic methodologies, reliant on expert evaluations, frequently result in deferred detection and intervention, particularly in South Korea, where there is a dearth of qualified professionals and limited public awareness. In this study, we employ advanced deep learning algorithms to enhance early ASD screening through automated video analysis. Utilizing architectures such as Convolutional Long Short-Term Memory (ConvLSTM), Long-term Recurrent Convolutional Network (LRCN), and Convolutional Neural Networks with Gated Recurrent Units (CNN+GRU), we analyze video data from platforms such as YouTube and TikTok to identify stereotypic behaviors (arm flapping, head banging, spinning). Our results indicate that the LRCN model exhibited superior performance, with 79.61% accuracy on the augmented platform video dataset and 79.37% on the original SSBD dataset. The ConvLSTM and CNN+GRU models also achieved higher accuracy on the augmented dataset than on the original SSBD dataset. Through this research, we underscore AI's potential for early ASD detection by automating the identification of stereotypic behaviors, thereby enabling timely intervention. We also emphasize the significance of expanded datasets built from social media platform videos in improving model accuracy and robustness, thus paving the way for more accessible diagnostic methods.
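
An LRCN-style classifier applies a per-frame CNN and aggregates the frame features over time with an LSTM. The sketch below shows that structure with illustrative layers and sizes; it is not the network trained in the study.

```python
import torch
import torch.nn as nn

class LRCN(nn.Module):
    """Frame-level CNN followed by an LSTM over time (illustrative sizes)."""

    def __init__(self, n_classes=3, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # tiny per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim))
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)       # flapping / banging / spinning

    def forward(self, video):                          # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1))          # encode every frame: (batch*time, feat_dim)
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)                   # aggregate over time
        return self.head(h[-1])                        # (batch, n_classes)

# toy usage: 2 clips of 16 RGB frames at 64x64 (illustrative)
logits = LRCN()(torch.randn(2, 16, 3, 64, 64))
```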