• Title/Summary/Keyword: Memory-Based Learning


Fast Content Adaptive Interpolation Algorithm Using One-Dimensional Patch-Based Learning (일차원 패치 학습을 이용한 고속 내용 기반 보간 기법)

  • Kang, Young-Uk;Jeong, Shin-Cheol;Song, Byung-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.1 / pp.54-63 / 2011
  • This paper proposes a fast learning-based interpolation algorithm that up-scales a low-resolution input image into a high-resolution image. In conventional learning-based super-resolution, the relationship between low-resolution and high-resolution images is learned from various training images, and high-frequency synthesis information is derived from it; an arbitrary low-resolution image can then be super-resolved using that information. However, such algorithms require heavy memory space to store the large synthesis database as well as significant computation for the two-dimensional matching process. To mitigate this problem, this paper presents one-dimensional patch-based learning and synthesis, which noticeably reduces memory cost and computational complexity. Simulation results show that the proposed algorithm improves PSNR and SSIM over conventional bicubic interpolation by about 0.7 dB and 0.01 on average, respectively.
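
As an illustration of why one-dimensional matching is cheap, here is a minimal sketch of patch-based learning and synthesis restricted to 1-D patches; the patch length, the nearest-neighbor search, and the use of patch-center details are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of 1-D patch-based learning and synthesis (illustrative only).
import numpy as np

PATCH = 5  # hypothetical 1-D patch length

def build_dictionary(lr_rows, hf_rows):
    """Pair each 1-D low-res patch with the co-located high-frequency detail."""
    keys, values = [], []
    for lr, hf in zip(lr_rows, hf_rows):
        for i in range(len(lr) - PATCH + 1):
            keys.append(lr[i:i + PATCH])
            values.append(hf[i + PATCH // 2])  # detail at the patch center
    return np.array(keys), np.array(values)

def synthesize_row(lr_row, keys, values):
    """For each pixel, find the nearest stored 1-D patch and add its detail."""
    out = lr_row.astype(float).copy()
    for i in range(len(lr_row) - PATCH + 1):
        patch = lr_row[i:i + PATCH]
        j = np.argmin(np.sum((keys - patch) ** 2, axis=1))  # 1-D match, not 2-D
        out[i + PATCH // 2] += values[j]
    return out
```

Because each key is a short 1-D vector, both the dictionary and each match grow linearly with the patch length, unlike 2-D block matching.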

Prediction of Cryogenic- and Room-Temperature Deformation Behavior of Rolled Titanium using Machine Learning (타이타늄 압연재의 기계학습 기반 극저온/상온 변형거동 예측)

  • S. Cheon;J. Yu;S.H. Lee;M.-S. Lee;T.-S. Jun;T. Lee
    • Transactions of Materials Processing / v.32 no.2 / pp.74-80 / 2023
  • The deformation behavior of commercially pure titanium (CP-Ti) is highly dependent on material and processing parameters such as deformation temperature, deformation direction, and strain rate. This study aims to predict the multivariable, nonlinear tensile behavior of CP-Ti using machine learning based on three algorithms: an artificial neural network (ANN), a light gradient boosting machine (LGBM), and long short-term memory (LSTM). Predictions of tensile behavior at cryogenic temperature were less accurate than those at room temperature because of larger scatter in the training dataset. Although LGBM achieved the lowest root mean squared error, it was not the best strategy owing to overfitting and a step-function morphology that differs from the actual data. LSTM performed best: it effectively learned the continuous character of a flow curve and required less training time, even without a sufficient database or hyperparameter tuning.
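
As a rough sketch of how an LSTM can learn a continuous flow curve, the following PyTorch model maps a strain sequence plus fixed test conditions (temperature, direction, strain rate) to a stress sequence; the architecture, sizes, and inputs are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: LSTM regression of stress over a strain sequence.
import torch
import torch.nn as nn

class FlowCurveLSTM(nn.Module):
    def __init__(self, n_cond=3, hidden=64):
        super().__init__()
        # input per step: strain + (temperature, direction, strain rate)
        self.lstm = nn.LSTM(input_size=1 + n_cond, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted stress at each strain step

    def forward(self, strain, cond):
        # strain: (batch, steps, 1); cond: (batch, n_cond) broadcast over steps
        cond_seq = cond.unsqueeze(1).expand(-1, strain.size(1), -1)
        h, _ = self.lstm(torch.cat([strain, cond_seq], dim=-1))
        return self.head(h)

model = FlowCurveLSTM()
stress = model(torch.rand(8, 100, 1), torch.rand(8, 3))  # (8, 100, 1)
```

Treating the flow curve as a sequence is what lets the recurrent state carry the curve's continuity, which the abstract credits for LSTM's smooth (non-step-function) predictions.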

Estimation of Software Reliability with Immune Algorithm and Support Vector Regression (면역 알고리즘 기반의 서포트 벡터 회귀를 이용한 소프트웨어 신뢰도 추정)

  • Kwon, Ki-Tae;Lee, Joon-Kil
    • Journal of Information Technology Services / v.8 no.4 / pp.129-140 / 2009
  • Accurate estimation of software reliability is important for successful software development. Until recently, regression models based on statistical algorithms and machine learning methods have been used. This paper instead estimates software reliability using support vector regression (SVR), a machine learning technique, and finds the best set of SVR parameters by applying an immune algorithm, varying the number of generations, memory cells, and alleles. The proposed IA-SVR model outperforms several recent results reported in the literature.
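
A minimal sketch of the idea, assuming a clonal-selection-style immune loop over scikit-learn's SVR hyperparameters; the fitness function, mutation scale, and population sizes are illustrative, not the paper's settings.

```python
# Hedged sketch: immune-algorithm-style search over SVR parameters (C, epsilon, gamma).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(params, X, y):
    c, eps, gamma = np.exp(params)  # search in log space, keep values positive
    model = SVR(C=c, epsilon=eps, gamma=gamma)
    return cross_val_score(model, X, y, scoring="neg_mean_squared_error").mean()

def immune_search(X, y, generations=20, memory_cells=5, clones=4):
    pop = rng.normal(0.0, 1.0, size=(memory_cells, 3))  # antibodies: log-params
    for _ in range(generations):
        scores = np.array([fitness(p, X, y) for p in pop])
        best = pop[np.argsort(scores)[::-1]]             # high affinity first
        # clone the best cells with small mutations (hypermutation)
        mutants = best.repeat(clones, axis=0) + rng.normal(0, 0.3, (memory_cells * clones, 3))
        cand = np.vstack([best, mutants])
        cand_scores = np.array([fitness(p, X, y) for p in cand])
        pop = cand[np.argsort(cand_scores)[::-1][:memory_cells]]  # keep memory cells
    return np.exp(pop[0])  # best (C, epsilon, gamma)
```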

Automization of grinding process by CMAC (CMAC 메모리에 의한 연마공정자동화)

  • 정재문;김기엽;정광조
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1990.10a / pp.186-189 / 1990
  • The automation of manufacturing lines may be accomplished by replacing the human operator with a computer system. This paper describes an idea for fully automating the razor-grinding process. In the current system, a human operator must continuously estimate the ground state and control the grinding machine. We propose two methods for automating this process with CMAC memory: one learns expert rules without direct communication with the operator, and the other is a completely self-learning method based on CMAC's learning algorithm. These ideas may also be applied to other manufacturing processes.
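
For readers unfamiliar with CMAC (Cerebellar Model Articulation Controller), here is a minimal sketch of the memory structure: several offset tilings hash an input state to weight cells whose summed contents form the output. The tiling counts, learning rate, and stand-in target signal are assumptions, not the paper's controller.

```python
# Hedged sketch of a 1-D CMAC lookup-table learner.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_bins=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.n_bins, self.lr = n_tilings, n_bins, lr
        self.lo, self.width = lo, (hi - lo) / n_bins
        self.weights = np.zeros((n_tilings, n_bins + 1))

    def _cells(self, x):
        # each tiling is shifted by a fraction of one bin width
        offsets = np.arange(self.n_tilings) / self.n_tilings * self.width
        return ((x - self.lo + offsets) / self.width).astype(int)

    def predict(self, x):
        return self.weights[np.arange(self.n_tilings), self._cells(x)].sum()

    def train(self, x, target):
        error = target - self.predict(x)
        # spread the correction evenly over the active cells
        self.weights[np.arange(self.n_tilings), self._cells(x)] += (
            self.lr * error / self.n_tilings)

# e.g. learn a control setpoint as a function of a sensed state in [0, 1]
cmac = CMAC()
for _ in range(200):
    x = np.random.rand()
    cmac.train(x, np.sin(2 * np.pi * x))  # stand-in target signal
```

The overlapping tilings give local generalization: nearby states share most of their active cells, which is what makes CMAC suitable for learning a smooth operator policy online.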


Robustness of Differentiable Neural Computer Using Limited Retention Vector-based Memory Deallocation in Language Model

  • Lee, Donghyun;Park, Hosung;Seo, Soonshin;Son, Hyunsoo;Kim, Gyujin;Kim, Ji-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.3 / pp.837-852 / 2021
  • Recurrent neural network (RNN) architectures have been used for language modeling (LM) tasks that require learning long-range word or character sequences. However, the RNN architecture still suffers from unstable gradients on long-range sequences. To address this issue, attention mechanisms have been used, showing state-of-the-art (SOTA) performance in LM tasks. A differentiable neural computer (DNC) is a deep learning architecture using an attention mechanism: a neural network augmented with a content-addressable external memory. However, during the write operation, some information unrelated to the input word remains in memory. Moreover, DNCs have been found to perform poorly with small numbers of weight parameters. Therefore, we propose a robust memory-deallocation method using a limited retention vector, which determines whether the network increases or decreases its usage of information in external memory according to a threshold. We experimentally evaluate the robustness of a DNC implementing the proposed approach for various controller and external-memory sizes on the enwik8 LM task. When the number of weight parameters was decreased by 32.47%, the proposed DNC showed a bits-per-character (BPC) degradation of only 4.30%, demonstrating the effectiveness of our approach in language modeling tasks.
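
The abstract's mechanism can be sketched as follows, assuming the standard DNC memory-retention vector and a hypothetical threshold; the exact equations the authors modify are not reproduced here.

```python
# Hedged sketch of a limited retention vector: slots whose retention falls
# below a threshold are fully deallocated, the rest are fully kept.
import numpy as np

def limited_retention(usage, free_gates, read_weights, threshold=0.5):
    """usage: (N,) slot usage; free_gates: (R,); read_weights: (R, N)."""
    # standard DNC memory-retention vector: psi = prod_i (1 - f_i * w_i)
    psi = np.prod(1.0 - free_gates[:, None] * read_weights, axis=0)
    # limited variant: hard-threshold retention instead of scaling usage softly
    psi_limited = np.where(psi < threshold, 0.0, 1.0)
    return usage * psi_limited

usage = np.array([0.9, 0.6, 0.2, 0.8])
free_gates = np.array([1.0])
read_weights = np.array([[0.7, 0.1, 0.0, 0.9]])
print(limited_retention(usage, free_gates, read_weights))  # [0. 0.6 0.2 0.]
```

Hard deallocation of heavily read, freed slots is one way to keep stale write contents from lingering in a small external memory, consistent with the robustness the abstract reports at reduced parameter counts.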

Research on Forecasting Framework for System Marginal Price based on Deep Recurrent Neural Networks and Statistical Analysis Models

  • Kim, Taehyun;Lee, Yoonjae;Hwangbo, Soonho
    • Clean Technology / v.28 no.2 / pp.138-146 / 2022
  • Electricity has become a factor that dramatically affects the market economy. The day-ahead system marginal price determines electricity prices, so system marginal price forecasting is critical in maintaining energy management systems. Several studies have used mathematical and machine learning models to forecast the system marginal price, but few have developed, compared, and analyzed various machine learning and deep learning models within a data-driven framework. Therefore, in this study, machine learning algorithms (autoregressive-based models such as the autoregressive integrated moving average, ARIMA) and deep learning networks (recurrent models such as long short-term memory, LSTM, and the gated recurrent unit, GRU) are considered, and integrated evaluation metrics including a forecasting test and information criteria are proposed to select the optimal forecasting model. A case study of South Korea using long-term time-series system marginal price data from 2016 to 2021 was applied to the developed framework. The results indicate that the ARIMA model (R-squared score: 0.97) and the GRU model (R-squared score: 0.94) are appropriate for system marginal price forecasting. This study is expected to contribute significantly to energy management systems, and the suggested framework can be applied directly to renewable energy networks.
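
One plausible reading of the framework's model-comparison step, sketched with statsmodels' ARIMA on a stand-in SMP series; the order, train/test split, and stand-in data are assumptions, not the study's configuration.

```python
# Hedged sketch: fit ARIMA on historical SMP data, score a hold-out window
# with R-squared, and read off information criteria from the fit.
import numpy as np
from sklearn.metrics import r2_score
from statsmodels.tsa.arima.model import ARIMA

smp = np.cumsum(np.random.default_rng(1).normal(0, 1, 500)) + 100  # stand-in SMP series
train, test = smp[:450], smp[450:]

fit = ARIMA(train, order=(2, 1, 2)).fit()
pred = fit.forecast(steps=len(test))
print("R-squared:", r2_score(test, pred))
# AIC/BIC from the fit can serve as the information criteria in the framework
print("AIC:", fit.aic, "BIC:", fit.bic)
```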

Water Level Forecasting based on Deep Learning: A Use Case of Trinity River-Texas-The United States (딥러닝 기반 침수 수위 예측: 미국 텍사스 트리니티강 사례연구)

  • Tran, Quang-Khai;Song, Sa-kwang
    • Journal of KIISE / v.44 no.6 / pp.607-612 / 2017
  • This paper presents an attempt to apply deep learning to the problem of forecasting floods in urban areas. We employ recurrent neural networks (RNNs), which are well suited to time-series data, to learn observed river data and predict the water level. To test the model, we use observation data from a station on the Trinity River, Texas, U.S., with data from 2013 to 2015 for training and data from 2016 for testing. The network input is a sequence of 16 records of 15-minute-interval time-series data, and the output is the predicted water level 30 and 60 minutes ahead. In the experiment, we compare three deep learning models: a standard RNN, an RNN trained with backpropagation through time (RNN-BPTT), and long short-term memory (LSTM). LSTM attains a Nash efficiency exceeding 0.98, while the standard RNN and RNN-BPTT also provide very high accuracy.
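
A minimal sketch of the windowing and scoring the abstract describes: 16-record input windows over 15-minute data and a Nash-Sutcliffe efficiency metric; the persistence baseline and stand-in gauge data are illustrative assumptions.

```python
# Hedged sketch: sliding-window dataset construction and NSE scoring.
import numpy as np

def make_windows(levels, n_in=16, horizon=2):
    """horizon=2 steps of 15 min = 30 min ahead; horizon=4 would be 60 min."""
    X, y = [], []
    for i in range(len(levels) - n_in - horizon + 1):
        X.append(levels[i:i + n_in])
        y.append(levels[i + n_in + horizon - 1])
    return np.array(X), np.array(y)

def nash_sutcliffe(obs, pred):
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

levels = np.sin(np.linspace(0, 20, 1000)) + 3.0  # stand-in gauge readings
X, y = make_windows(levels)
persistence = X[:, -1]  # naive baseline: repeat the last observed level
print("NSE (persistence):", nash_sutcliffe(y, persistence))
```

An NSE of 1 is a perfect forecast and 0 means no better than predicting the mean, which puts the paper's 0.98 in context.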

Development of Mobile-application based Cognitive Training Program for Cancer Survivors with Cognitive Complaints (암 환자를 위한 앱 기반의 인지건강훈련 프로그램의 개발)

  • Oh, Pok Ja;Youn, Jung-Hae;Kim, Ji Hyun
    • Korean Journal of Adult Nursing / v.29 no.3 / pp.266-277 / 2017
  • Purpose: The purpose of this study was to design a mobile application delivering a cognitive training program for people with chemotherapy-related cognitive complaints. Methods: The program was developed based on the network-based instructional system design proposed by Jung. It consists of tasks centered on four cognitive domains: learning, memory, working memory, and attention. In memory learning, a target image and all its elements (color, position, and number) are presented on the screen and must then be recognized among a number of distractor figures. In working-memory training, the previously learned target figure must be remembered among many different figures, with the set varying by difficulty level. In the attention task "Find the same figure," two identical symbols in a grid filled with different images are presented and must be touched simultaneously. In the attention task "Find the different figure," the single different symbol in a grid filled with identical figures must be selected. The program was designed for a minimum of 20 min/day, four days/week, for six weeks. Results: The cognitive training produced a statistically significant improvement in subjective cognitive impairment (t=3.88, p=.006) at six weeks in eight cancer survivors. Conclusion: This cognitive training program is expected to offer individualized training opportunities for improving cognitive function, and further research is needed to test its effect in various settings.
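
As a toy illustration of the "Find the different figure" task logic, the following sketch builds a grid of one repeated symbol with a single odd one out and checks a player's touch; grid size and symbols are assumptions, not the app's assets.

```python
# Hedged sketch of the odd-one-out grid task.
import random

def make_grid(n=4, symbols=("A", "B")):
    base, odd = random.sample(symbols, 2)
    grid = [[base] * n for _ in range(n)]
    r, c = random.randrange(n), random.randrange(n)
    grid[r][c] = odd                      # the single different figure
    return grid, (r, c)

grid, answer = make_grid()
guess = (0, 0)                            # hypothetical touch coordinate
print("correct" if guess == answer else "try again")
```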

Agent with Low-latency Overcoming Technique for Distributed Cluster-based Machine Learning

  • Seo-Yeon, Gu;Seok-Jae, Moon;Byung-Joon, Park
    • International Journal of Internet, Broadcasting and Communication / v.15 no.1 / pp.157-163 / 2023
  • Recently, as businesses and data types have become more complex and diverse, efficient data analysis using machine learning is required. However, communication in the cloud environment is greatly affected by network latency, and data analysis stalls when information is delayed. In this paper, SPT (Safe Proper Time) is applied to the cluster-based machine learning data-analysis agent proposed in previous studies to solve this delay problem. SPT accesses the memory of the cluster that processes data between layers remotely and directly, effectively improving data transfer speed and ensuring the timeliness and reliability of data transfer.
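
Since the abstract gives no implementation detail, the following is only a loose sketch of an SPT-style timeliness check: an agent chooses a direct memory path when the estimated network delay would exceed a "safe proper time" budget. Every name, number, and path here is hypothetical.

```python
# Hedged sketch of a timeliness-budgeted transfer decision.
import time

SAFE_PROPER_TIME = 0.050  # hypothetical per-transfer budget, seconds

def choose_path(estimated_network_delay):
    # prefer the normal network path; fall back to direct memory access
    # when the estimate would violate the timeliness budget
    return "network" if estimated_network_delay < SAFE_PROPER_TIME else "direct_memory"

def transfer(payload, estimated_network_delay):
    start = time.perf_counter()
    path = choose_path(estimated_network_delay)
    # ... perform the actual copy over the chosen path here ...
    elapsed = time.perf_counter() - start
    return path, elapsed <= SAFE_PROPER_TIME  # (path used, deadline met?)

print(transfer(b"batch-0", estimated_network_delay=0.120))
```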

Sinusoidal Map Jumping Gravity Search Algorithm Based on Asynchronous Learning

  • Zhou, Xinxin;Zhu, Guangwei
    • Journal of Information Processing Systems / v.18 no.3 / pp.332-343 / 2022
  • To address the tendency of the gravitational search algorithm (GSA) to converge prematurely and fall into local solutions on single-objective optimization problems, a sine-map jumping gravity search algorithm based on asynchronous learning is proposed. First, a learning mechanism is introduced into the GSA: agents keep learning from the excellent agents of the population as they evolve, maintaining the memory and sharing of evolutionary information. This addresses the algorithm's shortcoming that particle information depends only on the current position, improves population diversity, and avoids premature convergence. Second, a sine function maps the change in particle velocity to a position-update probability, improving convergence accuracy. Third, a Levy flight strategy is introduced to prevent particles from falling into local optima. Finally, the proposed algorithm and other intelligent algorithms are simulated on 18 benchmark functions. The simulation results show that the proposed algorithm achieves better performance.
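
Two ingredients named in the abstract can be sketched directly: the sine mapping from velocity to a position-update probability, and Levy-flight jumps (here via Mantegna's approximation); the constants and the surrounding update scheme are assumptions, not the paper's algorithm.

```python
# Hedged sketch: sine-mapped position updates plus a Levy-flight jump.
import numpy as np

rng = np.random.default_rng(0)

def sine_update_probability(velocity):
    # |sin(v)| maps any velocity into [0, 1] for use as a move probability
    return np.abs(np.sin(velocity))

def levy_step(size, beta=1.5):
    # Mantegna's algorithm for Levy-stable step lengths
    from math import gamma, sin, pi
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, size)
    v = rng.normal(0, 1, size)
    return u / np.abs(v) ** (1 / beta)

position = rng.uniform(-5, 5, 10)
velocity = rng.normal(0, 1, 10)
move = rng.random(10) < sine_update_probability(velocity)   # sine-mapped move
position = np.where(move, position + velocity, position)
position += 0.01 * levy_step(10)  # occasional long jumps escape local optima
```

The heavy-tailed Levy steps occasionally produce long jumps, which is what lets particles escape local optima while the sine mapping keeps routine moves probabilistic rather than deterministic.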