• Title/Summary/Keyword: neural learning scheme

Search results: 260

Dynamic Neural Units and Genetic Algorithms With Applications to the Control of Unknown Nonlinear Systems (미지의 비선형 시스템 제어를 위한 DNU와 GA알고리즘 적용에 관한 연구)

  • XiaoBing, Zhao;Min, Lin;Cho, Hyeon-Seob;Jeon, Jeong-Chay
    • Proceedings of the KIEE Conference / 2002.07d / pp.2486-2489 / 2002
  • […] pool of the central nervous system. In this thesis, we present a genetic DNU control scheme for unknown nonlinear systems. Our method differs from those using supervised learning algorithms, such as the backpropagation (BP) algorithm, which need training information at each step. The contributions of this thesis are a new approach to constructing the neural network architecture and its training (a loose illustrative sketch follows below).

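The abstract contrasts genetic-algorithm training with step-by-step supervised backpropagation. As a loose illustration of that idea only (the paper's dynamic neural unit architecture and plant are not described here), the following Python sketch evolves the weights of a tiny feedforward controller against a made-up plant cost; every network size, GA parameter, and the plant model itself are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def controller(weights, x):
    """Tiny 1-hidden-layer network: 2 inputs -> 4 hidden -> 1 control output."""
    W1 = weights[:8].reshape(4, 2)
    b1 = weights[8:12]
    W2 = weights[12:16]
    b2 = weights[16]
    h = np.tanh(W1 @ x + b1)
    return np.tanh(W2 @ h + b2)

def episode_cost(weights, steps=50):
    """Hypothetical 'unknown' plant: drive the first state toward zero."""
    x = np.array([1.0, 0.0])
    cost = 0.0
    for _ in range(steps):
        u = controller(weights, x)
        # simple nonlinear plant used only for illustration
        x = np.array([x[0] + 0.1 * x[1],
                      x[1] + 0.1 * (u - 0.5 * np.sin(x[0]))])
        cost += x[0] ** 2 + 0.01 * u ** 2
    return cost

# plain generational GA: tournament selection, blend crossover, Gaussian mutation
pop = rng.normal(0.0, 0.5, size=(40, 17))
for gen in range(100):
    fitness = np.array([episode_cost(ind) for ind in pop])
    new_pop = [pop[fitness.argmin()]]            # elitism: keep the best individual
    while len(new_pop) < len(pop):
        i, j = rng.integers(0, len(pop), 2)
        k, l = rng.integers(0, len(pop), 2)
        p1 = pop[i] if fitness[i] < fitness[j] else pop[j]
        p2 = pop[k] if fitness[k] < fitness[l] else pop[l]
        child = 0.5 * (p1 + p2) + rng.normal(0.0, 0.1, 17)   # crossover + mutation
        new_pop.append(child)
    pop = np.array(new_pop)

print("best episode cost:", episode_cost(pop[0]))
```

No gradient or per-step training signal is used; the only feedback is the episode cost, which is the property the abstract emphasizes.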

A Learning Scheme for Hardware Implementation of Feedforward Neural Networks (FNNs의 하드웨어 구현을 위한 학습방안)

  • Park, Jin-Sung;Cho, Hwa-Hyun;Chae, Jong-Seok;Choi, Myung-Ryul
    • Proceedings of the KIEE Conference / 1999.07g / pp.2974-2976 / 1999
  • In this paper, we propose a learning scheme required for the hardware implementation of feedforward neural networks (FNNs) capable of single-pattern and multi-pattern learning. The proposed scheme is entirely different from the methods used in existing hardware implementations and instead resembles conventional software-based learning. Whereas existing hardware implementations use off-line learning or single-pattern on-chip learning, the proposed scheme supports single/multi-pattern on-chip learning by inserting a switching circuit between the multilayer FNN circuit and the learning circuit; the FNN learning circuit implements the MEBP (Modified Error Back-Propagation) learning rule using linear synapse circuits and linear multiplier circuits. The proposed design was implemented in a standard CMOS process and its operation was verified with the HSPICE circuit simulator. The implemented FNNs generate output voltages uniquely determined by a given pair of training patterns. The proposed learning scheme is expected to be well suited to the future implementation of large-scale trainable neural networks (a software sketch follows below).

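The abstract describes on-chip single/multi-pattern learning with an MEBP rule realized in analog circuitry. The exact MEBP modification is not given here, so the sketch below falls back on standard error back-propagation for a one-hidden-layer FNN and merely switches between per-pattern and whole-set updates; the XOR patterns, layer sizes, and learning rate are illustrative assumptions, and nothing hardware-specific is modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy FNN: 2 inputs -> 4 hidden -> 1 output, sigmoid activations
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def backprop_update(X, T, lr=1.0):
    """One error back-propagation step on the pattern set X/T
    (a single row means single-pattern mode)."""
    global W1, b1, W2, b2
    H = sig(X @ W1.T + b1)                  # hidden activations
    Y = sig(H @ W2.T + b2)                  # network outputs
    dY = (Y - T) * Y * (1.0 - Y)            # output-layer delta
    dH = (dY @ W2) * H * (1.0 - H)          # hidden-layer delta
    W2 -= lr * dY.T @ H / len(X); b2 -= lr * dY.mean(0)
    W1 -= lr * dH.T @ X / len(X); b1 -= lr * dH.mean(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])      # XOR training pairs

single_pattern = True                        # the "switch" between learning modes
for epoch in range(10000):
    if single_pattern:
        for x, t in zip(X, T):               # pattern-by-pattern (online) updates
            backprop_update(x[None, :], t[None, :])
    else:
        backprop_update(X, T)                # whole-set (multi-pattern) update

print(np.round(sig(sig(X @ W1.T + b1) @ W2.T + b2), 2))
```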

Comparison of Power Consumption Prediction Scheme Based on Artificial Intelligence (인공지능 기반 전력량예측 기법의 비교)

  • Lee, Dong-Gu;Sun, Young-Ghyu;Kim, Soo-Hyun;Sim, Issac;Hwang, Yu-Min;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.4 / pp.161-167 / 2019
  • Recently, demand forecasting techniques have been actively studied owing to interest in a stable power supply amid surging power demand and the increasing spread of smart meters that enable real-time power measurement. In this study, we conducted experiments with deep learning prediction models that learn from actual measured household power-usage data and output forecasts, with pre-processing performed using a moving-average method. The predicted values are evaluated against the actual measured data. Such forecasting makes it possible to lower the power supply reserve ratio and reduce the waste of unused power. We conducted experiments on three types of networks, Multi-Layer Perceptron (MLP), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM), and evaluated each scheme using the Mean Squared Error (MSE) and Mean Absolute Error (MAE) metrics (a minimal sketch of such a comparison follows below).
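The abstract names the three network types and the two error metrics but no concrete configuration. The sketch below, on synthetic data, shows one way an MLP/RNN/LSTM comparison with moving-average pre-processing and MSE/MAE evaluation could be wired up in PyTorch; the window length, layer sizes, and training settings are assumptions, not the paper's.

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
np.random.seed(0)

# synthetic "hourly household power usage" series (stand-in for the measured data)
t = np.arange(2000, dtype=np.float32)
series = 1.0 + 0.5 * np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(2000)
series = np.convolve(series, np.ones(5) / 5, mode="valid").astype(np.float32)  # moving average

def make_windows(x, lag=24):
    """Sliding windows of the previous `lag` hours -> next-hour target."""
    X = np.stack([x[i:i + lag] for i in range(len(x) - lag)])
    y = x[lag:, None]
    return torch.tensor(X), torch.tensor(y)

X, y = make_windows(series)
X_tr, y_tr, X_te, y_te = X[:1500], y[:1500], X[1500:], y[1500:]

class RecurrentForecaster(nn.Module):
    def __init__(self, cell):
        super().__init__()
        self.rnn = cell(input_size=1, hidden_size=32, batch_first=True)
        self.out = nn.Linear(32, 1)
    def forward(self, x):
        h, _ = self.rnn(x.unsqueeze(-1))        # (batch, lag, hidden)
        return self.out(h[:, -1])               # predict the next value

models = {
    "MLP": nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1)),
    "RNN": RecurrentForecaster(nn.RNN),
    "LSTM": RecurrentForecaster(nn.LSTM),
}

for name, model in models.items():
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):                         # full-batch training, for brevity
        opt.zero_grad()
        nn.functional.mse_loss(model(X_tr), y_tr).backward()
        opt.step()
    with torch.no_grad():
        pred = model(X_te)
        mse = nn.functional.mse_loss(pred, y_te).item()
        mae = nn.functional.l1_loss(pred, y_te).item()
    print(f"{name}: MSE={mse:.4f}  MAE={mae:.4f}")
```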

Graph Neural Network and Reinforcement Learning based Optimal VNE Method in 5G and B5G Networks (5G 및 B5G 네트워크에서 그래프 신경망 및 강화학습 기반 최적의 VNE 기법)

  • Seok-Woo Park;Kang-Hyun Moon;Kyung-Taek Chung;In-Ho Ra
    • Smart Media Journal / v.12 no.11 / pp.113-124 / 2023
  • With the advent of 5G and B5G (Beyond 5G) networks, network virtualization technology that can overcome the limitations of existing networks is attracting attention. The purpose of network virtualization is to provide solutions for efficient network resource utilization and various services. Existing heuristic-based VNE (Virtual Network Embedding) techniques have been studied, but their flexibility is limited. Therefore, in this paper, we propose a GNN-based network slicing classification scheme to meet various service requirements and an RL-based VNE scheme for optimal resource allocation. The proposed method performs optimal VNE using an Actor-Critic network. Finally, to evaluate the performance of the proposed technique, we compare it with the Node Rank, MCST-VNE, and GCN-VNE techniques. The performance analysis shows that the GNN- and RL-based VNE technique outperforms the existing techniques in terms of acceptance rate and resource efficiency (a structural sketch follows below).
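A full VNE pipeline is well beyond an abstract, but the structural idea of GNN embeddings feeding an Actor-Critic that places virtual nodes can be sketched. The toy below uses one graph-convolution layer over a hand-made 6-node substrate and a one-step placement episode; the topology, reward, and all sizes are invented for illustration and do not reproduce the paper's method or its MCST-VNE/GCN-VNE baselines.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# toy substrate network: 6 nodes with residual CPU capacities (illustrative only)
A = torch.tensor([[0., 1., 1., 0., 0., 0.],
                  [1., 0., 1., 1., 0., 0.],
                  [1., 1., 0., 0., 1., 0.],
                  [0., 1., 0., 0., 1., 1.],
                  [0., 0., 1., 1., 0., 1.],
                  [0., 0., 0., 1., 1., 0.]])
cpu = torch.tensor([[8.], [4.], [6.], [2.], [5.], [7.]])

A_hat = A + torch.eye(6)                                  # add self-loops
d_inv_sqrt = torch.diag(A_hat.sum(1).rsqrt())
A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt                  # symmetric GCN normalization

class GCNActorCritic(nn.Module):
    """One GCN layer embeds substrate nodes; the actor scores nodes for hosting a
    virtual node, the critic estimates the state value."""
    def __init__(self, hidden=16):
        super().__init__()
        self.gcn = nn.Linear(1, hidden)
        self.actor = nn.Linear(hidden, 1)
        self.critic = nn.Linear(hidden, 1)
    def forward(self, a_norm, x):
        h = torch.relu(self.gcn(a_norm @ x))              # (nodes, hidden)
        return self.actor(h).squeeze(-1), self.critic(h.mean(0)).squeeze()

model = GCNActorCritic()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# one-step episodes: place a single virtual node that needs 5 CPU units
for _ in range(300):
    logits, value = model(A_norm, cpu / cpu.max())
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = 1.0 if cpu[action].item() >= 5.0 else -1.0   # accepted vs. rejected embedding
    advantage = reward - value
    loss = -dist.log_prob(action) * advantage.detach() + advantage.pow(2)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    print("placement probabilities:", torch.softmax(model(A_norm, cpu / cpu.max())[0], dim=0))
```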

Fast Detection of Disease in Livestock based on Deep Learning (축사에서 딥러닝을 이용한 질병개체 파악방안)

  • Lee, Woongsup;Kim, Seong Hwan;Ryu, Jongyeol;Ban, Tae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.5 / pp.1009-1015 / 2017
  • Recently, the wide spread of IoT (Internet of Things) based technology has enabled the accumulation of big biometric data on livestock. The availability of big data allows the application of diverse machine learning based algorithms in the field of agriculture, which significantly enhances the productivity of farms. In this paper, we propose an abnormal-livestock detection algorithm based on deep learning, one of the most prominent machine learning approaches. In our proposed scheme, the livestock are divided into two clusters, normal and abnormal (diseased), whose biometric data have different characteristics, and a deep neural network is used to classify the two clusters from the biometric data. Using our proposed scheme, normal and abnormal livestock can be identified from big biometric data even when the detailed stochastic characteristics of the data are unknown, which is beneficial for preventing epidemics such as foot-and-mouth disease (see the sketch below).
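The scheme boils down to a binary deep-network classifier over biometric records whose statistics are unknown. The sketch below shows that shape on synthetic, class-imbalanced data; the four features, cluster means, and network sizes are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# synthetic stand-ins for livestock biometric records (e.g., activity, temperature);
# the real features and their distributions are not specified in the abstract
normal = torch.randn(500, 4)
abnormal = torch.randn(100, 4) + torch.tensor([2.0, -1.0, 1.5, 0.5])
X = torch.cat([normal, abnormal])
y = torch.cat([torch.zeros(500), torch.ones(100)]).long()   # 0 = normal, 1 = abnormal

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(),
                      nn.Linear(32, 32), nn.ReLU(),
                      nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
# a weighted loss could compensate for the normal/abnormal class imbalance
loss_fn = nn.CrossEntropyLoss()

for _ in range(500):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

pred = model(X).argmax(1)
print("training accuracy:", (pred == y).float().mean().item())
```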

Extreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition

  • Ghimire, Deepak;Lee, Joonwhoan
    • Journal of Information Processing Systems / v.10 no.3 / pp.443-458 / 2014
  • An extreme learning machine (ELM) is a recently proposed learning algorithm for a single-hidden-layer feedforward neural network. In this paper we study an ensemble of ELMs built with a bagging algorithm for facial expression recognition (FER). Facial expression analysis is widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. This paper presents a method for FER based on histogram of oriented gradients (HOG) features using an ELM ensemble. First, the HOG features are extracted from the face image by dividing it into a number of small cells. A bagging algorithm is then used to construct many different bags of training data, each of which is used to train a separate ELM. To recognize the expression of an input face image, its HOG features are fed to each trained ELM and the results are combined by a majority voting scheme. The ELM ensemble using bagging significantly improves the generalization capability of the network. Two available facial expression datasets (JAFFE and CK+) were used to evaluate the performance of the proposed classification system. Even though the performance of an individual ELM was lower, the ELM ensemble using the bagging algorithm improved recognition performance significantly (a minimal sketch follows below).
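The pipeline (HOG features → bootstrap bags → one ELM per bag → majority vote) maps directly to code. The sketch below implements a basic ELM and the bagging/voting steps on synthetic stand-ins for HOG vectors; the feature dimension, hidden-layer size, and number of bags are assumptions, and real use would extract HOG features from JAFFE/CK+ images instead.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Single-hidden-layer ELM: random hidden weights, least-squares output weights."""
    def __init__(self, n_in, n_hidden, n_classes):
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.n_classes = n_classes
    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)
    def fit(self, X, y):
        T = np.eye(self.n_classes)[y]                 # one-hot targets
        self.beta = np.linalg.pinv(self._hidden(X)) @ T
        return self
    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(1)

# stand-ins for HOG feature vectors of 7 expression classes (real features would come
# from dividing each face image into cells and computing orientation histograms)
n, d, classes = 700, 128, 7
X = rng.normal(size=(n, d)) + np.repeat(rng.normal(scale=2.0, size=(classes, d)), 100, axis=0)
y = np.repeat(np.arange(classes), 100)

# bagging: bootstrap-resample the training set and train one ELM per bag
ensemble = []
for _ in range(15):
    idx = rng.integers(0, n, n)
    ensemble.append(ELM(d, 200, classes).fit(X[idx], y[idx]))

# majority voting over the ensemble's per-sample predictions
votes = np.stack([elm.predict(X) for elm in ensemble])
majority = np.array([np.bincount(col, minlength=classes).argmax() for col in votes.T])
print("ensemble training accuracy:", (majority == y).mean())
```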

Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing / v.36 no.5_3 / pp.1037-1051 / 2020
  • Accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall costs of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. The Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was used to extract atmospheric and oceanic forecast data. Two schemes with different input variables to the MTL models were examined: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved the performance by 13% and 16%, respectively, in terms of root mean squared error (RMSE) compared to scheme 1. Relative root mean squared errors (rRMSE) for most intensity levels were less than 30%, and lower mean absolute error (MAE) and RMSE were found for the lower TC intensity levels. In the test results for typhoon HALONG in 2014, scheme 1 tended to overestimate the intensity by about 20 kts at the early development stage, while scheme 2 slightly reduced the error, overestimating by about 5 kts. The MTL models reduced the computational cost by about 300% compared to single-task models, which suggests the feasibility of rapid production of TC intensity forecasts (a structural sketch of such a multi-task model follows below).
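The core design, a shared trunk with separate heads for current, +6 h, and +12 h intensity trained jointly, can be sketched independently of the satellite and CFSv2 data. The PyTorch toy below uses random tensors as stand-ins for COMS MI patches and numerical-forecast predictors (scheme 2 style inputs); every layer size and the equal task weighting are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

class MultiTaskTCModel(nn.Module):
    """Shared CNN trunk over a satellite-image patch plus a vector of numerical-forecast
    predictors; three regression heads share it: current, +6 h, and +12 h intensity."""
    def __init__(self, n_nwp=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten())
        self.shared = nn.Sequential(nn.Linear(32 * 16 + n_nwp, 64), nn.ReLU())
        self.head_now = nn.Linear(64, 1)
        self.head_6h = nn.Linear(64, 1)
        self.head_12h = nn.Linear(64, 1)
    def forward(self, img, nwp):
        z = self.shared(torch.cat([self.cnn(img), nwp], dim=1))
        return self.head_now(z), self.head_6h(z), self.head_12h(z)

# random stand-ins for COMS MI patches and CFSv2 predictors
img = torch.randn(16, 1, 64, 64)
nwp = torch.randn(16, 8)
targets = [torch.randn(16, 1) for _ in range(3)]            # placeholder intensities

model = MultiTaskTCModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    outs = model(img, nwp)
    # equally weighted multi-task loss; one forward pass serves all three tasks
    loss = sum(nn.functional.mse_loss(o, t) for o, t in zip(outs, targets))
    loss.backward()
    opt.step()

with torch.no_grad():
    rmse = [nn.functional.mse_loss(o, t).sqrt().item() for o, t in zip(model(img, nwp), targets)]
print("RMSE (now, +6h, +12h):", [round(r, 2) for r in rmse])
```

Sharing the trunk is what lets one forward pass serve estimation and both forecast lead times, which is the source of the computational saving the abstract reports.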

Neuro-Fuzzy Algorithm for Nuclear Reactor Power Control: Part I

  • Choi, Jung-In;Hah, Yung-Joon
    • Journal of the Korean Institute of Intelligent Systems / v.5 no.3 / pp.52-63 / 1995
  • A neuro-fuzzy algorithm is presented for nuclear reactor power control in a pressurized water reactor. Automatic reactor power control is complicated by the use of control rods because of the highly nonlinear dynamics of the axial power shape. Thus, manual shape control is usually employed, even with limited capability, during power maneuvers. In an attempt to achieve automatic shape control, a neuro-fuzzy approach is considered because fuzzy algorithms are good at representing various aspects of the operator's knowledge, while neural networks are efficient structures capable of learning from experience and adapting to a changing core state. In the proposed neuro-fuzzy control scheme, the rule base is formulated for a multi-input multi-output system and dynamic back-propagation is used for learning. The neuro-fuzzy power control algorithm has been tested using simulated responses of a Korean standard pressurized water reactor. The results illustrate that the proposed control algorithm would be a practical strategy for automatic nuclear reactor power control (a simplified sketch follows below).

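The abstract combines a fuzzy rule base with neural-style learning. The sketch below is a strongly simplified stand-in: a 9-rule TSK-type controller over power error and error rate whose consequents are adapted online against a toy first-order plant; the real scheme's MIMO rule base, dynamic back-propagation through the plant, and reactor model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two inputs (power error, error rate), three Gaussian fuzzy sets each -> 9 TSK rules.
centers = np.array([-1.0, 0.0, 1.0])
sigma = 0.6

def firing_strengths(e, de):
    """Product-inference firing strength of each of the 9 rules, normalized."""
    mu_e = np.exp(-((e - centers) ** 2) / (2 * sigma ** 2))
    mu_de = np.exp(-((de - centers) ** 2) / (2 * sigma ** 2))
    w = np.outer(mu_e, mu_de).ravel()
    return w / w.sum()

theta = rng.normal(scale=0.1, size=9)          # learnable rule consequents (control output)

def plant(p, u):
    """Toy first-order 'power' plant used only to exercise the learning loop."""
    return p + 0.1 * (u - 0.2 * p)

target, lr = 1.0, 0.5
p, prev_e = 0.0, 0.0
for step in range(300):
    e = target - p
    de = e - prev_e
    w = firing_strengths(e, de)
    u = w @ theta                              # defuzzified control action
    # gradient-style consequent update driven by the tracking error
    theta += lr * e * w
    p, prev_e = plant(p, u), e

print("final power level:", round(p, 3))
```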

A study on the stabilization control of an inverted pendulum system using CMAC-based decoder (CMAC 디코더를 이용한 도립 진자 시스템의 안정화 제어에 관한 연구)

  • 박현규;이현도;한창훈;안기형;최부귀
    • The Journal of Korean Institute of Communications and Information Sciences / v.23 no.9A / pp.2211-2220 / 1998
  • This paper presents an adaptive critic self-learning control system with a cerebellar model articulation controller (CMAC) based decoder integrated with an associative search element (ASE) and adaptive critic element (ACE) based scheme. The task of the system is to balance a pole hinged to a movable cart by applying forces to the cart's base. The problem is that the error feedback information is limited; this can be solved by introducing adaptive control elements. The ASE incorporates prediction information for reinforcement from a critic to produce evaluative information for the plant. The CMAC-based decoder maps a state onto a set of pathways into the ASE/ACE; these signals correspond to the current state and its possible preceding action states. The CMAC's information interpolation improves the learning speed. An inverted pendulum hardware system was also built to demonstrate the control capability of the neural network (a simplified sketch follows below).

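The ASE/ACE scheme with a CMAC-style decoder is essentially the classic actor-critic pole balancer. The sketch below uses overlapping tilings as the coarse-coded decoder and a deliberately simplified pole-only dynamics model (no cart state, no eligibility traces); the tile counts, learning rates, and failure threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coarse-coded decoder standing in for the CMAC: the (angle, angular-velocity) plane
# is covered by several offset tilings, and each active tile is one binary feature.
N_TILINGS, N_TILES = 4, 6
N_FEATURES = N_TILINGS * N_TILES * N_TILES

def decode(theta, dtheta):
    x = np.zeros(N_FEATURES)
    for k in range(N_TILINGS):
        off = k / N_TILINGS
        i = int(np.clip((theta / 0.5 + 0.5) * N_TILES + off, 0, N_TILES - 1))
        j = int(np.clip((dtheta / 4.0 + 0.5) * N_TILES + off, 0, N_TILES - 1))
        x[k * N_TILES * N_TILES + i * N_TILES + j] = 1.0
    return x

w_ase = np.zeros(N_FEATURES)     # associative search element (actor) weights
v_ace = np.zeros(N_FEATURES)     # adaptive critic element weights
alpha, beta, gamma = 0.5, 0.1, 0.95

def step(theta, dtheta, force):
    """Very simplified pole dynamics (pole only, cart ignored), dt = 0.02 s."""
    ddtheta = 9.8 * np.sin(theta) + force * np.cos(theta)
    dtheta += 0.02 * ddtheta
    theta += 0.02 * dtheta
    return theta, dtheta

for episode in range(200):
    theta, dtheta = rng.uniform(-0.05, 0.05, 2)
    x = decode(theta, dtheta)
    for t in range(500):
        # ASE: stochastic bang-bang action from the noisy weighted sum of features
        action = 1.0 if w_ase @ x + rng.normal(scale=0.1) > 0 else -1.0
        theta, dtheta = step(theta, dtheta, 10.0 * action)
        failed = abs(theta) > 0.25
        x_next = decode(theta, dtheta)
        # ACE: internal reinforcement is the TD error of the predicted evaluation
        r = -1.0 if failed else 0.0
        td = r + (0.0 if failed else gamma * (v_ace @ x_next)) - v_ace @ x
        v_ace += beta * td * x
        w_ase += alpha * td * action * x
        x = x_next
        if failed:
            break

print("steps survived in final episode:", t + 1)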

A Real-Time Sound Recognition System with a Decision Logic of Random Forest for Robots (Random Forest를 결정로직으로 활용한 로봇의 실시간 음향인식 시스템 개발)

  • Song, Ju-man;Kim, Changmin;Kim, Minook;Park, Yongjin;Lee, Seoyoung;Son, Jungkwan
    • The Journal of Korea Robotics Society / v.17 no.3 / pp.273-281 / 2022
  • In this paper, we propose a robot sound recognition system that detects various sound events. The proposed system is designed to detect various sound events in real time using a microphone on a robot. To achieve real-time performance, we use a VGG11 model, which consists of several convolutional neural network layers with a real-time normalization scheme. The VGG11 model is trained on a database augmented with 24 kinds of environments (12 reverberation times and 2 signal-to-noise ratios). Additionally, a decision logic based on the random forest algorithm is designed to generate event signals for robot applications; for specific classes of acoustic events, this logic performs better than using the outputs of the network model alone. Experimental results show the performance of the proposed sound recognition system running on a real-time device for robots (a minimal sketch follows below).
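The two-stage structure, a VGG-style CNN producing class scores followed by a random forest acting as the final decision logic, is sketched below with random spectrogram-like tensors; the layer stack is much smaller than VGG11, the five event classes are hypothetical, and the paper's real-time normalization scheme and augmentation DB are not reproduced.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

torch.manual_seed(0)

N_CLASSES = 5                                   # hypothetical sound-event classes

# small VGG-style CNN standing in for the VGG11 front end; input is a 1-channel
# spectrogram patch
cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, N_CLASSES))

# random stand-ins for augmented spectrograms (reverberation/SNR variants) and labels
specs = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, N_CLASSES, (256,))

opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for _ in range(30):
    opt.zero_grad()
    nn.functional.cross_entropy(cnn(specs), labels).backward()
    opt.step()

# decision logic: a random forest trained on the network's class probabilities
# produces the final event signal instead of a plain argmax over the outputs
cnn.eval()
with torch.no_grad():
    probs = torch.softmax(cnn(specs), dim=1).numpy()
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(probs, labels.numpy())

with torch.no_grad():
    new_probs = torch.softmax(cnn(torch.randn(4, 1, 64, 64)), dim=1).numpy()
print("event decisions:", forest.predict(new_probs))
```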