• Title/Summary/Keyword: learning function

PID Learning Method using Gradient Approach for Optimal Control (기울기법을 이용한 최적의 PID 제어 학습법)

  • Lim, Yoon-Kyu;Chung, Byeong-Mook
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.18 no.1
    • /
    • pp.180-186
    • /
    • 2001
  • PID control is widely used in industrial applications, but it is not easy to tune PID gains for optimal control. The proposed learning method tunes the PID gains using a gradient approach. We use two estimation functions in this method: an error function for tuning the PID gains, and a performance-measuring function for deciding when learning is complete. This paper shows that optimal PID controllers can be acquired when the learning method is applied to 10 systems with different natural frequencies and damping ratios. A minimal illustrative sketch of this kind of gradient-based gain tuning follows this entry.

  • PDF
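
The abstract describes descending the gradient of an error function to tune the PID gains and using a separate performance measure to stop learning. The sketch below illustrates that general idea only, not the authors' formulation: the second-order plant, the finite-difference gradient, the learning rate, and the stopping tolerance are all assumptions chosen for illustration.

```python
# Gradient-based PID gain tuning on a simulated second-order plant (a sketch,
# not the paper's method). The error function is the integrated squared
# tracking error of a unit-step response; gains are updated by gradient descent.
import numpy as np

def simulate_step_response(gains, wn=2.0, zeta=0.3, dt=0.01, T=5.0):
    """Closed-loop unit-step response cost of a second-order plant under PID."""
    kp, ki, kd = gains
    y = yd = 0.0          # plant output and its derivative
    integ = prev_e = 0.0  # integrator state and previous error
    cost = 0.0
    for _ in range(int(T / dt)):
        e = 1.0 - y                      # unit-step reference
        integ += e * dt
        deriv = (e - prev_e) / dt
        u = kp * e + ki * integ + kd * deriv
        prev_e = e
        # plant: y'' + 2*zeta*wn*y' + wn^2*y = wn^2*u  (Euler integration)
        ydd = wn**2 * u - 2 * zeta * wn * yd - wn**2 * y
        yd += ydd * dt
        y += yd * dt
        cost += e**2 * dt                # error function to be minimized
    return cost

def tune_pid(gains, lr=0.05, eps=1e-3, iters=200, tol=1e-4):
    gains = np.asarray(gains, dtype=float)
    for _ in range(iters):
        base = simulate_step_response(gains)
        if base < tol:                   # performance measure ends learning
            break
        grad = np.zeros(3)
        for i in range(3):               # finite-difference gradient estimate
            g = gains.copy()
            g[i] += eps
            grad[i] = (simulate_step_response(g) - base) / eps
        gains = np.maximum(gains - lr * grad, 0.0)   # gradient step, keep gains >= 0
    return gains

if __name__ == "__main__":
    print("tuned gains:", tune_pid([1.0, 0.1, 0.1]))
```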

Improvement of Support Vector Clustering using Evolutionary Programming and Bootstrap

  • Jun, Sung-Hae
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.8 no.3
    • /
    • pp.196-201
    • /
    • 2008
  • Statistical learning theory offers three analytical tools: support vector machines, support vector regression, and support vector clustering, for classification, regression, and clustering respectively. In general their performance is good because they are constructed by convex optimization, but the methods have some problems. One is the subjective determination of the kernel-function and regularization parameters by the researcher, and the results of the learning machines depend on the selected parameters. In this paper, we propose an efficient method for the objective determination of the parameters of support vector clustering, the clustering method of statistical learning theory. Using an evolutionary algorithm and the bootstrap method, we select the kernel-function parameters and the regularization constant objectively. To verify the improved performance of the proposed approach, we compare our method with established learning algorithms using data sets from the UCR machine learning repository and synthetic data.
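
The abstract combines an evolutionary search over the kernel and regularization parameters with bootstrap resampling to score candidates objectively. The sketch below shows only that search machinery under stated assumptions; the `fitness` function is a stand-in, not the paper's support vector clustering objective, and the mutation scale and population settings are invented for illustration.

```python
# Evolutionary-programming parameter search with a bootstrap-averaged fitness
# (sketch only; the fitness below is a placeholder for a real SVC quality score).
import numpy as np

rng = np.random.default_rng(0)

def fitness(q, C, sample):
    """Stand-in quality score for kernel width q and regularization C; a real
    implementation would run support vector clustering on `sample` and score
    the resulting cluster labels."""
    return -((np.log10(q)) ** 2 + (np.log10(C) - 1.0) ** 2)

def bootstrap_fitness(q, C, data, n_boot=10):
    """Average fitness over bootstrap resamples for a less subjective estimate."""
    scores = []
    for _ in range(n_boot):
        sample = data[rng.integers(0, len(data), size=len(data))]
        scores.append(fitness(q, C, sample))
    return float(np.mean(scores))

def evolve(data, pop_size=20, generations=30, sigma=0.3):
    """Evolutionary programming: Gaussian mutation of log-parameters, keep the best."""
    pop = rng.uniform([-2.0, -1.0], [2.0, 3.0], size=(pop_size, 2))  # log10(q), log10(C)
    for _ in range(generations):
        children = pop + rng.normal(0.0, sigma, size=pop.shape)
        union = np.vstack([pop, children])
        scores = [bootstrap_fitness(10 ** p[0], 10 ** p[1], data) for p in union]
        pop = union[np.argsort(scores)[-pop_size:]]
    best = pop[-1]
    return 10 ** best[0], 10 ** best[1]

if __name__ == "__main__":
    data = rng.normal(size=(100, 2))
    q, C = evolve(data)
    print(f"selected kernel width q={q:.3f}, regularization C={C:.3f}")
```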

Traffic-based reinforcement learning with neural network algorithm in fog computing environment

  • Jung, Tae-Won;Lee, Jong-Yong;Jung, Kye-Dong
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.12 no.1
    • /
    • pp.144-150
    • /
    • 2020
  • Reinforcement learning is a technology that can provide successful and creative solutions in many areas. Here, reinforcement learning is used to deploy containers from cloud servers to fog servers so that the system learns to maximize the reward obtained from reduced traffic. The aim is to predict traffic in the network and to optimize the traffic-based fog computing network environment for the cloud, fog, and clients. The reinforcement learning system collects network traffic data from the fog servers and IoT devices. The reinforcement learning neural network, which takes the collected traffic data as input, can be built from Long Short-Term Memory (LSTM) layers in network environments that support fog computing, in order to learn the time series data and predict optimized traffic. We describe the input and output values of the traffic-based reinforcement learning LSTM neural network, the composition of its nodes, the activation and error functions of the hidden layers, the method used to prevent overfitting, and the optimization algorithm.
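
The abstract's core building block is an LSTM that turns a window of past traffic measurements into a traffic prediction. Below is a minimal sketch of such a predictor, not the authors' network: the hidden size, the MSE loss, the Adam optimizer, and the synthetic traffic signal are assumptions.

```python
# Minimal LSTM traffic predictor (sketch): input is a window of past traffic
# values, output is the predicted traffic at the next step.
import torch
import torch.nn as nn

class TrafficLSTM(nn.Module):
    def __init__(self, n_features=1, hidden_size=32, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)    # predicted traffic at t+1

    def forward(self, x):                        # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])          # use the last time step

def train(model, windows, targets, epochs=50, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()                       # assumed error function
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(windows), targets)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # synthetic "traffic": sliding windows over a noisy periodic signal
    series = torch.sin(torch.linspace(0, 20, 500)) + 0.1 * torch.randn(500)
    win = 16
    windows = torch.stack([series[i:i + win] for i in range(len(series) - win)]).unsqueeze(-1)
    targets = series[win:].unsqueeze(-1)
    train(TrafficLSTM(), windows, targets)
```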

A Modularized Approach to the Development of the Creativity Learning Program

  • Won, Kyung-Ah
    • Archives of design research
    • /
    • v.20 no.2 s.70
    • /
    • pp.103-116
    • /
    • 2007
  • Art education in design has repeatedly stressed the importance of developing creativity. In the digital era, however, with its rapid change in both form and content, creativity development needs more flexible and systematic approaches, especially ones engaged with the cultural diversity of the digital world. This paper proposes a maximally efficient and productive creativity learning program in which the integration of expressive media and communication generates a comprehensive network of communicative information within developing digital technologies, which in turn brings forth valuable cultural content in art. The amalgamation of Won (2006)'s Prism Effect, with its three distinctive devices, and the facilitator factors, with two different facilitators (self-controlled and controlled plays), functions as a catalyst for cultural diversity in the digital forms and contents of art, and consequently produces a number of practices that can be classified and assorted for later performance. The paper thus suggests a roadmap for developing the creativity learning program, in which two categories of facilitators based on three thinking devices serve to classify four activities. In addition, selected activities are shaped into a creativity learning program by generating learning practices with a formalized instructional strategy that fits a specialized educational environment and its learners. The sample learning-practice designs show guidelines for practice and the results of the learning activity. The eventual goal of this paper is therefore to establish a creativity learning program that constitutes a highly systematized and modularized database, maximizing the efficiency and productivity of creativity development.

  • PDF

Comparison of Deep Learning Activation Functions for Performance Improvement of a 2D Shooting Game Learning Agent (2D 슈팅 게임 학습 에이전트의 성능 향상을 위한 딥러닝 활성화 함수 비교 분석)

  • Lee, Dongcheul;Park, Byungjoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.2
    • /
    • pp.135-141
    • /
    • 2019
  • Recently, there has been active research on building artificial intelligence agents that learn how to play a game using reinforcement learning. The performance of the learning can vary depending on which deep learning activation functions are used when training the agent. This paper compares activation functions when training an agent to learn a 2D shooting game with reinforcement learning. We defined performance metrics to analyze the results and plotted them over the training time. As a result, we found that ELU (Exponential Linear Unit) with parameter 1.0 achieved better rewards than the other activation functions. There was a 23.6% gap between the best and the worst activation functions.
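
The comparison in the abstract amounts to swapping the activation function of the agent's network and measuring the reward. Below is a minimal sketch of that setup, not the paper's agent: the layer sizes and the set of candidate activations are assumptions, with ELU(alpha=1.0) included as the variant the abstract reports as best.

```python
# Building the same small policy network with different activation functions
# (illustrative sketch only).
import torch
import torch.nn as nn

ACTIVATIONS = {
    "relu": nn.ReLU(),
    "leaky_relu": nn.LeakyReLU(0.01),
    "tanh": nn.Tanh(),
    "elu": nn.ELU(alpha=1.0),   # ELU(x) = x if x > 0 else alpha * (exp(x) - 1)
}

def make_policy_net(n_obs, n_actions, activation_name):
    """Small fully connected policy network with a configurable activation."""
    act = ACTIVATIONS[activation_name]
    return nn.Sequential(
        nn.Linear(n_obs, 64), act,
        nn.Linear(64, 64), act,
        nn.Linear(64, n_actions),
    )

if __name__ == "__main__":
    x = torch.randn(8, 16)                       # batch of 8 observations
    for name in ACTIVATIONS:
        logits = make_policy_net(16, 4, name)(x)
        print(name, logits.shape)
```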

Realization of the Language Instructor Using Speech Recognition (음성 인식을 이용한 영어학습기 구현)

  • Shin, Seung-Sik;Jeon, Hyung-Joon;Chung, Chan-Soo;Yu, Bong-Sun;Cho, Kyung-Hyun;Kanng, Chang-Soo
    • Journal of the Korea Computer Industry Society
    • /
    • v.5 no.9
    • /
    • pp.959-964
    • /
    • 2004
  • This study realizes a foreign-language learning device using voice recognition, programmed around three scenarios with the Conversay SDK. The device is embedded not only with basic functions such as pronunciation/phrase recognition and conversation, but also with additional functions driven by the user's voice, such as a timer, an alarm, a test function, and a learning-check function.

  • PDF

A Study on Fuzzy Wavelet Neural Network System Based on ANFIS Applying Bell Type Fuzzy Membership Function (벨형 퍼지 소속함수를 적용한 ANFIS 기반 퍼지 웨이브렛 신경망 시스템의 연구)

  • 변오성;조수형;문성용
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.39 no.4
    • /
    • pp.363-369
    • /
    • 2002
  • In this paper, we improve the approximation of arbitrary nonlinear functions by a wavelet neural network based on the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the multi-resolution analysis (MRA) of the wavelet transform. The ANFIS structure is composed of bell-type fuzzy membership functions, and the wavelet neural network is composed of a forward algorithm and the backpropagation neural network algorithm. The wavelet composition has a single scale, and the backpropagation algorithm is used for learning the ANFIS-based wavelet neural network. For 1-dimensional and 2-dimensional functions, the ANFIS-based wavelet neural network model, which learns the wavelet translation parameters and uses the bell-type membership function of ANFIS, is confirmed to improve on the conventional algorithm in the number of wavelet bases required and in convergence speed.
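
The bell-type membership function referred to in the title is the generalized bell function commonly used in ANFIS premise layers, mu(x) = 1 / (1 + |(x - c)/a|^(2b)), where a sets the width, b the steepness of the shoulders, and c the center. The short sketch below only evaluates this standard function; the parameter values are assumptions for illustration, not taken from the paper.

```python
# Generalized bell-type fuzzy membership function (ANFIS-style premise unit).
import numpy as np

def bell_membership(x, a, b, c):
    """Membership degree of x for a generalized bell set with parameters (a, b, c)."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

if __name__ == "__main__":
    x = np.linspace(-3, 3, 7)
    print(bell_membership(x, a=1.0, b=2.0, c=0.0))  # peaks at x = c, flat top, smooth shoulders
```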

Comparative Analysis on Error Back Propagation Learning and Layer By Layer Learning in Multi Layer Perceptrons (다층퍼셉트론의 오류역전파 학습과 계층별 학습의 비교 분석)

  • 곽영태
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.5
    • /
    • pp.1044-1051
    • /
    • 2003
  • This paper surveys EBP (Error Back Propagation) learning, the cross-entropy error function, and LBL (Layer By Layer) learning, which are used for training MLPs (Multi-Layer Perceptrons), and compares the merits and demerits of each method on handwritten digit recognition. Although EBP learning is slower than the other methods in the initial learning phase, its generalization capability is better. The cross-entropy error function, which makes up for the weak points of EBP learning, is faster than plain EBP learning, but its generalization capability is worse because the output-layer error signal drives the output toward the target vector linearly. LBL learning is the fastest in the initial learning phase; however, it cannot improve further after a certain point and has the lowest generalization capability. Based on these results, this paper proposes criteria for selecting a learning method when applying MLPs.
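
The speed difference between plain EBP and the cross-entropy criterion comes from the output-layer error signal: with squared error and sigmoid outputs the signal is damped by the sigmoid derivative, while with cross-entropy it is simply the linear difference between output and target. The sketch below is an illustration of that well-known contrast, not code from the paper; the pre-activations and targets are made-up values.

```python
# Output-layer error signals for a sigmoid MLP under squared error vs cross-entropy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])   # output-layer pre-activations
t = np.array([1.0, 1.0, 0.0, 0.0, 0.0])     # target vector
y = sigmoid(z)

delta_mse = (y - t) * y * (1.0 - y)          # dE/dz for E = 0.5 * (y - t)^2 (damped near 0/1)
delta_ce = y - t                             # dE/dz for cross-entropy (linear in the error)

print("squared-error delta :", np.round(delta_mse, 4))
print("cross-entropy delta :", np.round(delta_ce, 4))
```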

Design of automatic cruise control system of mobile robot using fuzzy-neural control technique (퍼지-뉴럴 제어기법에 의한 이동형 로봇의 자율주행 제어시스템 설계)

  • 한성현;김종수
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1997.10a
    • /
    • pp.1804-1807
    • /
    • 1997
  • This paper presents a new approach to the design of a cruise control system for a mobile robot with two drive wheels. The proposed control scheme uses a Gaussian function as the unit function in the fuzzy-neural network, and a backpropagation algorithm to train the fuzzy-neural network controller in the framework of the specialized learning architecture. A learning controller is proposed that consists of a fuzzy-neural network based on independent reasoning and a connection net with fixed weights that simplifies the fuzzy-neural network. The performance of the proposed controller is shown by computer simulation of trajectory tracking of the speed and azimuth of a mobile robot driven by two independent wheels. A minimal sketch of a Gaussian unit function of this kind follows this entry.

  • PDF
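
The Gaussian unit function mentioned in the abstract plays the role of a fuzzy membership unit inside the network. The sketch below shows that idea only, not the paper's controller: the centers, widths, consequent weights, and the weighted-average defuzzification step are assumptions for illustration.

```python
# Gaussian unit function used as a fuzzy membership unit, with a simple
# weighted-average output (illustrative sketch).
import numpy as np

def gaussian_unit(x, center, width):
    """Membership degree of input x in a fuzzy set with the given center and width."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def fuzzy_neural_output(x, centers, widths, weights):
    """Defuzzified output: sum(w_i * mu_i) / sum(mu_i)."""
    mu = gaussian_unit(x, centers, widths)
    return float(np.dot(weights, mu) / (np.sum(mu) + 1e-12))

if __name__ == "__main__":
    centers = np.array([-1.0, 0.0, 1.0])   # e.g., fuzzy sets over heading error
    widths = np.array([0.5, 0.5, 0.5])
    weights = np.array([-0.8, 0.0, 0.8])   # consequent weights (would be trained)
    print(fuzzy_neural_output(0.3, centers, widths, weights))
```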

Iterative Learning Control for Discrete Time Nonlinear Systems Based on an Objective Function (목적함수를 고려한 이산 비선형 시스템의 반복 학습 제어)

  • Jeong, Gu-Min;Park, Chong-Ho;Jang, Tae-Jeong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.1
    • /
    • pp.1147-1154
    • /
    • 2001
  • In this paper, a new iterative learning control (ILC) scheme for discrete-time nonlinear systems is proposed, based on an objective function consisting of the output error and the input energy. The relationships between the proposed ILC and optimal control are described. A new input update law is proposed, and its convergence is proved under certain conditions. In the proposed update law, the inputs over the whole control horizon are treated as one large vector and updated at once. Some illustrative examples are given to show the effectiveness of the proposed method. A minimal sketch of such a horizon-wide update follows this entry.

  • PDF
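
The abstract's key idea is to treat the whole input sequence as one vector and update it between trials using an objective that weighs output error against input energy. The sketch below illustrates that idea with a gradient step on J(u) = e'Qe + u'Ru, using a simple linear first-order plant as a stand-in for the nonlinear system; the plant, the weights Q and R, the step size, and the trial count are assumptions, and this is not the paper's update law.

```python
# Horizon-wide iterative learning control sketch: one gradient update of the
# full input vector per trial, minimizing tracking error plus input energy.
import numpy as np

N = 50                                    # control horizon
a, b = 0.9, 0.1                           # stand-in plant: y[t+1] = a*y[t] + b*u[t]
yd = np.sin(np.linspace(0, np.pi, N))     # desired output trajectory

# lifted system y = G u, with G lower triangular (impulse-response entries)
G = np.array([[b * a ** (i - j) if i >= j else 0.0 for j in range(N)]
              for i in range(N)])
Q, R = 1.0, 0.01                          # output-error and input-energy weights

u = np.zeros(N)
for trial in range(200):                  # one gradient step per trial
    e = yd - G @ u                        # tracking error for this trial
    grad = -2.0 * Q * (G.T @ e) + 2.0 * R * u
    u -= 0.5 * grad                       # update the whole input vector at once

e = yd - G @ u
print("final objective J(u):", float(Q * (e @ e) + R * (u @ u)))
```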