• Title/Summary/Keyword: learning function


Fuzzy Learning Control for Multivariable Unstable System (불안정한 다변수 시스템에 대한 퍼지 학습제어)

  • 임윤규;정병묵;소범식
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.5 no.7
    • /
    • pp.808-813
    • /
    • 1999
  • A fuzzy learning method to control an unstable, multivariable system is presented in this paper. Because a multivariable system generally has coupling effects between its inputs and outputs, it is difficult to find its modeling equation or parameters. If the system is also unstable, initial-condition rules are needed to stabilize it, because learning is otherwise nearly impossible. Therefore, this learning method uses the initial rules and introduces a cost function composed of the actual error and error rate of each output, without requiring a modeling equation. To minimize the cost function, we experimentally obtained the Jacobian matrix at the operating point of the system. From the Jacobian matrix, we can find the direction of convergence for the learning, and the optimal control rules are finally acquired when the fuzzy rules are updated by adjusting the proportion of the errors and error rates.

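
The cost-and-Jacobian scheme described in the abstract can be sketched roughly as follows. The weighting `alpha` and the use of -Jᵀe as the descent direction for the rule parameters are illustrative assumptions, not the paper's exact update rule.

```python
import numpy as np

def cost(errors, error_rates, alpha=0.7):
    """Cost combining output errors and error rates; alpha is a hypothetical weight."""
    e = np.asarray(errors, dtype=float)
    de = np.asarray(error_rates, dtype=float)
    return float(alpha * e @ e + (1.0 - alpha) * de @ de)

def rule_update_direction(jacobian, errors):
    """Descent direction for the fuzzy-rule parameters: -J^T e, i.e. the
    negative gradient of 0.5*||e||^2 using the experimentally estimated Jacobian."""
    J = np.asarray(jacobian, dtype=float)
    e = np.asarray(errors, dtype=float)
    return -J.T @ e
```

With the Jacobian measured once at the operating point, the direction -Jᵀe tells each rule which way to move its consequents so that all coupled outputs decrease their errors together.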

Credit-Assigned-CMAC-based Reinforcement Learning with application to the Acrobot Swing Up Control Problem (Acrobot Swing Up 제어를 위한 Credit-Assigned-CMAC 기반의 강화학습)

  • Shin, Yeon-Yong;Jang, Si-Young;Seo, Seung-Hwan;Suh, Il-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2003.11c
    • /
    • pp.621-624
    • /
    • 2003
  • For real-world applications of reinforcement learning techniques, function approximation or generalization is required to avoid the curse of dimensionality. To this end, an improved function-approximation-based reinforcement learning method is proposed to speed up convergence by using CA-CMAC (Credit-Assigned Cerebellar Model Articulation Controller). To show that the proposed CACRL (CA-CMAC-based Reinforcement Learning) performs better than CRL (CMAC-based Reinforcement Learning), computer simulation results are presented for the swing-up control problem of an acrobot.

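
A toy illustration of the credit-assignment idea — distributing the training error among the active CMAC cells according to a credit term instead of equally — might look like the sketch below. The 1-D tiling scheme and the inverse-update-count credit are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

class CACMAC:
    """Toy 1-D CMAC with credit-assigned updates (hypothetical simplification).

    Each of `n_tilings` overlapping tilings activates one cell for an input;
    the training error is distributed in proportion to each active cell's
    credit, taken here as the inverse of how often it has been updated."""

    def __init__(self, n_tilings=8, n_cells=16, lo=0.0, hi=1.0):
        self.n_tilings, self.n_cells = n_tilings, n_cells
        self.lo, self.width = lo, hi - lo
        self.w = np.zeros((n_tilings, n_cells))
        self.counts = np.zeros((n_tilings, n_cells))

    def _active(self, x):
        frac = (x - self.lo) / self.width
        for t in range(self.n_tilings):
            offset = t / (self.n_tilings * self.n_cells)  # sub-cell shift per tiling
            yield t, int((frac + offset) * self.n_cells) % self.n_cells

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active(x))

    def update(self, x, target, lr=1.0):
        cells = list(self._active(x))
        err = target - self.predict(x)
        credit = np.array([1.0 / (1.0 + self.counts[t, i]) for t, i in cells])
        credit /= credit.sum()  # distribute the error by credit, not equally
        for (t, i), c in zip(cells, credit):
            self.w[t, i] += lr * err * c
            self.counts[t, i] += 1
```

Because the credits are normalized to sum to one, a single full-rate update drives the prediction at the trained point to the target, while rarely-updated cells absorb more of the correction than heavily-trained ones.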

Power Quality Disturbances Identification Method Based on Novel Hybrid Kernel Function

  • Zhao, Liquan;Gai, Meijiao
    • Journal of Information Processing Systems
    • /
    • v.15 no.2
    • /
    • pp.422-432
    • /
    • 2019
  • A hybrid kernel function for support vector machines is proposed to improve classification performance on power quality disturbances. The kernel function of a support vector machine directly affects its classification performance, and different types of kernel functions have different generalization and learning abilities; a single kernel function cannot excel at both. To overcome this problem, we propose a hybrid kernel function composed of two single kernel functions to improve both generalization and learning ability. In simulations, we used single and multiple power quality disturbances to test the classification performance of the support vector machine algorithm with the proposed hybrid kernel function. Compared with other support vector machine algorithms, the improved algorithm performs better at classifying power quality signals with single and multiple disturbances.
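
One common way to build such a hybrid kernel is a convex combination of a local kernel (RBF) and a global kernel (polynomial); the sketch below uses that construction with illustrative parameter values, since the abstract does not state which two kernels or mixing weight the paper uses.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel: local, strong learning ability."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2, c0=1.0):
    """Polynomial kernel: global, strong generalization ability."""
    return (X @ Y.T + c0) ** degree

def hybrid_kernel(X, Y, lam=0.6, gamma=0.5, degree=2):
    """Convex combination of the two kernels; lam, gamma, degree are
    hypothetical. A convex combination of positive-definite kernels is
    itself positive definite, so the resulting Gram matrix can be fed to
    any SVM that accepts a precomputed kernel."""
    return lam * rbf(X, Y, gamma) + (1.0 - lam) * poly(X, Y, degree)
```

The mixing weight trades off the RBF's local fitting against the polynomial's smoother global behavior, which is the learning-versus-generalization balance the abstract describes.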

Improvement of existing machine learning methods of digital signal by changing the step-size (학습률(Step-Size)변화에 따른 디지털 신호의 기계학습 방법 개선)

  • Ji, Sangmin;Park, Jieun
    • Journal of Digital Convergence
    • /
    • v.18 no.2
    • /
    • pp.261-268
    • /
    • 2020
  • Machine learning is achieved by constructing a cost function from given digital signal data and optimizing that cost function. The cost function has local minima that depend on the amount of digital signal data and the structure of the neural network, and these local minima can prevent learning. Among the many ways of addressing this problem, our proposed method is to change the learning step-size. Unlike existing methods that use a fixed constant learning rate (step-size), the use of a multivariate function as the cost function avoids unnecessary machine learning and finds the best path to the minimum value. Numerical experiments show that the proposed method improves performance by about 3% (88.8% → 91.5%) over the existing methods.
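
The contrast with a fixed step-size can be sketched with a generic adaptive schedule: grow the step after a successful move, shrink it after a failed one. This grow/shrink heuristic is an assumption for illustration, not the paper's specific step-size rule.

```python
def adaptive_descent(f, grad, x0, lr0=0.5, shrink=0.5, grow=1.1, steps=200):
    """Gradient descent on a 1-D cost f whose step-size adapts each iteration
    instead of staying a fixed constant."""
    x, lr = float(x0), lr0
    fx = f(x)
    for _ in range(steps):
        cand = x - lr * grad(x)
        fc = f(cand)
        if fc < fx:
            x, fx, lr = cand, fc, lr * grow   # accept the move, grow the step
        else:
            lr *= shrink                       # reject the move, shrink the step
    return x
```

A fixed step-size either overshoots narrow minima or crawls through flat regions; letting the step-size respond to the cost value avoids both failure modes.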

An Extended Function Point Model for Estimating the Implementing Cost of Machine Learning Applications (머신러닝 애플리케이션 구현 비용 평가를 위한 확장형 기능 포인트 모델)

  • Seokjin Im
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.2
    • /
    • pp.475-481
    • /
    • 2023
  • Software, especially machine learning applications, affects our way of life tremendously. Accordingly, the importance of cost models for software is increasing rapidly. As cost models, LOC (Lines of Code) and M/M (Man-Month) estimate the quantitative aspects of software. In contrast, FP (Function Point) focuses on estimating the functional characteristics of software, and is efficient in that it estimates qualitative characteristics. FP, however, is limited for evaluating machine learning software because it does not evaluate the critical factors of such software. In this paper, we propose an extended function point (ExFP) model that extends FP to adopt hyperparameters and the complexity of their optimization as characteristics of machine learning applications. Through an evaluation reflecting the characteristics of machine learning applications, we show the effectiveness of the proposed ExFP.
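
The extension idea can be sketched as an extra additive term on top of the standard unadjusted-FP weighted sum; the term's form and all weights below are illustrative assumptions, not the paper's calibrated ExFP values.

```python
def function_point(counts, weights):
    """Unadjusted FP: weighted sum of function-type counts (standard FPA idea)."""
    return sum(counts[k] * weights[k] for k in counts)

def extended_function_point(counts, weights, n_hyperparams, opt_complexity):
    """Hypothetical ExFP sketch: the base FP plus a term for the number of
    hyperparameters scaled by the complexity of optimizing them."""
    return function_point(counts, weights) + n_hyperparams * opt_complexity
```

The point of the extension is that two applications with identical conventional function counts can still differ greatly in implementation cost once hyperparameter search is accounted for.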

Improvement of Learning Capability with Combination of the Generalized Cascade Correlation and Generalized Recurrent Cascade Correlation Algorithms (일반화된 캐스케이드 코릴레이션 알고리즘과 일반화된 순환 캐스케이드 코릴레이션 알고리즘의 결합을 통한 학습 능력 향상)

  • Lee, Sang-Wha;Song, Hae-Sang
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.2
    • /
    • pp.97-105
    • /
    • 2009
  • This paper presents a combination of the generalized Cascade Correlation and generalized Recurrent Cascade Correlation learning algorithms. The new network can grow in the vertical or horizontal direction, with or without recurrent units, for quick solution of pattern classification problems. The learning capability of the proposed algorithm was tested with the sigmoidal and hyperbolic tangent activation functions on the contact lens and balance scale standard benchmark problems, and the results are compared with those obtained with the Cascade Correlation and Recurrent Cascade Correlation algorithms. Through learning, the new network was composed with a minimal number of created hidden units and showed quick learning speed. Consequently, it improves learning capability.
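
Common to all Cascade Correlation variants is the candidate-unit score that decides which hidden unit to install: the magnitude of the covariance between a candidate's output and the network's residual errors. A minimal sketch of that score, independent of the vertical/horizontal or recurrent variations, could look like this:

```python
import numpy as np

def candidate_score(candidate_out, residuals):
    """Cascade-Correlation candidate score: |covariance| between a candidate
    unit's output (one value per pattern) and the residual errors
    (patterns x outputs), summed over output units. The candidate with the
    highest score is frozen and installed as a new hidden unit."""
    v = candidate_out - candidate_out.mean()
    E = residuals - residuals.mean(axis=0)
    return float(np.abs(v @ E).sum())
```

A candidate whose activity tracks the remaining error earns a high score, so each installed unit targets exactly what the existing network still gets wrong.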

An Improvement of Performance for Cascade Correlation Learning Algorithm using a Cosine Modulated Gaussian Activation Function (코사인 모듈화 된 가우스 활성화 함수를 사용한 캐스케이드 코릴레이션 학습 알고리즘의 성능 향상)

  • Lee, Sang-Wha;Song, Hae-Sang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.3
    • /
    • pp.107-115
    • /
    • 2006
  • This paper presents a new class of activation functions for the Cascade Correlation learning algorithm, herein called the CosGauss function. This function is a cosine-modulated Gaussian function. In contrast to the sigmoidal, hyperbolic tangent, and Gaussian functions, more ridges can be obtained with the CosGauss function. Because of these ridges, it converges quickly and improves pattern recognition speed, and consequently improves learning capability. The function was tested with a Cascade Correlation network on the two-spirals problem, and the results are compared with those obtained with other activation functions.

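
A cosine-modulated Gaussian is straightforward to write down; the particular parameterization and constants below are assumptions, since the abstract does not give the exact form used in the paper.

```python
import numpy as np

def cos_gauss(x, beta=3.0, sigma=1.0):
    """Cosine-modulated Gaussian activation: cos(beta*x) * exp(-x^2 / sigma^2).

    beta controls how many ridges (oscillations) fit inside the Gaussian
    envelope; beta = 0 recovers a plain Gaussian bump."""
    x = np.asarray(x, dtype=float)
    return np.cos(beta * x) * np.exp(-(x ** 2) / sigma ** 2)
```

The extra ridges let a single hidden unit carve several decision boundaries at once, which is why such units can cover problems like the two spirals with fewer installed units.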

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality : an Approach of Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal
    • /
    • v.15 no.2
    • /
    • pp.35-51
    • /
    • 1993
  • This paper presents function approximation based on nonparametric estimation. As an estimation model for function approximation, a three-layered network composed of input, hidden, and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, which have the characteristics of bounds and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, we have to estimate the necessary number of kernel functions as well as the parameters associated with them. For this purpose, a new method of parameter estimation is considered, in which a linear learning rule is applied between the hidden and output layers while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear learning rule updates the output weights between the hidden and output layers in the Linear Minimization of Mean Square Error (LMMSE) sense in the space of kernel functions, while the nonlinear learning rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to the parameters (especially the shape) of the kernel functions. This approach to parameter adaptation provides near-optimal values of the parameters associated with the kernel functions in the sense of minimizing the mean square error. As a result, the suggested nonparametric estimation provides an efficient way of function approximation from the viewpoint of the number of kernel functions as well as learning speed.

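
The linear half of this two-rule scheme — LMMSE output weights for fixed kernels — reduces to a least-squares solve on the kernel design matrix. The sketch below shows that step for 1-D Gaussian kernels; the kernel form, centers, and widths are illustrative, and the nonlinear gradient step on the kernel shapes is omitted.

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Design matrix: one bounded, local Gaussian kernel per column."""
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / widths[None, :] ** 2)

def fit_output_weights(X, y, centers, widths):
    """Linear (LMMSE) learning rule: least-squares output weights for the
    current, fixed kernel functions."""
    Phi = rbf_design(X, centers, widths)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(X, centers, widths, w):
    return rbf_design(X, centers, widths) @ w
```

In the full method this linear solve would alternate with gradient updates to the centers and widths, so each phase optimizes the parameters the other phase holds fixed.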

Weight Adjustment Scheme Based on Hop Count in Q-routing for Software Defined Networks-enabled Wireless Sensor Networks

  • Godfrey, Daniel;Jang, Jinsoo;Kim, Ki-Il
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.1
    • /
    • pp.22-30
    • /
    • 2022
  • The reinforcement learning algorithm has proven its potential for solving sequential decision-making problems under uncertainty, such as finding paths to route data packets in wireless sensor networks. With reinforcement learning, computing the optimum path requires careful definition of the so-called reward function, a linear function that aggregates multiple objective functions into a single objective to compute a numerical value (reward) to be maximized. In a typical linear reward function, the multiple objectives to be optimized are integrated as a weighted sum with fixed weighting factors for all learning agents. This study proposes a reinforcement learning-based routing protocol for wireless sensor networks in which different learning agents prioritize different objectives by assigning different weighting factors to the aggregated objectives of the reward function. We assign appropriate weighting factors to the objectives in the reward function of a sensor node according to its hop-count distance to the sink node. We expect this approach to enhance the effectiveness of multi-objective reinforcement learning for wireless sensor networks, with a balanced trade-off among competing parameters. Furthermore, we propose an SDN (Software Defined Networks) architecture with multiple controllers for constant network monitoring, allowing learning agents to adapt to the dynamics of network conditions. Simulation results show that the proposed scheme enhances the performance of a wireless sensor network under varied conditions, such as node density and traffic intensity, with a good trade-off among competing performance metrics.
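
A minimal sketch of a hop-count-dependent weighted-sum reward is shown below. The two objectives (energy and delay) and the linear weight schedule are assumptions for illustration; the paper's actual objective set and weighting rule may differ.

```python
def reward(metrics, hop_count, max_hops):
    """Weighted-sum reward whose weights depend on the node's hop distance
    to the sink, instead of one fixed weight vector for every agent.

    Hypothetical schedule: far nodes (many hops) weight energy savings more
    heavily; near nodes weight delay more heavily."""
    w_energy = hop_count / max_hops
    w_delay = 1.0 - w_energy
    return w_energy * metrics["energy"] + w_delay * metrics["delay"]
```

Because each node derives its own weights from its hop count, nodes near the sink (which relay the most traffic) optimize latency while edge nodes conserve battery, rather than all agents chasing one compromise objective.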

One Dimensional Optimization using Learning Network

  • Chung, Taishn;Bien, Zeungnam
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1995.10b
    • /
    • pp.33-39
    • /
    • 1995
  • The one-dimensional optimization problem is considered. We propose a method to find the global minimum of a one-dimensional function with no gradient information, using only a finite number of input-output samples. We construct a learning network that has good learning capability and whose global maximum (or minimum) can be calculated with a simple computation. By training this network to approximate the given function with minimal samples, we can obtain the global minimum of the function. We verify this method using some typical examples.

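
The surrogate-based idea — fit a cheap, smooth model to the samples, then minimize the model instead of the unknown function — can be sketched as follows. A polynomial fit and a dense grid search stand in for the paper's learning network and its closed-form minimum; both substitutions are assumptions.

```python
import numpy as np

def global_min_from_samples(xs, ys, degree=4, grid=1000):
    """Fit a smooth surrogate (here a polynomial, standing in for the paper's
    learning network) to input-output samples, then return the surrogate's
    minimizer on a dense grid. No gradients of the true function are used."""
    coeffs = np.polyfit(xs, ys, degree)
    g = np.linspace(min(xs), max(xs), grid)
    vals = np.polyval(coeffs, g)
    return float(g[int(np.argmin(vals))])
```

The accuracy of the recovered minimum depends entirely on how well the surrogate interpolates between the finite samples, which is why the choice of model with guaranteed, easily-computed extrema is the core of the method.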