• Title/Summary/Keyword: learning function

The Development of IDMLP Neural Network for the Chip Implementation and its Application to Speech Recognition (Chip 구현을 위한 IDMLP 신경 회로망의 개발과 음성인식에 대한 응용)

  • 김신진;박정운;정호선
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.5
    • /
    • pp.394-403
    • /
    • 1991
  • This paper describes the development of the input driven multilayer perceptron (IDMLP) neural network and its application to Korean spoken digit recognition. Both the IDMLP network used here and its learning algorithm are newly proposed. In this model, the weights take integer values and the transfer function of each neuron is a hard-limit function. Depending on how difficult the input data are to classify, the trained network has one or more layers. We tested the recognition of binarized data for the spoken digits 0 to 9 with the proposed network; the recognition rates are 100% for the learning data and 96% for the test data. (A minimal code sketch follows this entry.)

  • PDF
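
The entry above specifies integer weights and a hard-limit transfer function for the IDMLP neurons but does not reproduce the learning rule, so the following is only a minimal sketch of such a layer; the function names, the binary 0/1 convention, and the threshold handling are assumptions, not the paper's design.

```python
import numpy as np

def hard_limit(net):
    """Hard-limit (step) activation: 1 where the net input is non-negative, else 0."""
    return (net >= 0).astype(int)

def idmlp_layer(x, weights, thresholds):
    """One layer of hard-limit neurons restricted to integer arithmetic.

    x          : binary input vector (0/1)
    weights    : integer weight matrix, shape (n_neurons, n_inputs)
    thresholds : integer threshold per neuron
    """
    net = weights @ x - thresholds      # integer-only computation
    return hard_limit(net)

# toy usage: two neurons separating 2-bit input patterns
x = np.array([1, 0])
W = np.array([[2, -1],
              [-1, 2]])
theta = np.array([1, 1])
print(idmlp_layer(x, W, theta))         # -> [1 0]
```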

A neural network with adaptive learning algorithm of curvature smoothing for time-series prediction (시계열 예측을 위한 1, 2차 미분 감소 기능의 적응 학습 알고리즘을 갖는 신경회로망)

  • 정수영;이민호;이수영
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.6
    • /
    • pp.71-78
    • /
    • 1997
  • In this paper, a new neural network training algorithm is devised for function approximators with good generalization characteristics and is tested on the time-series prediction problem using the Santa Fe competition data sets. To enhance the generalization ability, a constraint term on the hidden neuron activations is added to the conventional output error, which gives curvature-smoothing characteristics to multi-layer neural networks. A hybrid learning algorithm combining error back-propagation and Hebbian learning with a weight-decay constraint follows naturally from applying steepest descent to the proposed cost function, without much increase in computational requirements. (A rough sketch of such a cost follows this entry.)

  • PDF
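
The abstract above adds a penalty on hidden-unit activations to the usual squared output error, so that steepest descent yields back-propagation plus a Hebbian-like, weight-decaying term. The sketch below shows one such regularized cost and a single gradient step; the quadratic penalty form, the lambda value, and the two-layer tanh architecture are assumptions, not the paper's exact formulation.

```python
import numpy as np

def mlp_forward(x, W1, W2):
    """Two-layer MLP: tanh hidden units, linear output."""
    h = np.tanh(W1 @ x)          # hidden activations
    y = W2 @ h                   # network output
    return h, y

def regularized_cost(x, t, W1, W2, lam=1e-3):
    """Squared output error plus a quadratic penalty on hidden activations.

    Penalizing hidden activations tends to smooth the learned mapping;
    the paper's exact constraint term may take a different form.
    """
    h, y = mlp_forward(x, W1, W2)
    return 0.5 * np.sum((y - t) ** 2) + 0.5 * lam * np.sum(h ** 2)

def steepest_descent_step(x, t, W1, W2, lam=1e-3, eta=0.01):
    """One steepest-descent step on the regularized cost."""
    h, y = mlp_forward(x, W1, W2)
    e = y - t                                     # output error
    dW2 = np.outer(e, h)                          # standard delta rule for the output layer
    # back-propagated error plus a Hebbian-like term contributed by the penalty
    delta_h = (W2.T @ e + lam * h) * (1 - h ** 2)
    dW1 = np.outer(delta_h, x)
    return W1 - eta * dW1, W2 - eta * dW2
```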

Vector Quantization of Image Signal using Learning Count Control Neural Networks (학습 횟수 조절 신경 회로망을 이용한 영상 신호의 벡터 양자화)

  • 유대현;남기곤;윤태훈;김재창
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.34C no.1
    • /
    • pp.42-50
    • /
    • 1997
  • Vector quantization has been shown to be useful for compressing data in a wide range of applications such as image processing, speech processing, and weather satellite imagery. This paper proposes an efficient neural network learning algorithm, called the learning count control algorithm, based on the frequency-sensitive learning algorithm. With this algorithm, more codewords can be assigned to the regions to which the human visual system is sensitive, and the quality of the reconstructed image can be improved. We use a human visual system model that is a cascade of a nonlinear intensity mapping function and a modulation transfer function with a bandpass characteristic. (A minimal code sketch follows this entry.)

  • PDF
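
The learning count control algorithm summarized above builds on frequency-sensitive competitive learning, in which each codeword's update count influences the competition. The sketch below implements only the generic frequency-sensitive scheme; the paper's specific count-control rule and its human-visual-system weighting are not modeled.

```python
import numpy as np

def frequency_sensitive_vq(vectors, n_codewords, epochs=10, eta=0.1, seed=0):
    """Train a VQ codebook with frequency-sensitive competitive learning.

    Each codeword keeps a win count; distances are scaled by that count,
    so rarely chosen codewords stay competitive. The paper's learning
    count control adds a further cap/weighting that is not modeled here.
    """
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codewords, replace=False)].copy()
    counts = np.ones(n_codewords)

    for _ in range(epochs):
        for x in vectors:
            # frequency-scaled distortion: win count times squared distance
            d = counts * np.sum((codebook - x) ** 2, axis=1)
            w = np.argmin(d)                        # winning codeword
            codebook[w] += eta * (x - codebook[w])  # move winner toward the input
            counts[w] += 1
    return codebook

# toy usage on random 4-dimensional "image blocks"
blocks = np.random.default_rng(1).random((200, 4))
cb = frequency_sensitive_vq(blocks, n_codewords=8)
```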

Complex Neural Classifiers for Power Quality Data Mining

  • Vidhya, S.;Kamaraj, V.
    • Journal of Electrical Engineering and Technology
    • /
    • v.13 no.4
    • /
    • pp.1715-1723
    • /
    • 2018
  • This work investigates the performance of the fully complex-valued radial basis function network (FC-RBF) and the complex extreme learning machine (CELM) for the classification of power quality disturbances. The S-Transform is used to extract features relating to single and combined power quality disturbances. The performance of the classifiers is compared with their real-valued counterparts, namely the extreme learning machine (ELM) and the support vector machine (SVM), in terms of convergence and classification ability. The results signify the suitability of complex-valued classifiers for power quality disturbance classification.
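
For orientation, the extreme learning machine family compared above trains a random fixed hidden layer and solves the output weights in closed form. A minimal real-valued ELM is sketched below; the paper's FC-RBF and CELM classifiers operate on complex-valued S-Transform features and are not reproduced here.

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Train a basic real-valued extreme learning machine.

    X : (n_samples, n_features) inputs
    T : (n_samples, n_classes) one-hot targets
    Hidden weights are random and fixed; output weights are the
    least-squares solution pinv(H) @ T.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T            # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Return the class index with the largest output for each sample."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```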

Competitive Learning Neural Network with Binary Reinforcement and Constant Adaptation Gain (일정적응 이득과 이진 강화함수를 갖는 경쟁 학습 신경회로망)

  • Seok, Jin-Wuk;Cho, Seong-Won;Choi, Gyung-Sam
    • Proceedings of the KIEE Conference
    • /
    • 1994.11a
    • /
    • pp.326-328
    • /
    • 1994
  • A modified version of Kohonen's simple competitive learning (SCL) algorithm, which has a binary reinforcement function and a constant adaptation gain, is proposed. In contrast to the time-varying adaptation gain of the original Kohonen SCL algorithm, the proposed algorithm uses a constant adaptation gain and adds a binary reinforcement function to compensate for the lowered learning ability of SCL caused by the constant gain. Since the proposed algorithm does not involve complicated multiplications, its digital hardware implementation is much easier than that of the original SCL. (A sketch of one reading of this update follows the entry.)

  • PDF
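
One plausible reading of the binary reinforcement function described above is a sign-based weight update with a constant gain, which removes the multiplication by the error magnitude and so simplifies digital hardware. The sketch below follows that reading; the paper's exact reinforcement function may differ.

```python
import numpy as np

def scl_binary(vectors, n_units, epochs=10, gain=0.05, seed=0):
    """Competitive learning with a constant gain and a binary (sign) update.

    vectors : float array of shape (n_samples, dim)
    Winner selection is ordinary nearest-neighbour competition; the winner
    moves by a fixed step in the sign direction of (x - w), so the update
    needs no multiplication by the error magnitude.
    """
    rng = np.random.default_rng(seed)
    W = vectors[rng.choice(len(vectors), n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in vectors:
            winner = np.argmin(np.sum((W - x) ** 2, axis=1))
            W[winner] += gain * np.sign(x - W[winner])   # binary reinforcement step
    return W
```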

Web Service Platform for Optimal Quantization of CNN Models (CNN 모델의 최적 양자화를 위한 웹 서비스 플랫폼)

  • Roh, Jaewon;Lim, Chaemin;Cho, Sang-Young
    • Journal of the Semiconductor & Display Technology
    • /
    • v.20 no.4
    • /
    • pp.151-156
    • /
    • 2021
  • Low-end IoT devices do not have enough computation and memory resources for DNN learning and inference. Integer quantization of real-valued neural network models can reduce model size, hardware computational burden, and power consumption. This paper describes the design and implementation of a web-based quantization platform for CNN deep learning accelerator chips. In the web service platform, we implemented visualization of the model through a convenient UI, analysis of each inference step, and detailed editing of the model. Additionally, a data augmentation function and a management function for the files that store models and intermediate inference results are provided. The implemented functions were verified using three YOLO models.
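
As a point of reference for the quantization step the platform automates, a minimal symmetric per-tensor int8 post-training quantization of a float weight tensor looks like the sketch below; this is a generic illustration, not the platform's actual pipeline.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization of a float tensor to int8."""
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor, e.g. for checking accuracy loss."""
    return q.astype(np.float32) * scale

# toy usage: quantize a random convolution weight tensor
w = np.random.default_rng(0).standard_normal((16, 3, 3, 3)).astype(np.float32)
q, s = quantize_int8(w)
print(float(np.max(np.abs(w - dequantize(q, s)))))   # error is bounded by about scale / 2
```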

Implementation of a Sightseeing Multi-function Controller Using Neural Networks

  • Jae-Kyung, Lee;Jae-Hong, Yim
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.1
    • /
    • pp.45-53
    • /
    • 2023
  • This study constructs the various scenarios required for landscape lighting, and a large-capacity general-purpose multi-function controller is designed and implemented to validate the operation of those scenarios. The multi-function controller consists of a drive and control unit, which controls the scenarios and colors of the LED modules, and an LED display unit. In addition, we conduct a computer simulation of a neuro-control system designed to produce the most appropriate color for given input values of temperature, illuminance, and humidity. Examining the output colors produced by the neuro-controller shows that, unlike existing crisp logic, it does not require storing a large number of data inputs, because the desired values are obtained by learning from training data.

Multi-Dimensional Reinforcement Learning Using a Vector Q-Net - Application to Mobile Robots

  • Kiguchi, Kazuo;Nanayakkara, Thrishantha;Watanabe, Keigo;Fukuda, Toshio
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.142-148
    • /
    • 2003
  • Reinforcement learning is considered as an important tool for robotic learning in unknown/uncertain environments. In this paper, we propose an evaluation function expressed in a vector form to realize multi-dimensional reinforcement learning. The novel feature of the proposed method is that learning one behavior induces parallel learning of other behaviors though the objectives of each behavior are different. In brief, all behaviors watch other behaviors from a critical point of view. Therefore, in the proposed method, there is cross-criticism and parallel learning that make the multi-dimensional learning process more efficient. By applying the proposed learning method, we carried out multi-dimensional evaluation (reward) and multi-dimensional learning simultaneously in one trial. A special neural network (Q-net), in which the weights and the output are represented by vectors, is proposed to realize a critic network for Q-learning. The proposed learning method is applied for behavior planning of mobile robots.
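
The vector Q-net above keeps one value component per behaviour and updates all of them in parallel from a vector reward. A tabular (non-neural) sketch of such a multi-dimensional Q-learning update is shown below; the preference-weighted action selection is an assumption used only to make the example concrete.

```python
import numpy as np

def vector_q_update(Q, s, a, r_vec, s_next, alpha=0.1, gamma=0.9, pref=None):
    """One tabular multi-dimensional Q-learning update.

    Q     : array of shape (n_states, n_actions, n_objectives)
    r_vec : vector reward, one component per behaviour/objective
    pref  : weights used only to pick the greedy next action (an assumption)
    """
    if pref is None:
        pref = np.ones(Q.shape[2]) / Q.shape[2]
    a_next = np.argmax(Q[s_next] @ pref)   # greedy action on a weighted sum of objectives
    # every objective's value is updated in parallel from its own reward component
    Q[s, a] += alpha * (r_vec + gamma * Q[s_next, a_next] - Q[s, a])
    return Q

# toy usage: 5 states, 3 actions, 2 behaviours
Q = np.zeros((5, 3, 2))
Q = vector_q_update(Q, s=0, a=1, r_vec=np.array([1.0, -0.5]), s_next=2)
print(Q[0, 1])                              # -> [ 0.1  -0.05]
```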

A Study on the Solution of Inverse Kinematic of Manipulator using Self-Organizing Neural Network and Fuzzy Compensator (퍼지 보상기와 자기구성 신경회로망을 이용한 매니퓰레이터의 역기구학 해에 관한 연구)

  • 김동희;이수흠;신위재
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.2 no.3
    • /
    • pp.79-85
    • /
    • 2001
  • We obtain a solution of the inverse kinematics of a 3-axis manipulator by using a self-organizing neural network (SONN) with a fuzzy compensator. The self-organizing neural network, which uses the Gaussian potential function as its activation function, starts learning with a single hidden layer. The network finds the optimal number of nodes by adding hidden-layer nodes during learning, and the fuzzy compensator provides the optimal learning rate for the neural network. The results confirm that the learning rate is improved and that the network converges rapidly to the steady state. (A sketch of a node-growing RBF network follows this entry.)

  • PDF
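
The self-organizing network above uses Gaussian potential functions as hidden activations and grows hidden nodes during learning. The sketch below uses one common node-growing heuristic (add a Gaussian centre wherever the current error is large) as a stand-in for, not a reproduction of, the paper's rule; the fuzzy compensator that adapts the learning rate is omitted.

```python
import numpy as np

class GrowingRBF:
    """Gaussian-RBF network that adds a hidden node when the error is large."""

    def __init__(self, width=0.5, grow_threshold=0.3, eta=0.05):
        self.centers, self.weights = [], []
        self.width, self.grow_threshold, self.eta = width, grow_threshold, eta

    def _phi(self, x):
        """Gaussian potential of the input around each stored centre."""
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

    def train_step(self, x, t):
        err = t - self.predict(x)
        if abs(err) > self.grow_threshold:
            # grow: new Gaussian centred at the input, output weight set to the residual
            self.centers.append(np.asarray(x, dtype=float))
            self.weights.append(err)
        else:
            # otherwise adjust the existing output weights by gradient descent
            phi = self._phi(x)
            for i in range(len(self.weights)):
                self.weights[i] += self.eta * err * phi[i]
```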

Evolutionary Learning Algorithm for Projection Neural Networks (투영신경회로망의 훈련을 위한 진화학습기법)

  • 황민웅;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.7 no.4
    • /
    • pp.74-81
    • /
    • 1997
  • This paper proposes an evolutionary learning algorithm to train projection neural networks (PNNs), which have a special type of hidden node that can activate radial basis functions as well as sigmoid functions. The proposed algorithm not only trains the parameters and the connection weights but also optimizes the network structure. Through the structure optimization, the number of hidden nodes necessary to represent a given target function is determined, and the role of each hidden node is decided: whether it activates a radial basis function or a sigmoid function. To apply the algorithm, the PNN is realized by a self-organizing genotype representation with a linked-list data structure. Simulations show that the algorithm can build a PNN with fewer hidden nodes than the existing learning algorithm using error back-propagation (EBP) and a network-growing strategy. (A toy sketch follows this entry.)

  • PDF
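
The algorithm above evolves both the weights and the structure of a network whose hidden nodes can act as either sigmoid or radial-basis units. The toy genetic loop below only evolves per-node parameters and the sigmoid/RBF choice of each node; the genome layout, mutation scheme, and fitness function are simplifications and do not follow the paper's self-organizing linked-list genotype.

```python
import numpy as np

rng = np.random.default_rng(0)

def hidden_out(x, kind, w, c):
    """One hidden node: sigmoid of w.x (kind 0) or Gaussian around centre c (kind 1)."""
    return 1.0 / (1.0 + np.exp(-w @ x)) if kind == 0 else np.exp(-np.sum((x - c) ** 2))

def net_out(x, genome):
    """Network output: weighted sum over hidden nodes (kind, w, c, amplitude)."""
    return sum(a * hidden_out(x, k, w, c) for k, w, c, a in genome)

def fitness(genome, X, T):
    """Negative mean squared error over the training set."""
    return -np.mean([(net_out(x, genome) - t) ** 2 for x, t in zip(X, T)])

def mutate(genome, sigma=0.1):
    """Perturb one node's parameters and occasionally flip its activation type."""
    i = rng.integers(len(genome))
    k, w, c, a = genome[i]
    child = list(genome)
    child[i] = (k if rng.random() > 0.1 else 1 - k,
                w + sigma * rng.standard_normal(w.shape),
                c + sigma * rng.standard_normal(c.shape),
                a + sigma * rng.standard_normal())
    return child

def evolve(X, T, n_hidden=3, dim=1, pop=20, gens=50):
    """Keep the better half of the population and refill it with mutated copies."""
    population = [[(int(rng.integers(2)), rng.standard_normal(dim),
                    rng.standard_normal(dim), float(rng.standard_normal()))
                   for _ in range(n_hidden)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: fitness(g, X, T), reverse=True)
        population = population[:pop // 2] + [mutate(g) for g in population[:pop // 2]]
    return population[0]

# toy usage: approximate sin(3x) on [-1, 1]
X = [np.array([t]) for t in np.linspace(-1.0, 1.0, 20)]
T = [float(np.sin(3.0 * x[0])) for x in X]
best = evolve(X, T)
```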