• Title/Summary/Keyword: the number of units in a hidden layer


Learning method of a Neural Network using Genetic Algorithm for 3 Bit Parity Discrimination (패리티 판별을 위한 유전자 알고리즘을 사용한 신경회로망의 학습법)

  • Choi, Jae-Seung;Kim, Chung-Hwa
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.2 s.314 / pp.11-18 / 2007
  • The back-propagation algorithm, which is based on a gradient-descent method, has been widely used for training neural networks. However, this algorithm has some problems, such as becoming trapped in a local minimum depending on the initial values and having to set the number of units in the hidden layer when training the network. Accordingly, to solve these problems, this paper proposes a training method for the neural network that uses a genetic algorithm. An improved genetic algorithm with new crossover and mutation methods is proposed to discriminate 3-bit parity. Experiments on the generation gap, the number of units in the hidden layer, and the number of individuals confirm that the proposed system is effective in terms of training speed.
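
As an illustration of the idea in the abstract above, the following is a minimal sketch (not the authors' implementation) of training a small feed-forward network on 3-bit parity with a genetic algorithm instead of gradient descent; the hidden-layer size, population size, crossover, and mutation settings are all assumed for the example.

```python
# Hedged sketch: GA training of a 3-4-1 network for 3-bit parity.
import numpy as np

rng = np.random.default_rng(0)

# All 8 input patterns and their parity targets (odd parity = 1).
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
y = X.sum(axis=1) % 2

N_HIDDEN = 4                                    # assumed hidden-layer size
N_W = 3 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1    # weights + biases, flattened

def forward(w, X):
    """Unpack a flat chromosome into network weights and run the net."""
    W1 = w[:3 * N_HIDDEN].reshape(3, N_HIDDEN)
    b1 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    W2 = w[4 * N_HIDDEN:5 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)   # higher is better

pop = rng.normal(0.0, 1.0, size=(50, N_W))      # 50 individuals (assumed)
for gen in range(500):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:25]]        # truncation selection
    children = []
    for _ in range(25):
        pa, pb = parents[rng.integers(25, size=2)]
        cut = rng.integers(1, N_W)                       # one-point crossover
        children.append(np.concatenate([pa[:cut], pb[cut:]]))
    children = np.array(children)
    # mutate roughly 10% of the genes with small Gaussian noise
    children += rng.normal(0.0, 0.1, children.shape) * (rng.random(children.shape) < 0.1)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print((forward(best, X) > 0.5).astype(int), y.astype(int))
```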

The Influence of Weight Adjusting Method and the Number of Hidden Layer's Nodes on Neural Network's Performance (인공 신경망의 학습에 있어 가중치 변화방법과 은닉층의 노드수가 예측정확성에 미치는 영향)

  • 김진백;김유일
    • The Journal of Information Systems / v.9 no.1 / pp.27-44 / 2000
  • The structure of a neural network is represented by a weighted directed graph, with nodes representing units and links representing connections. Each link is assigned a numerical value representing the weight of the connection. In the learning process, the values of the weights are adjusted according to the errors. The experimental results show that the interval at which weights are adjusted, that is, the epoch size, influences a neural network's performance: once the epoch size grows beyond a certain size, performance decreases drastically. The number of nodes in the hidden layer also influences performance: performance decreases as the hidden layer gains more nodes and then increases again at some number of hidden nodes. Therefore, when implementing neural networks, the epoch size and the number of hidden layer nodes should be decided by systematic methods, not by empirical or heuristic ones.
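
A hedged sketch of the paper's recommendation that the epoch size (weight-update interval) and the hidden node count be chosen systematically: the grid search below, using scikit-learn's MLPRegressor on toy data, is only illustrative and is not the authors' experimental setup.

```python
# Systematic (rather than heuristic) choice of update interval and hidden nodes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=400)  # toy target

results = {}
for n_hidden in (2, 4, 8, 16, 32):
    for batch in (8, 32, 128, 320):      # 320 = full-batch within each CV training fold
        model = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                             batch_size=batch, max_iter=2000, random_state=0)
        mse = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        results[(n_hidden, batch)] = mse

best = min(results, key=results.get)
print("best (hidden nodes, update interval):", best, "MSE:", round(results[best], 4))
```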


Nonlinear Compensation Using Artificial Neural Network in Radio-over-Fiber System

  • Najarro, Andres Caceres;Kim, Sung-Man
    • Journal of information and communication convergence engineering / v.16 no.1 / pp.1-5 / 2018
  • In radio-over-fiber (RoF) systems, nonlinear compensation is very important to meet the error vector magnitude (EVM) requirement of mobile network standards. In this study, a nonlinear compensation technique based on an artificial neural network (ANN) is proposed for RoF systems. The technique uses a backpropagation neural network (BPNN) with one hidden layer and three neuron units. The BPNN obtains the inverse response of the system to compensate for its nonlinearities. The EVM of the signal is measured while changing the number of neurons and hidden layers in an RoF system modeled from measured data. Based on our simulation results, one hidden layer with three neuron units is adequate for the RoF system. The results show that the EVM was improved from 4.027% to 2.605% by using the proposed ANN compensator.
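
The following minimal sketch illustrates the stated architecture, a backpropagation network with one hidden layer of three neurons that learns the inverse response of a nonlinear link; the memoryless saturation model and the training settings are assumptions, not the paper's RoF measurements.

```python
# Hedged sketch: a 1-3-1 backpropagation network learning an inverse response.
import numpy as np

rng = np.random.default_rng(1)

def channel(x):                       # assumed memoryless nonlinearity
    return np.tanh(1.5 * x)

x_in = rng.uniform(-1, 1, 2000)
x_out = channel(x_in)

# Train y = f^{-1}(x_out) -> x_in with one hidden layer of three tanh units.
W1 = rng.normal(0, 0.5, (1, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (3, 1)); b2 = np.zeros(1)
lr = 0.05
X = x_out.reshape(-1, 1); T = x_in.reshape(-1, 1)

for epoch in range(3000):
    H = np.tanh(X @ W1 + b1)          # hidden layer, 3 units
    Y = H @ W2 + b2                   # linear output unit
    err = Y - T
    # backpropagation of the mean-squared error
    dW2 = H.T @ err / len(X); db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / len(X); db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The trained network approximates the inverse response of the channel:
test = channel(np.linspace(-0.9, 0.9, 5))
print(np.tanh(test.reshape(-1, 1) @ W1 + b1) @ W2 + b2)   # ~ original inputs
```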

Bayesian Analysis for Neural Network Models

  • Chung, Younshik;Jung, Jinhyouk;Kim, Chansoo
    • Communications for Statistical Applications and Methods / v.9 no.1 / pp.155-166 / 2002
  • Neural networks have been studied as a popular and very flexible tool for classification, and they are used in many applications of pattern classification and pattern recognition. This paper focuses on a Bayesian approach to feed-forward neural networks with a single hidden layer of units with logistic activation. In this model, we are interested in deciding the number of nodes of a neural network model with p input units, one hidden layer with m hidden nodes, and one output unit in a Bayesian setup for fixed m. Here, we use a latent variable in the prior of the regression coefficients, and we introduce a 'sequential step' based on the idea of data augmentation by Tanner and Wong (1987). MCMC methods (the Gibbs sampler and the Metropolis algorithm) can be used to overcome the complicated Bayesian computation. Finally, the proposed method is applied to simulated data.
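
To make the MCMC idea concrete, here is a hedged sketch of a p-m-1 network with logistic hidden units, a Gaussian prior on all weights, and a plain random-walk Metropolis sampler; it is a simplified stand-in, not the Gibbs/sequential-step scheme of the paper.

```python
# Hedged sketch: random-walk Metropolis over the weights of a p-m-1 network.
import numpy as np

rng = np.random.default_rng(0)
p, m = 2, 3                                     # input units, hidden nodes (fixed m)
X = rng.normal(size=(100, p))
y = (X[:, 0] - X[:, 1] ** 2 > 0).astype(float)  # toy binary data

n_w = p * m + m + m + 1                         # all weights and biases, flattened

def loglik(w):
    W1 = w[:p * m].reshape(p, m); b1 = w[p * m:p * m + m]
    W2 = w[p * m + m:p * m + 2 * m]; b2 = w[-1]
    h = 1 / (1 + np.exp(-(X @ W1 + b1)))        # logistic hidden activations
    prob = 1 / (1 + np.exp(-(h @ W2 + b2)))     # logistic output
    prob = np.clip(prob, 1e-9, 1 - 1e-9)
    return np.sum(y * np.log(prob) + (1 - y) * np.log(1 - prob))

def logpost(w):
    return loglik(w) - 0.5 * np.sum(w ** 2) / 5.0   # N(0, 5) prior on every weight

w = rng.normal(0, 0.5, n_w)
lp = logpost(w)
samples = []
for it in range(20000):
    prop = w + rng.normal(0, 0.05, n_w)         # random-walk proposal
    lp_prop = logpost(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis acceptance
        w, lp = prop, lp_prop
    if it > 10000 and it % 10 == 0:             # keep thinned post-burn-in draws
        samples.append(w.copy())

print("posterior mean of first weight:", np.mean([s[0] for s in samples]))
```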

Function Approximation Based on a Network with Kernel Functions of Bounds and Locality : an Approach of Non-Parametric Estimation

  • Kil, Rhee-M.
    • ETRI Journal / v.15 no.2 / pp.35-51 / 1993
  • This paper presents function approximation based on nonparametric estimation. As an estimation model for function approximation, a three-layered network composed of input, hidden, and output layers is considered. The input and output layers have linear activation units, while the hidden layer has nonlinear activation units, or kernel functions, which have the characteristics of bounds and locality. Using this type of network, a many-to-one function is synthesized over the domain of the input space by a number of kernel functions. In this network, we have to estimate the necessary number of kernel functions as well as the parameters associated with them. For this purpose, a new method of parameter estimation is considered in which a linear learning rule is applied between the hidden and output layers while a nonlinear (piecewise-linear) learning rule is applied between the input and hidden layers. The linear learning rule updates the output weights between the hidden and output layers in the sense of Linear Minimization of Mean Square Error (LMMSE) in the space of kernel functions, while the nonlinear learning rule updates the parameters of the kernel functions based on the gradient of the actual network output with respect to the parameters (especially the shape) of the kernel functions. This approach to parameter adaptation provides near-optimal values of the parameters associated with the kernel functions in the sense of minimizing the mean square error. As a result, the suggested nonparametric estimation provides an efficient way of function approximation from the viewpoint of the number of kernel functions as well as learning speed.
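
A minimal sketch of the two-stage learning described above: Gaussian kernel units in the hidden layer, output weights solved linearly (in the LMMSE sense) in kernel space, and kernel centres and widths refined by gradient steps; the target function and the number of kernel units are illustrative assumptions.

```python
# Hedged sketch: linear rule for output weights, gradient rule for kernel shapes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sinc(x[:, 0])                          # toy many-to-one target

K = 8                                         # assumed number of kernel units
centres = np.linspace(-3, 3, K)
widths = np.full(K, 0.8)

def phi(x, centres, widths):
    # bounded, local Gaussian kernel functions
    return np.exp(-((x - centres) ** 2) / (2 * widths ** 2))

for it in range(200):
    H = phi(x, centres, widths)               # (200, K) hidden activations
    # linear rule: output weights by least squares in kernel-function space
    w, *_ = np.linalg.lstsq(H, y, rcond=None)
    err = H @ w - y
    # nonlinear rule: gradient of the output w.r.t. the kernel parameters
    d_centre = (x - centres) / widths ** 2 * H            # d(phi)/d(centre)
    d_width = ((x - centres) ** 2) / widths ** 3 * H       # d(phi)/d(width)
    centres -= 0.01 * (err[:, None] * w * d_centre).mean(axis=0)
    widths -= 0.01 * (err[:, None] * w * d_width).mean(axis=0)

print("final RMS error:", np.sqrt(np.mean((phi(x, centres, widths) @ w - y) ** 2)))
```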


Estrus Detection in Sows Based on Texture Analysis of Pudendal Images and Neural Network Analysis

  • Seo, Kwang-Wook;Min, Byung-Ro;Kim, Dong-Woo;Fwa, Yoon-Il;Lee, Min-Young;Lee, Bong-Ki;Lee, Dae-Weon
    • Journal of Biosystems Engineering / v.37 no.4 / pp.271-278 / 2012
  • Worldwide trends in animal welfare have resulted in an increased interest in the individual management of sows housed in groups within hog barns. Estrus detection has been shown to be one of the greatest determinants of sow productivity. Purpose: We conducted this study to develop a method that can automatically detect the estrus state of a sow by selecting optimal texture parameters from images of a sow's pudendum and by optimizing the number of neurons in the hidden layer of an artificial neural network. Methods: Texture parameters were analyzed according to changes in a sow's pudendum during estrus, such as mucus secretion and expansion. Of the texture parameters, eight gray level co-occurrence matrix (GLCM) parameters were used for image analysis. The image states were classified into ten grades for each GLCM parameter, and an artificial neural network was formed using the values for each grade as inputs to discriminate the estrus state of sows. The number of hidden layer neurons in an artificial neural network is an important parameter in neural network design. Therefore, we determined the optimal number of hidden layer units by trial and error while increasing the number of neurons. Results: Fifteen hidden layer neurons were determined to be optimal for the artificial neural network designed in this study. Thirty images of 10 sows were used for learning, and then 30 different images of 10 sows were used for verification. Conclusions: For learning, the back-propagation neural network (BPN) algorithm successfully estimated six texture parameters (homogeneity, angular second moment, energy, maximum probability, entropy, and GLCM correlation). Based on the verification results, homogeneity was the most important texture parameter and yielded an estrus detection rate of 70%.
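
A hedged sketch of the feature-and-network part of such a pipeline: GLCM texture parameters computed with scikit-image (graycomatrix/graycoprops; spelled greycomatrix in older releases) and a trial-and-error sweep over the number of hidden neurons with scikit-learn's MLPClassifier. The synthetic textures stand in for the pudendal images, which are not available here, and none of the settings are the authors'.

```python
# Hedged sketch: GLCM texture features + trial-and-error over hidden neurons.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def glcm_features(img):
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=16,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    feats = [graycoprops(glcm, prop)[0, 0]
             for prop in ("homogeneity", "ASM", "energy", "correlation")]
    feats.append(p.max())                                  # maximum probability
    feats.append(-(p[p > 0] * np.log(p[p > 0])).sum())     # entropy
    return np.array(feats)

# Stand-in "images": random 16-level textures for two synthetic classes.
imgs = [rng.integers(0, 16, (32, 32), dtype=np.uint8) // (1 + c)
        for c in (0, 1) for _ in range(30)]
labels = np.repeat([0, 1], 30)
Xf = np.array([glcm_features(im) for im in imgs])

for n_hidden in (5, 10, 15, 20, 25):        # trial and error over hidden units
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), max_iter=3000,
                        random_state=0)
    acc = cross_val_score(clf, Xf, labels, cv=5).mean()
    print(n_hidden, "hidden units -> accuracy", round(acc, 3))
```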

Acoustic Diagnosis of a Pump by Using Neural Network

  • Lee, Sin-Young
    • Journal of Mechanical Science and Technology / v.20 no.12 / pp.2079-2086 / 2006
  • A fundamental study for developing a fault diagnosis system for a pump was performed using a neural network. Acoustic signals were obtained and converted to the frequency domain for normal products and artificially deformed products. The neural network model used in this study was a 3-layer type composed of input, hidden, and output layers. The normalized amplitudes at multiples of the real driving frequency were chosen as the units of the input layer, and the codes of pump malfunctions were selected as the units of the output layer. Various sets of teaching signals, made from the original data by eliminating some random cases, were used in training. The average errors were approximately proportional to the number of untaught data. The results show that a neural network trained with acoustic signals can detect malfunctions or diagnose faults of a given machine.
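
A minimal sketch (assumed, not the paper's code) of the described setup: the spectrum of a pump sound is sampled at integer multiples of the driving frequency, the normalized amplitudes form the input units, and a small 3-layer classifier maps them to fault codes. The sampling rate, driving frequency, and synthetic fault signature below are illustrative.

```python
# Hedged sketch: harmonic-amplitude features of a pump sound -> fault codes.
import numpy as np
from sklearn.neural_network import MLPClassifier

FS = 8000               # assumed sampling rate [Hz]
F_DRIVE = 50.0          # assumed pump driving frequency [Hz]
N_HARMONICS = 10        # number of input units

def harmonic_amplitudes(signal, fs=FS, f0=F_DRIVE, n=N_HARMONICS):
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    amps = np.array([spec[np.argmin(np.abs(freqs - k * f0))]
                     for k in range(1, n + 1)])
    return amps / amps.max()                   # normalized amplitudes

# Synthetic stand-ins: "normal" pumps vs. pumps with an exaggerated 2nd harmonic.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
features, codes = [], []
for fault in (0, 1):
    for _ in range(20):
        s = (np.sin(2 * np.pi * F_DRIVE * t)
             + (0.1 + 0.8 * fault) * np.sin(2 * np.pi * 2 * F_DRIVE * t)
             + 0.05 * rng.normal(size=t.size))
        features.append(harmonic_amplitudes(s))
        codes.append(fault)                    # fault code as the output unit

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(np.array(features), codes)
print("training accuracy:", clf.score(np.array(features), codes))
```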

Diagnosis of a Pump by Frequency Analysis of Operation Sound (펌프의 작동음 주파수 분석에 의한 진단)

  • 이신영;박순재
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2003.10a / pp.137-142 / 2003
  • A fundamental study for developing a fault diagnosis system for a pump was performed using a neural network. Acoustic signals were obtained and converted to the frequency domain for normal products and artificially deformed products. The signals were obtained at various driving frequencies in order to obtain many types of data from a limited number of pumps. The acoustic data in the frequency domain were arranged at multiples of the real driving frequency for easy comparison. The neural network model used in this study was a 3-layer type composed of input, hidden, and output layers. The normalized amplitudes at multiples of the real driving frequency were chosen as the units of the input layer. Various sets of teaching signals, made from the original data by eliminating some random cases, were used in training. The average errors were approximately proportional to the number of untaught data. The results show that a neural network trained with acoustic signals can be used as a simple method for detecting machine malfunctions or diagnosing faults.


Diagnosis of a Pump by Frequency Analysis of Operation Sound (펌프의 작동음 주파수 분석에 의한 진단)

  • Lee Sin-Young
    • Transactions of the Korean Society of Machine Tool Engineers / v.13 no.5 / pp.81-86 / 2004
  • A fundamental study for developing a fault diagnosis system for a pump was performed using a neural network. Acoustic signals were obtained and converted to the frequency domain for normal products and artificially deformed products. The signals were obtained at various driving frequencies in order to obtain many types of data from a limited number of pumps. The acoustic data in the frequency domain were arranged at multiples of the real driving frequency for easy comparison. The neural network model used in this study was a 3-layer type composed of input, hidden, and output layers. The normalized amplitudes at multiples of the real driving frequency were chosen as the units of the input layer. Various sets of teaching signals, made from the original data by eliminating some random cases, were used in training. The average errors were approximately proportional to the number of untaught data. The results show that a neural network trained with acoustic signals can be used as a simple method for detecting machine malfunctions or diagnosing faults.

Control of Nonlinear System by Multiplication and Combining Layer on Dynamic Neural Networks (동적 신경망의 층의 분열과 합성에 의한 비선형 시스템 제어)

  • Park, Seong-Wook;Lee, Jae-Kwan;Seo, Bo-Hyeok
    • The Transactions of the Korean Institute of Electrical Engineers A / v.48 no.4 / pp.419-427 / 1999
  • We propose an algorithm for obtaining the optimal number of hidden-unit nodes in dynamic neural networks. The dynamic neural networks comprise dynamic neural units and a neural processor consisting of two dynamic neural units, one functioning as an excitatory neuron and the other as an inhibitory neuron. Starting from a basic network structure for the control problem, we find the optimal neural structure by multiplying and combining dynamic neural units. Numerical examples are presented for nonlinear systems. These case studies show that the proposed method is useful in a practical sense.
