• Title/Summary/Keyword: sigmoid activation function

Optimization of Sigmoid Activation Function Parameters using Genetic Algorithms and Pattern Recognition Analysis in Input Space of Two Spirals Problem (유전자알고리즘을 이용한 시그모이드 활성화 함수 파라미터의 최적화와 이중나선 문제의 입력공간 패턴인식 분석)

  • Lee, Sang-Wha
    • The Journal of the Korea Contents Association / v.10 no.4 / pp.10-18 / 2010
  • This paper presents an optimization of sigmoid activation function parameters using genetic algorithms, together with a pattern recognition analysis in the input space of the two-spirals benchmark problem. The cascade correlation learning algorithm is used for the experiments. In the first experiment, a standard sigmoid activation function is used to analyze pattern classification in the input space of the two-spirals problem. In the second experiment, sigmoid activation functions with different fixed parameter values are organized into 8 pools. In the third experiment, the values of the three parameters that determine the displacement of the sigmoid function are obtained using genetic algorithms, and these parameter values are applied to the sigmoid activation functions of the candidate neurons. To evaluate the performance of these algorithms, the classification of the training input patterns at each step is visualized as the shape of the two spirals.
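
The abstract does not state the exact three-parameter sigmoid or the GA settings, so the following is a minimal sketch assuming a common parameterization f(x) = a / (1 + e^(-b(x-c))); the two-spirals generator, fitness interface, population size, and mutation scale are illustrative assumptions, not the paper's values.

```python
import numpy as np

def two_spirals(n=97, noise=0.0, seed=0):
    """Generate a standard two-spirals benchmark: two interleaved classes."""
    rng = np.random.default_rng(seed)
    t = np.linspace(np.pi / 2, 3.5 * np.pi, n)
    r = t / (3.5 * np.pi)
    x1 = np.c_[r * np.cos(t), r * np.sin(t)] + rng.normal(0, noise, (n, 2))
    x2 = -x1  # the second spiral is the point reflection of the first
    return np.vstack([x1, x2]), np.hstack([np.zeros(n), np.ones(n)])

def param_sigmoid(x, a, b, c):
    """Assumed three-parameter sigmoid: amplitude a, slope b, shift c."""
    return a / (1.0 + np.exp(-b * (x - c)))

def ga_optimize(fitness, bounds, pop=30, gens=50, seed=0):
    """Simple real-valued GA: truncation selection plus Gaussian mutation.
    `fitness` maps a parameter vector to a score to be maximized."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, (pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        parents = P[np.argsort(f)[-pop // 2:]]                   # keep top half
        kids = parents[rng.integers(0, len(parents), (pop - len(parents),))]
        kids = kids + rng.normal(0, 0.1 * (hi - lo), kids.shape)  # mutate
        P = np.vstack([parents, np.clip(kids, lo, hi)])
    return P[np.argmax([fitness(p) for p in P])]

# Usage sketch: best = ga_optimize(lambda p: -my_loss(*p), [(0.5, 2), (0.1, 10), (-1, 1)])
```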

Design of Nonlinear(Sigmoid) Activation Function for Digital Neural Network (Digital 신경회로망을 위한 비선형함수의 구현)

  • Kim, Jin-Tae;Chung, Duck-Jin
    • Proceedings of the KIEE Conference / 1993.07a / pp.501-503 / 1993
  • A sigmoid function circuit for neural networks is designed using the piecewise linear (PWL) method. The slope of the sigmoid function can be adjusted to 2 or 0.25, and the circuit produces both the sigmoid function and its derivative. The circuit is simulated using ViewLogic; theoretical and simulated performance agree to within 1.8 percent.
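
As a software reference for what such a circuit computes, here is a minimal sketch of a PWL sigmoid with a selectable central slope (2 or 0.25, as in the abstract) and its piecewise-constant derivative; the single-segment-plus-saturation shape is an assumption, since the abstract does not give the circuit's segmentation.

```python
import numpy as np

def pwl_sigmoid(x, slope=0.25):
    """PWL sigmoid model: a central linear segment through (0, 0.5) with the
    selected slope, saturating at the 0 and 1 rails. Breakpoints are implied
    by the clipping, not taken from the paper."""
    return np.clip(0.5 + slope * x, 0.0, 1.0)

def pwl_sigmoid_deriv(x, slope=0.25):
    """Derivative of the PWL model: the slope inside the linear region,
    zero in the saturated regions."""
    y = 0.5 + slope * x
    return np.where((y > 0.0) & (y < 1.0), slope, 0.0)
```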

Pattern Recognition Analysis of Two Spirals and Optimization of Cascade Correlation Algorithm using CosExp and Sigmoid Activation Functions (이중나선의 패턴 인식 분석과 CosExp와 시그모이드 활성화 함수를 사용한 캐스케이드 코릴레이션 알고리즘의 최적화)

  • Lee, Sang-Wha
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.3 / pp.1724-1733 / 2014
  • This paper presents a pattern recognition analysis of the two-spirals problem and an optimization of the Cascade Correlation learning algorithm using a non-monotone function, CosExp (a cosine-modulated symmetric exponential function), in combination with a monotone sigmoid function. The algorithm is then optimized using genetic algorithms. In the first experiment, the CosExp activation function is used for the candidate neurons of the learning algorithm, and the recognized pattern in the input space of the two-spirals problem is analyzed. In the second experiment, the CosExp function is used for the output neurons. In the third experiment, sigmoid activation functions with various parameters are used for the candidate neurons in 8 pools, with the CosExp function for the output neurons. In the fourth experiment, the parameters are organized into 8 pools, and the values of the three parameters that determine the displacement of the sigmoid function are obtained using genetic algorithms; these parameter values are applied to the sigmoid activation functions of the candidate neurons. To evaluate the performance of these algorithms, the classification of the training input patterns at each step is visualized as the shape of the two spirals. In the optimization process, the number of hidden neurons was reduced from 28 to 15, and the learning algorithm was finally optimized with 12 hidden neurons.
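
The abstract names CosExp only as a "cosine-modulated symmetric exponential function" and gives no formula, so the sketch below assumes one plausible form; the modulation frequency w and decay scale s are hypothetical parameters, not the paper's definition.

```python
import numpy as np

def cosexp(x, w=1.0, s=1.0):
    """Assumed CosExp form: a cosine modulated by a symmetric (even)
    exponential envelope, giving a non-monotone activation."""
    return np.cos(w * x) * np.exp(-np.abs(x) / s)

def sigmoid(x):
    """Monotone sigmoid used alongside CosExp in the candidate-neuron pools."""
    return 1.0 / (1.0 + np.exp(-x))
```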

An improved plasma model by optimizing neuron activation gradient (뉴런 활성화 경사 최적화를 이용한 개선된 플라즈마 모델)

  • Kim, Byung-Whan;Park, Sung-Jin
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2000.10a / pp.20-20 / 2000
  • The back-propagation neural network (BPNN) is the most prevalent paradigm for modeling semiconductor manufacturing processes, and it typically employs a bipolar or unipolar sigmoid function as the neuron activation function in both the hidden and output layers. In this study, the applicability of a linear function as a neuron activation function is investigated. The linear function was operated in combination with the sigmoid functions. Comparison revealed that one particular combination, the bipolar sigmoid function in the hidden layer and the linear function in the output layer, yields the highest prediction accuracy. For the BPNN with this combination, predictive performance was further optimized by incrementally adjusting the gradients of the two functions. A total of 121 gradient combinations were examined, and one optimal set was determined from them. The predictive performance of the corresponding model was compared with non-optimized models, revealing that the optimized model is more accurate than its non-optimized counterparts by more than 30%. This demonstrates that the proposed gradient-optimized learning for a BPNN with a linear function in the output layer is an effective means of constructing plasma models. The plasma modeled is a hemispherical inductively coupled plasma, which was characterized by a 2⁴ full factorial design. To validate the models, another eight experiments were conducted. The process variables varied in the design are source power, pressure, position of the chuck holder, and chlorine flow rate. The plasma attributes, measured using a Langmuir probe, are electron density, electron temperature, and plasma potential.
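
As a rough illustration of the gradient search described above, here is a minimal sketch assuming 11 candidate gradients per activation (11 × 11 = 121 combinations, matching the abstract's count), a tiny one-hidden-layer BPNN, and stand-in data in place of the plasma measurements; none of the grids, sizes, or data come from the paper.

```python
import numpy as np

def bipolar_sigmoid(x, g):
    """Bipolar (tanh-shaped) sigmoid with adjustable gradient g."""
    return 2.0 / (1.0 + np.exp(-g * x)) - 1.0

def train_bpnn(X, y, g_hid, g_out, hidden=8, lr=0.05, epochs=500, seed=0):
    """One-hidden-layer BPNN: bipolar sigmoid hidden layer, linear output
    activation with gradient g_out. Returns the final training MSE."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = bipolar_sigmoid(X @ W1 + b1, g_hid)
        out = g_out * (h @ W2 + b2)                       # linear output activation
        err = out - y
        d_out = 2.0 * err * g_out / len(X)                # grad of MSE w.r.t. pre-output
        dh = d_out @ W2.T * (0.5 * g_hid * (1 - h ** 2))  # bipolar-sigmoid derivative
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ dh;    b1 -= lr * dh.sum(0)
    return float(np.mean(err ** 2))

# 11 candidate gradients per activation -> 121 combinations, as in the abstract.
grid = np.linspace(0.5, 1.5, 11)
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (64, 4))                      # stand-in process settings
y = np.tanh(X.sum(1, keepdims=True))                 # stand-in plasma attribute
best = min(((g1, g2, train_bpnn(X, y, g1, g2)) for g1 in grid for g2 in grid),
           key=lambda t: t[2])
print("best gradients:", best[0], best[1], "mse:", best[2])
```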

MLP accelerator implementation by approximation of activation function (활성화 함수의 근사화를 통한 MLP 가속기 구현)

  • Lee, Sangil;Choi, Sejin;Lee, Kwangyeob
    • Journal of IKEEE / v.22 no.1 / pp.197-200 / 2018
  • In this paper, the sigmoid function, which is slow and difficult to implement at the hardware level, is approximated using PLAN (a piecewise linear approximation of a nonlinear function). This approximation is used as the activation function of an MLP structure to reduce resource consumption and increase speed. We show that the proposed method maintains 95% accuracy in 5×5 size recognition and runs 1.83 times faster than a GPGPU. We also found that, with resources similar to the MLPA accelerator's, the design uses more neurons and converges with higher accuracy and at higher speed.
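
The abstract does not reproduce the PLAN segment constants, so this sketch uses the segmentation commonly cited in the PLAN literature, whose power-of-two coefficients let hardware replace multipliers with shifts; treat the constants as an assumption about which PLAN variant was used.

```python
import numpy as np

def plan_sigmoid(x):
    """PLAN approximation of the sigmoid with the commonly cited segments.
    All slopes and offsets are powers of two (or sums of them), so the
    multiplies reduce to shift-and-add in hardware."""
    ax = np.abs(x)
    y = np.where(ax >= 5.0, 1.0,
        np.where(ax >= 2.375, 0.03125 * ax + 0.84375,
        np.where(ax >= 1.0,   0.125   * ax + 0.625,
                              0.25    * ax + 0.5)))
    return np.where(x >= 0, y, 1.0 - y)   # sigmoid symmetry: s(-x) = 1 - s(x)
```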

Improvement of learning method in pattern classification (패턴분류에서 학습방법 개선)

  • Kim, Myung-Chan;Choi, Chong-Ho
    • Journal of Institute of Control, Robotics and Systems / v.3 no.6 / pp.594-601 / 1997
  • A new algorithm is proposed for training the multilayer perceptron (MLP) in pattern classification problems to accelerate the learning speed. It is shown that the sigmoid activation function of the output node can have a detrimental effect on learning performance. To overcome this effect and to use the information fully in supervised learning, an objective function for binary modes is proposed. This objective function is composed of two new output activation functions, which are used selectively depending on the desired values of the training patterns. The effect of the objective function is analyzed, and a training algorithm is proposed based on it. Its performance is tested on several examples. Simulation results show that the proposed method performs better than the conventional error back-propagation (EBP) method.
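
The two output activation functions are not defined in the abstract, so the following is only an illustrative sketch of the selection-by-target idea: a stand-in objective that, unlike a plain sigmoid with squared error, stops penalizing an output once it is safely on the correct side of its binary target. The thresholds 0.9 and 0.1 are invented for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def selective_objective(net_out, target):
    """Illustrative stand-in: each output is scored by a branch chosen from
    its desired value, and incurs no loss once past its target threshold,
    so saturated-but-correct outputs stop driving the weight updates."""
    y = sigmoid(net_out)
    loss_hi = np.where(target == 1, np.maximum(0.0, 0.9 - y), 0.0)
    loss_lo = np.where(target == 0, np.maximum(0.0, y - 0.1), 0.0)
    return np.sum(loss_hi ** 2 + loss_lo ** 2)
```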

Impact of Activation Functions on Flood Forecasting Model Based on Artificial Neural Networks (홍수량 예측 인공신경망 모형의 활성화 함수에 따른 영향 분석)

  • Kim, Jihye;Jun, Sang-Min;Hwang, Soonho;Kim, Hak-Kwan;Heo, Jaemin;Kang, Moon-Seong
    • Journal of The Korean Society of Agricultural Engineers / v.63 no.1 / pp.11-25 / 2021
  • The objective of this study was to analyze the impact of activation functions on a flood forecasting model based on artificial neural networks (ANNs). The traditional activation functions, sigmoid and tanh, were compared with functions recently recommended for deep neural networks: ReLU, leaky ReLU, and ELU. The ANN-based flood forecasting model was designed to predict real-time runoff for 1- to 6-h lead times using the rainfall and runoff data of the past nine hours. Statistical measures such as R², Nash-Sutcliffe Efficiency (NSE), Root Mean Squared Error (RMSE), the error of peak time (ETp), and the error of peak discharge (EQp) were used to evaluate model accuracy. The tanh and ELU functions were the most accurate, with R²=0.97 and RMSE=30.1 m³/s for the 1-h lead time and R²=0.56 and RMSE=124.6-124.8 m³/s for the 6-h lead time. We also evaluated learning speed using the number of epochs needed to minimize the errors. The sigmoid function had the slowest learning speed due to the vanishing gradient problem and the limited direction of weight updates. The ELU function learned 1.2 times faster than the tanh function. As a result, the ELU function most effectively improved the accuracy and speed of the ANN model, so it was determined to be the best activation function for ANN-based flood forecasting.
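
For reference, a minimal sketch of the five activation functions compared in the study and of the Nash-Sutcliffe Efficiency used to score the forecasts; the leaky-ReLU slope 0.01 and ELU scale 1.0 are the usual defaults, which the abstract does not confirm.

```python
import numpy as np

# The five activation functions compared in the study.
def sigmoid(x):            return 1.0 / (1.0 + np.exp(-x))
def tanh(x):               return np.tanh(x)
def relu(x):               return np.maximum(0.0, x)
def leaky_relu(x, a=0.01): return np.where(x > 0, x, a * x)
def elu(x, a=1.0):         return np.where(x > 0, x, a * (np.exp(x) - 1.0))

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 matches the mean
    of the observations, and negative values are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```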

Supervised Learning Artificial Neural Network Parameter Optimization and Activation Function Basic Training Method using Spreadsheets (스프레드시트를 활용한 지도학습 인공신경망 매개변수 최적화와 활성화함수 기초교육방법)

  • Hur, Kyeong
    • Journal of Practical Engineering Education / v.13 no.2 / pp.233-242 / 2021
  • In this paper, for a liberal arts course for non-majors, we propose a parameter optimization method for supervised learning artificial neural networks and a basic education method for activation functions, in order to design a curriculum for an introductory artificial neural network subject. To this end, a method of finding the parameter optimization solution in a spreadsheet, without programming, is applied. Through this training method, students can focus on the basic principles of artificial neural network operation and implementation, and the visualized spreadsheet data can increase the interest and educational effect for non-majors. The proposed contents consist of artificial neurons with sigmoid and ReLU activation functions, supervised learning data generation, supervised learning artificial neural network configuration and parameter optimization, implementation and performance analysis of the supervised learning artificial neural network using spreadsheets, and an education satisfaction analysis. Considering the optimization of negative parameters for the sigmoid neural network and the ReLU-neuron artificial neural network, we propose a training method based on four performance analysis results for the parameter optimization of the artificial neural network, and we conduct a training satisfaction analysis.
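
A short sketch of the computation such a spreadsheet reproduces cell by cell: one sigmoid neuron fitted by the delta rule, where each array below corresponds to a spreadsheet column (weighted sum, activation, error, update). The toy data, learning rate, and epoch count are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One sigmoid neuron fit to a tiny supervised data set by gradient descent.
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # input column
t = np.array([0, 0, 1, 1, 1], dtype=float)      # desired-output column
w, b, lr = 0.0, 0.0, 1.0
for _ in range(1000):
    z = X * w + b                                # weighted-sum column
    y = sigmoid(z)                               # activation column
    e = y - t                                    # error column
    w -= lr * np.mean(e * y * (1 - y) * X)       # delta-rule update for w
    b -= lr * np.mean(e * y * (1 - y))           # delta-rule update for b
print(w, b, sigmoid(X * w + b).round(2))
```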

Multi-labeled Domain Detection Using CNN (CNN을 이용한 발화 주제 다중 분류)

  • Choi, Kyoungho;Kim, Kyungduk;Kim, Yonghe;Kang, Inho
    • Proceedings of the Korean Language Information Society Conference / 2017.10a / pp.56-59 / 2017
  • Using a CNN (Convolutional Neural Network), the multi-topic utterance classification task is performed with a multi-labeling method and a cluster method, and each method is evaluated by applying MSE (Mean Square Error), softmax cross-entropy, and sigmoid cross-entropy. The network tokenizes utterances at the syllable level and takes as input a sequence in which part-of-speech information is added to each token, together with named entity information obtained from the Naver DB. Experimental results show the best performance, F1 0.9873, when the problem is transformed using the cluster method, sigmoid is used as the activation function of the output layer, and the network is trained with a cross-entropy cost function.
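
For the best-performing configuration (sigmoid outputs trained with a cross-entropy cost), here is a minimal sketch of sigmoid cross-entropy for multi-label topic outputs; the three-topic logits and labels are hypothetical, and the paper's CNN front end is omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_cross_entropy(logits, labels):
    """Multi-label loss: an independent sigmoid per topic with binary
    cross-entropy, so one utterance can activate several topic labels
    at once (unlike a softmax, which forces a single topic)."""
    p = sigmoid(logits)
    eps = 1e-12  # guard against log(0)
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

logits = np.array([[2.3, -1.1, 0.7]])   # network outputs for 3 hypothetical topics
labels = np.array([[1.0,  0.0, 1.0]])   # utterance tagged with topics 0 and 2
print(sigmoid_cross_entropy(logits, labels))
```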

A piecewise affine approximation of sigmoid activation functions in multi-layered perceptrons and a comparison with a quantization scheme (다중계층 퍼셉트론 내 Sigmoid 활성함수의 구간 선형 근사와 양자화 근사와의 비교)

  • Yun, Byung-Moon;Shin, Yo-An
    • Journal of the Korean Institute of Telematics and Electronics C / v.35C no.2 / pp.56-64 / 1998
  • Multi-layered perceptrons, a nonlinear neural network model, have been widely used in various applications, mainly thanks to their good approximation capability for nonlinear functions. However, for digital hardware implementation of multi-layered perceptrons, a quantization scheme using look-up tables (LUTs) is commonly employed to handle the nonlinear sigmoid activation functions in the networks, and it requires a large amount of storage to prevent unacceptable quantization errors. This paper is concerned with a new, effective methodology for digital hardware implementation of multi-layered perceptrons, and proposes a "piecewise affine approximation" method in which the input domain is divided into a small number of sub-intervals and the nonlinear sigmoid function is linearly approximated within each sub-interval. Using the proposed method, we develop an expression and an error back-propagation type learning algorithm for a multi-layered perceptron, and compare the performance with the quantization method through Monte Carlo simulations on XOR problems. Simulation results show that, in terms of learning convergence, the proposed method with a small number of sub-intervals significantly outperforms the quantization method with a very large storage requirement. We expect from these results that the proposed method can be utilized in digital system implementations to significantly reduce the storage requirement, quantization error, and learning time of the quantization method.
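
A minimal sketch contrasting the two schemes on maximum approximation error, assuming a nearest-entry LUT for the quantization scheme and chord-based segments for the piecewise affine scheme; the table size, input interval, and segment count are illustrative, not the paper's settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lut_sigmoid(x, n_entries=256, lo=-8.0, hi=8.0):
    """Quantization scheme: one stored sigmoid value per table entry,
    read out by rounding the input to the nearest entry."""
    grid = np.linspace(lo, hi, n_entries)
    idx = np.clip(np.round((x - lo) / (hi - lo) * (n_entries - 1)),
                  0, n_entries - 1).astype(int)
    return sigmoid(grid)[idx]

def affine_sigmoid(x, n_seg=8, lo=-8.0, hi=8.0):
    """Piecewise affine scheme: chord interpolation on a few sub-intervals,
    storing only one (slope, intercept) pair per segment."""
    knots = np.linspace(lo, hi, n_seg + 1)
    seg = np.clip(np.searchsorted(knots, x) - 1, 0, n_seg - 1)
    x0, x1 = knots[seg], knots[seg + 1]
    y0, y1 = sigmoid(x0), sigmoid(x1)
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

x = np.linspace(-8, 8, 10001)
print("LUT-256 max error   :", np.abs(lut_sigmoid(x) - sigmoid(x)).max())
print("8-segment max error :", np.abs(affine_sigmoid(x) - sigmoid(x)).max())
```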
