• Title/Summary/Keyword: sigmoid function


The Study of Neural Networks Using Orthogonal Function System (직교함수를 사용한 신경회로망에 대한 연구)

  • 권성훈;최용준;이정훈;손동설;엄기환
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 1999.11a / pp.214-217 / 1999
  • In this paper we propose a heterogeneous hidden layer consisting of both sigmoid functions and RBFs (Radial Basis Functions) in a multi-layered neural network. Focusing on the orthogonal relationship between the sigmoid function and its derivative, a derived RBF that is the derivative of the sigmoid function is used as the RBF in the network, so the proposed network is called an ONN (Orthogonal Neural Network). Identification results on a nonlinear function confirm the ONN's feasibility and characteristics by comparison with a conventional neural network that has only sigmoid functions or RBFs in its hidden layer.
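The relationship the abstract leans on is easy to check numerically: the sigmoid's derivative is a bell-shaped curve, which is what the paper reuses as an RBF. A minimal sketch (plain Python, names illustrative):

```python
import math

def sigmoid(x):
    # logistic sigmoid: 1 / (1 + e^-x)
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # s'(x) = s(x) * (1 - s(x)): bell-shaped like a Gaussian RBF,
    # peaking at x = 0 with value 0.25 and decaying toward 0 on both sides
    s = sigmoid(x)
    return s * (1.0 - s)
```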


Optimization of Sigmoid Activation Function Parameters using Genetic Algorithms and Pattern Recognition Analysis in Input Space of Two Spirals Problem (유전자알고리즘을 이용한 시그모이드 활성화 함수 파라미터의 최적화와 이중나선 문제의 입력공간 패턴인식 분석)

  • Lee, Sang-Wha
    • The Journal of the Korea Contents Association / v.10 no.4 / pp.10-18 / 2010
  • This paper presents an optimization of sigmoid activation function parameters using genetic algorithms, together with a pattern recognition analysis in the input space of the two-spirals benchmark problem. The cascade correlation learning algorithm is used for the experiments. In the first experiment, the standard sigmoid activation function is used to analyze pattern classification in the input space of the two-spirals problem. In the second experiment, sigmoid activation functions with different fixed parameter values are organized into 8 pools. In the third experiment, genetic algorithms determine the values of the three parameters that control the displacement of the sigmoid function, and the resulting parameter values are applied to the sigmoid activation functions of the candidate neurons. To evaluate the performance of these algorithms, the classification of the training input patterns at each step is visualized as the shape of the two spirals.
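The abstract does not spell out the three sigmoid parameters; one common three-parameter form (slope, horizontal displacement, and output scale — an assumption here, not taken from the paper) is:

```python
import math

def parametric_sigmoid(x, a=1.0, b=0.0, c=1.0):
    # a: slope, b: horizontal displacement, c: output scale;
    # reduces to the standard sigmoid for a=1, b=0, c=1
    return c / (1.0 + math.exp(-a * (x - b)))
```

A genetic algorithm would then search over (a, b, c) per candidate-neuron pool, scoring each setting by the resulting network's classification error.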

On the Digital Implementation of the Sigmoid function (시그모이드 함수의 디지털 구현에 관한 연구)

  • 이호선;홍봉화
    • The Journal of Information Technology / v.4 no.3 / pp.155-163 / 2001
  • In this paper, we implement the sigmoid activation function, which is one of the main difficulties in designing digital neural networks, with an emphasis on high-speed processing. A MAC (Multiplier and Accumulator) unit based on a residue number system, which has no carry propagation, is designed for high-speed operation. Both the MAC unit and the sigmoid processing unit are shown to run at high speed: in simulation, each stage is faster than 4.6 ns, so the design is expected to be suitable for implementing high-speed digital neural networks.
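The paper's moduli are not given; a minimal residue-number-system MAC sketch with an assumed modulus set (7, 8, 9) shows why no carries propagate — each residue channel accumulates independently:

```python
MODULI = (7, 8, 9)  # assumed pairwise-coprime moduli; dynamic range 7*8*9 = 504

def to_rns(x):
    # represent x by its residue in each channel
    return tuple(x % m for m in MODULI)

def rns_mac(acc, a, b):
    # multiply-accumulate per channel; no carry ever crosses channels,
    # which is what makes the hardware fast
    return tuple((r + (a % m) * (b % m)) % m for r, m in zip(acc, MODULI))

# accumulate 3*4 + 5*6 = 42 entirely in RNS
acc = to_rns(0)
acc = rns_mac(acc, 3, 4)
acc = rns_mac(acc, 5, 6)
assert acc == to_rns(42)
```

Converting the final residues back to binary (e.g. via the Chinese Remainder Theorem) is only needed once, after the accumulation loop.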


Pattern Recognition Analysis of Two Spirals and Optimization of Cascade Correlation Algorithm using CosExp and Sigmoid Activation Functions (이중나선의 패턴 인식 분석과 CosExp와 시그모이드 활성화 함수를 사용한 캐스케이드 코릴레이션 알고리즘의 최적화)

  • Lee, Sang-Wha
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.3 / pp.1724-1733 / 2014
  • This paper presents a pattern recognition analysis of the two-spirals problem and an optimization of the Cascade Correlation learning algorithm using a non-monotone function, CosExp (cosine-modulated symmetric exponential function), in combination with the monotone sigmoid function; the optimization is carried out with genetic algorithms. In the first experiment, the CosExp activation function is used for the candidate neurons of the learning algorithm, and the recognized pattern in the input space of the two-spirals problem is analyzed. In the second experiment, the CosExp function is used for the output neurons. In the third experiment, sigmoid activation functions with various parameters for the candidate neurons in 8 pools and the CosExp function for the output neurons are used. In the fourth experiment, the parameters are organized into 8 pools, and genetic algorithms determine the values of the three parameters that control the displacement of the sigmoid function; the resulting parameter values are applied to the sigmoid activation functions of the candidate neurons. To evaluate the performance of these algorithms, the classification of the training input patterns at each step is visualized as the shape of the two spirals. In the optimization process, the number of hidden neurons was reduced from 28 to 15, and finally the learning algorithm was optimized with 12 hidden neurons.
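The two-spirals benchmark itself is standard and easy to reproduce; a common parameterization (the usual CMU-style spiral generator, an assumption rather than the paper's exact data) is:

```python
import math

def two_spirals(n_per_class=97):
    # classic two-spirals benchmark: two interleaved spirals, labels 0 and 1;
    # the second spiral is the point reflection of the first
    points = []
    for i in range(n_per_class):
        angle = i * math.pi / 16.0
        radius = 6.5 * (104 - i) / 104.0
        x, y = radius * math.cos(angle), radius * math.sin(angle)
        points.append(((x, y), 0))
        points.append(((-x, -y), 1))
    return points
```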

MLP accelerator implementation by approximation of activation function (활성화 함수의 근사화를 통한 MLP 가속기 구현)

  • Lee, Sangil;Choi, Sejin;Lee, Kwangyeob
    • Journal of IKEEE / v.22 no.1 / pp.197-200 / 2018
  • In this paper, the sigmoid function, which is slow and difficult to implement at the hardware level, is approximated using PLAN. Using this approximation as the activation function of an MLP structure reduces resource consumption and increases speed. We show that the proposed method maintains 95% accuracy in 5×5-size recognition and is 1.83 times faster than a GPGPU. We also found that, with resources similar to existing MLP accelerators, the design uses more neurons and converges at higher accuracy and higher speed.
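PLAN replaces the exponential with a few linear segments whose slopes are powers of two, so hardware can use shifts instead of multipliers. A sketch using the breakpoints usually quoted for PLAN (check the paper's table before relying on the exact constants):

```python
def plan_sigmoid(x):
    # PLAN: piecewise linear approximation of the sigmoid.
    # Slopes 1/4, 1/8, 1/32 are shift-friendly in hardware.
    ax = abs(x)
    if ax >= 5.0:
        y = 1.0
    elif ax >= 2.375:
        y = 0.03125 * ax + 0.84375   # slope 1/32
    elif ax >= 1.0:
        y = 0.125 * ax + 0.625       # slope 1/8
    else:
        y = 0.25 * ax + 0.5          # slope 1/4
    return y if x >= 0 else 1.0 - y  # symmetry: s(-x) = 1 - s(x)
```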

Sigmoid Curve Model for Software Test-Effort Estimation (소프트웨어 시험 노력 추정 시그모이드 모델)

  • Lee, Sang-Un
    • The KIPS Transactions: Part D / v.11D no.4 / pp.885-892 / 2004
  • The Weibull distribution, which includes the Rayleigh and Exponential distributions, is the typical model for estimating the distribution of effort committed to the software testing phase. This model does not capture the fact that much effort is actually committed at the beginning of testing, nor does it properly represent the various distribution shapes of actual test effort. To solve these problems, this paper proposes a Sigmoid model: the sigmoid function used in neural networks is transformed into a function that properly represents software test effort. The model was verified against six sets of test-effort data, with various distribution shapes, obtained from actual software projects, and its suitability was confirmed. Since it outperforms the Weibull model, the Sigmoid model may be selected as an alternative for estimating software test effort.
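The abstract does not give the fitted form; one natural reading (an assumption) is a logistic curve for cumulative test effort, whose derivative models the effort spent per unit time:

```python
import math

def cumulative_effort(t, total=100.0, rate=1.0, midpoint=5.0):
    # logistic (sigmoid) cumulative-effort curve: rises toward `total`
    return total / (1.0 + math.exp(-rate * (t - midpoint)))

def effort_per_unit_time(t, total=100.0, rate=1.0, midpoint=5.0):
    # derivative of the logistic curve: peaks at t = midpoint, unlike a
    # Rayleigh curve it can place substantial effort near the start
    s = cumulative_effort(t, 1.0, rate, midpoint)
    return total * rate * s * (1.0 - s)
```

Fitting `total`, `rate`, and `midpoint` to a project's weekly effort data would play the role the Weibull shape and scale parameters play in the conventional model.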

On optimal design of soft-decision multistage detectors for asynchronous DS/CDMA systems (비동기 DS/CDMA 시스템을 위한 연판정 다단 검출기의 최적 설계)

  • 고정훈;주정석;이용훈
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.9 / pp.2035-2042 / 1997
  • We consider the design of soft decision functions for each stage of multistage detection for coherent demodulation in an asynchronous code-division multiple-access (CDMA) system. In particular, the sigmoid function, which is shown to be optimal under the mean square error (MSE) criterion, and multilevel quantizers that best approximate the sigmoid function are derived. At each stage of multistage detection, the parameters of these decision functions are adjusted depending on estimated input statistics. Computer simulation results demonstrate that multistage detectors employing these soft decision functions perform considerably better than those with hard decisions.
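A sketch of the idea (the soft-decision form and quantizer levels here are illustrative, not the paper's derived ones): map the detector statistic through a sigmoid-shaped soft decision onto [-1, 1], then approximate it with a multilevel quantizer; a hard decision would be the degenerate two-level case:

```python
import math

def soft_decision(y, scale=1.0):
    # sigmoid-shaped soft decision mapped onto [-1, 1]
    # (algebraically equal to tanh(scale * y / 2))
    return 2.0 / (1.0 + math.exp(-scale * y)) - 1.0

def quantized_decision(y, levels=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    # multilevel quantizer: emit the level closest to the soft decision
    s = soft_decision(y)
    return min(levels, key=lambda v: abs(v - s))
```

In a multistage detector, the output of such a function at stage k feeds the interference-cancellation step of stage k+1, with `scale` adapted to the estimated input statistics.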


The Study of Orthogonal Neural Network (직교함수 신경회로망에 대한 연구)

  • 권성훈;이현관;엄기환
    • Journal of the Korea Institute of Information and Communication Engineering / v.4 no.1 / pp.145-154 / 2000
  • In this paper we propose an orthogonal neural network (ONN) to control and identify an unknown controlled system. The proposed ONN uses a buffer layer in front of the hidden layer, and the hidden layer uses both the sigmoid function and a derived RBF that is the derivative of the sigmoid function. The properties of the proposed network are examined through simulation on the Narendra model, and a controller composed with the ONN confirms its usefulness through simulation and experimental results.


Quadratic Sigmoid Neural Equalizer (이차 시그모이드 신경망 등화기)

  • Choi, Soo-Yong;Ong, Sung-Hwan;You, Cheol-Woo;Hong, Dae-Sik
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.1 / pp.123-132 / 1999
  • In this paper, a quadratic sigmoid neural equalizer (QSNE) is proposed to improve the bit error probability of conventional neural equalizers by using a quadratic sigmoid function as the activation function of the neural network. Conventional neural equalizers used to compensate for nonlinear distortions adopt the sigmoid function, where each neuron has one linear decision boundary, so many neurons are required when the equalizer has to separate a complicated structure. In the proposed QSNE and quadratic sigmoid neural decision feedback equalizer (QSNDFE), each neuron separates the decision region with two parallel lines; QSNE and QSNDFE therefore achieve better bit error probability with a simpler structure than conventional neural equalizers. When the proposed QSNDFE is applied to communication systems and digital magnetic recording systems, it improves the signal-to-noise ratio (SNR) by approximately 1.5 dB to 8.3 dB over the conventional decision feedback equalizer (DFE) and neural decision feedback equalizer (NDFE). As intersymbol interference (ISI) and nonlinear distortions become more severe, the QSNDFE's SNR gain over the conventional equalizers at the same bit error probability grows accordingly.
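The abstract does not define the quadratic sigmoid explicitly; one plausible form (an assumption) applies the sigmoid to a quadratic in the neuron's linear projection, so the 0.5-level decision set consists of two parallel hyperplanes instead of one:

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def quadratic_sigmoid_neuron(x, w, bias, theta=1.0):
    # project the input, then apply a sigmoid of a quadratic in the
    # projection; the 0.5 decision level u**2 = theta gives two parallel
    # boundaries at u = +sqrt(theta) and u = -sqrt(theta)
    u = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return sigmoid(u * u - theta)
```

With w = (1, 0) and theta = 1, the neuron fires (> 0.5) on both sides of the strip -1 < x1 < 1, which a single ordinary sigmoid neuron cannot do.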


Supervised Learning Artificial Neural Network Parameter Optimization and Activation Function Basic Training Method using Spreadsheets (스프레드시트를 활용한 지도학습 인공신경망 매개변수 최적화와 활성화함수 기초교육방법)

  • Hur, Kyeong
    • Journal of Practical Engineering Education / v.13 no.2 / pp.233-242 / 2021
  • In this paper, for a liberal-arts course for non-majors, we propose a basic teaching method for supervised-learning artificial neural network parameter optimization and activation functions, as the basis of an introductory artificial neural network curriculum. A method of finding a parameter-optimization solution in a spreadsheet, without programming, is applied. This teaching method lets students focus on the basic principles of artificial neural network operation and implementation, and the visualized spreadsheet data increases non-majors' interest and the educational effect. The proposed content consists of artificial neurons with sigmoid and ReLU activation functions, supervised-learning data generation, network configuration and parameter optimization, implementation and performance analysis in spreadsheets, and an analysis of student satisfaction. Considering the optimization of negative parameters for the sigmoid and ReLU neuron networks, we propose a teaching method around four performance-analysis results on parameter optimization and conduct a satisfaction analysis.
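The spreadsheet exercise amounts to a single neuron trained by gradient descent on cell formulas; an equivalent sketch in code (the data and learning rate are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# tiny supervised-learning set: target 1 when x > 0.5, else 0 (illustrative)
data = [(0.0, 0.0), (0.25, 0.0), (0.75, 1.0), (1.0, 1.0)]

w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):
    for x, t in data:
        y = sigmoid(w * x + b)
        grad = (y - t) * y * (1.0 - y)  # d(squared error)/d(pre-activation)
        w -= lr * grad * x              # the same update a spreadsheet cell computes
        b -= lr * grad
```

Each column of the spreadsheet holds one of these quantities (input, prediction, error, gradient), so students can watch w and b converge row by row.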