• Title/Summary/Keyword: Hyperbolic tangent function

Search Result 17, Processing Time 0.028 seconds

An Improvement of Performance for Cascade Correlation Learning Algorithm using a Cosine Modulated Gaussian Activation Function (코사인 모듈화 된 가우스 활성화 함수를 사용한 캐스케이드 코릴레이션 학습 알고리즘의 성능 향상)

  • Lee, Sang-Wha;Song, Hae-Sang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.3
    • /
    • pp.107-115
    • /
    • 2006
  • This paper presents a new class of activation functions for the Cascade Correlation learning algorithm, which herein will be called the CosGauss function. This function is a cosine-modulated Gaussian function. In contrast to the sigmoidal, hyperbolic tangent, and Gaussian functions, more ridges can be obtained with the CosGauss function. Because of these ridges, the network converges quickly and improves pattern recognition speed; consequently, it improves learning capability. The function was tested with a Cascade Correlation network on the two-spirals problem, and the results are compared with those obtained with other activation functions.
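A minimal sketch of the cosine-modulated Gaussian idea described above; the paper's exact parameterization is not given here, so the modulation frequency `a` and width `sigma` are illustrative assumptions:

```python
import numpy as np

def cos_gauss(x, a=2.0, sigma=1.0):
    """Cosine-modulated Gaussian: cos(a*x) * exp(-x^2 / sigma^2).

    The constants a and sigma are illustrative, not the paper's values."""
    return np.cos(a * x) * np.exp(-(x ** 2) / sigma ** 2)

x = np.linspace(-4.0, 4.0, 801)
y = cos_gauss(x)

# The cosine modulation creates several ridges (local extrema), unlike the
# single bump of a plain Gaussian or the monotone tanh/sigmoid curves.
sign_changes = int(np.sum(np.diff(np.sign(np.diff(y))) != 0))
```

Counting sign changes of the discrete derivative confirms the multi-ridge shape that the abstract credits for faster convergence.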


A New Bussgang Blind Equalization Algorithm with Reduced Computational Complexity (계산 복잡도가 줄어든 새로운 Bussgang 자력 등화 알고리듬)

  • Kim, Seong-Min;Kim, Whan-Woo
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.22 no.10
    • /
    • pp.1012-1015
    • /
    • 2011
  • The decision-directed blind equalization algorithm is often used due to its simplicity and good convergence property when the eye pattern is open. However, in a channel where the eye pattern is closed, the decision-directed algorithm is not guaranteed to converge. Hence, a modified Bussgang-type algorithm using a hyperbolic tangent function as the zero-memory nonlinear (ZNL) function was proposed by Filho et al. to avoid this problem. However, applying this algorithm requires calculating the hyperbolic tangent function and its derivative, or a look-up table that may need a large amount of memory due to channel variations. To reduce the computational and/or hardware complexity of Filho's algorithm, this paper proposes an improved method for the decision-directed algorithm. In the proposed scheme, the ZNL function and its derivative are set to the original signum function and a narrow rectangular pulse approximating the Dirac delta function, respectively. It is shown that the proposed scheme, when combined with the decision-directed algorithm, drastically reduces the computational complexity while retaining the convergence and steady-state performance of Filho's algorithm.
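A sketch of the two ZNL choices contrasted above; the tanh gain and the rectangular-pulse width are illustrative assumptions, not values from the paper:

```python
import numpy as np

def znl_tanh(y, a=1.0):
    """Filho-style ZNL and its derivative, both requiring tanh evaluations."""
    t = np.tanh(a * y)
    return t, a * (1.0 - t ** 2)

def znl_signum(y, width=0.05):
    """Proposed low-complexity ZNL: the signum function, with a narrow
    rectangular pulse standing in for the Dirac-delta derivative of sign(y)."""
    g = np.sign(y)
    dg = np.where(np.abs(y) < width / 2, 1.0 / width, 0.0)
    return g, dg

y = np.array([-1.0, -0.01, 0.02, 0.5])
g1, d1 = znl_tanh(y)      # transcendental function per sample
g2, d2 = znl_signum(y)    # comparisons and constants only
```

The signum variant replaces per-sample transcendental evaluations (or a channel-dependent look-up table) with comparisons, which is the source of the complexity reduction the abstract claims.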

Improvement of Learning Capability with Combination of the Generalized Cascade Correlation and Generalized Recurrent Cascade Correlation Algorithms (일반화된 캐스케이드 코릴레이션 알고리즘과 일반화된 순환 캐스케이드 코릴레이션 알고리즘의 결합을 통한 학습 능력 향상)

  • Lee, Sang-Wha;Song, Hae-Sang
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.2
    • /
    • pp.97-105
    • /
    • 2009
  • This paper presents a combination of the generalized Cascade Correlation and generalized Recurrent Cascade Correlation learning algorithms. The new network can grow in the vertical or horizontal direction, with or without recurrent units, for quick solution of pattern classification problems. The learning capability of the proposed algorithm was tested with the sigmoidal and hyperbolic tangent activation functions on the contact lens and balance scale standard benchmark problems, and the results are compared with those obtained with the Cascade Correlation and Recurrent Cascade Correlation algorithms. Through learning, the new network was composed of a minimal number of created hidden units and showed quick learning speed. Consequently, it improves learning capability.

A Study on Corporate Failure Prediction Using Neural Network Techniques (신경망기법을 이용한 기업부실예측에 관한 연구)

  • Jeong, Gi-Ung;Hong, Gwan-Su
    • The Korean Journal of Financial Management
    • /
    • v.12 no.2
    • /
    • pp.1-23
    • /
    • 1995
  • The purpose of this study is to predict the failure of the client firms of a particular financial institution by classifying them into three groups, capital-impaired, bankrupt, and sound firms, and to examine three factors affecting predictive power: sample composition, input variables, and analysis technique. First, we compare the predictive power of backpropagation learning using the traditional delta learning rule and the sigmoid function (Neural Network I) with that of a variant using the normalized cumulative delta learning rule and the hyperbolic tangent function (Neural Network II), and we also compare both neural network techniques with MDA (multivariate discriminant analysis) to assess the usefulness of neural network techniques. Second, for the three-group classification problem, we examine how the composition ratio of capital-impaired, bankrupt, and sound firms affects the results of the three prediction techniques. Third, input variables can be selected either by the researcher's judgment based on prior research or theory, or by statistical techniques that search a large set of candidates for a good set of discriminating variables; we examine how variables selected by these methods affect the results of the three prediction techniques. The empirical results are summarized as follows. 1) As in the two-group case, the neural network techniques showed higher predictive power than MDA in the three-group classification problem. 2) Composing the sample so that the numbers of capital-impaired and bankrupt firms are similar, and the number of sound firms is close to their combined total, helps improve predictive power. 3) Selecting input variables evenly across attributes yielded higher predictive power than not doing so. 4) Backpropagation using the normalized cumulative delta learning rule and the hyperbolic tangent function showed higher predictive power than backpropagation using the traditional delta learning rule and the sigmoid function, and this difference was larger in the three-group problem than in the two-group problem.
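One reason the tanh variant (Neural Network II) can train faster than the sigmoid variant (Neural Network I) is visible in the activation derivatives that scale the backpropagated error; the paper's full learning rules are not reproduced here, this only illustrates the derivative gap:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, the activation of Neural Network I."""
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    """Derivative s*(1-s); peaks at 0.25, shrinking backpropagated errors."""
    s = sigmoid(x)
    return s * (1.0 - s)

def d_tanh(x):
    """Derivative 1 - tanh^2; peaks at 1.0, four times larger than sigmoid's."""
    return 1.0 - np.tanh(x) ** 2

x = np.linspace(-5.0, 5.0, 1001)
max_dsig = d_sigmoid(x).max()    # maximum at x = 0
max_dtanh = d_tanh(x).max()      # maximum at x = 0
```

The larger derivative (together with tanh's zero-centered output) gives larger effective weight updates, consistent with the abstract's finding that the tanh-based network predicted better.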


Application of Artificial Neural Network to Flamelet Library for Gaseous Hydrogen/Liquid Oxygen Combustion at Supercritical Pressure (초임계 압력조건에서 기체수소-액체산소 연소해석의 층류화염편 라이브러리에 대한 인공신경망 학습 적용)

  • Jeon, Tae Jun;Park, Tae Seon
    • Journal of the Korean Society of Propulsion Engineers
    • /
    • v.25 no.6
    • /
    • pp.1-11
    • /
    • 2021
  • To develop an efficient procedure for the flamelet library, a machine learning process based on an artificial neural network (ANN) is applied to the gaseous hydrogen/liquid oxygen combustor under a supercritical pressure condition. For the hidden layers, 25 combinations based on the Rectified Linear Unit (ReLU) and hyperbolic tangent are adopted to find an optimum architecture in terms of computational efficiency and training performance. Among the activation functions, the hyperbolic tangent is suitable for achieving high learning performance for accurate properties. A transformation of the learning data is proposed to improve the training performance. When the optimal nodes are arranged in 4 hidden layers, the network is found to be the most efficient in terms of training performance and computational cost. Compared to the interpolation procedure, the ANN procedure reduces computational time and system memory by 37% and 99.98%, respectively.
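A minimal numpy sketch of an MLP surrogate of the kind described above, with 4 tanh hidden layers and a linear output; the layer sizes, weights, and inputs are placeholders, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
# 2 inputs (e.g. flamelet-table coordinates) -> 4 hidden layers -> 1 property.
sizes = [2, 16, 16, 16, 16, 1]
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def mlp(x):
    """Forward pass: tanh in the hidden layers, linear output."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ W + b)
    return h @ weights[-1] + biases[-1]

y = mlp(np.array([0.3, 0.7]))
```

Once trained, evaluating such a network is a handful of small matrix products, which is why it can replace the memory-heavy interpolation table the abstract compares against.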

Case Study on Failure of Rock Slope Caused by Filling Material of Clay (점토 충전물에 의한 암반사면 파괴사례 연구)

  • Kim, Yong-Jun;Lee, Young-Huy;Kim, Sun-Ki;Kim, Ju-Hwa
    • Tunnel and Underground Space
    • /
    • v.16 no.5 s.64
    • /
    • pp.368-376
    • /
    • 2006
  • After heavy rainfall, a massive plane failure occurred along a bedding plane of shale in the center of a rock slope. Filling material and traces of groundwater leakage were observed around the slope. We sought the cause of the slope failure, and the examination showed that the primary factors were the low shear strength of the clay filling material and the water pressure formed within a tension crack at the top of the slope. In this research, in order to examine the shear strength characteristics of filled rock joints, shear tests of filled rock joints were conducted using artificial filling materials such as sand and clay. We also investigated the characteristics of shear strength with different thicknesses of filling material.

Optimization Of Water Quality Prediction Model In Daechong Reservoir, Based On Multiple Layer Perceptron (다층 퍼셉트론을 기반으로 한 대청호 수질 예측 모델 최적화)

  • Lee, Hankyu;Kim, Jin Hui;Byeon, Seohyeon;Park, Kangdong;Shin, Jae-ki;Park, Yongeun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.43-43
    • /
    • 2022
  • Harmful algal blooms occur frequently in reservoirs and rivers across the country and negatively affect water resources by spoiling the scenery and polluting the water. In this study, a prediction model based on deep learning was developed to forecast harmful algal blooms in a reservoir. The target site was the Chudong station of Daecheong Reservoir. Daecheong Reservoir, a dam located in the middle of the Geum River basin, supplies water to a population of about 1.5 million, so managing harmful cyanobacterial blooms there is very important. The training dataset was built from water quality, meteorological, and hydrological data measured at Daecheong Reservoir from January 2011 to December 2019. The water quality prediction model is a Multiple Layer Perceptron (MLP), an artificial neural network consisting of an input layer, one or more hidden layers, and an output layer. In this study, the number of hidden layers (1-3), the number of hidden nodes in each layer (11-30), and five activation functions (linear, sigmoid, hyperbolic tangent, Rectified Linear Unit, Exponential Linear Unit) were set as hyperparameters, and we searched for the configuration that maximizes model performance. Keras Tuner, distributed with TensorFlow, was used as the hyperparameter optimization tool. The model was designed to compute optimal weights over 3,000 training epochs, and the results were recorded to storage at every iteration. Model validity was verified by computing R2, NSE, and RMSE between the predicted and observed data. As a result of optimization, the best hyperparameters, obtained at the 256th of 300 optimization iterations, were three hidden layers with 25, 22, and 14 hidden nodes, respectively, with the activation functions ELU, ReLU, hyperbolic tangent, and linear used in that order. Training and validation with the optimized hyperparameters yielded an R2 of 0.68 for training and 0.61 for validation, an NSE of 0.85 for training and 0.81 for validation, and an RMSE of 0.82 for training and 0.92 for validation.
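A rough count of the hyperparameter space described above, assuming the node count and activation are chosen independently for each hidden layer (the abstract does not state the exact structure of the search space):

```python
# Five candidate activations and 20 node-count options (11..30) per layer.
activations = ["linear", "sigmoid", "tanh", "relu", "elu"]
node_options = 30 - 11 + 1

# Sum the configurations for 1, 2, and 3 hidden layers.
total = sum((node_options * len(activations)) ** n for n in (1, 2, 3))
```

Even under this rough accounting the full grid runs to over a million configurations, which is why a guided search such as Keras Tuner's 300 trials is used instead of exhaustive enumeration.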


Investigation on the modified continual reassessment method in phase I clinical trial (1상 임상실험에서 수정된 CRM에 대한 연구)

  • 강승호
    • The Korean Journal of Applied Statistics
    • /
    • v.15 no.2
    • /
    • pp.323-336
    • /
    • 2002
  • In this paper we consider the modified continual reassessment method (CRM) in which a cohort consists of three patients. Simulation has been the main research tool in the investigation of the CRM. In this paper we propose complete enumeration as an alternative to simulation. Using the new method, we show that the expected toxicity rate at the MTD converges well to the target toxicity rate as the sample size increases.

Approximation of Polynomials and Step function for cosine modulated Gaussian Function in Neural Network Architecture (뉴로 네트워크에서 코사인 모듈화 된 가우스함수의 다항식과 계단함수의 근사)

  • Lee, Sang-Wha
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.49 no.2
    • /
    • pp.115-122
    • /
    • 2012
  • We present here a new class of activation functions for neural networks, which herein will be called the CosGauss function. This function is a cosine-modulated Gaussian function. In contrast to the sigmoidal, hyperbolic tangent, and Gaussian activation functions, more ridges can be obtained with the CosGauss function. It is proven that this function can be used to approximate polynomials and step functions. The CosGauss function was tested with a multilayer Cascade-Correlation network on the Tic-Tac-Toe game and iris plants problems, and the results are compared with those obtained with other activation functions.

Comparison of Artificial Neural Network Model Capability for Runoff Estimation about Activation Functions (활성화 함수에 따른 유출량 산정 인공신경망 모형의 성능 비교)

  • Kim, Maga;Choi, Jin-Yong;Bang, Jehong;Yoon, Pureun;Kim, Kwihoon
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.63 no.1
    • /
    • pp.103-116
    • /
    • 2021
  • Analysis of runoff is essential for effective water management in a watershed. Runoff occurs as a watershed's reaction to rainfall and exhibits non-linearity and uncertainty due to the complex relation of weather and watershed factors. An ANN (Artificial Neural Network), which learns from data, is a machine learning technique known to be a proper model for interpreting non-linear data. The performance of an ANN is affected by its structure, the number of hidden-layer nodes, the learning rate, and the activation function. In particular, the activation function delivers the information entered and decides how the output is produced, so it is important to apply an activation function appropriate to the problem to be solved. In this paper, ANN models were constructed to estimate runoff with different activation functions, and each model was compared and evaluated. Sigmoid, hyperbolic tangent, ReLU (Rectified Linear Unit), and ELU (Exponential Linear Unit) functions were applied to the hidden layer, and Identity, ReLU, and Softplus functions were applied to the output layer. Statistical parameters including the coefficient of determination, NSE (Nash-Sutcliffe Efficiency), NSEln (modified NSE), and PBIAS (Percent BIAS) were used to evaluate the ANN models. The results show that applying the hyperbolic tangent or ELU function to the hidden layer and the Identity function to the output layer yields more competent performance than the other functions, demonstrating that the choice of activation function in the ANN structure can affect its performance.
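The evaluation metrics named above, in their standard hydrological forms (the paper's exact variants, such as its NSEln transform, may differ slightly):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 - SSE / total variance of observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: positive when the model underestimates on average."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

obs = [1.0, 2.0, 3.0, 4.0]
perfect = nse(obs, obs)                   # exact match gives NSE = 1
bias = pbias(obs, [0.9, 1.8, 2.7, 3.6])   # uniform 10% underestimate
```

NSE of 1 means a perfect fit and values below 0 mean the model is worse than predicting the observed mean, which is why it complements the coefficient of determination in the comparison above.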