• Title/Summary/Keyword: 신경망 (Neural Networks)

Search Results: 78, Processing Time: 0.027 seconds

Extensions of Knowledge-Based Artificial Neural Networks for the Theory Refinements (영역이론정련을 위한 지식기반신경망의 확장)

  • Shim, Dong-Hee
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.6
    • /
    • pp.18-25
    • /
    • 2001
  • KBANN (knowledge-based artificial neural network), which combines analytical learning and inductive learning, has been shown to be more effective than other machine learning models. However, KBANN lacks the ability to refine its domain theory because the topology of the network cannot be altered dynamically. Although TopGen was proposed to extend KBANN in this respect, it also had some defects. This paper designs algorithms that extend KBANN to solve TopGen's defects and enable theory refinement.


Theory Refinements in Knowledge-based Artificial Neural Networks by Adding Hidden Nodes (지식기반신경망에서 은닉노드삽입을 이용한 영역이론정련화)

  • Sim, Dong-Hui
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.7
    • /
    • pp.1773-1780
    • /
    • 1996
  • KBANN (knowledge-based artificial neural network), which combines the symbolic approach and the numerical approach, has been shown to be more effective than other machine learning models. However, KBANN lacks the ability to refine its domain theory because the topology of the network cannot be altered dynamically. Although TopGen was proposed to extend KBANN in this respect, it also had some defects due to the linking of hidden nodes to input nodes and the use of beam search. This paper designs an algorithm that solves TopGen's defects by adding hidden nodes linked to next-layer nodes and using hill-climbing search with backtracking.


Comparison of the Speech Recognition Performance based upon the Recurrent Structure of the Multilayered Recurrent Neural Network (다층회귀신경망의 회귀구조에 따른 음성인식성능 비교)

  • 어태경
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.06e
    • /
    • pp.357-360
    • /
    • 1998
  • From a 4-layer multilayer perceptron, three models of multilayer recurrent neural networks are constructed by feeding the output of each layer except the input layer back to the lower hidden layer, and the speech recognition performance of each model is analyzed and compared as the network size varies. The past input signal is predicted at the output layer, an error signal is computed, and the connection weights are adjusted in the direction that minimizes this error. Experiments showed that, among the three recurrent models, recurrent connection at the upper hidden layer gave the best recognition rate; for every network, recognition performance was good with 10 and 15 neurons in the upper and lower hidden layers and a prediction order of 3 or 4. It was also confirmed that the recurrent networks improve the recognition rate considerably over non-recurrent networks.


Analysis on Strategies for Modeling the Wave Equation with Physics-Informed Neural Networks (물리정보신경망을 이용한 파동방정식 모델링 전략 분석)

  • Sangin Cho;Woochang Choi;Jun Ji;Sukjoon Pyun
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.3
    • /
    • pp.114-125
    • /
    • 2023
  • The physics-informed neural network (PINN) has been proposed to overcome the limitations of various numerical methods used to solve partial differential equations (PDEs) and the drawbacks of purely data-driven machine learning. The PINN directly applies PDEs to the construction of the loss function, introducing physical constraints to machine learning training. This technique can also be applied to wave equation modeling. However, to solve the wave equation using the PINN, second-order differentiations with respect to input data must be performed during neural network training, and the resulting wavefields contain complex dynamical phenomena, requiring careful strategies. This tutorial elucidates the fundamental concepts of the PINN and discusses considerations for wave equation modeling using the PINN approach. These considerations include spatial coordinate normalization, the selection of activation functions, and strategies for incorporating physics loss. Our experimental results demonstrated that normalizing the spatial coordinates of the training data leads to a more accurate reflection of initial conditions in neural network training for wave equation modeling. Furthermore, the characteristics of various functions were compared to select an appropriate activation function for wavefield prediction using neural networks. These comparisons focused on their differentiation with respect to input data and their convergence properties. Finally, the results of two scenarios for incorporating physics loss into the loss function during neural network training were compared. Through numerical experiments, a curriculum-based learning strategy, applying physics loss after the initial training steps, was more effective than utilizing physics loss from the early training steps. In addition, the effectiveness of the PINN technique was confirmed by comparing these results with those of training without any use of physics loss.
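The curriculum strategy described in this abstract, applying the physics loss only after the initial training steps, can be sketched as a simple loss-weighting schedule. The loss values and the warm-up threshold below are placeholders, not the paper's settings, and the actual PDE-residual computation is omitted.

```python
# Sketch of curriculum-based physics-loss weighting: the total loss uses
# only the data (initial-condition) loss for the first `warmup` steps and
# adds the physics (PDE residual) loss afterwards. Values are illustrative.

def total_loss(data_loss, physics_loss, step, warmup=1000):
    """Curriculum weighting: the physics loss is switched on after warm-up."""
    w_phys = 0.0 if step < warmup else 1.0
    return data_loss + w_phys * physics_loss

assert total_loss(0.5, 2.0, step=10) == 0.5    # warm-up: data loss only
assert total_loss(0.5, 2.0, step=5000) == 2.5  # after warm-up: both terms
```

In a real PINN, `physics_loss` would be the mean squared residual of the wave equation evaluated with second-order automatic differentiation of the network output with respect to its space and time inputs.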

A Study on Recurrent Prediction Neural Networks for Syllable Recognition (음절인식을 위한 회귀예측신경망에 관한 연구)

  • 한학용
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.272-277
    • /
    • 1998
  • Using an MLP-type prediction neural network and Jordan- and Elman-type recurrent prediction neural networks, recognition results were compared with a CHMM while varying the prediction order and the number of hidden-layer units. The speech data were a 100-syllable data set and ETRI's Saemdori digit-speech data. On the digits, the neural networks achieved a recognition rate of 98.5%, an improvement over the 85.6% of a 5-state CHMM, but somewhat lower than CHMMs with 6 or more states.


Performance Improvement Method of Fully Connected Neural Network Using Combined Parametric Activation Functions (결합된 파라메트릭 활성함수를 이용한 완전연결신경망의 성능 향상)

  • Ko, Young Min;Li, Peng Hang;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.1
    • /
    • pp.1-10
    • /
    • 2022
  • Deep neural networks are widely used to solve various problems. In a fully connected neural network, the nonlinear activation function is a function that nonlinearly transforms the input value and outputs it. The nonlinear activation function plays an important role in solving the nonlinear problem, and various nonlinear activation functions have been studied. In this study, we propose a combined parametric activation function that can improve the performance of a fully connected neural network. Combined parametric activation functions can be created by simply adding parametric activation functions. The parametric activation function is a function that can be optimized in the direction of minimizing the loss function by applying a parameter that converts the scale and location of the activation function according to the input data. By combining the parametric activation functions, more diverse nonlinear intervals can be created, and the parameters of the parametric activation functions can be optimized in the direction of minimizing the loss function. The performance of the combined parametric activation function was tested through the MNIST classification problem and the Fashion MNIST classification problem, and as a result, it was confirmed that it has better performance than the existing nonlinear activation function and parametric activation function.
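The combined parametric activation function described above can be sketched as a sum of sigmoids, each with its own scale and location parameter. In training these parameters would be optimized against the loss; here they are fixed illustrative values, and the function names are this sketch's own.

```python
import math

def parametric_sigmoid(x, scale=1.0, loc=0.0):
    """Sigmoid with scale and location parameters (fixed here; in
    training they would be optimized to minimize the loss)."""
    return 1.0 / (1.0 + math.exp(-scale * (x - loc)))

def combined_activation(x, params):
    """Combined parametric activation: a sum of parametric sigmoids,
    giving more diverse nonlinear intervals than a single one."""
    return sum(parametric_sigmoid(x, s, l) for s, l in params)

# Two units with different locations create a two-step nonlinearity.
params = [(4.0, -1.0), (4.0, 1.0)]
print(combined_activation(-3.0, params))  # near 0 (both sigmoids off)
print(combined_activation(0.0, params))   # near 1 (one sigmoid saturated)
print(combined_activation(3.0, params))   # near 2 (both saturated)
```

Because the combination is a plain sum, its gradient with respect to each scale and location parameter is straightforward, which is what allows them to be trained jointly with the network weights.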

Inductive Learning using Theory-Refinement Knowledge-Based Artificial Neural Network (이론정련 지식기반인공신경망을 이용한 귀납적 학습)

  • 심동희
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.3
    • /
    • pp.280-285
    • /
    • 2001
  • Since KBANN (knowledge-based artificial neural network), combining the inductive learning algorithm and the analytical learning algorithm, was proposed, several methods that modify KBANN, such as TopGen, TR-KBANN, and THRE-KBANN, have been proposed. However, these methods can be applied only when a domain theory exists. This paper proposes an algorithm that represents a problem in KBANN based only on instances, without a domain theory. The domain theory represented in KBANN can then be refined by THRE-KBANN. In experiments on several inductive-learning problem domains, this algorithm performs more efficiently than C4.5.


Speech Recognition Using Multilayered Recurrent Neural Networks (다층회귀신경망을 이용한 음성인식)

  • 어태경
    • Proceedings of the Acoustical Society of Korea Conference
    • /
    • 1998.08a
    • /
    • pp.267-271
    • /
    • 1998
  • A recurrent neural network is used as one way of handling dynamic characteristics when recognizing syllables and continuous speech with neural networks. In this study, a predictor is built from a 4-layer multilayer recurrent neural network with a non-recurrent upper hidden layer and a recurrent lower hidden layer. Of 700 utterances, 14 CV-type and 14 CVC-type syllables each pronounced 5 times by 5 male speakers, the 420 utterances of 3 repetitions are used for training and the remaining 280 utterances of 2 repetitions for recognition evaluation. Varying the prediction order of the input signal and the numbers of neurons in the upper and lower hidden layers, the best recognizer was obtained with 10 neurons in the upper hidden layer, 10 or 15 neurons in the lower hidden layer, and a prediction order of 3 or 4. The resulting recognition rate was somewhat better than that of an Elman network.

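The recurrent prediction networks in the speech entries above share one mechanism: a hidden state fed back as extra input, with the network predicting the next input sample and training minimizing the prediction error. A minimal Elman-style forward pass of that idea, with small fixed illustrative weights rather than trained ones, might look like this:

```python
import math

# Minimal Elman-style recurrent predictor sketch: the hidden state is fed
# back as extra input and the network predicts the next sample; training
# would adjust the weights to minimize the prediction error. All weights
# are fixed illustrative values, not the papers' trained networks.

def step(x, h, w_in=0.5, w_rec=0.3, w_out=1.0):
    """One time step: update the hidden state from input plus feedback,
    then emit a prediction of the next input sample."""
    h_new = math.tanh(w_in * x + w_rec * h)
    return w_out * h_new, h_new

def prediction_mse(xs):
    """Run the predictor over a sequence and return the mean squared
    error between each prediction and the actual next sample."""
    h, errors = 0.0, []
    for x_t, x_next in zip(xs, xs[1:]):
        y, h = step(x_t, h)
        errors.append((y - x_next) ** 2)
    return sum(errors) / len(errors)

mse = prediction_mse([0.0, 0.5, 1.0, 0.5, 0.0, -0.5])
print(mse)  # this prediction error is the quantity training would minimize
```

In the papers, separate predictors of this kind are trained per recognition unit, and the unit whose predictor yields the lowest prediction error wins.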

An Intrusion Detection System Using Principal Component Analysis and Time Delay Neural Network (PCA와 TDNN을 이용한 비정상 패킷탐지)

  • Jung, Sung-Yoon;Kang, Byung-Doo;Kim, Sang-Kyoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.05a
    • /
    • pp.285-288
    • /
    • 2003
  • Misuse-detection models are widely used in existing intrusion detection systems. This model has low false-alarm rates, but it requires rules to be added by expert systems for each new attack, and since it detects only signatures that exactly match those rules, it cannot detect modified attacks. To address this problem, this paper proposes an intrusion detection system using Principal Component Analysis (PCA) and a Time Delay Neural Network (TDNN). The principal components of each packet are determined with PCA and converted into a packet image pattern. These sequences of packet image patterns are used as training patterns for the time delay neural network.

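The TDNN input construction implied by the abstract above, consecutive per-packet feature patterns stacked into overlapping time-delay windows, can be sketched in a few lines. The window size and the toy 1-D "PCA projections" are illustrative, not the paper's settings.

```python
# Sketch of time-delay input windows for a TDNN: each frame is stacked
# with its predecessors so the network sees short temporal patterns
# across consecutive packets at once. The delay value is illustrative.

def time_delay_windows(frames, delay=3):
    """Stack each frame with its `delay - 1` predecessors into one
    overlapping window per time step."""
    return [frames[i - delay + 1 : i + 1]
            for i in range(delay - 1, len(frames))]

frames = [[0.1], [0.2], [0.3], [0.4], [0.5]]  # toy 1-D PCA projections
print(time_delay_windows(frames, delay=3))
# [[[0.1], [0.2], [0.3]], [[0.2], [0.3], [0.4]], [[0.3], [0.4], [0.5]]]
```

Each window then becomes one training pattern, which is how the sequence structure of consecutive packets reaches the network.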

A Study on the Diphone Recognition of Korean Connected Words and Eojeol Reconstruction (한국어 연결단어의 이음소 인식과 어절 형성에 관한 연구)

  • ;Jeong, Hong
    • The Journal of the Acoustical Society of Korea
    • /
    • v.14 no.4
    • /
    • pp.46-63
    • /
    • 1995
  • This thesis describes an unlimited-vocabulary connected-speech recognition system using a Time Delay Neural Network (TDNN). The recognition unit is the diphone, which includes the transition section between two phonemes; the number of diphone units is 329. The recognition of Korean connected speech consists of three parts: feature extraction from the input speech signal, diphone recognition, and post-processing. In the feature extraction section, the diphone intervals of the input speech signal are extracted, and feature vectors of 16th-order filter-bank coefficients are calculated for each frame in the diphone interval. The diphone recognition processing has a three-stage hierarchical structure and is carried out using 30 Time Delay Neural Networks. In particular, the structure of the TDNN is modified to increase the recognition rate. In the post-processing section, mis-recognized diphone strings are corrected using the probabilities of phoneme transition and phoneme confusion, and then eojeols (Korean words or phrases) are formed by combining the recognized diphones.

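The post-processing step in the abstract above, correcting mis-recognized diphones with transition and confusion probabilities, can be sketched as choosing the candidate that maximizes the product of the two probabilities. The tiny tables and diphone labels below are illustrative stand-ins, not the paper's models.

```python
# Sketch of probability-based diphone correction: a recognized diphone is
# replaced by the candidate maximizing P(candidate | previous diphone)
# times P(recognized | candidate). The tables are toy illustrative values.

transition = {("ka", "an"): 0.6, ("ka", "am"): 0.1}   # P(next | prev)
confusion = {"am": {"am": 0.7, "an": 0.3}}            # P(recognized | true)

def correct(prev, recognized):
    """Pick the true diphone maximizing transition * confusion score."""
    conf = confusion.get(recognized, {recognized: 1.0})
    return max(conf, key=lambda c: transition.get((prev, c), 0.0) * conf[c])

print(correct("ka", "am"))  # "an": 0.6 * 0.3 beats 0.1 * 0.7
```

A full system would apply this over whole diphone strings (e.g., with a Viterbi-style search) before assembling the corrected diphones into eojeols.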