• Title/Abstract/Keywords: Training Algorithm

1,881 search results (processing time 0.029 s)

진화전략을 이용한 뉴로퍼지 시스템의 학습방법 (Training Algorithms of Neuro-fuzzy Systems Using Evolution Strategy)

  • 정성훈
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2001년도 하계종합학술대회 논문집(3) / pp.173-176 / 2001
  • This paper proposes training algorithms for neuro-fuzzy systems. First, we introduce a structure training algorithm, which produces the necessary number of hidden nodes from the training data. This algorithm also yields the initial fuzzy rules. Second, a parameter training algorithm using an evolution strategy is introduced. To show their usefulness, we apply our neuro-fuzzy system to a nonlinear system identification problem. Experiments show that the proposed training algorithms work well.

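The abstract above does not state which evolution-strategy variant is used; the following is a minimal (1+λ) evolution-strategy sketch of the parameter-training phase, with an illustrative fitness function standing in for the neuro-fuzzy identification error (all names and settings here are assumptions, not the paper's actual system).

```python
# Minimal (1+lambda) evolution strategy for tuning a parameter vector,
# e.g. membership-function centers/widths of a neuro-fuzzy system.
# The fitness function and step size below are illustrative assumptions.
import numpy as np

def evolution_strategy(fitness, theta0, sigma=0.1, offspring=10, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    parent, parent_fit = theta0.copy(), fitness(theta0)
    for _ in range(generations):
        children = parent + sigma * rng.standard_normal((offspring, parent.size))
        fits = np.array([fitness(c) for c in children])
        best = fits.argmin()
        if fits[best] < parent_fit:          # keep the parent unless a child improves on it
            parent, parent_fit = children[best], fits[best]
    return parent, parent_fit

# Toy usage: fit parameters of y = a*sin(b*x) to data from an assumed "plant".
x = np.linspace(0, 2 * np.pi, 50)
y = 1.5 * np.sin(0.8 * x)
err = lambda p: np.mean((p[0] * np.sin(p[1] * x) - y) ** 2)
print(evolution_strategy(err, np.array([1.0, 1.0])))
```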

패턴분류기를 위한 최소오차율 학습알고리즘과 예측신경회로망모델에의 적용 (A Minimum-Error-Rate Training Algorithm for Pattern Classifiers and Its Application to the Predictive Neural Network Models)

  • 나경민;임재열;안수길
    • 전자공학회논문지B / 제31B권12호 / pp.108-115 / 1994
  • Most pattern classifiers have been designed based on the ML (Maximum Likelihood) training algorithm, which is simple and relatively powerful. ML training is an efficient algorithm that estimates the model parameters of each class individually, under the assumption that all class models in a classifier are statistically independent. That assumption, however, is not valid in many real situations, which degrades the performance of the classifier. In this paper, we propose a minimum-error-rate training algorithm based on the MAP (Maximum a Posteriori) approach. The algorithm regards the normalized outputs of the classifier as estimates of the a posteriori probabilities and tries to maximize those estimates. According to Bayes decision theory, the proposed algorithm satisfies the condition of minimum-error-rate classification. We apply this algorithm to the NPM (Neural Prediction Model) for speech recognition and derive new discriminative training algorithms. Experimental results on the recognition of ten Korean digits show a 37.5% reduction in the number of recognition errors.

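As a rough illustration of the central idea above, treating the normalized classifier outputs as posterior estimates and maximizing the estimate for the correct class, the sketch below uses a plain linear classifier with softmax normalization; the NPM-specific details of the paper are not reproduced, and the data and rates are assumptions.

```python
# Hypothetical illustration: maximize the normalized output (posterior estimate)
# of the correct class, i.e. minimize cross-entropy, instead of fitting each
# class model independently by maximum likelihood.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def map_train(X, y, n_classes, lr=0.1, epochs=200):
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(epochs):
        P = softmax(X @ W)                     # normalized outputs = posterior estimates
        P[np.arange(len(y)), y] -= 1.0         # gradient of -log P(correct class)
        W -= lr * X.T @ P / len(y)
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W = map_train(X, y, 2)
print("training accuracy:", (softmax(X @ W).argmax(1) == y).mean())
```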

벡터양자화기의 코드북을 구하는 새로운 고속 학습 알고리듬 (A New Fast Training Algorithm for Vector Quantizer Design)

  • 이대룡;백성준;성굉모
    • 한국음향학회지 / 제15권5호 / pp.107-112 / 1996
  • This paper proposes a new fast training algorithm to reduce the search time of the LBG algorithm, the representative codebook training algorithm. In the proposed algorithm, each training vector does not search every codeword: in the first stage, the indices of a fixed number of codewords near each training vector are stored, and from the next stage onward only the codewords pointed to by these indices are searched, which reduces the training time. Compared with FSLBG, an existing fast search algorithm, the proposed algorithm obtains a better codebook in a shorter training time. Compared with the LBG algorithm on image data, searching only 16 codewords, about 6% of a 256-codeword codebook and about 1.6% of a 1024-codeword codebook, produces a codebook of nearly the same quality in terms of PSNR (peak signal-to-noise ratio).

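A minimal sketch of the partial-search idea described above, under the assumption that the candidate list is built in one full pass and reused in later LBG iterations; the codebook size, the value of k, and the toy data are illustrative.

```python
# Assumed sketch of partial-search LBG: after one full pass, each training
# vector remembers the indices of its k nearest codewords, and later
# iterations search only those candidates.
import numpy as np

def fast_lbg(train, codebook, k=16, iters=10):
    cand = None
    for _ in range(iters):
        if cand is None:                                   # first pass: full search
            d = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
            cand = np.argsort(d, axis=1)[:, :k]            # remember k nearest codewords
        rows = np.arange(len(train))
        d = ((train[:, None, :] - codebook[cand]) ** 2).sum(-1)
        nearest = cand[rows, d.argmin(1)]                  # search only the candidates
        for j in range(len(codebook)):                     # centroid update (LBG step)
            members = train[nearest == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 4))
print(fast_lbg(data, data[rng.choice(1000, 64, replace=False)].copy()).shape)
```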

Training Adaptive Equalization With Blind Algorithms

  • Namiki, Masanobu;Shimamura, Tetsuya
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2002년도 ITC-CSCC -3 / pp.1901-1904 / 2002
  • Good performance in communication systems is obtained by decreasing the length of the training sequence in the initial stage of adaptive equalization. This paper presents a new approach to accomplish this with the use of a training adaptive equalizer. The approach is based on combining the training and tracking modes, in which the equalizer is first updated by the LMS algorithm with the training sequence and then updated by a blind algorithm. Computer simulations show that the proposed class of equalizers provides better performance than the conventional training equalizer.

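A rough sketch of the two-stage adaptation described above, assuming a constant-modulus-style error as the blind update and an arbitrary dispersive channel; the step size, tap count, training length, and channel are all illustrative assumptions.

```python
# Two-stage adaptive equalizer sketch: LMS on a short known training sequence,
# then a CMA-style blind update on the remaining (BPSK) symbols.
import numpy as np

def equalize(received, known, taps=11, mu=0.01):
    delay = taps // 2
    w = np.zeros(taps); w[delay] = 1.0                    # center-spike initialization
    out = np.zeros(len(received))
    for n in range(taps - 1, len(received)):
        x = received[n - taps + 1: n + 1][::-1]           # x[0] = rx[n], ..., x[-1] = rx[n-taps+1]
        y = w @ x
        out[n - delay] = y                                # equalizer output for symbol n - delay
        if n - delay < len(known):
            e = known[n - delay] - y                      # LMS error with known training symbols
        else:
            e = y * (1.0 - y * y)                         # CMA-style blind error (BPSK, R2 = 1)
        w += mu * e * x
    return out

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)
channel = np.array([1.0, 0.4, 0.2])                       # hypothetical dispersive channel
rx = np.convolve(symbols, channel)[: len(symbols)] + 0.01 * rng.standard_normal(len(symbols))
eq = equalize(rx, symbols[:200])
print("symbol error rate after convergence:", (np.sign(eq[500:1900]) != symbols[500:1900]).mean())
```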

확률적 근사법과 역전파 알고리즘을 이용한 다층 신경망의 학습성능 개선 (Improving the Training Performance of Multilayer Neural Network by Using Stochastic Approximation and Backpropagation Algorithm)

  • 조용현;최흥문
    • 전자공학회논문지B / 제31B권4호 / pp.145-154 / 1994
  • This paper proposes an efficient method for improving the training performance of a neural network by using a hybrid of stochastic approximation and the backpropagation algorithm. The method applies a global optimization scheme that combines the two: an approximate initial point for fast global optimization is first estimated by stochastic approximation, and then the backpropagation algorithm, a fast gradient descent method, is applied from that point. Further speed-up of training is obtained by adjusting the training parameters of the output and hidden layers adaptively to the standard deviation of the neuron outputs of each layer. The proposed method has been applied to parity checking and pattern classification, and the simulation results show that its performance is superior to that of backpropagation, Baba's MROM, and Sun's method with randomized initial point settings. The results of adaptively adjusting the training parameters show that the proposed method further improves the convergence speed by about 20% in training.

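A toy two-phase sketch in the spirit of the abstract above: a shrinking random-perturbation search stands in for the stochastic-approximation phase, followed by plain backpropagation on a small 2-3-1 network. The network size, schedules, and learning rate are assumptions, and the adaptive per-layer parameter adjustment of the paper is omitted.

```python
# Phase 1: stochastic search for a good initial weight vector.
# Phase 2: backpropagation (gradient descent) from that point.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)                  # XOR (2-bit parity) targets

def unpack(w):
    return w[:6].reshape(2, 3), w[6:].reshape(3, 1)        # 2-3-1 network

def forward(w):
    W1, W2 = unpack(w)
    H = np.tanh(X @ W1)
    Y = 1.0 / (1.0 + np.exp(-(H @ W2)))
    return H, Y

loss = lambda w: float(np.mean((forward(w)[1] - T) ** 2))

w = 0.1 * rng.standard_normal(9)
for k in range(1, 301):                                    # phase 1: shrinking random perturbations
    cand = w + (1.0 / np.sqrt(k)) * rng.standard_normal(9)
    if loss(cand) < loss(w):
        w = cand

for _ in range(3000):                                      # phase 2: backpropagation
    H, Y = forward(w)
    W2 = unpack(w)[1]
    dY = (Y - T) * Y * (1 - Y)                             # output-layer delta
    dH = (dY @ W2.T) * (1 - H ** 2)                        # hidden-layer delta
    grad = np.concatenate([(X.T @ dH).ravel(), (H.T @ dY).ravel()])
    w -= 0.3 * grad

print("final MSE:", loss(w), "outputs:", forward(w)[1].ravel().round(2))
```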

Tri-training algorithm based on cross entropy and K-nearest neighbors for network intrusion detection

  • Zhao, Jia;Li, Song;Wu, Runxiu;Zhang, Yiying;Zhang, Bo;Han, Longzhe
    • KSII Transactions on Internet and Information Systems (TIIS) / 제16권12호 / pp.3889-3903 / 2022
  • To address the problem of low detection accuracy caused by training noise from mislabeling when Tri-training is applied to network intrusion detection (NID), we propose a Tri-training algorithm based on cross entropy and K-nearest neighbors (TCK) for network intrusion detection. The proposed algorithm uses cross-entropy instead of the classification error rate to better capture the difference between the practical and predicted distributions of the model and to reduce the prediction bias that mislabeled data impose on unlabeled data; K-nearest neighbors are used to remove mislabeled data and reduce their number. To verify the effectiveness of the proposed algorithm, experiments were conducted on 12 UCI datasets and the NSL-KDD network intrusion dataset, using four metrics, accuracy, recall, F-measure, and precision, for comparison. The experimental results show that TCK outperforms the conventional Tri-training algorithm as well as Tri-training algorithms using only the cross-entropy or K-nearest-neighbor strategy.
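
A loose, simplified sketch of one tri-training round with a KNN consistency filter on the pseudo-labels; the cross-entropy-based editing of the actual TCK algorithm is not reproduced here, and the dataset, base learners, and hyperparameters are assumptions.

```python
# Simplified tri-training round: pseudo-label where the other two classifiers
# agree, then drop pseudo-labels that disagree with a KNN fit on labeled data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
Xl, yl, Xu = X[:100], y[:100], X[100:]                   # small labeled set, rest unlabeled

rng = np.random.default_rng(0)
clfs = []
for seed in range(3):                                     # three bootstrap-trained classifiers
    idx = rng.choice(len(Xl), len(Xl), replace=True)
    clfs.append(DecisionTreeClassifier(random_state=seed).fit(Xl[idx], yl[idx]))

knn = KNeighborsClassifier(n_neighbors=5).fit(Xl, yl)

for i in range(3):                                        # one tri-training round per classifier
    others = [c for j, c in enumerate(clfs) if j != i]
    p1, p2 = (c.predict(Xu) for c in others)
    agree = p1 == p2                                      # the other two classifiers agree
    keep = agree & (knn.predict(Xu) == p1)                # KNN filter drops likely mislabels
    clfs[i] = DecisionTreeClassifier(random_state=i).fit(
        np.vstack([Xl, Xu[keep]]), np.concatenate([yl, p1[keep]]))

votes = np.array([c.predict(Xu) for c in clfs])
pred = (votes.sum(axis=0) >= 2).astype(int)               # majority vote of the three
print("ensemble accuracy on the unlabeled pool:", (pred == y[100:]).mean())
```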

패리티 판별을 위한 유전자 알고리즘을 사용한 신경회로망의 학습법 (Learning method of a Neural Network using Genetic Algorithm for 3 Bit Parity Discrimination)

  • 최재승;김정화
    • 전자공학회논문지CI / 제44권2호 / pp.11-18 / 2007
  • The error backpropagation algorithm widely used for training neural networks is based on the steepest descent method, so depending on the initial values it can fall into a local minimum, and there are further problems such as how many hidden-layer units to use when training the network. To resolve these problems, this paper proposes an improved genetic algorithm that introduces new techniques into the crossover and mutation operations for training a neural network for 3-bit parity discrimination. Experiments with different numbers of generations, different numbers of hidden-layer units, and different population sizes show that the proposed method is effective in terms of training speed.
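
A minimal genetic-algorithm sketch of the idea above, using plain one-point crossover and Gaussian mutation rather than the paper's improved operators; the network size, population size, and rates are assumptions.

```python
# Chromosomes are the weight vectors of a small 3-4-1 network trained to
# output the parity of 3 input bits; evolution replaces backpropagation.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], float)
T = X.sum(axis=1) % 2                                      # 3-bit parity targets

def output(w):
    W1, W2 = w[:12].reshape(3, 4), w[12:].reshape(4)       # 3-4-1 network
    return 1 / (1 + np.exp(-(np.tanh(X @ W1) @ W2)))

fitness = lambda w: -np.mean((output(w) - T) ** 2)         # higher is better

pop = rng.standard_normal((40, 16))
for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[::-1][:20]]           # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.choice(20, 2, replace=False)]
        cut = rng.integers(1, 16)
        child = np.concatenate([a[:cut], b[cut:]])         # one-point crossover
        child += (rng.random(16) < 0.1) * rng.standard_normal(16) * 0.3   # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("predicted parity:", (output(best) > 0.5).astype(int), "targets:", T.astype(int))
```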

Semi-supervised Software Defect Prediction Model Based on Tri-training

  • Meng, Fanqi;Cheng, Wenying;Wang, Jingdong
    • KSII Transactions on Internet and Information Systems (TIIS) / 제15권11호 / pp.4028-4042 / 2021
  • To address the difficulty of software defect prediction caused by insufficient labelled defect samples and class imbalance, a semi-supervised software defect prediction model based on the Tri-training algorithm was proposed, combining feature normalization, oversampling, and the Tri-training algorithm. First, feature normalization is used to smooth the feature data and eliminate the influence of very large or very small feature values on the model's classification performance. Secondly, oversampling is used to expand and resample the data, which resolves the class imbalance of the labelled samples. Finally, the Tri-training algorithm performs machine learning on the training samples and establishes a defect prediction model. The novelty of this model is that it effectively combines feature normalization, oversampling, and the Tri-training algorithm to address both the scarcity of labelled samples and the class imbalance problem. Simulation experiments using the NASA software defect prediction dataset show that the proposed method outperforms four existing supervised and semi-supervised learning methods in terms of Precision, Recall, and F-Measure.
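
A short sketch of the preprocessing steps described above, assuming min-max normalization and random duplication of minority-class rows as the concrete choices; the Tri-training stage itself would then follow as in the earlier tri-training sketch.

```python
# Feature normalization + random oversampling before semi-supervised training.
import numpy as np

def minmax_normalize(X):
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)      # smooth features into [0, 1]

def oversample(X, y, seed=0):
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):                      # duplicate minority-class rows
        if n < target:
            idx = rng.choice(np.flatnonzero(y == c), target - n, replace=True)
            Xs.append(X[idx]); ys.append(y[idx])
    return np.vstack(Xs), np.concatenate(ys)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)) * [1, 10, 100, 0.1, 1]       # features on very different scales
y = (rng.random(200) < 0.1).astype(int)                    # ~10% "defective" modules
Xb, yb = oversample(minmax_normalize(X), y)
print("class counts after oversampling:", np.bincount(yb))
```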

인공신경망 학습단계에서의 Genetic Algorithm을 이용한 입력변수 선정 (Input variables selection using genetic algorithm in training an artificial neural network)

  • 이재식;차봉근
    • 한국경영과학회:학술대회논문집 / 한국경영과학회 1996년도 추계학술대회발표논문집; 고려대학교, 서울; 26 Oct. 1996 / pp.27-30 / 1996
  • The determination of input variables for an artificial neural network (ANN) depends entirely on the judgement of the modeller. As the number of input variables increases, the training time for the resulting ANN increases exponentially. Moreover, a larger number of input variables does not guarantee better performance. In this research, we employ a genetic algorithm to select the input variables that yield the best performance in training the resulting ANN.

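A small sketch of GA-based input selection in the spirit of the abstract above: chromosomes are 0/1 masks over candidate inputs, and a plain least-squares fit stands in for the ANN training used as the fitness evaluator in the paper (an assumption made to keep the example compact).

```python
# Genetic algorithm over binary input masks; fitness = held-out error of a
# model trained on the selected inputs (least squares as a stand-in for an ANN).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = 2 * X[:, 0] - 3 * X[:, 3] + 0.5 * rng.normal(size=300)  # only inputs 0 and 3 matter

def fitness(mask):
    if not mask.any():
        return -np.inf
    A = X[:200][:, mask.astype(bool)]
    coef, *_ = np.linalg.lstsq(A, y[:200], rcond=None)
    resid = y[200:] - X[200:][:, mask.astype(bool)] @ coef
    return -np.mean(resid ** 2) - 0.01 * mask.sum()          # penalize extra inputs

pop = rng.integers(0, 2, size=(30, 8))
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:15]]
    kids = []
    for _ in range(15):
        a, b = parents[rng.choice(15, 2, replace=False)]
        child = np.where(rng.random(8) < 0.5, a, b)          # uniform crossover
        flip = rng.random(8) < 0.05                          # bit-flip mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected input variables:", np.flatnonzero(best))
```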

Discriminative Training of Stochastic Segment Model Based on HMM Segmentation for Continuous Speech Recognition

  • Chung, Yong-Joo;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / 제15권4E호 / pp.21-27 / 1996
  • In this paper, we propose a discriminative training algorithm for the stochastic segment model (SSM) in continuous speech recognition. As the SSM is usually trained by maximum likelihood estimation (MLE), a discriminative training algorithm is required to improve the recognition performance. Since the SSM does not assume the conditional independence of the observation sequence as is done in hidden Markov models (HMMs), the search space for decoding an unknown input utterance increases considerably. To reduce the computational complexity and the search space in an iterative discriminative training algorithm for SSMs, a hybrid architecture of SSMs and HMMs is used, in which segment boundaries are obtained by HMM segmentation. Given the segment boundaries, the parameters of the SSM are discriminatively trained by the minimum classification error criterion based on a generalized probabilistic descent (GPD) method. With the discriminative training of the SSM, the word error rate is reduced by 17% compared with the MLE-trained SSM in speaker-independent continuous speech recognition.

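A minimal MCE/GPD-style update in the spirit of the abstract above, shown for plain linear discriminant functions rather than stochastic segment models; the HMM segmentation and segment modelling of the paper are not reproduced, and the data, rates, and smoothing constant are assumptions.

```python
# GPD update on a smoothed 0/1 loss: push the correct-class score up and
# the competing-class scores down in proportion to their softmax weight.
import numpy as np

def gpd_train(X, y, n_classes, lr=0.1, eta=4.0, epochs=100):
    W = np.zeros((X.shape[1], n_classes))
    for _ in range(epochs):
        for x, c in zip(X, y):
            g = W.T @ x                                       # discriminant scores
            rivals = np.delete(g, c)
            m = rivals.max()
            d = -g[c] + m + np.log(np.mean(np.exp(eta * (rivals - m)))) / eta  # misclassification measure
            l = 1.0 / (1.0 + np.exp(-d))                      # smoothed 0/1 loss
            grad_d = l * (1.0 - l)                            # dl/dd
            soft = np.exp(eta * (rivals - m))
            soft /= soft.sum()
            W[:, c] += lr * grad_d * x                        # push the correct class up
            for k, j in enumerate([j for j in range(n_classes) if j != c]):
                W[:, j] -= lr * grad_d * soft[k] * x          # push competing classes down
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (40, 2)) for m in (-2, 0, 2)])
y = np.repeat([0, 1, 2], 40)
W = gpd_train(X, y, 3)
print("training accuracy:", ((X @ W).argmax(1) == y).mean())
```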