• Title/Summary/Keyword: Training Algorithm

Design of robust servo systems and application to control of training simulator for radio-controlled helicopter (강인한 서보계설계와 R/C헬리콥터 트레이닝 시뮬레이터 제어에의 응용)

  • 김상봉;박순실
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.15 no.2
    • /
    • pp.497-506
    • /
    • 1991
  • In this paper, a new construction for a training simulator of a radio-controlled (R/C) helicopter based on two types of servo controller is proposed. Two modified algorithms (algorithms I and II) for servo controller design are presented. Algorithm I is developed by adapting Davison's method to the case in which the homogeneous differential equations describing the reference input and the disturbance are of different types, and algorithm II extends algorithm I by introducing an error weighting function. The linear fractional transformation method is incorporated into both design methods in order to assign the closed-loop poles of the servo system to a specified region. The helicopter simulator is composed of gimbals with two degrees of freedom, rolling and pitching. The reliability and validity of the proposed servo controller design methods are investigated through practical experiments on the simulator using a 16-bit microcomputer with A/D and D/A converters. The experimental results show that the proposed servo controller is applicable to practical plants, since the simulator is robust to arbitrary disturbances and follows the given reference input without significant steady-state error.
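
The construction summarized in the abstract above follows the general robust-servomechanism idea: augment the plant with an internal model of the reference and disturbance dynamics, then place the closed-loop poles of the augmented system in a specified region. The following is only a minimal sketch of that augmentation for step-type signals, not the paper's exact formulation (which additionally handles reference and disturbance models of different types and uses a linear fractional transformation for regional pole assignment):

```latex
% Plant and tracking error
\dot{x} = A x + B u, \qquad y = C x, \qquad e = r - y
% Internal model of a step reference/disturbance (integral action)
\dot{x}_a = e
% Augmented system used for pole assignment, with state feedback
\frac{d}{dt}\begin{bmatrix} x \\ x_a \end{bmatrix}
  = \begin{bmatrix} A & 0 \\ -C & 0 \end{bmatrix}
    \begin{bmatrix} x \\ x_a \end{bmatrix}
  + \begin{bmatrix} B \\ 0 \end{bmatrix} u
  + \begin{bmatrix} 0 \\ I \end{bmatrix} r,
\qquad u = -K x - K_a x_a
```

The gains K and K_a are chosen so that the eigenvalues of the augmented closed loop lie in the desired region; the integral state x_a is what provides zero steady-state error against step disturbances.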

A Modified Error Function to Improve the Error Back-Propagation Algorithm for Multi-Layer Perceptrons

  • Oh, Sang-Hoon;Lee, Young-Jik
    • ETRI Journal
    • /
    • v.17 no.1
    • /
    • pp.11-22
    • /
    • 1995
  • This paper proposes a modified error function to improve the error back-propagation (EBP) algorithm for multi-layer perceptrons (MLPs), which suffers from slow learning speed. The proposed function can also suppress the over-specialization to training patterns that occurs with a cross-entropy cost function, which otherwise markedly reduces learning time. In a similar way to the cross-entropy function, the new function accelerates the learning speed of the EBP algorithm by allowing an output node of the MLP to generate a strong error signal when its output is far from the desired value. Moreover, it prevents over-specialization to the training patterns by letting an output node whose value is close to the desired value generate a weak error signal. In a simulation study classifying handwritten digits from the CEDAR [1] database, the proposed method attained 100% correct classification of the training patterns after only 50 sweeps of learning, while the original EBP attained only 98.8% after 500 sweeps. The method also shows a mean-squared error of 0.627 on the test patterns, better than the 0.667 obtained with the cross-entropy method. These results demonstrate that the new method outperforms the others in learning speed as well as in generalization.
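
The "strong error signal when far from the target, weak when close" behavior described in the abstract above is easiest to see in the output-layer deltas of a sigmoid node. The sketch below only contrasts the standard mean-squared-error and cross-entropy error signals that motivate the paper; the paper's own modified error function is not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Output-layer error signals (deltas) for a sigmoid output node with target t.
# MSE:            delta = (y - t) * y * (1 - y)  -> vanishes as y saturates
# Cross-entropy:  delta = (y - t)                -> stays strong when y is far from t
def delta_mse(y, t):
    return (y - t) * y * (1.0 - y)

def delta_cross_entropy(y, t):
    return y - t

y = sigmoid(np.array([-6.0, -2.0, 0.0, 2.0, 6.0]))  # example node activations
t = 1.0                                              # desired value
print(delta_mse(y, t))            # weak even when the output is badly wrong
print(delta_cross_entropy(y, t))  # strong when wrong, weak near the target
```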

A BLMS Adaptive Receiver for Direct-Sequence Code Division Multiple Access Systems

  • Hamouda Walaa;McLane Peter J.
    • Journal of Communications and Networks
    • /
    • v.7 no.3
    • /
    • pp.243-247
    • /
    • 2005
  • We propose an efficient block least-mean-square (BLMS) adaptive algorithm, in conjunction with error control coding, for direct-sequence code division multiple access (DS-CDMA) systems. The proposed adaptive receiver incorporates decision feedback detection and channel encoding in order to improve the performance of the standard LMS algorithm in convolutionally coded systems. The BLMS algorithm involves two modes of operation: (i) the training mode, where an uncoded training sequence is used for the initial adaptation of the filter tap weights, and (ii) the decision-directed mode, where the filter weights are adapted, using the BLMS algorithm, after the decoding/encoding operation. It is shown that the proposed receiver structure is able to compensate for the signal-to-noise ratio (SNR) loss incurred by switching from the uncoded training mode to the coded decision-directed mode. Our results show that the proposed adaptive receiver (with decision feedback block adaptation) achieves much better performance than the coded LMS receiver without decision feedback. The convergence behavior of the proposed BLMS receiver is simulated and compared to that of the standard LMS receiver with and without channel coding. We also examine the steady-state bit-error rate (BER) performance of the proposed adaptive BLMS and standard LMS receivers, both with convolutional coding, and show that the former is superior to the latter, especially at large SNRs (SNR ≥ 9 dB).
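
For reference, the block update at the heart of the receiver described above can be sketched as below. This is a generic textbook BLMS step, not the paper's exact receiver: in training mode the desired block `d` would be the known uncoded training sequence, and in decision-directed mode it would be rebuilt from the decoded and re-encoded decisions.

```python
import numpy as np

def blms_update(w, X, d, mu):
    """One block LMS (BLMS) tap-weight update.

    w  : current tap-weight vector, shape (L,)
    X  : block of input regressors, shape (B, L), one regressor per row
    d  : desired symbols for the block, shape (B,)
    mu : step size
    """
    y = X @ w                       # filter outputs for the whole block
    e = d - y                       # block of errors
    w = w + mu * (X.T @ e) / len(d) # single averaged weight update per block
    return w, e
```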

Generic Training Set based Multimanifold Discriminant Learning for Single Sample Face Recognition

  • Dong, Xiwei;Wu, Fei;Jing, Xiao-Yuan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.1
    • /
    • pp.368-391
    • /
    • 2018
  • Face recognition (FR) with a single sample per person (SSPP) is common in real-world face recognition applications. In this scenario, it is hard to predict the intra-class variations of query samples from the gallery samples due to the lack of sufficient training samples. Inspired by the fact that similar faces have similar intra-class variations, we propose a virtual sample generating algorithm called k nearest neighbors based virtual sample generating (kNNVSG) to enrich the intra-class variation information of the training samples. Furthermore, in order to exploit the intra-class variation information of the virtual samples generated by kNNVSG, we propose the image set based multimanifold discriminant learning (ISMMDL) algorithm. ISMMDL learns a projection matrix for each manifold modeled by the local patches of the images of each class, aiming to minimize the intra-manifold margins and maximize the inter-manifold margins simultaneously in a low-dimensional feature space. Finally, by combining the kNNVSG and ISMMDL algorithms, we propose the k nearest neighbor virtual image set based multimanifold discriminant learning (kNNMMDL) approach for single sample face recognition (SSFR) tasks. Experimental results on the AR, Multi-PIE and LFW face datasets demonstrate that our approach has promising abilities for SSFR under expression, illumination and disguise variations.
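
The virtual sample idea in the abstract above rests on "similar faces have similar intra-class variations". A hypothetical sketch of that step is given below; the function names, the use of class-mean images as the variation reference, and the value of k are illustrative assumptions, not the paper's exact kNNVSG procedure.

```python
import numpy as np

def knn_virtual_samples(gallery, generic_faces, generic_means, k=3):
    """Generate virtual samples for single-sample gallery subjects.

    gallery       : (G, D) one feature vector per enrolled person
    generic_faces : (N, D) faces from a generic training set with several
                    samples per person
    generic_means : (N, D) class-mean face for each generic sample, so that
                    generic_faces[i] - generic_means[i] is an intra-class
                    variation vector
    Returns a list of (k, D) virtual sample arrays, one per gallery subject.
    """
    virtual = []
    for g in gallery:
        dist = np.linalg.norm(generic_faces - g, axis=1)  # nearest generic faces
        idx = np.argsort(dist)[:k]
        variations = generic_faces[idx] - generic_means[idx]
        virtual.append(g + variations)  # transplant their variations onto g
    return virtual
```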

Estimating Evapotranspiration of Rice Crop Using Neural Networks -Application of Back-propagation and Counter-propagation Algorithm- (신경회로망을 이용한 수도 증발산량 예측 -백프로파게이션과 카운터프로파게이션 알고리즘의 적용-)

  • 이남호;정하우
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.36 no.2
    • /
    • pp.88-95
    • /
    • 1994
  • This paper evaluates the applicability of neural networks to the estimation of evapotranspiration. Two neural networks were developed to forecast the daily evapotranspiration of the rice crop, one with the back-propagation algorithm and one with the counter-propagation algorithm. The network trained by back-propagation with the delta learning rule is a three-layer network with input, hidden, and output layers. The counter-propagation network is a four-layer network with input, normalizing, competitive, and output layers. The networks were trained using daily actual evapotranspiration of the rice crop and daily climatic data such as mean temperature, sunshine hours, solar radiation, relative humidity, and pan evaporation; the network parameters were calibrated during training. The trained networks were then applied to a set of field data not used in training. The response of the back-propagation network was in good agreement with the desired values and showed better performance than the counter-propagation network. This evaluation indicates that the back-propagation neural network may be applied to the estimation of evapotranspiration of the rice crop. The study does not provide a conclusive statement on the ability of neural networks to estimate evapotranspiration; a more detailed study is required to better understand and evaluate their behavior.
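
As a rough illustration of the three-layer back-propagation network with the delta rule mentioned above: the sketch below maps the five climatic inputs to a single evapotranspiration output. Layer sizes, learning rate, and the linear output unit are assumptions for illustration only, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 climatic inputs (mean temperature, sunshine hours, solar radiation,
# relative humidity, pan evaporation) -> 1 output (daily evapotranspiration).
n_in, n_hidden, n_out = 5, 8, 1
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, lr=0.05):
    """One back-propagation (generalized delta rule) step for a 3-layer net."""
    global W1, W2
    h = sigmoid(x @ W1)              # hidden activations
    y = h @ W2                       # linear output for a regression target
    delta_out = y - t                # output-layer error signal
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * np.outer(h, delta_out)
    W1 -= lr * np.outer(x, delta_hid)
    return float(0.5 * np.sum(delta_out ** 2))
```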

The Development of Exercise Accuracy Measurement Algorithm Supporting Personal Training's Exercise Amount Improvement

  • Oh, Seung-Taek;Kim, Hyeong-Seok;Lim, Jae-Hyun
    • International journal of advanced smart convergence
    • /
    • v.11 no.4
    • /
    • pp.57-67
    • /
    • 2022
  • The demand for personal training (PT), through which high exercise effects can be achieved within a short term, has recently increased. PT improves the amount of exercise only if accurate postures are maintained while performing it, and exercising with inaccurate postures can cause injuries. However, research on comparing exercise amounts and judging exercise accuracy in PT is insufficient. This study proposes an exercise accuracy measurement algorithm and compares the differences in exercise amount according to exercise posture through experiments using a respiratory gas analyzer. The exercise accuracy measurement algorithm acquires Euler angles from the major body parts involved in the exercise through a motion device, from which the joint angles are calculated. Exercise accuracy is judged by comparing the calculated joint angles with the reference angle of each exercise step. The calculated exercise accuracy results for squats, lunges, and push-ups showed a 0.02% difference compared with measurements taken with a goniometer. In the exercise amount comparison experiment using a respiratory gas analyzer, the exercise amount was higher by 45.19% on average with accurate postures. This confirms that maintaining accurate postures contributes to improving the amount of exercise.
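
The angle-comparison step described above can be sketched as follows. This is only an illustration of the idea: the sagittal-plane approximation of a joint angle from segment pitch (Euler) angles and the 10-degree tolerance are assumptions; the paper compares joint angles against per-step reference angles defined for each exercise.

```python
def joint_angle_from_pitch(proximal_pitch_deg, distal_pitch_deg):
    """Approximate a sagittal-plane joint angle as the difference between the
    pitch (Euler) angles of the two adjacent body segments."""
    return abs(proximal_pitch_deg - distal_pitch_deg)

def step_is_accurate(measured_angles, reference_angles, tolerance_deg=10.0):
    """Judge one exercise step: every tracked joint must lie within a
    tolerance of its reference angle for that step (tolerance assumed here)."""
    return all(abs(m - r) <= tolerance_deg
               for m, r in zip(measured_angles, reference_angles))
```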

A Comparison of the Effects of Optimization Learning Rates using a Modified Learning Process for Generalized Neural Network (일반화 신경망의 개선된 학습 과정을 위한 최적화 신경망 학습률들의 효율성 비교)

  • Yoon, Yeochang;Lee, Sungduck
    • The Korean Journal of Applied Statistics
    • /
    • v.26 no.5
    • /
    • pp.847-856
    • /
    • 2013
  • We propose a modified learning process for generalized neural network using a learning algorithm by Liu et al. (2001). We consider the effect of initial weights, training results and learning errors using a modified learning process. We employ an incremental training procedure where training patterns are learned systematically. Our algorithm starts with a single training pattern and a single hidden layer neuron. During the course of neural network training, we try to escape from the local minimum by using a weight scaling technique. We allow the network to grow by adding a hidden layer neuron only after several consecutive failed attempts to escape from a local minimum. Our optimization procedure tends to make the network reach the error tolerance with no or little training after the addition of a hidden layer neuron. Simulation results with suitable initial weights indicate that the present constructive algorithm can obtain neural networks very close to minimal structures and that convergence to a solution in neural network training can be guaranteed. We tested these algorithms extensively with small training sets.
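
The constructive procedure described in the abstract above (grow a hidden neuron only after repeated failed escapes from a local minimum, using weight scaling as the escape move) can be summarized as the skeleton below. All helper callbacks are hypothetical placeholders, and the stopping and counting details are a guess at the described behavior rather than the authors' exact algorithm.

```python
def constructive_training(train_fn, scale_weights_fn, add_neuron_fn,
                          error_tolerance, max_failed_escapes=3):
    """Skeleton of incremental training with weight scaling and network growth.

    train_fn(n_hidden)  -> current training error after training until it stalls
    scale_weights_fn()  -> rescale weights to try to escape a local minimum
    add_neuron_fn()     -> add one hidden-layer neuron to the network
    """
    hidden_neurons = 1
    failed_escapes = 0
    error = train_fn(hidden_neurons)
    while error > error_tolerance:
        scale_weights_fn()                      # attempt to escape the minimum
        new_error = train_fn(hidden_neurons)
        if new_error < error:                   # escape succeeded
            failed_escapes = 0
        else:                                   # escape failed
            failed_escapes += 1
            if failed_escapes >= max_failed_escapes:
                add_neuron_fn()                 # grow the network
                hidden_neurons += 1
                failed_escapes = 0
        error = min(error, new_error)
    return hidden_neurons
```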

Development of Brain-Style Intelligent Information Processing Algorithm Through the Merge of Supervised and Unsupervised Learning I: Generation of Exemplar Patterns for Training (교사학습과 비교사 학습의 접목에 의한 두뇌방식의 지능 정보 처리 알고리즘I: 학습패턴의 생성)

  • 오상훈
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.05a
    • /
    • pp.56-62
    • /
    • 2004
  • When a sufficient number of training patterns is not available because of limitations such as time or cost, we generate new patterns using a brain-style information processing algorithm, that is, a combination of supervised and unsupervised learning methods.

Training HMM Structure and Parameters with Genetic Algorithm and Harmony Search Algorithm

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of Electrical Engineering and Technology
    • /
    • v.7 no.1
    • /
    • pp.109-114
    • /
    • 2012
  • In this paper, we study a training strategy for hidden Markov models (HMMs) that can be applied to versatile problems such as the classification of time-series sequential data, e.g., the electric transient disturbance problem in power systems. An automatic means of optimizing HMMs would be highly desirable, but it raises important issues: model interpretation and complexity control. With this in mind, we explore the possibility of using the genetic algorithm (GA) and the harmony search (HS) algorithm for optimizing the HMM. The GA is flexible enough to allow other methods, such as Baum-Welch, to be incorporated within its cycle, and operators that alter the structure of HMMs can be designed to keep the structures simple. The HS algorithm with the parameter-setting-free technique is suitable for optimizing the parameters of the HMM, since it eliminates tedious parameter-assignment effort. A sequential data analysis simulation is presented and the optimized HMMs are evaluated. The optimized HMM was able to classify a sequential test data set better than the normal HMM.
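
One way to picture the GA-around-Baum-Welch arrangement mentioned above is the skeleton below, where the fitness of each candidate HMM is its data log-likelihood after a few Baum-Welch iterations. Every helper function is a hypothetical placeholder, and the selection/variation scheme is a generic GA, not the operators designed in the paper (which also modify HMM structure).

```python
import random

def ga_optimize_hmm(random_hmm, baum_welch, log_likelihood, crossover, mutate,
                    data, population_size=20, generations=50):
    """Skeleton GA with local Baum-Welch refinement inside each generation."""
    population = [random_hmm() for _ in range(population_size)]
    for _ in range(generations):
        # refine every candidate a little before scoring it
        refined = [baum_welch(hmm, data, iterations=5) for hmm in population]
        scored = sorted(refined, key=lambda h: log_likelihood(h, data),
                        reverse=True)
        parents = scored[:population_size // 2]      # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(population_size - len(parents))]
        population = parents + children
    return max(population, key=lambda h: log_likelihood(h, data))
```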

Combining a HMM with a Genetic Algorithm for the Fault Diagnosis of Photovoltaic Inverters

  • Zheng, Hong;Wang, Ruoyin;Xu, Wencheng;Wang, Yifan;Zhu, Wen
    • Journal of Power Electronics
    • /
    • v.17 no.4
    • /
    • pp.1014-1026
    • /
    • 2017
  • The traditional fault diagnosis method for photovoltaic (PV) inverters has difficulty meeting the requirements of today's complex systems: its main weakness lies in handling nonlinear systems, its diagnosis time is long, and its accuracy is low. To solve these problems, a hidden Markov model (HMM) is used, which has unique advantages in its training model and its recognition capability for fault diagnosis. However, the initial values of the HMM have a great influence on the model, and the training process may converge to a local minimum. Therefore, a genetic algorithm is used to optimize the initial values and to achieve global optimization. In this paper, the HMM is combined with a genetic algorithm (GHMM) for PV inverter fault diagnosis. First, Matlab is used to implement the genetic algorithm and to determine the optimal HMM initial values. Then the Baum-Welch algorithm is used for iterative training. Finally, the Viterbi algorithm is used for fault identification. Experimental results show that the correct PV inverter fault recognition rate of the HMM is about 10% higher than that of traditional methods. With the GHMM, the correct recognition rate is further increased by approximately 13%, and the diagnosis time is greatly reduced. Therefore, the GHMM is faster and more accurate in diagnosing PV inverter faults.
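
The Viterbi-based identification step in the pipeline above is commonly realized by training one HMM per fault class and reporting the class whose model best explains the observed symptom sequence. The sketch below shows a standard log-domain Viterbi score and that selection rule; it is a generic illustration (with a discrete-observation HMM assumed), not the paper's Matlab implementation.

```python
import numpy as np

def viterbi_log_score(log_pi, log_A, log_B, obs):
    """Log-probability of the best state path for a discrete-observation HMM.

    log_pi : (N,)    log initial state probabilities
    log_A  : (N, N)  log transition probabilities
    log_B  : (N, M)  log emission probabilities
    obs    : sequence of observation symbol indices
    """
    delta = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        delta = np.max(delta[:, None] + log_A, axis=0) + log_B[:, o]
    return np.max(delta)

def identify_fault(fault_models, obs):
    """Pick the fault class whose HMM explains the observation sequence best.

    fault_models maps fault name -> (log_pi, log_A, log_B).
    """
    return max(fault_models,
               key=lambda name: viterbi_log_score(*fault_models[name], obs))
```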