• Title/Summary/Keyword: gradient-descent method

Development of Visual Servo Control System for the Tracking and Grabbing of Moving Object (이동 물체 포착을 위한 비젼 서보 제어 시스템 개발)

  • Choi, G.J.;Cho, W.S.;Ahn, D.S.
    • Journal of Power System Engineering / v.6 no.1 / pp.96-101 / 2002
  • In this paper, we address the problem of controlling an end-effector to track and grab a moving target using the visual servoing technique. A visual servo mechanism based on the image-based servoing principle is proposed, using visual feedback to control the end-effector without calibrated robot and camera models. First, we formulate the control problem as a nonlinear least-squares optimization and update the joint angles through a Taylor series expansion. Then, to track a moving target in real time, a Jacobian estimation scheme (dynamic Broyden's method) is used to estimate the combined robot and image Jacobian. Using this algorithm, the objective function value can be driven to a neighborhood of zero. To show the effectiveness of the proposed algorithm, simulation results for a six-degree-of-freedom robot are presented.
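
The Jacobian estimation and joint update at the core of this approach can be sketched briefly. The following Python fragment is a minimal illustration, assuming generic array shapes and an illustrative gain; it shows a rank-one Broyden update of the combined robot-image Jacobian and a least-squares (Gauss-Newton style) joint correction, not the paper's exact dynamic formulation.

```python
import numpy as np

def broyden_update(J, dtheta, dfeat):
    """Rank-one Broyden update of the combined robot-image Jacobian estimate.
    J: current estimate (m x n), dtheta: last joint change (n,),
    dfeat: observed change in image features over the same step (m,)."""
    denom = dtheta @ dtheta
    if denom < 1e-12:                 # joints barely moved; keep the old estimate
        return J
    return J + np.outer(dfeat - J @ dtheta, dtheta) / denom

def visual_servo_step(J, feature_error, gain=0.5):
    """Least-squares joint update driving the image feature error toward zero."""
    return -gain * np.linalg.pinv(J) @ feature_error
```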

High Efficiency Life Prediction and Exception Processing Method of NAND Flash Memory-based Storage using Gradient Descent Method (경사하강법을 이용한 낸드 플래시 메모리기반 저장 장치의 고효율 수명 예측 및 예외처리 방법)

  • Lee, Hyun-Seob
    • Journal of Convergence for Information Technology / v.11 no.11 / pp.44-50 / 2021
  • Recently, enterprise storage systems that require large-capacity storage devices to accommodate big data have adopted flash memory-based storage devices, which offer high density relative to cost and size. This paper proposes a high-efficiency life prediction method based on gradient descent to maximize the life of the flash memory media that directly affect the reliability and usability of large enterprise storage devices. To this end, this paper proposes a matrix structure for storing the metadata used to learn defect frequencies, together with a cost model that uses this metadata. It also proposes a life-prediction policy for exceptional situations in which defects outside the learned range occur. Lastly, it was verified through simulation that the proposed method can maximize device life compared with a life prediction method based on a fixed number of erase cycles and one based on the remaining ratio of spare blocks, which have been used to predict the life of flash memory.
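
The numerical core described here, learning a defect trend and extrapolating it to an end-of-life threshold, can be illustrated with plain gradient descent. The data, the linear wear model, and the threshold below are illustrative assumptions, not the paper's metadata matrix or cost model.

```python
import numpy as np

# Assumed samples: erase counts (thousands of P/E cycles) and observed defect ratios.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.01, 0.03, 0.06, 0.10])

a, b, lr = 0.0, 0.0, 0.05            # linear wear model: defect_rate ≈ a*x + b
for _ in range(5000):
    err = a * x + b - y              # residual of the current model
    a -= lr * 2 * np.mean(err * x)   # gradient step for the slope
    b -= lr * 2 * np.mean(err)       # gradient step for the intercept

eol_ratio = 0.5                      # assumed end-of-life defect ratio
print("predicted life (thousand erase cycles):", (eol_ratio - b) / a)
```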

Study on the Effective Compensation of Quantization Error for Machine Learning in an Embedded System (임베디드 시스템에서의 양자화 기계학습을 위한 효율적인 양자화 오차보상에 관한 연구)

  • Seok, Jinwuk
    • Journal of Broadcast Engineering / v.25 no.2 / pp.157-165 / 2020
  • In this paper, we propose an effective compensation scheme for the quantization error arising from quantized learning in machine learning on an embedded system. In machine learning based on gradient descent or nonlinear signal processing, quantization error causes early vanishing of the gradient and degrades learning performance. To compensate for this quantization error, we derive a compensation vector orthogonal to the maximum component of the gradient vector. Moreover, instead of the conventional constant learning rate, we propose an adaptive learning-rate algorithm, based on a nonlinear optimization technique, that selects the step size without any inner loop. The simulation results show that an optimization solver based on the proposed quantized method achieves sufficient learning performance.
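
One possible reading of the compensation idea, quantize the gradient and then restore the part of the quantization error that does not lie along the gradient's dominant component, is sketched below. The quantization step, the axis choice, and the update form are assumptions for illustration, not the paper's derivation.

```python
import numpy as np

def quantize(v, step=2.0 ** -8):
    """Uniform fixed-step quantization, standing in for low-precision arithmetic."""
    return np.round(v / step) * step

def compensated_step(w, grad, lr=0.1, step=2.0 ** -8):
    """Gradient step with a compensation vector orthogonal to the axis of the
    gradient's largest-magnitude component (illustrative sketch only)."""
    qg = quantize(grad, step)
    err = grad - qg                    # quantization error of the gradient
    k = np.argmax(np.abs(grad))        # dominant gradient component
    comp = err.copy()
    comp[k] = 0.0                      # keep only the part orthogonal to axis k
    return w - lr * (qg + comp)
```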

Simultaneous optimization method of feature transformation and weighting for artificial neural networks using genetic algorithm: Application to Korean stock market

  • Kim, Kyoung-jae;Ingoo Han
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.10a / pp.323-335 / 1999
  • In this paper, we propose a new hybrid model of artificial neural networks (ANNs) and a genetic algorithm (GA) for optimal feature transformation and feature weighting. Previous research has proposed several hybrid ANN-GA variants, including feature weighting, feature subset selection, and network structure optimization. In most of these studies, however, the ANNs did not learn the patterns in the data well because the GA was employed in only a limited role. In this study, we incorporate the GA so that it optimizes feature weighting and feature transformation simultaneously, improving the learning and generalization ability of the ANNs. Globally optimized feature weighting overcomes the well-known limitations of the gradient descent algorithm, and globally optimized feature transformation reduces the dimensionality of the feature space and eliminates irrelevant factors in modeling ANNs. By this procedure, we can improve the performance and enhance the generalizability of ANNs.
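
A compact sketch of the GA side of such a hybrid, evolving a vector of feature weights against a plug-in fitness function, is given below. The toy data and the least-squares "model" used as the fitness stand in for the trained ANN and the stock-market features; they are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                                  # assumed feature matrix
y = (X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=200) > 0).astype(float)

def fitness(weights):
    """Proxy fitness: accuracy of a least-squares model on GA-weighted features."""
    A = np.c_[X * weights, np.ones(len(X))]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.mean(((A @ beta) > 0.5).astype(float) == y)

pop = rng.random((30, X.shape[1]))                             # population of weight vectors
for _ in range(40):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                    # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])                      # one-point crossover
        child = np.r_[a[:cut], b[cut:]]
        child += rng.normal(0, 0.05, child.shape) * (rng.random(child.shape) < 0.1)  # mutation
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.array(children)

print("best feature weights:", np.round(pop[np.argmax([fitness(w) for w in pop])], 2))
```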

Tuning Learning Rate in Neural Network Using Fuzzy Model (퍼지 모델을 이용한 신경망의 학습률 조정)

  • 라혁주;서재용;김성주;전홍태
    • Proceedings of the IEEK Conference / 2003.07d / pp.1239-1242 / 2003
  • Neural networks are a well-known model for learning nonlinear functions or nonlinear systems. The key point of a neural network is that the difference between the actual output and the desired output is used to update the weights, and the gradient descent method is usually used for this learning process. During training, if the learning rate is too large, convergence of the network can hardly be guaranteed; on the other hand, if the learning rate is too small, training takes a long time. Therefore, one major problem in using neural networks is to decrease the training time while still guaranteeing convergence. In this paper, we apply a fuzzy logic model to neural networks to calibrate the learning rate. This method tunes the learning rate dynamically according to the error and demonstrates improved training.
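
The idea of mapping the current error to a step size through fuzzy rules can be shown in a few lines. The membership functions and output levels below are illustrative assumptions, not the rule base used in the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_learning_rate(error, lr_small=0.01, lr_medium=0.1, lr_large=0.5):
    """Toy rule base: small error -> small step, large error -> large step."""
    e = min(abs(error), 1.0)               # normalized absolute error in [0, 1]
    mu_small = max(1.0 - e / 0.5, 0.0)     # shoulder membership for "small"
    mu_medium = tri(e, 0.0, 0.5, 1.0)      # triangular membership for "medium"
    mu_large = max((e - 0.5) / 0.5, 0.0)   # shoulder membership for "large"
    num = mu_small * lr_small + mu_medium * lr_medium + mu_large * lr_large
    return num / (mu_small + mu_medium + mu_large)   # weighted-average defuzzification
```

The returned rate would simply replace the constant learning rate in the usual weight update, e.g. w -= fuzzy_learning_rate(error) * gradient.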

On Learning of HMM-Net Classifiers Using Hybrid Methods (하이브리드법에 의한 HMM-Net 분류기의 학습)

  • 김상운;신성효
    • Proceedings of the IEEK Conference / 1998.10a / pp.1273-1276 / 1998
  • The HMM-Net is a neural network architecture that implements a hidden Markov model (HMM). The architecture was developed to combine the discriminant power of neural networks with the time-domain modeling capability of HMMs. The criteria used for training HMM-Net classifiers are maximum likelihood (ML), maximum mutual information (MMI), and minimization of the mean squared error (MMSE). In this paper we propose an efficient learning method for HMM-Net classifiers using the hybrid criteria ML/MMSE and MMI/MMSE, and report the results of an experimental study comparing the performance of HMM-Net classifiers trained by the gradient descent algorithm under the above criteria. Experimental results for the isolated numeric digits /0/ to /9/ show that the proposed method outperforms the others in terms of learning and recognition rates.
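
The flavor of a hybrid criterion can be conveyed with a generic example that mixes a likelihood term and a squared-error term in a single gradient; the softmax toy below is not the HMM-Net formulation, only an illustration of combining ML and MMSE objectives under gradient descent.

```python
import numpy as np

def hybrid_loss_grad(logits, target, alpha=0.5):
    """Gradient of alpha*NLL + (1-alpha)*MSE with respect to softmax logits."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    grad_nll = p - target                       # d(-log p_target)/d logits, one-hot target
    jac = np.diag(p) - np.outer(p, p)           # softmax Jacobian
    grad_mse = jac @ (2 * (p - target))         # d||p - target||^2 / d logits
    return alpha * grad_nll + (1 - alpha) * grad_mse

logits = np.zeros(10)                           # e.g. the ten digit classes /0/ .. /9/
target = np.eye(10)[3]
for _ in range(200):
    logits -= 0.5 * hybrid_loss_grad(logits, target)   # plain gradient descent
print(np.argmax(logits))                        # settles on class 3
```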

PID Learning Controller for Multivariable System with Dynamic Friction (동적 마찰이 있는 다변수 시스템에서의 PID 학습 제어)

  • Chung, Byeong-Mook
    • Journal of the Korean Society for Precision Engineering / v.24 no.12 / pp.57-64 / 2007
  • There has been much research on optimal controllers for multivariable systems, and it generally relies on accurate linear models of the plant dynamics. Real systems, however, contain nonlinearities and high-order dynamics that may be difficult to model using conventional techniques. Therefore, a PID gain tuning method that does not require explicit modeling of the multivariable plant dynamics is necessary. The proposed PID tuning method uses the sign of the Jacobian and gradient descent techniques to iteratively reduce an error-related objective function. This paper focuses in particular on the role of the I-controller when there is a steady-state error. Unlike the P- and D-gains, however, the I-gain is not easy to tune because the I-controller operates mainly in the steady state. Simulations for an overhead crane system with dynamic friction show that the proposed PID-LC algorithm improves controller performance, even with respect to steady-state error.
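
A minimal sketch of a gain update that uses only the sign of the plant Jacobian is shown below; the objective, step size, and sign convention are assumptions for illustration rather than the paper's exact PID-LC update law.

```python
def update_pid_gains(gains, e, e_int, e_dot, plant_sign=1.0, eta=1e-3):
    """One gradient-style tuning step for (Kp, Ki, Kd) minimizing J = 0.5*e^2.
    With e = r - y, dJ/dK = -e * (dy/du) * (du/dK); dy/du is replaced by its sign."""
    kp, ki, kd = gains
    kp += eta * plant_sign * e * e        # du/dKp = e
    ki += eta * plant_sign * e * e_int    # du/dKi = integral of e
    kd += eta * plant_sign * e * e_dot    # du/dKd = derivative of e
    return kp, ki, kd
```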

Design of an Adaptive Neuro-Fuzzy Inference Precompensator for Load Frequency Control of Two-Area Power Systems (2지역 전력계통의 부하주파수 제어를 위한 적응 뉴로 퍼지추론 보상기 설계)

  • 정형환;정문규;한길만
    • Journal of Advanced Marine Engineering and Technology / v.24 no.2 / pp.72-81 / 2000
  • In this paper, we design an adaptive neuro-fuzzy inference system (ANFIS) precompensator for load frequency control of two-area power systems. While proportional-integral-derivative (PID) controllers are widely used in power systems, they may perform poorly because of the systems' strong nonlinearities. Therefore, a neuro-fuzzy-based precompensation scheme is combined with a conventional PID controller to obtain robustness to these nonlinearities. The proposed technique can be easily implemented by adding a precompensator to an existing PID controller. The neuro-fuzzy inference system precompensator uses a hybrid learning algorithm that combines a gradient descent method to optimize the premise parameters with a least squares method to solve for the consequent parameters. Simulation results show that the proposed control technique is superior to a conventional Ziegler-Nichols PID controller in its dynamic response to load disturbances.
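
The hybrid learning rule mentioned here alternates a least-squares solve for the linear consequent parameters with gradient steps on the premise (membership) parameters. The fragment below sketches only the least-squares half for a one-input, first-order Sugeno model; the Gaussian memberships and data layout are illustrative assumptions.

```python
import numpy as np

def firing_strengths(x, centers, sigmas):
    """Normalized Gaussian firing strengths for each rule (premise layer)."""
    w = np.exp(-0.5 * ((x[:, None] - centers) / sigmas) ** 2)
    return w / w.sum(axis=1, keepdims=True)

def fit_consequents(x, y, centers, sigmas):
    """Hybrid-learning step 1: solve the linear consequent parameters (p_i, r_i)
    of y = sum_i wn_i * (p_i*x + r_i) by least squares, premises held fixed.
    (Step 2, a gradient descent pass over centers/sigmas, is omitted here.)"""
    wn = firing_strengths(x, centers, sigmas)
    A = np.hstack([wn * x[:, None], wn])           # columns: wn_i*x then wn_i
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta.reshape(2, len(centers)).T        # one (p_i, r_i) row per rule
```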

Forecasting Long-Term Streamflow from a Small Watershed Using Artificial Neural Network (인공신경망 이론을 이용한 소유역에서의 장기 유출 해석)

  • 강문성;박승우
    • Magazine of the Korean Society of Agricultural Engineers / v.43 no.2 / pp.69-77 / 2001
  • An artificial neural network model was developed to analyze and forecast daily streamflow from a small watershed. Error back-propagation neural networks (EBPN) trained on daily rainfall and runoff data were found to perform well in simulating streamflow. The model adopts a gradient descent method in which momentum and an adaptive learning rate are employed to mitigate local-minimum problems and speed up the convergence of the EBP method. The number of hidden nodes was optimized using the Bayesian information criterion. The resulting optimal EBPN model for forecasting daily streamflow uses three rainfall and four runoff inputs (Model34), and the best number of hidden nodes was found to be 13. The proposed model simulates daily streamflow satisfactorily compared with the data observed at the HS#3 watershed of the Baran watershed project, which covers 391.8 ha and has relatively steep topography and complex land use.
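
The weight update described, gradient descent with momentum plus an error-driven adaptive learning rate, can be written compactly; the growth and shrink factors below are common defaults assumed for illustration, not values from the study.

```python
def ebp_update(w, grad, velocity, lr, prev_loss, loss,
               momentum=0.9, lr_up=1.05, lr_down=0.7):
    """One back-propagation update with momentum and a simple adaptive rule:
    grow the learning rate while the loss falls, shrink it when the loss rises."""
    lr = lr * lr_up if loss < prev_loss else lr * lr_down
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity, lr
```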

The Design of Fuzzy-Neural Networks using FCM Algorithms (FCM 알고리즘을 이용한 퍼지-뉴럴 네트워크 설계)

  • Yoon, Ki-Chan;Park, Byoung-Jun;Oh, Sung-Kwun;Lee, Sung-Hwan
    • Proceedings of the KIEE Conference / 2000.11d / pp.803-805 / 2000
  • In this paper, we propose fuzzy-neural networks (FNN) that are useful for identification algorithms. The proposed FNN model consists of two steps: the first determines the premise and consequent parameters approximately using the FCM_RI method, and the second adjusts them more precisely with a gradient descent algorithm. The FCM_RI algorithm combines the FCM clustering algorithm with the recursive least squares (RLS) method and divides the input space more efficiently than conventional methods by taking into account correlations between components of the sample data. To evaluate the performance of the proposed FNN model, we use gas furnace time series data.
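
The clustering stage that places the initial rule premises can be illustrated with a plain fuzzy c-means loop; the cluster count, fuzzifier m, and iteration budget are assumptions, and the RLS and gradient descent refinement stages of FCM_RI are not shown.

```python
import numpy as np

def fcm(X, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means: alternate centroid and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)                       # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]      # fuzzy-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))                    # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```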
