• Title/Summary/Keyword: gradient algorithm

Search Results: 1,168

Image Classification using Deep Learning Algorithm and 2D Lidar Sensor (딥러닝 알고리즘과 2D Lidar 센서를 이용한 이미지 분류)

  • Lee, Junho; Chang, Hyuk-Jun
    • Journal of IKEEE / v.23 no.4 / pp.1302-1308 / 2019
  • This paper presents an approach for classifying images constructed from position data acquired by a 2D Lidar sensor, using a convolutional neural network (CNN). Lidar sensors have been widely used for unmanned devices owing to their advantages in terms of data accuracy and robustness against geometric distortion and light variations. A CNN consists of one or more convolutional and pooling layers and has shown satisfactory performance for image classification. In this paper, CNN architectures based on two different training methods, Gradient Descent (GD) and Levenberg-Marquardt (LM), are implemented. The LM method has two variants that differ in how frequently the Hessian matrix, one of the factors used to update the training parameters, is approximated. Simulation results show that the LM algorithms classify the image data better than the GD algorithm does. In addition, the LM variant with more frequent Hessian approximation shows a smaller error than the other variant.
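
The update rules behind the two trainers compared above are standard, even though the paper's exact implementation is not reproduced here. A minimal sketch of the difference (function names, shapes, and the damping value are illustrative assumptions): GD follows the raw error gradient, while LM preconditions it with the Gauss-Newton approximation J^T J of the Hessian.

```python
import numpy as np

def gd_step(w, jac, err, lr=1e-2):
    """Plain gradient descent: w <- w - lr * J^T e."""
    return w - lr * (jac.T @ err)

def lm_step(w, jac, err, mu=1e-3):
    """Levenberg-Marquardt: w <- w - (J^T J + mu*I)^-1 J^T e.
    J^T J is the Gauss-Newton approximation of the Hessian; the two LM
    variants in the paper differ in how often it is recomputed."""
    hess_approx = jac.T @ jac + mu * np.eye(w.size)
    return w - np.linalg.solve(hess_approx, jac.T @ err)
```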

Anisotropic Total Variation Denoising Technique for Low-Dose Cone-Beam Computed Tomography Imaging

  • Lee, Ho; Yoon, Jeongmin; Lee, Eungman
    • Progress in Medical Physics / v.29 no.4 / pp.150-156 / 2018
  • This study aims to develop an improved Feldkamp-Davis-Kress (FDK) reconstruction algorithm using anisotropic total variation (ATV) minimization to enhance the image quality of low-dose cone-beam computed tomography (CBCT). The algorithm first applies a filter that integrates the Shepp-Logan filter into a cosine window function on all projections for impulse noise removal. A total variation objective function with an anisotropic penalty is then minimized to enhance the difference between real structure and noise, using steepest gradient descent optimization with adaptive step sizes. The preserving parameter that adjusts the separation between noise-free and noisy areas is determined by calculating the cumulative distribution function of the gradient magnitude of the filtered image obtained by applying the filtering operation to each projection. With these ATV-minimized projections, voxel-driven backprojection is finally performed to generate the reconstructed images. The performance of the proposed algorithm was evaluated with the Catphan 503 phantom dataset acquired using a low-dose protocol. Qualitative and quantitative analyses showed that the proposed ATV minimization provides enhanced CBCT reconstruction images compared with those generated by the conventional FDK algorithm, with a higher contrast-to-noise ratio (CNR), a lower root-mean-square error, and a higher correlation. The proposed algorithm not only leads to a potential imaging dose reduction in repeated CBCT scans via lower mA levels, but also elicits high CNR values by removing noise-corrupted areas and by avoiding heavy penalization of striking features.
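
The core of the ATV step described above is a steepest-descent iteration on a TV-regularized objective. A minimal 2D sketch under simplifying assumptions (a fixed step size instead of the paper's adaptive one, and no preserving parameter):

```python
import numpy as np

def atv_denoise(img, lam=0.1, step=0.2, iters=50, eps=1e-8):
    """Steepest descent on E(u) = 0.5*||u - img||^2 + lam*(|du/dx| + |du/dy|),
    an anisotropic total-variation objective."""
    u = img.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])      # forward differences
        dy = np.diff(u, axis=0, append=u[-1:, :])
        px = dx / np.sqrt(dx**2 + eps)                 # smoothed sign of dx
        py = dy / np.sqrt(dy**2 + eps)
        # Divergence of (px, py) via backward differences
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - img) - lam * div)            # gradient step on E
    return u
```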

Implementation of adaptive filters using fast Hadamard transform (고속하다마드 변환을 이용한 적응 필터의 구현)

  • 곽대연; 박진배; 윤태성
    • Conference Proceedings of the Institute of Control, Robotics and Systems / 1997.10a / pp.1379-1382 / 1997
  • We introduce a fast implementation of the adaptive transversal filter that uses the least-mean-square (LMS) algorithm. The fast Hadamard transform (FHT) is used to implement the filter. The proposed filter yields a significant reduction in computation time over the conventional time-domain LMS filter at the cost of a slight loss in performance. Through computer simulation, we compare the proposed Hadamard-domain filter with the time-domain filter in terms of multiplication time, mean-square error, and robustness to noise.
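
The time-domain LMS baseline that the Hadamard-domain filter is compared against can be sketched as follows; transform-domain variants apply the same update to transformed tap vectors. All names and parameter values here are illustrative, not the paper's.

```python
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Time-domain LMS adaptive transversal filter.
    x: input signal, d: desired signal, mu: step size."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1 : n + 1][::-1]   # tap-delay line x[n]..x[n-M+1]
        y[n] = w @ u                          # filter output
        e[n] = d[n] - y[n]                    # estimation error
        w += mu * e[n] * u                    # stochastic-gradient update
    return y, e, w
```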

DIFFERENTIAL LEARNING AND ICA

  • Park, Seungjin
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2003.09a / pp.162-165 / 2003
  • Differential learning relies on the differentiated values of nodes, whereas conventional learning depends on the node values themselves. In this paper, I elucidate differential learning within the framework of maximum likelihood learning of a linear generative model whose latent variables obey a random walk. I apply the idea of differential learning to the problem of independent component analysis (ICA), which leads to differential ICA. An algorithm derivation using the natural gradient and a local stability analysis are provided. The usefulness of the algorithm is emphasized for the blind separation of temporally correlated sources and is demonstrated through a simple numerical example.
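
For reference, the standard (non-differential) natural-gradient ICA update that the paper builds on takes the following form; the differential variant would work with temporal differences of the outputs instead. The batch shape, score nonlinearity, and learning rate are illustrative assumptions:

```python
import numpy as np

def natural_gradient_ica_step(W, x_batch, lr=0.01):
    """One natural-gradient ICA update: W <- W + lr*(I - E[phi(y) y^T]) W."""
    y = x_batch @ W.T                 # estimated sources, shape (batch, n)
    phi = np.tanh(y)                  # score function (super-Gaussian sources)
    corr = phi.T @ y / len(y)         # sample estimate of E[phi(y) y^T]
    return W + lr * (np.eye(W.shape[0]) - corr) @ W
```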

Stable Tracking Control to a Non-linear Process Via Neural Network Model

  • Zhai, Yujia
    • Journal of the Korea Convergence Society / v.5 no.4 / pp.163-169 / 2014
  • A stable neural network control scheme for unknown non-linear systems is developed in this paper. While the control variable is optimized to minimize the performance index, convergence of the index is guaranteed to be asymptotically stable by a Lyapunov control law. The optimization is achieved using a gradient descent search algorithm and is consequently slow, so a fast-convergence algorithm with an adaptive learning rate is employed to speed it up. Application of the stable control to a single-input single-output (SISO) non-linear system is simulated, and satisfactory control performance is obtained.
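
The adaptive-learning-rate idea mentioned above can be sketched with a generic accept/reject heuristic: grow the step after a successful move, shrink it on overshoot. This is a common scheme, not necessarily the paper's exact rate law:

```python
def adaptive_gd(f, grad_f, w, lr=0.1, grow=1.1, shrink=0.5, iters=100):
    """Gradient descent with a simple adaptive learning rate."""
    fw = f(w)
    for _ in range(iters):
        w_new = w - lr * grad_f(w)
        f_new = f(w_new)
        if f_new < fw:                # move accepted: speed up
            w, fw = w_new, f_new
            lr *= grow
        else:                         # overshoot: back off and retry
            lr *= shrink
    return w
```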

Adaptive fuzzy sliding mode control for nonlinear systems (비선형 계통에 대한 적응 퍼지 슬라이딩 모드 제어)

  • 서삼준; 서호준; 김동식
    • Conference Proceedings of the Institute of Control, Robotics and Systems / 1996.10b / pp.684-688 / 1996
  • In this paper, to overcome the drawbacks of variable structure control systems, a self-tuning fuzzy sliding mode control algorithm using the gradient descent method is proposed. The proposed method retains the characteristics seen in conventional VSC, e.g., insensitivity to a class of disturbances, parameter variations, and uncertainties in the sliding mode. To demonstrate its performance, the proposed control algorithm is applied to a one-degree-of-freedom robot arm. The results show that both alleviation of chattering and good performance are achieved.
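
A gradient-descent self-tuning step of the kind the abstract names might look as follows. This is only a sketch under strong simplifying assumptions (a linear-in-parameter fuzzy output, cost E = 0.5*s^2, and ds/du replaced by its known sign); the paper's exact formulation is not reproduced here.

```python
import numpy as np

def self_tune_step(theta, phi, s, sign_dsdu=1.0, lr=0.05):
    """One self-tuning step for a fuzzy sliding-mode controller with
    output u = theta @ phi and sliding variable s. For E = 0.5*s**2 the
    chain rule gives dE/dtheta = s * (ds/du) * phi; ds/du is replaced
    by its sign, a common simplification."""
    theta = theta - lr * s * sign_dsdu * phi   # gradient-descent update
    return theta, theta @ phi                  # new parameters, control input
```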

Optimization of Block-based Evolvable Neural Network using the Genetic Algorithm (유전자 알고리즘을 이용한 블록 기반 진화신경망의 최적화)

  • 문상우; 공성곤
    • Proceedings of the IEEK Conference / 1999.06a / pp.460-463 / 1999
  • In this paper, we propose a block-based evolvable neural network (BENN). The BENN can optimize its structure and weights simultaneously, and it can be easily implemented on an FPGA, whose connections and internal functionality can be reconfigured. To solve the local minima problem caused by the gradient descent learning algorithm, genetic algorithms are applied to optimize the proposed evolvable neural network model.
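
A minimal real-coded genetic algorithm of the kind used to sidestep gradient-descent local minima is sketched below. The operators and rates are generic choices, not the paper's block-based encoding:

```python
import numpy as np

def ga_optimize(fitness, dim, pop_size=40, gens=100, mut_rate=0.1, scale=0.3):
    """Minimize `fitness` over real vectors of length `dim` without gradients."""
    rng = np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)                  # lower is better
        parents = pop[order[: pop_size // 2]]       # truncation selection
        i = rng.integers(len(parents), size=pop_size)
        j = rng.integers(len(parents), size=pop_size)
        mask = rng.random((pop_size, dim)) < 0.5    # uniform crossover
        children = np.where(mask, parents[i], parents[j])
        mutate = rng.random((pop_size, dim)) < mut_rate
        children += mutate * rng.normal(scale=scale, size=(pop_size, dim))
        children[0] = pop[order[0]]                 # keep the elite
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(scores)]
```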

Algorithms and Programs for Optimization of Large-Scale Dynamic System (대형동적 시스템의 최적화 앨고리즘 및 프로그램 개발에 관한 연구)

  • 양흥석; 박영문; 김건중
    • The Transactions of the Korean Institute of Electrical Engineers / v.32 no.4 / pp.121-127 / 1983
  • In this paper, an efficient algorithm for Pontryagin's maximum principle is developed. The Fletcher-Powell method, which shows fast and stable convergence characteristics, is adopted as the optimization technique. Terminal constraints are also handled by using Hestenes' algorithm together with the penalty function method. Control-variable inequality constraints are handled by the gradient projection technique combined with the Fletcher-Powell method. Test experiments show good and reliable results.
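
The Fletcher-Powell method named above is the Davidon-Fletcher-Powell (DFP) quasi-Newton scheme, whose core is the rank-two inverse-Hessian update below (line search and outer loop omitted; names are illustrative):

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP inverse-Hessian update.
    H: current inverse-Hessian estimate,
    s: step x_new - x_old, y: gradient change g_new - g_old.
    Returns H + (s s^T)/(s^T y) - (H y y^T H)/(y^T H y)."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)
```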

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because they have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function.

    Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; their job is to simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours.

    Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
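
The backpropagation-plus-gradient-descent procedure the abstract describes can be made concrete with a tiny two-layer network and a squared-error loss. The sizes, activation, and learning rate are illustrative:

```python
import numpy as np

def train_step(W1, b1, W2, b2, x, target, lr=0.1):
    """One backpropagation step: compute the error gradient layer by
    layer ('backward propagation of errors') and take a gradient-descent
    step on every weight."""
    h = np.tanh(W1 @ x + b1)            # forward pass, hidden layer
    y = W2 @ h + b2                     # forward pass, output layer
    err = y - target                    # gradient of 0.5*||y - target||^2
    dW2 = np.outer(err, h)
    db2 = err
    dh = (W2.T @ err) * (1.0 - h**2)    # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(dh, x)
    db1 = dh
    return (W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2)
```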

Optimization of a Composite Laminated Structure by Network-Based Genetic Algorithm

  • Park, Jung-Sun; Song, Seok-Bong
    • Journal of Mechanical Science and Technology / v.16 no.8 / pp.1033-1038 / 2002
  • The genetic algorithm (GA), compared to gradient-based optimization, has the advantage of converging to a globally optimized solution. However, the genetic algorithm requires so many analyses that genetic search may incur a high computational cost. This paper proposes personal-computer network programming, based on the TCP/IP protocol and a client-server model using sockets, to improve the processing speed of the genetic algorithm for the optimization of composite laminated structures. By distributing the processing of the generated population, an improvement in processing speed has been obtained. Consequently, the network-based genetic algorithm, combined with ever-faster network communication speeds, will be a very valuable tool for the discrete optimization of large-scale and complex structures requiring high computational cost.
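
The client-server split over TCP/IP sockets described above amounts to shipping candidate designs to worker machines and collecting fitness values back. A minimal sketch (the pickle-over-socket wire format, names, and port are illustrative assumptions; unpickling is unauthenticated, so use only on a trusted network):

```python
import pickle
import socket

def worker(fitness, host="0.0.0.0", port=5000):
    """Worker (server) side: evaluate one pickled candidate per TCP
    connection and send its fitness back."""
    with socket.create_server((host, port)) as srv:
        while True:
            conn, _ = srv.accept()
            with conn:
                buf = b""
                while chunk := conn.recv(4096):   # read until client EOF
                    buf += chunk
                conn.sendall(pickle.dumps(fitness(pickle.loads(buf))))

def evaluate_remote(candidate, host, port=5000):
    """Master (client) side: the GA loop fans the population out across
    workers by calling this once per candidate."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(pickle.dumps(candidate))
        conn.shutdown(socket.SHUT_WR)             # signal end of payload
        buf = b""
        while chunk := conn.recv(4096):
            buf += chunk
        return pickle.loads(buf)
```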