• Title/Summary/Keyword: Error backpropagation

Peak Impact Force of Ship Bridge Collision Based on Neural Network Model (신경망 모델을 이용한 선박-교각 최대 충돌력 추정 연구)

  • Wang, Jian;Noh, Jackyou
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.28 no.1
    • /
    • pp.175-183
    • /
    • 2022
  • The collision between a ship and a bridge across a waterway may have extremely serious consequences that endanger the safety of life and property. Therefore, the factors affecting ship-bridge collision must be investigated, and the impact force should be examined under various collision conditions. In this study, a finite element model of ship-bridge collision is established, and the peak impact force is calculated via numerical simulation for 50 operating conditions formed by combining three input parameters: ship loading condition, ship speed, and ship-bridge collision angle. Using neural network models trained on the numerical simulation results, a prediction model for the peak impact force of ship-bridge collision with an extremely short calculation time, on the order of milliseconds, is established. The neural network models used in this study are the basic backpropagation neural network model and the Elman neural network model, which can handle temporal information. The accuracy of the neural network models is verified using 10 test samples based on the operating conditions. The verification test shows that the Elman neural network model performs better than the backpropagation neural network model, with a mean relative error of 4.566% and relative errors of less than 5% in 8 of the 10 test cases. Once the required parameters are specified, the trained neural network yields a reliable ship-bridge collision force instantaneously, without a nonlinear finite element solution process. The proposed model can be used to predict whether a catastrophic collision will occur during ship navigation and thus enhance the safety of the crew operating the ship.
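
To illustrate the kind of surrogate model described above, here is a minimal NumPy sketch of a backpropagation network that regresses a (normalized) peak impact force from the three inputs named in the abstract. The data, network size, and hyperparameters are invented stand-ins, not the paper's FE results or settings; the Elman variant is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for the 50 simulated operating conditions:
# inputs = [loading condition, ship speed, collision angle] (normalized 0..1),
# target = peak impact force (normalized) from the finite element runs.
X = rng.uniform(0.0, 1.0, size=(50, 3))
y = (2.0 * X[:, 1] + 0.5 * X[:, 0] * np.sin(np.pi * X[:, 2]))[:, None]  # toy surrogate

# One-hidden-layer backpropagation network (sizes are illustrative).
n_in, n_hidden, n_out = 3, 8, 1
W1 = rng.normal(0, 0.5, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)
lr = 0.05

for epoch in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y                              # d(MSE)/d(y_hat), up to a constant

    # backward pass: propagate the error back through the layers
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    delta_h = (err @ W2.T) * (1.0 - h ** 2)      # tanh'(z) = 1 - tanh(z)**2
    grad_W1 = X.T @ delta_h / len(X)
    grad_b1 = delta_h.mean(axis=0)

    # gradient descent update
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Millisecond-scale prediction for a new operating condition (no FE solve needed).
x_new = np.array([[0.6, 0.8, 0.25]])
peak_force = np.tanh(x_new @ W1 + b1) @ W2 + b2
print("predicted (normalized) peak impact force:", float(peak_force.squeeze()))
```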

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields such as computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have produced successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, which are usually used immediately after convolutional layers. Pooling layers simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network, which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, including vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
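
The three convolutional ideas named above (local receptive fields, shared weights, pooling) can be made concrete with a short NumPy sketch; the 8x8 image, single 3x3 kernel, ReLU, and 2x2 max pooling are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((8, 8))          # toy grayscale input
kernel = rng.normal(size=(3, 3))    # one shared 3x3 weight set (shared weights)
bias = 0.0

# Convolution: every hidden unit looks at a 3x3 local receptive field of the
# input and reuses the *same* kernel, so all units detect the same feature
# at different locations.
H, W = image.shape
kh, kw = kernel.shape
feature_map = np.zeros((H - kh + 1, W - kw + 1))
for i in range(feature_map.shape[0]):
    for j in range(feature_map.shape[1]):
        patch = image[i:i + kh, j:j + kw]                                   # local receptive field
        feature_map[i, j] = np.maximum(0.0, np.sum(patch * kernel) + bias)  # ReLU activation

# 2x2 max pooling: keep only the strongest response in each block,
# simplifying the information coming out of the convolutional layer.
ph, pw = 2, 2
pooled = feature_map[:feature_map.shape[0] // ph * ph,
                     :feature_map.shape[1] // pw * pw]
pooled = pooled.reshape(pooled.shape[0] // ph, ph,
                        pooled.shape[1] // pw, pw).max(axis=(1, 3))

print("feature map:", feature_map.shape, "-> pooled:", pooled.shape)
```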

An Enhancement of Learning Speed of the Error-Backpropagation Algorithm (오류 역전도 알고리즘의 학습속도 향상기법)

  • Shim, Bum-Sik;Jung, Eui-Yong;Yoon, Chung-Hwa;Kang, Kyung-Sik
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.7
    • /
    • pp.1759-1769
    • /
    • 1997
  • The Error Backpropagation (EBP) algorithm for multi-layered neural networks is widely used in areas such as associative memory, speech recognition, pattern recognition, and robotics. Nevertheless, researchers have continuously published improvements over the original EBP algorithm, mainly because EBP is exceedingly slow when the number of neurons and the size of the training set are large. In this study, we developed new learning-speed acceleration methods using a variable learning rate, a variable momentum rate, and a variable slope for the sigmoid function. During the learning process, these parameters are adjusted continuously according to the total error of the network, and the methods have been shown to reduce learning time significantly compared with the original EBP. To demonstrate their efficiency, we first used binary data generated by a random number generator and showed large improvements in terms of epochs. We also applied the methods to the binary-valued Monk's data, 4-, 5-, 6-, and 7-bit parity checkers, and the real-valued Iris data, which are well-known benchmark training sets for machine learning.
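
The abstract does not give the exact adaptation rules, so the sketch below shows one plausible heuristic: grow the learning rate and momentum while the total error keeps falling and shrink them when it rises, plugged into a toy gradient descent loop with momentum. The network, error function, and thresholds here are assumptions for illustration, not the paper's scheme.

```python
import numpy as np

def adapt_hyperparams(lr, momentum, prev_error, error,
                      up=1.05, down=0.7, lr_max=1.0, lr_min=1e-4):
    """One plausible adaptation rule (an assumption, not the paper's exact scheme):
    grow the learning rate and momentum while the total error keeps falling,
    and shrink them when the error rises."""
    if error < prev_error:
        lr = min(lr * up, lr_max)
        momentum = min(momentum * up, 0.95)
    else:
        lr = max(lr * down, lr_min)
        momentum *= down
    return lr, momentum

# Sketch of how such a rule plugs into a training loop; the "network" here is
# just a weight vector with a quadratic total error, standing in for real EBP.
rng = np.random.default_rng(0)
weights = rng.normal(size=10)
velocity = np.zeros_like(weights)
target = np.linspace(-1, 1, 10)
total_error = lambda w: float(np.mean((w - target) ** 2))
gradient = lambda w: 2.0 * (w - target) / w.size

lr, momentum, prev_err = 0.1, 0.5, np.inf
for epoch in range(200):
    velocity = momentum * velocity - lr * gradient(weights)   # momentum step
    weights += velocity
    err = total_error(weights)
    lr, momentum = adapt_hyperparams(lr, momentum, prev_err, err)
    prev_err = err

print("final total error:", prev_err)
```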

Implementation of the Controller for Intelligent Process System Using Neural Network (신경회로망을 이용한 지능형 가공 시스템 제어기 구현)

  • Son, Chang-U;Kim, Gwan-Hyeong;Kim, Il;Tak, Han-Ho;Lee, Sang-Bae
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2000.11a
    • /
    • pp.376-379
    • /
    • 2000
  • In this paper, the system uses an analog infrared sensor and converts the features of the fish's analog signal while the sensor operates with a CPU (80C196KC). After signal processing, these features and the outline of the fish are classified using a neural network, one of the artificial intelligence schemes. This neural network classifies fish patterns with a very simple and short calculation. It has a linear activation function, and error backpropagation is used as the learning algorithm. The neural network is trained in an off-line process: because the adaptation period of the network is very long when random initial weights are used, off-line learning is introduced to reduce the processing time. We confirmed that this method performs better than somewhat outdated machines.
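
A minimal sketch of the off-line/on-line split described above, assuming hypothetical sensor features and labels: weights are learned off-line with a linear-activation network trained by error backpropagation (here reduced to the delta rule), saved, and then reused at run time with a single matrix product. The file name, feature count, and class count are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Off-line phase (done once, not on the embedded CPU) --------------------
# Hypothetical sensor features (rows) and one-hot fish-class labels (columns);
# the real feature extraction from the infrared sensor is not shown here.
X = rng.random((60, 4))
labels = rng.integers(0, 3, 60)
Y = np.eye(3)[labels]

W = rng.normal(0, 0.1, (4, 3))            # linear activation: output = X @ W
for epoch in range(2000):                 # error backpropagation (delta rule)
    err = X @ W - Y
    W -= 0.01 * X.T @ err / len(X)
np.save("fish_weights.npy", W)            # freeze the weights learned off-line

# --- On-line phase (fast: a single matrix product per sample) ---------------
W_fixed = np.load("fish_weights.npy")
x_new = rng.random(4)
predicted_class = int(np.argmax(x_new @ W_fixed))
print("predicted fish class:", predicted_class)
```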

RFID Tag Detection on a Water Content Using a Back-propagation Learning Machine

  • Jo, Min-Ho;Lim, Chang-Gyoon;Zimmers, Emory W.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.1 no.1
    • /
    • pp.19-31
    • /
    • 2007
  • An RFID tag is detected by an RFID antenna, and the information on the detected tag is read by an RFID reader. RFID tag detection by an RFID reader is very important at the deployment stage. Tag detection is influenced by factors such as the tag direction on a target object, the speed of the conveyor moving the object, and the contents of the object. The water content of the object absorbs radio waves at high frequencies, typically around 900 MHz, resulting in unstable tag signal power. Currently, finding the best conditions for the factors influencing tag detection requires very time-consuming work at deployment. Thus, a quick and simple RFID tag detection scheme is needed to improve on the current time-consuming trial-and-error experimental method. This paper proposes a back-propagation learning-based RFID tag detection prediction scheme, which is intelligent and has the advantages of ease of use and time/cost savings. Simulation results with the proposed scheme demonstrate a high prediction accuracy for tag detection on a water content, comparable with the current method, while saving time and cost.
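
As a sketch of such a back-propagation prediction scheme, the code below trains a small sigmoid-output network on hypothetical deployment trials (tag orientation, conveyor speed, water content vs. detected/not detected) and then scores a planned setup; the synthetic data and network size are assumptions, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical trials: features = [tag orientation, conveyor speed, water content]
# (all normalized 0..1), label = 1 if the reader detected the tag, else 0.
X = rng.random((200, 3))
y = (sigmoid(3.0 - 4.0 * X[:, 2] - 1.5 * X[:, 1]) > rng.random(200)).astype(float)[:, None]

# Small back-propagation network with a sigmoid output for detection probability.
W1 = rng.normal(0, 0.5, (3, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(3000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    err = p - y                          # cross-entropy gradient w.r.t. pre-sigmoid output
    grad_W2 = h.T @ err / len(X); grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # propagate the error back to the hidden layer
    grad_W1 = X.T @ dh / len(X); grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Predict detectability of a planned setup before running physical trials.
setup = np.array([[0.25, 0.4, 0.7]])     # third feature: high water content
prob = sigmoid(np.tanh(setup @ W1 + b1) @ W2 + b2)
print("predicted detection probability:", float(prob.squeeze()))
```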

COMPARISON OF SPECKLE REDUCTION METHODS FOR MULTISOURCE LAND-COVER CLASSIFICATION BY NEURAL NETWORK : A CASE STUDY IN THE SOUTH COAST OF KOREA

  • Ryu, Joo-Hyung;Won, Joong-Sun;Kim, Sang-Wan
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.144-147
    • /
    • 1999
  • The objective of this study is to quantitatively evaluate the effects of various SAR speckle reduction methods on multisource land-cover classification by a backpropagation neural network, especially over the coastal region. Land-cover classification using a neural network has an advantage over conventional statistical approaches in that it is distribution-free and requires no prior knowledge of the statistical distributions of the classes. The goal of multisource land-cover classification using data acquired by different sensors is to reduce the classification error, and consequently SAR can be utilized as a complementary tool to optical sensors. SAR speckle is, however, a serious limiting factor when SAR is exploited for land-cover classification. To reduce this problem, we test various speckle reduction methods, including the Frost, Median, Kuan, and EPOS filters. By interpreting the weights learned from the training pixel samples, the “Importance Value” of each speckle-reduced SAR image can be estimated based on its contribution to the classification. In this study, the “Importance Value” is used as the criterion of effectiveness.
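
The abstract does not spell out how the “Importance Value” is computed from the weights, so the sketch below uses one simple weight-magnitude heuristic (each input's share of the absolute connection strength routed to the outputs) on toy trained weights; treat it as an illustration of weight interpretation, not the paper's exact formula.

```python
import numpy as np

def weight_based_importance(W_in_hidden, W_hidden_out):
    """A simple weight-magnitude heuristic (an assumption, not necessarily the
    paper's 'Importance Value'): each input band's share of the absolute
    connection strength routed through the hidden layer to the outputs."""
    contrib = np.abs(W_in_hidden) @ np.abs(W_hidden_out).sum(axis=1)  # per input band
    return contrib / contrib.sum()

# Toy trained weights for a network with 5 input bands (e.g. optical bands plus
# one speckle-filtered SAR band), 8 hidden units, and 4 land-cover classes.
rng = np.random.default_rng(3)
W1 = rng.normal(size=(5, 8))
W2 = rng.normal(size=(8, 4))

importance = weight_based_importance(W1, W2)
for band, value in enumerate(importance):
    print(f"band {band}: importance {value:.3f}")
```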

Structural damage detection of steel bridge girder using artificial neural networks and finite element models

  • Hakim, S.J.S.;Razak, H. Abdul
    • Steel and Composite Structures
    • /
    • v.14 no.4
    • /
    • pp.367-377
    • /
    • 2013
  • Damage in structures often leads to failure, so it is very important to monitor structures for the occurrence of damage. When damage occurs in a structure, the consequence is a change in its modal parameters, such as natural frequencies and mode shapes. Artificial Neural Networks (ANNs) are inspired by biological neurons and have been applied to damage identification with varied success. The natural frequencies of a structure are strongly affected by damage and are therefore used as effective input parameters to train the ANN in this study. The applicability of ANNs as a powerful tool for predicting the severity of damage in a model steel girder bridge is examined. The data required for the ANNs, in the form of natural frequencies, were obtained from numerical modal analysis. By incorporating the training data, the ANNs are capable of producing outputs in terms of damage severity using the first five natural frequencies. It is demonstrated that an ANN trained only with natural frequency data can determine the severity of damage with a 6.8% error. The results show that ANNs trained with numerically obtained samples have strong potential for structural damage identification.
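
A compact sketch of the frequency-to-severity mapping, using scikit-learn's MLPRegressor as the backpropagation-trained network; the surrogate data (frequencies shrinking roughly with the square root of remaining stiffness) and the network size stand in for the paper's FE modal analysis results.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Hypothetical stand-in for the numerically obtained modal data: the first five
# natural frequencies shift downward as damage severity (stiffness loss) grows.
severity = rng.uniform(0.0, 0.5, size=300)
base_freqs = np.array([12.3, 34.1, 66.8, 109.5, 161.2])   # Hz, illustrative only
freqs = base_freqs * np.sqrt(1.0 - severity)[:, None] + rng.normal(0, 0.05, (300, 5))

# A small backpropagation-trained network mapping frequencies -> severity.
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="adam", max_iter=5000, random_state=0)
model.fit(freqs, severity)

# Estimate the severity for a measured (here simulated) frequency set.
test = base_freqs * np.sqrt(1.0 - 0.2)                     # 20% stiffness loss
print("predicted severity:", model.predict(test.reshape(1, -1))[0])
```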

Detection of High Impedance Fault Using Adaptive Neuro-Fuzzy Inference System (적응 뉴로 퍼지 추론 시스템을 이용한 고임피던스 고장검출)

  • 유창완
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.9 no.4
    • /
    • pp.426-435
    • /
    • 1999
  • A high impedance fault (HIF) is one of the serious problems facing the electric utility industry today. Because of the high impedance of a downed conductor under some conditions, these faults are not easily detected by overcurrent-based protection devices and can cause fires and personal hazards. In this paper, a new method for the detection of HIFs using an adaptive neuro-fuzzy inference system (ANFIS) is proposed. Since the arcing fault current changes differently during the high- and low-voltage portions of the conductor voltage waveform, one cycle of the fault current is first divided into four equally spanned data windows according to the magnitude of the conductor voltage. The fast Fourier transform (FFT) is applied to each data window, and the frequency spectra of the current waveform are chosen as inputs of the ANFIS after an input selection step. Using staged fault and normal data, the ANFIS is trained to discriminate between normal and HIF status by a hybrid learning algorithm. This algorithm combines gradient descent and the least-squares method and shows a rapid convergence speed and improved convergence error. The proposed method shows good performance when applied to staged fault data and high-impedance-like loads (HIFLL), such as an arc welder.
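
The feature extraction step described above can be sketched as follows: one cycle of a synthetic fault current is split into four equal windows aligned with the conductor voltage waveform, and each window is passed through an FFT, keeping a few low-order magnitude components as inputs for the inference system. The sampling rate, number of retained components, and window alignment are assumptions; the ANFIS and its hybrid learning are not shown.

```python
import numpy as np

fs = 3840                      # sampling rate (Hz): 64 samples per 60 Hz cycle (assumed)
f0 = 60.0
t = np.arange(int(fs / f0)) / fs

# Synthetic conductor voltage and (distorted) fault current over one cycle;
# real staged-fault recordings would be used in practice.
rng = np.random.default_rng(5)
voltage = np.sin(2 * np.pi * f0 * t)
current = (np.sin(2 * np.pi * f0 * t - 0.3)
           + 0.15 * np.sin(2 * np.pi * 3 * f0 * t)        # arcing-like harmonic content
           + 0.05 * rng.normal(size=t.size))

# Split the cycle into four equal windows aligned with the voltage waveform
# (quarter-cycles between zero crossings and peaks), then FFT each window's
# current samples to get low-order spectral features for the inference system.
n = len(t) // 4
features = []
for k in range(4):
    win = current[k * n:(k + 1) * n]
    spectrum = np.abs(np.fft.rfft(win)) / n
    features.extend(spectrum[:5])          # keep a few low-order components per window

features = np.asarray(features)
print("feature vector length:", features.size)   # 4 windows x 5 components
```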

Improvement of cold mill precalculation accuracy using a corrective neural network

  • Jang, Min;Cho, Sungzoon;Cho, Yong-Joong;Yoon, Sungcheol;Cho, Hyungsuk
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1996.04a
    • /
    • pp.63-66
    • /
    • 1996
  • The cold rolling mill process in steel works uses stands of rolls to flatten a strip to a desired thickness. At Pohang Iron and Steel Company (POSCO) in Pohang, Korea, precalculation determines the mill settings before a strip actually enters the mill and is done by an outdated mathematical model. A corrective neural network model is proposed to improve the accuracy of the roll force prediction. Additional variables fed to the network include the chemical composition of the coil, its coiling temperature, and the aggregated amount of strip processed by each roll. The network was trained using standard backpropagation with 2,277 process data records collected from POSCO from March 1995 and then tested on 200 unseen records from the same period. The combined model reduced the prediction error by 55.4% on average.
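
A sketch of the corrective-model idea under invented data: a placeholder "mathematical model" predicts roll force from basic setup variables, and a small backpropagation network learns only the residual from additional inputs, so the final prediction is the model output plus the learned correction. Nothing here reflects POSCO's actual model, variables, or data.

```python
import numpy as np

rng = np.random.default_rng(6)

def math_model(setup):
    """Placeholder for the existing precalculation model: predicts roll force
    from basic setup variables (purely illustrative, not POSCO's model)."""
    return 800.0 + 300.0 * setup[:, 0] + 120.0 * setup[:, 1]

# Hypothetical process records: basic setup variables plus additional inputs
# (standing in for chemical composition, coiling temperature, processed amount),
# and the measured roll force the precalculation should have matched.
setup = rng.random((500, 2))
extra = rng.random((500, 3))
measured = math_model(setup) + 60.0 * extra[:, 0] - 40.0 * extra[:, 2] + rng.normal(0, 5, 500)

# The corrective network learns only the residual of the mathematical model.
residual = measured - math_model(setup)
X = np.hstack([setup, extra])
X = (X - X.mean(axis=0)) / X.std(axis=0)
y = residual[:, None]
W1 = rng.normal(0, 0.3, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.3, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1); pred = h @ W2 + b2; err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# Combined prediction = mathematical model + learned correction.
combined = math_model(setup) + (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print("RMS error, model only: %.1f  combined: %.1f"
      % (np.sqrt(np.mean(residual ** 2)), np.sqrt(np.mean((combined - measured) ** 2))))
```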

Development of Inference Algorithm for Bead Geometry in GMAW (GMA 용접의 비드형상 추론 알고리즘 개발)

  • Kim, Myun-Hee;Bae, Joon-Young;Lee, Sang-Ryong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.19 no.4
    • /
    • pp.132-139
    • /
    • 2002
  • In GMAW (Gas Metal Arc Welding) processes, bead geometry (penetration, bead width, and height) is a criterion for estimating welding quality. Bead geometry is affected by the welding current, arc voltage, travel speed, shielding gas, CTWD (contact-tip-to-workpiece distance), and so on. In this paper, the welding process variables were selected as welding current, arc voltage, and travel speed, and bead geometry was inferred from the chosen welding process variables using a neuro-fuzzy algorithm. A neural network was applied to design the fuzzy logic (FL) system: the parameters of the input membership functions and those of the consequence functions in the FL system were tuned by learning with the backpropagation algorithm. Bead geometry could then be inferred from welding current, arc voltage, and travel speed with the FL system using the parameters learned by the neural network. In the developed bead-geometry inference system based on the neuro-fuzzy algorithm, the inference error of bead width was within ±4%, that of bead height within ±3%, and that of penetration within ±8%. The neural network was effective in finding the parameters of the input membership functions and of the consequence functions in the FL system. Therefore, an inference system for welding quality is expected to be developed through the proposed algorithm.
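
Below is a rough zero-order Sugeno (neuro-fuzzy) sketch of the tuning idea: Gaussian input membership functions and constant rule consequents are adjusted by gradient descent on a single bead-geometry output. Numerical gradients stand in here for the hand-derived backpropagation formulas, and the data, membership-function count, and learning settings are all invented.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(7)

# Hypothetical welding records: [current, voltage, travel speed] (normalized 0..1)
# and one bead-geometry output (e.g. bead width); real data come from experiments.
X = rng.random((120, 3))
y = 2.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1] - 1.0 * X[:, 2] + 0.05 * rng.normal(size=120)

M = 2                                                 # two Gaussian MFs per input
centers = np.tile(np.array([0.25, 0.75]), (3, 1))     # shape (3, M)
sigmas = np.full((3, M), 0.3)
rules = list(product(range(M), repeat=3))             # 8 rules (all MF combinations)
consequents = np.zeros(len(rules))                    # zero-order Sugeno outputs

def predict(X, centers, sigmas, consequents):
    mu = np.exp(-(X[:, :, None] - centers) ** 2 / (2 * sigmas ** 2))   # (N, 3, M)
    w = np.stack([mu[:, 0, r[0]] * mu[:, 1, r[1]] * mu[:, 2, r[2]] for r in rules], axis=1)
    return (w * consequents).sum(axis=1) / w.sum(axis=1), w

def mse(params):
    c = params[:6].reshape(3, M); s = params[6:12].reshape(3, M); p = params[12:]
    pred, _ = predict(X, c, s, p)
    return np.mean((pred - y) ** 2)

# Tune all parameters by gradient descent; numerical gradients replace the
# analytic backpropagation derivation used in the paper.
params = np.concatenate([centers.ravel(), sigmas.ravel(), consequents])
lr, eps = 0.05, 1e-5
for epoch in range(300):
    grad = np.zeros_like(params)
    for i in range(len(params)):
        d = np.zeros_like(params); d[i] = eps
        grad[i] = (mse(params + d) - mse(params - d)) / (2 * eps)
    params -= lr * grad
    params[6:12] = np.clip(params[6:12], 0.05, None)   # keep membership widths positive

print("final training MSE: %.4f" % mse(params))
```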