• Title/Summary/Keyword: neural network optimization

Wavelet-based feature extraction for automatic defect classification in strands by ultrasonic structural monitoring

  • Rizzo, Piervincenzo;Lanza di Scalea, Francesco
    • Smart Structures and Systems
    • /
    • v.2 no.3
    • /
    • pp.253-274
    • /
    • 2006
  • The structural monitoring of multi-wire strands is of importance to prestressed concrete structures and cable-stayed or suspension bridges. This paper addresses the monitoring of strands by ultrasonic guided waves with emphasis on the signal processing and automatic defect classification. The detection of notch-like defects in the strands is based on the reflections of guided waves that are excited and detected by magnetostrictive ultrasonic transducers. The Discrete Wavelet Transform was used to extract damage-sensitive features from the detected signals and to construct a multi-dimensional Damage Index vector. The Damage Index vector was then fed to an Artificial Neural Network to provide the automatic classification of (a) the size of the notch and (b) the location of the notch from the receiving sensor. Following an optimization study of the network, it was determined that five damage-sensitive features provided the best defect classification performance with an overall success rate of 90.8%. It was thus demonstrated that the wavelet-based multidimensional analysis can provide excellent classification performance for notch-type defects in strands.
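As a rough illustration of the feature-extraction step described above, the sketch below computes the relative energy of Haar-wavelet detail coefficients at each decomposition level, a common choice of damage-sensitive feature. The Haar basis, the use of relative energy, and the function names are illustrative assumptions, not the paper's exact pipeline.

```python
import math

def haar_step(signal):
    """One level of the Haar DWT: returns (approximation, detail).
    An odd trailing sample, if any, is dropped."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):
        a, b = signal[i], signal[i + 1]
        approx.append((a + b) / math.sqrt(2))
        detail.append((a - b) / math.sqrt(2))
    return approx, detail

def damage_index_vector(signal, levels=5):
    """Relative energy of the detail coefficients at each level,
    used here as a stand-in for the paper's damage-sensitive features."""
    total = sum(x * x for x in signal)
    features = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        features.append(sum(d * d for d in detail) / total)
    return features
```

In the paper's setting the input would be a windowed guided-wave reflection; the resulting vector is what gets fed to the classifier.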

Development of Experimental Model for Bead Profile Prediction in GMA Welding (GMA용접에서 비드단면형상을 예측하기 위한 실험적 모델의 개발)

  • Son Joon-Sik;Kim Ill-Soo;Park Chang-Eun;Kim In-Ju;Jeong Ho-Seong
    • Journal of Welding and Joining
    • /
    • v.23 no.4
    • /
    • pp.41-47
    • /
    • 2005
  • Generally, the use of robots in the manufacturing industry has increased during the past decade. The GMA (Gas Metal Arc) welding process is an actively growing area, and many new procedures have been developed for use with high-strength alloys. One of the basic requirements for automatic welding applications is to investigate the relationships between process parameters and bead geometry. The objective of this paper is to develop a new approach involving the use of neural network and multiple regression methods to predict bead geometry for the GMA welding process, and to develop an intelligent system that visualizes bead geometry for use in robotic GMA welding processes. Examples of the simulation for the GMA welding process are supplied to demonstrate and verify the proposed system, developed using MATLAB. The developed system could be effectively implemented not only for estimating bead geometry, but also for monitoring and controlling the bead geometry in real time.
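The multiple-regression half of such an approach can be sketched as an ordinary least-squares fit of a bead dimension against process parameters. The parameter set (current, voltage, travel speed) and the helper name `fit_linear` are hypothetical, not taken from the paper.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination; each row of X is e.g.
    [1, current, voltage, travel_speed] and y is a bead dimension."""
    n = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [A[r][j] - f * A[col][j] for j in range(n)]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in range(n - 1, -1, -1):            # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j]
                              for j in range(i + 1, n))) / A[i][i]
    return coef
```

The neural-network counterpart would replace this linear map with an MLP trained on the same (parameters, geometry) pairs.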

Development of user activity type and recognition technology using LSTM (LSTM을 이용한 사용자 활동유형 및 인식기술 개발)

  • Kim, Young-kyun;Kim, Won-jong;Lee, Seok-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.360-363
    • /
    • 2018
  • Human activity is influenced by various factors, from individual physical features such as vertebral flexion and pelvic distortion to feelings such as joy, anger, and sadness. The nature of these behaviors changes over time, yet behavioral characteristics do not change much in the short term. A person's activity data has a time-series characteristic that changes with time and a certain regularity for each action. In this study, we applied LSTM, a kind of recurrent neural network suited to time-series characteristics, to the task of recognizing activity type, and improved the recognition rate by measuring training time and optimizing the parameters of the LSTM model's components.
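For readers unfamiliar with LSTM internals, the following minimal sketch of one scalar LSTM cell step shows the gating that makes the architecture suitable for such time-series data; the dictionary-based weight layout is an illustrative assumption, not the authors' model.

```python
import math

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of an LSTM cell with scalar input and state.
    W maps (x, h_prev) to the four gates; b holds the gate biases."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    i = sig(W["i"][0] * x + W["i"][1] * h_prev + b["i"])        # input gate
    f = sig(W["f"][0] * x + W["f"][1] * h_prev + b["f"])        # forget gate
    o = sig(W["o"][0] * x + W["o"][1] * h_prev + b["o"])        # output gate
    g = math.tanh(W["g"][0] * x + W["g"][1] * h_prev + b["g"])  # candidate
    c = f * c_prev + i * g                                      # new cell state
    h = o * math.tanh(c)                                        # new hidden state
    return h, c
```

Recognition runs this step over each sample of the activity sequence, carrying (h, c) forward, with a classifier on the final hidden state.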

An Efficient Global Optimization Method for Reducing the Wave Drag in Transonic Regime (천음속 영역의 조파항력 감소를 위한 효율적인 전역적 최적화 기법 연구)

  • Jung, Sung-Ki;Myong, Rho-Shin;Cho, Tae-Hwan
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.37 no.3
    • /
    • pp.248-254
    • /
    • 2009
  • The use of evolutionary algorithms is limited in the field of aerodynamics, mainly because population-based search requires excessive CPU time. In this paper, a method coupling a floating-point adaptive-range genetic algorithm with a back-propagation neural network is proposed to obtain a converged solution efficiently. As a result, it is shown that reductions of 14% in wave drag and 33% in computation time can be achieved by the new method.
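A plain real-coded genetic algorithm of the kind being accelerated can be sketched as below; the adaptive-range re-centring of the search interval used in the paper is omitted, and the operator choices (tournament selection, blend crossover, Gaussian mutation) are illustrative assumptions.

```python
import random

def real_ga(fitness, lo, hi, pop_size=30, gens=60, seed=0):
    """Minimal real-coded genetic algorithm minimizing `fitness` on [lo, hi].
    The paper's adaptive-range variant also re-centres [lo, hi] each
    generation; that step is left out of this sketch."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=fitness)       # tournament parent 1
            b = min(rng.sample(pop, 3), key=fitness)       # tournament parent 2
            child = a + rng.uniform(-0.5, 1.5) * (b - a)   # blend crossover
            if rng.random() < 0.1:
                child += rng.gauss(0, 0.1 * (hi - lo))     # Gaussian mutation
            nxt.append(min(max(child, lo), hi))            # keep in bounds
        pop = nxt
    return min(pop, key=fitness)
```

In the paper's setting, `fitness` would be the neural-network surrogate of the wave drag rather than the expensive CFD solve, which is where the time saving comes from.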

CNN-based Fast Split Mode Decision Algorithm for Versatile Video Coding (VVC) Inter Prediction

  • Yeo, Woon-Ha;Kim, Byung-Gyu
    • Journal of Multimedia Information System
    • /
    • v.8 no.3
    • /
    • pp.147-158
    • /
    • 2021
  • Versatile Video Coding (VVC) is the latest video coding standard, developed by the Joint Video Exploration Team (JVET). In VVC, the quadtree plus multi-type tree (QT+MTT) structure of coding unit (CU) partitioning is adopted, and its computational complexity is considerably high due to the brute-force search for recursive rate-distortion (RD) optimization. In this paper, we aim to reduce the time complexity of the inter-picture prediction mode decision, since inter prediction accounts for a large portion of the total encoding time. The problem can be defined as classifying the split mode of each CU. To classify the split mode effectively, a novel convolutional neural network (CNN) called the multi-level tree (MLT-CNN) architecture is introduced. To boost classification performance, we utilize additional information, including inter-picture information, while training the CNN. The overall algorithm, including the MLT-CNN inference process, is implemented on VVC Test Model (VTM) 11.0. CUs of size 128×128 can be the inputs of the CNN. The sequences are encoded in the random access (RA) configuration with five QP values {22, 27, 32, 37, 42}. The experimental results show that the proposed algorithm can reduce the computational complexity by 11.53% on average, and by up to 26.14%, with an average 1.01% increase in Bjøntegaard delta bit rate (BDBR). In particular, the proposed method performs better on the class A and B sequences, reducing encoding time by 9.81%~26.14% with a 0.95%~3.28% BDBR increase.
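The general idea of using CNN outputs to skip RD checks can be sketched as follows; the mode names and the cumulative-probability threshold are illustrative assumptions, not the MLT-CNN paper's actual decision rule.

```python
def prune_split_modes(probs, keep_mass=0.9):
    """Given predicted probabilities over the candidate split modes of a CU,
    keep only the most likely modes whose cumulative probability reaches
    `keep_mass`; the encoder then runs full RD optimization on those alone
    instead of on every QT+MTT candidate."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, mass = [], 0.0
    for mode, p in ranked:
        kept.append(mode)
        mass += p
        if mass >= keep_mass:
            break
    return kept
```

The complexity saving comes from the pruned modes never entering the recursive RD search; the BDBR cost comes from the rare cases where the true best mode was pruned.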

Hyper Parameter Tuning Method based on Sampling for Optimal LSTM Model

  • Kim, Hyemee;Jeong, Ryeji;Bae, Hyerim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.1
    • /
    • pp.137-143
    • /
    • 2019
  • As the performance of computers increases, the use of deep learning, which once faced technical limitations, is becoming more diverse. In many fields, deep learning has contributed to the creation of added value, and it is applied to ever more data as its applications diversify. The process of obtaining a better-performing model therefore takes longer than before, and it becomes necessary to find an optimal model that shows the best performance more quickly. In artificial neural network modeling, a tuning process that changes various elements of the model is used to improve performance. Apart from Grid Search and Manual Search, which are widely used as tuning methods, most methodologies have been developed around heuristic algorithms. A heuristic algorithm can produce results in a short time, but those results are likely to be only a local optimum; finding the global optimum removes that possibility. Although the Brute Force Method is commonly used to find the global optimum, it is not applicable here because of the effectively infinite number of hyper-parameter combinations. In this paper, we use a statistical technique to reduce the number of cases to examine, so that the global optimum can be found more quickly.
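As a stand-in for the statistical reduction described above, the sketch below evaluates only a random sample of the full hyper-parameter grid; plain random sampling is an assumption here, simpler than the paper's technique, but it shows the cost structure.

```python
import itertools
import random

def sampled_search(grid, evaluate, fraction=0.2, seed=0):
    """Evaluate a random sample of the hyper-parameter grid instead of the
    brute-force Cartesian product; `evaluate` returns a loss to minimize."""
    configs = [dict(zip(grid, vals)) for vals in itertools.product(*grid.values())]
    rng = random.Random(seed)
    sample = rng.sample(configs, max(1, int(len(configs) * fraction)))
    return min(sample, key=evaluate)   # best config among the sampled ones
```

With `fraction=0.2` the number of (expensive) model trainings drops fivefold, at the price of possibly missing the true global optimum, which is exactly the gap the paper's statistical case reduction aims to close.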

Employing TLBO and SCE for optimal prediction of the compressive strength of concrete

  • Zhao, Yinghao;Moayedi, Hossein;Bahiraei, Mehdi;Foong, Loke Kok
    • Smart Structures and Systems
    • /
    • v.26 no.6
    • /
    • pp.753-763
    • /
    • 2020
  • The early prediction of the Compressive Strength of Concrete (CSC) is a significant task in civil engineering construction projects. This study is therefore dedicated to introducing two novel hybrids of neural computing, namely Shuffled Complex Evolution (SCE) and Teaching-Learning-Based Optimization (TLBO), for predicting the CSC. The algorithms are applied to a Multi-Layer Perceptron (MLP) network to create the SCE-MLP and TLBO-MLP ensembles. The results revealed that, first, intelligent models can properly analyze and generalize the non-linear relationship between the CSC and its influential parameters. For example, the smallest and largest values of the CSC were 17.19 and 58.53 MPa, and the outputs of the MLP, SCE-MLP, and TLBO-MLP ranged over [17.61, 54.36], [17.69, 55.55], and [18.07, 53.83] MPa, respectively. Second, applying the SCE and TLBO optimizers increased the correlation of the MLP predictions from 93.58% to 97.32% and 97.22%, respectively. The prediction error was also reduced by around 34% and 31%, which indicates the high efficiency of these algorithms. Moreover, regarding the computation time needed to implement the SCE-MLP and TLBO-MLP models, the SCE is a considerably more time-efficient optimizer. Nevertheless, both suggested models can be promising substitutes for laboratory and destructive CSC evaluation methods.
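A minimal Teaching-Learning-Based Optimization loop, one of the two optimizers named above, might look like the following; population size, iteration count, and bound handling are illustrative choices, not the paper's settings. In the paper this loop would tune MLP weights, with the objective being the network's prediction error.

```python
import random

def tlbo(objective, dim, lo, hi, pop_size=20, iters=50, seed=1):
    """Minimal Teaching-Learning-Based Optimization (minimization)."""
    rng = random.Random(seed)
    clip = lambda v: min(max(v, lo), hi)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        teacher = min(pop, key=objective)
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        for i, x in enumerate(pop):
            tf = rng.choice([1, 2])                       # teaching factor
            cand = [clip(x[d] + rng.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]                  # teacher phase
            if objective(cand) < objective(x):
                pop[i] = x = cand
            other = pop[rng.randrange(pop_size)]          # learner phase
            sign = 1 if objective(x) < objective(other) else -1
            cand = [clip(x[d] + sign * rng.random() * (x[d] - other[d]))
                    for d in range(dim)]
            if objective(cand) < objective(x):
                pop[i] = cand
    return min(pop, key=objective)
```

Both phases accept a candidate only if it improves the objective, so each learner's fitness is non-increasing across iterations.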

A SE Approach to Predict the Peak Cladding Temperature using Artificial Neural Network

  • ALAtawneh, Osama Sharif;Diab, Aya
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.16 no.2
    • /
    • pp.67-77
    • /
    • 2020
  • Traditionally, nuclear thermal hydraulics and nuclear safety have relied on numerical simulations to predict the system response of a nuclear power plant under either normal operation or accident conditions. However, this approach can be rather time-consuming, particularly for design and optimization problems. To expedite the decision-making process, data-driven models can be used to deduce the statistical relationships between inputs and outputs rather than solving physics-based models. Compared to the traditional approach, data-driven models can provide a fast and cost-effective framework for predicting the behavior of highly complex and non-linear systems where great computational effort would otherwise be required. The objective of this work is to develop an AI algorithm to predict the peak fuel cladding temperature as a metric for the successful implementation of FLEX strategies under an extended station blackout. To achieve this, the model must be trained on a pre-existing database created using the thermal-hydraulic analysis code MARS-KS. In the development stage, the model hyper-parameters are tuned and optimized using the Talos tool.

Novel Image Classification Method Based on Few-Shot Learning in Monkey Species

  • Wang, Guangxing;Lee, Kwang-Chan;Shin, Seong-Yoon
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.2
    • /
    • pp.79-83
    • /
    • 2021
  • This paper proposes a novel image classification method based on few-shot learning, which is mainly used to solve model overfitting and non-convergence in image classification tasks on small datasets and to improve classification accuracy. The method uses model structure optimization to extend a basic convolutional neural network (CNN) model and extracts more image features by adding convolutional layers, thereby improving classification accuracy. We incorporated several measures to improve the performance of the model. First, we used general methods such as setting a lower learning rate and shuffling to promote rapid convergence of the model. Second, we used data expansion technology to preprocess the small dataset, increasing the number of training samples and suppressing overfitting. We applied the model to 10 monkey species and achieved outstanding performance. Experiments indicate that our proposed method achieves an accuracy of 87.92%, which is 26.1% higher than that of the traditional CNN method and 1.1% higher than that of the deep convolutional neural network ResNet50.
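The data-expansion step mentioned above can be sketched, under the assumption that simple flips are among the transformations used (the paper does not list its exact transformations), as:

```python
def augment(image):
    """Data-expansion step: return the image plus simple flipped copies,
    multiplying a small labelled training set without collecting new data.
    `image` is a 2-D array of pixel values as nested lists."""
    h_flip = [row[::-1] for row in image]   # mirror left-right
    v_flip = image[::-1]                    # mirror top-bottom
    return [image, h_flip, v_flip]
```

Each copy keeps the original label, so a few-shot class with n examples yields 3n training samples here.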

A Robust Energy Consumption Forecasting Model using ResNet-LSTM with Huber Loss

  • Albelwi, Saleh
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.7
    • /
    • pp.301-307
    • /
    • 2022
  • Energy consumption has grown alongside dramatic population increases. Statistics show that buildings in particular utilize a significant amount of energy worldwide. Because of this, building energy prediction is crucial to optimizing utilities' energy plans and also to creating a predictive model for consumers. To improve energy prediction performance, this paper proposes a ResNet-LSTM model that combines residual networks (ResNets) and long short-term memory (LSTM) for energy consumption prediction. ResNets are utilized to extract complex and rich features, while LSTM has the ability to learn temporal correlation; a dense layer is used as a regressor to forecast energy consumption. To make our model more robust, we employed the Huber loss during the optimization process. The Huber loss handles small errors quadratically for efficiency and takes the absolute error for large errors to increase robustness, making our model less sensitive to outlier data. Our proposed system was trained on historical data to forecast energy consumption for different time series. To evaluate our proposed model, we compared its performance with several popular machine learning and deep learning methods, such as linear regression, neural networks, decision trees, and convolutional neural networks. The results show that our proposed model predicted energy consumption most accurately.
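The Huber loss described above has a short closed form; the sketch below implements the mean Huber loss with the conventional threshold parameter delta (the value 1.0 is an illustrative default, not the paper's setting).

```python
def huber_loss(errors, delta=1.0):
    """Mean Huber loss: quadratic for |e| <= delta, linear beyond it,
    which damps the influence of outliers compared with squared error."""
    total = 0.0
    for e in errors:
        if abs(e) <= delta:
            total += 0.5 * e * e                    # quadratic region
        else:
            total += delta * (abs(e) - 0.5 * delta)  # linear region
    return total / len(errors)
```

The two branches meet smoothly at |e| = delta, so the loss stays differentiable, which is what makes it usable inside gradient-based training of the ResNet-LSTM.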