• Title/Summary/Keyword: Sensitivity algorithm

Search Results: 1,033

A Study on Polynomial Neural Networks for Stabilized Deep Networks Structure (안정화된 딥 네트워크 구조를 위한 다항식 신경회로망의 연구)

  • Jeon, Pil-Han;Kim, Eun-Hu;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers / v.66 no.12 / pp.1772-1781 / 2017
  • In this study, a design methodology for alleviating the overfitting problem of Polynomial Neural Networks (PNN) is realized with the aid of two techniques, L2 regularization and the Sum of Squared Coefficients (SSC). PNN is widely used as a mathematical modeling method, for example for the identification of linear systems from input/output data and for regression modeling in prediction problems. PNN is an algorithm that obtains a preferred network structure by generating consecutive layers and nodes from multivariate polynomial sub-expressions. It has far fewer nodes and more flexible adaptability than existing neural network algorithms. However, such algorithms suffer from overfitting due to noise sensitivity and excessive training during the generation of successive network layers. To alleviate this overfitting problem and to design the ensuing deep network structure effectively, two techniques are introduced: SSC and $L_2$ regularization are applied to the consecutive generation of each layer and of the nodes within each layer when constructing the deep PNN structure. The $L_2$ regularization technique estimates small coefficients by adding a penalty term to the cost function; it is a representative way of reducing the influence of noise by flattening the solution space and shrinking coefficient magnitudes. The SSC technique minimizes the sum of squared polynomial coefficients instead of only the squared errors. As a result, the overfitting of the deep PNN structure is stabilized by the proposed method. This study demonstrates the feasibility of deep network structure design and big-data processing, and the superiority of the network performance is shown through experiments.
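
A minimal sketch of the $L_2$-regularized coefficient estimation described above (a generic ridge-style illustration, not the authors' implementation); the quadratic two-input node polynomial, the penalty weight lam, and the synthetic data are assumptions made for illustration only.

```python
# Hedged sketch: ridge-style L2-regularized coefficient estimation for one PNN
# node, assuming a quadratic two-input partial description
# y_hat = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x2^2 + c5*x1*x2.
import numpy as np

def fit_node_l2(x1, x2, y, lam=1e-2):
    # Design matrix of the polynomial sub-expression for one candidate node.
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    # L2 regularization: adding lam * I flattens the solution space and shrinks
    # coefficient magnitudes, which reduces sensitivity to noise.
    c = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return c, X @ c

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
y = 1.0 + 0.5 * x1 - 0.3 * x1 * x2 + rng.normal(scale=0.1, size=200)
coef, y_hat = fit_node_l2(x1, x2, y)
print(coef)
```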

Prediction of high turbidity in rivers using LSTM algorithm (LSTM 모형을 이용한 하천 고탁수 발생 예측 연구)

  • Park, Jungsu;Lee, Hyunho
    • Journal of Korean Society of Water and Wastewater / v.34 no.1 / pp.35-43 / 2020
  • Turbidity has various effects on the water quality and ecosystem of a river. High turbidity during floods increases the operating cost of a drinking water supply system, so the management of turbidity is essential for providing safe water to the public. There have been various efforts to estimate turbidity in river systems for proper management and early warning of high turbidity in the water supply process. Advanced data analysis technology using machine learning has been increasingly used in water quality management. Artificial neural networks (ANNs) were among the first algorithms applied, but overfitting to the observed data and the vanishing gradient in the backpropagation process limit their wide application in practice. In recent years, deep learning, which overcomes these limitations of ANNs, has been applied in water quality management. LSTM (Long Short-Term Memory) is one of the novel deep learning algorithms widely used in the analysis of time series data. In this study, LSTM is used to predict high turbidity (>30 NTU) in a river from the relationship between turbidity and discharge, which enables early warning of high turbidity in a drinking water supply system. The model showed 0.98, 0.99, 0.98 and 0.99 for precision, recall, F1-score and accuracy, respectively, for the prediction of high turbidity with 2-hour frequency data. The sensitivity of the model to the observation interval of the data is also compared for intervals of 2 hours, 8 hours, 1 day and 2 days. The model shows higher precision with shorter observation intervals, which underscores the importance of collecting high-frequency data for better management of water resources in the future.
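
A hedged sketch of the kind of LSTM classifier the abstract describes, written with Keras; the window length, feature set, layer sizes, and synthetic data are assumptions and do not reproduce the paper's model.

```python
# Illustrative sketch (assumed architecture, not the paper's exact model):
# an LSTM classifier that flags high turbidity (>30 NTU) from short windows
# of discharge and turbidity history sampled at a fixed interval.
import numpy as np
import tensorflow as tf

WINDOW = 12      # hypothetical: 12 past observations per sample
N_FEATURES = 2   # e.g., discharge and turbidity at each time step

model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(turbidity > 30 NTU)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# Dummy data standing in for windowed river observations.
X = np.random.rand(256, WINDOW, N_FEATURES).astype("float32")
y = (np.random.rand(256) > 0.8).astype("float32")   # 1 = high-turbidity event
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```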

Comparison of Steady and Unsteady Water Quality Model (정상 및 비정상상태 하천수질모형의 비교)

  • Ko, Ick-Hwan;Noh, Joon-Woo;Kim, Young-Do
    • Journal of Korea Water Resources Association / v.38 no.6 s.155 / pp.505-515 / 2005
  • Two representative river water quality models are compared in this paper: the steady-state model QUAL2E and the unsteady model CE-QUAL-RIV1 were chosen for comparative simulations. Using the same reaction coefficients and boundary conditions, the water quality of the Geum River below Daechung Dam was simulated with both models, and their water quality equations are compared with each other. Since the basic model algorithms are very similar, the input data required for a model run are nearly identical. For the simulation under steady conditions, the results of the two models show very good agreement, especially for BOD, DO, and $NH_3$-N, while the results for specific constituents such as dissolved P differ considerably. Finally, the dominant water quality parameters for computing each corresponding water quality variable are summarized and tabulated through a sensitivity analysis.
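
For context, a sketch of the classical Streeter-Phelps dissolved-oxygen deficit, a simplified steady-state analogue of the BOD-DO kinetics solved by steady models such as QUAL2E; the coefficients below are arbitrary illustrative values, not values from the Geum River simulations.

```python
# Illustrative sketch of the classical steady-state Streeter-Phelps BOD-DO
# relationship (coefficients are arbitrary examples).
import numpy as np

def do_deficit(x_km, u=0.3, kd=0.4, ka=0.9, L0=10.0, D0=1.0):
    """DO deficit [mg/L] at downstream distance x_km.
    u: velocity [m/s], kd: BOD decay [1/day], ka: reaeration [1/day],
    L0: initial ultimate BOD [mg/L], D0: initial deficit [mg/L]."""
    t = x_km * 1000.0 / u / 86400.0          # travel time in days
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
           + D0 * np.exp(-ka * t)

for x in (0, 10, 30, 60):
    print(x, round(do_deficit(x), 2))
```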

Dynamic Analysis of a KAERI Channel Type Shear Wall: System Identification, FE Model Updating and Time-History Responses (KAERI 채널형 전단벽체의 동적해석; 시스템판별, FE 모델향상 및 시간이력 응답)

  • Cho, Soon-Ho
    • Journal of the Earthquake Engineering Society of Korea / v.25 no.3 / pp.145-152 / 2021
  • KAERI has planned a series of dynamic tests using a shaking table, together with time-history analyses, for a channel-type concrete shear wall to investigate its seismic performance, motivated by the recent frequent occurrence of earthquakes in the south-eastern part of Korea. The overall size of the test specimen is b×l×h = 2500 mm × 3500 mm × 4500 mm, and it consists of three stories with slab and wall thicknesses of 140 mm and 150 mm, respectively. The system identification, FE model updating, and time-history analysis results for the test shear wall are presented herein. By applying the advanced system identification method known as pLSCF, improved modal parameters are extracted for the lower modes. Using three FE packages, FEMtools, Ruaumoko, and VecTor4, eigenanalyses are performed for an initial FE model, yielding consistent eigenvalues. However, the models behave as much as 30 to 50% stiffer than the test results in the 1st and 2nd modes. The FE model updating is carried out with the 6-dof spring stiffnesses at the wall base as the major parameters, adopting a Bayesian-type automatic updating algorithm to minimize the residuals in the modal parameters. The updating results indicate that the highest sensitivity appears in the vertical translational springs at a few locations, with variations ranging from 300 to 500%; however, these changes seem to have no physical meaning given their numerical values. Finally, using the updated FE model, the time-history responses are predicted by Ruaumoko at each floor where accelerometers are located. The accelerograms from test and analysis show an acceptable match in terms of maximum and minimum values, but the magnitudes and patterns of the floor response spectra differ somewhat because of the slightly different input accelerograms and damping ratios involved.
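
A conceptual sketch of sensitivity-based updating of base-spring stiffnesses against measured frequencies, in the spirit of (but not reproducing) the Bayesian-type algorithm used in the study; the stand-in eigen-solver, parameter values, and "measured" frequencies are hypothetical.

```python
# Conceptual sketch: iteratively update spring-stiffness parameters so that
# model natural frequencies approach measured ones, using finite-difference
# sensitivities and a damped least-squares step.
import numpy as np

def natural_freqs(k):
    """Hypothetical stand-in for an eigen-solver that returns natural
    frequencies [Hz] as a function of the base-spring stiffness vector k."""
    return np.sqrt(np.array([1.0, 3.5]) * k.sum() / 10.0)

f_meas = np.array([2.1, 4.0])        # "measured" modal frequencies (dummy values)
k = np.array([20.0, 20.0])           # initial vertical spring stiffnesses (dummy)

for _ in range(20):
    r = f_meas - natural_freqs(k)    # frequency residuals to be minimized
    # Finite-difference sensitivity matrix d(freq)/d(k).
    S = np.column_stack([
        (natural_freqs(k + dki) - natural_freqs(k)) / 1e-3
        for dki in np.eye(len(k)) * 1e-3
    ])
    # Damped (regularized) least-squares parameter step.
    step = np.linalg.solve(S.T @ S + 1e-6 * np.eye(len(k)), S.T @ r)
    k = k + step

print("updated stiffnesses:", k)
print("model frequencies  :", natural_freqs(k))
```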

A Study on Kernel Size Adaptation for Correntropy-based Learning Algorithms (코렌트로피 기반 학습 알고리듬의 커널 사이즈에 관한 연구)

  • Kim, Namyong
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.714-720 / 2021
  • Information-theoretic learning (ITL) based on kernel density estimation, which has been applied successfully to machine learning and signal processing, has the drawback of being severely sensitive to the choice of kernel size. For maximization of the correntropy criterion (MCC), one of the ITL-type criteria, several methods have been studied that adapt the kernel size remaining in the update after a particular term is removed. In this paper, it is shown that the main cause of the sensitivity in choosing the kernel size derives from that term, and that adaptively adjusting the kernel size in the remaining terms makes it approach the absolute value of the error, which prevents the weight adjustment from continuing. It is therefore proposed that choosing an appropriate constant as the kernel size for the remaining terms is more effective. In addition, experimental results show that, compared with the conventional algorithm, the proposed method improves learning performance by about 2 dB of steady-state MSE at the same convergence rate. In an experiment with channel models, the proposed method improves performance by 4 dB, making it more suitable for more complex or inferior conditions.
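
For reference, a minimal sketch of a standard MCC adaptive filter update with a fixed Gaussian kernel size, showing where the kernel size enters the weight update; this is not the paper's modified algorithm, and the filter length, step size, and data are assumptions.

```python
# Minimal sketch of a standard MCC (maximum correntropy criterion) adaptive
# filter; the Gaussian kernel of size sigma weights each error-driven update.
import numpy as np

def mcc_filter(x, d, n_taps=4, mu=0.05, sigma=1.0):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # tap vector [x[n], ..., x[n-3]]
        e = d[n] - w @ u                       # prediction error
        # Gaussian kernel weighting: large (outlier) errors are attenuated,
        # and sigma controls how quickly that attenuation sets in.
        w += mu * np.exp(-e**2 / (2 * sigma**2)) * e * u
    return w

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
d = np.convolve(x, [0.5, 0.3, -0.2, 0.1])[:len(x)] + 0.05 * rng.standard_t(2, size=2000)
print(mcc_filter(x, d))   # should approach [0.5, 0.3, -0.2, 0.1]
```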

A Heuristic for Service-Parts Lot-Sizing with Disassembly Option (분해옵션 포함 서비스부품 로트사이징 휴리스틱)

  • Jang, Jin-Myeong;Kim, Hwa-Joong;Son, Dong-Hoon;Lee, Dong-Ho
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.2 / pp.24-35 / 2021
  • Due to increasing awareness of the treatment of end-of-use/life products, disassembly has been a fast-growing research area for many researchers over recent decades. This paper introduces a novel lot-sizing problem that has not been studied in the literature: service-parts lot-sizing with a disassembly option. The disassembly option means that the demands for service parts can be fulfilled not only by newly manufactured parts but also by disassembled parts, i.e., parts recovered from the disassembly of end-of-use/life products. The objective of the considered problem is to maximize the total profit, i.e., the revenue from selling the service parts minus the total cost of fixed setups, production, disassembly, inventory holding, and disposal over a planning horizon. This paper proves that the single-period version of the problem is NP-hard and suggests a heuristic that combines a simulated annealing algorithm with a linear-programming relaxation. Computational experiments show that the heuristic generates near-optimal solutions within reasonable computation time, which implies that it is a viable optimization tool for service-parts inventory management. In addition, sensitivity analyses indicate that setting an appropriate price for disassembled parts and an appropriate collection amount of end-of-use/life products is very important for sustainable service-parts systems.
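
A skeleton sketch of the heuristic's outer loop as the abstract describes it: simulated annealing over binary setup/disassembly decisions, with a toy profit function standing in for the linear-programming relaxation that the paper uses to fix production and disassembly quantities.

```python
# Simulated-annealing skeleton; `evaluate` is a placeholder for the LP
# relaxation that would set quantities given the binary decisions.
import math
import random

def evaluate(decisions):
    """Toy profit function standing in for the LP-relaxation step."""
    return -abs(sum(decisions) - len(decisions) // 2)

def simulated_annealing(n_periods=12, t0=10.0, cooling=0.95, iters=500):
    current = [random.randint(0, 1) for _ in range(n_periods)]
    best, best_val = current[:], evaluate(current)
    temp = t0
    for _ in range(iters):
        neighbor = current[:]
        i = random.randrange(n_periods)
        neighbor[i] ^= 1                          # flip one setup/disassembly decision
        delta = evaluate(neighbor) - evaluate(current)
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current = neighbor
            if evaluate(current) > best_val:
                best, best_val = current[:], evaluate(current)
        temp *= cooling                           # geometric cooling schedule
    return best, best_val

print(simulated_annealing())
```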

Numerical and experimental investigation for monitoring and prediction of performance in the soft actuator

  • Azizkhani, Mohammadbagher;sangsefidi, Alireza;Kadkhodapour, Javad;Anaraki, Ali Pourkamali
    • Structural Engineering and Mechanics / v.77 no.2 / pp.167-177 / 2021
  • Due to various benefits such as unlimited degrees of freedom, adaptability to the environment, and safety for humans, engineers have used soft materials with hyperelastic behavior in various industrial, medical, rescue, and other sectors. One application of these materials is the fabrication of bending soft actuators (SAs), which eliminates many problems of conventional actuators such as production cost, mechanical complexity, and design algorithms. However, despite these many benefits, SAs have their own complexities, such as predicting and monitoring their behavior. The first part of this paper deals with predicting SA behavior through mathematical models such as the Ogden and Darijani models and comparing the predictions with experimental results. First, by examining different geometric models, a cubic structure was selected as the optimal structure among the investigated models; at the same pressure, this geometry showed the largest bending in the simulation. The simulation results were then compared with experiments, and the final gripper model was designed and manufactured using a 3D printer, with silicone rubber for the polymer part. This geometry can bend up to a 90-degree angle at 70 kPa in less than 2 seconds. The second section is dedicated to monitoring the bending behavior using strain sensors with different sensitivities and stretchabilities. In the fabrication of the sensors, silicone is used as a soft material with hyperelastic behavior, with carbon fiber as a conductive material in the soft-material substrate. The SA designed in this paper can deform for up to 1000 cycles without changing its characteristics and can move objects weighing up to 1200 g. It can be used in soft robots and artificial hands for high-speed object harvesting.
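
For reference, the general form of the Ogden strain-energy function mentioned above (standard incompressible form only; the material constants fitted in the paper are not reproduced here):

```latex
W(\lambda_1,\lambda_2,\lambda_3)
  = \sum_{i=1}^{N} \frac{\mu_i}{\alpha_i}
    \left( \lambda_1^{\alpha_i} + \lambda_2^{\alpha_i} + \lambda_3^{\alpha_i} - 3 \right),
\qquad \lambda_1 \lambda_2 \lambda_3 = 1 .
```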

Comparative Learning based Deep Learning Algorithm for Abnormal Beat Detection using Imaged Electrocardiogram Signal (비정상심박 검출을 위해 영상화된 심전도 신호를 이용한 비교학습 기반 딥러닝 알고리즘)

  • Bae, Jinkyung;Kwak, Minsoo;Noh, Kyeungkap;Lee, Dongkyu;Park, Daejin;Lee, Seungmin
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.1 / pp.30-40 / 2022
  • The shape and characteristics of electrocardiogram (ECG) signals vary from individual to individual, so it is difficult to classify them with a single neural network. Classifying a given beat directly is difficult, but if the corresponding normal beat is also given, the beat can be classified relatively easily and accurately by comparing the two. In this study, we classify the ECG signal by generating a reference normal beat through template clustering and combining it with the input ECG signal. It is possible to detect abnormal beats across the records of various individuals with one neural network by training on and classifying imaged ECG beats that are combined with the corresponding reference normal beat. In particular, various neural networks such as GoogLeNet, ResNet, and DarkNet showed excellent performance when using this comparative learning. We also confirmed that GoogLeNet achieves 99.72% sensitivity, the highest performance among the three networks.
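
A conceptual sketch of the comparative-learning input construction using a small PyTorch stand-in (not GoogLeNet/ResNet/DarkNet); the image size, the channel-stacking way of combining the query and reference beats, and the toy data are assumptions.

```python
# Conceptual sketch: the imaged query beat and its template-derived reference
# normal beat are stacked as channels and classified by a small CNN stand-in.
import torch
import torch.nn as nn

class PairedBeatClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # normal vs abnormal beat

    def forward(self, query_img, reference_img):
        x = torch.cat([query_img, reference_img], dim=1)  # 2-channel image
        return self.head(self.features(x).flatten(1))

# Dummy 64x64 images standing in for the imaged ECG beats.
query = torch.rand(8, 1, 64, 64)
reference = torch.rand(8, 1, 64, 64)
logits = PairedBeatClassifier()(query, reference)
print(logits.shape)  # torch.Size([8, 2])
```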

Application of POD reduced-order algorithm on data-driven modeling of rod bundle

  • Kang, Huilun;Tian, Zhaofei;Chen, Guangliang;Li, Lei;Wang, Tianyu
    • Nuclear Engineering and Technology / v.54 no.1 / pp.36-48 / 2022
  • As a valid numerical method for obtaining high-resolution flow fields, computational fluid dynamics (CFD) has been widely used to study coolant flow and heat transfer characteristics in fuel rod bundles. However, the time-consuming iterative solution of the Navier-Stokes equations makes CFD unsuitable for scenarios that require efficient simulation, such as sensitivity analysis and uncertainty quantification. To solve this problem, a reduced-order model (ROM) based on proper orthogonal decomposition (POD) and machine learning (ML) is proposed to simulate the flow field efficiently. First, a validated CFD model that outputs the flow-field data set of the rod bundle is established. Second, based on the POD method, the modes and corresponding coefficients of the flow field are extracted. Then a deep feed-forward neural network, chosen for its efficiency in approximating arbitrary functions and its ability to handle high-dimensional and strongly nonlinear problems, is used to build a model that maps the nonlinear relationship between the mode coefficients and the boundary conditions. A trained surrogate model for mode-coefficient prediction is obtained after a certain number of training iterations. Finally, the flow field is reconstructed from the product of the POD basis and the predicted coefficients. An evaluation of the ROM is carried out on the test dataset. The results show that the proposed POD-ROM accurately describes the flow state in the rod bundle with high resolution in only a few milliseconds.
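
A minimal sketch of the POD-ROM workflow described above, using synthetic snapshots in place of the rod-bundle CFD data; the scikit-learn MLP surrogate, the number of modes, and the boundary-condition parameterization are assumptions.

```python
# Sketch of the POD-ROM pipeline: POD modes via SVD of the snapshot matrix,
# an MLP mapping boundary conditions to mode coefficients, and reconstruction.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_cells, n_snap, n_modes = 500, 80, 5
bc = rng.uniform(size=(n_snap, 2))                 # e.g., inlet velocity, heat flux
snapshots = (np.sin(np.outer(np.arange(n_cells), bc[:, 0]))
             + 0.1 * rng.normal(size=(n_cells, n_snap)))   # stand-in flow fields

# 1) POD basis from the snapshot matrix (columns are snapshots).
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :n_modes]                             # retained POD modes
coeffs = basis.T @ snapshots                       # mode coefficients per snapshot

# 2) Surrogate mapping boundary conditions -> mode coefficients.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(bc, coeffs.T)

# 3) Reconstruction of an unseen case from predicted coefficients.
bc_new = np.array([[0.3, 0.7]])
field_pred = basis @ surrogate.predict(bc_new).T
print(field_pred.shape)                            # (500, 1)
```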

Fraud detection support vector machines with a functional predictor: application to defective wafer detection problem (불량 웨이퍼 탐지를 위한 함수형 부정 탐지 지지 벡터기계)

  • Park, Minhyoung;Shin, Seung Jun
    • The Korean Journal of Applied Statistics / v.35 no.5 / pp.593-601 / 2022
  • We call "fraud" those cases that occur infrequently but cause significant losses. Fraud detection is commonly encountered in various applications, including wafer production in the semiconductor industry. It is not trivial to extend standard binary classification methods directly to the fraud detection context, because the misclassification cost for the fraud class is much higher than for the normal class. In this article, we propose the functional fraud detection support vector machine (F2DSVM), which extends the fraud detection support vector machine (FDSVM) to handle functional covariates. The proposed method seeks a classifier for a functional predictor that achieves optimal performance while attaining the desired sensitivity level. Like the conventional SVM, F2DSVM has piecewise-linear solution paths, allowing us to develop an efficient algorithm that recovers the entire solution path and thereby significantly improves computational efficiency. Finally, we apply the proposed F2DSVM to the defective wafer detection problem and assess its potential applicability.
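
As a rough analogue only (not the authors' F2DSVM or its solution-path algorithm), a cost-sensitive SVM in which the weight on the rare defective class is increased until a target sensitivity is reached; the synthetic features and the weight grid are assumptions.

```python
# Rough analogue: asymmetric class weights in a standard SVM, tuned until the
# desired sensitivity (recall on the defective class) is achieved.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                               # stand-in wafer features
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 1.8).astype(int)  # rare "defective" class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

target_sensitivity = 0.9
for w in (1, 2, 5, 10, 20, 50):
    clf = SVC(kernel="linear", class_weight={0: 1, 1: w}).fit(X_tr, y_tr)
    sens = recall_score(y_te, clf.predict(X_te))
    print(f"weight={w:>3}  sensitivity={sens:.2f}")
    if sens >= target_sensitivity:
        break
```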