• Title/Summary/Keyword: Gaussian process model


Markov Chain Monte Carlo simulation based Bayesian updating of model parameters and their uncertainties

  • Sengupta, Partha;Chakraborty, Subrata
    • Structural Engineering and Mechanics, v.81 no.1, pp.103-115, 2022
  • The prediction error variances for frequencies are usually treated as unknown in Bayesian system identification, whereas the error variances for mode shapes are taken as known to reduce the dimension of the identification problem. The present study explores the effectiveness of Bayesian model parameter updating using the Markov Chain Monte Carlo (MCMC) technique while treating the prediction error variances of both the frequencies and the mode shapes as unknown. To ensure the ergodicity of the Markov chain, the posterior distribution is obtained by a Gaussian random walk over the proposal distribution. The prior distributions of the prediction error variances of the modal evidence are modeled as inverse gamma distributions to assess the effectiveness of estimating the posterior values of the model parameters. The ill-conditioning and associated singularity caused by incomplete data are addressed by adopting a regularization technique. The proposed approach is demonstrated numerically on an eight-storey frame model with both complete and incomplete modal data sets. Further, the accuracy and computational efficiency of the proposed approach are compared with those of the Sequential Monte Carlo approach to model parameter updating.
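The Gaussian random-walk sampling described above can be sketched in miniature: a minimal Metropolis chain on a toy one-dimensional posterior, not the authors' eight-storey frame model (the target density, step size, and burn-in are illustrative assumptions).

```python
import math
import random

def log_posterior(theta):
    # Toy 1D log-posterior standing in for the frame-model posterior:
    # Gaussian with mean 2.0 and standard deviation 0.5 (assumed values).
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis(n_samples, step=0.4, theta0=0.0, seed=1):
    """Random-walk Metropolis sampling with a Gaussian proposal."""
    random.seed(seed)
    chain, theta = [], theta0
    lp = log_posterior(theta)
    for _ in range(n_samples):
        proposal = theta + random.gauss(0.0, step)  # Gaussian random walk
        lp_prop = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio)
        if math.log(random.random()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain.append(theta)
    return chain

chain = metropolis(20000)
mean = sum(chain[5000:]) / len(chain[5000:])  # posterior mean after burn-in
```

In the full method the state vector would also carry the frequency and mode-shape error variances, with inverse gamma priors folded into `log_posterior`.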

A Study on the Heat Transfer Characteristics of Single Bead Deposition of Inconel 718 Superalloy on S45C Structural Steel Using a DMT Process (DMT 공정을 이용한 S45C 구조용강 위 Inconel 718 초합금 단일 비드 적층시 열전달 특성 분석에 관한 연구)

  • Lee, Kwang-Kyu;Ahn, Dong-Gyu;Kim, Woo-Sung;Lee, Ho-Jin
    • Journal of the Korean Society of Manufacturing Process Engineers, v.19 no.8, pp.56-63, 2020
  • The heat transfer phenomenon in the vicinity of the irradiated region of a focused laser beam in a DMT process greatly affects both the deposition characteristics of powders on a substrate and the properties of the deposited region. The goal of this paper is to investigate the heat transfer characteristics of single bead deposition of Inconel 718 powders on S45C structural steel using a laser-aided direct metal tooling (DMT) process. A finite element analysis (FEA) model with a Gaussian volumetric heat flux is developed to simulate the three-dimensional transient heat transfer phenomenon. The cross-section of the bead for the FEA is estimated with an equivalent area method using experimental results. Through comparison of the experimental results and those of the analysis, the effective beam radius of the bottom region of the volumetric heat flux and the efficiency of the heat flux model are predicted for different laser powers and travel speeds. From the results of the FEA, the influence of the power and the travel speed of the laser on the creation of a steady-state heat transfer region and the formation of the heat-affected zone (HAZ) in the substrate is investigated.
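Gaussian heat-flux models of this kind are commonly built on the standard TEM00 beam profile, q(r) = 2P/(pi*rb^2) * exp(-2 r^2 / rb^2). A hedged sketch of that surface form, with a numerical check that it integrates back to the laser power (the power and beam radius below are illustrative, not values from the paper):

```python
import math

def gaussian_flux(r, power, beam_radius):
    """Surface heat flux (W/m^2) of a Gaussian laser beam at radius r.

    Standard TEM00 profile: q(r) = 2P / (pi * rb^2) * exp(-2 r^2 / rb^2),
    where rb is the 1/e^2 beam radius. Parameter values are assumptions.
    """
    rb2 = beam_radius ** 2
    return (2.0 * power / (math.pi * rb2)) * math.exp(-2.0 * r ** 2 / rb2)

# Integrate q(r) * 2*pi*r dr numerically; the total should recover P.
P, rb = 500.0, 1.0e-3  # 500 W laser, 1 mm beam radius (assumed)
dr = rb / 2000.0
total = sum(gaussian_flux(i * dr, P, rb) * 2.0 * math.pi * (i * dr) * dr
            for i in range(20000))  # integrate out to 10 * rb
```

A volumetric variant would additionally spread this profile over the beam penetration depth.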

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha;Crawford, Melba M.;Lee, Sang-Hoon
    • Korean Journal of Remote Sensing, v.25 no.6, pp.547-557, 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy. Recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition separates the return waveform into a mixture of components, which are then used to characterize the original data; the most common statistical mixture model used for this process is the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints, so the decomposition results ultimately affect the interpretation of LIDAR waveform data. The computational burden of waveform decomposition stems from two factors: (1) the number of components in a mixture and the corresponding parameter estimates are inter-related and cannot be determined separately, and (2) parameter optimization has no closed-form solution and must be carried out iteratively. A current state-of-the-art airborne LIDAR system acquires more than 50,000 waveforms per second, so decomposing this enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with a varying number of processors (8-256), and the results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting approach yielded the best performance among the four.
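One way the decomposition-time-based weighting could be realized is a greedy longest-processing-time assignment, always handing the next-longest job to the least-loaded processor. This is a hedged sketch of that idea, not the parallel implementation benchmarked above; the per-waveform times are made up.

```python
import heapq

def balance_by_time(times, n_procs):
    """Assign waveform-decomposition jobs to processors greedily.

    Longest-processing-time-first: sort jobs by estimated decomposition
    time, always giving the next job to the least-loaded processor.
    """
    heap = [(0.0, p) for p in range(n_procs)]  # (current load, processor id)
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    for job, t in sorted(enumerate(times), key=lambda jt: -jt[1]):
        load, p = heapq.heappop(heap)          # least-loaded processor
        assignment[p].append(job)
        heapq.heappush(heap, (load + t, p))
    return assignment

times = [5.0, 3.0, 3.0, 2.0, 2.0, 2.0, 1.0]   # hypothetical per-waveform times
parts = balance_by_time(times, 2)
loads = [sum(times[j] for j in part) for part in parts]
```

With these toy times both processors end up with a load of 9.0, whereas unweighted round-robin would leave them unbalanced.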

Smartphone-User Interactive based Self Developing Place-Time-Activity Coupled Prediction Method for Daily Routine Planning System (일상생활 계획을 위한 스마트폰-사용자 상호작용 기반 지속 발전 가능한 사용자 맞춤 위치-시간-행동 추론 방법)

  • Lee, Beom-Jin;Kim, Jiseob;Ryu, Je-Hwan;Heo, Min-Oh;Kim, Joo-Seuk;Zhang, Byoung-Tak
    • KIISE Transactions on Computing Practices, v.21 no.2, pp.154-159, 2015
  • Over the past few years, user needs in the smartphone application market have shifted from diversity toward intelligence. Here, we propose a novel cognitive agent that plans the daily routines of users using lifelog data collected by individuals' smartphones. The proposed method first employs a DPGMM (Dirichlet Process Gaussian Mixture Model) to automatically extract each user's POIs (Points of Interest) from the lifelog data. The POIs and other meaningful features, such as GPS coordinates and the user's activity labels extracted from the log data, are then used to learn the patterns of the user's daily routine with a POMDP (Partially Observable Markov Decision Process). To identify the significant patterns within the user's time-dependent behavior, the SNS application Foursquare was used to record the locations the user visited and the activities the user performed. The method was evaluated by predicting the daily routines of seven users with 3,300 feedback data points. Experimental results showed that a daily routine schedule can be established after seven days of lifelog and feedback data have been collected, demonstrating the potential of place-time-activity coupled daily routine planning systems in the intelligent application market.
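What lets a DPGMM extract POIs without fixing the number of clusters in advance is the Dirichlet process prior. A minimal sketch of its stick-breaking construction of mixture weights; the truncation level and concentration parameter are illustrative assumptions, not values from the paper.

```python
import random

def stick_breaking(alpha, n_components, seed=0):
    """Draw truncated stick-breaking weights for a Dirichlet process mixture.

    Each weight is w_k = v_k * prod_{j<k}(1 - v_j) with v_k ~ Beta(1, alpha).
    The concentration `alpha` controls how many clusters (here, POIs)
    receive appreciable weight.
    """
    random.seed(seed)
    weights, remaining = [], 1.0
    for _ in range(n_components):
        v = random.betavariate(1.0, alpha)
        weights.append(v * remaining)  # break off a piece of the stick
        remaining *= 1.0 - v           # what is left for later components
    return weights

w = stick_breaking(alpha=2.0, n_components=20)
```

In a full DPGMM each weight would be paired with a Gaussian over GPS coordinates, and inference would prune components whose weights collapse toward zero.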

Adjustment of Korean Birth Weight Data (한국 신생아의 출생체중 데이터 보정)

  • Shin, Hyungsik
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.2, pp.259-264, 2017
  • The birth weight of a newborn baby provides very important information for evaluating clinical issues such as fetal growth restriction. This paper analyzes the birth weight data of babies born in Korea from 2011 to 2013 and shows that the data contain a biologically implausible distribution of birth weights, implying that errors may have been introduced during data collection. In particular, analysis of the relationship between gestational period and birth weight shows that the birth weight data, mostly for gestational periods of 28 to 32 weeks, contain noticeable errors. This paper therefore employs a finite Gaussian mixture model to classify the collected data points into two classes, non-corrupted and corrupted, and then removes the data points predicted to be corrupted. This adjustment scheme provides more natural and medically plausible percentile values of birth weights for all gestational periods.
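The two-class finite Gaussian mixture step can be sketched with a plain EM fit in one dimension. The data below are synthetic toy weights, not the Korean birth registry, and the simple quartile initialization is an assumption.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_two_gaussians(data, iters=50):
    """EM for a two-component 1D Gaussian mixture.

    Returns (weights, means, sigmas, responsibilities); points whose
    responsibility for the "corrupted" component exceeds 0.5 would be
    flagged for removal.
    """
    data = sorted(data)
    n = len(data)
    mu = [data[n // 4], data[3 * n // 4]]   # quartile initialization
    sigma, pi, r = [1.0, 1.0], [0.5, 0.5], []
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            p0 = pi[0] * normal_pdf(x, mu[0], sigma[0])
            p1 = pi[1] * normal_pdf(x, mu[1], sigma[1])
            r.append(p1 / (p0 + p1))
        # M-step: update mixing weights, means, standard deviations
        n1 = sum(r)
        n0 = n - n1
        pi = [n0 / n, n1 / n]
        mu = [sum((1 - ri) * x for ri, x in zip(r, data)) / n0,
              sum(ri * x for ri, x in zip(r, data)) / n1]
        sigma = [max(1e-3, math.sqrt(sum((1 - ri) * (x - mu[0]) ** 2 for ri, x in zip(r, data)) / n0)),
                 max(1e-3, math.sqrt(sum(ri * (x - mu[1]) ** 2 for ri, x in zip(r, data)) / n1))]
    return pi, mu, sigma, r

random.seed(3)
# Synthetic "birth weights" (kg): an implausible cluster and a plausible one
data = [random.gauss(1.6, 0.2) for _ in range(200)] + \
       [random.gauss(3.3, 0.3) for _ in range(100)]
pi, mu, sigma, resp = fit_two_gaussians(data)
```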

Autism Spectrum Disorder Detection in Children using the Efficacy of Machine Learning Approaches

  • Tariq Rafiq;Zafar Iqbal;Tahreem Saeed;Yawar Abbas Abid;Muneeb Tariq;Urooj Majeed;Akasha
    • International Journal of Computer Science & Network Security, v.23 no.4, pp.179-186, 2023
  • For the future prosperity of any society, the sound growth of children is essential. Autism Spectrum Disorder (ASD) is a neurobehavioral disorder that affects an autistic child's social interaction and adversely affects learning, speaking, and responding skills. These children may be over- or under-sensitive to touch, smell, and hearing. Symptoms usually appear in children aged 4 to 11 years, but parents often do not notice them and fail to detect the disorder at an early stage. Diagnosis currently relies on clinical sessions that are very time consuming and expensive. To complement the conventional method, machine learning techniques are being used; they reduce the time required for diagnosis and improve its precision. We applied a TFLite model to an image-based dataset to predict autism from children's facial features. Afterwards, various machine learning techniques, including Logistic Regression, KNN, Gaussian Naïve Bayes, Random Forest, and Multi-Layer Perceptron, were trained on the Autism Spectrum Quotient (AQ) dataset to improve the accuracy of ASD detection. On the image-based dataset, the TFLite model showed 80% accuracy, and on the AQ dataset, we achieved 100% accuracy with the Logistic Regression and MLP models.
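Of the models listed, Gaussian Naive Bayes is simple enough to sketch from scratch: per-class, per-feature normal densities combined under an independence assumption. The features below are hypothetical screening scores, not the AQ dataset.

```python
import math
from collections import defaultdict

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class, per-feature normal densities."""

    def fit(self, X, y):
        groups = defaultdict(list)
        for xi, yi in zip(X, y):
            groups[yi].append(xi)
        self.stats, self.priors = {}, {}
        for label, rows in groups.items():
            n = len(rows)
            self.priors[label] = n / len(X)
            means = [sum(col) / n for col in zip(*rows)]
            vars_ = [max(1e-6, sum((v - m) ** 2 for v in col) / n)
                     for col, m in zip(zip(*rows), means)]
            self.stats[label] = (means, vars_)
        return self

    def predict(self, x):
        best, best_lp = None, -math.inf
        for label, (means, vars_) in self.stats.items():
            # Log prior plus sum of per-feature Gaussian log-likelihoods
            lp = math.log(self.priors[label])
            for v, m, var in zip(x, means, vars_):
                lp += -0.5 * (math.log(2 * math.pi * var) + (v - m) ** 2 / var)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Hypothetical two-feature screening scores; label 1 = ASD-positive
X = [[8, 1], [9, 2], [7, 1], [2, 7], [1, 8], [3, 9]]
y = [1, 1, 1, 0, 0, 0]
clf = GaussianNB().fit(X, y)
```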

A Study on the Performance Degradation Pattern of Caisson-type Quay Wall Port Facilities (케이슨식 안벽 항만시설의 성능저하패턴 연구)

  • Na, Yong Hyoun;Park, Mi Yeon;Jang, Shinwoo
    • Journal of the Society of Disaster Information, v.18 no.1, pp.146-153, 2022
  • Purpose: Domestic port structures that have been in use for a long time exhibit many problems in safety performance and functionality due to the enlargement of ships, increased frequency of use, and the effects of natural disasters driven by climate change. A big data analysis method was studied to develop an approximate model that can predict the aging pattern of a port facility based on its maintenance history data. Method: In this study, member-level maintenance history data for caisson-type quay walls were collected and defined as big data, and a predictive approximation model was derived from the data to estimate the aging pattern and deterioration of the facility at the project level. State-based aging pattern prediction models generated through Gaussian process (GP) and linear interpolation (SLPT) techniques were proposed, and the models were compared through validation to identify the one best suited to big data utilization. Result: Examining the suitability of the proposed method, the SLPT method yielded RMSE values of 0.9215 and 0.0648, so the predictive model based on the SLPT method is considered suitable. Conclusion: Big-data-based prediction of facility performance degradation is expected to become an important tool in maintenance decision-making.
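A linear-interpolation condition predictor evaluated by RMSE, in the spirit of the comparison above, can be sketched as follows. The inspection records are hypothetical, not the collected maintenance history.

```python
import math

def linear_interp(xs, ys, x):
    """Piecewise-linear interpolation of condition state vs. service year."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Hypothetical inspection records: (service year, condition index 0-100)
years = [0, 10, 20, 30, 40]
index = [100, 92, 80, 61, 40]
# Held-out inspections used to score the interpolated aging curve
held_out_years = [5, 15, 25, 35]
held_out_index = [97, 85, 70, 52]
pred = [linear_interp(years, index, x) for x in held_out_years]
err = rmse(pred, held_out_index)
```

A GP regressor would replace `linear_interp` with a kernel-based posterior mean, trading simplicity for uncertainty estimates.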

Finite Element Analysis of Large-Electron-Beam Polishing-Induced Temperature Distribution (대면적 전자빔 폴리싱 공정 시 발생하는 온도 분포 유한요소해석 연구)

  • Kim, J.S.;Kim, J.S.;Kang, E.G.;Lee, S.W.;Park, H.W.
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.22 no.6, pp.931-936, 2013
  • Recently, the use of large-electron-beam polishing for polishing complex metal surfaces has been proposed. In this study, the temperature induced by a large electron beam was predicted using heat transfer theory. A finite element (FE) model of a continuous wave (CW) electron beam was constructed assuming a Gaussian distribution. The temperature distribution and melting depth of an SUS304 sample were predicted while varying electron-beam polishing process parameters such as energy density and beam velocity. The results obtained with the developed FE model were compared with experimental results to verify its melting-depth prediction capability.

A random forest-regression-based inverse-modeling evolutionary algorithm using uniform reference points

  • Gholamnezhad, Pezhman;Broumandnia, Ali;Seydi, Vahid
    • ETRI Journal, v.44 no.5, pp.805-815, 2022
  • Model-based evolutionary algorithms are divided into three groups: estimation-of-distribution algorithms, inverse modeling, and surrogate modeling. Existing inverse modeling is mainly applied to multi-objective optimization problems and is not suitable for many-objective optimization problems. Some inverse-model techniques have been introduced, such as the inverse model of a multi-objective evolutionary algorithm, which maps the Pareto front (PF) to Pareto-optimal solutions over the nondominated solutions using a random grouping method and Gaussian processes. However, some of the most efficient inverse models might be eliminated during this procedure. There are also challenges such as the presence of many local PFs and the generation of poor solutions when the population exhibits no evident regularity. This paper proposes inverse modeling using random forest regression and uniform reference points that maps all nondominated solutions from the objective space to the decision space to solve many-objective optimization problems. The proposed algorithm is evaluated using a benchmark test suite for evolutionary algorithms. The results show an improvement in diversity and convergence performance (quality indicators).
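The random-forest-regression idea behind the inverse model can be illustrated in miniature with bootstrap-aggregated regression stumps mapping an objective value back to a decision variable. This is a toy of the technique, not the proposed algorithm; the linear objective-to-decision relation is an assumption for demonstration.

```python
import random

def fit_stump(xs, ys):
    """Best single-split regression stump on 1D data (minimizes SSE)."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    sx = [xs[i] for i in order]
    sy = [ys[i] for i in order]
    for k in range(1, len(sx)):
        left, right = sy[:k], sy[k:]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left) +
               sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, (sx[k - 1] + sx[k]) / 2, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

def bagged_stumps(xs, ys, n_trees=50, seed=7):
    """Forest-style regressor: average of stumps fit on bootstrap samples."""
    random.seed(seed)
    trees = []
    for _ in range(n_trees):
        idx = [random.randrange(len(xs)) for _ in range(len(xs))]  # bootstrap
        trees.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(t(x) for t in trees) / n_trees

# Toy inverse model: objective value f -> decision variable x, with x = f/2
f_vals = [i / 10 for i in range(21)]   # objective values in [0, 2]
x_vals = [f / 2 for f in f_vals]       # corresponding decision variables
model = bagged_stumps(f_vals, x_vals)
```

Depth-1 stumps underfit heavily; a real forest would grow deeper trees and handle vector-valued objectives and decisions.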

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information, v.29 no.5, pp.21-29, 2024
  • Reducing the radiation dose during CT scanning lowers the risk of radiation exposure, but it also significantly degrades image resolution and introduces noise that reduces diagnostic effectiveness. Noise removal from CT images is therefore an essential step in image restoration. To date, spatial-domain methods have been limited in their ability to remove only the noise by separating it from the original signal. In this paper, we aim to remove noise from CT images effectively using a wavelet-transform-based GAN model, the WT-GAN model, operating in the frequency domain. The GAN model generates denoised images through a generator with a U-Net structure and a discriminator with a PatchGAN structure. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images corrupted by various types of noise, namely Gaussian, Poisson, and speckle noise. The WT-GAN model outperformed the traditional BM3D filter as well as existing deep learning models such as DnCNN, the CDAE model, and the U-Net GAN model in both qualitative assessment and quantitative measures, namely PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure).
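Of the two quantitative measures, PSNR has a one-line definition: 10 * log10(MAX^2 / MSE). A minimal sketch using flat pixel lists for brevity (SSIM is more involved and omitted here); the images are made-up constants, not CT data.

```python
import math

def psnr(img1, img2, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-sized images.

    PSNR = 10 * log10(MAX^2 / MSE); higher means closer to the reference.
    Images are flat lists of pixel intensities here for simplicity.
    """
    mse = sum((a - b) ** 2 for a, b in zip(img1, img2)) / len(img1)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

clean = [100.0] * 64
noisy = [110.0] * 64       # constant offset of 10, so MSE = 100
value = psnr(clean, noisy)  # 10 * log10(255^2 / 100), about 28.13 dB
```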