• Title/Summary/Keyword: Gaussian Learning


Design of automatic cruise control system of mobile robot using fuzzy-neural control technique (퍼지-뉴럴 제어기법에 의한 이동형 로봇의 자율주행 제어시스템 설계)

  • 한성현;김종수
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1997.10a
    • /
    • pp.1804-1807
    • /
    • 1997
  • This paper presents a new approach to the design of a cruise control system for a mobile robot with two drive wheels. The proposed control scheme uses a Gaussian function as the unit function in the fuzzy-neural network and a back-propagation algorithm to train the fuzzy-neural network controller within the framework of the specialized learning architecture. A learning controller is proposed that consists of two fuzzy-neural networks based on independent reasoning and a connection net with fixed weights to simplify the fuzzy-neural networks. The performance of the proposed controller is demonstrated by computer simulation of trajectory tracking of the speed and azimuth of a mobile robot driven by two independent wheels.

  • PDF

The Azimuth and Velocity Control of a Mobile Robot with Two Drive Wheels by Neural-Fuzzy Control Method (뉴럴-퍼지제어기법에 의한 두 구동휠을 갖는 이동형 로보트의 자세 및 속도 제어)

  • Cho, Y.G.;Bae, J.I.
    • Journal of Power System Engineering
    • /
    • v.2 no.3
    • /
    • pp.74-82
    • /
    • 1998
  • This paper presents a new approach to the design of speed and azimuth control of a mobile robot with two drive wheels. The proposed control scheme uses a Gaussian function as the unit function in the neural-fuzzy network and a back-propagation algorithm to train the neural-fuzzy network controller within the framework of the specialized learning architecture. A learning controller is proposed that consists of two neural-fuzzy networks based on independent reasoning and a connection net with fixed weights to simplify the neural-fuzzy network. The performance of the proposed controller is demonstrated by computer simulation of trajectory tracking of the speed and azimuth of a mobile robot driven by two independent wheels.

  • PDF
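
The two entries above describe the same core scheme: Gaussian functions serve as the unit (membership) functions of a fuzzy-neural network, and back-propagation adjusts the controller within a specialized learning architecture. The following is a minimal sketch of that idea only, not the authors' implementation; the network size, learning rate, inputs, and reference signal are illustrative assumptions.

```python
import numpy as np

# Sketch: a fuzzy-neural layer whose unit functions are Gaussians, trained by
# gradient descent (back-propagation). Sizes and data are illustrative.
rng = np.random.default_rng(0)

n_rules, n_in = 8, 2                             # 8 fuzzy rules; inputs: (speed error, azimuth error)
centers = rng.uniform(-1, 1, (n_rules, n_in))    # Gaussian membership centers
widths = np.full((n_rules, n_in), 0.5)           # Gaussian membership widths
weights = rng.normal(0, 0.1, n_rules)            # consequent (output) weights

def forward(x):
    # Gaussian membership of each input, combined per rule by product inference
    mu = np.exp(-((x - centers) ** 2) / (2 * widths ** 2))  # (n_rules, n_in)
    firing = mu.prod(axis=1)                                # rule firing strengths
    return firing, firing @ weights                         # control output

def train_step(x, target, lr=0.05):
    # One gradient step on the squared tracking error (specialized-learning idea:
    # the output error drives the adjustment of the controller parameters).
    global weights
    firing, y = forward(x)
    err = y - target
    weights -= lr * err * firing
    return err

# Toy usage: drive the controller output toward a synthetic reference command.
for _ in range(200):
    x = rng.uniform(-1, 1, n_in)
    train_step(x, target=0.3 * x[0] - 0.1 * x[1])
```

In the papers, the error signal would come from the tracking error of the robot's speed and azimuth rather than from a synthetic reference as above.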

Study of Machine Learning Method for Anomaly Detection Using Multivariate Gaussian Distribution in LPWA Network Environment (LPWA 네트워크 환경에서 다변량 가우스 분포를 활용하여 이상탐지를 위한 머신러닝 기법 연구)

  • Lee, Sangjin;Kim, Keecheon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.309-311
    • /
    • 2017
  • With the recent development of Internet of Things (IoT) technology, we have arrived at a highly connected society. This paper focuses on the security issues that can arise within the LPWA network environment of the Internet of Things and proposes a new machine learning method, based on the multivariate Gaussian distribution and oriented toward next-generation IPS/IDS, that can detect and block unexpected and unusual device behavior.

  • PDF
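
The entry above applies a multivariate Gaussian density to device behaviour: fit a mean vector and covariance matrix to normal traffic, then flag samples whose density falls below a threshold. A minimal sketch under that reading (the telemetry features and the threshold rule are illustrative assumptions, not details from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Fit a multivariate Gaussian to "normal" device telemetry, then flag low-density samples.
rng = np.random.default_rng(1)
normal_traffic = rng.normal([10.0, 0.2], [1.0, 0.05], size=(500, 2))  # e.g. packet rate, error rate

mu = normal_traffic.mean(axis=0)
cov = np.cov(normal_traffic, rowvar=False)
density = multivariate_normal(mean=mu, cov=cov)

epsilon = np.quantile(density.pdf(normal_traffic), 0.01)  # threshold from training data

def is_anomalous(x):
    return density.pdf(x) < epsilon

print(is_anomalous([10.1, 0.21]))  # typical sample  -> False
print(is_anomalous([25.0, 0.9]))   # unusual behaviour -> True
```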

Blind Image Separation with Neural Learning Based on Information Theory and Higher-order Statistics (신경회로망 ICA를 이용한 혼합영상신호의 분리)

  • Cho, Hyun-Cheol;Lee, Kwon-Soon
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.57 no.8
    • /
    • pp.1454-1463
    • /
    • 2008
  • Blind source separation by independent component analysis (ICA) has been applied in signal processing, telecommunications, and image processing to recover unknown original source signals from mutually independent observation signals. Neural networks are trained by an unsupervised learning algorithm to estimate the original signals. Because the outputs of the neural networks that yield the original source signals are mutually independent, their mutual information is zero. This is equivalent to minimizing the Kullback-Leibler divergence between the joint probability density function of the network outputs and the corresponding factorial distribution. In this paper, we present a learning algorithm that uses information theory and higher-order statistics to solve the blind source separation problem. For the computer simulation, two deterministic signals and Gaussian noise are used as the original source signals. We also test the proposed algorithm by applying it to several discrete images.
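
A minimal sketch of the same separation task, using scikit-learn's FastICA in place of the authors' neural learning rule; the sources and mixing matrix are illustrative, echoing the paper's simulation with two deterministic signals and Gaussian noise:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two deterministic sources plus Gaussian noise, mixed linearly, then unmixed by ICA.
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t),                                   # deterministic source 1
                np.sign(np.sin(3 * t)),                          # deterministic source 2
                np.random.default_rng(2).normal(size=t.size)]    # Gaussian noise source

mixing = np.array([[1.0, 0.5, 0.2],
                   [0.6, 1.0, 0.4],
                   [0.3, 0.7, 1.0]])
observations = sources @ mixing.T        # what the "sensors" record

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observations)   # estimates of the independent sources
print(recovered.shape)                        # (2000, 3)
```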

Machine learning models for predicting the compressive strength of concrete containing nano silica

  • Garg, Aman;Aggarwal, Paratibha;Aggarwal, Yogesh;Belarbi, M.O.;Chalak, H.D.;Tounsi, Abdelouahed;Gulia, Reeta
    • Computers and Concrete
    • /
    • v.30 no.1
    • /
    • pp.33-42
    • /
    • 2022
  • Experimentally predicting the compressive strength (CS) of concrete for a given mix design is a time-consuming and laborious process. The present study proposes surrogate models based on the Support Vector Machine (SVM) and Gaussian Process Regression (GPR) machine learning techniques to predict the CS of concrete containing nano-silica. The contents of cement, aggregates, and nano-silica, the fineness of the nano-silica, the water-binder ratio, and the age at which strength is to be predicted are the input variables. The efficiency of the models is compared in terms of the Correlation Coefficient (CC), Root Mean Square Error (RMSE), Variance Account For (VAF), Nash-Sutcliffe Efficiency (NSE), and the ratio of RMSE to the observations' standard deviation (RSR). It was observed that SVM outperforms GPR in predicting the CS of concrete containing nano-silica.
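
As a hedged illustration of the comparison above (not the paper's dataset or tuning: the feature layout, kernels, and synthetic targets are assumptions), SVM and GPR regressors can be benchmarked on the same inputs with RMSE and correlation:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Columns: cement, aggregate, nano-silica content, fineness, w/b ratio, age (illustrative data)
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (200, 6))
y = 40 + 30 * X[:, 0] - 20 * X[:, 4] + 5 * X[:, 2] + rng.normal(0, 2, 200)  # synthetic CS (MPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVM", SVR(C=10.0)),
                    ("GPR", GaussianProcessRegressor(kernel=RBF(), alpha=1e-2, normalize_y=True))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    cc = np.corrcoef(y_te, pred)[0, 1]
    print(f"{name}: RMSE={rmse:.2f}, CC={cc:.3f}")
```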

No-reference Image Blur Assessment Based on Multi-scale Spatial Local Features

  • Sun, Chenchen;Cui, Ziguan;Gan, Zongliang;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.4060-4079
    • /
    • 2020
  • Blur is an important type of image distortion, and how to evaluate the quality of a blurred image accurately and efficiently has been a research hotspot in image processing in recent years. Inspired by the multi-scale perceptual characteristics of the human visual system (HVS), this paper presents a no-reference image blur/sharpness assessment method based on multi-scale local features in the spatial domain. First, considering that different content has different sensitivity to blur distortion, the image is divided block-wise into smooth, edge, and texture regions. Then, the Gaussian scale space of the image is constructed, and categorized contrast features between the original image and the Gaussian scale-space images are calculated to express the blur degree of the different image contents. To simulate the impact of viewing distance on blur distortion, the distribution characteristics of the local maximum gradient of multi-resolution images are also calculated in the spatial domain. Finally, the image blur assessment model is obtained by fusing all features and learning the mapping from features to quality scores with support vector regression (SVR). The performance of the proposed method is evaluated on four synthetically blurred databases and one real blurred database. The experimental results demonstrate that our method produces quality scores more consistent with subjective evaluations than other methods, especially for real blurred images.
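
Two of the building blocks in the abstract above, the Gaussian scale space and the SVR mapping from features to quality scores, can be sketched as follows; the filter scales, toy feature, and training scores are illustrative assumptions and not the paper's full feature set:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.svm import SVR

def blur_features(image, sigmas=(1, 2, 4)):
    """Contrast between the image and its Gaussian scale-space versions (toy feature)."""
    return np.array([np.mean(np.abs(image - gaussian_filter(image, s))) for s in sigmas])

# Illustrative training set: random "images" with known blur applied, scored inversely to blur.
rng = np.random.default_rng(4)
X, y = [], []
for blur in np.linspace(0, 5, 60):
    img = gaussian_filter(rng.uniform(0, 1, (64, 64)), blur)
    X.append(blur_features(img))
    y.append(1.0 / (1.0 + blur))              # pseudo quality score
model = SVR().fit(np.array(X), np.array(y))   # SVR maps features to quality scores

test_img = gaussian_filter(rng.uniform(0, 1, (64, 64)), 2.5)
print(model.predict([blur_features(test_img)]))
```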

Estimation of Spatial Distribution Using the Gaussian Mixture Model with Multivariate Geoscience Data (다변량 지구과학 데이터와 가우시안 혼합 모델을 이용한 공간 분포 추정)

  • Kim, Ho-Rim;Yu, Soonyoung;Yun, Seong-Taek;Kim, Kyoung-Ho;Lee, Goon-Taek;Lee, Jeong-Ho;Heo, Chul-Ho;Ryu, Dong-Woo
    • Economic and Environmental Geology
    • /
    • v.55 no.4
    • /
    • pp.353-366
    • /
    • 2022
  • Spatial estimation of geoscience data (geo-data) is challenging due to spatial heterogeneity, data scarcity, and high dimensionality. A novel spatial estimation method is needed that considers the characteristics of geo-data. In this study, we proposed the application of the Gaussian Mixture Model (GMM), a machine learning algorithm, with multivariate data for robust spatial prediction. The performance of the proposed approach was tested on soil chemical concentration data from a former smelting area. The concentrations of As and Pb determined by ex-situ ICP-AES were the primary variables to be interpolated, while the other metal concentrations by ICP-AES and all data determined by in-situ portable X-ray fluorescence (PXRF) were used as auxiliary variables in the GMM and in ordinary cokriging (OCK). Among the multidimensional auxiliary variables, important variables were selected using a variable selection method based on the random forest. GMM with the important multivariate auxiliary data decreased the root mean squared error (RMSE) down to 0.11 for As and 0.33 for Pb and increased the correlation (r) up to 0.31 for As and 0.46 for Pb compared to ordinary kriging and OCK using univariate or bivariate data. The use of GMM improved the spatial interpolation of anthropogenic metals in soil. The multivariate spatial approach can be applied to understand complex and heterogeneous geological and geochemical features.
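
One common way to turn a fitted Gaussian Mixture Model into a predictor of a primary variable given auxiliary variables is Gaussian mixture regression: fit the GMM on the joint data, then take the responsibility-weighted conditional means. This is in the spirit of the approach above, though the paper's exact formulation may differ; the data below are purely illustrative:

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Joint data: auxiliary variables (e.g. PXRF readings) in the first columns, target (e.g. As) last.
rng = np.random.default_rng(5)
aux = rng.normal(size=(300, 2))
target = 1.5 * aux[:, 0] - 0.5 * aux[:, 1] ** 2 + rng.normal(0, 0.2, 300)
joint = np.c_[aux, target]

gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(joint)
d = aux.shape[1]   # number of auxiliary dimensions

def predict_target(x):
    """Responsibility-weighted conditional mean E[target | aux = x]."""
    num, den = 0.0, 0.0
    for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_x, mu_y = mu[:d], mu[d]
        cov_xx, cov_yx = cov[:d, :d], cov[d, :d]
        resp = w * multivariate_normal(mu_x, cov_xx).pdf(x)           # component responsibility
        cond_mean = mu_y + cov_yx @ np.linalg.solve(cov_xx, x - mu_x)  # Gaussian conditional mean
        num += resp * cond_mean
        den += resp
    return num / den

print(predict_target(np.array([0.5, -1.0])))
```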

Landslide susceptibility assessment using feature selection-based machine learning models

  • Liu, Lei-Lei;Yang, Can;Wang, Xiao-Mi
    • Geomechanics and Engineering
    • /
    • v.25 no.1
    • /
    • pp.1-16
    • /
    • 2021
  • Machine learning models have been widely used for landslide susceptibility assessment (LSA) in recent years. The large number of inputs or conditioning factors for these models, however, can reduce computational efficiency and increase the difficulty of collecting data. Feature selection is a good tool to address this problem by selecting the most important features among all factors to reduce the size of the input variables. However, two important questions need to be answered: (1) how do feature selection methods affect the performance of machine learning models? and (2) which feature selection method is the most suitable for a given machine learning model? This paper aims to address these two questions by comparing the predictive performance of 13 feature selection-based machine learning (FS-ML) models and 5 ordinary machine learning models on LSA. First, five commonly used machine learning models (i.e., logistic regression, support vector machine, artificial neural network, Gaussian process, and random forest) and six typical feature selection methods from the literature are adopted to constitute the proposed models. Then, fifteen conditioning factors are chosen as input variables and 1,017 recorded landslides are used as data. Next, the feature selection methods are used to rank the importance of the conditioning factors and create feature subsets, based on which 13 FS-ML models are constructed. For each machine learning model, the best FS-ML model is selected according to the area under the curve (AUC) value. Finally, five optimal FS-ML models are obtained and applied to the LSA of the study area. The predictive abilities of the FS-ML models on LSA are verified and compared through the receiver operating characteristic (ROC) curve and statistical indicators such as sensitivity, specificity, and accuracy. The results showed that different feature selection methods have different effects on the performance of LSA machine learning models. FS-ML models generally outperform the ordinary machine learning models. The best FS-ML model is recursive feature elimination (RFE)-optimized RF, and RFE is an optimal method for feature selection.
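
The best-performing combination reported above, recursive feature elimination wrapped around a random forest, maps directly onto standard scikit-learn components. A minimal sketch with synthetic data standing in for the paper's fifteen conditioning factors and 1,017 landslide records:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Illustrative stand-in for 15 conditioning factors and landslide / non-landslide labels.
X, y = make_classification(n_samples=1000, n_features=15, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RFE ranks factors by repeatedly dropping the least important ones (per RF importances).
selector = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
               n_features_to_select=8).fit(X_tr, y_tr)

rf_full = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rf_rfe = RandomForestClassifier(n_estimators=200, random_state=0).fit(
    selector.transform(X_tr), y_tr)

print("RF     AUC:", roc_auc_score(y_te, rf_full.predict_proba(X_te)[:, 1]))
print("RFE-RF AUC:", roc_auc_score(y_te, rf_rfe.predict_proba(selector.transform(X_te))[:, 1]))
```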

Estimation of the Input Wave Height of the Wave Generator for Regular Waves by Using Artificial Neural Networks and Gaussian Process Regression (인공신경망과 가우시안 과정 회귀에 의한 규칙파의 조파기 입력파고 추정)

  • Jung-Eun, Oh;Sang-Ho, Oh
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.34 no.6
    • /
    • pp.315-324
    • /
    • 2022
  • The experimental data obtained in a wave flume were analyzed using machine learning techniques to establish a model that predicts the input wave height of the wavemaker from waves that have experienced wave shoaling, and to verify the performance of the established model. For this purpose, an artificial neural network (NN), the most representative machine learning technique, and Gaussian process regression (GPR), one of the non-parametric regression analysis methods, were applied, and the predictive performance of the two models was compared. The analysis was performed independently for the case in which all the data were used at once and for the case in which the data were classified by a criterion related to the occurrence of wave breaking. When the data were not classified, the error between the input wave height at the wavemaker and the measured value was relatively large for both the NN and GPR models. On the other hand, when the data were divided into non-breaking and breaking conditions, the accuracy of predicting the input wave height was greatly improved. Of the two models, the overall performance of the GPR model was better than that of the NN model.
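
The key finding above, that splitting the data by whether wave breaking occurred sharply improves accuracy, can be illustrated by fitting separate regressors per regime. The sketch below uses scikit-learn's GPR on synthetic data; the breaking criterion, features, and numbers are assumptions, not the paper's experimental setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Illustrative data: measured (shoaled) wave height and period -> wavemaker input wave height.
rng = np.random.default_rng(6)
measured_H = rng.uniform(0.05, 0.4, 300)
period = rng.uniform(1.0, 3.0, 300)
breaking = measured_H / period > 0.12     # assumed breaking criterion (illustrative only)
input_H = measured_H * np.where(breaking, 1.4, 1.1) + rng.normal(0, 0.01, 300)

X = np.c_[measured_H, period]
kernel = RBF() + WhiteKernel()

# Separate GPR models for the non-breaking and breaking subsets.
gpr_nb = GaussianProcessRegressor(kernel=kernel).fit(X[~breaking], input_H[~breaking])
gpr_b = GaussianProcessRegressor(kernel=kernel).fit(X[breaking], input_H[breaking])

sample = np.array([[0.2, 2.5]])
model = gpr_b if sample[0, 0] / sample[0, 1] > 0.12 else gpr_nb
print(model.predict(sample))
```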

Study on Anomaly Detection Method of Improper Foods using Import Food Big data (수입식품 빅데이터를 이용한 부적합식품 탐지 시스템에 관한 연구)

  • Cho, Sanggoo;Choi, Gyunghyun
    • The Journal of Bigdata
    • /
    • v.3 no.2
    • /
    • pp.19-33
    • /
    • 2018
  • Owing to the increase in FTAs, food trade, and the diverse preferences of consumers, food imports have increased at a tremendous rate every year. While inspection covers only about 20% of total food imports, the budget and manpower available for the government's import inspection control are reaching their limits. Sudden imported-food incidents can cause enormous social and economic losses, so a predictive system that forecasts the compliance of imported food and supports preemptive measures would greatly improve the efficiency and effectiveness of import safety control management. A huge amount of data has already been accumulated, and processed foods account for 75% of total food imports. Big data analysis and analytical techniques are used to extract meaningful information from such large amounts of data, yet few studies have analyzed imported food and its implications using the big data of food imports. In this context, this study applied a variety of classification algorithms from the field of machine learning and suggested a data preprocessing method based on the generation of new derived variables to improve the accuracy of the models. In addition, the present study compared the performance of the predictive classification algorithms with general base classifiers. Among the various base classifiers, the Gaussian Naïve Bayes prediction model showed the best performance in detecting and predicting the non-conformity of imported food. In the future, the anomaly detection model using Gaussian Naïve Bayes is expected to be applied in practice. The predictive model will reduce the burden of imported-food inspection and increase the detection rate of non-conforming food, greatly improving the efficiency of import food safety control and the speed of import customs clearance.
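
As a hedged illustration of the best-performing base classifier mentioned above, Gaussian Naïve Bayes from scikit-learn can be trained on declaration-level features to flag likely non-conforming imports; the feature names and data are illustrative, not the study's derived variables:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Illustrative features per import declaration: e.g. encoded product category, country risk
# score, shipment weight, past-violation count. Labels: 1 = non-conforming, 0 = conforming.
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 4))
y = (0.8 * X[:, 1] + 1.2 * X[:, 3] + rng.normal(0, 1, 2000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = GaussianNB().fit(X_tr, y_tr)          # fits one Gaussian per class and feature
print(classification_report(y_te, model.predict(X_te)))
```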