• Title/Summary/Keyword: Gaussian Learning


Performance Evaluation of Machine Learning Algorithms for Cloud Removal of Optical Imagery: A Case Study in Cropland (광학 영상의 구름 제거를 위한 기계학습 알고리즘의 예측 성능 평가: 농경지 사례 연구)

  • Soyeon Park;Geun-Ho Kwak;Ho-Yong Ahn;No-Wook Park
    • Korean Journal of Remote Sensing / v.39 no.5_1 / pp.507-519 / 2023
  • Multi-temporal optical images have been utilized for time-series monitoring of croplands. However, the presence of clouds imposes limitations on image availability, often requiring a cloud removal procedure. This study assesses the applicability of various machine learning algorithms for effective cloud removal in optical imagery. We conducted comparative experiments by focusing on two key variables that significantly influence the predictive performance of machine learning algorithms: (1) land-cover types of training data and (2) temporal variability of land-cover types. Three machine learning algorithms, including Gaussian process regression (GPR), support vector machine (SVM), and random forest (RF), were employed for the experiments using simulated cloudy images in paddy fields of Gunsan. GPR and SVM exhibited superior prediction accuracy when the training data had the same land-cover types as the cloud region, and GPR showed the best stability with respect to sampling fluctuations. In addition, RF was the least affected by the land-cover types and temporal variations of training data. These results indicate that GPR is recommended when the land-cover type and spectral characteristics of the training data are the same as those of the cloud region. On the other hand, RF should be applied when it is difficult to obtain training data with the same land-cover types as the cloud region. Therefore, the land-cover types in cloud areas should be taken into account for extracting informative training data along with selecting the optimal machine learning algorithm.
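The following Python sketch illustrates the kind of GPR-based cloud removal the abstract describes: reflectance in cloud-covered pixels of a target image is regressed from a temporally adjacent cloud-free reference image, training only on clear pixels. This is not the authors' code; the single-band input, scikit-learn kernel choice, and sampling scheme are illustrative assumptions.

```python
# Minimal sketch of GPR-based cloud removal (illustrative assumptions throughout).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fill_clouds_gpr(reference, target, cloud_mask, n_train=500, seed=0):
    """reference, target: (H, W) single-band images; cloud_mask: (H, W) bool."""
    rng = np.random.default_rng(seed)
    clear = ~cloud_mask
    # Sample training pixels from the clear area (ideally the same land-cover
    # type as the cloud region, as the study recommends for GPR).
    idx = rng.choice(np.flatnonzero(clear), size=min(n_train, int(clear.sum())), replace=False)
    X_train = reference.ravel()[idx].reshape(-1, 1)
    y_train = target.ravel()[idx]

    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(X_train, y_train)

    # Predict target reflectance for the cloud-covered pixels only.
    X_cloud = reference[cloud_mask].reshape(-1, 1)
    filled = target.copy()
    filled[cloud_mask] = gpr.predict(X_cloud)
    return filled
```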

Resume Classification System using Natural Language Processing & Machine Learning Techniques

  • Irfan Ali;Nimra;Ghulam Mujtaba;Zahid Hussain Khand;Zafar Ali;Sajid Khan
    • International Journal of Computer Science & Network Security / v.24 no.7 / pp.108-117 / 2024
  • The selection and recommendation of a suitable job applicant from a pool of thousands of applications is often a daunting job for an employer, and the process significantly increases the workload of the department concerned. A Resume Classification System using Natural Language Processing (NLP) and Machine Learning (ML) techniques could automate this tedious process and ease the employer's job. Moreover, automation can significantly expedite the applicants' selection process and make it more transparent, with minimal human involvement. Various Machine Learning approaches have been proposed to develop Resume Classification Systems; this study presents an automated NLP- and ML-based system that classifies resumes according to job categories with performance guarantees. The study employs various ML algorithms and NLP techniques to measure the accuracy of Resume Classification Systems and proposes a solution with better accuracy and reliability in different settings. To demonstrate the significance of NLP and ML techniques for processing and classifying resumes, the extracted features were tested on nine machine learning models: Support Vector Machine - SVM (Linear, SGD, SVC and NuSVC), Naïve Bayes (Bernoulli, Multinomial and Gaussian), K-Nearest Neighbor (KNN), and Logistic Regression (LR). The Term Frequency-Inverse Document Frequency (TF-IDF) feature representation scheme proved suitable for the resume classification task. The developed models were evaluated using F-ScoreM, RecallM, PrecisionM, and overall accuracy. The experimental results indicate that, using the one-vs-rest classification strategy for this multi-class resume classification task, the SVM class of machine learning algorithms performed best on the study dataset with over 96% overall accuracy. These promising results suggest that the NLP and ML techniques employed in this study could be used for the resume classification task.
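A minimal sketch of the TF-IDF plus one-vs-rest SVM setup the abstract reports as the best performer, using scikit-learn. The resume texts, category labels, and hyperparameters are placeholders, not the study's data or configuration.

```python
# Minimal TF-IDF + one-vs-rest linear SVM sketch for resume classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder resumes and job categories (illustrative only).
resumes = [
    "java developer spring hibernate rest apis",
    "python machine learning pandas scikit-learn",
    "registered nurse icu patient care",
    "staff nurse emergency department triage",
]
labels = ["software", "software", "healthcare", "healthcare"]

clf = make_pipeline(
    TfidfVectorizer(stop_words="english", ngram_range=(1, 2)),  # TF-IDF features
    OneVsRestClassifier(LinearSVC()),                           # one classifier per category
)
clf.fit(resumes, labels)
print(clf.predict(["c++ developer with rest api experience"]))
```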

Research on Classification of Sitting Posture with an IMU (하나의 IMU를 이용한 앉은 자세 분류 연구)

  • Kim, Yeon-Wook;Cho, Woo-Hyeong;Jeon, Yu-Yong;Lee, Sangmin
    • Journal of rehabilitation welfare engineering & assistive technology / v.11 no.3 / pp.261-270 / 2017
  • Bad sitting postures are known to cause a variety of diseases or physical deformations, yet it is not easy to maintain a correct sitting posture for long periods of time. Therefore, methods of distinguishing and inducing good sitting posture have been constantly proposed, including image processing, pressure sensors attached to the chair, and the IMU (Inertial Measurement Unit). The IMU-based method has the advantages of a simple hardware configuration and freedom from various measurement constraints. In this paper, we studied how to distinguish sitting postures with a small amount of data using just one IMU. A feature extraction method, PCA (Principal Component Analysis), was used to find the data whose contribution to classification is smallest, and machine learning algorithms were used to find the best sensor position and the best-performing model. Five machine learning models were used: SVM (Support Vector Machine), KNN (K-Nearest Neighbor), K-means (K-means Algorithm), GMM (Gaussian Mixture Model), and HMM (Hidden Markov Model). As a result, the back of the neck is the most suitable sensor position because its classification rate was the highest in every model. Using PCA, it was confirmed that the yaw data, one of the IMU outputs, contributes least to the classification rate, and there was no change in the classification rate after removing it. SVM and KNN are suitable for classification because their classification rates are higher than those of the other models.
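A minimal sketch of the reported pipeline, PCA on IMU features followed by SVM classification of postures, with cross-validated accuracy. The feature layout, random placeholder data, and number of posture classes are assumptions for illustration, not the paper's data.

```python
# Minimal PCA + SVM sketch for IMU-based sitting-posture classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Assumed feature vector per window: [roll, pitch, yaw, acc_x, acc_y, acc_z].
X = np.random.randn(200, 6)          # placeholder IMU feature windows
y = np.random.randint(0, 4, 200)     # placeholder posture labels (4 postures)

# Dropping the least-informative component mirrors the paper's finding that yaw
# can be removed without hurting the classification rate.
model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```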

Evaluation of a Thermal Conductivity Prediction Model for Compacted Clay Based on a Machine Learning Method (기계학습법을 통한 압축 벤토나이트의 열전도도 추정 모델 평가)

  • Yoon, Seok;Bang, Hyun-Tae;Kim, Geon-Young;Jeon, Haemin
    • KSCE Journal of Civil and Environmental Engineering Research / v.41 no.2 / pp.123-131 / 2021
  • The buffer is a key component of the engineered barrier system that safeguards the disposal of high-level radioactive waste. Buffers are located between disposal canisters and the host rock; they can restrain the release of radionuclides and protect canisters from the inflow of groundwater. Since considerable heat is released from a disposal canister to the surrounding buffer, the thermal conductivity of the buffer is a very important parameter for overall disposal safety. For this reason, much research has been conducted on thermal conductivity prediction models that consider various factors. In this study, the thermal conductivity of a buffer is estimated using the following machine learning methods: linear regression, decision tree, support vector machine (SVM), ensemble, Gaussian process regression (GPR), neural network, deep belief network, and genetic programming. In the results, machine learning methods such as the ensemble, genetic programming, the SVM with a cubic kernel, and GPR showed better performance than the regression model, with the XGBoost ensemble and Gaussian process regression models performing best.
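A minimal sketch comparing a boosted ensemble and GPR for predicting buffer thermal conductivity, echoing the two model families the abstract reports as best. The input features (dry density, water content, temperature), the synthetic target, and the use of scikit-learn's gradient boosting in place of XGBoost are assumptions for illustration.

```python
# Minimal ensemble vs. GPR comparison for thermal-conductivity regression.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((150, 3))  # placeholder [dry density, water content, temperature]
y = 0.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.05 * rng.normal(size=150)  # placeholder conductivity (W/mK)

for name, model in [
    ("ensemble (gradient boosting)", GradientBoostingRegressor()),
    ("GPR", GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)),
]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean R^2 = {r2:.3f}")
```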

Medical Image Denoising using Wavelet Transform-Based CNN Model

  • Seoyun Jang;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information / v.29 no.10 / pp.21-34 / 2024
  • In medical images such as MRI (Magnetic Resonance Imaging) and CT (Computed Tomography) images, noise removal has a significant impact on the performance of medical imaging systems. Recently, the introduction of deep learning into image processing has improved the performance of noise removal methods. However, there is a limit to removing noise while preserving details when working only in the image domain. In this paper, we propose a wavelet transform-based CNN (Convolutional Neural Network) model, namely the WT-DnCNN (Wavelet Transform-Denoising Convolutional Neural Network) model, to improve noise removal performance. This model first divides the noisy image into frequency bands using the wavelet transform, and then applies the existing DnCNN model to the corresponding frequency bands to remove the noise. To evaluate the performance of the proposed WT-DnCNN model, experiments were conducted on MRI and CT images corrupted by various noises, namely Gaussian noise, Poisson noise, and speckle noise. The results show that the WT-DnCNN model is superior to the traditional BM3D (Block-Matching and 3D Filtering) filter as well as the existing deep learning models, DnCNN and CDAE (Convolution Denoising AutoEncoder), in qualitative comparison; in quantitative comparison, the PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index Measure) values were 36~43 and 0.93~0.98 for MRI images and 38~43 and 0.95~0.98 for CT images, respectively. In the comparison of execution speed, the DnCNN model was much faster than the BM3D model, while the proposed model took longer than DnCNN due to the addition of the wavelet transform.
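A minimal sketch of the pipeline the abstract outlines: split the noisy image into wavelet subbands, pass each subband through a small DnCNN-style residual CNN, and reconstruct with the inverse DWT. This is an assumed reading of WT-DnCNN, not the authors' code; the network depth, wavelet choice, and per-subband model layout are illustrative.

```python
# Minimal wavelet + DnCNN-style denoising sketch (PyWavelets + PyTorch).
import numpy as np
import pywt
import torch
import torch.nn as nn

class SmallDnCNN(nn.Module):
    """Shallow DnCNN variant: learns the noise residual of one subband."""
    def __init__(self, depth=5, width=32):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)  # residual learning: subtract predicted noise

def denoise_wt_dncnn(noisy, models):
    """noisy: 2-D numpy array; models: dict of (trained) SmallDnCNN per subband."""
    LL, (LH, HL, HH) = pywt.dwt2(noisy, "haar")
    out = {}
    for name, band in zip(("LL", "LH", "HL", "HH"), (LL, LH, HL, HH)):
        t = torch.from_numpy(band).float()[None, None]   # (1, 1, H, W)
        with torch.no_grad():
            out[name] = models[name](t)[0, 0].numpy()
    return pywt.idwt2((out["LL"], (out["LH"], out["HL"], out["HH"])), "haar")

if __name__ == "__main__":
    # Untrained models, used here only to check that the pipeline runs end to end.
    models = {name: SmallDnCNN().eval() for name in ("LL", "LH", "HL", "HH")}
    noisy = np.random.rand(64, 64).astype(np.float32)
    print(denoise_wt_dncnn(noisy, models).shape)
```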

A Small-area Hardware Implementation of EGML-based Moving Object Detection Processor (EGML 기반 이동객체 검출 프로세서의 저면적 하드웨어 구현)

  • Sung, Mi-ji;Shin, Kyung-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.12 / pp.2213-2220 / 2017
  • This paper proposes an efficient hardware implementation of a moving object detection (MOD) processor using the effective Gaussian mixture learning (EGML)-based background subtraction method. Arithmetic units used in background generation were implemented using LUT-based approximation to reduce hardware complexity, and hardware resources were shared between background subtraction and Gaussian probability density calculation. The MOD processor was verified by FPGA-in-the-loop simulation using MATLAB/Simulink. The MOD performance was evaluated using six types of video defined in the IEEE CDW-2014 dataset, which resulted in average recall, precision, and F-measure values of 0.7700, 0.7170, and 0.7293, respectively. The MOD processor was implemented with 882 slices and 146×36 kbits of block RAM on a Virtex5 FPGA, a 60% hardware reduction compared to a conventional EGML-based design. It was estimated that the MOD processor could operate with a 75 MHz clock, enabling real-time processing of 800×600 video at a frame rate of 39 fps.
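A minimal NumPy sketch of Gaussian-based background learning and subtraction, the idea underlying the EGML method. The paper's hardware uses a Gaussian mixture with LUT-based approximations and a more elaborate learning schedule; the single-Gaussian model, learning rate, and threshold below are simplifications for illustration only.

```python
# Minimal per-pixel Gaussian background model for moving object detection.
import numpy as np

class GaussianBackground:
    def __init__(self, first_frame, alpha=0.01, k=2.5):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full_like(self.mean, 15.0 ** 2)
        self.alpha, self.k = alpha, k            # learning rate, threshold in sigmas

    def apply(self, frame):
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var  # pixel deviates too far from its Gaussian
        bg = ~foreground
        # Update mean/variance only for pixels classified as background.
        self.mean[bg] += self.alpha * diff[bg]
        self.var[bg] += self.alpha * (diff[bg] ** 2 - self.var[bg])
        return foreground.astype(np.uint8) * 255

# Usage: feed grayscale frames and collect binary foreground masks.
model = GaussianBackground(np.zeros((60, 80)))
mask = model.apply(np.random.randint(0, 255, (60, 80)))
```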

Power Trading System through the Prediction of Demand and Supply in Distributed Power System Based on Deep Reinforcement Learning (심층강화학습 기반 분산형 전력 시스템에서의 수요와 공급 예측을 통한 전력 거래시스템)

  • Lee, Seongwoo;Seon, Joonho;Kim, Soo-Hyun;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.6 / pp.163-171 / 2021
  • In this paper, the energy transaction system is optimized by applying a resource allocation algorithm and deep reinforcement learning in a distributed power system, and the power demand and supply environment is predicted by deep reinforcement learning. In the paradigm shift from conventional centralized to distributed power systems, we propose a trading system that pursues the common interest of participants and increases the efficiency of long-term power transactions. For a realistic energy simulation model and environment, we construct the energy market by learning weather and monthly patterns and adding Gaussian noise. The simulation results confirm that the proposed power trading systems cooperate with each other, pursue the common interest, and increase profits in prolonged energy transactions.
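A minimal sketch of the market construction step the abstract mentions: a demand profile built from seasonal and daily patterns plus Gaussian noise. The pattern shapes, amplitudes, and noise level are assumptions for illustration, not the paper's simulation parameters.

```python
# Minimal synthetic demand profile with monthly/daily patterns and Gaussian noise.
import numpy as np

def simulate_demand(hours=24 * 365, base=100.0, noise_std=5.0, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(hours)
    monthly = 20.0 * np.sin(2 * np.pi * t / (24 * 365) * 12)  # seasonal/monthly pattern
    daily = 10.0 * np.sin(2 * np.pi * t / 24)                 # intraday pattern
    return base + monthly + daily + rng.normal(0.0, noise_std, hours)

demand = simulate_demand()
print(demand[:5])
```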

Cancellation Scheme of Impulsive Noise Based on Deep Learning in Power Line Communication System (딥러닝 기반 전력선 통신 시스템의 임펄시브 잡음 제거 기법)

  • Seo, Sung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.4 / pp.29-33 / 2022
  • In this paper, we propose a deep learning based interference pre-cancellation scheme for power line communication (PLC) systems in the smart grid. The proposed scheme estimates the channel noise information by applying a deep learning model at the transmitter, and the estimated channel noise is then updated in a database. In the modulator, the channel noise that degrades power line communication performance is effectively removed through the interference cancellation technique. The Middleton Class A interference model was employed as the impulsive noise model, and performance is evaluated in terms of bit error rate (BER). The simulation results confirm that the proposed scheme achieves better BER performance than the theoretical model based on additive white Gaussian noise. As a result, the proposed interference cancellation with deep learning improves the signal quality of PLC systems by effectively removing the channel noise. The results of this paper can be applied to PLC for the smart grid and to general communication systems.
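A minimal sketch of generating Middleton Class A impulsive noise, the interference model the abstract uses for the PLC channel: the number of active impulses per sample is Poisson-distributed, and the sample is Gaussian with a variance that depends on that count. The parameter values are illustrative, not the paper's settings.

```python
# Minimal Middleton Class A noise generator.
import numpy as np

def middleton_class_a(n, A=0.1, gamma=0.1, sigma2=1.0, seed=0):
    """A: impulsive index; gamma: Gaussian-to-impulsive power ratio; sigma2: total noise power."""
    rng = np.random.default_rng(seed)
    m = rng.poisson(A, size=n)                            # active impulses per sample
    var_m = sigma2 * (m / A + gamma) / (1.0 + gamma)      # conditional variance given m
    return rng.normal(0.0, np.sqrt(var_m))

noise = middleton_class_a(10_000)
print("empirical power:", np.mean(noise ** 2))            # should be close to sigma2
```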

Object Detection Based on Hellinger Distance IoU and Objectron Application (Hellinger 거리 IoU와 Objectron 적용을 기반으로 하는 객체 감지)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.2 / pp.63-70 / 2022
  • Although 2D object detection has improved greatly in recent years with the advance of deep learning methods and the use of large labeled image datasets, 3D object detection from 2D imagery remains a challenging problem in applications such as robotics, due to the lack of data and the diversity of appearances and shapes of objects within a category. Google has recently announced the launch of Objectron, which has a novel data pipeline using mobile augmented reality session data; however, it also corresponds to a 2D-driven 3D object detection technique. This study explores a more mature 2D object detection method and applies its 2D projection to the Objectron 3D lifting system. Most object detection methods use bounding boxes to encode and represent object shape and location. In this work, we explore a stochastic representation of object regions using Gaussian distributions. We also present a similarity measure for the Gaussian distributions based on the Hellinger distance, which can be viewed as a stochastic Intersection-over-Union. Our experimental results show that the proposed Gaussian representations are closer to annotated segmentation masks in the available datasets. Thus, the accuracy problem, one of several limitations of Objectron, can be alleviated.
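A minimal sketch of the Hellinger distance between two 2-D Gaussians, the closed-form quantity behind the stochastic Intersection-over-Union described in the abstract. Representing each object region by a mean (center) and covariance (extent), and treating 1 - H as the similarity score, are assumptions for illustration rather than the paper's exact formulation.

```python
# Minimal Hellinger distance between two 2-D Gaussian object representations.
import numpy as np

def hellinger_gaussians(mu1, cov1, mu2, cov2):
    cov_bar = 0.5 * (np.asarray(cov1) + np.asarray(cov2))
    dmu = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    coeff = (np.linalg.det(cov1) ** 0.25 * np.linalg.det(cov2) ** 0.25
             / np.sqrt(np.linalg.det(cov_bar)))
    expo = np.exp(-0.125 * dmu @ np.linalg.solve(cov_bar, dmu))
    h2 = 1.0 - coeff * expo
    return np.sqrt(max(h2, 0.0))

# Two overlapping object regions encoded as Gaussians (mean = center, cov = extent).
h = hellinger_gaussians([10, 10], np.diag([4.0, 9.0]), [11, 10], np.diag([4.0, 9.0]))
print("Hellinger distance:", h, "| IoU-like similarity:", 1.0 - h)
```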

A Survey on Oil Spill and Weather Forecast Using Machine Learning Based on Neural Networks and Statistical Methods (신경망 및 통계 기법 기반의 기계학습을 이용한 유류유출 및 기상 예측 연구 동향)

  • Kim, Gyoung-Do;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society / v.8 no.10 / pp.1-8 / 2017
  • Accurate forecasting enables effective preparation for future phenomena. In particular, meteorological phenomena are closely related to human life, and forecasting weather and potential disasters can prevent damage to human life and property. To respond quickly and effectively to oil spill accidents, it is important to accurately predict the movement of oil spills and the weather in the surrounding waters. In this paper, we selected four representative machine learning techniques: support vector machine, Gaussian process, multilayer perceptron, and radial basis function network, which have shown good performance and predictability in previous studies related to oil spill detection and to meteorological prediction of quantities such as wind, rainfall, and ozone. We suggest the applicability of an oil spill prediction model based on machine learning.
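A minimal sketch comparing the four model families the survey highlights on a toy series. The data are synthetic, the hyperparameters are defaults, and an RBF-kernel ridge regressor stands in for a radial basis function network; all of these are illustrative assumptions rather than the surveyed studies' configurations.

```python
# Minimal comparison of SVM, Gaussian process, MLP, and an RBF-kernel regressor.
import numpy as np
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score

t = np.linspace(0, 10, 300).reshape(-1, 1)
y = np.sin(t).ravel() + 0.3 * np.random.default_rng(0).normal(size=300)  # toy wind-speed-like series

models = {
    "SVR": SVR(),
    "Gaussian process": GaussianProcessRegressor(kernel=RBF() + WhiteKernel()),
    "MLP": MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
    "RBF kernel ridge": KernelRidge(kernel="rbf"),
}
for name, model in models.items():
    print(name, cross_val_score(model, t, y, cv=5, scoring="r2").mean().round(3))
```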