• Title/Summary/Keyword: train model

Search Results: 1,719

Deep Learning for Remote Sensing Applications (원격탐사활용을 위한 딥러닝기술)

  • Lee, Moung-Jin;Lee, Won-Jin;Lee, Seung-Kuk;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1581-1587 / 2022
  • Recently, deep learning has become increasingly important in remote sensing data processing. Huge amounts of artificial intelligence (AI) training data have been designed and built to develop new remote sensing technologies, and AI models have been trained on these datasets. AI models have developed rapidly, and model accuracy has increased accordingly. However, model accuracy still varies depending on the person who trains the model, so experts who can train AI models well are increasingly in demand. Moreover, deep learning techniques make it possible to automate remote sensing applications: methods whose performance was below about 60% in the past now exceed 90% and are approaching 100%. This special issue introduces thirteen papers on how deep learning techniques are used for remote sensing applications.

A Proposal of Sensor-based Time Series Classification Model using Explainable Convolutional Neural Network

  • Jang, Youngjun;Kim, Jiho;Lee, Hongchul
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.55-67 / 2022
  • Sensor data can support fault diagnosis for equipment, but an analysis of the causes behind a diagnosed fault is rarely provided. In this study, we propose an explainable convolutional neural network framework for sensor-based time series classification. We used a sensor-based time series dataset acquired from vehicles equipped with sensors, the Wafer dataset acquired from a manufacturing process, and a Cycle Signal dataset acquired from real-world mechanical equipment; scaling and jittering were used as data augmentation methods to train our deep learning models. The proposed classification models are convolutional neural network based models, FCN, 1D-CNN, and ResNet, which we evaluate and compare. Our experimental results show that ResNet provides promising results for time series classification, with accuracy and F1 score reaching 95%, a 3% improvement over the previous study. Furthermore, we apply XAI methods, Class Activation Map and Layer Visualization, to interpret the experimental results. These methods can visualize the time series intervals that carry the most important factors for sensor data classification.
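
As a rough illustration of the scaling and jittering augmentations mentioned in the abstract above, a minimal NumPy sketch is given below; the noise and scaling parameters are assumptions for illustration, not values taken from the paper.

```python
# Hypothetical sketch of scaling and jittering for time series augmentation;
# sigma values are illustrative assumptions, not the paper's settings.
import numpy as np

def jitter(x: np.ndarray, sigma: float = 0.03) -> np.ndarray:
    """Add small Gaussian noise to every time step of each series."""
    return x + np.random.normal(loc=0.0, scale=sigma, size=x.shape)

def scale(x: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    """Multiply each series by a random factor drawn around 1.0."""
    factors = np.random.normal(loc=1.0, scale=sigma, size=(x.shape[0], 1))
    return x * factors

if __name__ == "__main__":
    # A toy batch of 4 univariate series of length 128.
    batch = np.sin(np.linspace(0, 10, 128))[None, :].repeat(4, axis=0)
    augmented = scale(jitter(batch))
    print(augmented.shape)  # (4, 128)
```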

A SE Approach for Real-Time NPP Response Prediction under CEA Withdrawal Accident Conditions

  • Felix Isuwa, Wapachi;Aya, Diab
    • Journal of the Korean Society of Systems Engineering / v.18 no.2 / pp.75-93 / 2022
  • A machine learning (ML) data-driven meta-model is proposed as a surrogate for the physics-based model, to reduce its excessive computational cost and facilitate real-time prediction of a nuclear power plant's transient response. To forecast the transient response, three ML meta-models based on recurrent neural networks (RNNs) are developed: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and a sequential combination of a Convolutional Neural Network (CNN) and LSTM. The chosen accident scenario is a control element assembly withdrawal at power concurrent with a Loss Of Offsite Power (LOOP). The transient response was obtained using the best-estimate thermal hydraulics code MARS-KS and cross-validated against the Design Control Document (DCD). DAKOTA software is loosely coupled with the MARS-KS code via a Python interface to perform the Best Estimate Plus Uncertainty (BEPU) analysis and generate a time series database of the system response to train, test, and validate the ML meta-models. Key uncertain parameters identified as required by the CASU methodology were propagated using non-parametric Monte-Carlo (MC) random propagation and Latin Hypercube Sampling until a statistically significant database (181 samples), as required by Wilks' fifth-order formula, was achieved at the 95% probability and 95% confidence level. The three RNN models were built and optimized with the help of the Talos tool and demonstrated excellent performance in forecasting the most probable NPP transient response. This research was guided by the Systems Engineering (SE) approach for systematic and efficient planning and execution.
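
For readers unfamiliar with RNN meta-models, the following is a minimal, hypothetical PyTorch sketch of an LSTM that maps a window of past plant parameters to a next-step response. The window length, feature count, and layer sizes are assumptions for illustration, not the authors' configuration.

```python
# Illustrative LSTM meta-model: a window of plant parameters in, a scalar response out.
import torch
import torch.nn as nn

class LSTMMetaModel(nn.Module):
    def __init__(self, n_features: int = 8, hidden: int = 64, n_targets: int = 1):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, x):                  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # regress from the last hidden state

model = LSTMMetaModel()
x = torch.rand(181, 30, 8)                 # toy stand-in for the 181-sample BEPU database
y_hat = model(x)                           # (181, 1) predicted transient response
```

A GRU or CNN+LSTM variant would swap nn.LSTM for nn.GRU or prepend 1D convolutions, respectively.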

Machine Learning-based Optimal VNF Deployment Prediction (기계학습 기반 VNF 최적 배치 예측 기술연구)

  • Park, Suhyun;Kim, Hee-Gon;Hong, Jibum;Yoo, Jae-Hyung;Hong, James Won-Ki
    • KNOM Review / v.23 no.1 / pp.34-42 / 2020
  • A Network Function Virtualization (NFV) environment can cope with dynamic changes in traffic through appropriate deployment and scaling of Virtualized Network Functions (VNFs). However, determining and applying the optimal VNF deployment is a complicated and difficult task. In particular, the situation at a future point must be predicted, because it takes time for a deployment decision to be applied to the actual NFV environment. In this paper, we randomly generate service requests in a Multi-access Edge Computing (MEC) topology and obtain training data for a machine learning model from an Integer Linear Programming (ILP) solution. We use these simulation data to train a machine learning model that predicts the optimal VNF deployment at a predefined future point. The prediction model achieves over 90% accuracy relative to the ILP solution at a 5-minute future time point.
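
The supervised setup described above can be sketched, under loose assumptions, as a classifier trained on traffic features with the ILP solver's placement decision as the label; the feature dimensions and label encoding below are purely illustrative.

```python
# Hedged sketch: learn to imitate ILP-optimal VNF placement from traffic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 12))             # e.g., per-node traffic statistics (assumed)
y = rng.integers(0, 4, size=1000)      # e.g., index of the node the ILP chose (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("agreement with ILP labels:", accuracy_score(y_te, clf.predict(X_te)))
```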

Application of the Rapid Prototyping Instructional Systems Design in Meridianology Laboratory (경혈학실습 체제적 교수설계를 위한 RPISD 모형 적용 연구)

  • Cho, Eunbyul;Kim, Jae-Hyo;Hong, Jiseong
    • Korean Journal of Acupuncture / v.39 no.3 / pp.71-83 / 2022
  • Objectives: Instructional design is the systematic approach to the Analysis, Design, Development, Implementation, and Evaluation of learning materials and activities. We aimed to apply the rapid prototyping instructional systems design (RPISD) model to the meridianology laboratory, a subject in which students practice acupuncture, in order to develop a lesson plan. Methods: The needs of the stakeholders, including the client, a subject matter expert, and students, were analyzed using the performance needs analysis model. Task analysis was carried out through observation and interviews. A first prototype was drafted and implemented once in the meridianology laboratory class. A second prototype was then derived from the first based on a usability evaluation by the stakeholders. Results: The client requested an electronically documented manual to improve the quality of acupuncture training. The learners requested an extension of practice time and detailed practice guidelines. The main problems in the students' performance were some violations of clean needle technique, a lack of direct communication between the operator and recipient, and a lack of confidence in their own performance. Stakeholders were generally satisfied with the proposed first prototype, and a second prototype of the lesson plan was produced by modifying some of its contents. Conclusions: A lesson plan was developed by applying the systematic RPISD model. The developed instructional design is expected to contribute to improving the quality of meridianology laboratory education.

Development of Machine Learning based Flood Depth and Location Prediction Model (머신러닝을 이용한 침수 깊이와 위치예측 모델 개발)

  • Ji-Wook Kang;Jong-Hyeok Park;Soo-Hee Han;Kyung-Jun Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.1 / pp.91-98 / 2023
  • With flood damage increasing due to frequent localized heavy rains, flood prediction research is being conducted to prevent flooding damage in advance. In this paper, we present a machine learning scheme for developing a flood depth and location prediction model using real-time rainfall data. The scheme includes a dataset configuration method that takes the rainfall data as input, robustly covers various rainfall distribution patterns, and allows the model to be trained with less memory. The data are organized into two types: valid total data and valid local data. The valid total data, which have a significant effect on flooding, predicted the flooding location well but tended to yield differing values for specific rainfall patterns. The valid local data, which represent areas where flooding is only partially affected, were learned well under the fixed-point method but did not indicate the flooding location accurately under the arbitrary-point method. Through this study, it is expected that much damage can be prevented by predicting the depth and location of flooding in real time.
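
As a loose illustration of the idea of mapping rainfall input to per-location flood depths, a multi-output regressor could be set up as below; the rainfall window, grid size, and regressor choice are assumptions, not the paper's model.

```python
# Illustrative multi-output regression: rainfall series in, per-cell flood depths out.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.random((500, 24))           # 24 hourly rainfall values per event (assumed)
Y = rng.random((500, 100))          # flood depth on a 10x10 grid per event (assumed)

model = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, Y)
depth_map = model.predict(X[:1]).reshape(10, 10)   # predicted depth map for one event
```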

Dog-Species Classification through CycleGAN and Standard Data Augmentation

  • Chan, Park;Nammee, Moon
    • Journal of Information Processing Systems / v.19 no.1 / pp.67-79 / 2023
  • In the image field, data augmentation refers to increasing the amount of data through an editing method such as rotating or cropping a photo. In this study, a generative adversarial network (GAN) image was created using CycleGAN, and various colors of dogs were reflected through data augmentation. In particular, dog data from the Stanford Dogs Dataset and Oxford-IIIT Pet Dataset were used, and 10 breeds of dog, corresponding to 300 images each, were selected. Subsequently, a GAN image was generated using CycleGAN, and four learning groups were established: 2,000 original photos (group I); 2,000 original photos + 1,000 GAN images (group II); 3,000 original photos (group III); and 3,000 original photos + 1,000 GAN images (group IV). The amount of data in each learning group was augmented using existing data augmentation methods such as rotating, cropping, erasing, and distorting. The augmented photo data were used to train the MobileNet_v3_Large, ResNet-152, InceptionResNet_v2, and NASNet_Large frameworks to evaluate the classification accuracy and loss. The top-3 accuracy for each deep neural network model was as follows: MobileNet_v3_Large of 86.4% (group I), 85.4% (group II), 90.4% (group III), and 89.2% (group IV); ResNet-152 of 82.4% (group I), 83.7% (group II), 84.7% (group III), and 84.9% (group IV); InceptionResNet_v2 of 90.7% (group I), 88.4% (group II), 93.3% (group III), and 93.1% (group IV); and NASNet_Large of 85% (group I), 88.1% (group II), 91.8% (group III), and 92% (group IV). The InceptionResNet_v2 model exhibited the highest image classification accuracy, and the NASNet_Large model exhibited the highest increase in the accuracy owing to data augmentation.
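
The standard augmentations listed above (rotating, cropping, erasing, distorting) could be expressed, for example, as a torchvision pipeline feeding a MobileNetV3-Large classifier; the parameter values below are illustrative, not those used in the study.

```python
# Illustrative augmentation pipeline and classifier head for 10 dog breeds.
import torch
from torchvision import models, transforms

train_tf = transforms.Compose([
    transforms.RandomRotation(15),                            # rotate
    transforms.RandomResizedCrop(224),                        # crop
    transforms.ColorJitter(brightness=0.2, contrast=0.2),     # mild distortion
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.25),                         # erase
])

model = models.mobilenet_v3_large(weights=None)               # pretrained weights optional
model.classifier[-1] = torch.nn.Linear(model.classifier[-1].in_features, 10)
```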

Machine-assisted Semi-Simulation Model (MSSM): Predicting Galactic Baryonic Properties from Their Dark Matter Using A Machine Trained on Hydrodynamic Simulations

  • Jo, Yongseok;Kim, Ji-hoon
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.55.3-55.3 / 2019
  • We present a pipeline to estimate baryonic properties of a galaxy inside a dark matter (DM) halo in DM-only simulations using a machine trained on high-resolution hydrodynamic simulations. As an example, we use the IllustrisTNG hydrodynamic simulation of a (75 h^-1 Mpc)^3 volume to train our machine to predict, e.g., stellar mass and star formation rate in a galaxy-sized halo based purely on its DM content. An extremely randomized tree (ERT) algorithm is used together with multiple novel improvements we introduce here, such as a refined error function in machine training and two-stage learning. Aided by these improvements, our model demonstrates a significantly increased accuracy in predicting baryonic properties compared to prior attempts; in other words, the machine better mimics IllustrisTNG's galaxy-halo correlation. By applying our machine to the MultiDark-Planck DM-only simulation of a large (1 h^-1 Gpc)^3 volume, we then validate the pipeline that rapidly generates a galaxy catalogue from a DM halo catalogue using the correlations the machine found in IllustrisTNG. We also compare our galaxy catalogue with the ones produced by popular semi-analytic models (SAMs). Our so-called machine-assisted semi-simulation model (MSSM) is shown to be largely compatible with SAMs, and may become a promising method to transplant the baryon physics of galaxy-scale hydrodynamic calculations onto a larger-volume DM-only run. We discuss the benefits that machine-based approaches like this entail, as well as suggestions to raise the scientific potential of such approaches.
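
A minimal sketch of the extremely randomized tree (ERT) regression step is given below, with assumed halo features and baryonic targets standing in for the IllustrisTNG training set; it omits the refined error function and two-stage learning described in the abstract.

```python
# Illustrative ERT regression: dark-matter halo properties in, baryonic properties out.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(42)
X = rng.random((5000, 6))    # e.g., halo mass, spin, concentration, ... (assumed features)
Y = rng.random((5000, 2))    # e.g., log stellar mass, log SFR (assumed targets)

ert = ExtraTreesRegressor(n_estimators=300, random_state=42).fit(X, Y)
predicted_baryons = ert.predict(X[:3])   # baryonic predictions for three halos
```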

The Prediction Ability of Genomic Selection in the Wheat Core Collection

  • Yuna Kang;Changsoo Kim
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.235-235 / 2022
  • Genomic selection is a promising tool for plant and animal breeding, which uses genome-wide molecular marker data to capture quantitative trait loci of large and small effect and to predict the genetic value of selection candidates. Genomic selection has previously been shown to give higher prediction accuracies than conventional marker-assisted selection (MAS) for quantitative traits. In this study, the prediction accuracy of 10 agricultural traits was compared in a wheat core collection of 567 accessions. We used a cross-validation approach to train and validate prediction accuracy and to evaluate the effects of training population size and training model. Regarding prediction accuracy by model, all of the six models used (GBLUP, LASSO, BayesA, RKHS, SVN, RF) except the SVN model achieved a prediction accuracy of 0.4 or more for most traits. For traits such as days to heading and days to maturity, the prediction accuracy was very high, over 0.8. Regarding the effect of the training population, the prediction accuracy increased for all traits as the size of the training population increased, and prediction accuracy also differed with the genetic composition of the training population regardless of its size. All training models were verified through 5-fold cross-validation. To verify the prediction ability of the wheat core collection as a training population, we compared actual phenotypes and genomic estimated breeding values in a breeding population of 35 individuals. Out of the 10 individuals with the earliest days to heading, 5 were selected through genomic selection, and 6 of the 10 individuals with the latest days to heading were likewise selected. Therefore, we confirmed that genomic selection makes it possible to select individuals for target traits from genotype alone, within a shorter period of time.
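
The 5-fold cross-validation scheme described above can be illustrated with a ridge regression on SNP markers (a rough stand-in for GBLUP, not the authors' implementation), scoring each fold by the correlation between observed and predicted phenotypes; the synthetic data below merely mimic a 567-accession setting.

```python
# Illustrative genomic prediction with 5-fold cross-validation on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(7)
markers = rng.integers(0, 3, size=(567, 2000)).astype(float)   # 567 accessions x 2000 SNPs (assumed)
phenotype = markers[:, :50].sum(axis=1) + rng.normal(0, 5, 567)

accuracies = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=7).split(markers):
    model = Ridge(alpha=100.0).fit(markers[train_idx], phenotype[train_idx])
    pred = model.predict(markers[test_idx])
    accuracies.append(np.corrcoef(pred, phenotype[test_idx])[0, 1])
print("mean prediction accuracy:", np.mean(accuracies))
```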

KOMPSAT Optical Image Registration via Deep-Learning Based OffsetNet Model (딥러닝 기반 OffsetNet 모델을 통한 KOMPSAT 광학 영상 정합)

  • Jin-Woo Yu;Che-Won Park;Hyung-Sup Jung
    • Korean Journal of Remote Sensing / v.39 no.6_3 / pp.1707-1720 / 2023
  • With the increase in satellite time series data, the utility of remote sensing data is growing. In time series analysis, the relative positional accuracy between images has a significant impact on the results, making image registration essential. In recent years, research on image registration has increasingly applied deep learning, which outperforms existing image registration algorithms. However, training deep learning-based registration models requires a large number of image pairs, and existing deep learning models are inefficient in that they compute a correlation map between the inputs and need additional processing to extract registration points. To overcome these drawbacks, this study developed a data augmentation technique for training image registration models and applied it to OffsetNet, a registration model that predicts the offset amount itself, to register KOMPSAT-2, -3, and -3A images. The results of model training showed that OffsetNet accurately predicted the offset for the test data, enabling effective registration of the master and slave images.
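
In the spirit of a model that predicts the offset amount itself, a hypothetical PyTorch sketch might stack a master/slave patch pair on the channel axis and regress the (dx, dy) shift directly; the architecture and patch size below are assumptions, not OffsetNet's actual design.

```python
# Hypothetical offset-regression network: patch pair in, (dx, dy) shift out.
import torch
import torch.nn as nn

class OffsetRegressor(nn.Module):
    def __init__(self, patch_size: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (patch_size // 4) ** 2, 128), nn.ReLU(),
            nn.Linear(128, 2),                    # predicted (dx, dy) in pixels
        )

    def forward(self, master, slave):
        x = torch.cat([master, slave], dim=1)     # (batch, 2, H, W)
        return self.head(self.features(x))

master, slave = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
offsets = OffsetRegressor()(master, slave)        # (4, 2)
```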