• Title/Summary/Keyword: Deep Learning

Deep Learning-based Approach for Classification of Tribological Time Series Data for Hand Creams (딥러닝을 이용한 핸드크림의 마찰 시계열 데이터 분류)

  • Kim, Ji Won; Lee, You Min; Han, Shawn; Kim, Kyeongtaek
    • Journal of Korean Society of Industrial and Systems Engineering, v.44 no.3, pp.98-105, 2021
  • The sensory stimulation of a cosmetic product was deemed an ancillary aspect until about a decade ago. That point of view has changed drastically, on many levels, in just a decade. Cosmetic formulators now must meet the needs of consumers who want sensory satisfaction, even though they do not have much time for new product development. The selection of new products from candidates largely depends on panels of human sensory experts. As product development cycle times decrease, formulators want systematic tools for filtering candidate products into a short list. Traditional statistical analysis of most physical property tests for such products, including tribology and rheology tests, provides no sound foundation for this filtering. In this paper, we propose a deep learning-based analysis method that identifies hand cream products from the raw electrical signals of a tribological sliding test. We compare the deep learning-based method, which takes raw data as input, with several machine learning-based methods that take manually extracted features as input. Among them, ResNet, a deep learning model, proved best at identifying the hand cream used in the test. To the best of our knowledge from the published literature, this is the first attempt to identify a cosmetic product from raw time-series friction data alone, without any manual feature extraction. Automatic product identification without manually extracted features can be used to narrow down the list of newly developed candidate products.
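As a rough illustration of classifying raw friction signals without manual feature extraction, here is a minimal sketch of a 1D residual network in Keras; the sequence length, filter counts, and number of cream classes are illustrative assumptions, not the authors' configuration.

```python
# Minimal 1D ResNet sketch for classifying raw tribological time series.
# Sequence length, filter counts, and class count are assumed values.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters, kernel_size=7):
    # Two 1D convolutions with a projection shortcut when channels differ.
    shortcut = x
    y = layers.Conv1D(filters, kernel_size, padding="same", activation="relu")(x)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv1D(filters, 1, padding="same")(shortcut)
    return layers.Activation("relu")(layers.Add()([shortcut, y]))

def build_resnet(seq_len=4096, n_classes=10):
    inp = layers.Input(shape=(seq_len, 1))   # raw friction signal, one channel
    x = residual_block(inp, 64)
    x = residual_block(x, 128)
    x = layers.GlobalAveragePooling1D()(x)   # no manually extracted features
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)

model = build_resnet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```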

Estimation of GNSS Zenith Tropospheric Wet Delay Using Deep Learning (딥러닝 기반 GNSS 천정방향 대류권 습윤지연 추정 연구)

  • Lim, Soo-Hyeon; Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.1, pp.23-28, 2021
  • Data analysis research using deep learning has recently been conducted in various fields. In this paper, we conduct a GNSS (Global Navigation Satellite System)-based meteorological study that applies deep learning to estimate the ZWD (Zenith tropospheric Wet Delay) with MLP (Multi-Layer Perceptron) and LSTM (Long Short-Term Memory) models. The deep learning models were trained with meteorological data and ZWD values derived from the zenith tropospheric total delay and dry delay. When meteorological data withheld from training are fed to the trained models, both estimate ZWD with centimeter-level RMSE (Root Mean Square Error). To estimate ZWD in a wider range of situations, GNSS data from coastal areas should be analyzed as well and the temporal resolution increased.
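For readers unfamiliar with the two model families compared here, the sketch below sets up both regressors in Keras with RMSE as the tracked metric; the feature set, window length, and layer widths are assumptions for illustration.

```python
# Sketch of MLP and LSTM regressors for ZWD estimation (assumed shapes).
import tensorflow as tf
from tensorflow.keras import layers

n_features = 3   # e.g., pressure, temperature, humidity (assumed feature set)
window = 24      # past time steps fed to the LSTM (assumed)

mlp = tf.keras.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                        # ZWD estimate
])

lstm = tf.keras.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.LSTM(64),
    layers.Dense(1),
])

for model in (mlp, lstm):
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])
```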

Predicting flux of forward osmosis membrane module using deep learning (딥러닝을 이용한 정삼투 막모듈의 플럭스 예측)

  • Kim, Jaeyoon; Jeon, Jongmin; Kim, Noori; Kim, Suhan
    • Journal of Korean Society of Water and Wastewater, v.35 no.1, pp.93-100, 2021
  • The forward osmosis (FO) process is a chemical-potential-driven process in which a highly concentrated draw solution (DS) draws water through a semi-permeable membrane from a feed solution (FS) of lower concentration. Recently, commercial FO membrane modules have been developed, so full-scale FO processes can be applied to seawater desalination or water reuse. To design a real-scale FO plant, performance prediction of the FO membrane modules installed in the plant is essential. Flux prediction is the most important task, because the amounts of diluted draw solution and concentrate flowing out of the FO modules follow from the flux. A previous study developed a theory-based FO module model to predict flux, but it requires intensive numerical calculation and a fitting process to reflect the complex module geometry. The idea of this work is to use deep learning to predict the flux of FO membrane modules from a set of 116 experimental data points, each with six input variables (flow rate, pressure, and ion concentration of the DS and FS) and one output variable (flux). A procedure for optimizing a deep learning model to minimize prediction error and overfitting was developed and tested. On data not used for training, the optimized deep learning model (error of 3.87 %) predicted flux better than the theory-based FO module model (error of 10.13 %).
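With only 116 samples, guarding against overfitting dominates the modeling procedure; the sketch below shows one plausible setup with the paper's six inputs and one output, using early stopping on a validation split (the layer sizes, split, and placeholder data are assumptions).

```python
# Sketch of a small fully-connected flux predictor for an FO module:
# six inputs (DS/FS flow rate, pressure, ion concentration), one output (flux).
# Layer sizes and validation split are assumed; the data below is placeholder.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(6,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                       # predicted flux
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(116, 6).astype("float32")   # placeholder for measurements
y = np.random.rand(116, 1).astype("float32")   # placeholder for measured flux
model.fit(X, y, validation_split=0.2, epochs=500, verbose=0,
          callbacks=[tf.keras.callbacks.EarlyStopping(
              patience=20, restore_best_weights=True)])
```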

Extracting Neural Networks via Meltdown (멜트다운 취약점을 이용한 인공신경망 추출공격)

  • Jeong, Hoyong; Ryu, Dohyun; Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.6, pp.1031-1041, 2020
  • Cloud computing plays an important role in the deep learning industry, as deep learning services are frequently deployed on top of cloud infrastructure. In such cloud environments, virtualization provides a logically independent and isolated computing space for each tenant. However, recent studies demonstrate that various side channels can be established between cloud tenants by exploiting vulnerabilities in virtualization techniques and shared processor architectures. In this paper, we propose a novel attack scenario that steals internal information of deep learning models by exploiting the Meltdown vulnerability in a multi-tenant system environment. In our experiments, the proposed attack extracted internal information of a TensorFlow deep learning service with 92.875 % accuracy at an extraction speed of 1.325 kB/s.
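The reported figures can be read as byte-level recovery statistics; the short sketch below shows one plausible way such accuracy and throughput numbers are computed from recovered versus ground-truth bytes (illustrative only, not the authors' attack code).

```python
# Illustrative computation of extraction accuracy and speed from recovered
# vs. ground-truth bytes; this is a metric sketch, not the attack itself.
def extraction_stats(truth: bytes, recovered: bytes, seconds: float):
    matched = sum(t == r for t, r in zip(truth, recovered))
    accuracy = 100.0 * matched / len(truth)
    kb_per_s = len(recovered) / 1024 / seconds
    return accuracy, kb_per_s

acc, speed = extraction_stats(b"\x2a" * 2048, b"\x2a" * 2048, 1.5)
print(f"{acc:.3f}% at {speed:.3f} kB/s")
```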

Development of Deep Learning-Based Damage Detection Prototype for Concrete Bridge Condition Evaluation (콘크리트 교량 상태평가를 위한 딥러닝 기반 손상 탐지 프로토타입 개발)

  • Nam, Woo-Suk; Jung, Hyunjun; Park, Kyung-Han; Kim, Cheol-Min; Kim, Gyu-Seon
    • KSCE Journal of Civil and Environmental Engineering Research, v.42 no.1, pp.107-116, 2022
  • Recently, research has been actively conducted on inspecting facilities that are inaccessible to humans through image-based analysis. This research studied deep learning on bridge image data and developed a prototype condition evaluation program for bridges. To develop the deep learning-based bridge damage detection prototype, Mask R-CNN was applied as a segmentation model that enables both damage detection and quantification, and 5,140 training images (including open data) were constructed and labeled according to damage type. In the first performance verification of the model, precision and recall analysis for concrete cracks, stripping/spalling, rebar exposure, and paint stripping showed a precision of 95.2 % and a recall of 93.8 %. A second performance verification was performed on on-site images of cracked concrete using the damage rates of bridge members.
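A common way to adapt an off-the-shelf Mask R-CNN to a small set of damage classes is to swap its box and mask heads; the torchvision-based sketch below uses the paper's four damage types plus background, with otherwise standard defaults that are assumptions rather than the authors' setup.

```python
# Sketch: adapt torchvision's Mask R-CNN to 4 damage classes + background
# (crack, stripping/spalling, rebar exposure, paint stripping).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 5  # 4 damage types + background

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box classification head for the new class count.
in_feats = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)

# Replace the mask head so predicted masks match the new classes.
mask_feats = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(mask_feats, 256, num_classes)
```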

CALS: Channel State Information Auto-Labeling System for Large-scale Deep Learning-based Wi-Fi Sensing (딥러닝 기반 Wi-Fi 센싱 시스템의 효율적인 구축을 위한 지능형 데이터 수집 기법)

  • Jang, Jung-Ik; Choi, Jaehyuk
    • Journal of IKEEE, v.26 no.3, pp.341-348, 2022
  • Wi-Fi sensing, which uses Wi-Fi technology to sense the surrounding environment, has strong potential for a variety of sensing applications. Recently, several advanced deep learning-based solutions using CSI (Channel State Information) data have achieved high performance, but they remain difficult to use in practice without explicit data collection, which requires expensive adaptation effort for model retraining. In this study, we propose CALS, a Channel State Information Auto-Labeling System that automatically collects and labels training CSI data for deep learning-based Wi-Fi sensing systems. The proposed system efficiently collects CSI already labeled for supervised learning by exploiting computer vision technologies, such as object detection algorithms, during the collection process. We built a prototype of CALS to demonstrate its efficiency and collected data to train deep learning models that detect the presence of a person in an indoor environment, achieving an accuracy of over 90 % with the auto-labeled data sets generated by CALS.
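The core auto-labeling idea, that a camera-side detector supplies labels for time-aligned CSI windows, can be sketched in a few lines; the data layout and detection intervals below are illustrative assumptions, not the CALS implementation.

```python
# Sketch of vision-driven CSI labeling: CSI samples inherit the label of
# time intervals where a camera-side person detector fired. Illustrative only.
from dataclasses import dataclass

@dataclass
class CsiSample:
    timestamp: float   # capture time in seconds
    csi: list          # channel estimates per subcarrier

def label_csi(csi_stream, detection_intervals):
    """detection_intervals: (t_start, t_end) spans where a person was detected."""
    labeled = []
    for s in csi_stream:
        present = any(t0 <= s.timestamp <= t1 for t0, t1 in detection_intervals)
        labeled.append((s.csi, 1 if present else 0))   # 1 = person present
    return labeled

samples = [CsiSample(0.1, [0.3, 0.5]), CsiSample(2.4, [0.2, 0.4])]
print(label_csi(samples, [(2.0, 3.0)]))   # -> second sample labeled 1
```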

Deep learning-based monitoring for conservation and management of coastal dune vegetation (해안사구 식생의 보전 및 관리를 위한 딥러닝 기반 모니터링)

  • Kim, Dong-woo; Gu, Ja-woon; Hong, Ye-ji; Kim, Se-Min; Son, Seung-Woo
    • Journal of the Korean Society of Environmental Restoration Technology, v.25 no.6, pp.25-33, 2022
  • In this study, a monitoring method using high-resolution images acquired by unmanned aerial vehicles (UAVs) and deep learning algorithms is proposed for the management of the Sinduri coastal sand dunes. Classification was performed with U-Net, a semantic segmentation method. Three types of sand dune vegetation were classified into four classes, and the model was trained and tested with 320 training images and 48 test images. An ignore label was applied to improve the performance of the model, which was then evaluated with two loss functions, CE loss and BCE loss. In the evaluation, CE loss gave the highest per-class mIoU values, but BCE loss can be judged the better choice when training time is taken into account. This work is meaningful as a pilot application of UAVs and deep learning to monitoring and managing sand dune vegetation. The feasibility of deep learning image analysis for monitoring sand dune vegetation has been confirmed, and the proposed method is expected to be applicable not only to sand dune vegetation but also to various fields such as forests and grasslands.
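The ignore-label trick and the CE/BCE comparison can be made concrete in the loss setup; below is a minimal PyTorch sketch in which the ignore index value, class count, and tensor shapes are assumptions.

```python
# Sketch of CE and BCE segmentation losses with an ignore label (PyTorch).
# Ignore index (255), class count (4), and shapes are assumed for illustration.
import torch
import torch.nn as nn

n_classes, ignore_index = 4, 255
logits = torch.randn(2, n_classes, 64, 64)           # U-Net output (B, C, H, W)
target = torch.randint(0, n_classes, (2, 64, 64))    # per-pixel class labels
target[:, :8, :] = ignore_index                      # e.g., unlabeled pixels

# CE loss: ignored pixels are excluded from the loss directly.
ce = nn.CrossEntropyLoss(ignore_index=ignore_index)(logits, target)

# BCE loss on one-vs-rest maps: mask out ignored pixels by hand.
valid = target != ignore_index
onehot = nn.functional.one_hot(target.clamp(max=n_classes - 1),
                               n_classes).permute(0, 3, 1, 2).float()
bce_map = nn.functional.binary_cross_entropy_with_logits(
    logits, onehot, reduction="none")
bce = bce_map[valid.unsqueeze(1).expand_as(bce_map)].mean()
```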

Development of Security Anomaly Detection Algorithms using Machine Learning (기계 학습을 활용한 보안 이상징후 식별 알고리즘 개발)

  • Hwangbo, Hyunwoo; Kim, Jae Kyung
    • The Journal of Society for e-Business Studies, v.27 no.1, pp.1-13, 2022
  • With the development of network technologies, security that protects organizational resources from internal and external intrusions and threats becomes ever more important. In recent years, therefore, anomaly detection algorithms that detect and prevent security threats from various security log events have been actively studied. Security anomaly detection algorithms, which in the past were developed on rule-based or statistical learning foundations, are gradually evolving toward machine learning and deep learning models. In this study, drawing on various machine learning analysis methodologies, we propose a deep autoencoder model that adapts the LSTM-autoencoder as an optimal algorithm for detecting insider threats in advance. This study has academic significance in that it improves the feasibility of adaptive security through an anomaly detection algorithm based on unsupervised learning, and reduces the false positive rate relative to existing algorithms through supervised true-positive labeling.
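An LSTM-autoencoder of the kind described learns to reconstruct normal activity windows and flags windows with high reconstruction error; the sketch below shows this pattern in Keras, with the window size, feature count, and threshold rule as assumptions.

```python
# Sketch of an LSTM-autoencoder for log-sequence anomaly detection.
# Window size, feature count, threshold rule, and data are assumed/placeholder.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

timesteps, n_features = 50, 16    # events per window, features per event

model = tf.keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    layers.LSTM(64),                               # encoder -> latent vector
    layers.RepeatVector(timesteps),                # unroll latent for decoding
    layers.LSTM(64, return_sequences=True),        # decoder
    layers.TimeDistributed(layers.Dense(n_features)),
])
model.compile(optimizer="adam", loss="mse")

X = np.random.rand(100, timesteps, n_features).astype("float32")  # placeholder
model.fit(X, X, epochs=5, verbose=0)               # train on normal windows only

errors = np.mean((model.predict(X, verbose=0) - X) ** 2, axis=(1, 2))
threshold = errors.mean() + 3 * errors.std()       # assumed thresholding rule
anomalies = errors > threshold                     # flagged windows
```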

Noise Canceler Based on Deep Learning Using Discrete Wavelet Transform (이산 Wavelet 변환을 이용한 딥러닝 기반 잡음제거기)

  • Haeng-Woo Lee
    • The Journal of the Korea Institute of Electronic Communication Sciences, v.18 no.6, pp.1103-1108, 2023
  • In this paper, we propose a new algorithm for attenuating background noise in acoustic signals. The algorithm improves noise attenuation performance by applying an FNN (Fully-connected Neural Network) deep learning model, instead of the conventional adaptive filter, after a wavelet transform. After wavelet-transforming the input signal over each short time frame, noise is removed from a single noisy audio input by a 1024-1024-512-neuron FNN. The wavelet transform maps the time-domain voice signal into the time-frequency domain, where the noise characteristics are well expressed, and the network effectively predicts the voice in a noisy environment through supervised learning, using the transform coefficients of the clean voice signal as targets for the noisy transform coefficients. To verify the performance of the proposed noise reduction system, a simulation program was written with the TensorFlow and Keras libraries and simulations were performed. In the experiments, the proposed deep learning algorithm improved the mean square error (MSE) by 30 % compared with the conventional adaptive filter and by 20 % compared with an STFT (Short-Time Fourier Transform)-based approach.
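The pipeline can be sketched as: frame the signal, take a discrete wavelet transform, and train an FNN to map noisy coefficients to clean ones; the wavelet family, decomposition level, and frame length below are assumptions, while the 1024-1024-512 layer sizes follow the abstract.

```python
# Sketch of the DWT + FNN denoising pipeline. Wavelet family, level, and
# frame length are assumed; layer sizes (1024-1024-512) follow the abstract.
import numpy as np
import pywt
import tensorflow as tf
from tensorflow.keras import layers

def frame_to_coeffs(frame, wavelet="db4", level=3):
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    lengths = [len(c) for c in coeffs]
    return np.concatenate(coeffs), lengths          # flat vector for the FNN

def coeffs_to_frame(flat, lengths, wavelet="db4"):
    parts = np.split(flat, np.cumsum(lengths)[:-1])
    return pywt.waverec(list(parts), wavelet)       # back to the time domain

frame = np.random.randn(1024)                       # placeholder noisy frame
flat, lengths = frame_to_coeffs(frame)

model = tf.keras.Sequential([
    layers.Input(shape=(flat.size,)),
    layers.Dense(1024, activation="relu"),
    layers.Dense(1024, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(flat.size),            # predicted clean wavelet coefficients
])
model.compile(optimizer="adam", loss="mse")  # targets: clean-signal coefficients
```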

Study on the Improvement of Lung CT Image Quality using 2D Deep Learning Network according to Various Noise Types (폐 CT 영상에서 다양한 노이즈 타입에 따른 딥러닝 네트워크를 이용한 영상의 질 향상에 관한 연구)

  • Min-Gwan Lee; Chanrok Park
    • Journal of the Korean Society of Radiology, v.18 no.2, pp.93-99, 2024
  • In digital medical imaging, especially computed tomography (CT), the noise distribution introduced when X-ray photons are converted into digital imaging signals must be considered. Recently, denoising techniques based on deep learning architectures have been increasingly used in the medical imaging field. Here, we evaluated the noise reduction effect of a U-Net deep learning model on lung CT images for various noise types. The input data for deep learning were generated by applying Gaussian noise, Poisson noise, salt-and-pepper noise, and speckle noise to the ground truth (GT) images; in particular, Gaussian noise was applied at two levels, with standard deviations of 30 and 50. The hyperparameters were the Adam optimizer, 100 epochs, and a learning rate of 0.0001. For quantitative analysis, the mean square error (MSE), peak signal-to-noise ratio (PSNR), and coefficient of variation (COV) were calculated. The results confirmed that the U-Net model was effective for noise reduction under all conditions set in this study, performing best on Gaussian noise.
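The four noise types named above are all available in scikit-image, which makes the pair-generation and metric computation easy to sketch; the image source, noise strengths, and data range here are placeholders.

```python
# Sketch of building noisy inputs from a GT slice and computing the metrics.
# The GT image, noise strengths, and data range are placeholder assumptions.
import numpy as np
from skimage.util import random_noise
from skimage.metrics import peak_signal_noise_ratio, mean_squared_error

gt = np.random.rand(512, 512)   # placeholder for a ground-truth lung CT slice

noisy = {
    "gaussian_30": random_noise(gt, mode="gaussian", var=(30 / 255) ** 2),
    "gaussian_50": random_noise(gt, mode="gaussian", var=(50 / 255) ** 2),
    "poisson":     random_noise(gt, mode="poisson"),
    "salt_pepper": random_noise(gt, mode="s&p"),
    "speckle":     random_noise(gt, mode="speckle"),
}

for name, img in noisy.items():
    mse = mean_squared_error(gt, img)
    psnr = peak_signal_noise_ratio(gt, img, data_range=1.0)
    cov = img.std() / img.mean()             # coefficient of variation
    print(f"{name}: MSE={mse:.5f}, PSNR={psnr:.2f} dB, COV={cov:.3f}")
```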