• Title/Summary/Keyword: Deep Learning Models


A Study on Realtime Drone Object Detection Using On-board Deep Learning (온-보드에서의 딥러닝을 활용한 드론의 실시간 객체 인식 연구)

  • Lee, Jang-Woo;Kim, Joo-Young;Kim, Jae-Kyung;Kwon, Cheol-Hee
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.49 no.10
    • /
    • pp.883-892
    • /
    • 2021
  • This paper presents a process for developing deep learning-based aerial object detection models that can run in real time on an on-board device. To improve object detection performance, we pre-process and augment the training data in the training stage. In addition, we perform transfer learning and apply a weighted cross-entropy method to reduce the variation in detection performance across classes. To improve inference speed, we generate inference acceleration engines with quantization. Finally, we analyze the real-time performance and detection performance on a custom aerial image dataset to verify generalization.
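
The paper itself does not include code; the following PyTorch-style sketch only illustrates the class-weighting idea behind a weighted cross-entropy loss, with made-up class counts.

```python
# Illustrative class-weighted cross-entropy (assumed setup, not the authors' code).
import torch
import torch.nn as nn

# Hypothetical class frequencies counted from the training set.
class_counts = torch.tensor([5000.0, 1200.0, 300.0])                # e.g. car, person, bicycle
weights = class_counts.sum() / (len(class_counts) * class_counts)   # inverse-frequency weights

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)            # batch of 8 predictions over 3 classes
targets = torch.randint(0, 3, (8,))   # ground-truth class indices
loss = criterion(logits, targets)     # rare classes now contribute more to the loss
```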

Cascaded-Hop For DeepFake Videos Detection

  • Zhang, Dengyong;Wu, Pengjie;Li, Feng;Zhu, Wenjie;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.5
    • /
    • pp.1671-1686
    • /
    • 2022
  • Face manipulation tools represented by Deepfake have threatened the security of people's biometric identity information. In particular, manipulation tools built on deep learning technology have brought great challenges to Deepfake detection. Many detection solutions based on traditional machine learning and advanced deep learning exist, but most of them perform poorly when evaluated on datasets of different quality. In this paper, to build high-quality Deepfake datasets, we provide a preprocessing method based on image pixel-matrix features to eliminate similar images and a residual channel attention network (RCAN) to rescale the images. We also describe a Deepfake detector named Cascaded-Hop, which is based on the PixelHop++ system and the successive subspace learning (SSL) model. Fed with the preprocessed datasets, Cascaded-Hop achieves good classification results across different manipulation types and multiple quality datasets. In experiments on FaceForensics++ and Celeb-DF, the AUC (area under curve) results of our proposed methods are comparable to state-of-the-art models.
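
As a rough illustration of the near-duplicate elimination step described above, the sketch below drops frames whose raw pixel matrices barely differ from the last kept frame; the threshold and data shapes are assumptions, not the paper's settings.

```python
# Hypothetical frame deduplication based on raw pixel differences (not the paper's code).
import numpy as np

def deduplicate_frames(frames, threshold=2.0):
    """Keep a frame only if its mean absolute pixel difference from the
    last kept frame exceeds `threshold` (grayscale values in 0-255)."""
    kept = []
    for frame in frames:
        if not kept or np.abs(frame.astype(np.float32) - kept[-1].astype(np.float32)).mean() > threshold:
            kept.append(frame)
    return kept

# Example: 100 random 64x64 "frames"; near-duplicates would be filtered out.
frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(100)]
unique_frames = deduplicate_frames(frames)
```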

Semantic Segmentation of Drone Imagery Using Deep Learning for Seagrass Habitat Monitoring (잘피 서식지 모니터링을 위한 딥러닝 기반의 드론 영상 의미론적 분할)

  • Jeon, Eui-Ik;Kim, Seong-Hak;Kim, Byoung-Sub;Park, Kyung-Hyun;Choi, Ock-In
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_1
    • /
    • pp.199-215
    • /
    • 2020
  • Seagrasses are marine vascular plants that play an important role in the marine ecosystem, so seagrass habitats are monitored periodically. Recently, drones, which can easily acquire very high-resolution imagery, have increasingly been used to monitor seagrass habitats efficiently, and deep learning based on convolutional neural networks has shown excellent performance in semantic segmentation, so studies applying deep learning models have been actively conducted in remote sensing. However, segmentation accuracy varies with the hyperparameters, the deep learning model, and the imagery, and the normalization of the images and the tile and batch sizes are not standardized. In this study, seagrass habitats were therefore segmented from drone-borne imagery using a deep learning model with excellent performance, and the results were compared and analyzed with a focus on normalization and tile size. For the comparison according to normalization, tile size, and batch size, a raw grayscale image and grayscale imagery converted with the Z-score and Min-Max normalization methods were used; the tile size was increased at specific intervals while the batch size was set as large as the available memory allowed. As a result, the IoU of the Z-score-normalized imagery was 0.26 to 0.4 higher than that of the other imagery, and differences of up to 0.09 were found depending on the tile and batch size. Because the results differed according to normalization, tile size, and batch size, this experiment shows that these factors require a suitable decision process.
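
A minimal sketch of the two normalization schemes compared above, assuming grayscale tiles; the tile size and the small epsilon are illustrative choices, not the study's settings.

```python
# Z-score vs. Min-Max normalization of a single image tile (illustrative only).
import numpy as np

def z_score(tile):
    """Standardize a tile to zero mean and unit variance."""
    return (tile - tile.mean()) / (tile.std() + 1e-8)

def min_max(tile):
    """Rescale a tile linearly into the [0, 1] range."""
    return (tile - tile.min()) / (tile.max() - tile.min() + 1e-8)

tile = np.random.randint(0, 256, (512, 512)).astype(np.float32)  # e.g. one 512x512 grayscale tile
z_tile, mm_tile = z_score(tile), min_max(tile)
```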

Extracting Neural Networks via Meltdown (멜트다운 취약점을 이용한 인공신경망 추출공격)

  • Jeong, Hoyong;Ryu, Dohyun;Hur, Junbeom
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.30 no.6
    • /
    • pp.1031-1041
    • /
    • 2020
  • Cloud computing technology plays an important role in the deep learning industry, as deep learning services are frequently deployed on top of cloud infrastructures. In such cloud environments, virtualization technology provides a logically independent and isolated computing space for each tenant. However, recent studies demonstrate that, by leveraging vulnerabilities of virtualization techniques and shared processor architectures in the cloud system, various side-channels can be established between cloud tenants. In this paper, we propose a novel attack scenario that can steal internal information of deep learning models by exploiting the Meltdown vulnerability in a multi-tenant system environment. In our experiments, the proposed attack method extracted internal information of a TensorFlow deep learning service with 92.875% accuracy and an extraction speed of 1.325 kB/s.

Development of Deep Learning-Based Damage Detection Prototype for Concrete Bridge Condition Evaluation (콘크리트 교량 상태평가를 위한 딥러닝 기반 손상 탐지 프로토타입 개발)

  • Nam, Woo-Suk;Jung, Hyunjun;Park, Kyung-Han;Kim, Cheol-Min;Kim, Gyu-Seon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.42 no.1
    • /
    • pp.107-116
    • /
    • 2022
  • Recently, research has been actively conducted on technologies for inspecting facilities that are inaccessible to humans through image-based analysis and assessment. This research studied the conditions for deep learning-based image data on bridges and developed a prototype condition-evaluation program for bridges. To develop the deep learning-based bridge damage detection prototype, Mask R-CNN was applied as the semantic segmentation model, which enables damage detection and quantification among deep learning models, and 5,140 training images (including open data) were constructed and labeled according to damage type. In the performance verification of the model, precision and recall analysis for concrete cracks, stripping/spalling, rebar exposure, and paint peeling showed a precision of 95.2 % and a recall of 93.8 %. A second performance verification was performed on on-site data of cracked concrete using the damage rate of bridge members.
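
The authors' implementation is not shown here; the sketch below only illustrates how a torchvision Mask R-CNN could be re-headed for a hypothetical set of bridge-damage classes matching the four damage types mentioned above.

```python
# Illustrative fine-tuning setup for Mask R-CNN on an assumed 4-class damage dataset
# (crack, spalling, rebar exposure, paint peeling); not the authors' code.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 5  # 4 damage types + background

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box head so it predicts our damage classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask head accordingly.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
```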

CALS: Channel State Information Auto-Labeling System for Large-scale Deep Learning-based Wi-Fi Sensing (딥러닝 기반 Wi-Fi 센싱 시스템의 효율적인 구축을 위한 지능형 데이터 수집 기법)

  • Jang, Jung-Ik;Choi, Jaehyuk
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.341-348
    • /
    • 2022
  • Wi-Fi sensing, which uses Wi-Fi technology to sense the surrounding environment, has strong potential in a variety of sensing applications. Recently, several advanced deep learning-based solutions using CSI (Channel State Information) data have achieved high performance, but they are still difficult to use in practice without explicit data collection, which requires expensive adaptation effort for model retraining. In this study, we propose a Channel State Information Automatic Labeling System (CALS) that automatically collects and labels training CSI data for deep learning-based Wi-Fi sensing systems. The proposed system efficiently collects CSI labeled for supervised learning by using computer vision technologies such as object detection algorithms. We built a prototype of CALS to demonstrate its efficiency and collected data to train deep learning models for detecting the presence of a person in an indoor environment, achieving an accuracy of over 90% with the auto-labeled datasets generated by CALS.
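
Purely as an illustration of the auto-labeling idea (the function, data formats, and time-alignment rule below are assumptions, not CALS itself), a camera-side detector's person/no-person decision can be attached to the CSI sample closest in time:

```python
# Hypothetical vision-driven labeling of CSI samples (not the authors' implementation).
def label_csi_by_vision(csi_samples, vision_events, max_gap=0.1):
    """csi_samples: list of (timestamp, csi_vector);
    vision_events: list of (timestamp, person_present: bool) from an object detector."""
    labeled = []
    for t_csi, csi in csi_samples:
        # Find the vision detection closest in time to this CSI sample.
        t_vis, present = min(vision_events, key=lambda e: abs(e[0] - t_csi))
        if abs(t_vis - t_csi) <= max_gap:          # only label well-aligned samples
            labeled.append((csi, int(present)))    # 1 = person present, 0 = empty room
    return labeled
```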

A Prediction Triage System for Emergency Department During Hajj Period using Machine Learning Models

  • Huda N. Alhazmi
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.11-23
    • /
    • 2024
  • Triage is the practice of accurately prioritizing patients in the emergency department (ED) based on their medical condition in order to provide them with proper treatment. Variation in triage assessment among medical staff can cause mis-triage, which affects patients negatively. Developing an ED triage system based on machine learning (ML) techniques can lead to accurate and efficient triage outcomes. This study aims to develop a triage system using machine learning techniques to predict ED triage levels from patients' information. We conducted a retrospective study using Security Forces Hospital ED data from 2021 through 2023 during the Hajj period in Saudi Arabia. Using demographics, vital signs, and chief complaints as predictors, two machine learning models were investigated, namely a gradient boosted decision tree (XGB) and a deep neural network (DNN). The models were trained to predict ED triage levels, and their predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC) and the confusion matrix. A total of 11,584 ED visits were collected and used in this study. The XGB and DNN models exhibited strong predictive performance, with AUC-ROC scores of 0.85 and 0.82, respectively. Compared to the traditional approach, our proposed system demonstrated better performance and can be implemented in real-world clinical settings. Utilizing ML applications can empower triage decision-making, clinical care, and resource utilization.
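
A rough sketch of the gradient-boosted baseline and the AUC evaluation described above; the feature matrix, triage levels, and hyperparameters are placeholders, not the study's data or settings.

```python
# Illustrative XGBoost triage-level classifier with one-vs-rest AUC evaluation.
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
import numpy as np

# Placeholder design matrix: demographics + vital signs, with 5 triage levels (0-4).
X = np.random.rand(1000, 8)
y = np.random.randint(0, 5, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

# One-vs-rest AUC across the five triage levels.
auc = roc_auc_score(y_te, model.predict_proba(X_te), multi_class="ovr")
print(f"AUC-ROC: {auc:.3f}")
```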

Real-Time Hand Gesture Recognition Based on Deep Learning (딥러닝 기반 실시간 손 제스처 인식)

  • Kim, Gyu-Min;Baek, Joong-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.4
    • /
    • pp.424-431
    • /
    • 2019
  • In this paper, we propose a real-time hand gesture recognition algorithm to eliminate the inconvenience of using hand controllers in VR applications. The user's 3D hand coordinate information is detected by a Leap Motion sensor, and the coordinates are then converted into a two-dimensional image. We classify hand gestures in real time by learning the imaged 3D hand coordinate information through the SSD (Single Shot multibox Detector) model, one of the CNN (Convolutional Neural Network) models. We propose using all three channels rather than only one. A sliding-window technique is also proposed to recognize gestures in real time as the user actually makes them. An experiment was conducted to measure the recognition rate and learning performance of the proposed model; it showed 99.88% recognition accuracy and higher usability than the existing algorithm.
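
A hypothetical sketch of the coordinate-to-image idea described above: the x, y, z coordinate sequences of the hand joints are stacked into the three channels of a small image. The shapes and normalization are assumptions, not the paper's exact encoding.

```python
# Convert a sequence of 3D hand-joint coordinates into a 3-channel image (illustrative only).
import numpy as np

def coords_to_image(joint_coords, size=32):
    """joint_coords: array of shape (frames, joints, 3) from a Leap Motion-style sensor.
    Returns a (size, size, 3) uint8 image, one channel per coordinate axis."""
    frames, joints, _ = joint_coords.shape
    img = np.zeros((frames, joints, 3), dtype=np.float32)
    for axis in range(3):                       # x, y, z -> R, G, B
        channel = joint_coords[:, :, axis]
        channel = (channel - channel.min()) / (channel.max() - channel.min() + 1e-8)
        img[:, :, axis] = channel
    # Resize (nearest-neighbour) to a fixed square input for the detector.
    rows = np.linspace(0, frames - 1, size).astype(int)
    cols = np.linspace(0, joints - 1, size).astype(int)
    return (img[np.ix_(rows, cols)] * 255).astype(np.uint8)

image = coords_to_image(np.random.rand(60, 21, 3))  # e.g. 60 frames of 21 joints
```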

Model Transformation and Inference of Machine Learning using Open Neural Network Format (오픈신경망 포맷을 이용한 기계학습 모델 변환 및 추론)

  • Kim, Seon-Min;Han, Byunghyun;Heo, Junyeong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.3
    • /
    • pp.107-114
    • /
    • 2021
  • Recently, artificial intelligence technology has been introduced in various fields, and as academic interest has increased, various machine learning models have come to be operated in various frameworks. However, these frameworks use different data formats, which lack interoperability; to overcome this, the open neural network exchange format, ONNX, has been proposed. In this paper, we describe how to transform multiple machine learning models to ONNX, and propose algorithms and an inference system that can identify the machine learning technique from the integrated ONNX format. Furthermore, we compare the inference results of the models before and after the ONNX transformation, showing that there is no loss or performance degradation of the learned results through the transformation.
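
A minimal sketch of the export-and-infer round trip with ONNX, assuming a toy PyTorch model; it is meant only to show the kind of before/after comparison described above, not the paper's conversion system.

```python
# Export a small model to ONNX, run it with ONNX Runtime, and compare outputs.
import torch
import torch.nn as nn
import onnxruntime as ort
import numpy as np

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))  # toy classifier
model.eval()

dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["logits"])

# Inference through the framework-neutral ONNX format.
session = ort.InferenceSession("model.onnx")
onnx_out = session.run(None, {"input": dummy.numpy()})[0]

# Compare against the original PyTorch output to check for conversion loss.
torch_out = model(dummy).detach().numpy()
print(np.allclose(onnx_out, torch_out, atol=1e-5))
```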