• Title/Summary/Keyword: Deep Learning


Network Intrusion Detection Using Transformer and BiGRU-DNN in Edge Computing

  • Huijuan Sun
    • Journal of Information Processing Systems / v.20 no.4 / pp.458-476 / 2024
  • To address the class imbalance in network traffic data, which degrades network intrusion detection performance, a combined framework based on the transformer is proposed. First, Tomek Links, SMOTE, and WGAN are used to preprocess the data and mitigate the class-imbalance problem. Second, a transformer encodes the traffic data to capture correlations within the network traffic. Finally, a hybrid deep learning model combining a bidirectional gated recurrent unit (BiGRU) and a deep neural network (DNN) is proposed: the BiGRU extracts long-range dependency features, the DNN extracts deep-level features, and softmax completes the classification. Experiments were conducted on the NSL-KDD, UNSW-NB15, and CICIDS2017 datasets, and the detection accuracy of the proposed model was 99.72%, 84.86%, and 99.89% on the three datasets, respectively. Compared with other recent deep-learning models, it effectively improves intrusion detection performance and thereby the communication security of network data. (A minimal sketch of such a pipeline follows this entry.)
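
The sketch below is not the paper's code; it is a minimal illustration, under assumed feature counts and layer sizes, of the kind of pipeline the abstract describes: SMOTE plus Tomek Links rebalancing (the WGAN stage is omitted), a transformer encoder over the traffic features, a BiGRU for long-range dependencies, and a DNN classifier trained with softmax/cross-entropy.

```python
# Rough sketch only: SMOTE + Tomek Links rebalancing, then transformer -> BiGRU -> DNN.
import torch
import torch.nn as nn
from imblearn.combine import SMOTETomek  # SMOTE oversampling combined with Tomek Links cleaning

class TransformerBiGRUDNN(nn.Module):
    def __init__(self, n_features, n_classes, d_model=64):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                 # treat each traffic feature as a token
        enc_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.bigru = nn.GRU(d_model, 64, batch_first=True, bidirectional=True)
        self.dnn = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, x):                                  # x: (batch, n_features)
        h = self.embed(x.unsqueeze(-1))                    # (batch, n_features, d_model)
        h = self.encoder(h)                                # self-attention captures feature correlations
        h, _ = self.bigru(h)                               # bidirectional GRU for long-range dependencies
        return self.dnn(h[:, -1, :])                       # logits; softmax applied via CrossEntropyLoss

# Rebalance the training split before fitting (the WGAN stage described in the abstract is omitted):
# X_res, y_res = SMOTETomek(random_state=0).fit_resample(X_train, y_train)
```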

Transfer Learning-Based Vibration Fault Diagnosis for Ball Bearing (전이학습을 이용한 볼베어링의 진동진단)

  • Subin Hong;Youngdae Lee;Chanwoo Moon
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.845-850 / 2023
  • In this paper, we propose a method for diagnosing ball bearing vibration faults using transfer learning. The STFT, which represents vibration signals in the time-frequency domain, was used as the input to a CNN for fault diagnosis. To train the CNN-based deep neural network quickly and improve diagnostic performance, we propose a transfer learning-based training technique in which the feature extractor and classifier of a VGG-based image classification model are selectively retrained. The training data were the publicly available ball bearing vibration data provided by Case Western Reserve University, and performance was evaluated by comparing the proposed method with an existing CNN model. The experimental results show that transfer learning is useful for condition diagnosis on ball bearing vibration data and suggest that other industries can likewise apply transfer learning to improve condition diagnosis. (A minimal sketch of this transfer-learning setup follows this entry.)
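
As a rough illustration only (not the paper's implementation), the sketch below converts a vibration signal to an STFT spectrogram and fine-tunes a pretrained VGG16 whose convolutional feature extractor is frozen; the 12 kHz sampling rate, the four fault classes, and the learning rate are assumptions.

```python
# Sketch under assumptions: STFT spectrogram input and a frozen VGG16 feature extractor
# with only a new classifier head trained (transfer learning).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision import models

def to_spectrogram(signal, fs=12_000):
    """Magnitude STFT of a 1-D vibration signal (time-frequency input for the CNN)."""
    _, _, Z = stft(signal, fs=fs, nperseg=256)
    return np.abs(Z)

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():          # freeze the pretrained feature extractor
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 4)       # new head, e.g. normal / inner / outer / ball fault
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
```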

Object Detection Performance Analysis between On-GPU and On-Board Analysis for Military Domain Images

  • Du-Hwan Hur;Dae-Hyeon Park;Deok-Woong Kim;Jae-Yong Baek;Jun-Hyeong Bak;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.29 no.8 / pp.157-164 / 2024
  • In this paper, we discuss the feasibility of deploying a deep learning-based detector on a resource-limited board. Although many studies evaluate detectors on machines with high-performance GPUs, evaluation on boards with limited computational resources is still insufficient. Therefore, in this work, we implement deep learning detectors and deploy them on a compact board by parsing and optimizing the detector. To assess how deep learning-based detectors perform under limited resources, we monitor several detectors on different H/W resources. On the COCO detection dataset, we compare and analyze the evaluation results of the on-board and on-GPU detection models in terms of several metrics, including mAP, power consumption, and execution speed (FPS); a simple speed-measurement sketch follows this entry. To demonstrate the effect of applying our detector in the military domain, we also evaluate the models on our own dataset of thermal images reflecting flight battle scenarios. As a result, we investigate the strengths of deep learning-based on-board detectors and show that deep learning-based vision models can contribute to flight battle scenarios.
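
The snippet below is only an assumed, minimal way to measure execution speed (FPS) of a detector on whichever device is available, so the same script can be run on a workstation GPU and on an embedded board for comparison; the torchvision Faster R-CNN model and the 640x640 input are placeholders, and power consumption would be measured separately with board-specific tools.

```python
# Assumed benchmarking sketch, not the paper's harness: end-to-end inference FPS measurement.
import time
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT).eval().to(device)
dummy = [torch.rand(3, 640, 640, device=device)]   # stand-in for a COCO or thermal image

with torch.no_grad():
    for _ in range(10):                            # warm-up iterations
        model(dummy)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(100):
        model(dummy)
    if device == "cuda":
        torch.cuda.synchronize()
print(f"FPS: {100 / (time.perf_counter() - start):.1f}")
```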

Atypical Character Recognition Based on Mask R-CNN for Hangul Signboard

  • Lim, Sooyeon
    • International journal of advanced smart convergence / v.8 no.3 / pp.131-137 / 2019
  • This study proposes a method for learning and recognizing the characteristics that serve as classification criteria for Hangul, using Mask R-CNN, one of the deep learning techniques, to recognize and classify atypical Hangul characters. The atypical characters on Hangul signboards have highly deformed and colorful shapes compared with ordinary characters. Therefore, to recognize signboard characters, a model must be trained on separate atypical Hangul characters rather than only on standard, regular ones. We selected the Hangul character '닭' as sample data, constructed a data set of 5,383 Hangul images, and used it for training and validating the deep learning model. On the test set constructed to verify the reliability of the trained model, the accuracy (area detection rate) was about 92.65%. We therefore confirmed that the proposed method is very useful for Hangul signboard character recognition, and we plan to extend it to various Hangul data. (A minimal sketch of adapting Mask R-CNN to such a task follows this entry.)
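
The sketch below is the standard torchvision fine-tuning recipe, given as an illustration rather than the author's code: a pretrained Mask R-CNN is adapted to two classes (background plus one target character such as '닭') by replacing its box and mask heads.

```python
# Sketch: adapting torchvision's Mask R-CNN to a custom character class.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + target character
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box-classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Replace the mask-prediction head likewise.
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
```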

A Win/Lose prediction model of Korean professional baseball using machine learning technique

  • Seo, Yeong-Jin;Moon, Hyung-Woo;Woo, Yong-Tae
    • Journal of the Korea Society of Computer and Information / v.24 no.2 / pp.17-24 / 2019
  • In this paper, we propose a new model for predicting wins and losses in Korean professional baseball games using machine learning techniques. Basic baseball statistics and sabermetrics data, which are highly correlated with scoring, were used as inputs, and a deep neural network using dropout and the ReLU activation function was trained by supervised learning. From the trained network's predictions of each team's expected runs scored and runs allowed, an expected win rate was calculated, and the team with the higher expected win rate was predicted as the winner. To verify the effectiveness of the proposed model, its predicted win percentage was compared with the actual win percentage and the Pythagorean expectation. (A minimal sketch of such a model and the Pythagorean baseline follows this entry.)
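
The sketch below is not the paper's model; it only illustrates the two ingredients the abstract names: a small fully connected network with ReLU and dropout that maps assumed game features to expected runs, and the Pythagorean expectation used as the comparison baseline.

```python
# Sketch under assumptions: expected-runs regressor with ReLU/dropout + Pythagorean baseline.
import torch.nn as nn

def pythagorean_expectation(runs_scored, runs_allowed, exponent=2.0):
    """Classic Pythagorean win-rate estimate: RS^k / (RS^k + RA^k)."""
    return runs_scored**exponent / (runs_scored**exponent + runs_allowed**exponent)

run_model = nn.Sequential(           # predicts a team's expected runs from 20 assumed features
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(32, 1),
)
# The team whose derived expected win rate is higher is predicted as the winner.
```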

Development of surface detection model for dried semi-finished product of Kimbukak using deep learning (딥러닝 기반 김부각 건조 반제품 표면 검출 모델 개발)

  • Tae Hyong Kim;Ki Hyun Kwon;Ah-Na Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.4 / pp.205-212 / 2024
  • This study developed a deep learning model that distinguishes the front (with garnish) and the back (without garnish) of dried semi-finished products (dried bukak) for a screening operation performed before a robot's vacuum gripper transfers the dried bukak to an oil heater. For training and validating the deep learning model, RGB images of the front and back surfaces of 400 dried bukak were acquired and preprocessed. YOLO-v5 was used as the base structure of the deep learning model, and region and surface-information labeling as well as data augmentation techniques were applied to the acquired images. mAP, mIoU, accuracy, recall, precision, and F1-score were selected to evaluate the performance of the developed YOLO-v5-based surface detection model. The mAP and mIoU were 0.98 and 0.96 on the front surface and 1.00 and 0.95 on the back surface, respectively. For binary classification of the front and back classes, the results were, on average, an accuracy of 98.5%, recall of 98.3%, precision of 98.6%, and F1-score of 98.4%. As a result, the developed model can classify the surface information of dried bukak from RGB images and can be used to develop a robot-automated system for the surface detection step that precedes deep frying. (A minimal inference sketch follows this entry.)
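
The sketch below is an inference illustration only, not the study's code: it loads a custom YOLOv5 model via torch.hub and keeps the highest-confidence front/back detection. The weight file name, image path, and class names are hypothetical.

```python
# Inference sketch with hypothetical paths: custom YOLOv5 model for front/back surface detection.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="bukak_surface.pt")  # hypothetical weights
results = model("bukak_sample.jpg")                  # hypothetical RGB image of a dried bukak
detections = results.pandas().xyxy[0]                # boxes with class names and confidences
if len(detections):
    best = detections.sort_values("confidence", ascending=False).iloc[0]
    print(best["name"], float(best["confidence"]))   # e.g. "front" or "back"
```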

Artificial Intelligence based Tumor detection System using Computational Pathology

  • Naeem, Tayyaba;Qamar, Shamweel;Park, Peom
    • Journal of the Korean Society of Systems Engineering / v.15 no.2 / pp.72-78 / 2019
  • Pathology is the motor that drives healthcare's understanding of disease. The way pathologists diagnose diseases, manual observation of images under a microscope, has been used for the last 150 years, and it is time for a change. This paper focuses on tumor detection using deep learning techniques. Pathologists examine specimen slides from a specific part of the body (e.g., the liver, breast, or prostate region) under a microscope to identify the affected cells among all the normal cells. This process is time consuming and not sufficiently accurate, so there is a need for a system that can detect tumors automatically in less time. The solution to this problem is computational pathology: an approach that examines tissue data obtained through whole-slide imaging using modern image-analysis algorithms and extracts clinically relevant information from these data. Artificial intelligence models such as machine learning and deep learning are used at the molecular level to generate diagnostic inferences and predictions and to present this clinically actionable knowledge to pathologists through dynamic, integrated reports, enabling physicians, laboratory personnel, and other parts of the health care system to make the best possible medical decisions. We discuss techniques for automated tumor detection within the new discipline of computational pathology, which will be useful for the future practice of pathology and, more broadly, for medical practice in general.

CBIR-based Data Augmentation and Its Application to Deep Learning (CBIR 기반 데이터 확장을 이용한 딥 러닝 기술)

  • Kim, Sesong;Jung, Seung-Won
    • Journal of Broadcast Engineering / v.23 no.3 / pp.403-408 / 2018
  • Generally, a large data set is required to train a deep learning model. However, since it is not easy to create large data sets, many techniques enlarge small data sets through data augmentation such as rotation, flipping, and filtering. These simple techniques, however, have limited extensibility because they cannot move far beyond the features the original data already possess. To solve this problem, we propose a method that acquires new image data from existing data: similar images are retrieved and collected by using the existing images as queries for content-based image retrieval (CBIR). Finally, we compare the performance of the base model with that of the model trained with CBIR-based augmentation. (A minimal retrieval sketch follows this entry.)
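
The sketch below is one assumed way to realize the retrieval step, not the paper's pipeline: a pretrained ResNet-18 serves as the CBIR feature extractor, and cosine nearest-neighbour search over an external image pool returns similar images that could then be added to the training set.

```python
# Sketch under assumptions: CNN feature extraction + nearest-neighbour retrieval for CBIR-based augmentation.
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()  # drop the final FC layer

@torch.no_grad()
def embed(batch):                        # batch: (N, 3, 224, 224) normalized images
    return extractor(batch).flatten(1)   # (N, 512) feature vectors

# pool_feats: embeddings of a large external pool; query_feats: embeddings of the existing training images.
# index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(pool_feats.numpy())
# _, idx = index.kneighbors(query_feats.numpy())     # indices of similar pool images to add to the set
```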

Data Augmentation Method of Small Dataset for Object Detection and Classification (영상 내 물체 검출 및 분류를 위한 소규모 데이터 확장 기법)

  • Kim, Jin Yong;Kim, Eun Kyeong;Kim, Sungshin
    • The Journal of Korea Robotics Society / v.15 no.2 / pp.184-189 / 2020
  • This paper studies data augmentation for training deep learning models on small datasets. When training a deep learning model to recognize and classify non-mainstream objects, there is a limit to how much training data can be obtained. Therefore, this paper proposes a data augmentation method using perspective transforms and image synthesis. In addition, because the object region must be stored for every training image in order to detect objects, we devised a way to augment the data and record the object regions at the same time; a minimal sketch of this idea follows this entry. To verify the performance of data augmented with the proposed method, an experiment compared its classification accuracy with that of data augmented by traditional methods, using transfer learning for model training. The experimental results show that the model trained with the proposed method achieved higher accuracy than the model trained with the traditional method.
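
The sketch below illustrates the general idea rather than the authors' code: a random perspective warp is applied to an image and, with the same matrix, to the object's corner points, so the augmented image and its stored object region stay consistent.

```python
# Sketch: perspective-transform augmentation that warps the image and its object region together.
import cv2
import numpy as np

def perspective_augment(image, box_corners, jitter=20):
    """box_corners: (4, 2) array of object corner points (x, y) in pixel coordinates."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = src + np.random.uniform(-jitter, jitter, src.shape).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(image, M, (w, h))
    pts = box_corners.reshape(-1, 1, 2).astype(np.float32)
    warped_corners = cv2.perspectiveTransform(pts, M).reshape(-1, 2)
    return warped, warped_corners                     # augmented image + its object region
```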

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun;Li, Dongliang;Bo, Sun;Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network performs two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. The method performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991, showing that the multi-task, multi-label model achieves good performance on both tasks. (A minimal sketch of such a two-head network follows this entry.)
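
The sketch below is not the authors' network; it only illustrates the multi-task, multi-label idea under assumptions: a shared residual backbone with a multi-label head for AUs (one sigmoid/BCE logit per AU) and a single-label head for the facial view (softmax/cross-entropy). The AU and view counts are placeholders.

```python
# Sketch: shared residual backbone with a multi-label AU head and a single-label view head.
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskAUNet(nn.Module):
    def __init__(self, n_aus=10, n_views=9):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # residual feature extractor
        self.au_head = nn.Linear(512, n_aus)       # multi-label: one logit per action unit
        self.view_head = nn.Linear(512, n_views)   # single-label: one facial-view class

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.au_head(f), self.view_head(f)

au_loss = nn.BCEWithLogitsLoss()    # each AU is an independent binary label
view_loss = nn.CrossEntropyLoss()   # exactly one view label per image
```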