Title/Summary/Keyword: Deep Learning Dataset


DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.;Lutz, Adam;Ezekiel, Soundararajan;Pearlstein, Larry;Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.11 / pp.4290-4309 / 2020
  • In recent years, advances in machine learning have led to its widespread adoption for tasks such as object detection, image classification, and anomaly detection. Despite this promise, a network's performance ultimately depends on the quality of the data it receives: even a well-trained network will perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Under normal circumstances, images of the same scene captured from different points of focus, angles, or modalities must each be analysed separately by the network, even though they may contain overlapping information (as with images of the same scene captured from different angles) or irrelevant information (as with infrared sensors, which capture thermal information well but not topographical detail). This can add significantly to the computational time and resources required to run the network without providing any additional benefit. In this study, we explore image fusion techniques that assemble multiple images of the same scene into a single image retaining the most salient features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. Applying this fusion step before feeding a dataset into the network significantly reduces the number of images and can improve classification accuracy by enhancing images while discarding irrelevant and overlapping regions.
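The core of such a pipeline is a fusion rule that keeps, per region, whichever source image is most informative. The sketch below illustrates one common rule (region-wise Laplacian-energy max selection) in Python with OpenCV; it is a minimal illustration of the general idea, not the authors' exact multi-resolution method, and the kernel size is an assumption.

```python
import cv2
import numpy as np

def fuse_pair(img_a: np.ndarray, img_b: np.ndarray, ksize: int = 31) -> np.ndarray:
    """Fuse two grayscale views of the same scene by keeping, per pixel,
    the source with the higher local Laplacian energy (a rough focus measure)."""
    lap_a = cv2.Laplacian(img_a.astype(np.float32), cv2.CV_32F)
    lap_b = cv2.Laplacian(img_b.astype(np.float32), cv2.CV_32F)
    # Smooth the absolute response so selection varies per region, not per pixel.
    energy_a = cv2.blur(np.abs(lap_a), (ksize, ksize))
    energy_b = cv2.blur(np.abs(lap_b), (ksize, ksize))
    return np.where(energy_a >= energy_b, img_a, img_b)

# Usage: fused = fuse_pair(cv2.imread("view1.png", 0), cv2.imread("view2.png", 0))
```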

Channel Attention Module in Convolutional Neural Network and Its Application to SAR Target Recognition Under Limited Angular Diversity Condition (합성곱 신경망의 Channel Attention 모듈 및 제한적인 각도 다양성 조건에서의 SAR 표적영상 식별로의 적용)

  • Park, Ji-Hoon;Seo, Seung-Mo;Yoo, Ji Hee
    • Journal of the Korea Institute of Military Science and Technology / v.24 no.2 / pp.175-186 / 2021
  • In the field of automatic target recognition (ATR) with synthetic aperture radar (SAR) imagery, it is usually impractical to obtain SAR target images covering the full range of aspect views. When the database consists of SAR target images with limited angular diversity, the performance of the SAR-ATR system can degrade. To address this problem, this paper proposes a deep learning-based method in which channel attention modules (CAMs) are inserted into a convolutional neural network (CNN). Motivated by the squeeze-and-excitation (SE) network, the CAM is expected to improve recognition performance by selectively emphasizing discriminative features and suppressing less informative ones. After testing various CAM types within a ResNet18-type base network, the SE CAM and its modified forms are applied to SAR target recognition on the MSTAR dataset with different reduction ratios, in order to validate the recognition performance improvement under the limited angular diversity condition.
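For reference, a minimal PyTorch sketch of the squeeze-and-excitation channel attention module the paper builds on is given below; `reduction` corresponds to the reduction ratio the authors vary, while the layer sizes are otherwise generic.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al.); a generic sketch."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # restore channel dimension
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> (b, c)
        return x * w.view(b, c, 1, 1)     # reweight feature maps channel-wise
```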

Target Image Exchange Model for Object Tracking Based on Siamese Network (샴 네트워크 기반 객체 추적을 위한 표적 이미지 교환 모델)

  • Park, Sung-Jun;Kim, Gyu-Min;Hwang, Seung-Jun;Baek, Joong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.389-395 / 2021
  • In this paper, we propose a target image exchange model to improve the performance of Siamese network-based object tracking. A Siamese tracker follows the object by finding the most similar region in the search image, using only the target image specified in the first frame of the sequence. Because similarity is computed only between the first-frame object and the search image, a single tracking failure allows errors to accumulate, and the tracker drifts to regions other than the intended object. We therefore design a CNN (convolutional neural network)-based model that checks whether tracking is progressing well, and we define the target image exchange timing using the score output by the Siamese tracking algorithm. The proposed model is evaluated on the VOT-2018 dataset and achieves an accuracy of 0.611 and a robustness of 22.816.
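The exchange logic itself reduces to a score-gated template update. The sketch below shows that control flow under heavy assumptions: the `tracker.track` interface, the `crop` helper, and the threshold value are all hypothetical, not the authors' API.

```python
# Hypothetical interface: `tracker.track(frame, template)` returns a predicted
# box and the Siamese similarity score. The threshold is illustrative.
EXCHANGE_THRESHOLD = 0.5

def crop(frame, box):
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

def track_sequence(tracker, frames, init_box):
    template = crop(frames[0], init_box)   # target image from the first frame
    boxes = []
    for frame in frames[1:]:
        box, score = tracker.track(frame, template)
        boxes.append(box)
        # While the score stays high, tracking is judged reliable, and the
        # current prediction replaces the template, limiting drift accumulation.
        if score > EXCHANGE_THRESHOLD:
            template = crop(frame, box)
    return boxes
```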

Research on Human Posture Recognition System Based on The Object Detection Dataset (객체 감지 데이터 셋 기반 인체 자세 인식시스템 연구)

  • Liu, Yan;Li, Lai-Cun;Lu, Jing-Xuan;Xu, Meng;Jeong, Yang-Kwon
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.1 / pp.111-118 / 2022
  • In computer vision research, two-dimensional human pose estimation is a very active research direction, particularly for pose tracking and behavior recognition, and it has been a prominent research topic in recent years. Acquiring human pose targets is essentially the problem of accurately identifying human subjects in images. Human pose recognition is used both in artificial intelligence systems and in daily life, and its usefulness is determined mainly by the success rate and accuracy of the recognition process, which underscores the importance of the recognition rate. In this work, the human body is divided into 17 key points for labeling, and the key points are additionally segmented to ensure the accuracy of the labeling information. For the recognition design, the comprehensive MS COCO dataset is used for deep learning: a neural network model is trained on a large number of samples, progressing from simple step-by-step training to efficient training, so that a good accuracy rate can be obtained.
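The 17-keypoint labeling scheme is the standard MS COCO person-keypoints format. Below is a minimal sketch of reading those annotations with pycocotools; the annotation file path is illustrative.

```python
from pycocotools.coco import COCO

# Illustrative path to the standard COCO person-keypoints annotations.
coco = COCO("annotations/person_keypoints_train2017.json")
person_ids = coco.getCatIds(catNms=["person"])
img_ids = coco.getImgIds(catIds=person_ids)

ann_ids = coco.getAnnIds(imgIds=img_ids[0], catIds=person_ids, iscrowd=False)
for ann in coco.loadAnns(ann_ids):
    kps = ann["keypoints"]  # flat list: [x1, y1, v1, ..., x17, y17, v17]
    points = [(kps[i], kps[i + 1], kps[i + 2]) for i in range(0, len(kps), 3)]
    # v == 0: not labeled, v == 1: labeled but occluded, v == 2: labeled and visible
    visible = [p for p in points if p[2] == 2]
```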

Automatic Building Extraction Using SpaceNet Building Dataset and Context-based ResU-Net (SpaceNet 건물 데이터셋과 Context-based ResU-Net을 이용한 건물 자동 추출)

  • Yoo, Suhong;Kim, Cheol Hwan;Kwon, Youngmok;Choi, Wonjun;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.38 no.5_2 / pp.685-694 / 2022
  • Building information is essential for various urban spatial analyses, so continuous building monitoring is required, but it poses many practical difficulties. To this end, research is being conducted on extracting buildings from satellite images, which can be observed continuously over wide areas, and deep learning-based semantic segmentation techniques have recently been used. In this study, part of the structure of the context-based ResU-Net was modified and trained to automatically extract buildings from 30 cm Worldview-3 RGB imagery using SpaceNet's freely available Building v2 dataset. In the classification accuracy evaluation, the resulting f1-score was higher than that of the winners of the second SpaceNet competition. Therefore, if Worldview-3 satellite imagery can be provided continuously, the building extraction approach of this study could be used to automatically map buildings around the world.
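For orientation, the sketch below shows the kind of residual convolution block that distinguishes a ResU-Net encoder from a plain U-Net; it is a generic PyTorch sketch and does not reproduce the authors' context-based modifications.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Generic residual convolution block of the kind used in ResU-Net encoders."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channels.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + self.skip(x))
```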

Multiple Binarization Quadtree Framework for Optimizing Deep Learning-Based Smoke Synthesis Method

  • Kim, Jong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.4 / pp.47-53 / 2021
  • In this paper, we propose a quadtree-based optimization technique that enables fast super-resolution (SR) computation by efficiently classifying and partitioning the physics-based simulation data required to compute SR. The proposed method reduces the time required for quadtree computation by downscaling the smoke simulation data used as input. By binarizing the smoke density in this process, the quadtree is constructed while mitigating the numerical loss of density caused by downscaling. The training data is the COCO 2017 dataset, and the artificial neural network uses a VGG19-based architecture. To prevent information loss when passing through the convolutional layers, the output of the previous layer is added back during learning, similar to the residual method. For smoke, the proposed method achieved a speed-up of roughly 15 to 18 times over the previous approach.
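A minimal sketch of the binarize-then-subdivide idea follows: the density grid is thresholded and a quadtree is built over it, so uniform (typically empty) tiles can be skipped by later SR passes. The threshold, minimum tile size, and square power-of-two grid are assumptions, not the paper's parameters.

```python
import numpy as np

def build_quadtree(mask: np.ndarray, x=0, y=0, size=None, min_size=8):
    """Recursively split a binarized (0/1) density grid; return leaf tiles as
    (x, y, size, value). A leaf is emitted when a tile is uniform or reaches
    min_size. Assumes a square, power-of-two grid."""
    if size is None:
        size = mask.shape[0]
    tile = mask[y:y + size, x:x + size]
    if size <= min_size or tile.min() == tile.max():
        return [(x, y, size, int(tile.max()))]
    half = size // 2
    leaves = []
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        leaves += build_quadtree(mask, x + dx, y + dy, half, min_size)
    return leaves

# Binarize the (downscaled) smoke density before building the tree; the
# 0.1 threshold is illustrative.
density = np.random.rand(128, 128)
leaves = build_quadtree((density > 0.1).astype(np.uint8))
occupied = [leaf for leaf in leaves if leaf[3] == 1]  # tiles worth upscaling
```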

Towards Low Complexity Model for Audio Event Detection

  • Saleem, Muhammad;Shah, Syed Muhammad Shehram;Saba, Erum;Pirzada, Nasrullah;Ahmed, Masood
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.175-182 / 2022
  • In daily life, we encounter different types of information, for example in multimedia and text formats, as part of common routines such as watching or reading the news, listening to the radio, and watching videos. Sometimes, however, finding a particular type of content is a problem: a listener who wants jazz may find that every radio channel plays pop music mixed with advertisements, gets stuck with pop, and gives up the search. An automatic audio classification system can solve this problem. Deep learning (DL) models can make such audio classification practical, but they are expensive and difficult to deploy on edge devices such as the Nano BLE Sense or Raspberry Pi, because they typically require substantial computational power such as a graphics processing unit (GPU). To address this, we propose a low-complexity DL model for audio event detection (AED). We extract mel-spectrograms of dimension 128×431×1 from the audio signals and apply normalization, along with three data augmentation methods: frequency masking, time masking, and mixup. We design a convolutional neural network (CNN) with spatial dropout, batch normalization, and separable 2D convolutions inspired by VGGNet [1], and we further reduce the model size by applying float16 quantization to the trained model. Experiments were conducted on the updated dataset provided by the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 challenge. Our model achieved a validation loss of 0.33 and an accuracy of 90.34% with a model size of 132.50 KB.
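The front end of this pipeline is standard enough to sketch. Below, log-mel inputs are extracted with librosa (128 mel bands matches the paper's 128×431×1 shape; at 22,050 Hz with the default hop of 512 samples, 431 frames corresponds to roughly 10 s of audio, though the exact sample rate and hop are assumptions), followed by the frequency-masking augmentation the paper names.

```python
import librosa
import numpy as np

def mel_input(path: str, sr: int = 22050) -> np.ndarray:
    """Normalized log-mel spectrogram with a trailing channel dimension."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    log_mel = (log_mel - log_mel.mean()) / (log_mel.std() + 1e-8)  # normalization
    return log_mel[..., np.newaxis]  # (128, frames, 1)

def freq_mask(spec: np.ndarray, max_width: int = 16) -> np.ndarray:
    """Frequency masking: zero out a random band of mel bins.
    (Time masking is the same operation along axis 1.)"""
    spec = spec.copy()
    width = np.random.randint(1, max_width + 1)
    start = np.random.randint(0, spec.shape[0] - width)
    spec[start:start + width] = 0.0
    return spec
```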

Classification Method based on Graph Neural Network Model for Diagnosing IoT Device Fault (사물인터넷 기기 고장 진단을 위한 그래프 신경망 모델 기반 분류 방법)

  • Kim, Jin-Young;Seon, Joonho;Yoon, Sung-Hun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.3 / pp.9-14 / 2022
  • In the Internet of Things (IoT), where a wide variety of devices can be connected, the failure of essential devices may lead to substantial economic losses and even loss of life. To reduce these losses, fault diagnosis techniques are considered an essential part of IoT. In this paper, a graph neural network-based method is proposed for detecting faults and classifying fault types by extracting features from system vibration data. For training the deep learning model, the fault dataset from CWRU (Case Western Reserve University) is used as input data. To validate the classification performance of the proposed model, it is compared with a conventional CNN (convolutional neural network)-based fault classification model. Simulation results confirm that the proposed model outperforms the conventional model by up to 5% on unevenly distributed data. In future work, the classification runtime can be improved by making the proposed model lightweight.
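A minimal torch_geometric sketch of graph-level fault classification is shown below. How graphs are built from vibration segments, the feature dimension, and the class count are all assumptions here, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class FaultGNN(nn.Module):
    """Graph classifier sketch: node features from vibration segments (assumed),
    two GCN layers, and a pooled graph-level prediction head."""
    def __init__(self, in_dim: int, hidden: int = 64, n_classes: int = 10):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # one embedding per graph in the batch
        return self.head(x)             # fault-type logits
```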

Metal Surface Defect Detection and Classification using EfficientNetV2 and YOLOv5 (EfficientNetV2 및 YOLOv5를 사용한 금속 표면 결함 검출 및 분류)

  • Alibek, Esanov;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.577-586 / 2022
  • Detection and classification of steel surface defects are critical for product quality control in the steel industry, but traditional approaches cannot be used effectively on a production line due to their low accuracy and slow speed. Current, widely used deep learning-based algorithms still have accuracy problems and room for improvement. This paper proposes a steel surface defect detection method combining EfficientNetV2 for image classification with YOLOv5 as an object detector, offering shorter training time and high accuracy. First, an input image is passed to the EfficientNetV2 model, which classifies the defect class and predicts the probability that a defect is present. If that probability is less than 0.25, the algorithm directly concludes that the sample has no defects; otherwise, the sample is passed on to YOLOv5 to carry out defect detection on the metal surface. Experiments show that the proposed model performs well on the NEU dataset, with an accuracy of 98.3%, while its average training time is shorter than that of other models.
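The control flow of this cascade can be sketched briefly. The 0.25 gate follows the abstract; everything else below is an assumption for illustration: the class layout (the six NEU defect classes plus a "no defect" class at index 0), the untrained classifier (a fine-tuned checkpoint would be loaded in practice), and the preprocessing.

```python
import torch
from torchvision.models import efficientnet_v2_s

# Stage 1: classifier that also yields a defect probability (assumed head layout).
classifier = efficientnet_v2_s(num_classes=7).eval()
# Stage 2: generic YOLOv5 detector from the public hub, standing in for the
# authors' defect-trained detector.
detector = torch.hub.load("ultralytics/yolov5", "yolov5s")

def inspect(image_tensor: torch.Tensor, image_path: str):
    with torch.no_grad():
        probs = torch.softmax(classifier(image_tensor), dim=1)
    defect_prob = 1.0 - probs[0, 0].item()  # mass on everything but "no defect"
    if defect_prob < 0.25:                  # gate from the paper
        return []                           # skip detection entirely
    return detector(image_path)             # YOLOv5 localizes the defects
```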

A Study on the Generation of Webtoons through Fine-Tuning of Diffusion Models (확산모델의 미세조정을 통한 웹툰 생성연구)

  • Kyungho Yu;Hyungju Kim;Jeongin Kim;Chanjun Chun;Pankoo Kim
    • Smart Media Journal / v.12 no.7 / pp.76-83 / 2023
  • This study proposes a method to assist webtoon artists in the creation process by using a pretrained text-to-image model to generate webtoon images from text. The proposed approach fine-tunes a pretrained Stable Diffusion model on a webtoon dataset transformed into the desired webtoon style. The fine-tuning process uses the LoRA technique and completes in approximately 4.5 hours of training over 30,000 steps. The generated images depict shapes and backgrounds that follow the input text, resulting in webtoon-like images. Furthermore, quantitative evaluation using the Inception score shows that the proposed method outperforms DCGAN-based text-to-image models. If webtoon artists adopt the proposed text-to-image model for webtoon creation, it is expected to significantly reduce the time required for the creative process.
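To show how such a LoRA adapter would be used once trained, here is a minimal inference sketch with the Hugging Face diffusers library. The base checkpoint, adapter path, and prompt are illustrative assumptions, not the authors' artifacts.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained Stable Diffusion base model (illustrative checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the webtoon-style LoRA weights produced by fine-tuning (path assumed).
pipe.load_lora_weights("path/to/webtoon-lora")

# Generate a webtoon-style image from text.
image = pipe("a webtoon-style scene of two students talking on a rooftop").images[0]
image.save("webtoon_panel.png")
```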