• Title/Summary/Keyword: Deep Learning Dataset

A study on the detection of pedestrians in crosswalks using multi-spectrum (다중스펙트럼을 이용한 횡단보도 보행자 검지에 관한 연구)

  • Kim, Junghun;Choi, Doo-Hyun;Lee, JongSun;Lee, Donghwa
    • Journal of Korea Society of Industrial Information Systems / v.27 no.1 / pp.11-18 / 2022
  • The use of multi-spectral cameras is essential for day-and-night pedestrian detection. In this paper, a color camera and a thermal infrared camera were used to detect pedestrians near a crosswalk 24 hours a day at an intersection with a high risk of traffic accidents. For pedestrian detection, the YOLOv5 object detector was used, and detection performance was improved by using color images and thermal images at the same time. The proposed system achieved a high performance of 0.940 mAP on a day/night multi-spectral (color and thermal image) pedestrian dataset obtained at an actual crosswalk site.
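
The abstract describes feeding color and thermal frames to a single detector at the same time. Below is a minimal sketch of one way to do that kind of early, channel-level fusion in PyTorch; the 1x1 projection stem, tensor shapes, and the hand-off to a pretrained YOLOv5 are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: early (channel-level) fusion of a color frame and a thermal frame
# before a detection backbone. Illustrative only; not the paper's YOLOv5 code.
import torch
import torch.nn as nn

class FusionStem(nn.Module):
    """Maps a 4-channel (RGB + thermal) input to the 3-channel input most
    pretrained detectors (e.g. YOLOv5) expect."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Conv2d(4, 3, kernel_size=1)   # learnable spectral mixing

    def forward(self, rgb, thermal):
        # rgb: (B, 3, H, W) in [0, 1]; thermal: (B, 1, H, W) in [0, 1]
        x = torch.cat([rgb, thermal], dim=1)          # (B, 4, H, W)
        return self.proj(x)                           # (B, 3, H, W)

rgb = torch.rand(1, 3, 640, 640)
thermal = torch.rand(1, 1, 640, 640)
fused = FusionStem()(rgb, thermal)
print(fused.shape)  # torch.Size([1, 3, 640, 640])
# `fused` could then be passed to a detector such as
# torch.hub.load('ultralytics/yolov5', 'yolov5s') in place of a plain RGB frame.
```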

MS Office Malicious Document Detection Based on CNN (CNN 기반 MS Office 악성 문서 탐지)

  • Park, Hyun-su;Kang, Ah Reum
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.2 / pp.439-446 / 2022
  • Document-type malicious code is actively distributed via attachments on websites or in e-mails. It can bypass security programs relatively easily because no executable file is run directly, so document-type malicious code should be detected and blocked in advance. To detect it, we identified the document structure and selected keywords suspected of being malicious. We then created a dataset by converting the stream data in the document to ASCII code values. We specified the locations of malicious keywords in the document stream data and classified a stream as malicious by recognizing the information adjacent to the malicious keywords. Applying a CNN model to detect malicious code, we achieved accuracies of 0.97 and 0.92 at the stream level and file level, respectively.
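
A minimal sketch of the stream-level pipeline the abstract describes, assuming PyTorch: document stream bytes are mapped to their ASCII/byte values and classified with a small 1D CNN. The window length, layer sizes, and the binary benign/malicious output are assumptions, not the paper's exact model.

```python
# Sketch: bytes of a document stream -> ASCII/byte codes -> 1D CNN classifier.
import torch
import torch.nn as nn

MAX_LEN = 1024  # assumed fixed stream window length

def stream_to_tensor(stream: bytes) -> torch.Tensor:
    codes = list(stream[:MAX_LEN]) + [0] * max(0, MAX_LEN - len(stream))
    return torch.tensor(codes, dtype=torch.long)

class StreamCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(256, 32)           # one embedding per byte value
        self.conv = nn.Sequential(
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (B, MAX_LEN) byte codes
        h = self.embed(x).transpose(1, 2)            # (B, 32, MAX_LEN)
        return self.fc(self.conv(h).squeeze(-1))     # (B, n_classes) logits

x = stream_to_tensor(b"\x01\x02macro VBA CreateObject ...").unsqueeze(0)
print(StreamCNN()(x).shape)  # torch.Size([1, 2])
```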

Deep Learning Based Rumor Detection for Arabic Micro-Text

  • Alharbi, Shada;Alyoubi, Khaled;Alotaibi, Fahd
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.73-80 / 2021
  • Nowadays, microblogs have become the most popular platforms for obtaining and spreading information, and Twitter is one of the most widely used platforms for sharing everyday life events. However, rumors and misinformation on Arabic social media platforms have become pervasive and can cause inestimable harm to society. It is therefore imperative to study this issue and distinguish verified information from unverified information. There has recently been increasing interest in rumor detection on microblogs, but it is mostly applied to English, while work on Arabic is still an ongoing research topic that needs more effort. In this paper, we propose a combined Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) model to detect rumors in a Twitter dataset. Various experiments were conducted to choose the best hyper-parameter settings and achieve the best results, and different neural network models were used to evaluate performance and compare results. Experiments show that the CNN-LSTM model achieved the best accuracy of 0.95 and an F1-score of 0.94, outperforming the state-of-the-art methods.
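
A minimal sketch of a combined CNN-LSTM text classifier of the kind the abstract describes, assuming PyTorch; the vocabulary size, embedding width, and binary rumor/non-rumor output are illustrative assumptions rather than the paper's configuration.

```python
# Sketch: Embedding -> 1D conv (local n-gram features) -> LSTM -> classifier.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, vocab_size=20000, emb=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.conv = nn.Conv1d(emb, 64, kernel_size=3, padding=1)  # local features
        self.lstm = nn.LSTM(64, 64, batch_first=True)             # sequence modelling
        self.fc = nn.Linear(64, n_classes)

    def forward(self, tokens):                        # tokens: (B, T) word indices
        h = self.embed(tokens).transpose(1, 2)        # (B, emb, T)
        h = torch.relu(self.conv(h)).transpose(1, 2)  # (B, T, 64)
        _, (hn, _) = self.lstm(h)
        return self.fc(hn[-1])                        # (B, n_classes) logits

tokens = torch.randint(0, 20000, (4, 30))             # a batch of 4 tweets, 30 tokens each
print(CNNLSTM()(tokens).shape)                        # torch.Size([4, 2])
```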

Data anomaly detection for structural health monitoring using a combination network of GANomaly and CNN

  • Liu, Gaoyang;Niu, Yanbo;Zhao, Weijian;Duan, Yuanfeng;Shu, Jiangpeng
    • Smart Structures and Systems / v.29 no.1 / pp.53-62 / 2022
  • The deployment of advanced structural health monitoring (SHM) systems in large-scale civil structures collects large amounts of data. These data may contain multiple types of anomalies (e.g., missing, minor, outlier) caused by harsh environments, sensor faults, transfer omissions and other factors, and these anomalies seriously affect the evaluation of structural performance. The effective analysis and mining of SHM data is therefore an extremely important task. Inspired by the deep learning paradigm, this study develops a novel generative adversarial network (GAN) and convolutional neural network (CNN)-based data anomaly detection approach for SHM. The framework of the proposed approach includes three modules: (a) a three-channel input is established based on the fast Fourier transform (FFT) and Gramian angular field (GAF) methods; (b) a GANomaly is introduced and trained to extract features from normal samples alone to address the class-imbalance problem; (c) based on the output of GANomaly, a CNN is employed to distinguish the types of anomalies. In addition, a dataset-oriented method (i.e., multistage sampling) is adopted to obtain the optimal sampling ratios between the different types of samples. The proposed approach is tested with acceleration data from an SHM system on a long-span bridge. The results show that the proposed approach has a higher accuracy in detecting the multi-pattern anomalies of SHM data.
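
Module (a) above builds a multi-channel image from a 1D acceleration segment. The sketch below shows one plausible construction, assuming NumPy: a Gramian angular field (GASF) channel, an FFT-magnitude channel, and the raw signal as a third channel; the segment length, the synthetic signal, and the choice of third channel are assumptions, not the paper's exact recipe.

```python
# Sketch: turn a 1D acceleration segment into a 3-channel image for a CNN/GANomaly.
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    # rescale to [-1, 1], then GASF[i, j] = cos(phi_i + phi_j)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])

n = 128
segment = np.sin(np.linspace(0, 20 * np.pi, n)) + 0.1 * np.random.randn(n)

gaf = gramian_angular_field(segment)                        # (n, n)
spectrum = np.abs(np.fft.rfft(segment))                     # FFT magnitude
spectrum = np.interp(np.linspace(0, 1, n),                  # resample to length n
                     np.linspace(0, 1, len(spectrum)), spectrum)
fft_img = np.tile(spectrum, (n, 1))                         # spectrum as a 2D map
raw_img = np.tile(segment, (n, 1))                          # raw signal as a 2D map

three_channel = np.stack([gaf, fft_img, raw_img], axis=0)   # (3, n, n) network input
print(three_channel.shape)
```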

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents / v.18 no.3 / pp.11-20 / 2022
  • Human emotion recognition is an exciting topic that has been attracting researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Previous explorations in psychology show that emotional perception is affected by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. These explorations initiated a trend in computer vision of exploring the critical role of context by treating it as an additional modality, alongside facial expressions, for inferring the predicted emotion. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, simple additive fusion in multimodal training is not practical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes the emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features based on their impact on the target emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
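
A minimal sketch of an attention-based fusion layer in the spirit of the abstract, assuming PyTorch: per-modality features (e.g. face, body, scene) receive learned attention weights before being fused, rather than being added with equal weight. The feature dimension and three-modality setup are assumptions, not the paper's exact network.

```python
# Sketch: weight each modality's feature by a learned attention score, then fuse.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)    # one scalar score per modality feature

    def forward(self, feats):             # feats: (B, M, dim), one row per modality
        w = torch.softmax(self.score(feats), dim=1)   # (B, M, 1) attention weights
        return (w * feats).sum(dim=1)                 # (B, dim) fused feature

face = torch.randn(2, 256)
body = torch.randn(2, 256)
scene = torch.randn(2, 256)
fused = AttentionFusion()(torch.stack([face, body, scene], dim=1))
print(fused.shape)  # torch.Size([2, 256])
```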

Behavior Pattern Prediction Algorithm Based on 2D Pose Estimation and LSTM from Videos (비디오 영상에서 2차원 자세 추정과 LSTM 기반의 행동 패턴 예측 알고리즘)

  • Choi, Jiho;Hwang, Gyutae;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.4 / pp.191-197 / 2022
  • This study proposes an image-based Pose Intention Network (PIN) algorithm for rehabilitation guided by patients' intentions. The purpose of the PIN algorithm is to enable active rehabilitation exercise, which is implemented by estimating the patient's motion and classifying the intention. Existing rehabilitation involves the inconvenience of attaching sensors directly to the patient's skin; in addition, the rehabilitation device moves the patient, which is a passive rehabilitation method. Our algorithm consists of two steps. First, we estimate the user's joint positions with the OpenPose algorithm, which is efficient at estimating 2D human pose in an image. Second, an intention classifier is constructed to classify the motions into three categories, taking a sequence of images including joint information as input. The intention network also learns correlations between joints and changes in joints over a short period of time, which can easily be used to determine the intention of the motion. To implement the proposed algorithm and conduct real-world experiments, we collected our own dataset composed of videos of three classes, and the network is trained on short segment clips of the videos. Experimental results demonstrate that the proposed algorithm is effective at classifying intentions based on a short video clip.
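
A minimal sketch of the second stage described in the abstract, assuming PyTorch: an LSTM classifies a short sequence of 2D joint coordinates (such as OpenPose output) into intention classes. The 25-joint layout, clip length, and three classes are assumptions.

```python
# Sketch: LSTM over a short sequence of 2D joint coordinates -> intention class.
import torch
import torch.nn as nn

class IntentionLSTM(nn.Module):
    def __init__(self, n_joints=25, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_joints * 2, 128, batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, poses):             # poses: (B, T, n_joints, 2) pixel coords
        b, t, j, c = poses.shape
        h, _ = self.lstm(poses.reshape(b, t, j * c))
        return self.fc(h[:, -1])          # classify from the last time step

clip = torch.rand(1, 16, 25, 2)           # 16 frames of 25 (x, y) joints
print(IntentionLSTM()(clip).shape)        # torch.Size([1, 3])
```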

Semi-Supervised Domain Adaptation on LiDAR 3D Object Detection with Self-Training and Knowledge Distillation (자가학습과 지식증류 방법을 활용한 LiDAR 3차원 물체 탐지에서의 준지도 도메인 적응)

  • Jungwan Woo;Jaeyeul Kim;Sunghoon Im
    • The Journal of Korea Robotics Society / v.18 no.3 / pp.346-351 / 2023
  • With the release of numerous open driving datasets, the demand for domain adaptation in perception tasks has increased, particularly when transferring knowledge from rich datasets to novel domains. However, it is difficult to handle the change 1) in the sensor domain caused by heterogeneous LiDAR sensors and 2) in the environmental domain caused by different environmental factors. We overcome these domain differences in the semi-supervised setting with a three-stage model parameter training scheme. First, we pre-train the model on the source dataset with object scaling based on statistics of the object sizes. Then we fine-tune the partially frozen model weights with copy-and-paste augmentation, in which the 3D points inside box labels are copied from one scene and pasted into other scenes. Finally, we use a knowledge distillation method that couples the student network with a moving average of the teacher network, along with a self-training method based on pseudo-labels. Test-time augmentation with varying z values is employed to predict the final results. Our method achieved 3rd place in the ECCV 2022 workshop challenge on 3D Perception for Autonomous Driving.
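
A minimal sketch of the teacher-student update used in the final stage, assuming PyTorch and the common mean-teacher arrangement in which the teacher is an exponential moving average (EMA) of the student and supplies pseudo-labels for self-training; the stand-in model and the threshold-free pseudo-labeling are simplifications, not the authors' 3D detector.

```python
# Sketch: EMA teacher + pseudo-label self-training on an unlabeled target batch.
import copy
import torch
import torch.nn as nn

student = nn.Linear(16, 4)                # stand-in for the 3D detection network
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1 - momentum)

x = torch.randn(8, 16)                    # unlabeled target-domain batch
pseudo = teacher(x).argmax(dim=1)         # pseudo-labels from the teacher
loss = nn.functional.cross_entropy(student(x), pseudo)
loss.backward()
# ... optimizer step on the student ...
ema_update(teacher, student)              # slowly move the teacher toward the student
```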

Image Anomaly Detection Using MLP-Mixer (MLP-Mixer를 이용한 이미지 이상탐지)

  • Hwang, Ju-hyo;Jin, Kyo-hong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.104-107 / 2022
  • The autoencoder deep learning model has an excellent ability to restore even abnormal data to normal data, so it is not appropriate for anomaly detection. In addition, the inpainting method, which restores hidden data after masking part of the data, has the problem that its restoration ability is poor for noisy images. In this paper, we modify and improve the MLP-Mixer model so that the image is masked at a certain ratio and reconstructed by passing compressed information about the masked image to the model. After training the model with normal data from the MVTec AD dataset, reconstruction errors were obtained by feeding in normal and abnormal images, and anomaly detection was performed based on these errors. The performance evaluation shows that the proposed method has superior anomaly detection performance compared to the existing method.
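
A minimal sketch of the masked-reconstruction idea in the abstract, assuming PyTorch: image patches are masked at a fixed ratio, a small MLP-Mixer-style network reconstructs them, and the reconstruction error on the masked patches serves as the anomaly score. Patch size, depth, and the mask ratio are illustrative assumptions, not the paper's model.

```python
# Sketch: mask image patches, reconstruct with a tiny Mixer, score by error.
import torch
import torch.nn as nn

PATCH, SIZE, MASK_RATIO = 16, 64, 0.5
N = (SIZE // PATCH) ** 2                       # number of patches
DIM = PATCH * PATCH * 3                        # flattened patch dimension

class MixerBlock(nn.Module):
    def __init__(self, n_tokens, dim):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mix = nn.Linear(n_tokens, n_tokens)   # mixes across patches
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mix = nn.Linear(dim, dim)           # mixes within a patch

    def forward(self, x):                                # x: (B, N, DIM)
        x = x + self.token_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        return x + self.channel_mix(self.norm2(x))

def to_patches(img):                                     # img: (B, 3, SIZE, SIZE)
    p = img.unfold(2, PATCH, PATCH).unfold(3, PATCH, PATCH)
    return p.permute(0, 2, 3, 1, 4, 5).reshape(img.size(0), N, DIM)

model = nn.Sequential(MixerBlock(N, DIM), MixerBlock(N, DIM))
img = torch.rand(1, 3, SIZE, SIZE)
patches = to_patches(img)
mask = torch.rand(1, N, 1) < MASK_RATIO                  # mask ~50% of patches
recon = model(patches.masked_fill(mask, 0.0))
score = ((recon - patches) ** 2)[mask.expand_as(patches)].mean()
print(float(score))                                      # anomaly score for this image
```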

Detection of Defect Patterns on Wafer Bin Map Using Fully Convolutional Data Description (FCDD) (FCDD 기반 웨이퍼 빈 맵 상의 결함패턴 탐지)

  • Seung-Jun Jang;Suk Joo Bae
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.2 / pp.1-12 / 2023
  • Making semiconductor chips requires a number of complex semiconductor manufacturing processes. Chips that have undergone these processes are subjected to EDS (Electrical Die Sorting) tests to check product quality, and a wafer bin map reflecting information about the normal and defective chips is created. Defective chips on the wafer bin map form various patterns, called defect patterns, which are very important clues for determining the cause of defects in semiconductor process and design. It is therefore desirable to detect defect patterns automatically and quickly in the field, and various methods have been proposed for this. Existing methods have handled simple, complex, and new defect patterns, but they have the disadvantage of being unable to give field engineers evidence for the deep learning classification results; this needs to be supplemented with detailed information on the size, location, and shape of the defects. In this paper, we propose an explainable anomaly detection framework based on FCDD (Fully Convolutional Data Description), trained only with normal data, that provides field engineers with details such as the detection results for abnormal defect patterns and the size and location of the defect patterns on the wafer bin map. The results are analyzed using an open dataset and show the strong performance of the proposed anomaly detection framework.
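
A minimal sketch of FCDD-style scoring as described in the abstract, assuming PyTorch: a fully convolutional network produces a low-resolution map, a pseudo-Huber transform turns it into a non-negative anomaly heatmap, and the upsampled heatmap both scores the wafer bin map and localizes the defect. The tiny backbone, input size, and bilinear upsampling (used here in place of FCDD's fixed Gaussian upsampling) are assumptions, not the paper's network.

```python
# Sketch: fully convolutional scoring with a pseudo-Huber anomaly heatmap.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                     # stand-in fully convolutional net
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, stride=2, padding=1),
)

def anomaly_heatmap(x):
    z = backbone(x)
    a = torch.sqrt(z ** 2 + 1) - 1            # pseudo-Huber: non-negative scores
    return F.interpolate(a, size=x.shape[-2:], mode="bilinear", align_corners=False)

wafer = torch.rand(1, 1, 64, 64)              # a binarized wafer bin map
heat = anomaly_heatmap(wafer)                 # per-pixel defect evidence
print(heat.shape, float(heat.mean()))         # image-level score = mean of heatmap
# Training on normal-only data minimizes heat.mean(); at test time, high-scoring
# regions of `heat` indicate the size and location of the defect pattern.
```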

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.422-424 / 2021
  • In this paper, we present an approach that fuses multiple RGB cameras, used for visual object recognition based on deep learning with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a point cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in the blind spots, which assists the AV in navigating toward its goal. Running object detection on numerous cameras can slow down real-time processing, so the convolutional neural network chosen to address this problem must also be suited to the capacity of the hardware. The localization of the classified detected objects is based on the 3D point cloud environment: the LiDAR point cloud data are first parsed, and the algorithm used is based on a 3D Euclidean clustering method, which gives accurate localization of the objects. We evaluated the method using our own dataset collected with a VLP-16 and multiple cameras, and the results show the completeness of the method and the multi-sensor fusion strategy.
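
A minimal sketch of the point-cloud side of the pipeline, assuming NumPy and scikit-learn: ground points are removed with a simple height threshold and the remaining points are grouped by Euclidean proximity (DBSCAN is used here as a stand-in for the 3D Euclidean clustering step); each cluster centroid localizes an object. The thresholds and the synthetic point cloud are assumptions for illustration, not the authors' data or code.

```python
# Sketch: ground removal + Euclidean-style clustering + centroid localization.
import numpy as np
from sklearn.cluster import DBSCAN

# synthetic VLP-16-like scene: two object blobs plus scattered ground points
rng = np.random.default_rng(0)
obj1 = rng.normal([5.0, 2.0, 0.5], 0.2, size=(200, 3))
obj2 = rng.normal([8.0, -3.0, 0.7], 0.2, size=(200, 3))
ground = np.column_stack([rng.uniform(0, 15, 500), rng.uniform(-8, 8, 500),
                          rng.normal(-0.1, 0.02, 500)])
cloud = np.vstack([obj1, obj2, ground])

points = cloud[cloud[:, 2] > 0.1]                 # crude ground removal by height
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)

for k in set(labels) - {-1}:                      # -1 marks unclustered noise
    centroid = points[labels == k].mean(axis=0)
    print(f"object {k}: centroid at {np.round(centroid, 2)} m")
```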
