• Title/Summary/Keyword: 3D-CNN

Real-time Human Detection under Omni-directional Camera based on CNN with Unified Detection and AGMM for Visual Surveillance

  • Nguyen, Thanh Binh;Nguyen, Van Tuan;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1345-1360 / 2016
  • In this paper, we propose a new real-time human detection method for omni-directional cameras for visual surveillance, based on a CNN with unified detection and an AGMM. Among CNN-based state-of-the-art object detection methods, the YOLO model-based method boasts very fast object detection, but with lower accuracy. The proposed method adapts the unified detecting CNN of the YOLO model so that it is strengthened by additional foreground contextual information obtained from a pre-stage AGMM. The increased computational time incurred by the additional AGMM processing is compensated by the speed-up gained from using two-channel input data, consisting of grey-level image data and foreground context information, instead of three-channel color input data. Various experiments show that the proposed method is more accurate and more robust to environmental changes than the YOLO model-based human detection method, at processing speeds similar to those of the YOLO model-based one. Thus, it can be successfully employed in embedded surveillance applications.
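
As a rough illustration of the two-channel input described above, the sketch below stacks a grey-level frame with an adaptive-GMM foreground mask. OpenCV's MOG2 background subtractor stands in for the paper's AGMM stage; the video path and the assumption that a downstream YOLO-style detector accepts HxWx2 inputs are hypothetical.

```python
import cv2
import numpy as np

# OpenCV's MOG2 subtractor is an adaptive Gaussian mixture background model,
# used here as a stand-in for the paper's pre-stage AGMM.
agmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                          detectShadows=False)

def make_two_channel_input(bgr_frame):
    """Stack grey level and foreground context into an HxWx2 tensor."""
    grey = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    fg_mask = agmm.apply(bgr_frame)           # 0 = background, 255 = foreground
    return np.dstack([grey, fg_mask]).astype(np.float32) / 255.0

cap = cv2.VideoCapture("omni_cam.mp4")        # hypothetical omni-camera video
ok, frame = cap.read()
if ok:
    x = make_two_channel_input(frame)         # would feed the unified detector
    print(x.shape)                            # (H, W, 2)
```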

Combining 2D CNN and Bidirectional LSTM to Consider Spatio-Temporal Features in Crop Classification (작물 분류에서 시공간 특징을 고려하기 위한 2D CNN과 양방향 LSTM의 결합)

  • Kwak, Geun-Ho;Park, Min-Gyu;Park, Chan-Won;Lee, Kyung-Do;Na, Sang-Il;Ahn, Ho-Yong;Park, No-Wook
    • Korean Journal of Remote Sensing / v.35 no.5_1 / pp.681-692 / 2019
  • In this paper, a hybrid deep learning model, called 2D convolution with bidirectional long short-term memory (2DCBLSTM), is presented that can effectively combine both spatial and temporal features for crop classification. In the proposed model, 2D convolution operators are first applied to extract spatial features of crops, and the extracted spatial features are then used as inputs to a bidirectional LSTM model that can effectively process temporal features. To evaluate the classification performance of the proposed model, a case study of crop classification was carried out using multi-temporal unmanned aerial vehicle images acquired in Anbandegi, Korea. For comparison purposes, we applied conventional deep learning models, including a two-dimensional convolutional neural network (CNN) using spatial features, an LSTM using temporal features, and a three-dimensional CNN using spatio-temporal features. An analysis of the impact of hyper-parameters on classification performance showed that using both spatial and temporal features greatly reduced misclassification patterns of crops, and the proposed hybrid model achieved the best classification accuracy compared to the conventional deep learning models that considered either spatial or temporal features alone. Therefore, it is expected that the proposed model can be effectively applied to crop classification owing to its ability to consider the spatio-temporal features of crops.
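
A minimal PyTorch sketch of the hybrid idea, not the authors' exact 2DCBLSTM architecture: 2D convolutions extract a spatial feature vector from each acquisition date, and a bidirectional LSTM models the resulting temporal sequence. Channel counts, patch size, the number of dates, and the number of crop classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNN2DBiLSTM(nn.Module):
    def __init__(self, in_ch=4, n_classes=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # one spatial feature vector per date
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                     # x: (B, T, C, H, W) image stack
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).flatten(1)   # (B*T, 64)
        f = f.view(b, t, -1)                  # back to (B, T, 64)
        out, _ = self.lstm(f)                 # (B, T, 2*hidden)
        return self.head(out[:, -1])          # classify from the last time step

logits = CNN2DBiLSTM()(torch.randn(2, 6, 4, 16, 16))  # 6 acquisition dates
print(logits.shape)                           # torch.Size([2, 5])
```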

1D CNN and Machine Learning Methods for Fall Detection (1D CNN과 기계 학습을 사용한 낙상 검출)

  • Kim, Inkyung;Kim, Daehee;Noh, Song;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.3 / pp.85-90 / 2021
  • In this paper, fall detection using individual wearable devices for older people is considered. To design a low-cost wearable device for reliable fall detection, we present a comprehensive analysis of two representative models. One is a machine learning model composed of a decision tree, a random forest, and a support vector machine (SVM). The other is a deep learning model relying on a one-dimensional (1D) convolutional neural network (CNN). We also evaluate the validity of the considered models with respect to the data segmentation, preprocessing, and feature extraction methods applied to the input data. Simulation results verify the efficacy of the deep learning model, which shows improved overall performance.
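
Below is a minimal sketch of a 1D-CNN fall detector over windowed wearable-sensor data; the 3-axis input and the 150-sample window are assumptions for illustration, since the abstract does not give the exact architecture.

```python
import torch
import torch.nn as nn

class FallDetector1D(nn.Module):
    def __init__(self, n_axes=3, window=150):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_axes, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # one feature vector per window
        )
        self.classifier = nn.Linear(64, 2)    # fall vs. non-fall

    def forward(self, x):                     # x: (B, n_axes, window)
        return self.classifier(self.features(x).squeeze(-1))

x = torch.randn(8, 3, 150)                    # 8 windows of 3-axis samples
print(FallDetector1D()(x).shape)              # torch.Size([8, 2])
```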

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin;Kim, Jieun;Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction. It has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multiview approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a 3D pose estimation system relying on a CNN may lack sufficient computational resources. In addition, when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its own view and sends it to the central server. The central server then synchronizes the received 2D human pose data based on their timestamps. Finally, the central server reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
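
The final server-side step can be illustrated with the standard direct linear transform (DLT): the sketch below triangulates one joint from timestamp-synchronized 2D detections, assuming calibrated 3x4 projection matrices. The paper's synchronization logic and its exact reconstruction beyond "geometric triangulation" are not detailed in the abstract.

```python
import numpy as np

def triangulate_joint(proj_mats, points_2d):
    """DLT triangulation: proj_mats is a list of 3x4 arrays, points_2d the
    matching list of (u, v) pixel coordinates from each edge device."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])          # two linear constraints per view
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                                # null-space solution (homogeneous)
    return X[:3] / X[3]                       # back to Euclidean 3D

# Toy example with two synthetic cameras, the second offset along x:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate_joint([P1, P2], [(0.25, 0.25), (0.0, 0.25)]))  # ~ [1, 1, 4]
```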

Enhancing Alzheimer's Disease Classification using 3D Convolutional Neural Network and Multilayer Perceptron Model with Attention Network

  • Enoch A. Frimpong;Zhiguang Qin;Regina E. Turkson;Bernard M. Cobbinah;Edward Y. Baagyere;Edwin K. Tenagyei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.11 / pp.2924-2944 / 2023
  • Alzheimer's disease (AD) is a neurological condition recognized as one of the primary causes of memory loss. AD currently has no cure, so the development of an efficient, high-precision model for timely detection of the disease is essential; when AD is detected early, treatment is most likely to be successful. The most often utilized indicators for AD identification are the Mini-Mental State Examination (MMSE) and the Clinical Dementia Rating (CDR). However, using these indicators as ground-truth markings can be imprecise for AD detection. Researchers have proposed several computer-aided frameworks, and lately supervised models are mostly used. In this study, we propose a novel 3D Convolutional Neural Network Multilayer Perceptron (3D CNN-MLP) based model for AD classification. The model uses an attention mechanism to automatically extract relevant features from magnetic resonance images (MRI) and generate probability maps, which serve as input for the MLP classifier. Three MRI scan categories were considered: AD dementia patients, mild cognitive impairment (MCI) patients, and normal control (NC) or healthy subjects. The performance of the model is assessed against basic CNN, VGG16, and DenseNet models, as well as other state-of-the-art works; the models were adjusted to fit the 3D images before the comparison was done. Our model exhibited excellent classification performance, with an accuracy of 91.27% for AD vs. NC, 80.85% for MCI vs. NC, and 87.34% for AD vs. MCI.
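
A minimal sketch of the overall shape suggested by the abstract: a 3D CNN over an MRI volume, a simple attention map used for weighted pooling, and an MLP classifier on top. The layer sizes and the particular attention form are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class CNN3DAttnMLP(nn.Module):
    def __init__(self, n_classes=2):          # one binary pairing, e.g. AD vs. NC
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.attn = nn.Conv3d(32, 1, 1)       # 1x1x1 conv -> probability map
        self.mlp = nn.Sequential(
            nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, n_classes),
        )

    def forward(self, x):                     # x: (B, 1, D, H, W) MRI volume
        f = self.conv(x)                      # (B, 32, D', H', W')
        w = torch.sigmoid(self.attn(f))       # attention/probability map
        pooled = (f * w).flatten(2).mean(-1)  # attention-weighted pooling
        return self.mlp(pooled)

print(CNN3DAttnMLP()(torch.randn(2, 1, 32, 64, 64)).shape)  # torch.Size([2, 2])
```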

Deep Neural Network Architecture for Video-based Facial Expression Recognition (동영상 기반 감정인식을 위한 DNN 구조)

  • Lee, Min Kyu;Choi, Jun Ho;Song, Byung Cheol
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.35-37 / 2019
  • Recently, along with the rapid development of deep learning, facial expression recognition technology has made considerable progress. However, because existing facial expression recognition techniques were developed mainly on artificial videos acquired in constrained environments, they may not operate robustly on videos acquired in real, wild environments. To address this problem, we propose a deep neural network architecture consisting of a novel combination of a 3D CNN, a 2D CNN, and an RNN. From a given video, the proposed network can extract feature vectors that capture not only spatial but also temporal information through the two different CNNs. An RNN then performs learning in the time domain and also fuses the feature vectors extracted by the two networks. On AFEW, a representative public wild dataset, the proposed network, in which these components operate in concert, achieves an accuracy of 49.6%, an improvement over conventional techniques.
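
A minimal sketch of the proposed combination, under assumptions: a 2D CNN extracts per-frame spatial features, a 3D CNN extracts spatio-temporal features, and a recurrent layer (a GRU here, as a stand-in for the paper's RNN) fuses the two streams over time. All sizes are illustrative.

```python
import torch
import torch.nn as nn

class TwoStreamCNNRNN(nn.Module):
    def __init__(self, n_classes=7):           # 7 AFEW emotion classes
        super().__init__()
        self.cnn2d = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.cnn3d = nn.Sequential(
            nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)))  # keep the time axis
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, clip):                   # clip: (B, 3, T, H, W)
        b, _, t = clip.shape[:3]
        f2d = self.cnn2d(clip.transpose(1, 2).flatten(0, 1))   # per-frame
        f2d = f2d.flatten(1).view(b, t, -1)    # (B, T, 16)
        f3d = self.cnn3d(clip).squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, T, 16)
        out, _ = self.rnn(torch.cat([f2d, f3d], dim=-1))  # fuse both streams
        return self.head(out[:, -1])

print(TwoStreamCNNRNN()(torch.randn(2, 3, 8, 32, 32)).shape)  # torch.Size([2, 7])
```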

Automatic Volumetric Brain Tumor Segmentation using Convolutional Neural Networks

  • Yavorskyi, Vladyslav;Sull, Sanghoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.432-435 / 2019
  • Convolutional neural networks (CNNs) have recently been gaining popularity in the medical image analysis field because of their image segmentation capabilities. In this paper, we present a CNN that performs automated brain tumor segmentation of sparsely annotated 3D magnetic resonance imaging (MRI) scans. Our CNN is based on the 3D U-Net architecture and includes separate dilated and depth-wise convolutions. It is fully trained on the BraTS 2018 data set, and it produces more accurate results even when compared to the winners of the BraTS 2017 competition, despite having a significantly smaller number of parameters.
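
The two ingredients named in the abstract can be sketched as a single encoder-style block: a 3D depth-wise separable convolution followed by a dilated 3D convolution. Channel counts and block placement are assumptions; this is not the authors' full 3D U-Net.

```python
import torch
import torch.nn as nn

class DilatedDepthwiseBlock3D(nn.Module):
    def __init__(self, ch=32, dilation=2):
        super().__init__()
        # depth-wise: one filter per channel (groups == channels),
        # followed by a 1x1x1 point-wise convolution to mix channels
        self.depthwise = nn.Conv3d(ch, ch, 3, padding=1, groups=ch)
        self.pointwise = nn.Conv3d(ch, ch, 1)
        # dilated convolution enlarges the receptive field without pooling
        self.dilated = nn.Conv3d(ch, ch, 3, padding=dilation, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                     # x: (B, C, D, H, W)
        x = self.act(self.pointwise(self.depthwise(x)))
        return self.act(self.dilated(x))

x = torch.randn(1, 32, 16, 64, 64)            # a feature volume from an MRI scan
print(DilatedDepthwiseBlock3D()(x).shape)     # same shape: (1, 32, 16, 64, 64)
```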

Research on Methods to Increase Recognition Rate of Korean Sign Language using Deep Learning

  • So-Young Kwon;Yong-Hwan Lee
    • Journal of Platform Technology / v.12 no.1 / pp.3-11 / 2024
  • Deaf people who use sign language as their first language sometimes have difficulty communicating because they do not know spoken Korean. Deaf people are also members of society, so we must work to create a society in which everyone can live together. In this paper, we present a method to increase the recognition rate of Korean sign language using a CNN model. When the original image was used as input to the CNN model, the accuracy was 0.96; when only the image region corresponding to skin in the YCbCr color space was used as input, the accuracy was 0.72. This confirms that using the original image itself leads to better results. In other studies, the accuracy of a combined Conv1d and LSTM model was 0.92, and the accuracy of an AlexNet model was 0.92. The CNN model proposed in this paper reaches 0.96 and is shown to be helpful in recognizing Korean sign language.
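
A minimal sketch of the skin-region preprocessing the paper compares against: thresholding the Cr/Cb channels of a YCbCr-converted image. The threshold range used here is a commonly cited one and is an assumption, not the paper's values; the input filename is hypothetical.

```python
import cv2
import numpy as np

def skin_mask_ycbcr(bgr_image):
    """Return the input with non-skin pixels zeroed out."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # (Y, Cr, Cb) lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8) # (Y, Cr, Cb) upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    return cv2.bitwise_and(bgr_image, bgr_image, mask=mask)

frame = cv2.imread("sign_frame.jpg")          # hypothetical input image
if frame is not None:
    print(skin_mask_ycbcr(frame).shape)       # same HxWx3 shape as the input
```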

Prediction of pathological complete response in rectal cancer using 3D tumor PET image (3차원 종양 PET 영상을 이용한 직장암 치료반응 예측)

  • Jinyu Yang;Kangsan Kim;Ui-sup Shin;Sang-Keun Woo
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.63-65 / 2023
  • In this paper, we conducted a study that uses a deep learning network on FDG-PET images to predict pathological complete response after treatment in rectal cancer patients. Rectal cancer is one of the most common malignant tumors, but the probability of a pathological complete cure is very low, so predicting the post-treatment response and selecting an appropriate treatment method are important. In this study, we therefore built deep learning networks by applying convolutional neural network (CNN) models to FDG-PET images and predicted the treatment response of rectal cancer patients. FDG-PET images of 116 rectal cancer patients were acquired. The subjects were patients with tumors 2 cm or larger, and 21 patients achieved a complete response after treatment. The FDG-PET images were evaluated over the whole-body region and over the tumor region separately. The deep learning networks consisted of CNN models taking 2D and 3D image inputs. Using the trained CNN models, we evaluated the performance of predicting a complete response after treatment. In training, the average accuracy and precision were 0.854 and 0.905, respectively, across all CNN models and image regions. On the test set, the 3D CNN model and the network using only the tumor region showed the highest accuracy. This study evaluated the performance of deep learning networks according to the CNN model's input image type and image region, and we expect the deep learning network model to help predict the treatment response of rectal cancer and support appropriate treatment decisions.
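
A rough sketch of the variant that tested best, under assumptions: crop a fixed-size region of interest around the tumor from the PET volume and classify it with a small 3D CNN. The ROI size, tumor center, and layer sizes are all illustrative, and the volume here is a synthetic stand-in.

```python
import torch
import torch.nn as nn

def crop_roi(volume, center, size=32):
    """volume: (D, H, W) tensor; center: (z, y, x) tumor-center voxel."""
    half = size // 2
    z, y, x = center
    return volume[z - half:z + half, y - half:y + half, x - half:x + half]

classifier = nn.Sequential(                    # binary: complete response or not
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(32, 2),
)

pet = torch.randn(96, 128, 128)                # synthetic stand-in PET volume
roi = crop_roi(pet, center=(48, 64, 64))       # (32, 32, 32) tumor region
logits = classifier(roi[None, None])           # add batch and channel dims
print(logits.shape)                            # torch.Size([1, 2])
```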

Development of Gas Type Identification Deep-learning Model through Multimodal Method (멀티모달 방식을 통한 가스 종류 인식 딥러닝 모델 개발)

  • Seo Hee Ahn;Gyeong Yeong Kim;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.525-534 / 2023
  • A gas leak detection system is key to minimizing the loss of life caused by the explosiveness and toxicity of gas. Most leak detection systems rely on gas sensors or thermal imaging cameras. To improve on the performance of single-modal methods, this paper proposes a multimodal approach that combines gas sensor data and thermal camera data in a gas type identification model. MultimodalGasData, a multimodal open dataset, is used to compare the performance of the four models developed through the multimodal approach to gas sensors and thermal cameras with existing models. As a result, the 1D CNN and GasNet models show the highest performance, at 96.3% and 96.4%, respectively. The performance of the early fusion model combining the 1D CNN and GasNet reached 99.3%, 3.3% higher than the existing model. We hope that damage caused by gas leaks can be minimized through the gas leak detection system proposed in this study.
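
A minimal sketch of the fusion idea, under assumptions: a 1D convolutional branch over gas-sensor time series and a small convolutional branch over thermal frames are fused before a shared classification head. The branch architectures stand in for the paper's 1D CNN and GasNet, and the sensor count, window length, and four gas classes are assumptions, since the abstract does not specify the fusion details.

```python
import torch
import torch.nn as nn

class EarlyFusionGasModel(nn.Module):
    def __init__(self, n_sensors=7, n_classes=4):
        super().__init__()
        self.sensor_branch = nn.Sequential(    # 1D CNN over sensor channels
            nn.Conv1d(n_sensors, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.thermal_branch = nn.Sequential(   # small CNN over thermal frames
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, sensors, thermal):
        # concatenate the two modalities' features before classification
        fused = torch.cat([self.sensor_branch(sensors),
                           self.thermal_branch(thermal)], dim=-1)
        return self.head(fused)

model = EarlyFusionGasModel()
out = model(torch.randn(2, 7, 100),            # 7 sensors x 100 time steps
            torch.randn(2, 1, 64, 64))         # one thermal frame per sample
print(out.shape)                               # torch.Size([2, 4])
```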