• Title/Abstract/Keyword: 3D Convolutional Neural Network

Search results: 108 (processing time: 0.03 sec)

ROV Manipulation from Observation and Exploration using Deep Reinforcement Learning

  • Jadhav, Yashashree Rajendra;Moon, Yong Seon
    • Journal of Advanced Research in Ocean Engineering / Vol.3 No.3 / pp.136-148 / 2017
  • The paper presents dual-arm ROV manipulation using deep reinforcement learning. The purpose of this underwater manipulator is to investigate and excavate natural resources in the ocean, find lost aircraft black boxes, and perform other extremely dangerous tasks without endangering humans. This research emphasizes a self-learning approach using Deep Reinforcement Learning (DRL), which allows the ROV to learn a manipulation policy directly from raw image data. Our proposed architecture maps visual inputs (images) to control actions (outputs) and receives a reward after each action, which allows the agent to learn manipulation skills through trial and error. We trained our network in simulation; the raw images and rewards are provided directly by our simple Lua simulator, which models dynamic underwater environmental conditions. The major goal of this research is to provide a smart, self-learning way to achieve manipulation in a highly dynamic underwater environment. The results showed that a dual robotic arm trained for 3-DOF movement successfully achieved a target-reaching task in 2D space while accounting for real environmental factors.
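
A minimal PyTorch sketch of the kind of image-to-action mapping this abstract describes: a small CNN scores discrete joint actions from a raw simulator frame, and an epsilon-greedy rule drives the trial-and-error learning. The network shape, the 84x84 frame, and the six discrete actions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a raw image to Q-values over discrete manipulator actions."""
    def __init__(self, num_actions: int = 6):  # e.g. +/- steps for 3 DOF
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(num_actions))

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

q = QNetwork()
frame = torch.randn(1, 3, 84, 84)       # stand-in for a raw simulator image
epsilon = 0.1
if torch.rand(1).item() < epsilon:
    action = torch.randint(0, 6, (1,))  # explore: random action
else:
    action = q(frame).argmax(dim=1)     # exploit: best current Q-value
print(action)
```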

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu;Yang Gao;Jianyong Wei;Fangzhou Liao;Qianjiang Xiao;Jie Zhang;Weihua Yin;Bin Lu
    • Korean Journal of Radiology / Vol.22 No.2 / pp.168-178 / 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which realizes automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for EA, TL, and FL, respectively. There was a linear relationship between the reference standard and the measurements by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was less than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN was accurate, stable, and efficient. This method is promising for evaluating aortic morphology automatically and alleviating the workload of radiologists in the near future.
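
For reference, the Dice coefficient behind the reported scores (0.958/0.961/0.932) measures the overlap between two binary masks. Below is a generic NumPy sketch, with random volumes standing in for real segmentations.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
a = rng.random((32, 32, 32)) > 0.5   # placeholder 3D masks
b = rng.random((32, 32, 32)) > 0.5
print(dice_coefficient(a, b))        # ~0.5 for independent random masks
```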

위상 홀로그램을 위한 딥러닝 기반의 초고해상도 (Deep Learning-based Super Resolution for Phase-only Holograms)

  • 김우석;박병서;김진겸;오관정;김진웅;김동욱;서영호
    • 방송공학회논문지 / Vol.25 No.6 / pp.935-943 / 2020
  • In this paper, we propose a deep learning-based method for high-resolution display of phase-only holograms. With conventional interpolation, the brightness of the numerical reconstruction decreases and noise and ghost artifacts appear. To address this, we trained a neural network architecture that has shown good performance in the field of single-image super resolution (SISR) on holograms. As a result, we were able to increase the resolution while mitigating the artifacts in the reconstruction. In addition, by adjusting the number of channels to raise performance, we obtained a gain of more than 0.3 dB under the same training.
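
A rough sketch of an SRCNN-style network for this kind of task: the layer sizes, the `channels` width knob (the quantity tuned above), and the PixelShuffle upscaling step are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HoloSR(nn.Module):
    """Toy single-image super-resolution net for a phase-only hologram."""
    def __init__(self, scale: int = 2, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 9, padding=4), nn.ReLU(),
            nn.Conv2d(channels, channels // 2, 5, padding=2), nn.ReLU(),
            nn.Conv2d(channels // 2, scale * scale, 5, padding=2),
            nn.PixelShuffle(scale))  # rearranges channels into a 2x-larger image

    def forward(self, phase: torch.Tensor) -> torch.Tensor:  # (B, 1, H, W)
        return self.net(phase)

out = HoloSR(scale=2, channels=64)(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 128, 128])
```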

얼굴인식을 위한 다중입력 CNN의 기본 구현 (Basic Implementation of Multi Input CNN for Face Recognition)

  • Cheema, Usman;Moon, Seungbin
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2019년도 추계학술발표대회 / pp.1002-1003 / 2019
  • Face recognition is an extensively researched area of computer vision. Visible, infrared, thermal, and 3D modalities have been used to address various challenges of face recognition such as illumination, pose, expression, partial information, and disguise. In this paper, we present a multi-modal approach to face recognition using convolutional neural networks. We use visible and thermal face images as two separate inputs to a multi-input deep learning network for face recognition. The experiments are performed on the IRIS visible and thermal face database, and high face verification rates are achieved.
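
A minimal PyTorch sketch of a two-input network in the spirit of this abstract: visible and thermal images pass through separate convolutional branches whose features are concatenated for identity classification. The branch layout and identity count are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

def branch() -> nn.Sequential:
    """One modality-specific convolutional feature extractor."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten())

class MultiInputCNN(nn.Module):
    def __init__(self, num_identities: int = 30):
        super().__init__()
        self.visible, self.thermal = branch(), branch()
        self.classifier = nn.Linear(2 * 32 * 4 * 4, num_identities)

    def forward(self, vis: torch.Tensor, thr: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.visible(vis), self.thermal(thr)], dim=1)
        return self.classifier(fused)

logits = MultiInputCNN()(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 30])
```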

A Sketch-based 3D Object Retrieval Approach for Augmented Reality Models Using Deep Learning

  • 지명근;전준철
    • 인터넷정보학회논문지 / Vol.21 No.1 / pp.33-43 / 2020
  • Retrieving a 3D model from a 3D database and simultaneously augmenting the retrieved model in an Augmented Reality system has become an issue in conveniently developing plausible AR environments. Sketch-based 3D object retrieval is considered an intuitive way of searching for 3D objects, using human-drawn sketches as queries. In this paper, we propose a novel deep learning-based approach to sketch-based 3D object retrieval for Augmented Reality models. For this work, we introduce a new method that uses a Sketch CNN, a Wasserstein CNN, and a Wasserstein center loss for retrieval. In particular, the Wasserstein center loss is used to learn the center of each object category and to reduce the Wasserstein distance between the center and the features of the same category. The proposed 3D object retrieval and augmentation consists of three major steps. First, the Wasserstein CNN extracts features from 2D images taken from various directions around the 3D object, and obtains features of the 3D data by computing the Wasserstein barycenters of the features of each image. Second, the features of the sketch are extracted using a separate Sketch CNN. Finally, we adopt a sketch-based object matching method to localize the natural marker in the images and register a 3D virtual object in the AR system. Using the detected marker, the retrieved 3D virtual object is augmented in the AR system automatically. Experiments show that the proposed method is efficient for retrieving and augmenting objects.
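
The center-loss component, one learnable center per category with features pulled toward their class center, can be sketched as below. Note the substitution: the paper minimizes a Wasserstein distance between features and centers, whereas this illustrative version uses a plain squared Euclidean distance.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Learns one center per class and penalizes feature-to-center distance."""
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # squared Euclidean distance stands in for the Wasserstein distance here
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

loss_fn = CenterLoss(num_classes=10, feat_dim=128)
feats = torch.randn(4, 128, requires_grad=True)
labels = torch.tensor([0, 3, 3, 9])
print(loss_fn(feats, labels))  # scalar, differentiable w.r.t. feats and centers
```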

Accuracy of artificial intelligence-assisted landmark identification in serial lateral cephalograms of Class III patients who underwent orthodontic treatment and two-jaw orthognathic surgery

  • Hong, Mihee;Kim, Inhwan;Cho, Jin-Hyoung;Kang, Kyung-Hwa;Kim, Minji;Kim, Su-Jung;Kim, Yoon-Ji;Sung, Sang-Jin;Kim, Young Ho;Lim, Sung-Hoon;Kim, Namkug;Baek, Seung-Hak
    • 대한치과교정학회지 / Vol.52 No.4 / pp.287-297 / 2022
  • Objective: To investigate the pattern of accuracy change in artificial intelligence-assisted landmark identification (LI) using a convolutional neural network (CNN) algorithm in serial lateral cephalograms (Lat-cephs) of Class III (C-III) patients who underwent two-jaw orthognathic surgery. Methods: A total of 3,188 Lat-cephs of C-III patients were allocated into the training and validation sets (3,004 Lat-cephs of 751 patients) and the test set (184 Lat-cephs of 46 patients; subdivided into the genioplasty and non-genioplasty groups, n = 23 per group) for LI. Each C-III patient in the test set had four Lat-cephs: initial (T0), pre-surgery (T1, presence of orthodontic brackets [OBs]), post-surgery (T2, presence of OBs and surgical plates and screws [S-PS]), and debonding (T3, presence of S-PS and fixed retainers [FR]). After the mean errors of 20 landmarks between the human gold standard and the CNN model were calculated, statistical analysis was performed. Results: The total mean error was 1.17 mm, without significant difference among the four time-points (T0, 1.20 mm; T1, 1.14 mm; T2, 1.18 mm; T3, 1.15 mm). In the comparison of the two time-point pairs ([T0, T1] vs. [T2, T3]), ANS, A point, and B point showed an increase in error (p < 0.01, 0.05, and 0.01, respectively), while Mx6D and Md6D showed a decrease in error (all p < 0.01). No difference in errors existed at B point, Pogonion, Menton, Md1C, and Md1R between the genioplasty and non-genioplasty groups. Conclusions: The CNN model can be used for LI in serial Lat-cephs despite the presence of OBs, S-PS, FR, genioplasty, and bone remodeling.
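
The evaluation quantity here is the mean radial error between CNN-predicted and gold-standard landmark coordinates; a toy NumPy sketch with placeholder arrays:

```python
import numpy as np

rng = np.random.default_rng(42)
pred = rng.random((20, 2)) * 100        # hypothetical CNN predictions (mm)
gold = pred + rng.normal(size=(20, 2))  # hypothetical human gold standard (mm)

radial_errors = np.linalg.norm(pred - gold, axis=1)  # per-landmark error
print(radial_errors.mean())             # mean error over the 20 landmarks
```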

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems / Vol.29 No.1 / pp.181-193 / 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensuring data quality intractable, and the task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing of one-dimensional (1D) data by enriching the features of the neural network input with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprised of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records from sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of the imbalance observed in anomaly patterns is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled training data is available.
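
A sketch of the described feature engineering, under assumed binning and sampling parameters: a 1D acceleration record becomes a 3-channel image of (i) time history, (ii) spectrogram, and (iii) probability density function.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.ndimage import zoom

def to_three_channel_image(x: np.ndarray, fs: float = 100.0, size: int = 64) -> np.ndarray:
    # (i) time history rasterized into a 2D amplitude-vs-time histogram
    hist2d, _, _ = np.histogram2d(np.arange(len(x)), x, bins=(size, size))
    # (ii) spectrogram magnitude, resampled to size x size
    _, _, sxx = spectrogram(x, fs=fs, nperseg=min(256, len(x)))
    spec = zoom(sxx, (size / sxx.shape[0], size / sxx.shape[1]))
    # (iii) probability density of amplitudes, tiled into a 2D channel
    pdf, _ = np.histogram(x, bins=size, density=True)
    pdf_img = np.tile(pdf, (size, 1))
    # normalize each channel to [0, 1] and stack into (H, W, 3)
    channels = [hist2d, spec, pdf_img]
    channels = [(c - c.min()) / (np.ptp(c) + 1e-12) for c in channels]
    return np.stack(channels, axis=-1)

img = to_three_channel_image(np.random.randn(4096))
print(img.shape)  # (64, 64, 3)
```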

3D 오토인코더 기반의 뇌 자기공명영상에서 다발성 경화증 병변 검출 (Multiple Sclerosis Lesion Detection using 3D Autoencoder in Brain Magnetic Resonance Images)

  • 최원준;박성수;김윤수;감진규
    • 한국멀티미디어학회논문지 / Vol.24 No.8 / pp.979-987 / 2021
  • Multiple Sclerosis (MS) can be diagnosed early by detecting lesions in brain magnetic resonance images (MRI). Unsupervised anomaly detection methods based on autoencoders have recently been proposed for automated detection of MS lesions. However, these autoencoder-based methods were developed only for 2D images (e.g., 2D cross-sectional slices) of MRI, and thus do not utilize the full 3D information of MRI. In this paper, we therefore propose a novel 3D autoencoder-based framework for detecting the lesion volume of MS in MRI. We first define a 3D convolutional neural network (CNN) for full MRI volumes, and build each encoder and decoder layer of the 3D autoencoder from the 3D CNN. We also add a skip connection between the encoder and decoder layers for effective data reconstruction. In the experiments, we compare the 3D autoencoder-based method with 2D autoencoder models, using training data from 80 healthy subjects from the Human Connectome Project (HCP) and test data from 25 MS patients from the Longitudinal Multiple Sclerosis Lesion Segmentation Challenge, and show that the proposed method improves MS lesion prediction performance by up to 15%.
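
A minimal PyTorch sketch of a 3D convolutional autoencoder with one encoder-decoder skip connection, in the spirit of the framework above; the channel counts and volume size are assumptions, and the voxelwise reconstruction error stands in for the lesion-detection score.

```python
import torch
import torch.nn as nn

class Autoencoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)        # (B, 16, D/2, H/2, W/2)
        e2 = self.enc2(e1)       # (B, 32, D/4, H/4, W/4)
        d2 = self.dec2(e2) + e1  # skip connection aids reconstruction
        return self.dec1(d2)

vol = torch.randn(1, 1, 32, 64, 64)  # stand-in for an MRI volume
recon = Autoencoder3D()(vol)
anomaly_map = (recon - vol).abs()    # high error suggests a lesion
print(anomaly_map.shape)             # torch.Size([1, 1, 32, 64, 64])
```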

CNN 기법을 활용한 운전자 시선 사각지대 보조 시스템 설계 및 구현 연구 (A Study on Design and Implementation of Driver's Blind Spot Assist System Using CNN Technique)

  • 임승철;고재승
    • 한국인터넷방송통신학회논문지 / Vol.20 No.2 / pp.149-155 / 2020
  • The Korea Road Traffic Authority uses the Traffic Accident Analysis System (TAAS) to provide statistics on the causes of traffic accidents occurring since 2015. According to TAAS, inattention to the road ahead was the dominant cause of traffic accidents in 2018. In the detailed breakdown of the statistics, failure to drive safely, such as using a smartphone or watching DMB while driving, accounts for 51.2%, failure to maintain a safe distance for 14%, and violation of pedestrian-protection duties for 3.6%, for a combined 68.8%. In this paper, we propose a system that improves Advanced Driver Assistance Systems (ADAS) by employing a Convolutional Neural Network (CNN), a Deep Learning algorithm. The proposed system trains a model that classifies the orientation of the driver's face and gaze using the Conv2D technique widely used in image processing, and recognizes the driving environment by detecting objects around the vehicle with a camera mounted at the front. Then, using the trained gaze-orientation model and the driving-environment data, it classifies hazards into three levels according to the driver's gaze and the driving environment, assisting the driver with the forward view and blind spots.
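
The final decision step, grading hazards into three levels from the gaze class and the detected surroundings, reduces to simple rules; the labels and rules below are hypothetical illustrations, not the paper's actual criteria.

```python
def risk_level(gaze: str, obstacle_ahead: bool, obstacle_in_blind_spot: bool) -> int:
    """Return a hypothetical hazard grade from 1 (low) to 3 (high)."""
    if gaze != "forward" and obstacle_ahead:
        return 3  # eyes off the road with an object directly ahead
    if obstacle_in_blind_spot and gaze != "mirror":
        return 2  # unnoticed object in the blind spot
    return 1      # attentive driving, no imminent hazard

print(risk_level("left", obstacle_ahead=True, obstacle_in_blind_spot=False))  # 3
```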

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.7 / pp.3599-3619 / 2019
  • In human activity recognition systems, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and unable to quantify the contribution of the two (static and motion) components. Our work highlights this problem and proposes a Static-Motion Fused Features Descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors, guaranteeing an equal contribution of all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments were conducted on three well-known datasets, i.e., UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
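
A Cholesky-based fusion of two feature streams can be sketched as follows: for a 2x2 correlation matrix [[1, r], [r, 1]], the Cholesky factor yields a fused vector z = r*static + sqrt(1 - r^2)*motion, imposing a chosen correlation r between the streams before the LSTM stage. The correlation value and feature dimension below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def cholesky_fuse(static_feat: np.ndarray, motion_feat: np.ndarray, r: float = 0.5) -> np.ndarray:
    """Fuse two feature vectors with a target correlation r via the Cholesky factor."""
    corr = np.array([[1.0, r], [r, 1.0]])
    l = np.linalg.cholesky(corr)                    # [[1, 0], [r, sqrt(1 - r^2)]]
    stacked = np.stack([static_feat, motion_feat])  # shape (2, D)
    return (l @ stacked)[1]                         # row 1 mixes both streams

fused = cholesky_fuse(np.random.randn(256), np.random.randn(256))
print(fused.shape)  # (256,) — a sequence of these could feed the LSTM
```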