• Title/Summary/Keyword: 3D-CNN


Fusion System of Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network (SPAD과 CNN의 특성을 반영한 ToF 센서와 스테레오 카메라 융합 시스템)

  • Kim, Dong Yeop;Lee, Jae Min;Jun, Sewoong
    • The Journal of Korea Robotics Society / v.13 no.4 / pp.230-236 / 2018
  • 3D depth perception plays an important role in robotics, and many sensing methods have been proposed for it. The single photon avalanche diode (SPAD) is an attractive photodetector for 3D sensing owing to its sensitivity and accuracy. We have studied applying a SPAD chip in a fusion system of a time-of-flight (ToF) sensor and a stereo camera, with the goal of upsampling the SPAD depth resolution using an RGB stereo camera. Our SPAD ToF sensor currently has only 64 x 32 resolution, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye exist. Rather than treating this as a weakness, we exploit the resolution gap: a convolutional neural network (CNN) is designed to upsample the low-resolution depth map, using the higher-resolution depth data as labels. The upsampled depth from the CNN and the stereo camera depth are then fused with the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for embedded systems.
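
A minimal sketch of the kind of depth-upsampling CNN described above, assuming PyTorch; the layer sizes, the 4x scale factor, and the loss setup are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a depth-upsampling CNN (not the authors' exact network).
import torch
import torch.nn as nn

class DepthUpsampler(nn.Module):
    """Upsamples a low-resolution SPAD depth map to a higher resolution."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # refined high-resolution depth
        )

    def forward(self, low_res_depth):
        return self.net(low_res_depth)

model = DepthUpsampler(scale=4)
spad_depth = torch.rand(1, 1, 32, 64)   # 64 x 32 SPAD ToF depth map (batch, channel, H, W)
upsampled = model(spad_depth)           # 4x-upsampled depth estimate
print(upsampled.shape)                  # torch.Size([1, 1, 128, 256])
```

Training such a network would regress against higher-resolution depth labels (e.g., from a Kinect V2) with an L1/L2 loss, and the CNN output would then be fused with the stereo SGM depth as the abstract describes.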

Synthesizing Image and Automated Annotation Tool for CNN based Under Water Object Detection (강건한 CNN기반 수중 물체 인식을 위한 이미지 합성과 자동화된 Annotation Tool)

  • Jeon, MyungHwan;Lee, Yeongjun;Shin, Young-Sik;Jang, Hyesu;Yeu, Taekyeong;Kim, Ayoung
    • The Journal of Korea Robotics Society / v.14 no.2 / pp.139-149 / 2019
  • In this paper, we present an auto-annotation tool and a synthetic dataset generated from 3D CAD models for deep learning based object detection. Training data for deep learning methods require class, segmentation, bounding-box, contour, and pose annotations of the object, so we propose an automated annotation tool together with synthetic image generation. The resulting synthetic dataset reflects occlusion between objects and is applicable to both underwater and in-air environments. To verify the synthetic dataset, we use Mask R-CNN, a state-of-the-art deep learning object detection model. For the experiments, we built an environment reflecting actual underwater conditions. We show that the object detection model trained on our dataset produces significantly accurate and robust results in the underwater environment. Finally, we verify that our synthetic dataset is suitable for training deep learning models for underwater environments.
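
As an illustration of the kind of automated annotation the entry describes, the hypothetical sketch below derives a bounding box, contour polygon, and area from a rendered binary object mask (OpenCV and NumPy assumed); the COCO-style dictionary layout is an assumption, not the authors' tool.

```python
# Hypothetical sketch: derive detection annotations from a rendered binary object mask.
import cv2
import numpy as np

def annotate_from_mask(mask: np.ndarray, class_id: int) -> dict:
    """Build bounding-box, contour, and area annotations from a binary mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)           # largest blob = the object
    x, y, w, h = cv2.boundingRect(contour)
    return {
        "category_id": class_id,
        "bbox": [int(x), int(y), int(w), int(h)],          # COCO-style [x, y, width, height]
        "segmentation": contour.reshape(-1, 2).tolist(),   # polygon vertices
        "area": float(cv2.contourArea(contour)),
    }

# Toy example: a 20x30 rectangle rendered into a 100x100 mask.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 10:40] = 1
print(annotate_from_mask(mask, class_id=3)["bbox"])        # [10, 40, 30, 20]
```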

Manufacturing an artificial intelligence education kit using Jetson Nano and a 3D printer (Jetson Nano와 3D프린터를 이용한 인공지능 교육용 키트 제작)

  • SeongJu Park;NamHo Kim
    • Smart Media Journal / v.11 no.11 / pp.40-48 / 2022
  • In this paper, an educational kit that can be used in AI education was developed to address the difficulties of teaching AI. To shift learning from theory-centered to practice-oriented, the kit covers object and person detection in computer vision using CNN and OpenCV, user image recognition (Your Own Image Recognition) that learns and recognizes specific objects, user object classification and segmentation with custom datasets, IoT hardware control that engages the learned target, and control of the Jetson Nano AI board's GPIO. The kit was used to develop teaching materials that support effective AI learning.
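
A hedged sketch of the sort of exercise such a kit might include: person detection with OpenCV driving a Jetson Nano GPIO pin. The pin number, the LED wiring, and the HOG detector are illustrative choices, not the kit's actual code.

```python
# Hypothetical Jetson Nano exercise: toggle a GPIO pin when a person is detected.
import cv2
import Jetson.GPIO as GPIO

LED_PIN = 12  # assumed board pin wired to an LED

GPIO.setmode(GPIO.BOARD)
GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # CSI/USB camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        # Light the LED while at least one person is in view.
        GPIO.output(LED_PIN, GPIO.HIGH if len(boxes) > 0 else GPIO.LOW)
finally:
    cap.release()
    GPIO.cleanup()
```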

Fault diagnosis of linear transfer robot using XAI

  • Taekyung Kim;Arum Park
    • International Journal of Internet, Broadcasting and Communication / v.16 no.3 / pp.121-138 / 2024
  • Artificial intelligence is crucial to manufacturing productivity, and understanding the causes of production disruptions, especially in linear feed robot systems, is essential for efficient operations. These mechanical tools, which perform linear movements within a system, are prone to damage and degradation from repetitive motion, especially in the LM guide. We examine how explainable artificial intelligence (XAI) can diagnose linear rail clearance and ball screw clearance anomalies in a wafer-transfer linear robot. XAI helps diagnose problems and explain anomalies, enriching management and operational strategies. By interpreting the reasons for anomaly detection through visualizations such as Class Activation Maps (CAMs) using Grad-CAM, FG-CAM, and FFT-CAM, and by comparing a 1D-CNN with a 2D-CNN, we illustrate the potential of XAI to enhance diagnostic accuracy. Experiments using datasets from accelerometer and torque sensors validate the high accuracy of the proposed method in binary and ternary classifications. This study exemplifies how XAI can elucidate deep learning models trained on industrial signals, offering a practical approach to understanding and applying AI in maintaining the integrity of critical components such as the LM guides of linear feed robots.
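
A minimal Grad-CAM sketch for a 1D-CNN over sensor signals, assuming PyTorch; the network, channel counts, and window length are assumptions used only to illustrate how a class activation map is obtained, not the paper's models.

```python
# Hypothetical 1D-CNN + Grad-CAM sketch for sensor-signal fault diagnosis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Signal1DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)                 # (B, 32, L/4) feature maps
        logits = self.head(fmap.mean(dim=-1))   # global average pooling + classifier
        return logits, fmap

def grad_cam_1d(model, signal, target_class):
    logits, fmap = model(signal)
    fmap.retain_grad()                          # keep gradients of the feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=-1, keepdim=True)   # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1))        # (B, L/4) activation map
    return cam / (cam.max() + 1e-8)

model = Signal1DCNN()
signal = torch.randn(1, 1, 1024)                # one accelerometer/torque window
cam = grad_cam_1d(model, signal, target_class=1)
print(cam.shape)                                # torch.Size([1, 256])
```

The normalized CAM can then be plotted over the raw signal to show which time segments drove the anomaly decision, which is the interpretability step the abstract emphasizes.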

A Study on Shape Warpage Defect Detection Model of Scaffold Using Deep Learning Based CNN (CNN 기반 딥러닝을 이용한 인공지지체의 외형 변형 불량 검출 모델에 관한 연구)

  • Lee, Song-Yeon;Huh, Yong Jeong
    • Journal of the Semiconductor & Display Technology / v.20 no.1 / pp.99-103 / 2021
  • Detecting warpage defects in scaffolds is very important in biosensor production, because warped scaffolds cause problems in cell culture and no dedicated equipment currently exists for detecting them. In this paper, we built a model for shape warpage detection using a deep learning based CNN. We examined the scaffold geometry widely used in cell culture and produced scaffold specimens of the kind used in biosensor fabrication. The specimens were then photographed to collect the image data needed for model building. We built the scaffold warpage defect detection model using DenseNet among CNN models and evaluated its accuracy with mAP, a standard measure of detection accuracy for deep learning models. The evaluation confirmed that the warpage defect detection accuracy for the scaffold was over 95%.
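
A brief sketch of fine-tuning a DenseNet for a two-class (normal vs. warped) image task, assuming PyTorch and torchvision; since the paper reports detection performance with mAP, this plain classification head is only an illustrative simplification, not the authors' model.

```python
# Hypothetical DenseNet fine-tuning sketch for normal / warped scaffold images.
import torch
import torch.nn as nn
from torchvision import models

# weights=None keeps the sketch offline; ImageNet weights could be loaded instead.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)   # normal vs. warped

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```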

Indirect Inspection Signal Diagnosis of Buried Pipe Coating Flaws Using Deep Learning Algorithm (딥러닝 알고리즘을 이용한 매설 배관 피복 결함의 간접 검사 신호 진단에 관한 연구)

  • Sang Jin Cho;Young-Jin Oh;Soo Young Shin
    • Transactions of the Korean Society of Pressure Vessels and Piping / v.19 no.2 / pp.93-101 / 2023
  • In this study, a deep learning algorithm was used to diagnose electric potential signals obtained through CIPS and DCVG, indirect inspection methods used to confirm the soundness of buried pipes. The algorithm consists of a CNN (Convolutional Neural Network) model for diagnosing the potential signal and Grad-CAM (Gradient-weighted Class Activation Mapping) for showing the predicted flaw location. The CNN model classifies input data as normal or abnormal according to the presence or absence of a flaw in the buried pipe, and for abnormal data Grad-CAM generates a heat map that visualizes the predicted flaw region. CIPS/DCVG signals and the piping layout obtained from a 3D finite element model were used as training data for the CNN. The trained CNN classified normal/abnormal data with 93% accuracy, and Grad-CAM predicted flaw locations with an average error of 2 m. These results confirm that the electric potential signals of buried pipes can be diagnosed with a CNN-based deep learning algorithm.
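
To illustrate the localization step described above, this hedged sketch converts a 1D class-activation map into a predicted flaw position along the pipe; the survey length, map resolution, and peak-picking rule are made-up choices, and the CAM itself is assumed to come from a model like the paper's CNN.

```python
# Hypothetical post-processing: turn a 1D class-activation map into a flaw location in metres.
import numpy as np

def cam_to_flaw_position(cam: np.ndarray, pipe_length_m: float) -> float:
    """Return the along-pipe position (m) of the strongest activation in the CAM."""
    cam = np.clip(cam, 0.0, None)                  # Grad-CAM is non-negative after ReLU
    idx = int(np.argmax(cam))                      # index of the peak activation
    return (idx + 0.5) / len(cam) * pipe_length_m  # map bin centre to distance along the pipe

# Toy example: a 100-bin CAM over a 500 m buried pipe, peaking near bin 40.
cam = np.exp(-0.5 * ((np.arange(100) - 40) / 3.0) ** 2)
print(round(cam_to_flaw_position(cam, pipe_length_m=500.0), 1))   # 202.5
```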

Multiple Sclerosis Lesion Detection using 3D Autoencoder in Brain Magnetic Resonance Images (3D 오토인코더 기반의 뇌 자기공명영상에서 다발성 경화증 병변 검출)

  • Choi, Wonjune;Park, Seongsu;Kim, Yunsoo;Gahm, Jin Kyu
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.979-987 / 2021
  • Multiple sclerosis (MS) can be diagnosed early by detecting lesions in brain magnetic resonance images (MRI). Unsupervised anomaly detection methods based on autoencoders have recently been proposed for automated detection of MS lesions. However, these autoencoder-based methods were developed only for 2D images (e.g., 2D cross-sectional slices) of MRI and therefore do not utilize the full 3D information. In this paper, we propose a novel 3D autoencoder-based framework for detecting MS lesion volumes in MRI. We first define a 3D convolutional neural network (CNN) for full MRI volumes and build each encoder and decoder layer of the 3D autoencoder from 3D CNN layers. We also add a skip connection between the encoder and decoder layers for effective data reconstruction. In the experiments, we compare the 3D autoencoder-based method with 2D autoencoder models using training data from 80 healthy subjects of the Human Connectome Project (HCP) and test data from 25 MS patients of the Longitudinal Multiple Sclerosis Lesion Segmentation Challenge, and show that the proposed method improves MS lesion prediction performance by up to 15%.
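
A compact sketch of a 3D convolutional autoencoder with one encoder-decoder skip connection, assuming PyTorch; the channel counts, depth, and volume size are illustrative and not the paper's configuration.

```python
# Hypothetical 3D convolutional autoencoder with a single skip connection.
import torch
import torch.nn as nn

class AE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU())   # 1/2 res
        self.enc2 = nn.Sequential(nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())  # 1/4 res
        self.dec2 = nn.Sequential(nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU())
        self.dec1 = nn.ConvTranspose3d(16, 1, 2, stride=2)   # 16 = 8 decoder + 8 skip channels

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d2 = self.dec2(e2)
        return self.dec1(torch.cat([d2, e1], dim=1))         # skip connection from enc1

model = AE3D()
volume = torch.randn(1, 1, 64, 64, 64)   # one MRI sub-volume
recon = model(volume)
print(recon.shape)                       # torch.Size([1, 1, 64, 64, 64])
```

In the usual autoencoder-based anomaly detection recipe, the model is trained on healthy volumes only, and lesion candidates are flagged wherever the reconstruction error of a test volume is large.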

Artificial Intelligence Image Segmentation for Extracting Construction Formwork Elements (거푸집 부재 인식을 위한 인공지능 이미지 분할)

  • Ayesha Munira, Chowdhury;Moon, Sung-Woo
    • Journal of KIBIM / v.12 no.1 / pp.1-9 / 2022
  • Concrete formwork is a crucial component of any construction project. Artificial intelligence offers great potential to automate formwork design by offering various design options under different criteria depending on the requirements. This study applied image segmentation to 2D formwork drawings to extract sheathing, strut, and pipe support formwork elements. The proposed artificial intelligence model can recognize, classify, and extract formwork elements from 2D CAD drawing images, and training and test results confirmed that the model performs very well at formwork element recognition, with average precision and recall above 80%. Recognition systems for each formwork element can later be implemented to generate 3D BIM models.
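
A small sketch of how per-class precision and recall, the metrics reported above, can be computed from predicted and ground-truth segmentation label maps; NumPy assumed, and the class labels are illustrative.

```python
# Hypothetical evaluation helper: per-class precision and recall for segmentation masks.
import numpy as np

def precision_recall(pred: np.ndarray, truth: np.ndarray, class_id: int):
    """Pixel-level precision and recall for one class (e.g., sheathing, strut, pipe support)."""
    tp = np.sum((pred == class_id) & (truth == class_id))
    fp = np.sum((pred == class_id) & (truth != class_id))
    fn = np.sum((pred != class_id) & (truth == class_id))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy 4x4 label maps: 0 = background, 1 = sheathing.
truth = np.array([[1, 1, 0, 0]] * 4)
pred  = np.array([[1, 0, 0, 0]] * 4)
print(precision_recall(pred, truth, class_id=1))   # (1.0, 0.5)
```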

Morphological Analysis of Hydraulically Stimulated Fractures by Deep-Learning Segmentation Method (딥러닝 기반 균열 추출 기법을 통한 수압 파쇄 균열 형상 분석)

  • Park, Jimin;Kim, Kwang Yeom ;Yun, Tae Sup
    • Journal of the Korean Geotechnical Society / v.39 no.8 / pp.17-28 / 2023
  • Laboratory-scale hydraulic fracturing experiments were conducted on granite specimens at various viscosities and injection rates of the fracturing fluid. A series of cross-sectional computed tomography (CT) images of the fractured specimens was obtained via three-dimensional X-ray CT imaging. Pixel-level fracture segmentation of the CT images was conducted using a convolutional neural network (CNN)-based Nested U-Net model. Compared with traditional image processing methods, the CNN-based model showed better performance in extracting thin and complex fractures. The extracted fractures were reconstructed in three dimensions and morphologically analyzed in terms of fracture volume, aperture, tortuosity, and surface roughness. The fracture volume and aperture increased with the viscosity of the fracturing fluid, while the tortuosity and roughness of the fracture surface decreased. The findings also confirmed the anisotropy of the fracture surface tortuosity and roughness. In this study, a CNN-based model was used to perform accurate fracture segmentation, and quantitative analysis of hydraulically stimulated fractures was conducted successfully.
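
As an illustration of the morphological quantities analyzed above, the hedged sketch below computes fracture volume and a simple aperture estimate from a binary 3D segmentation; the voxel size is a made-up number, and the aperture definition (twice the mean distance-transform value inside the fracture) is one common choice, not necessarily the paper's.

```python
# Hypothetical morphology sketch: volume and aperture from a binary 3D fracture mask.
import numpy as np
from scipy import ndimage

def fracture_volume(mask: np.ndarray, voxel_size_mm: float) -> float:
    """Fracture volume in mm^3 = number of fracture voxels x voxel volume."""
    return float(mask.sum()) * voxel_size_mm ** 3

def mean_aperture(mask: np.ndarray, voxel_size_mm: float) -> float:
    """Rough aperture estimate: 2 x mean distance from fracture voxels to the rock matrix."""
    dist = ndimage.distance_transform_edt(mask) * voxel_size_mm
    return float(2.0 * dist[mask > 0].mean())

# Toy volume: a planar fracture 3 voxels thick inside a 50^3 block, 0.05 mm voxels.
mask = np.zeros((50, 50, 50), dtype=np.uint8)
mask[:, :, 24:27] = 1
print(fracture_volume(mask, 0.05))   # 0.9375 mm^3
print(mean_aperture(mask, 0.05))     # ~0.13 mm for the 3-voxel-thick slab
```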

Automatic Classification of Bridge Component based on Deep Learning (딥러닝 기반 교량 구성요소 자동 분류)

  • Lee, Jae Hyuk;Park, Jeong Jun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.2 / pp.239-245 / 2020
  • Recently, BIM (Building Information Modeling) has been widely utilized in the construction industry. However, most structures constructed in the past do not have BIM. For structures without BIM, applying SfM (Structure from Motion) techniques to 2D images obtained from cameras allows 3D point cloud data to be generated and BIM to be established. However, since the generated point cloud data contain no semantic information, the structural elements they represent must be classified manually. Therefore, in this study, deep learning was applied to automate the classification of structural components. For the deep learning network, Inception-ResNet-v2, a CNN (Convolutional Neural Network) architecture, was used, and bridge components were learned through transfer learning. When components were classified using data collected to verify the developed system, the bridge components were classified with an accuracy of 96.13%.
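
A hedged transfer-learning sketch in the spirit of the approach above, assuming PyTorch with the timm port of Inception-ResNet-v2; the number of bridge component classes and the frozen-backbone strategy are illustrative assumptions, not the paper's training setup.

```python
# Hypothetical transfer-learning sketch: Inception-ResNet-v2 backbone, new component head.
import timm
import torch
import torch.nn as nn

NUM_CLASSES = 5  # assumed number of bridge component classes

model = timm.create_model("inception_resnet_v2", pretrained=True, num_classes=NUM_CLASSES)

# Freeze the pretrained backbone; train only the new classification head.
for param in model.parameters():
    param.requires_grad = False
for param in model.get_classifier().parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(model.get_classifier().parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative step on a dummy batch (this model's default input size is 299x299).
images = torch.randn(4, 3, 299, 299)
labels = torch.randint(0, NUM_CLASSES, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```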