• Title/Summary/Keyword: Three-dimensional Convolutional Neural Network

33 search results

A Study on Autonomous Cavitation Image Recognition Using Deep Learning Technology

  • Ji, Bahan; Ahn, Byoung-Kwon
    • Journal of the Society of Naval Architects of Korea / v.58 no.2 / pp.105-111 / 2021
  • The main source of underwater radiated noise from ships is cavitation generated by the propeller blades. Above the Cavitation Inception Speed (CIS), the noise level rises sharply at all frequencies. The CIS is conventionally determined from naked-eye observations during model tests, so the accuracy and consistency of the resulting values have become a practical concern. This study aimed to develop a technology that automatically recognizes cavitation images using a deep learning technique based on a Convolutional Neural Network (CNN). Model tests on a three-dimensional hydrofoil were conducted in a cavitation tunnel, and tip vortex cavitation was closely observed with a high-speed camera to obtain the analysis data. The results show that the technique can quantitatively evaluate not only the CIS but also the amount and rate of cavitation from the recorded images.
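
A minimal PyTorch sketch of the kind of per-frame CNN classifier the abstract describes follows; the 1-channel 128x128 input, layer sizes, and two-class output are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical sketch: a small CNN that scores high-speed camera frames as
# cavitating vs. non-cavitating. All shapes and layer sizes are assumed.
import torch
import torch.nn as nn

class CavitationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # cavitation / no cavitation

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

frames = torch.randn(8, 1, 128, 128)  # stand-in batch of grayscale frames
print(CavitationCNN()(frames).shape)  # torch.Size([8, 2])
```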

3D Virtual Reality Game with Deep Learning-based Hand Gesture Recognition

  • Lee, Byeong-Hee; Oh, Dong-Han; Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.24 no.5 / pp.41-48 / 2018
  • The most natural way to increase immersion and provide free interaction in a virtual environment is to offer a gesture interface based on the user's hands. However, most studies on hand gesture recognition require specialized sensors or equipment, or show low recognition rates. This paper proposes a three-dimensional DenseNet Convolutional Neural Network that recognizes hand gestures with no sensor or equipment other than an RGB camera for gesture input, and introduces a virtual reality game built on it. Experiments on 4 static and 6 dynamic hand gestures showed that the network can serve as a real-time user interface for virtual reality games, with an average recognition rate of 94.2% at 50 ms. The results of this research can be applied to hand gesture interfaces not only for games but also for education, medicine, and shopping.
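
For readers unfamiliar with 3D convolutions, the sketch below shows the basic spatio-temporal building block such a network stacks; the clip shape and channel counts are assumptions, and this is not the paper's DenseNet configuration.

```python
# Hypothetical sketch: one 3D-convolution block applied to a short RGB clip,
# the kind of spatio-temporal unit a 3D DenseNet is built from.
import torch
import torch.nn as nn

# (batch, channels, frames, height, width): a 16-frame RGB clip (assumed size)
clip = torch.randn(1, 3, 16, 112, 112)

block = nn.Sequential(
    nn.Conv3d(3, 32, kernel_size=3, padding=1),  # filters span space and time
    nn.BatchNorm3d(32),
    nn.ReLU(),
    nn.MaxPool3d(kernel_size=(1, 2, 2)),         # pool spatially, keep time
)
print(block(clip).shape)  # torch.Size([1, 32, 16, 56, 56])
```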

Detection of Frame Deletion Using Convolutional Neural Network

  • Hong, Jin Hyung; Yang, Yoonmo; Oh, Byung Tae
    • Journal of Broadcast Engineering / v.23 no.6 / pp.886-895 / 2018
  • In this paper, we introduce a technique for detecting video forgery by exploiting the regularities that arise during video compression. The proposed method uses the hierarchical regularity lost through double compression and frame deletion. To extract such irregularities, the depth information of the CU and TU, the basic coding units of HEVC, is used. To improve performance, we build depth maps of the CU and TU from local information and then group them in GoP units to create the input data. A general three-dimensional convolutional neural network then decides whether the video has been double-compressed and forged. Experimental results show that the proposed method detects forgery more effectively than existing machine learning algorithms.
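
A rough sketch of the input construction described above, with per-frame CU/TU depth maps stacked over a GoP and fed to a small 3D CNN; the map resolution, GoP length, and network depth are assumptions.

```python
# Hypothetical sketch: CU and TU depth maps as two channels per frame,
# stacked over one GoP, classified as original vs. double-compressed/forged.
import torch
import torch.nn as nn

gop_len, h, w = 8, 64, 64                 # assumed GoP length and map size
gop_input = torch.randn(1, 2, gop_len, h, w)  # channels: CU depth, TU depth

net = nn.Sequential(
    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(16, 2),                     # binary forgery decision
)
print(net(gop_input).shape)               # torch.Size([1, 2])
```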

Acceleration signal-based haptic texture recognition according to characteristics of object surface material using Conformer model

  • Hyoung-Gook Kim; Dong-Ki Jeong; Jin-Young Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.3 / pp.214-220 / 2023
  • In this paper, we propose a method to improve texture recognition from haptic acceleration signals, which represent the texture characteristics of object surface materials, using a Conformer model that combines the advantages of a convolutional neural network and a transformer. In the proposed method, the three-axis acceleration signals generated by impact sound and vibration while a person touches the surface of the material with a tool such as a stylus are combined into one-dimensional acceleration data, and a logarithmic Mel-spectrogram is extracted from the haptic acceleration signal in the same way as from an audio signal. The Conformer is then applied to the extracted logarithmic Mel-spectrogram to learn the salient local and global frequency features for recognizing the texture of various materials. Experiments on the Lehrstuhl für Medientechnik (LMT) haptic texture dataset, which consists of 60 materials, showed that the proposed method recognizes the texture of object surface materials more effectively than existing methods.
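
A minimal sketch of the described front end, merging three-axis acceleration into one signal and extracting a log-Mel spectrogram with librosa; the sampling rate, merge rule, and spectrogram parameters are assumptions rather than the paper's settings.

```python
# Hypothetical sketch: 3-axis haptic acceleration -> 1-D signal -> log-Mel
# spectrogram, the representation fed to the Conformer.
import numpy as np
import librosa

sr = 10_000                               # assumed sensor sampling rate
acc = np.random.randn(3, sr)              # 1 s of synthetic 3-axis acceleration
mono = np.sqrt((acc ** 2).sum(axis=0))    # simple energy-preserving merge (assumed)

mel = librosa.feature.melspectrogram(y=mono, sr=sr, n_fft=512,
                                     hop_length=128, n_mels=64)
log_mel = librosa.power_to_db(mel)        # log-Mel input for the model
print(log_mel.shape)                      # (64, frames)
```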

People Counting and Coordinate Estimation Using Multiple IR-UWB Radars

  • Tae-Yun Kim; Se-Won Yoon; In-Oh Choi; Joo-Ho Jung; Sang-Hong Park
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.39-46 / 2024
  • In this paper, we propose an efficient method for estimating the number of people and their locations using multiple IR-UWB radar sensors. With three IR-UWB radar sensors in an indoor space, the measured target signals are first processed with rejection methods to remove clutter. Then, to remove the remaining clutter and determine the presence of a human, a time-frequency image representing the micro-Doppler is computed and classified by a convolutional neural network. Finally, the system counts the human targets and estimates each position in a two-dimensional space. In experiments on measured data, the system estimated the location and number of individuals with a high accuracy of approximately 88.68%.
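
The sketch below illustrates the micro-Doppler step on a synthetic signal: a short-time Fourier transform turns a radar return into a time-frequency image suitable for CNN classification; the sampling rate and the toy target model are assumptions.

```python
# Hypothetical sketch: STFT of a synthetic slow-time return with sinusoidal
# phase modulation, producing a micro-Doppler time-frequency image.
import numpy as np
from scipy import signal

fs = 1_000                                # assumed slow-time sampling rate
t = np.arange(0, 2, 1 / fs)
x = np.exp(1j * 2 * np.pi * 40 * np.cos(2 * np.pi * 0.8 * t) * t)  # toy target

f, tt, Zxx = signal.stft(x, fs=fs, nperseg=128, return_onesided=False)
tf_image = 20 * np.log10(np.abs(Zxx) + 1e-12)  # dB-scaled CNN input
print(tf_image.shape)                      # (freq bins, time frames)
```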

A Three-Dimensional Deep Convolutional Neural Network for Automatic Segmentation and Diameter Measurement of Type B Aortic Dissection

  • Yitong Yu; Yang Gao; Jianyong Wei; Fangzhou Liao; Qianjiang Xiao; Jie Zhang; Weihua Yin; Bin Lu
    • Korean Journal of Radiology / v.22 no.2 / pp.168-178 / 2021
  • Objective: To provide an automatic method for segmentation and diameter measurement of type B aortic dissection (TBAD). Materials and Methods: Aortic computed tomography angiographic images from 139 patients with TBAD were consecutively collected. We implemented a deep learning method based on a three-dimensional (3D) deep convolutional neural network (CNN), which performs automatic segmentation and measurement of the entire aorta (EA), true lumen (TL), and false lumen (FL). The accuracy, stability, and measurement time were compared between the deep learning and manual methods. The intra- and inter-observer reproducibility of the manual method was also evaluated. Results: The mean Dice coefficient scores were 0.958, 0.961, and 0.932 for the EA, TL, and FL, respectively. There was a linear relationship between the reference standard and the measurements by the manual and deep learning methods (r = 0.964 and 0.991, respectively). The average measurement error of the deep learning method was smaller than that of the manual method (EA, 1.64% vs. 4.13%; TL, 2.46% vs. 11.67%; FL, 2.50% vs. 8.02%). Bland-Altman plots revealed that the deviations of the diameters between the deep learning method and the reference standard were -0.042 mm (-3.412 to 3.330 mm), -0.376 mm (-3.328 to 2.577 mm), and 0.026 mm (-3.040 to 3.092 mm) for the EA, TL, and FL, respectively. For the manual method, the corresponding deviations were -0.166 mm (-1.419 to 1.086 mm), -0.050 mm (-0.970 to 1.070 mm), and -0.085 mm (-1.010 to 0.084 mm). Intra- and inter-observer differences were found in measurements with the manual method, but not with the deep learning method. The measurement time with the deep learning method was markedly shorter than with the manual method (21.7 ± 1.1 vs. 82.5 ± 16.1 minutes, p < 0.001). Conclusion: Segmentation and diameter measurement of TBAD based on the 3D deep CNN was both accurate and stable. This method is promising for automatically evaluating aortic morphology and alleviating the workload of radiologists in the near future.
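
For reference, the Dice coefficient reported above can be computed for binary segmentation masks as in the following minimal sketch; the toy masks stand in for predicted and ground-truth aorta segmentations.

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary volumes.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Overlap score in [0, 1] between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Toy 3D masks standing in for a predicted and a reference segmentation
a = np.zeros((4, 8, 8), dtype=bool); a[:, 2:6, 2:6] = True
b = np.zeros_like(a);                b[:, 3:7, 2:6] = True
print(round(dice(a, b), 3))  # 0.75
```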

Machine Learning of GCM Atmospheric Variables for Spatial Downscaling of Precipitation Data

  • Sunmin Kim; Masaharu Shibata; Yasuto Tachikawa
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.26-26 / 2023
  • General circulation models (GCMs) are widely used in hydrological prediction; however, their coarse grids make them unsuitable for regional analysis, so a downscaling method is required before they can be used in hydrological assessment. Convolutional neural network (CNN)-based downscaling has been proposed in recent years as one such method. The aim of this study is to emulate the process of dynamical downscaling with CNNs, using GCM output as the input and RCM output as the label data. Prediction accuracy is compared across different input datasets and model structures. Several input datasets containing key atmospheric variables such as precipitation, temperature, and humidity were tested in two formats, one two-dimensional and the other three-dimensional, and the hyperparameters of the model structure were varied to check their effect on accuracy. The experiments on the input datasets showed that accuracy was higher for input without precipitation than with it. The experiments on the model structure showed that substantially increasing the number of convolutions yielded higher accuracy, whereas increasing the size of the receptive field did not necessarily do so. Although further investigation is required for practical application, this paper contributes to the development of an efficient CNN-based downscaling method.
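
A hypothetical sketch of such a CNN downscaler, mapping coarse GCM fields (here, without precipitation, per the finding above) to a finer precipitation grid; the grid sizes, variable choices, and upsampling factor are assumptions.

```python
# Hypothetical sketch: coarse atmospheric fields in, fine-grid (RCM-like)
# precipitation out, via convolutions plus bilinear upsampling.
import torch
import torch.nn as nn

coarse = torch.randn(1, 2, 16, 16)   # e.g., temperature and humidity on a GCM grid

downscaler = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(32, 1, 3, padding=1),  # fine-grid precipitation estimate
)
print(downscaler(coarse).shape)      # torch.Size([1, 1, 64, 64])
```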


The Edge Computing System for the Detection of Water Usage Activities with Sound Classification

  • Seung-Ho Hyun; Youngjoon Chee
    • Journal of Biomedical Engineering Research / v.44 no.2 / pp.147-156 / 2023
  • Efforts have been made to employ smart home sensors to monitor the indoor activities of elderly residents living alone, to assess the feasibility of a safe and healthy lifestyle. However, the bathroom remains a blind spot. In this study, we developed and evaluated a new edge computing device that automatically detects water usage activities in the bathroom and records the activity log on a cloud server. Three kinds of sound generated during water usage, namely flushing, showering, and washing at the wash basin, were recorded and cut into one-second scenes. These sound clips were then converted into two-dimensional images using the Mel-spectrogram. Sound data augmentation techniques, some applied in the time domain and others in the frequency domain, were adopted to obtain a better learning effect from a small dataset, increasing the amount of training data 30-fold. A deep learning model called a CRNN, which combines a Convolutional Neural Network and a Recurrent Neural Network, was employed. The edge device was implemented on a Raspberry Pi 4 equipped with a condenser microphone and amplifier to run the pre-trained model in real time. Detected activities were recorded as text-based activity logs on a Firebase server. Performance was evaluated in two bathrooms for the three water usage activities, yielding accuracies of 96.1% and 88.2% and F1 scores of 96.1% and 87.8%, respectively. Most classification errors involved the sound of washing. In conclusion, this system shows potential for long-term recording of the activities of elderly single residents as a lifelog on a cloud server.
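
An illustrative CRNN in the spirit of the description above, with 2D convolutions over a Mel-spectrogram followed by a GRU over time; the layer sizes and input shape are assumptions, not the deployed model.

```python
# Hypothetical sketch: conv layers summarize the spectrogram, a GRU reads the
# resulting time steps, and the last state classifies the 1-second scene.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_mels=64, n_classes=3):  # flush / shower / basin
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(32 * (n_mels // 4), 64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, n_mels, time)
        z = self.conv(x)                      # (batch, 32, n_mels/4, time/4)
        z = z.permute(0, 3, 1, 2).flatten(2)  # (batch, time/4, features)
        out, _ = self.gru(z)
        return self.fc(out[:, -1])

spec = torch.randn(4, 1, 64, 100)  # batch of 1 s Mel-spectrograms (assumed size)
print(CRNN()(spec).shape)          # torch.Size([4, 3])
```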

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and the low computational power available. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains, gathering a large-scale dataset to train a ConvNet is difficult and laborious, and even given such a dataset, training a ConvNet from scratch demands expensive resources and is time-consuming. Both obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to gain more information about the image; the concatenated representation has 9192 (4096 + 4096 + 1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) comparing multiple ConvNet layer representations against a single-layer representation, with PCA used for feature selection and dimension reduction. The experiments demonstrated the importance of feature selection for the multiple-layer representation. Moreover, the proposed approach achieved 75.6% accuracy versus 73.9% for the FC7 layer on Caltech-256, 73.1% versus 69.2% for the FC8 layer on VOC07, and 52.2% versus 48.7% for the FC7 layer on SUN397. The proposed approach also achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
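
The three-step pipeline can be sketched with torchvision's AlexNet and scikit-learn's PCA as below; untrained weights are used here so the snippet runs offline, whereas the paper uses an ImageNet-pretrained network, and the PCA component count is an arbitrary choice.

```python
# Sketch of the pipeline: extract FC6/FC7/FC8 activations via forward hooks,
# concatenate them (4096 + 4096 + 1000 = 9192 dims), then reduce with PCA.
import torch
from torchvision.models import alexnet
from sklearn.decomposition import PCA

model = alexnet(weights=None).eval()  # swap in pretrained weights in practice
feats = {}
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    model.classifier[idx].register_forward_hook(
        lambda m, i, o, k=name: feats.__setitem__(k, o.detach()))

images = torch.randn(32, 3, 224, 224)  # stand-in for a target-task dataset
with torch.no_grad():
    model(images)
multi = torch.cat([feats["fc6"], feats["fc7"], feats["fc8"]], dim=1)
print(multi.shape)                     # torch.Size([32, 9192])

salient = PCA(n_components=16).fit_transform(multi.numpy())
print(salient.shape)                   # (32, 16); train a classifier on this
```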

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min; Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.35-42 / 2021
  • In this paper, we propose a deep learning-based person re-identification method using a three-dimensional RGB-Depth Xtion2 camera that considers joint coordinates and dynamic features (velocity and acceleration). The main idea of the proposed methodology is to extract gait data such as joint coordinates and dynamic features easily with an RGB-D camera and to identify gait patterns automatically with a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured with the F1 score, and the influence of the dynamic features was measured by comparing against a classifier model (JC) that did not consider them. As a result, the proposed classifier that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
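
A hypothetical sketch of a 1D ConvNet over per-frame gait feature vectors (joint coordinates plus velocity and acceleration); the feature count, sequence length, and number of identities are assumptions.

```python
# Hypothetical sketch: 1D convolutions slide over time across per-frame gait
# feature vectors and the pooled result is mapped to a person identity.
import torch
import torch.nn as nn

n_features, seq_len, n_people = 45, 60, 5  # assumed: 15 joints x 3 feature groups

net = nn.Sequential(
    nn.Conv1d(n_features, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, n_people),               # identity of the walker
)

gait = torch.randn(2, n_features, seq_len)  # batch of gait sequences
print(net(gait).shape)                      # torch.Size([2, 5])
```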