• Title/Summary/Keyword: convolution layer


A Comparative Study of the CNN Model for AD Diagnosis

  • Vyshnavi Ramineni;Goo-Rak Kwon
    • Smart Media Journal
    • /
    • v.12 no.7
    • /
    • pp.52-58
    • /
    • 2023
  • Alzheimer's disease (AD) is a type of dementia whose symptoms can be managed more effectively when the disease is detected at an early stage. Recently, many computer-aided diagnosis systems based on magnetic resonance imaging (MRI) have shown good results in AD classification. In this study, MRI images are fed into the FreeSurfer software to extract features, and classification of T1-weighted images using convolutional neural network (CNN) models is proposed. Subcortical and cortical features of 190 subjects were taken from ADNI. To reduce model complexity, single-layer versions of Res-Net, VGG, and Alex Net were considered. Multi-class classification was used to distinguish four stages: CN, EMCI, LMCI, and AD. In the experiments, VGG achieved the best accuracy at 96%, while Res-Net, GoogLeNet, and Alex Net achieved 91%, 93%, and 89%, respectively.
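
A minimal sketch of a single-convolution-layer classifier for the four stages (CN, EMCI, LMCI, AD) is shown below. The layer widths, the 128x128 input resolution, and the SingleConvClassifier name are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SingleConvClassifier(nn.Module):
    """Minimal single-convolution-layer classifier for the four stages
    CN / EMCI / LMCI / AD. Input size and channel counts are illustrative
    assumptions, not values reported in the paper."""
    def __init__(self, in_channels: int = 1, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # makes the head independent of input resolution
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# e.g. a batch of 8 single-channel 128x128 feature maps derived from T1-weighted MRI
logits = SingleConvClassifier()(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 4])
```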

Video Quality Assessment based on Deep Neural Network

  • Zhiming Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.8
    • /
    • pp.2053-2067
    • /
    • 2023
  • This paper proposes two video quality assessment methods based on deep neural networks. (i) The first method uses IQF-CNN (a convolutional neural network based on image quality features) to build an image quality assessment model. The method is tested on the LIVE image database, and the experiments show that it is effective, so it is extended to video quality assessment: first a quality score is predicted for every frame of the video, then the relationships between frames are analyzed with a hysteresis function and different window functions to improve the accuracy of the video quality assessment. (ii) The second method is based on a convolutional neural network (CNN) and a gated recurrent unit (GRU) network. First, the spatial features of the video frames are extracted with the CNN; then the temporal features are extracted with the GRU. Finally, the extracted spatial and temporal features are processed by a fully connected layer to obtain the video quality score. Both proposed methods are verified on video databases and compared with other methods.
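
A minimal sketch of the second method's overall structure (per-frame CNN features, a GRU over time, and a fully connected head producing a score) is shown below. The layer sizes, frame resolution, and the CnnGruVqa name are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class CnnGruVqa(nn.Module):
    """Sketch of a CNN + GRU video quality model: per-frame spatial features
    from a small CNN, temporal modelling with a GRU, and a fully connected
    head that regresses a single quality score. Sizes are illustrative."""
    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, 3, H, W)
        b, t, c, h, w = video.shape
        frame_feats = self.cnn(video.view(b * t, c, h, w)).view(b, t, -1)
        _, last_hidden = self.gru(frame_feats)          # (1, batch, hidden_dim)
        return self.head(last_hidden[-1]).squeeze(-1)   # one score per video

scores = CnnGruVqa()(torch.randn(2, 8, 3, 64, 64))
print(scores.shape)  # torch.Size([2])
```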

SKU-Net: Improved U-Net using Selective Kernel Convolution for Retinal Vessel Segmentation

  • Hwang, Dong-Hwan;Moon, Gwi-Seong;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.29-37
    • /
    • 2021
  • In this paper, we propose a deep learning-based retinal vessel segmentation model for handling the multi-scale information of fundus images by integrating selective kernel convolution into a U-Net-based convolutional neural network. The proposed model extracts and segments retinal blood vessels of various shapes and sizes, which is important information for diagnosing eye-related diseases from fundus images. The model consists of standard convolutions and selective kernel convolutions: whereas a standard convolutional layer extracts information through a single kernel size, a selective kernel convolution extracts information from branches with various kernel sizes and combines them adaptively through split-attention. Evaluated on the DRIVE and CHASE DB1 datasets, the proposed model achieved F1 scores of 82.91% and 81.71%, respectively, confirming that it is effective in segmenting retinal blood vessels.
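
The split-attention fusion over branches with different kernel sizes can be sketched as follows. The two branch kernel sizes (3 and 5), the reduction ratio, and the SelectiveKernelConv name are assumptions for illustration, not the exact configuration used in SKU-Net.

```python
import torch
import torch.nn as nn

class SelectiveKernelConv(nn.Module):
    """Sketch of a selective kernel convolution: parallel branches with
    different kernel sizes, fused by attention weights computed from the
    summed branch responses (split-attention). Sizes are illustrative."""
    def __init__(self, channels: int = 64, reduction: int = 4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u3, u5 = self.branch3(x), self.branch5(x)
        attn = self.fc(u3 + u5)                         # (B, 2C, 1, 1)
        b = attn.shape[0]
        attn = attn.view(b, 2, -1, 1, 1).softmax(dim=1)
        return attn[:, 0] * u3 + attn[:, 1] * u5        # adaptive kernel selection

out = SelectiveKernelConv()(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```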

Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.686-689
    • /
    • 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep learning-based hand pose estimation system that recognizes the 3D poses of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and depth-cutting algorithms. Second, a Convolutional Neural Network (CNN) classifier is used to distinguish the right hand from the left hand; it consists of three convolution layers and two fully connected layers and takes the extracted depth images as input. Third, a trained CNN regressor composed of multiple convolutional, pooling, and fully connected layers estimates the hand joints from the extracted depth images of the left and right hands. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. In tests, the CNN classifier distinguished the right and left hands with 96.9% accuracy, and the CNN regressor estimated the 3D hand joints with an average error of 8.48 mm. The proposed hand pose estimation system can be used in various applications, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
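
The left/right-hand CNN classifier described above (three convolution layers and two fully connected layers over extracted depth images) might look roughly like the sketch below; the 96x96 crop size, channel widths, and the HandSideClassifier name are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class HandSideClassifier(nn.Module):
    """Sketch of the left/right-hand CNN classifier: three convolution
    layers followed by two fully connected layers, taking a cropped
    single-channel depth image. Resolution and widths are illustrative."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(depth))

# assumed 96x96 depth crops; 96 -> 48 -> 24 -> 12 after three poolings
logits = HandSideClassifier()(torch.randn(4, 1, 96, 96))
print(logits.shape)  # torch.Size([4, 2])
```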

Implementation of Speech Recognition and Flight Controller Based on Deep Learning for Control to Primary Control Surface of Aircraft

  • Hur, Hwa-La;Kim, Tae-Sun;Park, Myeong-Chul
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.9
    • /
    • pp.57-64
    • /
    • 2021
  • In this paper, we propose a device that can control the primary control surfaces of an aircraft by recognizing speech commands. The command set consists of 19 commands, and a learning model is constructed from a total of 2,500 samples. The model is a CNN built with the Sequential API of TensorFlow-based Keras, and features are extracted from the training speech files with the MFCC algorithm. The model consists of two convolution layers for feature recognition and a fully connected part with two dense layers for classification. Accuracy on the validation dataset was 98.4%, and evaluation on the test dataset showed an accuracy of 97.6%. In addition, normal operation was confirmed by designing and implementing a Raspberry Pi-based control device. In the future, this can be used as a virtual training environment for voice-controlled automatic flight and aviation maintenance.
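
A minimal Keras Sequential sketch matching the described structure (two convolution layers for feature recognition and two dense layers classifying the 19 commands from MFCC features) is shown below; the MFCC input shape, filter counts, and optimizer settings are assumptions rather than the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_COMMANDS = 19          # number of speech commands in the paper
MFCC_SHAPE = (40, 100, 1)  # assumed (coefficients, frames, channels); not from the paper

# Sequential CNN: two convolution layers for feature recognition,
# then two dense layers for classification, as described in the abstract.
model = models.Sequential([
    layers.Input(shape=MFCC_SHAPE),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_COMMANDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```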

Design of a Deep Neural Network Model for Image Caption Generation (이미지 캡션 생성을 위한 심층 신경망 모델의 설계)

  • Kim, Dongha;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.4
    • /
    • pp.203-210
    • /
    • 2017
  • In this paper, we propose an effective neural network model for image caption generation and model transfer. This model is a kind of multi-modal recurrent neural network. It consists of several distinct layers: a convolutional neural network layer for extracting visual information from images, an embedding layer for converting each word into a low-dimensional feature, a recurrent neural network layer for learning caption sentence structure, and a multi-modal layer for combining visual and language information. The recurrent neural network layer is built from LSTM units, which are well known to be effective for learning and transferring sequence patterns. Moreover, the model has a distinctive structure in which the output of the convolutional neural network layer is linked not only to the initial state of the recurrent neural network layer but also to the input of the multi-modal layer, so that the visual information extracted from the image is used at each recurrent step to generate the corresponding textual caption. Through comparative experiments on open datasets such as Flickr8k, Flickr30k, and MSCOCO, we demonstrated that the proposed multi-modal recurrent neural network model achieves high performance in terms of caption accuracy and model transfer.
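
A minimal sketch of the model's distinctive wiring, in which the CNN image features both initialize the LSTM state and feed the multi-modal layer at every step, is given below; the vocabulary size, feature dimensions, and the MultimodalCaptioner name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultimodalCaptioner(nn.Module):
    """Sketch of the multi-modal recurrent caption model: CNN image features
    initialise the LSTM state AND are combined with the LSTM output in a
    multi-modal layer at every step. Dimensions are illustrative."""
    def __init__(self, vocab_size: int = 10000, embed_dim: int = 256,
                 img_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(img_dim, hidden_dim)   # image features -> initial LSTM state
        self.init_c = nn.Linear(img_dim, hidden_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.multimodal = nn.Linear(hidden_dim + img_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats: torch.Tensor, captions: torch.Tensor) -> torch.Tensor:
        # img_feats: (B, img_dim) from a pretrained CNN; captions: (B, T) word indices
        h0 = self.init_h(img_feats).unsqueeze(0)
        c0 = self.init_c(img_feats).unsqueeze(0)
        rnn_out, _ = self.lstm(self.embed(captions), (h0, c0))        # (B, T, hidden)
        img_rep = img_feats.unsqueeze(1).expand(-1, rnn_out.size(1), -1)
        fused = torch.tanh(self.multimodal(torch.cat([rnn_out, img_rep], dim=-1)))
        return self.out(fused)                                        # (B, T, vocab)

logits = MultimodalCaptioner()(torch.randn(2, 512), torch.randint(0, 10000, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 10000])
```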

A study on discharge estimation for the event using a deep learning algorithm (딥러닝 알고리즘을 이용한 강우 발생시의 유량 추정에 관한 연구)

  • Song, Chul Min
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.246-246
    • /
    • 2021
  • The purpose of this study is to estimate discharge during rainfall events. Departing from the model development methodology of previous studies, discharge was estimated using a convolutional neural network, one of the deep learning algorithms, together with hydrological images. Because convolutional neural networks were developed mainly to solve classification problems, they are not directly suited to simulating an unbounded continuous variable such as discharge; the fully connected layer of the network was therefore modified so that continuous variables could be simulated. Most convolutional neural networks take RGB (red, green, blue) photographs as input and predict what the photograph shows, but in this study using ordinary RGB photographs to predict discharge could undermine the premise of an empirical model (the relationship between independent and dependent variables). Therefore, hydrological images, defined for an arbitrary watershed as a set of grids with dimensionless hydrological attributes in two-dimensional space, were used as the input data. The network structure was designed as five repetitions of a convolution layer and a pooling layer, followed by a flatten layer, two dense layers, one batch normalization layer, and one final dense layer. The activation function of the last dense layer was set to a linear function, as commonly used in regression models, instead of the softmax or sigmoid functions used in classification models, and the rectified linear unit (ReLU) was used as the activation function of the other layers. MSE and MAE were used to assess model training and validation, and NSE and RMSE were used for model evaluation. As a result, the training MSE decreased from 11,629.8 m3/s to 118.6 m3/s and the training MAE from 25.4 m3/s to 4.7 m3/s, while the validation MSE decreased from 1,997.9 m3/s to 527.9 m3/s and the validation MAE from 21.5 m3/s to 9.4 m3/s. The NSE for model evaluation was 0.7 and the RMSE was 27.0 m3/s, so the model was judged to be moderate. Based on the methodology presented here, extending the CNN structure and improving the hydrological images or developing new images could further improve predictive performance, and applications to remote sensing or to global or regional real-time discharge simulation using satellite imagery are expected to be possible.
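
A minimal Keras sketch of the regression-style CNN described above (five convolution + pooling blocks, a flatten layer, two dense layers, batch normalization, and a final dense layer with a linear activation) is shown below; the hydrological-image resolution, filter counts, and optimizer are assumptions, not the study's actual settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMAGE_SHAPE = (128, 128, 1)  # assumed hydrological-image grid size; not from the paper

# Five Conv+Pooling blocks, then Flatten, two Dense layers, BatchNormalization,
# and a final Dense layer with a linear activation so the network outputs a
# continuous discharge value instead of class probabilities.
model = models.Sequential([layers.Input(shape=IMAGE_SHAPE)])
filters = 16
for _ in range(5):
    model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
    model.add(layers.MaxPooling2D((2, 2)))
    filters *= 2
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dense(64, activation="relu"))
model.add(layers.BatchNormalization())
model.add(layers.Dense(1, activation="linear"))  # discharge as a regression target

model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.summary()
```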


Evaluation of Shear Deformation Energy and Fatigue Performance of Single-layer and Multi-layer Metal Bellows (단층 및 다층 금속 벨로우즈의 전단 변형 에너지 및 피로성능 평가)

  • Kyeong-Seok Lee;Jin-Seok Yu;Young-Soo Jeong
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.1
    • /
    • pp.39-45
    • /
    • 2024
  • The seismic safety of expansion joints for piping systems has been underscored by water pipe ruptures and leaks resulting from the Gyeongju and Pohang earthquakes. Metal bellows are applied in piping systems to prevent damage from earthquakes and road subsidence in soft ground. Designed with a series of corrugated segments called convolutions, metal bellows provide the flexibility to accommodate displacements. Several studies have examined variations in convolution shapes and layers according to the performance to be evaluated; nonetheless, research on the seismic performance of complex bellows with multiple corrugation heights is limited. In this study, monotonic loading, cyclic loading, and fatigue tests were conducted to evaluate the shear performance, under seismic conditions, of metal bellows with variable convolution heights. Single- and triple-layer bellows were considered in the experiments. The results reveal that triple-layer bellows exhibit larger maximum deformation and longer fatigue life than single-layer bellows. However, the high stiffness of triple-layer bellows in resisting internal pressure poses certain disadvantages: the convolutions are less flexible at lower displacements and, under certain conditions, experience leakage at a rate related to the variable height of the convolutions. At lower deformation rates, the fatigue life is rated higher as the number of layers increases; at higher deformation rates, it converges to a similar fatigue life.

Real-time Segmentation of Black Ice Region in Infrared Road Images

  • Li, Yu-Jie;Kang, Sun-Kyoung;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.2
    • /
    • pp.33-42
    • /
    • 2022
  • In this paper, we propose a deep learning model based on multi-scale dilated convolution feature fusion for segmenting black ice regions in road images so that black ice warnings can be sent to drivers in real time. In the proposed network, convolutions with different dilation rates are connected in parallel in the encoder blocks, different dilation rates are used on feature maps of different resolutions, and multi-layer feature information is fused together. The multi-scale dilated convolution feature fusion improves performance by diversifying and expanding the receptive field of the network, preserving detailed spatial information, and enhancing the effectiveness of dilated convolutions. The performance of the proposed network gradually improved as the number of dilated convolution branches increased. The mIoU of the proposed method is 96.46%, higher than existing networks such as U-Net, FCN, PSPNet, ENet, and LinkNet, and the number of parameters is 1,858K, about six times smaller than the existing LinkNet model. On a Jetson Nano, the proposed method ran at 3.63 FPS, enabling segmentation of black ice regions in real time.
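
A minimal sketch of the core idea, parallel dilated convolutions with different rates fused back into one feature map, is shown below; the dilation rates (1, 2, 4), channel counts, and the DilatedFusionBlock name are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DilatedFusionBlock(nn.Module):
    """Sketch of multi-scale dilated convolution feature fusion: parallel
    3x3 convolutions with different dilation rates widen the receptive
    field, and their outputs are fused by a 1x1 convolution. Rates and
    channel counts are illustrative."""
    def __init__(self, in_ch: int = 64, out_ch: int = 64, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([self.act(b(x)) for b in self.branches], dim=1)
        return self.act(self.fuse(multi_scale))

out = DilatedFusionBlock()(torch.randn(1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```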

A Study on the Accuracy Improvement of Movie Recommender System Using Word2Vec and Ensemble Convolutional Neural Networks (Word2Vec과 앙상블 합성곱 신경망을 활용한 영화추천 시스템의 정확도 개선에 관한 연구)

  • Kang, Boo-Sik
    • Journal of Digital Convergence
    • /
    • v.17 no.1
    • /
    • pp.123-130
    • /
    • 2019
  • One of the most commonly used web recommendation techniques is collaborative filtering, and many studies have suggested ways to improve its accuracy. This study proposes a movie recommendation method using Word2Vec and ensemble convolutional neural networks. First, user sentences and movie sentences are constructed from the user, movie, and rating information. These sentences are fed into Word2Vec to obtain user vectors and movie vectors. The user vectors are input to a user convolution model and the movie vectors to a movie convolution model, and the two convolution models are linked to a fully connected neural network whose output layer produces predictions of user movie ratings. Experimental results showed that the accuracy of the proposed technique was improved compared with conventional collaborative filtering and with a technique using Word2Vec and deep neural networks proposed in a similar study.
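
A minimal sketch of the two-branch idea (user and movie Word2Vec vector sequences processed by separate convolutional branches and joined by a fully connected network predicting a rating) is shown below; the embedding dimension, sequence length, and the TwoBranchRatingModel name are assumptions, and the Word2Vec training step itself is omitted.

```python
import torch
import torch.nn as nn

class TwoBranchRatingModel(nn.Module):
    """Sketch of the ensemble idea: a user branch and a movie branch apply
    1D convolutions over Word2Vec vector sequences, and a fully connected
    network joins both branches to predict a rating. Dimensions and the
    sequence length are illustrative assumptions."""
    def __init__(self, embed_dim: int = 100, seq_len: int = 20):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(embed_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveMaxPool1d(1), nn.Flatten(),
            )
        self.user_branch, self.movie_branch = branch(), branch()
        self.head = nn.Sequential(
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),          # predicted rating
        )

    def forward(self, user_seq: torch.Tensor, movie_seq: torch.Tensor) -> torch.Tensor:
        # inputs: (batch, embed_dim, seq_len) Word2Vec vectors for user/movie "sentences"
        u = self.user_branch(user_seq)
        m = self.movie_branch(movie_seq)
        return self.head(torch.cat([u, m], dim=1)).squeeze(-1)

rating = TwoBranchRatingModel()(torch.randn(4, 100, 20), torch.randn(4, 100, 20))
print(rating.shape)  # torch.Size([4])
```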