• Title/Summary/Keyword: Deep Fully Convolutional Network

A Triple Residual Multiscale Fully Convolutional Network Model for Multimodal Infant Brain MRI Segmentation

  • Chen, Yunjie;Qin, Yuhang;Jin, Zilong;Fan, Zhiyong;Cai, Mao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.962-975 / 2020
  • The accurate segmentation of infant brain MR images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is very important for the early study of brain growth patterns and of morphological changes in neurodevelopmental disorders. Because of the inherent myelination and maturation process, the WM and GM of infants between 6 and 9 months of age exhibit similar intensity levels in both T1-weighted (T1w) and T2-weighted (T2w) MR images during this isointense phase, which makes brain tissue segmentation very difficult. We propose a deep network architecture based on U-Net, called the Triple Residual Multiscale Fully Convolutional Network (TRMFCN), whose structure has three input gates and inserts two new blocks: a residual multiscale block and a concatenate block. With this model we addressed these difficulties and completed the segmentation task. Our model outperforms U-Net and several cutting-edge U-Net-based deep networks in the evaluation of WM, GM, and CSF. The data set used for training and testing comes from the iSeg-2017 challenge (http://iseg2017.web.unc.edu).
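
The residual multiscale idea mentioned in the abstract can be illustrated with a short sketch. This is not the authors' TRMFCN, only a minimal PyTorch block under assumed channel counts: parallel convolutions with different receptive fields are concatenated, projected back with a 1x1 convolution, and added to the input through a residual shortcut.

```python
import torch
import torch.nn as nn

class ResidualMultiscaleBlock(nn.Module):
    """Illustrative residual multiscale block (assumed design, not the paper's exact block)."""
    def __init__(self, channels):
        super().__init__()
        # Parallel branches with 3x3, 5x5, and 7x7 receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        # 1x1 projection back to the input width, then a residual shortcut.
        self.project = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multiscale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.norm(self.project(multiscale)) + x)

# Example: features from a multimodal (T1w + T2w) input after an initial convolution.
features = torch.randn(1, 32, 64, 64)
print(ResidualMultiscaleBlock(32)(features).shape)  # torch.Size([1, 32, 64, 64])
```

Such a block keeps the spatial size and channel count unchanged, so it can be dropped into a U-Net-style encoder or decoder stage.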

Performance Evaluation of Deep Learning Model according to the Ratio of Cultivation Area in Training Data (훈련자료 내 재배지역의 비율에 따른 딥러닝 모델의 성능 평가)

  • Seong, Seonkyeong;Choi, Jaewan
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1007-1014 / 2022
  • The Compact Advanced Satellite 500 (CAS500) can be used for various purposes, including vegetation, forestry, and agriculture, and is expected to make it possible to acquire satellite images of various areas quickly. In order to use satellite images acquired through CAS500 in agriculture, a satellite-image-based technique for extracting crop-cultivated areas must be developed. In particular, as deep learning research has become active in recent years, work on developing deep learning models for extracting crop cultivation areas and on generating training data is necessary. This study classified the onion and garlic cultivation areas in Hapcheon-gun using PlanetScope satellite images and farm maps. In particular, for effective model training, model performance was analyzed according to the proportion of crop-cultivated area in the training data. For the deep learning model used in the experiment, the Fully Convolutional Densely Connected Convolutional Network (FC-DenseNet) was reconstructed to fit the purpose of crop cultivation area classification and used. As a result of the experiment, the ratio of crop-cultivated area in the training data affected the performance of the deep learning model.
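
Because the key experimental variable here is the proportion of crop-cultivated pixels in the training data, a small sketch of how that ratio might be measured per training patch is given below; the label encoding (1 = cultivated, 0 = background) and the ratio band are assumptions, not the study's actual preprocessing.

```python
import numpy as np

def cultivated_ratio(label_patch: np.ndarray) -> float:
    """Fraction of pixels labelled as cultivated area in one patch (assumed: 1 = cultivated)."""
    return float((label_patch == 1).mean())

def select_patches(label_patches, min_ratio, max_ratio):
    """Indices of patches whose cultivated-area ratio falls inside a target band."""
    return [i for i, patch in enumerate(label_patches)
            if min_ratio <= cultivated_ratio(patch) <= max_ratio]

# Example with random 256x256 label patches standing in for farm-map rasters.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 2, size=(256, 256)) for _ in range(10)]
print(select_patches(patches, 0.4, 0.6))
```

Assembling training sets with different ratio bands in this way would allow the model's performance to be compared across cultivation-area proportions, as the abstract describes.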

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents / v.14 no.1 / pp.1-6 / 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is useful in a wide range of vision-based applications: text data extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an active and important research topic in computer vision and document analysis. Previous methods perform poorly because they produce numerous false-positive and false-negative regions. In this paper, a fully-convolutional-network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values used as inputs and binary ground truth used as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods. It could expedite future research on deep-learning-based text detection.
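
A minimal sketch of the setup described above follows: an FCN (not the paper's exact architecture) maps an RGB image to a per-pixel text/non-text heat map and is trained against a binary ground-truth mask with a pixel-wise binary cross-entropy loss.

```python
import torch
import torch.nn as nn

class TextFCN(nn.Module):
    """Toy FCN: downsample with convolutions, upsample back, predict one logit per pixel."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TextFCN()
image = torch.randn(2, 3, 128, 128)                     # pixel values as inputs
mask = torch.randint(0, 2, (2, 1, 128, 128)).float()    # binary ground truth as labels
loss = nn.BCEWithLogitsLoss()(model(image), mask)
loss.backward()
```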

Fast and Robust Face Detection based on CNN in Wild Environment (CNN 기반의 와일드 환경에 강인한 고속 얼굴 검출 방법)

  • Song, Junam;Kim, Hyung-Il;Ro, Yong Man
    • Journal of Korea Multimedia Society / v.19 no.8 / pp.1310-1319 / 2016
  • Face detection is the first step in a wide range of face applications. However, detecting faces in the wild is still a challenging task due to the wide range of variations in pose, scale, and occlusion. Recently, many deep learning methods have been proposed for face detection, but further improvement is required for in-the-wild conditions. Another important issue in face detection is computational complexity: current state-of-the-art deep learning methods require a large number of patches to deal with varying scales and arbitrary image sizes, which increases the computational cost. To reduce the complexity while achieving better detection accuracy, we propose a fully convolutional network-based face detector that can take arbitrarily-sized input and produce feature maps (heat maps) matching the input image size. To deal with various face scales, a multi-scale network architecture that utilizes the facial components when learning the feature maps is proposed. On top of it, we design a multi-task learning technique to improve detection performance. Extensive experiments were conducted on the FDDB dataset. The experimental results show that the proposed method outperforms state-of-the-art methods, with an accuracy of 82.33% at 517 false alarms, while improving computational efficiency significantly.
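
The fully convolutional part of this design can be sketched as follows, with a generic backbone standing in for the authors' multi-scale, facial-component-aware network: because a 1x1 convolution replaces a fixed-size fully connected scorer, the same network accepts an arbitrarily-sized image and emits a face-likelihood heat map, and running it over an image pyramid is one simple way to cover different face scales.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy fully convolutional scorer: no fully connected layer, so input size is free.
detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 1),   # 1x1 convolution produces one face logit per location
)

image = torch.randn(1, 3, 480, 640)            # arbitrary input size
for scale in (1.0, 0.75, 0.5):                 # simple image pyramid for face scales
    resized = F.interpolate(image, scale_factor=scale, mode="bilinear",
                            align_corners=False)
    heat_map = torch.sigmoid(detector(resized))
    print(scale, heat_map.shape)               # heat-map size follows the input size
```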

U-Net Based Plant Image Segmentation (U-Net 기반의 식물 영상 분할 기법)

  • Lee, Sang-Ho;Kim, Tae-Hyeon;Kim, Jong-Ok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.81-83 / 2021
  • In this paper, we propose a method to segment a plant from a plant image using U-Net, an end-to-end fully convolutional network that is mainly used for image segmentation. When training the network, we used binary images acquired by manually segmenting the plant from the background. Experimental results show that the U-Net-based segmentation network can extract a plant from a digital image accurately.
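
A minimal two-level U-Net sketch (far smaller than the original U-Net, with assumed channel widths) illustrates the encoder-decoder structure with a skip connection that the abstract relies on; the output is a single-channel plant/background logit map that would be trained against the binary masks mentioned above.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net: encode, downsample, decode, with one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = conv_block(3, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)     # 32 channels = upsampled 16 + skip 16
        self.head = nn.Conv2d(16, 1, 1)   # one plant/background logit per pixel

    def forward(self, x):
        skip = self.enc(x)
        mid = self.mid(self.down(skip))
        up = self.up(mid)
        return self.head(self.dec(torch.cat([up, skip], dim=1)))

mask_logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```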

Design of new CNN structure with internal FC layer (내부 FC층을 갖는 새로운 CNN 구조의 설계)

  • Park, Hee-mun;Park, Sung-chan;Hwang, Kwang-bok;Choi, Young-kiu;Park, Jin-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.466-467 / 2018
  • Recently, artificial intelligence has been applied to various fields such as image recognition, speech recognition, and natural language processing, and interest in deep learning technology is increasing. The Convolutional Neural Network (CNN), one of the most representative deep learning algorithms, has strong advantages in image recognition and classification and is widely used in many fields. In this paper, we propose a new network structure that transforms the general CNN structure. A typical CNN structure consists of a convolution layer, a ReLU layer, and a pooling layer. Here, we construct a new network by adding a fully connected layer inside this general CNN structure. The modification is intended to improve learning and accuracy on the convolved feature maps by incorporating the generalization ability that is an advantage of fully connected neural networks.
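
One plausible reading of the proposed structure is sketched below; this is an assumption rather than the authors' exact design. A fully connected layer is inserted between two convolution/pooling stages: the intermediate feature map is flattened, mixed globally by the internal FC layer, and reshaped back before the convolutions continue.

```python
import torch
import torch.nn as nn

class CNNWithInternalFC(nn.Module):
    """Standard CNN with a hypothetical fully connected layer inserted mid-network."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )                                          # -> (8, 14, 14) for a 28x28 input
        self.internal_fc = nn.Linear(8 * 14 * 14, 8 * 14 * 14)
        self.stage2 = nn.Sequential(
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )                                          # -> (16, 7, 7)
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        f = self.stage1(x)
        # Internal FC layer: flatten, mix globally, reshape back to a feature map.
        f = self.internal_fc(f.flatten(1)).relu().view(-1, 8, 14, 14)
        f = self.stage2(f)
        return self.classifier(f.flatten(1))

print(CNNWithInternalFC()(torch.randn(4, 1, 28, 28)).shape)  # torch.Size([4, 10])
```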

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) is a method of recognizing the speaker's emotions from speech information. SER succeeds when distinctive features are selected and classified in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across the neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
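
The MFCC-plus-2D-CNN pipeline can be sketched as follows; the MFCC settings, layer sizes, and the synthetic stand-in clip are assumptions, not the paper's tuned configuration for RAVDESS.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

# Stand-in for one speech clip: 3 seconds of noise at 22.05 kHz.
sr = 22050
signal = np.random.randn(sr * 3).astype(np.float32)

# MFCC "image": 40 coefficients over time frames.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)
x = torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0).unsqueeze(0)  # (1, 1, 40, T)

# Small 2D CNN over the MFCC image, ending in 5 emotion logits.
cnn_2d = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),    # anger, happiness, calm, fear, sadness
)
print(cnn_2d(x).shape)   # torch.Size([1, 5])
```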

Investigating the Feature Collection for Semantic Segmentation via Single Skip Connection (깊은 신경망에서 단일 중간층 연결을 통한 물체 분할 능력의 심층적 분석)

  • Yim, Jonghwa;Sohn, Kyung-Ah
    • Journal of KIISE / v.44 no.12 / pp.1282-1289 / 2017
  • Since the study of deep convolutional neural networks became prevalent, one of the important discoveries has been that a feature map from a convolutional network can be extracted before the fully connected layer and used as a saliency map for object detection. Furthermore, a model can use features from different layers for accurate object detection, because features from different layers have different properties. As the model goes deeper, it offers many latent skip connections and feature maps with which to elaborate object detection. Although there are many intermediate layers that can be used for semantic segmentation through skip connections, the characteristics of each skip connection, and which skip connection is best for this task, remain unclear. Therefore, in this study we exhaustively examine the skip connections of state-of-the-art deep convolutional networks and investigate the characteristics of the features from each intermediate layer. In addition, this study suggests how to use a recent deep neural network model for semantic segmentation and can therefore serve as a cornerstone for later studies with state-of-the-art network models.
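
A minimal sketch of collecting one intermediate feature map through a single skip connection is shown below, with a torchvision ResNet-50 standing in for the state-of-the-art backbones studied in the paper: forward hooks grab one intermediate and one deep feature map, which are then fused for a dense-prediction head.

```python
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50(weights=None).eval()   # random weights keep the sketch offline
features = {}
backbone.layer2.register_forward_hook(lambda m, i, o: features.update(skip=o))
backbone.layer4.register_forward_hook(lambda m, i, o: features.update(deep=o))

with torch.no_grad():
    backbone(torch.randn(1, 3, 224, 224))

# Single skip connection: upsample the deepest features to the intermediate
# resolution and concatenate them, ready for a segmentation head.
skip = features["skip"]                            # (1, 512, 28, 28)
deep = F.interpolate(features["deep"], size=skip.shape[-2:], mode="bilinear",
                     align_corners=False)          # (1, 2048, 28, 28)
fused = torch.cat([skip, deep], dim=1)
print(fused.shape)                                 # torch.Size([1, 2560, 28, 28])
```

Swapping layer2 for layer1 or layer3 in this sketch is the kind of comparison the study performs when characterizing each candidate skip connection.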

Image-Based Automatic Detection of Construction Helmets Using R-FCN and Transfer Learning (R-FCN과 Transfer Learning 기법을 이용한 영상기반 건설 안전모 자동 탐지)

  • Park, Sangyoon;Yoon, Sanghyun;Heo, Joon
    • KSCE Journal of Civil and Environmental Engineering Research / v.39 no.3 / pp.399-407 / 2019
  • In Korea, the construction industry is known to have the highest risk of safety accidents compared to other industries, and several studies aimed at improving construction safety have been carried out in the past. This study aims to improve the safety of laborers on construction sites by constructing an effective automatic safety-helmet detection system that applies an object detection algorithm to image data from construction fields. Deep learning was conducted using a Region-based Fully Convolutional Network (R-FCN), one of the Convolutional Neural Network (CNN)-based object detection algorithms, together with a transfer learning technique. Training was conducted with 1,089 images of humans and safety helmets collected from ImageNet, and the mean Average Precision (mAP) for humans and for safety helmets was measured as 0.86 and 0.83, respectively.
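
torchvision does not ship an R-FCN implementation, so the transfer-learning step is sketched below with a pretrained Faster R-CNN as a stand-in (also a CNN-based detector): the box-prediction head is replaced so the detector outputs the two classes from the abstract (human and safety helmet) plus background, and fine-tuning then proceeds on the collected images.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 3  # background + human + safety helmet

# Start from a detector pretrained on COCO; this is the transfer-learning step.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor for the new class count; pretrained backbone weights are kept.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Fine-tuning then follows the usual torchvision detection training loop,
# feeding images together with {"boxes", "labels"} target dictionaries.
```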

Movie Box-office Prediction using Deep Learning and Feature Selection : Focusing on Multivariate Time Series

  • Byun, Jun-Hyung;Kim, Ji-Ho;Choi, Young-Jin;Lee, Hong-Chul
    • Journal of the Korea Society of Computer and Information / v.25 no.6 / pp.35-47 / 2020
  • Box-office prediction is important to movie stakeholders, and it requires both accurate prediction and the selection of important variables. In this paper, we propose a multivariate time series classification and important-variable selection method to improve the accuracy of box-office prediction. As a research method, we collected daily data from KOBIS and NAVER for South Korean movies, selected important variables using Random Forest, and performed multivariate time series prediction using deep learning. Based on the Korean screen quota system, deep learning was used to compare the accuracy of box-office predictions on the 73rd day after movie release using the important variables versus all variables, and the results were tested for statistical significance. As deep learning models, a Multi-Layer Perceptron, a Fully Convolutional Network, and a Residual Network were used. Among these, the model using the important variables with the Residual Network had the highest prediction accuracy, at 93%.
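
The two-stage procedure described above can be sketched on synthetic data; the 73-day horizon matches the abstract, while the variable count, labels, and layer sizes are assumptions. A Random Forest ranks the daily variables, the most important ones are kept, and a small fully convolutional 1D network classifies the resulting multivariate series.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_movies, n_days, n_vars = 200, 73, 12             # 73 days of daily variables per movie
X = rng.normal(size=(n_movies, n_days, n_vars))
y = rng.integers(0, 2, size=n_movies)               # e.g. hit / non-hit label

# 1) Rank variables with a Random Forest on per-movie averages; keep the top 5.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X.mean(axis=1), y)
top_vars = np.argsort(rf.feature_importances_)[-5:]
X_sel = X[:, :, top_vars]

# 2) Fully convolutional classifier over the selected multivariate series.
fcn = nn.Sequential(
    nn.Conv1d(len(top_vars), 64, 7, padding=3), nn.ReLU(inplace=True),
    nn.Conv1d(64, 64, 5, padding=2), nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, 2),
)
logits = fcn(torch.tensor(X_sel, dtype=torch.float32).permute(0, 2, 1))
print(logits.shape)                                  # torch.Size([200, 2])
```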