• Title/Summary/Keyword: Deep Fully Convolutional Network

78 search results (processing time: 0.022 seconds)

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, whether in the spatial domain or the transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model is efficient and robust, yet achieves state-of-the-art fusion quality. We explore different parameter settings to trade off performance against speed. Moreover, experimental results on our training dataset show that our network performs well under both subjective visual perception and objective assessment metrics.
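
The abstract above outlines a conv → fusion → deconv pipeline over two focus streams. The sketch below is a minimal PyTorch rendering of that layout; the channel widths, kernel sizes, shared encoder weights, and the `TwoStreamFusionNet` name are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class TwoStreamFusionNet(nn.Module):
    """Illustrative two-stream fusion network: a convolutional encoder applied
    to each focus image, a fusion layer over the concatenated features, and a
    deconvolutional decoder that reconstructs the all-in-focus image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # applied to each stream (shared weights assumed)
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(128, 64, kernel_size=1)  # fusion layer over concatenated features
        self.decoder = nn.Sequential(                  # deconvolutional layers back to an image
            nn.ConvTranspose2d(64, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 3, padding=1),
        )

    def forward(self, focus_a, focus_b):
        fa, fb = self.encoder(focus_a), self.encoder(focus_b)
        fused = torch.relu(self.fuse(torch.cat([fa, fb], dim=1)))
        return self.decoder(fused)

# Fuse a pair of 256x256 RGB images with different focus into one clean image.
a, b = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
out = TwoStreamFusionNet()(a, b)                       # shape: (1, 3, 256, 256)
```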

Modeling of Convolutional Neural Network-based Recommendation System

  • Kim, Tae-Yeun
    • 통합자연과학논문집 / Vol. 14, No. 4 / pp.183-188 / 2021
  • Collaborative filtering is one of the most commonly used methods in web recommendation systems, and numerous studies on collaborative filtering have proposed measures for enhancing its accuracy. This study presents a movie recommendation system based on Word2Vec and ensemble convolutional neural networks. First, user sentences and movie sentences are built from the user, movie, and rating information. These sentences are then fed into Word2Vec to obtain user vectors and movie vectors. The user vector is fed into a user convolutional model and the movie vector into a movie convolutional model, and the two convolutional models are connected to a fully connected neural network whose output layer produces the rating prediction for the given user and movie. The test results showed that the proposed system achieves higher accuracy than a conventional collaborative filtering system and than the Word2Vec and deep neural network-based systems proposed in similar studies. The Word2Vec and deep neural network-based recommendation system is expected to enhance user satisfaction by taking user characteristics into account.
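
To make the described data flow concrete (Word2Vec vectors → user and movie convolutional branches → a fully connected network that outputs the rating prediction), here is a minimal PyTorch sketch. All dimensions, layer counts, and the `TwoBranchRecommender` name are assumptions for illustration only; the Word2Vec step is taken as already done.

```python
import torch
import torch.nn as nn

class TwoBranchRecommender(nn.Module):
    """Illustrative two-branch model: 1-D convolutions over the Word2Vec user
    and movie vectors, joined by fully connected layers that predict a rating."""
    def __init__(self):
        super().__init__()
        self.user_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        self.movie_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveMaxPool1d(1))
        self.fc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, user_vec, movie_vec):
        # user_vec, movie_vec: (batch, embed_dim) embeddings obtained from Word2Vec
        u = self.user_branch(user_vec.unsqueeze(1)).squeeze(-1)    # (batch, 16)
        m = self.movie_branch(movie_vec.unsqueeze(1)).squeeze(-1)  # (batch, 16)
        return self.fc(torch.cat([u, m], dim=1))                   # predicted rating

user_vec, movie_vec = torch.randn(4, 100), torch.randn(4, 100)     # dummy Word2Vec vectors
rating = TwoBranchRecommender()(user_vec, movie_vec)               # shape: (4, 1)
```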

Traffic Light Recognition Using a Deep Convolutional Neural Network

  • 김민기
    • 한국멀티미디어학회논문지 / Vol. 21, No. 11 / pp.1244-1253 / 2018
  • The color of a traffic light is sensitive to illumination conditions; in particular, the hue information is lost when the lighting area becomes oversaturated. This paper proposes a traffic light recognition method that is robust to such illumination variations. The method consists of two steps: traffic light detection and recognition. The detection step uses only intensity and saturation, deferring the use of hue information to the second step, which recognizes the traffic light signal. A deep learning technique is used in the second step: we designed a deep convolutional neural network (DCNN) composed of three convolutional layers and two fully connected layers. Twelve video clips were used to evaluate the performance of the proposed method. Experimental results show a detection precision of 93.9%, a recall of 91.6%, and a recognition accuracy of 89.4%. Considering that the maximum distance between the camera and the traffic lights is 70 m, these results show that the proposed method is effective.
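
The abstract specifies a DCNN with three convolutional layers and two fully connected layers for the recognition step. A minimal PyTorch sketch of such a network follows; the 32x32 input size, channel widths, and the four signal classes are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class TrafficLightDCNN(nn.Module):
    """Illustrative DCNN matching the layer counts in the abstract: three
    convolutional layers followed by two fully connected layers."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):            # x: cropped traffic-light patch, (N, 3, 32, 32)
        return self.classifier(self.features(x))

logits = TrafficLightDCNN()(torch.randn(8, 3, 32, 32))   # shape: (8, 4)
```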

Surface Water Mapping of Remote Sensing Data Using Pre-Trained Fully Convolutional Network

  • Song, Ah Ram;Jung, Min Young;Kim, Yong Il
    • 한국측량학회지 / Vol. 36, No. 5 / pp.423-432 / 2018
  • Surface water mapping is widely used in remote sensing applications. Water indices are commonly used to distinguish water bodies from land; however, determining the optimal threshold and discriminating water bodies from similar objects such as shadows and snow is difficult. Deep learning algorithms have greatly advanced image segmentation and classification. In particular, the FCN (Fully Convolutional Network) represents the state of the art in per-pixel image segmentation and is used in most benchmarks, such as PASCAL VOC2012 and Microsoft COCO (Common Objects in Context). However, these data sets are designed for everyday scenes, and few studies have applied FCNs to large-scale remotely sensed data sets. This paper fine-tunes a pre-trained FCN on the CRMS (Coastwide Reference Monitoring System) data set for surface water mapping. The CRMS provides color-infrared aerial photos and ground truth maps for the monitoring and restoration of wetlands in Louisiana, USA. To effectively learn the characteristics of surface water, we used the pre-trained DeepWaterMap network, which classifies water, land, snow, ice, clouds, and shadows from Landsat satellite images. The DeepWaterMap network was then fine-tuned on the CRMS data set using two classes, water and land, after which the fine-tuned network classifies surface water without any additional learning process. The experimental results show that the proposed method enables high-quality surface water mapping from the CRMS data set and demonstrate the suitability of pre-trained FCNs for surface water mapping from remote sensing data.
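
As a concrete illustration of the fine-tuning step described above (replacing a six-class scoring layer with a two-class water/land head and training on CRMS patches), here is a hedged PyTorch sketch. The tiny stand-in backbone, the choice to freeze the pre-trained layers, and all dimensions are assumptions; the real DeepWaterMap architecture and weights are not reproduced here.

```python
import torch
import torch.nn as nn

# Stand-in backbone: in the paper this would be the pre-trained DeepWaterMap FCN
# (trained on Landsat for six classes); here it is a tiny dummy so the sketch runs.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
)

# Freeze the pre-trained layers (an assumed strategy) and attach a new 1x1 conv
# head that scores only two classes (water vs. land) per pixel.
for p in backbone.parameters():
    p.requires_grad = False
model = nn.Sequential(backbone, nn.Conv2d(64, 2, kernel_size=1))

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()            # per-pixel loss against the ground-truth mask

images = torch.randn(2, 3, 128, 128)         # dummy color-infrared aerial patches
masks = torch.randint(0, 2, (2, 128, 128))   # 0 = land, 1 = water
loss = criterion(model(images), masks)       # fine-tune only the new head
loss.backward()
optimizer.step()
```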

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / Vol. 17, No. 2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, limiting the application of gesture recognition algorithms in real scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. A shallow residual structure with shared weights is then built, avoiding vanishing or exploding gradients as the network deepens and thereby reducing the difficulty of model optimisation. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier completes the gesture recognition. Compared with densely connected networks that reuse feature information, the proposed algorithm optimises feature reuse to avoid the performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm achieves fast convergence and high accuracy.
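
A minimal PyTorch sketch of the kind of shallow residual structure the abstract describes (small 3x3 kernels, an identity shortcut, and a fully connected softmax head) is given below; the channel widths, block count, and number of gesture classes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Shallow residual block with small 3x3 kernels: the identity shortcut lets
    gradients bypass the convolutions, mitigating vanishing/exploding gradients."""
    def __init__(self, channels):
        super().__init__()
        self.conv1, self.bn1 = nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels)
        self.conv2, self.bn2 = nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)                 # shortcut connection

class GestureNet(nn.Module):
    """Illustrative stack: stem conv -> two residual blocks -> FC + softmax."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.blocks = nn.Sequential(ResidualBlock(32), ResidualBlock(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))   # logits; softmax gives class probabilities

probs = torch.softmax(GestureNet()(torch.randn(1, 3, 64, 64)), dim=1)
```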

The Impact of Various Degrees of Composite Minimax Approximate Polynomials on Convolutional Neural Networks over Fully Homomorphic Encryption

  • 이정현;노종선
    • 정보보호학회논문지 / Vol. 33, No. 6 / pp.861-868 / 2023
  • Fully homomorphic encryption is one of the key technologies for services that provide deep learning-based data analysis results while preserving security. Because of the constraints on operations between fully homomorphically encrypted data, the non-arithmetic functions used in deep learning must be approximated by polynomials. Until now, when approximations of non-arithmetic functions built from composite minimax polynomials were applied to convolutional neural networks, only polynomials of the same degree were used in every layer, which makes it difficult to design effective networks for fully homomorphic encryption. This study proves theoretically that data can still be analyzed correctly by a convolutional neural network even when the degrees of the approximate polynomials constructed from composite minimax polynomials are set differently for each layer.
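
As a plaintext illustration of the idea (replacing a non-arithmetic activation such as ReLU with polynomials whose degree may differ per layer, since fully homomorphic encryption supports only additions and multiplications), here is a small NumPy sketch. A Chebyshev least-squares fit stands in for the paper's composite minimax construction, and the degrees and interval are arbitrary assumptions.

```python
import numpy as np
from numpy.polynomial import Chebyshev

def approx_relu(degree, interval=(-5.0, 5.0)):
    """Polynomial approximation of ReLU on the given interval. A Chebyshev
    least-squares fit is used here as a simple stand-in for a composite
    minimax approximation; only polynomial add/multiply evaluations remain,
    which is what an FHE scheme can compute on encrypted data."""
    xs = np.linspace(interval[0], interval[1], 2001)
    return Chebyshev.fit(xs, np.maximum(xs, 0.0), degree)

# Different approximation degrees for different layers -- the per-layer setting
# whose validity the paper establishes. The degrees below are arbitrary examples.
layer_degrees = [7, 15, 27]
relu_polys = [approx_relu(d) for d in layer_degrees]

x = np.array([-2.0, -0.1, 0.3, 4.0])
for depth, poly in enumerate(relu_polys):
    print(f"layer {depth} (degree {layer_degrees[depth]}):",
          np.round(poly(x), 3), "vs ReLU:", np.maximum(x, 0.0))
```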

Facial Expression Classification Using Deep Convolutional Neural Network

  • Choi, In-kyu;Ahn, Ha-eun;Yoo, Jisang
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 1 / pp.485-492 / 2018
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. The proposed structure provides general classification performance across environments and subjects. For this purpose, we collect a variety of databases and organize the data into six expression classes: 'expressionless', 'happy', 'sad', 'angry', 'surprised', and 'disgusted'. Pre-processing and data augmentation techniques are applied to improve training efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully connected layers. The experimental results show good classification performance compared to the state of the art in both cross-validation and cross-database experiments. Compared to other conventional models, the proposed structure is also confirmed to achieve superior classification performance with less execution time.
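
The structure search described above (varying the number of feature maps and fully connected nodes) can be sketched as a parameterised CNN whose widths are constructor arguments. The PyTorch sketch below is illustrative only; the default values, the grayscale 48x48 input, and the `ExpressionCNN` name are assumptions, not the paper's final structure.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Parameterised CNN for six expression classes; the feature-map counts and
    FC widths are the knobs the abstract says were tuned to find the optimal
    structure. The default values are placeholders."""
    def __init__(self, feature_maps=(32, 64, 128), fc_nodes=(256,), num_classes=6, in_size=48):
        super().__init__()
        layers, in_ch = [], 1                       # grayscale face crops assumed
        for out_ch in feature_maps:
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        spatial = in_size // (2 ** len(feature_maps))
        fc, width = [nn.Flatten()], in_ch * spatial * spatial
        for n in fc_nodes:
            fc += [nn.Linear(width, n), nn.ReLU()]
            width = n
        fc.append(nn.Linear(width, num_classes))
        self.classifier = nn.Sequential(*fc)

    def forward(self, x):
        return self.classifier(self.features(x))

# Compare candidate structures by varying the feature-map counts, as in the search above.
for fmaps in [(16, 32, 64), (32, 64, 128)]:
    model = ExpressionCNN(feature_maps=fmaps)
    print(fmaps, model(torch.randn(2, 1, 48, 48)).shape)   # torch.Size([2, 6])
```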

Convolutional Neural Network-Based Amphibian Sound Classification Using Covariance and Modulogram

  • 고경득;박상욱;고한석
    • 한국음향학회지 / Vol. 37, No. 1 / pp.60-65 / 2018
  • In this paper, we propose the covariance matrix and the modulogram as a way to apply amphibian call classification to a CNN (Convolutional Neural Network). First, a database was built by recording the calls of nine amphibian species, including endangered species, in their natural environment. To feed the collected data into a CNN, acoustic signals of different lengths must be normalized into a fixed form. For this normalization, the covariance matrix, which conveys distributional information, and the modulogram, which captures variation over time, were extracted and used as CNN inputs. The CNN was evaluated while varying the numbers of convolutional and fully connected layers, and it was additionally compared with algorithms conventionally used in acoustic signal analysis. The results show that the convolutional layers have a larger impact on performance than the fully connected layers, and that the CNN achieves a recognition rate of 99.07%, higher than the conventional acoustic analysis algorithms.
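
To illustrate how variable-length calls can be normalized into fixed-size CNN inputs as described above, here is a small NumPy/SciPy sketch that computes a spectrogram covariance matrix and a simple modulogram. The FFT sizes, the log-spectrogram choice, and the modulation-bin truncation are assumptions, not the paper's exact feature extraction.

```python
import numpy as np
from scipy.signal import spectrogram

def fixed_size_features(signal, fs, nperseg=256, mod_bins=32):
    """Turn a variable-length call into fixed-size CNN inputs:
    (1) the covariance of the log-spectrogram across time (distribution information),
    (2) a simple modulogram, i.e. the magnitude spectrum of each frequency band's
        temporal envelope, truncated to a fixed number of modulation bins."""
    _, _, S = spectrogram(signal, fs=fs, nperseg=nperseg)    # (n_freq, n_frames)
    logS = np.log(S + 1e-10)
    cov = np.cov(logS)                                       # (n_freq, n_freq), length-independent
    mod = np.abs(np.fft.rfft(logS, axis=1))[:, :mod_bins]    # (n_freq, mod_bins)
    return cov, mod

fs = 16000
call = np.random.randn(fs * 3)           # dummy 3-second recording
cov, mod = fixed_size_features(call, fs)
print(cov.shape, mod.shape)              # (129, 129) (129, 32)
```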

Facial Expression Classification Using Deep Convolutional Neural Network

  • 최인규;송혁;이상용;유지상
    • 방송공학회논문지 / Vol. 22, No. 2 / pp.162-172 / 2017
  • In this paper, we propose a facial expression recognition method using a CNN (Convolutional Neural Network), one of the deep learning technologies. To compensate for the shortcomings of existing facial expression databases, a variety of high-quality databases are used. In the proposed method, a data set of six facial expressions is constructed: 'expressionless', 'happy', 'sad', 'angry', 'surprised', and 'disgusted'. Pre-processing and data augmentation are also applied to improve training efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully connected layers. Experimental results show that the proposed structure passes through the CNN in the least time among the compared models while achieving the highest classification accuracy of 96.88%.
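
The pre-processing and data augmentation step mentioned above can be sketched with torchvision transforms. The abstract does not say which operations were used; the flips, small rotations, and random crops below are common choices shown purely as an assumed example.

```python
from torchvision import transforms

# Assumed pre-processing / augmentation pipeline for face-expression crops.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),   # expression cues are largely shape-based
    transforms.Resize((52, 52)),
    transforms.RandomCrop(48),                     # spatial jitter
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),                 # small in-plane rotations
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])

# Evaluation uses the same pre-processing without the random augmentations.
test_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((48, 48)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])
```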

A Fully Convolutional Network Model for Classifying Liver Fibrosis Stages from Ultrasound B-mode Images

  • 강성호;유선경;이정은;안치영
    • 대한의용생체공학회:의공학회지 / Vol. 41, No. 1 / pp.48-54 / 2020
  • In this paper, we address liver fibrosis classification from ultrasound B-mode images. Representative methods for classifying the stages of liver fibrosis include liver biopsy and diagnosis based on ultrasound images, where the overall liver shape and the smoothness or roughness of the speckle pattern in the image are used to determine the fibrosis stage. Although ultrasound-based classification is frequently used as an alternative or complement to invasive biopsy, it has the limitation that the staging decision depends on image quality and the doctor's experience. With the rapid development of deep learning algorithms, several studies have applied deep learning to automated liver fibrosis classification and have shown high accuracy, but their performance depends closely on the amount of data available. We propose an enhanced U-net architecture to maximize classification accuracy with a limited amount of image data. U-net is well known as a neural network for fast and precise segmentation of medical images; here we redesign it for classifying liver fibrosis stages. To assess the performance of the proposed architecture, numerical experiments are conducted on a total of 118 ultrasound B-mode images acquired from 78 patients with liver fibrosis of stages F0-F4. The experimental results show that the proposed architecture performs much better than transfer learning using a pre-trained VGGNet model.
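
The abstract says U-net was redesigned for stage classification without detailing how. One plausible reading, sketched below in PyTorch, keeps a U-net-style contracting path and replaces the segmentation output with a global-pooled head over the five stages F0-F4; the layer widths and this adaptation itself are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """U-net-style double convolution block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class UNetClassifier(nn.Module):
    """Assumed adaptation of U-net to stage classification: a contracting path
    of double-conv blocks followed by a global-pooled classification head."""
    def __init__(self, num_stages=5):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(1, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_stages))

    def forward(self, x):                 # x: B-mode image, (N, 1, H, W)
        x = self.pool(self.enc1(x))
        x = self.pool(self.enc2(x))
        x = self.enc3(x)
        return self.head(x)               # logits over stages F0..F4

logits = UNetClassifier()(torch.randn(2, 1, 224, 224))   # shape: (2, 5)
```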