• Title/Abstract/Keyword: Region Convolutional Neural Network


이미지로부터 피사계 심도 영역을 효율적으로 추출하기 위한 합성곱 신경망 기법 (Convolutional Neural Network Technique for Efficiently Extracting Depth of Field from Images)

  • 김동희;김종현
    • 한국컴퓨터정보학회:학술대회논문집 / 한국컴퓨터정보학회 2020년도 제62차 하계학술대회논문집, Vol. 28, No. 2 / pp.429-432 / 2020
  • In this paper, we propose a method that uses a convolutional neural network to find the DoF (depth of field) region, i.e., the area of an image that appears blurred due to the camera's focusing and defocusing. Our approach efficiently classifies the DoF region from the image using an RGB-channel-based cross-correlation filter, builds data for training the convolutional neural network, and uses this data to set up image-DoF weight map pairs. The training data consist of images and the DoF weight maps extracted with the cross-correlation filter; to raise the convergence rate during network training, a result with one additional smoothing pass applied is used. The convolutional neural network proposed in this paper is trained to automatically extract the focused and defocused DoF regions from images. The DoF weight images obtained in testing locate the DoF region in the input image quickly, and the proposed method can treat the DoF region as the user's ROI (region of interest), making it applicable in various areas such as NPR rendering and object detection.

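
The cross-correlation filter itself is not detailed in the abstract, but the underlying idea of scoring per-pixel sharpness and smoothing the resulting weight map can be sketched as follows. This is a minimal, hypothetical single-channel reading of the approach, not the authors' actual RGB filter:

```python
def box_blur(img, k=1):
    """Box blur with radius k (edge pixels clamped)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-k, k + 1) for dx in range(-k, k + 1)]
            out[y][x] = sum(vals) / len(vals)
    return out

def dof_weight_map(img):
    """Score per-pixel sharpness as |img - blur(img)|: in-focus pixels
    deviate strongly from their blurred neighbourhood, defocused ones
    barely at all. A final blur mimics the abstract's smoothing pass,
    which the authors apply to speed up training convergence."""
    blurred = box_blur(img)
    h, w = len(img), len(img[0])
    raw = [[abs(img[y][x] - blurred[y][x]) for x in range(w)] for y in range(h)]
    return box_blur(raw)
```

On a toy image whose left half is a flat (defocused-looking) region and whose right half is a sharp checkerboard, the weight map scores the right half higher, which is the property the network is trained to reproduce.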

Siame-FPN기반 객체 특징 추적 알고리즘 (Object Feature Tracking Algorithm based on Siame-FPN)

  • 김종찬;임수창
    • 한국멀티미디어학회논문지 / Vol. 25, No. 2 / pp.247-256 / 2022
  • Visual tracking of a selected target object is a fundamental and challenging problem in computer vision. Object tracking localizes the region of the target object with a bounding box in the video. We propose a Siam-FPN-based custom fully convolutional network that solves the visual tracking problem by regressing the target area in an end-to-end manner. A method of preserving the feature information flow using a feature-map connection structure was applied, so that information is preserved and emphasized across the network. To regress the object region and classify the object, a region proposal network was connected to the Siamese network. The performance of the tracking algorithm was evaluated on the OTB-100 dataset, using the Success Plot and Precision Plot as evaluation metrics. In the experiments, scores of 0.621 on the Success Plot and 0.838 on the Precision Plot were achieved.
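
The matching step shared by Siamese trackers of this family — sliding the target's (exemplar) feature map over the search-region feature map and taking the peak of the correlation response — can be sketched in a toy 2-D form. This is a schematic of the generic mechanism, not the Siam-FPN network itself:

```python
def xcorr_response(search, template):
    """Correlation (dot product) of the template at every valid offset
    in the search map; the response peak marks the best match."""
    th, tw = len(template), len(template[0])
    sh, sw = len(search), len(search[0])
    resp = []
    for y in range(sh - th + 1):
        row = []
        for x in range(sw - tw + 1):
            score = sum(search[y + dy][x + dx] * template[dy][dx]
                        for dy in range(th) for dx in range(tw))
            row.append(score)
        resp.append(row)
    return resp

def locate(search, template):
    """Return (y, x) of the response peak: the predicted target position."""
    resp = xcorr_response(search, template)
    best = max((v, y, x) for y, row in enumerate(resp) for x, v in enumerate(row))
    return best[1], best[2]
```

In a real tracker, `search` and `template` are learned CNN feature maps rather than raw pixels, and the response is further refined (here, by the region proposal network).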

합성곱 신경망 기반 선체 표면 유동 속도의 픽셀 수준 예측 (Pixel-level prediction of velocity vectors on hull surface based on convolutional neural network)

  • 서정범;김다연;이인원
    • 한국가시화정보학회지 / Vol. 21, No. 1 / pp.18-25 / 2023
  • These days, high-dimensional data prediction technology based on neural networks shows compelling results in many different fields, including engineering. In particular, many variants of the convolutional neural network are widely used to build pixel-level prediction models for high-dimensional data such as pictures or physical field values from sensors. In this study, the velocity vector field of ideal flow on a hull surface is estimated at the pixel level by a U-Net. First, potential flow analysis was conducted for a set of hull form data generated by a hull form transformation method. Thereafter, four different neural networks with a U-shaped structure were configured to train on the velocity vectors at the node positions of the pre-processed hull form data. For the test hull forms, the network with short skip connections was confirmed to give the most accurate predictions of streamlines and velocity magnitude, and the results agree well with the potential flow analysis. However, in some cases that have little in common with the training data in terms of speed or shape, the network shows relatively high error in regions of large curvature.
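
Why a short skip connection helps can be illustrated with a 1-D toy: pooling in the encoder discards high-frequency detail, and a skip path carries that detail past the bottleneck to the decoder. This is only a schematic of the mechanism, not the authors' U-shaped networks:

```python
def downsample(x):
    """Average-pool by a factor of 2 (the encoder path)."""
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def upsample(x):
    """Nearest-neighbour upsample by 2 (the decoder path)."""
    return [v for v in x for _ in range(2)]

def decode(x, skip=False):
    """Encoder-decoder pass over an even-length signal. With `skip`,
    the detail destroyed by pooling is carried across the bottleneck
    and re-added; in this toy that restores the signal exactly, while
    a real U-Net concatenates encoder features and learns the fusion."""
    coarse = upsample(downsample(x))
    if not skip:
        return coarse
    detail = [xi - c for xi, c in zip(x, coarse)]  # what pooling destroyed
    return [c + d for c, d in zip(coarse, detail)]

def err(a, b):
    """Total absolute reconstruction error."""
    return sum(abs(u - v) for u, v in zip(a, b))
```

An alternating signal loses all its detail without the skip, and none with it, which is why sharp features such as streamline edges survive better in the short-skip network.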

Skin Lesion Image Segmentation Based on Adversarial Networks

  • Wang, Ning;Peng, Yanjun;Wang, Yuanhong;Wang, Meiling
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 6 / pp.2826-2840 / 2018
  • Traditional methods based on active contours or region merging are powerless at processing images with blurred borders or hair occlusion. In this paper, an adversarial structure based on convolutional neural networks is proposed to solve skin lesion image segmentation. The structure mainly consists of two networks: a segmentation net and a discrimination net. The segmentation net is designed on a U-Net basis and generates the lesion mask, while the discrimination net is built with only convolutional layers and determines whether an input comes from the ground-truth labels or from generated images. Images were obtained from the "Skin Lesion Analysis Toward Melanoma Detection" challenge hosted at the ISBI 2016 conference. We achieved an average segmentation accuracy of 0.97, a Dice coefficient of 0.94 and a Jaccard index of 0.89, which outperform other existing state-of-the-art segmentation networks, including the winner of the ISBI 2016 challenge for skin melanoma segmentation.
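
The two overlap scores reported above are computed from binary masks by their standard definitions, sketched here with flat 0/1 lists standing in for mask images:

```python
def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def jaccard(pred, truth):
    """Jaccard index (IoU): |A∩B| / |A∪B| for binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union
```

For example, masks `[1,1,0,0]` and `[1,0,1,0]` overlap in one pixel out of two each, giving Dice 0.5 and Jaccard 1/3; a perfect prediction scores 1.0 on both.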

Deep Window Detection in Street Scenes

  • Ma, Wenguang;Ma, Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 2 / pp.855-870 / 2020
  • Windows are key components of building facades. Detecting windows, which is crucial to 3D semantic reconstruction and scene parsing, is a challenging task in computer vision. Early methods tried to solve window detection using hand-crafted features and traditional classifiers. However, these methods are unable to handle the diversity of window instances in real scenes and suffer from heavy computational costs. Recently, object detection algorithms based on convolutional neural networks have attracted much attention due to their good performance. Unfortunately, directly training them for challenging window detection does not achieve satisfying results. In this paper, we propose an approach for window detection. It involves an improved Faster R-CNN architecture featuring a window region proposal network, RoI feature fusion and a context enhancement module. In addition, a post-optimization process exploiting the regular distribution of windows is designed to refine the detection results obtained by the improved deep architecture. Furthermore, we present a newly collected dataset, the largest to date for window detection in real street scenes. Experimental results on both existing datasets and the new dataset show that the proposed method has outstanding performance.
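
The post-optimization exploits the fact that facade windows line up in regular columns and rows. One plausible toy version of such a step — grouping noisy 1-D box coordinates and snapping each group to its median — might look like this; it is a hypothetical illustration, not the authors' actual procedure:

```python
from statistics import median

def snap_columns(xs, tol=5.0):
    """Group 1-D window positions whose sorted gaps are within `tol`
    and snap every member to its group median, regularizing the
    jittered detections into aligned facade columns."""
    groups = []
    for x in sorted(xs):
        if groups and x - groups[-1][-1] <= tol:
            groups[-1].append(x)
        else:
            groups.append([x])
    snapped = {x: median(g) for g in groups for x in g}
    return [snapped[x] for x in xs]
```

The same operation applied to y-coordinates would align window rows; a real refinement step would also reconcile box widths and heights across a group.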

Pest Control System using Deep Learning Image Classification Method

  • Moon, Backsan;Kim, Daewon
    • 한국컴퓨터정보학회논문지 / Vol. 24, No. 1 / pp.9-23 / 2019
  • In this paper, we propose the layer structure of a pest image classifier model using a CNN (convolutional neural network), together with a background-removal image processing algorithm that improves classification accuracy, in order to build a smart monitoring system for pine wilt pest control. We constructed and trained the CNN classifier model on collected image data of pine wilt pest mediators, and ran experiments to verify the model's classification accuracy and the effect of the proposed algorithm. The results showed that the proposed method detected and preprocessed the object region accurately for all test images, yielding a classification accuracy of about 98.91%, and that the proposed layer structure classifies the targeted pest images effectively in various environments. In a field test using the Smart Trap for capturing pine wilt pest mediators, the proposed classification algorithm also proved effective in the real environment, showing a classification accuracy of 88.25%, an improvement of about 8.12 percentage points attributable to the image-cropping preprocessing. Ultimately, we will proceed to apply these techniques and verify their functionality in field tests at various sites.
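
The image-cropping preprocessing amounts to finding the bounding box of non-background pixels and cutting the image to it, so the classifier sees mostly the insect. A minimal sketch, assuming background removal has already produced a known background value (the paper's actual background-removal algorithm is not reproduced here):

```python
def crop_to_foreground(img, bg=0):
    """Crop a 2-D image (list of rows) to the tight bounding box of
    pixels that differ from the background value `bg`."""
    rows = [y for y, row in enumerate(img) if any(v != bg for v in row)]
    cols = [x for x in range(len(img[0])) if any(row[x] != bg for row in img)]
    y0, y1, x0, x1 = min(rows), max(rows), min(cols), max(cols)
    return [row[x0:x1 + 1] for row in img[y0:y1 + 1]]
```

Cropping like this removes trap clutter around the object, which is consistent with the reported accuracy gain when the preprocessing is enabled.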

MRU-Net: A remote sensing image segmentation network for enhanced edge contour Detection

  • Jing Han;Weiyu Wang;Yuqi Lin;Xueqiang LYU
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 12 / pp.3364-3382 / 2023
  • Remote sensing image segmentation plays an important role in realizing intelligent city construction. Current mainstream segmentation networks effectively improve the segmentation of remote sensing images by deeply mining their rich texture and semantic features, but problems remain, such as rough segmentation of small target regions and poor segmentation of edge contours. To overcome these challenges, we propose an improved semantic segmentation model, referred to as MRU-Net, which adopts the U-Net architecture as its backbone. First, the convolutional layers in the U-Net are replaced by a BasicBlock structure for feature extraction, and the activation function is replaced to reduce the model's computational load. Second, a hybrid multi-scale recognition module is added in the encoder to improve segmentation accuracy for small targets and edge parts. Finally, in tests on the Massachusetts Buildings Dataset and the WHU Dataset, the experimental results show that, compared with the original network, the ACC, mIoU and F1 values are improved, and the proposed network shows good robustness and portability across datasets.
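
The BasicBlock referred to above is the two-convolution residual unit popularized by ResNet, y = ReLU(F(x) + x). Its defining property — the identity shortcut that lets the input pass through even when the learned branch contributes nothing — can be shown on a scalar toy; the weights here are placeholders, not anything trained:

```python
def relu(v):
    """Rectified linear activation."""
    return max(0.0, v)

def basic_block(x, w1, w2):
    """Scalar residual BasicBlock: two weighted 'convolutions' with an
    activation in between, plus the identity shortcut added before the
    final activation. The shortcut eases gradient flow in deep stacks."""
    out = relu(w1 * x)      # first conv + activation
    out = w2 * out          # second conv
    return relu(out + x)    # identity shortcut, then activation
```

With the learned branch zeroed out (`w2 = 0`), the block reduces to the identity on non-negative inputs, which is what makes residual stacks easier to optimize than plain convolution stacks.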

정규화 및 교차검증 횟수 감소를 위한 무작위 풀링 연산 선택에 관한 연구 (A Study on Random Selection of Pooling Operations for Regularization and Reduction of Cross Validation)

  • 류서현
    • 한국산학기술학회논문지 / Vol. 19, No. 4 / pp.161-166 / 2018
  • This paper describes a method of randomly selecting pooling operations in a convolutional neural network architecture, for regularization and for reducing the number of cross-validation runs. In convolutional neural networks, pooling operations are used to reduce the feature map size and to provide shift invariance. In conventional pooling, a single pooling operation is applied at each pooling layer. Because the network structure then does not change during training, this approach suffers from overfitting, i.e., fitting the training data excessively. Moreover, to find the optimal combination of pooling operations, cross-validation must be performed for every combination to find the best-performing one. To solve these problems, we propose a random pooling-operation selection method that introduces a stochastic element into the pooling layer. The proposed method does not apply a single fixed pooling operation per layer: during training, one of several pooling operations is selected at random in each pooling region, and at test time the average of the pooling operations used in each region is applied. This can be seen as averaging over structures that use different pooling combinations in the pooling regions. The convolutional neural network can therefore avoid overfitting to the training data, and since no specific pooling operation needs to be chosen for each pooling layer, the number of cross-validation runs can be reduced. Experiments verified that the proposed method not only improves regularization performance but also reduces the number of cross-validation runs.
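
The train/test behaviour described above — sample one pooling operation per region during training, average the candidate operations' outputs at test time — can be sketched for a single pooling region. This is a minimal sketch of the selection rule with max- and average-pooling as the assumed candidate set, not the paper's full network:

```python
import random

# candidate pooling operations for a region (assumed: max and average)
POOL_OPS = [max, lambda vals: sum(vals) / len(vals)]

def pool_region(vals, train, rng=random):
    """Training: apply one randomly chosen pooling op, so the effective
    architecture changes every pass (an ensemble-like regularizer).
    Test: apply the average of all candidate ops' outputs."""
    if train:
        return rng.choice(POOL_OPS)(vals)
    return sum(op(vals) for op in POOL_OPS) / len(POOL_OPS)
```

For the region `[1, 2, 3, 4]`, a training pass yields either 4 (max) or 2.5 (average), while a test pass deterministically yields their mean, 3.25 — the averaging over pooling combinations the abstract describes.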

객체 추적을 위한 보틀넥 기반 Siam-CNN 알고리즘 (Bottleneck-based Siam-CNN Algorithm for Object Tracking)

  • 임수창;김종찬
    • 한국멀티미디어학회논문지 / Vol. 25, No. 1 / pp.72-81 / 2022
  • Visual object tracking is one of the most fundamental problems in the field of computer vision. Object tracking localizes the region of the target object with a bounding box in the video. In this paper, a custom CNN is created to extract object features that carry strong and varied information. This network was constructed as a Siamese network for use as a feature extractor. The input images are passed through convolution blocks composed of bottleneck layers, and the features are emphasized. The feature maps of the target object and the search area, extracted from the Siamese network, were input to a region proposal network, which estimates the object area from the feature maps. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, using the Success Plot and Precision Plot as evaluation metrics. In the experiments, scores of 0.611 on the Success Plot and 0.831 on the Precision Plot were achieved.
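
The point of a bottleneck layer is to shrink the channel count with 1×1 convolutions on either side of the 3×3 convolution, cutting parameters and computation while keeping the block's input/output width. The saving can be checked directly with a generic weight count (biases ignored); the channel widths are illustrative, not the paper's exact configuration:

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution layer (biases ignored)."""
    return c_in * c_out * k * k

def plain_block(c):
    """Two full-width 3x3 convolutions, c channels in and out."""
    return 2 * conv_params(c, c, 3)

def bottleneck_block(c, reduce=4):
    """1x1 reduce -> 3x3 at reduced width -> 1x1 expand back to c."""
    m = c // reduce
    return conv_params(c, m, 1) + conv_params(m, m, 3) + conv_params(m, c, 1)
```

At 256 channels, the plain block needs 1,179,648 weights against the bottleneck's 69,632 — roughly a 17× reduction, which is why bottlenecks suit a tracker that must run per frame.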

Siamese Network for Learning Robust Feature of Hippocampi

  • Ahmed, Samsuddin;Jung, Ho Yub
    • 스마트미디어저널 / Vol. 9, No. 3 / pp.9-17 / 2020
  • The hippocampus is a complex brain structure embedded deep in the temporal lobe. Studies have shown that this structure is affected by neurological and psychiatric disorders, and it is a significant landmark for diagnosing neurodegenerative diseases. Hippocampus features play very significant roles in region-of-interest-based analysis for disease diagnosis and prognosis. In this study, we attempted to learn embeddings of this important biomarker. As conventional metric learning methods for feature embedding are known to lack the ability to capture semantic similarity among the data under study, we trained a deep Siamese convolutional neural network to learn a metric for the hippocampus. We used the Gwangju Alzheimer's and Related Dementia cohort data set. The input to the network was pairs of three-view patches (TVPs) of size 32 × 32 × 3. Positive samples were taken from the vicinity of a specified hippocampus landmark, and negative samples were taken from random brain locations excluding the hippocampi regions. We achieved 98.72% accuracy in verifying hippocampus TVPs.
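
Siamese metric learning of this kind is typically trained with a contrastive loss that pulls positive pairs together in embedding space and pushes negative pairs beyond a margin. The abstract does not name its loss, so the choice here is an assumption; the sketch below shows the standard formulation on plain embedding vectors:

```python
import math

def distance(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(a, b, same, margin=1.0):
    """Contrastive loss for a pair of embeddings: positive pairs
    (same=True) are penalized by squared distance; negative pairs are
    penalized only while they sit inside the margin."""
    d = distance(a, b)
    return d ** 2 if same else max(0.0, margin - d) ** 2
```

Under this loss, identical positive embeddings and well-separated negative embeddings both incur zero loss, while a negative pair inside the margin is pushed apart — the behaviour that lets the verifier tell hippocampus TVPs from random brain patches.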