• Title/Summary/Keyword: Region Convolutional Neural Network


Building Detection by Convolutional Neural Network with Infrared Image, LiDAR Data and Characteristic Information Fusion (적외선 영상, 라이다 데이터 및 특성정보 융합 기반의 합성곱 인공신경망을 이용한 건물탐지)

  • Cho, Eun Ji;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.635-644 / 2020
  • Object recognition, detection, and instance segmentation based on DL (Deep Learning) are being used in various applications, and optical images are mainly used as training data for DL models. The major objective of this paper is object segmentation and building detection by utilizing multimodal datasets, in addition to optical images, for training the Detectron2 model, one of the improved R-CNN (Region-based Convolutional Neural Network) frameworks. For the implementation, infrared aerial images, LiDAR (Light Detection And Ranging) data, edges extracted from the images, and Haralick features, which represent statistical texture information, derived from the LiDAR data were generated. The performance of DL models depends not only on the amount and characteristics of the training data, but also on the fusion method, especially for multimodal data. Segmenting objects and detecting buildings with hybrid fusion, a mixture of early fusion and late fusion, improved the building detection rate by 32.65% compared to training with optical images only. The experiments demonstrated the complementary effect of training on multimodal data with distinct characteristics and of the fusion strategy.
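
As a rough illustration of the two fusion styles the abstract contrasts (not the authors' Detectron2 pipeline), the sketch below fakes co-registered rasters and shows early fusion as channel concatenation and late fusion as weighted averaging of per-modality score maps; all shapes and weights are assumptions.

```python
import numpy as np

# Hypothetical co-registered H x W rasters standing in for the modalities.
H, W = 512, 512
optical = np.random.rand(H, W, 3)    # RGB aerial image
infrared = np.random.rand(H, W, 1)   # infrared band
lidar = np.random.rand(H, W, 1)      # normalized LiDAR height raster
haralick = np.random.rand(H, W, 4)   # texture features (e.g. contrast, entropy)

# Early fusion: concatenate modalities channel-wise before the network,
# so the detector's first conv layer must accept all 9 input channels.
early_input = np.concatenate([optical, infrared, lidar, haralick], axis=-1)
print(early_input.shape)  # (512, 512, 9)

# Late fusion: run a detector per modality, then merge their score maps.
def late_fuse(score_maps, weights):
    """Weighted average of per-modality building-probability maps."""
    stacked = np.stack(score_maps, axis=0)
    w = np.asarray(weights, dtype=float)[:, None, None]
    return (stacked * w).sum(axis=0) / w.sum()

fused = late_fuse([np.random.rand(H, W) for _ in range(3)], weights=[0.5, 0.3, 0.2])
```

A hybrid scheme, as described above, would combine both: some modalities stacked at the input and the remaining predictions merged afterwards.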

Attention Deep Neural Networks Learning based on Multiple Loss functions for Video Face Recognition (비디오 얼굴인식을 위한 다중 손실 함수 기반 어텐션 심층신경망 학습 제안)

  • Kim, Kyeong Tae;You, Wonsang;Choi, Jae Young
    • Journal of Korea Multimedia Society / v.24 no.10 / pp.1380-1390 / 2021
  • Video face recognition (FR) is one of the most actively studied topics in computer vision because of its wide range of applications, and research using the attention mechanism in particular is being actively conducted. In video face recognition, attention indicates where to focus, using the whole input or a specific region, or which frames to focus on when many frames are available. In this paper, we propose a novel attention-based deep learning method. The main novelties of our method are (1) the combination of two loss functions, namely a weighted Softmax loss and a Triplet loss, and (2) end-to-end learning that covers both the feature embedding network and the attention weight computation. With the combined loss function and end-to-end learning, the feature embedding network has a positive effect on the attention weight computation. To demonstrate the effectiveness of our proposed method, extensive comparative experiments were carried out on the IJB-A dataset with its standard evaluation protocols. Our proposed method achieved better or comparable recognition rates compared to other state-of-the-art video FR methods.
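
A minimal PyTorch sketch of the kind of combined objective described above; the loss weighting, margin, and tensor shapes are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, labels, anchor, positive, negative,
                  class_weights=None, lambda_triplet=0.5, margin=0.2):
    """Weighted softmax (cross-entropy) loss plus a margin-based triplet loss."""
    ce = F.cross_entropy(logits, labels, weight=class_weights)
    triplet = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    return ce + lambda_triplet * triplet

# Toy usage with random identity logits and 128-D embeddings.
logits = torch.randn(8, 100, requires_grad=True)   # 8 frames, 100 identities
labels = torch.randint(0, 100, (8,))
emb = lambda: F.normalize(torch.randn(8, 128, requires_grad=True), dim=1)
loss = combined_loss(logits, labels, emb(), emb(), emb())
loss.backward()   # gradients would flow to embedding and attention branches
```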

Feature Voting for Object Localization via Density Ratio Estimation

  • Wang, Liantao;Deng, Dong;Chen, Chunlei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.12 / pp.6009-6027 / 2019
  • Support vector machine (SVM) classifiers have been widely used for object detection. These methods usually locate the object by finding the region with the maximal score in an image. With a bag-of-features representation, the SVM score of an image region can be written as the sum of its inside feature-weights, so the search can be executed efficiently by strategies such as branch-and-bound. However, a feature-weight derived by optimizing region classification cannot really reveal the category knowledge of a feature-point, which can cause poor localization. In this paper, we represent a region in an image by a collection of local feature-points and determine the object by the region with the maximum posterior probability of belonging to the object class. Based on Bayes' theorem and Naive-Bayes assumptions, the posterior probability is reformulated as a sum of feature-scores, where each feature-score takes the form of the logarithm of a probability ratio. Instead of estimating the numerator and denominator probabilities separately, we employ density ratio estimation techniques directly and thereby overcome the above limitation. Experiments on a car dataset and the PASCAL VOC 2007 dataset validated the effectiveness of our method compared to the baselines. In addition, performance can be further improved by taking advantage of recently developed deep convolutional neural network features.
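
The scoring rule described above can be sketched as follows; the density-ratio estimator is a hypothetical stand-in (in practice it would be fitted with a technique such as uLSIF or KLIEP), and the exhaustive window search stands in for branch-and-bound.

```python
import numpy as np

def region_score(points, log_density_ratio):
    """Score of a region = sum of per-feature log density ratios (Naive Bayes)."""
    return float(sum(log_density_ratio(p) for p in points))

def best_window(points, windows, log_density_ratio):
    """Exhaustive stand-in for branch-and-bound: pick the window with max score."""
    def inside(w, p):
        x0, y0, x1, y1 = w
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    return max(windows,
               key=lambda w: region_score([p for p in points if inside(w, p)],
                                          log_density_ratio))

# Toy data: feature points plus a stand-in ratio r(x) = p(x | object) / p(x | background).
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
log_ratio = lambda p: 1.0 if 30 <= p[0] <= 60 else -0.5   # hypothetical estimator
wins = [(x, y, x + 30, y + 30) for x in range(0, 70, 10) for y in range(0, 70, 10)]
print(best_window(pts, wins, log_ratio))
```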

Building change detection in high spatial resolution images using deep learning and graph model (딥러닝과 그래프 모델을 활용한 고해상도 영상의 건물 변화탐지)

  • Park, Seula;Song, Ahram
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.3 / pp.227-237 / 2022
  • The most critical factors for detecting changes in very-high-resolution satellite images are building positional inconsistencies and relief displacements caused by the satellite's side view. To resolve these problems, additional processing using a digital elevation model and deep learning approaches has been proposed, but such approaches have not been sufficiently effective. This study proposes a change detection method that considers both the positional and the topological information of buildings. Mask R-CNN (Region-based Convolutional Neural Network) was trained on the SpaceNet building detection v2 dataset, and the central point of each building was extracted as a building node. Then, triangulated irregular network graphs were created on the building nodes from the temporal images. To extract areas with structural differences between the two graphs, a change index reflecting the similarity of the graphs and the differences in building-node locations was proposed. Finally, newly changed or deleted buildings were detected by comparing the two graphs. Three pairs of test sites were selected to evaluate the proposed method's effectiveness, and the results showed that changed buildings were detected even in side-view satellite images with building positional inconsistencies.
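
A rough sketch of the graph step, assuming toy building centroids and SciPy's Delaunay triangulation; the paper's change index is only approximated here by a crude edge-set difference over matched node indices.

```python
import numpy as np
from scipy.spatial import Delaunay

def tin_edges(centroids):
    """Undirected edge set of a Delaunay TIN built on building-centroid nodes."""
    tri = Delaunay(centroids)
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    return edges

# Toy centroids for two dates; in the paper these come from Mask R-CNN masks.
rng = np.random.default_rng(1)
date1 = rng.uniform(0, 1000, size=(20, 2))
date2 = np.vstack([date1[:18], rng.uniform(0, 1000, size=(3, 2))])  # 2 gone, 3 new

e1, e2 = tin_edges(date1), tin_edges(date2)
# Crude structural difference between the two TINs; the paper's change index
# additionally weighs graph similarity against building-node position offsets.
diff = len(e1 ^ e2) / len(e1 | e2)
print(round(diff, 3))
```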

Pixel-level Crack Detection in X-ray Computed Tomography Image of Granite using Deep Learning (딥러닝을 이용한 화강암 X-ray CT 영상에서의 균열 검출에 관한 연구)

  • Hyun, Seokhwan;Lee, Jun Sung;Jeon, Seonghwan;Kim, Yejin;Kim, Kwang Yeom;Yun, Tae Sup
    • Tunnel and Underground Space / v.29 no.3 / pp.184-196 / 2019
  • This study aims to extract a 3D image of the micro-cracks generated by hydraulic fracturing tests, using deep learning and X-ray computed tomography images. Pixel-level cracks are difficult to detect with conventional image processing methods such as global thresholding, Canny edge detection, and region growing. Thus, a convolutional neural network-based encoder-decoder network is adopted to extract and quantitatively analyze the micro-cracks. The training data are augmented by dividing, rotating, and flipping images, and the optimum combination of augmentation methods is verified. Applying the optimal image augmentation shows enhanced performance on not only the validation dataset but also the test dataset. In addition, the influence of the original amount of training data on the performance of the deep-learning-based network is confirmed, leading to successful pixel-level crack detection.
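
A small sketch of the divide/rotate/flip augmentation combination mentioned above, applied jointly to a toy CT slice and its crack mask; the tile size and array shapes are assumptions.

```python
import numpy as np

def augment(image, mask, tile=256):
    """Yield tiled, rotated, and flipped (image, mask) pairs for training."""
    h, w = image.shape[:2]
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            img_t = image[y:y + tile, x:x + tile]
            msk_t = mask[y:y + tile, x:x + tile]
            for k in range(4):                      # 0/90/180/270 degree rotations
                for flip in (False, True):          # with/without horizontal flip
                    a, b = np.rot90(img_t, k), np.rot90(msk_t, k)
                    if flip:
                        a, b = np.fliplr(a), np.fliplr(b)
                    yield a, b

slice_img = np.random.rand(512, 512)                        # toy CT slice
crack_msk = (np.random.rand(512, 512) > 0.99).astype(np.uint8)  # toy crack mask
pairs = list(augment(slice_img, crack_msk))
print(len(pairs))  # 4 tiles x 4 rotations x 2 flips = 32 training pairs
```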

A Hybrid Optimized Deep Learning Techniques for Analyzing Mammograms

  • Bandaru, Satish Babu;Deivarajan, Natarajasivan;Gatram, Rama Mohan Babu
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.73-82 / 2022
  • Early detection continues to be the mainstay of breast cancer control and of improving its treatment. Even so, the absence of cancer symptoms at onset makes early detection quite challenging. Therefore, researchers continue to focus on cancer in order to make improvements in diagnosis, prevention, and treatment. The chief goal of this research is the development of a deep learning system for classifying breast cancer as non-malignant or malignant using mammogram images. Two distinct approaches are used: the first utilizes patches of the Region of Interest (ROI), and the second utilizes the overall images. The proposed system is composed of two distinct stages: a pre-processing stage and a Convolutional Neural Network (CNN) building stage. Recently, meta-heuristic optimization algorithms have made considerable progress in solving such problems. The Teaching-Learning Based Optimization (TLBO) meta-heuristic was originally employed for continuous optimization problems. This work proposes novel methods for training the Residual Network (ResNet) as well as the CNN based on TLBO and the Genetic Algorithm (GA). The classification of breast cancer can be enhanced by direct application of the hybrid TLBO-GA. In this hybrid algorithm, TLBO, the core component, is combined with three distinct GA operators: coding, crossover, and mutation. In TLBO, optimization solutions are represented as students; the hybrid TLBO-GA further divides the students into top students, ordinary students, and poor students. The experiments demonstrated that the proposed hybrid TLBO-GA is more effective than TLBO and GA alone.
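
A highly simplified sketch of the hybrid idea, assuming a toy objective: a TLBO teacher phase moves candidate solutions toward the current best, and GA-style crossover and mutation are applied to the poorer half of the population. The grouping rule, rates, and objective are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):                      # toy objective to minimize (stand-in for CNN loss)
    return float(np.sum(x ** 2))

def tlbo_ga_step(pop, mut_rate=0.1):
    scores = np.array([fitness(p) for p in pop])
    teacher = pop[scores.argmin()]                   # best solution acts as teacher
    mean = pop.mean(axis=0)
    tf = rng.integers(1, 3)                          # teaching factor in {1, 2}
    new_pop = pop + rng.random(pop.shape) * (teacher - tf * mean)   # teacher phase

    order = scores.argsort()
    poor = order[len(pop) // 2:]                     # "poor students" get GA operators
    for i in poor:
        mate = pop[rng.choice(order[: len(pop) // 2])]              # mate from better half
        cut = rng.integers(1, pop.shape[1])
        child = np.concatenate([new_pop[i][:cut], mate[cut:]])      # crossover
        child += mut_rate * rng.standard_normal(pop.shape[1])       # mutation
        new_pop[i] = child

    # Greedy selection: keep whichever of the old/new candidates is fitter.
    keep = np.array([fitness(n) < s for n, s in zip(new_pop, scores)])
    return np.where(keep[:, None], new_pop, pop)

pop = rng.standard_normal((20, 5))
for _ in range(50):
    pop = tlbo_ga_step(pop)
```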

Recognition of Car Manufacturers using Faster R-CNN and Perspective Transformation

  • Ansari, Israfil;Lee, Yeunghak;Jeong, Yunju;Shim, Jaechang
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.888-896 / 2018
  • In this paper, we report the detection and recognition of vehicle logos from images captured by street CCTV. The image data include both the front and rear views of the vehicles. The proposed method is a two-step process that combines image preprocessing with a faster region-based convolutional neural network (R-CNN) for logo recognition. Without preprocessing, Faster R-CNN accuracy is high only if the image quality is good, whereas the proposed system targets street CCTV cameras, whose image quality differs from that of a front-facing camera. Using perspective transformation, the top-view images are transformed into front-view images. With this system, detection and recognition accuracy are much higher than with the existing algorithm. As a result of the experiments, the detection and recognition rate improved by 2% on daytime data, and the detection rate improved by 14% on nighttime data.
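
The perspective-correction step can be sketched with OpenCV as below; the source quadrilateral is hypothetical and would come from calibration or a prior vehicle detection in practice.

```python
import cv2
import numpy as np

# Warp a top-down CCTV view of the vehicle front/rear toward a frontal view
# before running the logo detector. Corner points here are made-up examples.
src = np.float32([[220, 140], [620, 150], [700, 460], [160, 450]])   # quad in CCTV frame
dst = np.float32([[0, 0], [480, 0], [480, 320], [0, 320]])           # frontal rectangle

frame = np.zeros((720, 1280, 3), dtype=np.uint8)                     # placeholder frame
M = cv2.getPerspectiveTransform(src, dst)
frontal = cv2.warpPerspective(frame, M, (480, 320))
# `frontal` would then be fed to the Faster R-CNN logo detector.
```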

Multi-scale Pedestrian Detection Method using Faster Region-Convolutional Neural Network (빠른 영역-합성곱 신경망을 이용한 다중 스케일 보행자 검출 방법)

  • Tran, Quoc Huy;Kim, Eung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.1-4 / 2019
  • Pedestrian detection research applying deep learning has recently been actively conducted. Researchers have continuously studied ways to lower the pedestrian false-detection rate with deep learning networks and have steadily improved performance. However, most studies still struggle to properly detect pedestrians in low-resolution images in which pedestrians appear at multiple scales. This study therefore proposes MS-FRCNN (Multi-scale Faster R-CNN), a structure that reduces the pedestrian false-detection rate by applying a new multi-feature fusion layer and multi-scale anchor boxes on top of the existing Faster R-CNN architecture. To verify the performance of the proposed method, experiments on the Caltech dataset showed that MS-FRCNN improves multi-scale pedestrian detection over other existing pedestrian detection methods by 5% under the medium condition and 3.9% under the all condition.

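A minimal sketch of multi-scale anchor generation of the kind MS-FRCNN adds on top of Faster R-CNN; the scales and pedestrian-style aspect ratios below are illustrative values, not those of the paper.

```python
import numpy as np

def anchors_at(cx, cy, scales=(32, 64, 128, 256), ratios=(0.41, 1.0, 2.44)):
    """Return (x1, y1, x2, y2) anchors centered at one feature-map location."""
    boxes = []
    for s in scales:
        for r in ratios:
            w, h = s * np.sqrt(r), s / np.sqrt(r)   # width/height for area ~ s^2
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

print(anchors_at(64, 64).shape)   # (12, 4): 4 scales x 3 aspect ratios
```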

DeepSDO: Solar event detection using deep-learning-based object detection methods

  • Baek, Ji-Hye;Kim, Sujin;Choi, Seonghwan;Park, Jongyeob;Kim, Jihun;Jo, Wonkeum;Kim, Dongil
    • The Bulletin of The Korean Astronomical Society / v.46 no.2 / pp.46.2-46.2 / 2021
  • We present automatic solar event detection using deep-learning-based object detection algorithms and the DeepSDO event dataset. The DeepSDO event dataset is a new detection dataset with bounding boxes as ground truth for three solar event features (coronal holes, sunspots, and prominences), built from Solar Dynamics Observatory data. To assess the reliability of the DeepSDO event dataset, we compared it to HEK data. We trained two representative object detection models, the Single Shot MultiBox Detector (SSD) and the Faster Region-based Convolutional Neural Network (R-CNN), with the DeepSDO event dataset. We compared the performance of the two models for the three solar events, and this study demonstrates that deep-learning-based object detection can successfully detect multiple types of solar events. In addition, we provide the DeepSDO event dataset to support further achievements in event detection in solar physics.

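As a hedged illustration (not the authors' pipeline), a torchvision Faster R-CNN could be configured for the three solar-event classes roughly as follows, assuming the torchvision >= 0.13 API and toy tensors in place of DeepSDO images and annotations.

```python
import torch
import torchvision

# Three event classes (coronal hole, sunspot, prominence) plus background.
num_classes = 4
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=num_classes)
model.train()

# Toy stand-ins for an SDO image and one bounding-box annotation.
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[100., 120., 180., 200.]]),
            "labels": torch.tensor([2])}]          # e.g. class 2 = sunspot

loss_dict = model(images, targets)                 # detection losses in train mode
loss = sum(loss_dict.values())
loss.backward()
```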

Cascade CNN with CPU-FPGA Architecture for Real-time Face Detection (실시간 얼굴 검출을 위한 Cascade CNN의 CPU-FPGA 구조 연구)

  • Nam, Kwang-Min;Jeong, Yong-Jin
    • Journal of IKEEE / v.21 no.4 / pp.388-396 / 2017
  • Since a face detection problem involves many variables such as various poses, illuminations, and occlusions, a high-performance detection system is required. Although CNNs are excellent at image classification, CNN operations require high-performance hardware resources, whereas low-cost, low-power environments are essential for small and mobile systems. In this paper, a CPU-FPGA integrated system is therefore designed based on a 3-stage cascade CNN architecture using a small FPGA. An adaptive Region of Interest (ROI) is applied to reduce the number of CNN operations by using the face information from the previous frame. A Field Programmable Gate Array (FPGA) is used to accelerate the CNN computations: the accelerator reads multiple feature maps at once on the FPGA and performs Multiply-Accumulate (MAC) operations in parallel for the convolutions. The system is implemented on an Altera Cyclone V FPGA with an embedded ARM Cortex-A9 and on-chip SRAM, and it runs at 30 FPS with HD-resolution input images. The CPU-FPGA integrated system showed 8.5 times the power efficiency of a CPU-only system.
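
A small sketch of the adaptive ROI idea, restricting the next frame's search to a margin around the previously detected face box; the margin, box, and frame size are assumptions.

```python
import numpy as np

def adaptive_roi(prev_box, frame_shape, margin=0.5):
    """Expand the previous face box by `margin` and clip it to the frame bounds."""
    if prev_box is None:                       # no face last frame: scan the full frame
        return (0, 0, frame_shape[1], frame_shape[0])
    x, y, w, h = prev_box
    dx, dy = int(w * margin), int(h * margin)
    x0, y0 = max(0, x - dx), max(0, y - dy)
    x1 = min(frame_shape[1], x + w + dx)
    y1 = min(frame_shape[0], y + h + dy)
    return (x0, y0, x1, y1)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)    # HD input frame (placeholder)
roi = adaptive_roi((600, 300, 120, 120), frame.shape)
crop = frame[roi[1]:roi[3], roi[0]:roi[2]]          # only this crop enters stage 1
```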