• Title/Summary/Keyword: Faster-RCNN


Road Crack Detection based on Object Detection Algorithm using Unmanned Aerial Vehicle Image (드론영상을 이용한 물체탐지알고리즘 기반 도로균열탐지)

  • Kim, Jeong Min;Hyeon, Se Gwon;Chae, Jung Hwan;Do, Myung Sik
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.6
    • /
    • pp.155-163
    • /
    • 2019
  • This paper proposes a new methodology for recognizing cracks on asphalt road surfaces using image data obtained with drones. The target section was Yuseong-daero, the main highway of Daejeon. Two object detection algorithms, Tiny-YOLO-V2 and Faster-RCNN, were used to recognize cracks on road surfaces and classify the crack types, and their experimental results were compared. The mean average precision of Faster-RCNN and Tiny-YOLO-V2 was 71% and 33%, respectively. Faster-RCNN, a two-stage detector, identified and separated road surface cracks better than the one-stage YOLO algorithm. In the future, it will be possible to prepare a plan for building an infrastructure asset-management system using drones and AI crack-detection systems, establishing an efficient and economical road-maintenance decision-support system and its operating environment.
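The mean-average-precision figures above rest on matching predicted boxes to ground-truth cracks at an intersection-over-union (IoU) threshold. A minimal sketch of the IoU computation (the coordinates and the 0.5 threshold are common conventions, not values taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection typically counts as a true positive when IoU >= 0.5.
```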

SSD-based Fire Recognition and Notification System Linked with Power Line Communication (유도형 전력선 통신과 연동된 SSD 기반 화재인식 및 알림 시스템)

  • Yang, Seung-Ho;Sohn, Kyung-Rak;Jeong, Jae-Hwan;Kim, Hyun-Sik
    • Journal of IKEEE
    • /
    • v.23 no.3
    • /
    • pp.777-784
    • /
    • 2019
  • A pre-fire awareness and automatic notification system is needed because damage can be minimized only if a fire is detected precisely as soon as it breaks out in a rarely visited place or in a mountainous area. In this study, we developed a Raspberry Pi-based fire recognition system using the Faster region-based convolutional neural network (Faster-RCNN) and the single shot multibox detector (SSD), and demonstrated a fire alarm system that works with power line communication. Image recognition was performed with the Raspberry Pi camera, and detected fire images were transmitted to a monitoring PC through an inductive power line communication network. The frame rate for each learning model was 0.05 fps for Faster-RCNN and 1.4 fps for SSD; SSD was 28 times faster than Faster-RCNN.

Recognition of PCB Components Using Faster-RCNN (Faster-RCNN을 이용한 PCB 부품 인식)

  • Ki, Cheol-min;Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.166-169
    • /
    • 2017
  • Currently, studies using deep learning are actively carried out, showing good results in many fields. Template matching is the method mainly used to recognize parts mounted on a PCB (Printed Circuit Board). However, template matching requires multiple templates depending on shape, orientation, and brightness; it takes a long time because it searches the entire image; and its recognition rate is considerably low. In this paper, we use Faster-RCNN, a machine learning method for classifying several objects in one image, to recognize PCB components. This method outperforms template matching in both execution time and recognition rate.
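The whole-image search that the abstract cites as a drawback of template matching can be illustrated with a brute-force normalized cross-correlation (NCC) matcher; this is a generic sketch, not the paper's implementation, and the list-of-lists image representation is chosen only to keep it self-contained:

```python
import math

def match_template(image, template):
    """Exhaustive NCC template matching over 2-D lists of grayscale values.

    The template is slid over every position of the image, which is why the
    method scales poorly with image size; it returns the best (row, col)
    position and its correlation score in [-1, 1].
    """
    th, tw = len(template), len(template[0])
    t_mean = sum(map(sum, template)) / (th * tw)
    t = [[v - t_mean for v in row] for row in template]
    t_norm = math.sqrt(sum(v * v for row in t for v in row))
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            win = [row[c:c + tw] for row in image[r:r + th]]
            w_mean = sum(map(sum, win)) / (th * tw)
            w = [[v - w_mean for v in row] for row in win]
            denom = math.sqrt(sum(v * v for row in w for v in row)) * t_norm
            score = (sum(w[i][j] * t[i][j]
                         for i in range(th) for j in range(tw)) / denom
                     ) if denom else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

A detector such as Faster-RCNN avoids both this exhaustive scan and the need for one template per orientation and brightness level.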

Municipal waste classification system design based on Faster-RCNN and YoloV4 mixed model

  • Liu, Gan;Lee, Sang-Hyun
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.305-314
    • /
    • 2021
  • Currently, due to COVID-19, household waste from food-delivery packaging has a large impact on the environment. In this paper, we design and implement Faster-RCNN, SSD, and YOLOv4 models for municipal waste detection and classification. The data set covers two types of plastic, which account for a large proportion of household waste, and aluminum cans; 1,083 aluminum-can samples and 1,003 plastic samples were studied. In addition, to increase accuracy, we compare and evaluate the loss and accuracy values for municipal waste detection across the three models Faster-RCNN, SSD, and YOLOv4. As the final result of this paper, the SSD model's average precision was 99.99%, its average precision for plastics was 97.65%, and its mAP was 99.78%, the best result.

Research on Shellfish Recognition Based on Improved Faster RCNN

  • Feng, Yiran;Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.5
    • /
    • pp.695-700
    • /
    • 2021
  • The Faster RCNN-based shellfish recognition algorithm is introduced for shellfish recognition, a task for which no deep learning-based algorithm is yet in practical use. The original feature extraction module is replaced by DenseNet, which fuses multi-level feature data, and the NMS algorithm, network depth, and merging method are optimised; this overcomes missed detections caused by shellfish overlap, multiple shellfish, and insufficient light, effectively solving the problem of low shellfish classification accuracy. In the more complex test environment, test accuracy improved by nearly 4%, higher than that of the original algorithm. This provides favourable technical support for future applications of the improved Faster RCNN approach to seafood quality classification.

Thermal Image Processing and Synthesis Technique Using Faster-RCNN (Faster-RCNN을 이용한 열화상 이미지 처리 및 합성 기법)

  • Shin, Ki-Chul;Lee, Jun-Su;Kim, Ju-Sik;Kim, Ju-Hyung;Kwon, Jang-woo
    • Journal of Convergence for Information Technology
    • /
    • v.11 no.12
    • /
    • pp.30-38
    • /
    • 2021
  • In this paper, we propose a method for extracting thermal data from thermal images and using the data to improve detection of heating equipment. The main goal is to read the thermal image file in bytes to extract the thermal data and the real image, and to apply a composite image, obtained by synthesizing the image and data, to a deep learning model to improve detection accuracy for heating facilities. KHNP data were used for evaluation, and Faster-RCNN was used as the learning model to compare deep learning detection performance across the data groups. In average precision evaluation, the proposed method improved on the existing method by 0.17 on average. This study thus combined national thermal image data and deep learning detection to improve effective data utilization.

Comparison of Deep Learning Networks in Voice-Guided System for The Blind (시각장애인을 위한 음성안내 네비게이션 시스템의 심층신경망 성능 비교)

  • An, Ryun-Hui;Um, Sung-Ho;Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.175-177
    • /
    • 2022
  • This paper introduces a system that helps the blind travel to a destination and compares the performance of three types of deep neural networks (DNNs) used in the system. The system consists of a smartphone application that finds a route from the current location to the destination using GPS and a navigation API, and a bus-station module that, using a DNN and a bus-information API, recognizes and announces the type and number of the bus about to arrive at the stop. For recognizing the bus number to board, we adopted Faster-RCNN, YOLOv4, and YOLOv5s; YOLOv5s showed the best performance in accuracy and speed.

Equipment and Worker Recognition of Construction Site with Vision Feature Detection

  • Qi, Shaowen;Shan, Jiazeng;Xu, Lei
    • International Journal of High-Rise Buildings
    • /
    • v.9 no.4
    • /
    • pp.335-342
    • /
    • 2020
  • This article proposes a new method, based on the visual characteristics of objects and machine learning, to achieve semi-automated recognition of personnel, machines, and materials on construction sites. Balancing real-time performance and accuracy, Faster RCNN (Faster Region-based Convolutional Neural Networks) with transfer learning appears to be a rational choice. After fine-tuning and testing an ImageNet-pretrained Faster RCNN, the precision (mAP) reached 67.62% and the recall (AR) reached 56.23%; in other words, the recognition method achieved rational performance. Further inference on video of the construction of Huoshenshan Hospital also indicates preliminary success.

Vehicle Manufacturer Recognition using Deep Learning and Perspective Transformation

  • Ansari, Israfil;Shim, Jaechang
    • Journal of Multimedia Information System
    • /
    • v.6 no.4
    • /
    • pp.235-238
    • /
    • 2019
  • In the real world, object detection is an active research topic for understanding different objects in images. Different models have been presented in the past with significant results. In this paper we present vehicle logo detection using the object detection models You Only Look Once (YOLO) and Faster Region-based CNN (F-RCNN). Both front and rear views of the vehicles were used for training and testing the proposed method. Along with deep learning, an image pre-processing algorithm called perspective transformation is applied to all test images: top-view images are transformed into front-view images. This pre-processing yields a higher detection rate than raw images. Furthermore, the YOLO model gives better results than the F-RCNN model.
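A perspective transformation maps each top-view pixel to its front-view position through a 3x3 homography matrix. In practice one would estimate the matrix from four point correspondences and warp the whole image (OpenCV provides `cv2.getPerspectiveTransform` and `cv2.warpPerspective` for this); the per-point core of the mapping, with an assumed illustrative matrix, is:

```python
def warp_point(H, x, y):
    """Apply a 3x3 homography H (nested lists, row-major) to a point.

    The point is lifted to homogeneous coordinates, multiplied by H,
    and projected back by dividing by w; the division by w is what
    makes the mapping perspective rather than merely affine.
    """
    xw = H[0][0] * x + H[0][1] * y + H[0][2]
    yw = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xw / w, yw / w
```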

A method based on Multi-Convolution layers Joint and Generative Adversarial Networks for Vehicle Detection

  • Han, Guang;Su, Jinpeng;Zhang, Chengwei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.4
    • /
    • pp.1795-1811
    • /
    • 2019
  • In order to achieve rapid and accurate detection of vehicle objects in complex traffic conditions, we propose a novel vehicle detection method. Firstly, more contextual and small-object vehicle information is obtained by our Joint Feature Network (JFN). Secondly, our Evolved Region Proposal Network (EPRN) generates initial anchor boxes by adding an improved version of the region proposal network, and at the same time filters out a large number of false vehicle boxes by soft non-maximum suppression (soft-NMS). Then, our Mask Network (MaskN) generates examples that include vehicle occlusion, so that the generator and discriminator can learn from each other to further improve vehicle detection capability. Finally, the candidate vehicle detection boxes are optimized into the final detection boxes by the Fine-Tuning Network (FTN). In an evaluation on the DETRAC benchmark dataset, our method exceeds Faster-RCNN by 11.15%, YOLO by 11.88%, and EB by 1.64% in mAP. Our algorithm also achieved top-2 in the vehicle category of the KITTI dataset compared with MS-CNN, YOLO-v3, RefineNet, RetinaNet, Faster-RCNN, DSSD, and YOLO-v2.
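The soft-NMS filtering step the abstract mentions can be sketched in its common Gaussian-decay form: instead of discarding boxes that overlap a higher-scoring box, their scores are decayed, which helps in crowded traffic scenes where vehicles genuinely overlap. The `sigma` and threshold values below are conventional defaults, not values from the paper:

```python
import math

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay overlapping scores instead of dropping boxes.

    Returns the indices of kept boxes in the order they were selected.
    """
    scores = list(scores)
    remaining = list(range(len(boxes)))
    keep = []
    while remaining:
        best = max(remaining, key=lambda i: scores[i])
        keep.append(best)
        remaining.remove(best)
        for i in remaining:
            ov = iou(boxes[best], boxes[i])
            scores[i] *= math.exp(-(ov * ov) / sigma)  # Gaussian penalty
        # Boxes whose score decays below the threshold are discarded.
        remaining = [i for i in remaining if scores[i] > score_thresh]
    return keep
```

Plain NMS would delete the second of two heavily overlapping boxes outright; soft-NMS keeps it with a reduced score, so a second real vehicle behind the first can still be detected.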