• Title/Summary/Keyword: YOLOv5s

47 search results

Vehicle Detection at Night Based on Style Transfer Image Enhancement

  • Jianing Shen;Rong Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.5
    • /
    • pp.663-672
    • /
    • 2023
  • Most vehicle detection methods extract vehicle features poorly at night, which reduces their robustness; hence, this study proposes a night vehicle detection method based on style-transfer image enhancement. First, a style transfer model is constructed using a cycle-consistent generative adversarial network (CycleGAN), and the daytime images in the BDD100K dataset are converted into nighttime images to form a style dataset. The dataset is then split using its labels. Finally, nighttime vehicle images are detected with a YOLOv5s network for reliable recognition of vehicle information in a complex environment. Experimental results on the BDD100K dataset show that the transferred night vehicle images are clear and meet the requirements. The precision, recall, mAP@.5, and mAP@.5:.95 reached 0.696, 0.292, 0.761, and 0.454, respectively.
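The two-stage pipeline this abstract describes (CycleGAN day-to-night transfer, then YOLOv5s detection) could be sketched as below. The generator checkpoint name, image path, and the vehicle class list and confidence threshold are illustrative assumptions, not details taken from the paper; only the small post-filtering helper is fully self-contained.

```python
# Vehicle classes of interest; the class list and threshold are assumptions.
VEHICLE_CLASSES = {"car", "bus", "truck", "motorcycle"}

def filter_vehicles(detections, conf_thres=0.25):
    """Keep vehicle detections above a confidence threshold.

    detections: list of (class_name, confidence, (x1, y1, x2, y2)) tuples,
    as could be read off a YOLOv5 results table.
    """
    return [d for d in detections
            if d[0] in VEHICLE_CLASSES and d[1] >= conf_thres]

def run_pipeline():
    # Full pipeline; requires torch and the ultralytics/yolov5 hub model.
    import torch
    # 1) Day -> night style transfer with a trained CycleGAN generator
    #    (checkpoint name is hypothetical):
    # G_day2night = torch.load("cyclegan_day2night.pt")
    # night_img = G_day2night(day_img)
    # 2) Detect vehicles on the transferred image with YOLOv5s:
    model = torch.hub.load("ultralytics/yolov5", "yolov5s")
    results = model("night_frame.jpg")  # hypothetical image path
    print(results.pandas().xyxy[0])
```

Calling `run_pipeline()` needs PyTorch and network access; `filter_vehicles` can be used on any detector's output.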

Corroded and loosened bolt detection of steel bolted joints based on improved you only look once network and line segment detector

  • Youhao Ni;Jianxiao Mao;Hao Wang;Yuguang Fu;Zhuo Xi
    • Smart Structures and Systems
    • /
    • v.32 no.1
    • /
    • pp.23-35
    • /
    • 2023
  • The steel bolted joint is an important part of a steel structure, and its damage directly affects the structure's bearing capacity and durability. Existing research mainly focuses on identifying corroded bolts and loosened bolts separately, and there are few studies covering multiple states. A detection framework for corroded and loosened bolts is proposed in this study, with the following innovations: (i) a Vision Transformer (ViT) replaces the third and fourth C3 modules of the you-only-look-once version 5s (YOLOv5s) algorithm, which increases the attention weights of feature channels and the feature extraction capability; (ii) three states of steel bolts are considered: corroded bolt, missing bolt, and clean bolt; (iii) a line segment detector (LSD) is introduced to calculate the bolt rotation angle, which enables bolt looseness detection. The improved YOLOv5s model was validated on the dataset, and the mean average precision (mAP) increased from 0.902 to 0.952. On a lab-scale joint, the LSD algorithm and the Hough transform were compared from different perspective angles: the error of the bolt loosening angle is within 1.09% for the LSD algorithm, versus 8.91% for the Hough transform. Furthermore, the proposed framework was applied to full-scale joints of a steel bridge in China; synthetic images of loosened bolts were successfully identified, and the multiple states were well detected. The proposed framework can therefore serve as an alternative for management departments to monitor steel bolted joints.
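The looseness measurement in (iii) reduces to comparing the orientation of a line segment on the bolt head against a baseline orientation. A minimal sketch of that angle arithmetic, assuming segments come from a detector such as OpenCV's `cv2.createLineSegmentDetector` (the detector call itself is omitted here):

```python
import math

def segment_angle(p1, p2):
    """Orientation of a line segment in degrees, normalised to [0, 180).

    A segment has no direction, so angles 180 degrees apart are identical.
    """
    ang = math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    return ang % 180.0

def loosening_angle(baseline_deg, current_deg):
    """Smallest rotation between two segment orientations (0-90 degrees).

    The baseline is the orientation measured when the bolt was known tight.
    """
    d = abs(current_deg - baseline_deg) % 180.0
    return min(d, 180.0 - d)
```

In practice the baseline would be stored per bolt at installation time, and a loosening angle above some tolerance would be flagged; the tolerance is application-specific and not specified here.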

Recognition of dog's front face using deep learning and machine learning (딥러닝 및 기계학습 활용 반려견 얼굴 정면판별 방법)

  • Kim, Jong-Bok;Jang, Dong-Hwa;Yang, Kayoung;Kwon, Kyeong-Seok;Kim, Jung-Kon;Lee, Joon-Whoan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.12
    • /
    • pp.1-9
    • /
    • 2020
  • As the number of pet dogs rapidly increases, abandoned and lost dogs are also increasing. In Korea, animal registration has been in force since 2014, but the registration rate is not high owing to safety and effectiveness issues, so biometrics is attracting attention as an alternative. To increase the recognition rate of biometrics, it is necessary to collect facial biometric images in as consistent a form as possible. This paper proposes a method to determine whether a dog is facing front in a real-time video. The proposed method detects the dog's eyes and nose using deep learning and extracts five types of face-orientation information from the relative size and position of the detected parts. A machine learning classifier then determines whether the dog is facing front. We used 2,000 dog images for training, validation, and testing. YOLOv3 and YOLOv4 were used to detect the eyes and nose, and a multi-layer perceptron (MLP), random forest (RF), and support vector machine (SVM) were used as classifiers. With YOLOv4, the RF classifier, and all five types of face-orientation information, the front-face recognition rate was best at 95.25%, and we found that real-time processing is possible.
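The intermediate step, turning eye and nose bounding boxes into orientation features for the classifier, can be sketched as follows. The paper does not list its five features, so the specific ones below (eye size ratio, nose offset, eye-line tilt, nose drop, inter-eye distance) are plausible assumptions of the same kind, not the authors' definitions.

```python
def center(box):
    """Centre of a bounding box given as (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def area(box):
    return max(0.0, box[2] - box[0]) * max(0.0, box[3] - box[1])

def face_orientation_features(left_eye, right_eye, nose):
    """Five hypothetical orientation features from detected part boxes."""
    lc, rc, nc = center(left_eye), center(right_eye), center(nose)
    inter_eye = ((rc[0] - lc[0]) ** 2 + (rc[1] - lc[1]) ** 2) ** 0.5
    eye_mid_x = (lc[0] + rc[0]) / 2.0
    safe = max(inter_eye, 1e-6)  # avoid division by zero on degenerate input
    return (
        area(left_eye) / max(area(right_eye), 1e-6),       # eye size ratio
        (nc[0] - eye_mid_x) / safe,                        # nose horizontal offset
        (rc[1] - lc[1]) / safe,                            # eye-line tilt
        (nc[1] - (lc[1] + rc[1]) / 2.0) / safe,            # nose vertical drop
        inter_eye,                                         # scale cue
    )
```

A frontal face would give a size ratio near 1 and a horizontal offset near 0; these tuples could then be fed to, e.g., scikit-learn's `RandomForestClassifier`, matching the RF classifier the paper found best.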

A Scene-Specific Object Detection System Utilizing the Advantages of Fixed-Location Cameras

  • Jin Ho Lee;In Su Kim;Hector Acosta;Hyeong Bok Kim;Seung Won Lee;Soon Ki Jung
    • Journal of information and communication convergence engineering
    • /
    • v.21 no.4
    • /
    • pp.329-336
    • /
    • 2023
  • This paper introduces an edge AI-based scene-specific object detection system for long-term traffic management, focusing on analyzing congestion and movement via cameras. It aims to balance fast processing and accuracy in traffic-flow analysis using edge computing. We adapt the four-head YOLOv5 model into a scene-specific model that exploits the properties of a fixed camera's scene. By blocking nodes, this model detects objects selectively by scale, so that only objects of certain sizes are identified. A decision module then selects the most suitable object detector for each scene, improving inference speed without significant accuracy loss, as demonstrated in our experiments.
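The decision module's core idea, keeping only the detection heads whose scale range actually occurs in a fixed camera's scene, can be sketched as below. The head names follow the common P3-P6 multi-scale convention, but the pixel ranges and the minimum-hit rule are illustrative assumptions, not values from the paper.

```python
# Hypothetical scale ranges (object height in pixels) for a 4-head YOLOv5.
HEAD_RANGES = {
    "P3": (0, 64),            # small objects
    "P4": (32, 128),
    "P5": (96, 320),
    "P6": (256, float("inf")),  # very large objects
}

def select_heads(object_sizes, min_hits=1):
    """Choose which detection heads to keep for a scene.

    object_sizes: heights of objects observed in this camera's scene over
    a calibration period. Heads whose range never fires are blocked.
    """
    keep = []
    for head, (lo, hi) in HEAD_RANGES.items():
        hits = sum(1 for s in object_sizes if lo <= s < hi)
        if hits >= min_hits:
            keep.append(head)
    return keep
```

A camera mounted far from the road would then run only the small-object heads, reducing inference cost, which matches the paper's goal of per-scene speedup on edge hardware.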

Application of Deep Learning-based Object Detection and Distance Estimation Algorithms for Driving to Urban Area (도심로 주행을 위한 딥러닝 기반 객체 검출 및 거리 추정 알고리즘 적용)

  • Seo, Juyeong;Park, Manbok
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.3
    • /
    • pp.83-95
    • /
    • 2022
  • This paper proposes a system that performs object detection and distance estimation for application to autonomous vehicles. Object detection is performed by a network that adjusts the split grid to the input-image aspect ratio, building on the widely used deep learning model YOLOv4, and is trained on a custom dataset. The distance to a detected object is estimated using its bounding box and a homography. Experiments show that the proposed method improves overall detection performance at a processing speed close to real time. Compared with the original YOLOv4, the total mAP of the proposed method increased by 4.03%. Recognition accuracy for objects that frequently occur in urban driving, such as pedestrians, vehicles, construction sites, and PE drums, was improved. The processing speed is approximately 55 FPS, and the average distance estimation error was 5.25 m in the X coordinate and 0.97 m in the Y coordinate.
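The bounding box plus homography distance estimate works by mapping the box's bottom-centre pixel (where the object touches the road) through a calibrated image-to-ground homography. A minimal sketch, assuming the 3x3 matrix H has already been obtained offline (e.g. with `cv2.findHomography` from known road markings):

```python
def apply_homography(H, pt):
    """Map an image point to road-plane coordinates with a 3x3 homography.

    H is a nested list [[h11, h12, h13], ...]; the result is divided by the
    projective coordinate w, as usual for homogeneous transforms.
    """
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def estimate_distance(H, box):
    """Ground distance to the bottom-centre of a bounding box (x1, y1, x2, y2).

    The bottom edge is used because that is where the object meets the road,
    the plane on which the homography is valid.
    """
    u = (box[0] + box[2]) / 2.0
    v = box[3]
    X, Y = apply_homography(H, (u, v))
    return (X * X + Y * Y) ** 0.5
```

The returned distance is in whatever metric units the homography was calibrated in; an object partly above the road plane (e.g. an occluded vehicle) violates the ground-contact assumption and will be mis-estimated.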

Deep Learning based Distress Awareness System for Small Boat (딥러닝 기반 소형선박 승선자 조난 인지 시스템)

  • Chon, Haemyung;Noh, Jackyou
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.17 no.5
    • /
    • pp.281-288
    • /
    • 2022
  • According to statistics compiled by the Korea Coast Guard, the number of accidents on small boats under 5 tons is increasing every year, largely because only a small number of people are on board. Previously developed maritime distress and safety systems are not widely deployed because passengers must carry additional remote equipment. The purpose of this study is to develop a distress awareness system that recognizes man-overboard situations in real time. This study presents the passenger tracking part of the small-ship distress awareness system, which generates passengers' location information in real time using deep learning based object detection and tracking technologies. The system consists of the following steps: 1) passenger location information is generated as bounding boxes by an object detection model (YOLOv3); 2) based on the bounding box data, Deep SORT predicts each box's position in the next frame with a Kalman filter; 3) when an actual bounding box appears within the range predicted by the Kalman filter, Deep SORT recognizes it as the same object and repeats the process; 4) if a bounding box leaves the ship's area or the number of tracked occupants changes unexpectedly, the system declares a distress situation and issues an alert. This study is expected to complement the problems of existing technologies and ensure the safety of individuals aboard small boats.
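The detection and Kalman-filter tracking in steps 1-3 are handled by YOLOv3 and Deep SORT themselves; the paper-specific logic is the distress rule in step 4. A minimal sketch of that rule, assuming the ship's deck area is given as an axis-aligned rectangle in image coordinates (a simplifying assumption; the real region could be any polygon):

```python
def box_center(box):
    """Centre of a bounding box given as (x1, y1, x2, y2)."""
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def inside(region, box):
    """True if the box centre lies inside the ship region (x1, y1, x2, y2)."""
    cx, cy = box_center(box)
    return region[0] <= cx <= region[2] and region[1] <= cy <= region[3]

def check_distress(ship_region, tracked_boxes, expected_count):
    """Step 4 of the pipeline: alert when the tracked-occupant count changes
    or any tracked box has left the ship's area."""
    if len(tracked_boxes) != expected_count:
        return True
    return any(not inside(ship_region, b) for b in tracked_boxes)
```

In a deployment this check would run once per frame on Deep SORT's confirmed tracks, with some debouncing over several frames to avoid false alarms from momentary occlusions.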

A Study on SNS Reviews Analysis based on Deep Learning for User Tendency (개인 성향 추출을 위한 딥러닝 기반 SNS 리뷰 분석 방법에 관한 연구)

  • Park, Woo-Jin;Lee, Ju-Oh;Lee, Hyung-Geol;Kim, Ah-Yeon;Heo, Seung-Yeon;Ahn, Yong-Hak
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.11
    • /
    • pp.9-17
    • /
    • 2020
  • In this paper, we propose a deep learning based SNS review analysis method for extracting user tendencies. Existing SNS review analysis methods mostly process reviews based on the highest weight and therefore fail to reflect the variety of opinions across different interests. To solve this problem, the proposed method extracts a user's personal tendencies from SNS reviews about food. It performs classification using the YOLOv3 model, performs sentiment analysis with a BiLSTM model, and then extracts various personal tendencies through a set algorithm. Experiments showed Top-1 accuracy of 88.61% and Top-5 accuracy of 90.13% for the YOLOv3 model, and 90.99% accuracy for the BiLSTM model. A heat map also showed the diversity of individual tendencies in the SNS review classification. In the future, this approach is expected to extract personal tendencies in various fields and be used for customized services or marketing.
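The final "set algorithm" step, combining the detector's food classes with the BiLSTM's sentiment labels into tendencies, is not specified in the abstract; one plausible set-based reading is sketched below, with the three-way likes/dislikes/mixed split being an assumption of this sketch.

```python
def extract_tendencies(reviews):
    """Combine per-review (food_class, sentiment) pairs into tendency sets.

    reviews: list of (food_class, sentiment) where sentiment is 'pos' or
    'neg', i.e. one YOLOv3 classification plus one BiLSTM label per review.
    """
    likes = {c for c, s in reviews if s == "pos"}
    dislikes = {c for c, s in reviews if s == "neg"}
    # A class reviewed with both polarities is mixed, not a clear tendency.
    mixed = likes & dislikes
    return {
        "likes": likes - mixed,
        "dislikes": dislikes - mixed,
        "mixed": mixed,
    }
```

The set operations make the result order-independent and deduplicated, which matches the goal of profiling a user's overall tendency rather than any single review.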

Multi-Human Behavior Recognition Based on Improved Posture Estimation Model

  • Zhang, Ning;Park, Jin-Ho;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.5
    • /
    • pp.659-666
    • /
    • 2021
  • With the continuous development of deep learning, human behavior recognition algorithms have achieved good results. However, in a multi-person environment, complex scenes pose a great challenge to recognition efficiency. To this end, this paper proposes a multi-person pose estimation model. First, human detectors in top-down frameworks mostly use two-stage object detection models, which run slowly; the single-stage YOLOv3 detector is used instead, effectively improving running speed and model generalization, and depthwise separable convolutions further improve detection speed and the model's ability to extract proposal regions. Second, the pose estimation model combines a feature pyramid network with contextual semantic information, and the OHEM algorithm is used to handle hard keypoint detection cases, improving the accuracy of multi-person pose estimation. Finally, the Euclidean distance between corresponding keypoints is used to measure the similarity of postures within a frame and to eliminate redundant postures.
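The final deduplication step, Euclidean keypoint distance as a posture similarity measure, is simple enough to sketch directly. The mean-distance formulation and the greedy suppression order below are assumptions of this sketch; the paper does not give its exact aggregation or threshold.

```python
def pose_distance(kps_a, kps_b):
    """Mean Euclidean distance between corresponding keypoints.

    kps_a, kps_b: equal-length lists of (x, y) keypoint coordinates.
    """
    assert len(kps_a) == len(kps_b)
    total = sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                for (ax, ay), (bx, by) in zip(kps_a, kps_b))
    return total / len(kps_a)

def suppress_duplicates(poses, thresh):
    """Greedily drop poses whose keypoints nearly coincide with a kept pose.

    Two estimates of the same person yield almost identical keypoints, so a
    small mean distance flags a redundant posture.
    """
    kept = []
    for p in poses:
        if all(pose_distance(p, q) > thresh for q in kept):
            kept.append(p)
    return kept
```

In practice poses would be sorted by confidence before suppression so the best estimate of each person is the one kept, analogous to non-maximum suppression on boxes.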

Implementation of an Intelligent Video Detection System using Deep Learning in the Manufacturing Process of Tungsten Hexafluoride (딥러닝을 이용한 육불화텅스텐(WF6) 제조 공정의 지능형 영상 감지 시스템 구현)

  • Son, Seung-Yong;Kim, Young Mok;Choi, Doo-Hyun
    • Korean Journal of Materials Research
    • /
    • v.31 no.12
    • /
    • pp.719-726
    • /
    • 2021
  • Through chemical vapor deposition, tungsten hexafluoride (WF6) is widely used in the semiconductor industry to form tungsten films. WF6 is produced through manufacturing processes such as pulverization, wet smelting, calcination, and reduction of tungsten ores, and this manufacturing process requires thorough quality control to improve productivity. In this paper, a real-time detection system for oxidation defects that occur in the WF6 manufacturing process is proposed. The proposed system is implemented by applying YOLOv5, which is based on a convolutional neural network (CNN), and is expected to enable more stable management than the existing approach, which relies on skilled workers. The implementation of the proposed system and performance comparison results are presented to demonstrate its feasibility for improving the efficiency of the WF6 manufacturing process. The system applying YOLOv5s, the model best suited to the actual production environment, demonstrates high accuracy (mAP@0.5 of 99.4%) and real-time detection speed (46 FPS).

Detection and Recognition of Vehicle License Plates using Deep Learning in Video Surveillance

  • Farooq, Muhammad Umer;Ahmed, Saad;Latif, Mustafa;Jawaid, Danish;Khan, Muhammad Zofeen;Khan, Yahya
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.11
    • /
    • pp.121-126
    • /
    • 2022
  • The number of vehicles has increased exponentially over the past 20 years due to technological advancements, and it is becoming almost impossible to manually control and manage the traffic in a city like Karachi. Without license plate recognition, traffic management is impossible, so a framework for license plate detection and recognition is proposed to overcome these issues. License plate detection and recognition is performed in two steps: first, accurately detect the license plate in the given image; second, read and recognize each character of that plate. Many earlier algorithms are based on colour, texture, edge detection, and template matching, whereas many researchers now propose deep learning based methods. This research proposes a framework for license plate detection and recognition using a custom YOLOv5 object detector, image segmentation techniques, and Tesseract optical character recognition (OCR). The accuracy of this framework is 0.89.
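The detect-then-read pipeline can be sketched as below. The detector weights file, crop handling, and Tesseract configuration are illustrative assumptions; only the small OCR-output cleanup helper, which normalises raw text into a plate candidate, is self-contained.

```python
import re

def normalize_plate(raw_text):
    """Clean raw OCR output into a candidate plate string.

    The rule here (uppercase, keep only A-Z and 0-9) is illustrative; real
    systems add per-country plate format checks on top of this.
    """
    return re.sub(r"[^A-Z0-9]", "", raw_text.upper())

def read_plate(image_path):
    # Full pipeline; requires torch, opencv-python and pytesseract, plus a
    # trained plate detector (the weights path is hypothetical).
    import cv2
    import torch
    import pytesseract
    model = torch.hub.load("ultralytics/yolov5", "custom", path="plate_yolov5.pt")
    img = cv2.imread(image_path)
    det = model(img).xyxy[0]           # detections for this image
    x1, y1, x2, y2 = map(int, det[0, :4])  # highest-confidence plate box
    crop = img[y1:y2, x1:x2]
    raw = pytesseract.image_to_string(crop, config="--psm 7")  # single line
    return normalize_plate(raw)
```

`--psm 7` tells Tesseract to treat the crop as a single text line, which suits a plate; binarisation or perspective correction of the crop before OCR would be a natural extension.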