• Title/Summary/Keyword: Real Time Object Detection


Traveling Direction Estimation of Autonomous Vehicle using Vision System (비젼 시스템을 이용한 자율 주행 차량의 실시간 주행 방향 추정)

  • 강준필;정길도
    • Proceedings of the IEEK Conference
    • /
    • 2001.06e
    • /
    • pp.127-130
    • /
    • 2001
  • In this paper, we describe a method for estimating the traveling direction of an autonomous vehicle. For the development of autonomous vehicles, it is important to detect the road lane and to reckon the traveling direction. The aim of the proposed algorithm is to perform lane detection in real time on a standalone vision system. We then calculate an efficient traveling direction to obtain the steering angle for the lateral control system. The autonomous vehicle thus keeps to the center of the lane by adjusting the current steering angle according to the traveling direction.

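
The direction estimate described above can be sketched minimally as follows; this is an illustration under assumed inputs (lane-edge x-positions already detected at a near and a far image row), not the authors' implementation:

```python
import math

# Hypothetical sketch: estimate a traveling direction from detected lane
# boundaries.  near_left/near_right are the lane-edge x-coordinates at the
# bottom (near) row; far_left/far_right at a look-ahead (far) row that is
# row_gap pixels higher in the image.
def travel_direction(near_left, near_right, far_left, far_right, row_gap):
    """Return (lateral_drift_px, heading_deg) of the lane centerline."""
    near_center = (near_left + near_right) / 2.0
    far_center = (far_left + far_right) / 2.0
    # Angle of the lane centerline relative to straight ahead (image vertical).
    heading = math.degrees(math.atan2(far_center - near_center, row_gap))
    return far_center - near_center, heading

# Lane center drifts from x=320 (near) to x=340 (far): a positive heading,
# i.e. the lane bends to the right of the current steering direction.
drift, heading = travel_direction(200, 440, 260, 420, 100)
```

The heading would then be fed to the lateral controller as a steering-angle correction.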

Development of visitor counter system for disaster situations and marketing based on real-time object recognition technology (재난상황과 마케팅을 위한 실시간 객체인식 기술기반 출입자 카운터시스템 개발)

  • Kim, Young-gwon;Jeong, Jae-hoon;Kim, Jae-hyeon;Kang, Myeung-jin;Kang, Min-sung;Ju, Hui-je;Jang, Woo-hyun;Yun, Tae-jin
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2021.01a
    • /
    • pp.187-188
    • /
    • 2021
  • Recently, as social distancing has been emphasized under COVID-19, it has become important to track the number of visitors and the crowd density at tourist attractions and multi-use facilities. Accordingly, a system is needed that can monitor the number of people entering multi-use facilities in real time and at low cost using CCTV footage. To this end, this paper develops a system that measures the number and movement paths of visitors using deep-learning-based real-time object recognition and provides visitor statistics through a web browser. The real-time object recognition algorithms YOLOv4 and YOLOv4-tiny were run on an Nvidia Jetson AGX Xavier and on a desktop PC, and the FPS and object recognition rate of each algorithm were compared and analyzed before selecting the algorithm to apply.

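
The counting step on top of the detector can be sketched as below; this is a minimal illustration assuming per-frame centroids of a tracked person (e.g. from YOLOv4 detections) are already available, not the paper's implementation:

```python
# Hypothetical sketch: count entries and exits as a person's track crosses a
# virtual counting line at y = line_y (image y grows downward, so moving
# "down" in the image is treated here as entering).
def count_crossings(track, line_y):
    """track: list of (x, y) centroids in frame order."""
    entries = exits = 0
    for (_, y0), (_, y1) in zip(track, track[1:]):
        if y0 < line_y <= y1:      # moved downward across the line
            entries += 1
        elif y1 < line_y <= y0:    # moved upward across the line
            exits += 1
    return entries, exits
```

Aggregating these counts over all tracks gives the occupancy statistics served to the web browser.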

Automatic Recognition of Symbol Objects in P&IDs using Artificial Intelligence (인공지능 기반 플랜트 도면 내 심볼 객체 자동화 검출)

  • Shin, Ho-Jin;Jeon, Eun-Mi;Kwon, Do-kyung;Kwon, Jun-Seok;Lee, Chul-Jin
    • Plant Journal
    • /
    • v.17 no.3
    • /
    • pp.37-41
    • /
    • 2021
  • P&ID (Piping and Instrument Diagram) is a key drawing in the engineering industry because it contains information about the units and instrumentation of the plant. Until now, simple repetitive tasks such as listing the symbols in P&ID drawings have been done manually, consuming a great deal of time and manpower. Deep learning models based on CNNs (Convolutional Neural Networks) have been studied for drawing object detection, but with a detection time of about 30 minutes and an accuracy of about 90%, their performance is not sufficient for deployment in the real world. In this study, the symbols in a drawing are detected using 1-stage object detection algorithms, which handle both region proposal and detection in a single pass. Specifically, we build the training data using an image labeling tool and present the results of recognizing the symbols in a drawing with the trained deep learning model.
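
The training-data step can be sketched as follows; this converts one labeled symbol box into the normalized annotation format commonly used by 1-stage detectors such as YOLO. It is an illustrative helper, not the authors' tooling:

```python
# Hypothetical sketch: turn pixel-coordinate box corners from a labeling tool
# into a normalized "class cx cy w h" line, with all values relative to the
# drawing's width and height.
def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    cx = (x_min + x_max) / 2.0 / img_w   # box center x, normalized
    cy = (y_min + y_max) / 2.0 / img_h   # box center y, normalized
    w = (x_max - x_min) / img_w          # box width, normalized
    h = (y_max - y_min) / img_h          # box height, normalized
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"
```

One such line per symbol, in a text file per drawing, is the usual input to training.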

A New Object Region Detection and Classification Method using Multiple Sensors on the Driving Environment (다중 센서를 사용한 주행 환경에서의 객체 검출 및 분류 방법)

  • Kim, Jung-Un;Kang, Hang-Bong
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1271-1281
    • /
    • 2017
  • It is essential to collect and analyze information about targets around the vehicle for autonomous driving. Based on this analysis, environmental information such as location and direction should be derived in real time to control the vehicle. In particular, occlusion or truncation of objects in the image must be handled to provide accurate information about the vehicle's environment and to facilitate safe operation. In this paper, we propose a method that simultaneously generates 2D and 3D bounding box proposals using a LiDAR Edge produced by filtering LiDAR sensor information. We classify each proposal by feeding it to a Region-based Fully Convolutional Network (R-FCN), a deep-learning-based object classifier that takes two-dimensional images as input. Each 3D box is then refined using the class label and the subcategory information of its class to produce the final 3D bounding box for the object. Because the 3D bounding boxes are created in 3D space, object information such as spatial coordinates and object size can be obtained at once, and the 2D bounding boxes associated with the 3D boxes do not suffer from problems such as occlusion.
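
The link between the two box types can be sketched with a pinhole projection; this is a generic illustration (assumed focal length f and principal point (cx, cy)), not the paper's proposal mechanism:

```python
# Hypothetical sketch: project the 8 corners of a 3D bounding box (camera
# coordinates, Z > 0 meters in front of the camera) into the image plane and
# take the enclosing axis-aligned 2D box.
def box3d_to_box2d(corners, f, cx, cy):
    """corners: 8 (X, Y, Z) points; returns (u_min, v_min, u_max, v_max)."""
    us = [f * X / Z + cx for X, Y, Z in corners]
    vs = [f * Y / Z + cy for X, Y, Z in corners]
    return min(us), min(vs), max(us), max(vs)
```

Because the 2D box is derived from the 3D one, both stay consistent for every classified proposal.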

Face Detection Using Shapes and Colors in Various Backgrounds

  • Lee, Chang-Hyun;Lee, Hyun-Ji;Lee, Seung-Hyun;Oh, Joon-Taek;Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.7
    • /
    • pp.19-27
    • /
    • 2021
  • In this paper, we propose a method for detecting characters in images and locating their facial regions, which consists of two tasks. First, we separate the two characters to detect the face position of each character in the frame. For fast detection, we use You Only Look Once (YOLO), which finds faces in the image in real time, to extract the locations of the faces and mark them with object detection boxes. Second, we present three image processing methods to detect the precise face area based on these object detection boxes. Each method uses HSV values extracted from the region estimated by a detection figure to detect the face region of the characters, and changes the size and shape of the detection figure to compare the accuracy of each method. Each face detection method is compared and analyzed against comparative data and image processing data for reliability verification. As a result, we achieved the highest accuracy of 87% with the split rectangular method among the circular, rectangular, and split rectangular methods.
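
The HSV step can be sketched as a per-pixel skin test; the thresholds below are illustrative assumptions, not the paper's values:

```python
import colorsys

# Hypothetical sketch: keep only pixels whose hue/saturation/value fall
# inside an assumed skin-tone range.  colorsys works in 0..1, so pixel
# channels are scaled from 0..255 first.
def skin_mask(rgb_pixels, h_max=0.14, s_min=0.15, v_min=0.35):
    """rgb_pixels: list of (r, g, b) in 0..255; returns list of booleans."""
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(h <= h_max and s >= s_min and v >= v_min)
    return mask
```

Running this inside each detection figure, then comparing the surviving area across figure shapes, mirrors the comparison described above.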

Detection and Recognition of Illegally Parked Vehicles Based on an Adaptive Gaussian Mixture Model and a Seed Fill Algorithm

  • Sarker, Md. Mostafa Kamal;Weihua, Cai;Song, Moon Kyou
    • Journal of information and communication convergence engineering
    • /
    • v.13 no.3
    • /
    • pp.197-204
    • /
    • 2015
  • In this paper, we present an algorithm for the detection of illegally parked vehicles based on a combination of some image processing algorithms. A digital camera is fixed in the illegal parking region to capture the video frames. An adaptive Gaussian mixture model (GMM) is used for background subtraction in a complex environment to identify the regions of moving objects in our test video. Stationary objects are detected by using the pixel-level features in time sequences. A stationary vehicle is detected by using the local features of the object, and thus, information about illegally parked vehicles is successfully obtained. An automatic alarm system can be utilized according to the different regulations of different illegal parking regions. The results of this study obtained using a test video sequence of a real-time traffic scene show that the proposed method is effective.
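
The stationary-object logic can be sketched on a single pixel's time sequence; this is a simplified illustration of the idea (a foreground pixel whose value stays stable for several consecutive frames is flagged as stationary), not the paper's GMM pipeline:

```python
# Hypothetical sketch: per-pixel stationary detection.  `values` is the
# pixel's intensity over time, `background` its modeled background value.
# A pixel is foreground when it deviates from the background, and stationary
# when it has been foreground AND stable for at least `hold` frames.
def stationary_frames(values, background, fg_thresh=30, stable_thresh=5, hold=3):
    flags, streak, prev = [], 0, None
    for v in values:
        is_fg = abs(v - background) > fg_thresh
        stable = prev is not None and abs(v - prev) <= stable_thresh
        streak = streak + 1 if (is_fg and stable) else (1 if is_fg else 0)
        flags.append(streak >= hold)
        prev = v
    return flags
```

A full system would apply this after adaptive GMM background subtraction and then group stationary pixels into candidate vehicle regions.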

A Study on the Real-time Recognition Methodology for IoT-based Traffic Accidents (IoT 기반 교통사고 실시간 인지방법론 연구)

  • Oh, Sung Hoon;Jeon, Young Jun;Kwon, Young Woo;Jeong, Seok Chan
    • The Journal of Bigdata
    • /
    • v.7 no.1
    • /
    • pp.15-27
    • /
    • 2022
  • In the past five years, the fatality rate of single-vehicle accidents has been 4.7 times higher than that of all accidents, so it is necessary to establish a system that can detect and respond to single-vehicle accidents immediately. The IoT (Internet of Things)-based real-time traffic accident recognition system proposed in this study works as follows. An IoT sensor that detects impacts and approaching vehicles is attached to the guardrail; when an impact on the guardrail occurs, images of the accident site are analyzed with artificial intelligence technology and transmitted to a rescue organization so that rescue operations can begin quickly and damage is minimized. We implemented an IoT sensor module that recognizes vehicles entering the monitoring area and detects impacts on the guardrail, and an AI-based object detection module trained on vehicle image data. In addition, a monitoring and operation module that manages sensor information and image data in an integrated way was implemented. To validate the system, we confirmed that all target values were met by measuring the impact-detection transmission speed, the object detection accuracy for vehicles and people, and the sensor-failure detection accuracy. In the future, we plan to apply the system to actual roads to verify its validity with real data and to commercialize it. This system will contribute to improving road safety.
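
The impact-detection rule on the sensor module can be sketched as a simple magnitude threshold; the threshold value and sample format below are assumptions for illustration, not the paper's specification:

```python
# Hypothetical sketch: flag an impact when the acceleration magnitude (in g)
# measured on the guardrail exceeds a threshold.  `samples` is a list of
# (timestamp, ax, ay, az) readings from the IoT sensor.
def detect_impacts(samples, threshold_g=4.0):
    """Return the timestamps at which an impact is flagged."""
    return [t for t, ax, ay, az in samples
            if (ax * ax + ay * ay + az * az) ** 0.5 > threshold_g]
```

Each flagged timestamp would trigger capture and AI analysis of the site images before notifying the rescue organization.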

Enhancing Automated Recognition of Small-Sized Construction Tools Using Synthetic Images: Validating Practical Applicability Through Confidence Scores

  • Soeun HAN;Choongwan KOO
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.1308-1308
    • /
    • 2024
  • Computer vision techniques have been widely employed in automated construction management to enhance safety and prevent accidents at construction sites. However, previous research in the field of vision-based approaches has often overlooked small-sized construction tools. These tools present unique challenges in data collection due to their diverse shapes and sizes, as well as in improving model performance to accurately detect and classify them. To address these challenges, this study aimed to enhance the performance of vision-based classifiers for small-sized construction tools, including bucket, cord reel, hammer, and tacker, by leveraging synthetic images generated from a 3D virtual environment. Three classifiers were developed using the YOLOv8 algorithm, each differing in the composition of the training dataset: (i) 'Real-4000', trained on 4,000 authentic images collected through web crawling methods (1,000 images per object); (ii) 'Hybrid-4000', consisting of 2,000 authentic images and 2,000 synthetic images; and (iii) 'Hybrid-8000', incorporating 4,000 authentic images and 4,000 synthetic images. To validate the performance of the classifiers, 144 directly captured images for each object were collected from real construction sites as the test dataset. The mean Average Precision at an IoU threshold of 0.5 (mAP_0.5) for the classifiers was 79.6%, 90.8%, and 94.8%, respectively, with the 'Hybrid-8000' model demonstrating the highest performance. Notably, for objects with significant shape variations, the use of synthetic images led to enhanced performance of the vision-based classifiers. Moreover, the practical applicability of the proposed classifiers was validated through confidence scores, particularly between the 'Hybrid-4000' and 'Hybrid-8000' models. Statistical analysis using t-tests indicated that, based on confidence scores, the performance of the 'Hybrid-4000' model either matched or exceeded that of the 'Hybrid-8000' model. Thus, employing the 'Hybrid-4000' model may be preferable in terms of data collection efficiency and processing time, contributing to enhanced safety and real-time automation and robotics in construction practices.
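
The confidence-score comparison can be sketched with Welch's t statistic (unequal variances); this is a generic illustration of the test, not the study's analysis code:

```python
from statistics import mean, variance

# Hypothetical sketch: Welch's t statistic between the per-detection
# confidence scores of two classifiers.  A t near 0 means the mean
# confidences are indistinguishable; a large |t| suggests a real difference.
def welch_t(scores_a, scores_b):
    va, vb = variance(scores_a), variance(scores_b)
    return (mean(scores_a) - mean(scores_b)) / (
        (va / len(scores_a) + vb / len(scores_b)) ** 0.5)
```

The statistic would then be compared against the t distribution (with Welch's degrees of freedom) to decide significance.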

Viola-Jones Object Detection Algorithm Using Rectangular Feature (사각 특징을 추가한 Viola-Jones 물체 검출 알고리즘)

  • Seo, Ji-Won;Lee, Ji-Eun;Kwak, No-Jun
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.3
    • /
    • pp.18-29
    • /
    • 2012
  • The Viola-Jones algorithm, a very effective real-time object detection method, uses Haar-like features to constitute weak classifiers. A Haar-like feature is made up of at least two rectangles, each corresponding to either a positive or a negative area, and the feature value is computed by subtracting the sum of the pixel values in the negative area from the sum of those in the positive area. In contrast to the conventional Haar-like feature, which is made up of more than one rectangle, in this paper we present two new rectangular features whose values are computed either from the sum or from the variance of the pixel values in a single rectangle. By using these rectangular features in combination with the conventional Haar-like features, we can select additional features that are excluded by the conventional Viola-Jones algorithm, where every feature is a combination of contiguous bright and dark areas of an object. In doing so, we can enhance the performance of object detection without any computational overhead.
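
The reason these features add no computational overhead is that both the sum and the variance of any rectangle can be read from integral images in constant time. A minimal sketch of that machinery (standard technique, not the paper's code):

```python
# Sketch of rectangle features via integral images: ii[y][x] holds the sum of
# all pixels above and to the left of (x, y), so any rectangle's sum is four
# table lookups.  Variance additionally needs an integral of squared pixels.
def integral(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w-by-h rectangle with top-left corner (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def rect_variance(img, x, y, w, h):
    """Variance of the rectangle, via E[p^2] - E[p]^2."""
    ii, ii2 = integral(img), integral([[p * p for p in row] for row in img])
    n = w * h
    m = rect_sum(ii, x, y, w, h) / n
    return rect_sum(ii2, x, y, w, h) / n - m * m
```

In practice the two integral images are built once per frame, after which every feature evaluation is O(1).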

Anomalous Event Detection in Traffic Video Based on Sequential Temporal Patterns of Spatial Interval Events

  • Ashok Kumar, P.M.;Vaidehi, V.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.1
    • /
    • pp.169-189
    • /
    • 2015
  • Detection of anomalous events from video streams is a challenging problem in many video surveillance applications. One such application that has received significant attention from the computer vision community is traffic video surveillance. In this paper, a Lossy Count based Sequential Temporal Pattern mining approach (LC-STP) is proposed for detecting spatio-temporal abnormal events (such as a traffic violation at a junction) from sequences of video streams. The proposed approach relies mainly on spatial abstractions of each object, mining frequent temporal patterns in a sequence of video frames to form a regular temporal pattern. In order to detect each object in every frame, the input video is first pre-processed by applying Gaussian Mixture Models. After the detection of foreground objects, tracking is carried out using block motion estimation with the three-step search method. The primitive events of each object are represented by assigning spatial and temporal symbols corresponding to their location and time information. These primitive events are analyzed to form a temporal pattern in a sequence of video frames, representing the temporal relations between the primitive events of various objects. This is repeated for each window of sequences, and the support of each temporal sequence is obtained based on LC-STP to discover regular patterns of normal events. Events deviating from these patterns are identified as anomalies. Unlike traditional frequent item set mining methods, the proposed method generates maximal frequent patterns without candidate generation. Furthermore, experimental results show that the proposed method performs well and can detect video anomalies in real traffic video data.
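
The three-step search used in the tracking stage can be sketched as below; this is a textbook illustration of the method on grayscale frames (sum-of-absolute-differences cost, step sizes 4, 2, 1), not the paper's implementation:

```python
# Sketch of three-step search (TSS) block motion estimation: find the
# displacement (dx, dy) of a bs-by-bs block at (bx, by) in `prev` within
# `cur`, refining the search around the best candidate at each step.
def sad(prev, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences between the block and its shifted match."""
    cost = 0
    for y in range(bs):
        for x in range(bs):
            cost += abs(prev[by + y][bx + x] - cur[by + dy + y][bx + dx + x])
    return cost

def three_step_search(prev, cur, bx, by, bs=4):
    dx = dy = 0
    for step in (4, 2, 1):
        best = None
        for cy in (-step, 0, step):       # 9 candidates around current best
            for cx in (-step, 0, step):
                nx, ny = dx + cx, dy + cy
                if (0 <= bx + nx and bx + nx + bs <= len(cur[0])
                        and 0 <= by + ny and by + ny + bs <= len(cur)):
                    c = sad(prev, cur, bx, by, nx, ny, bs)
                    if best is None or c < best[0]:
                        best = (c, nx, ny)
        _, dx, dy = best
    return dx, dy
```

The per-block motion vectors are what feed the spatial/temporal symbol assignment described above.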