• Title/Summary/Keyword: Vehicle video analysis


A Basic Investigation on the Characteristics of Traffic Flow for the Capacity Analysis of Signalized Intersections (교차로 용량분석을 위한 교통류 특성 기초조사)

  • 이승환
    • Journal of Korean Society of Transportation
    • /
    • v.7 no.2
    • /
    • pp.89-111
    • /
    • 1989
• This study focuses on a basic investigation of several parameters used in the capacity and level-of-service analysis of signalized intersections: the ideal saturation flow rate, the passenger car equivalent (PCE) of large vehicles, and the lane utilization factors of through and left-turn vehicles. Field data were collected with video cameras at six intersections in Seoul so as to reflect urban conditions. Discharge headways were measured at the rear bumper of each vehicle, and all parameters were estimated with regression techniques. The findings of this research are as follows: 1. The saturation headway and saturation flow rate on a single lane with a lane width of 3.1 m are 1.652 seconds and 2,180 pcphgpl, and the first five vehicles in the queue were found to experience start-up lost time. 2. The new method adopted for estimating the PCE of large vehicles was confirmed to give larger PCE values than the method commonly used. 3. For the estimation of the lane utilization factors of through and left-turn vehicles, a relationship was established and the corresponding formulas were developed.
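The reported values are consistent with the standard relation between saturation flow rate and saturation headway, s = 3600 / h (3600 / 1.652 ≈ 2,180 pcphgpl). The following minimal sketch, using hypothetical discharge-time data rather than the paper's measurements, illustrates how a saturation headway and start-up lost time can be estimated by regressing cumulative stop-line crossing times against queue position for vehicles beyond the first few, in the spirit of the regression approach the abstract describes.

```python
import numpy as np

# Hypothetical cumulative stop-line crossing times (seconds after green onset)
# for queue positions 1..12, measured at each vehicle's rear bumper.
positions = np.arange(1, 13)
times = np.array([2.9, 5.2, 7.3, 9.2, 11.0, 12.7, 14.3, 16.0,
                  17.6, 19.3, 20.9, 22.6])

# Vehicles from position 6 onward are assumed to discharge at the saturation
# headway (the abstract reports start-up lost time for the first five vehicles).
steady = positions >= 6
slope, intercept = np.polyfit(positions[steady], times[steady], 1)

saturation_headway = slope                     # seconds per vehicle
saturation_flow = 3600.0 / saturation_headway  # passenger cars per hour of green per lane
startup_lost_time = intercept                  # total lost time attributable to start-up

print(f"saturation headway : {saturation_headway:.3f} s")
print(f"saturation flow    : {saturation_flow:.0f} pcphgpl")
print(f"start-up lost time : {startup_lost_time:.2f} s")
```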


Optical Flow Based Vehicle Counting and Speed Estimation in CCTV Videos (Optical Flow 기반 CCTV 영상에서의 차량 통행량 및 통행 속도 추정에 관한 연구)

  • Kim, Jihae;Shin, Dokyung;Kim, Jaekyung;Kwon, Cheolhee;Byun, Hyeran
    • Journal of Broadcast Engineering
    • /
    • v.22 no.4
    • /
    • pp.448-461
    • /
    • 2017
• This paper proposes a vehicle counting and speed estimation method for analyzing traffic conditions in road CCTV videos. The proposed method removes perspective distortion from the images using inverse perspective mapping and obtains specific regions for counting and speed estimation using a lane detection algorithm. Vehicle counts and speed estimates are then obtained by applying optical flow within those regions. The proposed method achieves a stable accuracy of 88.94% on CCTV videos from several regional groups, evaluated over a total of 106,993 frames (about 3 hours of video).
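As a rough illustration of the pipeline described above (inverse perspective mapping followed by optical flow in a region of interest), the sketch below focuses on the speed-estimation side and uses OpenCV's dense Farnebäck optical flow. The homography points, lane region, pixel-to-meter scale, frame rate, and file name are hypothetical placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical image-plane corners of a road patch and their bird's-eye targets.
src = np.float32([[420, 300], [860, 300], [1180, 700], [100, 700]])
dst = np.float32([[0, 0], [400, 0], [400, 800], [0, 800]])
H = cv2.getPerspectiveTransform(src, dst)

METERS_PER_PIXEL = 0.05   # assumed scale of the bird's-eye view
FPS = 25.0                # assumed CCTV frame rate

cap = cv2.VideoCapture("cctv.mp4")   # hypothetical input video
ok, frame = cap.read()
prev = cv2.cvtColor(cv2.warpPerspective(frame, H, (400, 800)), cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Remove perspective distortion, then compute dense optical flow.
    curr = cv2.cvtColor(cv2.warpPerspective(frame, H, (400, 800)), cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Average flow magnitude inside an assumed lane region -> rough speed estimate.
    lane = flow[200:600, 100:300]
    pixels_per_frame = np.linalg.norm(lane, axis=2).mean()
    speed_kmh = pixels_per_frame * METERS_PER_PIXEL * FPS * 3.6
    print(f"estimated lane speed: {speed_kmh:.1f} km/h")
    prev = curr

cap.release()
```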

Head/Rear Lamp Detection for Stop and Wrong Way Vehicle in the Tunnel (터널 내 정차 및 역주행 차량 인식을 위한 전조등과 후미등 검출 알고리즘)

  • Kim, Gyu-Yeong;Do, Jin-Kyu;Park, Jang-Sik;Kim, Hyun-Tae;Yu, Yun-Sik
• Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.601-602
    • /
    • 2011
• In this paper, we propose a head/rear lamp detection algorithm for recognizing stopped and wrong-way vehicles in tunnels. The algorithm detects vehicles based on an experimental analysis of the color information of vehicle lamps. Simulation results show that the detection rates for stopped and wrong-way vehicles exceed 94% and 96%, respectively, on HD (High Definition) tunnel video.
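A minimal sketch of color-based lamp detection in the spirit of the abstract is given below, assuming simple HSV thresholds for bright headlamp regions and red tail-lamp regions. The threshold values and the minimum blob area are illustrative guesses, not the paper's parameters.

```python
import cv2
import numpy as np

def detect_lamps(frame_bgr):
    """Return bounding boxes of bright (head lamp) and red (rear lamp) blobs."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Bright, low-saturation regions -> candidate head lamps (assumed thresholds).
    head_mask = cv2.inRange(hsv, (0, 0, 220), (180, 60, 255))

    # Red hue wraps around 0/180 in OpenCV's HSV, so combine two ranges.
    red1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    red2 = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    rear_mask = cv2.bitwise_or(red1, red2)

    boxes = {"head": [], "rear": []}
    for name, mask in (("head", head_mask), ("rear", rear_mask)):
        # Remove isolated noise pixels before extracting blobs.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 30:          # ignore tiny blobs (assumed area)
                boxes[name].append(cv2.boundingRect(c))
    return boxes
```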


The analysis of data structure to digital forensic of dashboard camera (차량용 블랙박스 포렌식을 위한 분석 절차 및 저장 구조 분석)

  • An, Hwihang;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.25 no.6
    • /
    • pp.1495-1502
    • /
    • 2015
• A dashboard camera is an important system that stores not only video but also non-visual information about the state of the vehicle, such as accelerometer readings, speed, and direction. Because this non-visual data cannot be seen in the video itself, it serves as important evidence for reconstructing the situation in an accident, yet it can be missed when only a standard digital video forensic procedure is applied. In this paper, we propose a digital forensic analysis procedure for dashboard cameras that extracts all of the data stored in the device and analyzes it for the investigation of traffic accident cases, and we analyze several products using this procedure.
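Dashboard camera storage formats are vendor-specific, so the paper's analysis of individual products cannot be reproduced generically. As one generic starting point for the kind of extraction step the procedure implies, the sketch below walks the top-level boxes of an MP4 container and flags non-standard (vendor) atoms, where telemetry such as GPS or accelerometer samples is often embedded. The file name and the list of "standard" box types are assumptions, not details from the paper.

```python
import struct

# Common top-level MP4 box types; anything else is a candidate vendor atom
# that may carry dashcam telemetry (GPS, accelerometer, speed, direction).
STANDARD_BOXES = {b"ftyp", b"moov", b"mdat", b"free", b"skip", b"wide", b"uuid"}

def list_top_level_boxes(path):
    """Yield (offset, size, type) for each top-level box of an ISO/MP4 file."""
    with open(path, "rb") as f:
        offset = 0
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:                       # 64-bit size follows the type field
                size = struct.unpack(">Q", f.read(8))[0]
            if size == 0:                       # box runs to the end of the file
                yield offset, size, box_type
                break
            yield offset, size, box_type
            offset += size
            f.seek(offset)

for offset, size, box_type in list_top_level_boxes("dashcam_clip.mp4"):  # hypothetical file
    flag = "" if box_type in STANDARD_BOXES else "  <-- possible vendor telemetry atom"
    print(f"0x{offset:08x}  {size:>12d}  {box_type.decode(errors='replace')}{flag}")
```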

Deep Learning-Based Roundabout Traffic Analysis System Using Unmanned Aerial Vehicle Videos (드론 영상을 이용한 딥러닝 기반 회전 교차로 교통 분석 시스템)

  • Janghoon Lee;Yoonho Hwang;Heejeong Kwon;Ji-Won Choi;Jong Taek Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.3
    • /
    • pp.125-132
    • /
    • 2023
• Roundabouts have strengths in traffic flow and safety but can present difficulties for inexperienced drivers. Demand for acquiring and analyzing drone images has increased in order to create a traffic environment in which drivers can handle roundabouts easily. In this paper, we propose a roundabout traffic analysis system that detects, tracks, and analyzes vehicles in drone images using a deep learning-based object detection model (YOLOv7). About 3,600 images for training and testing the object detection model were extracted and labeled from one hour of drone video. By training under diverse conditions and evaluating the performance of the object detection models, we achieved an average precision (AP) of up to 97.2%. In addition, we utilized the real-time object tracking algorithms SORT (Simple Online and Realtime Tracking) and OC-SORT (Observation-Centric SORT), which resulted in an average MOTA (Multiple Object Tracking Accuracy) of up to 89.2%. By implementing a method for measuring roundabout entry speed, we achieved an accuracy of 94.5%.
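The entry-speed measurement presumably converts tracked image positions into ground coordinates and divides the traveled distance by elapsed time. The sketch below illustrates that step with a hypothetical image-to-ground homography, frame rate, and track; it is independent of the particular detector (YOLOv7) and trackers (SORT/OC-SORT) used in the paper and is not the authors' implementation.

```python
import numpy as np

# Hypothetical homography mapping image pixels to ground-plane meters,
# e.g. obtained from reference points measured around the roundabout.
H = np.array([[0.02, 0.0,  -8.0],
              [0.0,  0.02, -5.0],
              [0.0,  0.0,   1.0]])
FPS = 30.0  # assumed drone video frame rate

def to_ground(pt):
    """Project an image point (u, v) to ground coordinates in meters."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

def entry_speed_kmh(track, fps=FPS):
    """Average speed over a track given as a list of (frame_idx, (u, v)) points."""
    (f0, p0), (f1, p1) = track[0], track[-1]
    distance_m = np.linalg.norm(to_ground(p1) - to_ground(p0))
    elapsed_s = (f1 - f0) / fps
    return 3.6 * distance_m / elapsed_s

# Example: a vehicle tracked across 45 frames while approaching the entry.
track = [(0, (410, 620)), (15, (430, 560)), (30, (455, 505)), (45, (480, 450))]
print(f"entry speed: {entry_speed_kmh(track):.1f} km/h")
```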

Design and Implementation of UAV System for Autonomous Tracking

  • Cho, Eunsung;Ryoo, Intae
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.2
    • /
    • pp.829-842
    • /
    • 2018
• Unmanned aerial vehicles (UAVs) are used in diverse areas of daily life, such as hobbies, professional video recording, and disaster prevention, and new applications such as UAV-based delivery have recently been explored. However, most UAV systems are still used in a passive manner, for example for real-time video monitoring or for ground-based analysis and storage of recorded footage. More proactive UAV utilization requires higher-performance UAVs and larger memory capacity than are currently in use. Against this backdrop, this study describes a proactive software platform and high-performance UAV hardware for real-time target tracking, presents their design and implementation, and measures and analyzes per-core CPU consumption on the implemented platform.
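As a small illustration of the per-core CPU measurement mentioned at the end of the abstract (not the authors' tooling), the following sketch samples per-core utilization with the psutil library; the sampling interval and number of samples are arbitrary.

```python
import psutil

SAMPLES = 10          # arbitrary number of samples
INTERVAL_S = 1.0      # arbitrary sampling interval

history = []
for _ in range(SAMPLES):
    # percpu=True returns one utilization figure (%) per logical core,
    # averaged over the blocking interval.
    history.append(psutil.cpu_percent(interval=INTERVAL_S, percpu=True))

for core, samples in enumerate(zip(*history)):
    avg = sum(samples) / len(samples)
    print(f"core {core}: avg {avg:5.1f} %  peak {max(samples):5.1f} %")
```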

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
• Environment perception and three-dimensional (3D) reconstruction tasks provide unmanned ground vehicles (UGVs) with driving-awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules: multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate the individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering out redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and the objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the applied computer graphics and image processing algorithms in parallel.
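The calibration step described above maps LiDAR points onto camera images through a projection matrix. A minimal numpy sketch of that mapping is given below, with an assumed 3x4 projection matrix and hypothetical points; it leaves out the segmentation, meshing, and GPU parallelization that the paper covers.

```python
import numpy as np

# Assumed 3x4 camera projection matrix P = K [R | t] from an offline
# LiDAR-camera calibration (placeholder values, not the paper's).
P = np.array([[700.0,   0.0, 640.0, 0.10],
              [  0.0, 700.0, 360.0, 0.05],
              [  0.0,   0.0,   1.0, 0.00]])

def project_lidar_to_image(points_xyz, image_shape):
    """Project Nx3 LiDAR points (camera frame, meters) to pixel coordinates.

    Returns pixel coordinates and a mask of points that lie in front of the
    camera and inside the image.
    """
    n = points_xyz.shape[0]
    homogeneous = np.hstack([points_xyz, np.ones((n, 1))])   # N x 4
    uvw = homogeneous @ P.T                                  # N x 3
    in_front = uvw[:, 2] > 0.1
    pixels = uvw[:, :2] / uvw[:, 2:3]
    h, w = image_shape[:2]
    inside = (pixels[:, 0] >= 0) & (pixels[:, 0] < w) & \
             (pixels[:, 1] >= 0) & (pixels[:, 1] < h)
    return pixels, in_front & inside

# Example: three hypothetical points 5-20 m ahead of the camera.
points = np.array([[0.5, -0.2, 5.0], [-1.0, 0.1, 12.0], [2.0, 0.0, 20.0]])
pixels, valid = project_lidar_to_image(points, (720, 1280))
print(pixels[valid])
```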

Development of a Multi-disciplinary Video Identification System for Autonomous Driving (자율주행을 위한 융복합 영상 식별 시스템 개발)

  • Sung-Youn Cho;Jeong-Joon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.1
    • /
    • pp.65-74
    • /
    • 2024
• In recent years, image processing technology has played a critical role in the field of autonomous driving, and image recognition technology in particular is essential for the safety and performance of autonomous vehicles. This paper therefore aims to develop a hybrid image recognition system that enhances the safety and performance of autonomous vehicles. Various image recognition technologies are used to construct a system that recognizes and tracks objects in the vehicle's surroundings: machine learning and deep learning algorithms identify and classify objects in real time through image processing and analysis. Furthermore, this study fuses image processing technology with the vehicle control system, transmitting information about identified objects to the control system so that appropriate autonomous driving responses can be made. The hybrid image recognition system developed in this paper is expected to significantly improve the safety and performance of autonomous vehicles and to accelerate their commercialization.
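The abstract describes an architecture in which recognition results are handed off to the vehicle control system. A toy sketch of that hand-off is shown below, with a hypothetical message format and decision rule that are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    """Hypothetical message passed from the recognition module to control."""
    label: str              # e.g. "car", "pedestrian"
    distance_m: float       # estimated longitudinal distance
    lateral_offset_m: float # offset from the ego lane center
    relative_speed_mps: float

def control_response(obj: DetectedObject) -> str:
    """Toy decision rule mapping a detection to a driving response."""
    in_path = abs(obj.lateral_offset_m) < 1.5   # assumed lane half-width
    closing = obj.relative_speed_mps < 0
    if in_path and obj.distance_m < 10:
        return "emergency_brake"
    if in_path and closing and obj.distance_m < 30:
        return "decelerate"
    return "maintain"

print(control_response(DetectedObject("pedestrian", 8.0, 0.4, -1.2)))  # -> emergency_brake
```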

A ballistic lead-computation method to improve firing accuracy of army combat vehicles (전투차량의 사격통제 성능향상을 위한 탄도해 리드 계산 기법)

  • Jeoun, Young-Mi
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.10 no.2
    • /
    • pp.31-37
    • /
    • 2007
• This paper presents a ballistic lead-computation method that utilizes automatic video tracking, tracking assistance, and roll uncoupling. The method is able to improve the firing accuracy of army combat vehicles such as main battle tanks. In the experiments, the efficiency of the proposed method is evaluated by an error analysis in a real operating environment. The proposed method has been applied to the fire control system of a military vehicle and verified through the development test of that vehicle.
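The abstract does not give the lead-computation equations. A generic sketch of the underlying idea (predicting where the target will be after the projectile's time of flight and aiming there) is given below, with a constant-velocity target, a constant projectile speed, and a flat-fire simplification that are all assumptions made for illustration, not the paper's model.

```python
import math

def lead_angle_deg(target_pos, target_vel, projectile_speed, iterations=10):
    """Iteratively solve for the lead angle to intercept a constant-velocity target.

    target_pos: (x, y) in meters relative to the gun; target_vel: (vx, vy) in m/s.
    """
    x, y = target_pos
    vx, vy = target_vel
    t_flight = math.hypot(x, y) / projectile_speed   # initial time-of-flight guess
    for _ in range(iterations):
        # Predict the target position after the current time-of-flight estimate,
        # then update the time of flight to reach that predicted intercept point.
        xi, yi = x + vx * t_flight, y + vy * t_flight
        t_flight = math.hypot(xi, yi) / projectile_speed
    bearing_now = math.atan2(y, x)
    bearing_intercept = math.atan2(yi, xi)
    return math.degrees(bearing_intercept - bearing_now)

# Example: target 1,500 m away crossing at 10 m/s; projectile speed 1,400 m/s.
print(f"lead angle: {lead_angle_deg((1500.0, 0.0), (0.0, 10.0), 1400.0):.2f} deg")
```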