• Title/Summary/Keyword: vehicle-mounted camera


Vehicle-Level Traffic Accident Detection on Vehicle-Mounted Camera Based on Cascade Bi-LSTM

  • Son, Hyeon-Cheol;Kim, Da-Seul;Kim, Sung-Young
    • Journal of Advanced Information Technology and Convergence
    • /
    • v.10 no.2
    • /
    • pp.167-175
    • /
    • 2020
  • In this paper, we propose a traffic accident detection method for vehicle-mounted cameras. In the proposed method, the minimum bounding box coordinates, the central coordinates on the bird's-eye view, and the motion vectors of each vehicle object, together with the ego-motion of the vehicle equipped with the dash-cam, are extracted from the dash-cam video. Using these four kinds of extracted features as the input of a Bi-LSTM (bidirectional LSTM), the accident probability (score) is predicted. To investigate the effect of each input feature on the accident probability, we analyze the detection performance when a single feature is used as input and when a combination of features is used as input, respectively; a different detection model is defined and used for each of these two cases. The Bi-LSTM is used as a cascade, especially when a combination of the features is used as input. The proposed method shows 76.1% precision and 75.6% recall, which is superior to our previous work.
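The per-frame scoring idea above can be sketched with a minimal NumPy bidirectional LSTM. The four scalar features, the random weights, and the sigmoid readout are illustrative stand-ins; the paper's cascade structure and trained parameters are not reproduced here.

```python
import numpy as np

def lstm_pass(x, Wx, Wh, b):
    """Run one LSTM direction over a feature sequence x of shape (T, D);
    returns the hidden states, shape (T, H)."""
    T = x.shape[0]
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    hs = np.zeros((T, H))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(T):
        gates = x[t] @ Wx + h @ Wh + b
        i, f, o, g = np.split(gates, 4)      # input, forget, output, cell gates
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
        hs[t] = h
    return hs

def bilstm_score(x, fwd, bwd, w_out, b_out):
    """Concatenate the forward pass with a time-reversed backward pass, then
    emit a per-frame accident score in (0, 1) through a sigmoid readout."""
    h = np.concatenate([lstm_pass(x, *fwd), lstm_pass(x[::-1], *bwd)[::-1]], axis=1)
    return 1.0 / (1.0 + np.exp(-(h @ w_out + b_out)))

# Toy demo: 8 frames, 4 scalar features per frame (bounding-box, BEV centre,
# motion-vector, and ego-motion stand-ins), untrained random weights.
rng = np.random.default_rng(0)
T, D, H = 8, 4, 16
mk = lambda: (0.1 * rng.standard_normal((D, 4 * H)),
              0.1 * rng.standard_normal((H, 4 * H)),
              np.zeros(4 * H))
scores = bilstm_score(rng.standard_normal((T, D)), mk(), mk(),
                      0.1 * rng.standard_normal(2 * H), 0.0)
```

With trained weights, frames whose score exceeds a chosen threshold would be flagged as accident frames.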

Implementation of a Helmet Azimuth Tracking System in the Vehicle (이동체 내의 헬멧 방위각 추적 시스템 구현)

  • Lee, Ji-Hoon;Chung, Hae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.4
    • /
    • pp.529-535
    • /
    • 2020
  • It is important to secure the driver's external field of view in armored vehicles, which are enclosed in iron armor as protection against enemy firepower. For this purpose, a 360-degree rotatable surveillance camera is mounted on the vehicle, and the key requirement is to track the head of the driver wearing a helmet so that the external camera rotates in exactly the same direction. In this paper, we introduce a method that uses a MEMS-based AHRS sensor and an illuminance sensor to compensate for the disadvantages of the existing optical method, implemented at low cost. The key idea is to set the direction of the camera using the difference between the Euler angles detected by the two sensors mounted on the camera and the helmet, and to adjust the direction with the illuminance sensor from time to time to remove the drift error of the sensors. The implemented prototype shows that the camera's direction exactly matches the driver's.
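The core angle computation, the signed difference between the helmet's and the camera's yaw readings, can be sketched as below. The illuminance-sensor drift correction is abstracted into a hypothetical `drift_estimate` input; both functions are illustrative, not the paper's implementation.

```python
def yaw_error_deg(helmet_yaw, camera_yaw):
    """Signed shortest-path difference between the helmet's and the camera's
    AHRS yaw readings, wrapped into (-180, 180] degrees."""
    d = (helmet_yaw - camera_yaw) % 360.0
    return d - 360.0 if d > 180.0 else d

def corrected_yaw(raw_yaw, drift_estimate):
    """Subtract a drift estimate from a raw AHRS yaw reading; in the paper the
    drift is periodically re-estimated via an illuminance sensor, which is
    abstracted away here."""
    return (raw_yaw - drift_estimate) % 360.0
```

The wrap ensures the camera always turns through the shorter arc, e.g. from 350° to 10° it rotates +20° rather than -340°.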

A Vehicle Tracking Algorithm Focused on the Initialization of Vehicle Detection and Distance Estimation (초기 차량 검출 및 거리 추정을 중심으로 한 차량 추적 알고리즘)

  • 이철헌;설성욱;김효성;남기곤;주재흠
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.11
    • /
    • pp.1496-1504
    • /
    • 2004
  • In this paper, we propose an algorithm for initial target-vehicle detection, tracking, and distance estimation on stereo images acquired from a forward-looking stereo camera mounted on a road-driving vehicle. The vehicle detection process extracts the road region using lane recognition and searches for vehicle features within that region. The distance to the tracked vehicle is estimated by TSS correlogram matching on the stereo images. Through simulation, this paper shows that the proposed method robustly segments, matches, and tracks vehicles in image sequences obtained from a moving stereo camera.
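As an illustration of the matching step, here is a generic block matcher over a SAD cost, assuming TSS denotes the classic three-step search; the paper's correlogram cost and stereo-specific details are not reproduced.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-sized blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def three_step_search(ref_block, frame, x0, y0, step=4):
    """Classic three-step search: test a 3x3 grid of offsets around the current
    best position, halve the step, and repeat until the step reaches 1."""
    h, w = ref_block.shape
    bx, by = x0, y0
    while step >= 1:
        best = None
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                x, y = bx + dx, by + dy
                if x >= 0 and y >= 0 and y + h <= frame.shape[0] and x + w <= frame.shape[1]:
                    cost = sad(ref_block, frame[y:y + h, x:x + w])
                    if best is None or cost < best[0]:
                        best = (cost, x, y)
        _, bx, by = best
        step //= 2
    return bx, by

# Demo: a cone-shaped test image; the reference block is cut out at (12, 9)
# and the search relocates it starting from an initial guess of (8, 8).
frame = np.add.outer(np.abs(np.arange(32) - 13),
                     np.abs(np.arange(32) - 16)).astype(np.uint8)
ref = frame[9:17, 12:20].copy()
```

With a step of 4, the search examines at most 3 x 9 = 27 candidate positions instead of the full (2·7+1)² exhaustive window, which is the point of the technique.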

A Posture Based Control Interface for Quadrotor Aerial Video System Using Head-Mounted Display (HMD를 이용한 사용자 자세 기반 항공 촬영용 쿼드로터 시스템 제어 인터페이스 개발)

  • Kim, Jaeseung;Jeong, Jong Min;Kim, Han Sol;Hwang, Nam Eung;Choi, Yoon Ho;Park, Jin Bae
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.7
    • /
    • pp.1056-1063
    • /
    • 2015
  • In this paper, we develop an interface for an aerial photography platform, which consists of a quadrotor and a gimbal, using the posture of the human body and head. As quadrotors have been widely adopted in many industries, such as aerial photography, remote surveillance, and maintenance of infrastructure, the demand for aerial video and photographs has increased remarkably. Stick-type remote controllers are widely used to control a quadrotor, but they are not an intuitive way of controlling the aerial vehicle and the camera simultaneously. Therefore, a new interface which controls the aerial photography platform is presented. The presented interface uses the head movement measured by a head-mounted display as a reference for controlling the camera angle, and the body posture measured by a Kinect for controlling the attitude of the quadrotor. As the image captured by the camera is simultaneously displayed on the head-mounted display, the user has a sense of flying and can intuitively control the quadrotor and the camera. Finally, the performance of the developed system is shown to verify the effectiveness and superiority of the presented interface.
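The two mappings the interface relies on can be illustrated with a minimal sketch. The angle limits, the lean-to-tilt gain, and the direct pass-through are assumptions for illustration, not the paper's calibration.

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def head_to_gimbal(head_yaw, head_pitch, yaw_lim=90.0, pitch_lim=45.0):
    """Pass the HMD head angles (degrees) through as gimbal setpoints, clamped
    to an assumed mechanical range of the gimbal."""
    return clamp(head_yaw, -yaw_lim, yaw_lim), clamp(head_pitch, -pitch_lim, pitch_lim)

def lean_to_attitude(lean_deg, gain=0.3, max_tilt=15.0):
    """Map a Kinect-measured body-lean angle to a quadrotor tilt command;
    the gain and tilt limit are likewise illustrative."""
    return clamp(gain * lean_deg, -max_tilt, max_tilt)
```

Clamping both channels keeps an exaggerated head turn or body lean from commanding an angle the gimbal or quadrotor cannot safely reach.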

A Visual Servo Algorithm for Underwater Docking of an Autonomous Underwater Vehicle (AUV) (자율무인잠수정의 수중 도킹을 위한 비쥬얼 서보 제어 알고리즘)

  • 이판묵;전봉환;이종무
    • Journal of Ocean Engineering and Technology
    • /
    • v.17 no.1
    • /
    • pp.1-7
    • /
    • 2003
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels used to investigate sea environments in the study of oceanography. Docking systems are required to increase the capability of AUVs, to recharge their batteries, and to transmit data in real time for specific underwater work, such as repeated jobs at the seabed. This paper presents a visual servo control system used to dock an AUV into an underwater station. A camera mounted at the nose center of the AUV is used to guide the AUV into the dock. To create the visual servo control system, this paper derives an optical flow model of a camera, where the projected motions on the image plane are described with the rotational and translational velocities of the AUV. This paper combines the optical flow equation of the camera with the AUV's equation of motion, and derives a state equation for the visual servo AUV. Further, this paper proposes a discrete-time MIMO controller minimizing a cost function. The control inputs of the AUV are automatically generated from the projected target position on the CCD plane of the camera and from the AUV's motion. To demonstrate the effectiveness of the modeling and the control law of the visual servo AUV, simulations on docking the AUV to a target station are performed with the 6-DOF nonlinear equations of the REMUS AUV and a CCD camera.
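The optical-flow model such controllers build on is the classical point-feature interaction matrix, which relates image-plane motion to the camera's translational and rotational velocities. Below is the standard form as a sketch; the paper's own derivation, state equation, and MIMO cost function are not reproduced.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) of a point feature at normalized
    image coordinates (x, y) with depth Z: the feature's image velocity is
    L @ [vx, vy, vz, wx, wy, wz] for camera translation v and rotation w."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

# A feature on the optical axis does not move under pure forward translation,
# but drifts opposite to a lateral translation, scaled by 1/Z.
L = interaction_matrix(0.0, 0.0, 2.0)
```

A visual servo law typically inverts this relation, e.g. commanding velocities proportional to the pseudo-inverse of L applied to the image-plane error toward the dock target.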

Vision-Based Indoor Localization Using Artificial Landmarks and Natural Features on the Ceiling with Optical Flow and a Kalman Filter

  • Rusdinar, Angga;Kim, Sungshin
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.13 no.2
    • /
    • pp.133-139
    • /
    • 2013
  • This paper proposes a vision-based indoor localization method for autonomous vehicles. A single upward-facing digital camera was mounted on an autonomous vehicle and used as a vision sensor to identify artificial landmarks and any natural corner features. An interest point detector was used to find the natural features. Using an optical flow detection algorithm, information related to the direction and vehicle translation was defined. This information was used to track the vehicle movements. Random noise related to uneven light disrupted the calculation of the vehicle translation. Thus, to estimate the vehicle translation, a Kalman filter was used to calculate the vehicle position. These algorithms were tested on a vehicle in a real environment. The image processing method could recognize the landmarks precisely, while the Kalman filter algorithm could estimate the vehicle's position accurately. The experimental results confirmed that the proposed approaches can be implemented in practical situations.

Recognition of a Close Leading Vehicle Using the Contour of the Vehicle's Wheels (차량 뒷바퀴 윤곽선을 이용한 근거리 전방차량인식)

  • Park, Kwang-Hyun;Han, Min-Hong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.3
    • /
    • pp.238-245
    • /
    • 2001
  • This paper describes a method for detecting a close leading vehicle using the contour of the vehicle's rear wheels. The contour of a leading vehicle's rear wheels in a front road image, taken by a B/W CCD camera mounted on the central front bumper of the vehicle, has vertical components and can be discerned clearly in contrast to the road surface. After extracting positive edges and negative edges using the Sobel operator in the raw image, every point that can be recognized as a feature of the contour of the leading vehicle's wheels is determined. This process can detect the presence of a close leading vehicle, and it is also possible to calculate the distance to the leading vehicle and the lateral deviation angle. This method might be useful for developing an LSA (Low Speed Automation) system that can relieve the driver's stress in the stop-and-go traffic conditions encountered on urban roads.
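The edge-extraction step, splitting the horizontal-gradient Sobel response into positive and negative vertical edges, can be sketched as follows; the threshold is an illustrative value, and the wheel-feature selection that follows it in the paper is omitted.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def sobel_vertical_edges(img, thresh=50):
    """Correlate the image with the horizontal-gradient Sobel kernel and split
    the response into positive (dark-to-bright) and negative (bright-to-dark)
    vertical edge masks."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    g = np.zeros((h - 2, w - 2))
    for ky in range(3):                        # direct 3x3 correlation
        for kx in range(3):
            g += SOBEL_X[ky, kx] * img[ky:ky + h - 2, kx:kx + w - 2]
    return g > thresh, g < -thresh

# A dark-to-bright intensity step produces only positive edges, and vice versa.
step = np.zeros((10, 10), dtype=np.uint8)
step[:, 5:] = 100
pos, neg = sobel_vertical_edges(step)
```

Keeping the two signs separate is what lets the method pair the left (dark-to-bright) and right (bright-to-dark) sides of a dark tyre against the brighter road.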


Development of Vision-based Lateral Control System for an Autonomous Navigation Vehicle (자율주행차량을 위한 비젼 기반의 횡방향 제어 시스템 개발)

  • Rho Kwanghyun;Steux Bruno
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.13 no.4
    • /
    • pp.19-25
    • /
    • 2005
  • This paper presents a lateral control system for an autonomous navigation vehicle that was developed and tested by the Robotics Centre of Ecole des Mines de Paris in France. A robust lane detection algorithm was developed for detecting different types of lane markers in the images taken by a CCD camera mounted on the vehicle. RTMaps, a software framework for developing vision and data fusion applications, especially in a car, was used for implementing the lane detection and lateral control. The lateral control was tested on urban roads in Paris, and the demonstration was shown to the public during the IEEE Intelligent Vehicle Symposium 2002. Over 100 people experienced the automatic lateral control. The demo vehicle could run stably at a speed of 130 km/h on a straight road and 50 km/h on a high-curvature road.
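To make the lane-detection-to-steering link concrete, a minimal proportional lateral controller is sketched below. The gains, the steering limit, and the control structure itself are assumptions for illustration only; the abstract does not specify the paper's control law.

```python
def steering_command(lateral_offset_m, heading_error_rad,
                     k_off=0.8, k_head=1.5, max_steer=0.5):
    """Illustrative proportional lateral controller: steer against the
    lane-centre offset and heading error reported by the lane detector,
    saturated at an assumed steering limit (radians)."""
    u = -(k_off * lateral_offset_m + k_head * heading_error_rad)
    return max(-max_steer, min(max_steer, u))
```

A positive offset (vehicle right of the lane centre) yields a negative (leftward) steering command, and large errors saturate at the steering limit rather than commanding an unsafe angle.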

Real time Omni-directional Object Detection Using Background Subtraction of Fisheye Image (어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지)

  • Choi, Yun-Won;Kwon, Kee-Koo;Kim, Jong-Hyo;Na, Kyung-Jin;Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.8
    • /
    • pp.766-772
    • /
    • 2015
  • This paper proposes an object detection method based on motion estimation, using background subtraction on the fisheye images obtained through an omni-directional camera mounted on the vehicle. Recently, most vehicles have been equipped with a rear camera as a standard option, as well as various camera systems for safety. However, unlike conventional object detection on images obtained from a camera, it is difficult for the embedded system installed in the vehicle to apply a complicated algorithm because of its inherently low processing performance; in general, the embedded system needs a system-dependent algorithm because its processing performance is lower than a computer's. In this paper, the location of an object is estimated from the object's motion, obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing the proposed algorithm with object detection based on LKOF (Lucas-Kanade optical flow).
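The core idea, comparing the previous frame with the current one and localizing the object from the changed pixels, can be sketched as below; the fisheye geometry, the embedded-board constraints, and the LKOF comparison are omitted, and the threshold is illustrative.

```python
import numpy as np

def moving_object_mask(prev, curr, thresh=25):
    """Background subtraction by frame differencing: flag pixels whose absolute
    intensity change between consecutive frames exceeds `thresh` as moving."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

def object_centroid(mask):
    """Rough object location: the centroid of the moving-pixel mask (None if
    no pixel moved)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Demo: a bright block appears between two otherwise identical frames.
prev = np.zeros((20, 20), dtype=np.uint8)
curr = prev.copy()
curr[5:9, 10:14] = 200
```

This per-pixel difference-and-threshold loop is cheap enough for low-performance embedded hardware, which is the motivation the abstract gives for choosing it over heavier detectors.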