• Title/Summary/Keyword: vehicle-mounted camera

63 search results

A Study on Visual Servoing Image Information for Stabilization of Line-of-Sight of Unmanned Helicopter (무인헬기의 시선안정화를 위한 시각제어용 영상정보에 관한 연구)

  • 신준영;이현정;이민철
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.600-603
    • /
    • 2004
  • A UAV (Unmanned Aerial Vehicle) is an aircraft that can accomplish a mission without a pilot. UAVs were initially developed for military purposes such as reconnaissance. Nowadays the use of UAVs has expanded into various civil fields such as map making, broadcasting, and environmental observation. These UAVs need a vision system both to offer accurate information to the operator on the ground and to control the UAV itself. In particular, the LOS (Line-of-Sight) system must precisely control the pointing direction of a system that tracks an object using a vision sensor such as a CCD camera, so it is very important within the vision system. In this paper, we propose a method to recognize an object in the image acquired from a camera mounted on a gimbal and to provide the displacement between the center of the monitor and the center of the object.

  • PDF
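The displacement-from-center measurement this entry describes can be sketched in a few lines: find the tracked object's centroid in a binary detection mask and report its offset from the image center as pan/tilt errors for the gimbal loop. A minimal numpy sketch; the field-of-view values and the small-angle pixel-to-degree conversion are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def los_error(mask, hfov_deg=60.0, vfov_deg=45.0):
    """Pan/tilt error of the tracked object's centroid relative to the
    image center (assumed FOV values, small-angle approximation)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                          # object not found this frame
    h, w = mask.shape
    cx, cy = xs.mean(), ys.mean()            # object centroid (pixels)
    dx, dy = cx - w / 2.0, cy - h / 2.0      # pixel offset from center
    pan = dx / w * hfov_deg                  # + = object right of center
    tilt = dy / h * vfov_deg                 # + = object below center
    return pan, tilt

# A blob offset to the right of a 160x120 frame center:
mask = np.zeros((120, 160), dtype=bool)
mask[50:70, 100:120] = True
print(los_error(mask))
```

The gimbal controller would drive these two error terms toward zero each frame.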

Visual Servoing Control of a Docking System for an Autonomous Underwater Vehicle (AUV)

  • Lee, Pan-Mook;Jeon, Bong-Hwan;Lee, Chong-Moo;Hong, Young-Hwa;Oh, Jun-Ho
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2002.10a
    • /
    • pp.109.5-109
    • /
    • 2002
  • Autonomous underwater vehicles (AUVs) are unmanned underwater vessels that autonomously investigate sea environments, oceanography, and deep-sea resources. Docking systems are required to increase the capability of AUVs to recharge their batteries and to transmit data in real time underwater. This paper presents a visual servo control system for an AUV to dock into an underwater station with a camera. To build the visual servo control system, this paper derives an optical flow model of a camera mounted on an AUV, where a CCD camera is installed at the nose center of the AUV to monitor the docking condition. This paper combines the optical flow equation of the camera with the AUV's equation o...

  • PDF

The calibration of a laser profiling system for seafloor micro-topography measurements

  • Loeffler, Kathryn R.;Chotiros, Nicholas P.
    • Ocean Systems Engineering
    • /
    • v.1 no.3
    • /
    • pp.195-205
    • /
    • 2011
  • A method for calibrating a laser profiling system for seafloor micro-topography measurements is described. The system consists of a digital camera and an arrangement of six red lasers that are mounted as a unit on a remotely operated vehicle (ROV). The lasers project as parallel planes onto the seafloor, creating profiles of the local topography that are interpreted from the digital camera image. The goal of the calibration was to determine the plane equations for the six lasers relative to the camera. This was accomplished in two stages. First, distortions in the digital image were corrected using an interpolation method based on a virtual pinhole camera model. Then, the laser planes were determined according to their intersections with a calibration target. The position and orientation of the target were obtained by a registration process. The selection of the target shape and size was found to be critical to a successful calibration at sea, due to the limitations in the manoeuvrability of the ROV.
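The core step of the calibration described here, determining each laser's plane equation relative to the camera, reduces to a least-squares plane fit once 3-D points on the laser line are known in the camera frame. A hypothetical numpy sketch (SVD-based fit; not the authors' exact procedure): the unit normal is the direction of least variance of the centered point cloud.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane n.x = d through an (N, 3) point cloud.
    The unit normal n is the singular vector of least variance."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                  # direction of least variance
    d = float(n @ centroid)     # signed distance along n
    return n, d

# Points sampled on the plane z = 1 (camera-frame coordinates):
pts = [[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1], [0.5, 0.5, 1]]
n, d = fit_plane(pts)
```

In the paper's setting the points would come from the laser line's intersection with the registered calibration target rather than being given directly.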

A Moving Camera Localization using Perspective Transform and KLT Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions: Part B
    • /
    • v.14B no.3 s.113
    • /
    • pp.163-170
    • /
    • 2007
  • In autonomous navigation of a mobile vehicle or mobile robot, localization calculated from recognizing its environment is the most important factor. Generally, we can determine the position and pose of a camera-equipped mobile vehicle or mobile robot using INS and GPS, but in this case we must use enough known ground landmarks for accurate localization. In contrast with the homography method, which calculates the position and pose of a camera using only the relation of two-dimensional feature points between two frames, in this paper we propose a method to calculate the position and pose of a camera from the relation between two sets of locations: those predicted through perspective transform of 3D feature points, obtained by overlaying a 3D model on the previous frame using GPS and INS input, and those of the corresponding feature points found in the current frame by KLT tracking. For the performance evaluation, we used a wireless-controlled vehicle mounted with a CCD camera, GPS, and INS, and performed a test to calculate the location and rotation angle of the camera from a video sequence captured at a 15 Hz frame rate.
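The KLT correspondence step used in this entry rests on the Lucas-Kanade solution: solve a small least-squares system for the translation of a window between consecutive frames. A single-point, pure-numpy sketch (the window size and the use of `np.gradient` are illustrative choices, not the paper's implementation):

```python
import numpy as np

def lk_step(prev, curr, pt, win=7):
    """One Lucas-Kanade iteration: least-squares translation of a
    small window around `pt` from frame `prev` to frame `curr`."""
    x, y = int(round(pt[0])), int(round(pt[1]))
    h = win // 2
    gy, gx = np.gradient(prev.astype(float))          # spatial gradients
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([gx[sl].ravel(), gy[sl].ravel()], axis=1)
    b = -(curr.astype(float) - prev.astype(float))[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)         # solve A d = -It
    return pt[0] + d[0], pt[1] + d[1]
```

A production tracker (as in KLT) would iterate this step over an image pyramid; one iteration suffices for sub-pixel motion on smooth imagery.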

On-road Vehicle Tracking using Laser Scanner with Multiple Hypothesis Assumption

  • Ryu, Kyung-Jin;Park, Seong-Keun;Hwang, Jae-Pil;Kim, Eun-Tai;Park, Mignon
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.9 no.3
    • /
    • pp.232-237
    • /
    • 2009
  • Active safety vehicle devices have been getting more attention recently. To prevent traffic accidents, the environment in front of and even around the vehicle must be checked and monitored. In present applications, mainly camera- and radar-based systems are used as sensing devices. The laser scanner, one such sensing device, has the advantage of obtaining accurate measurements of the distance and geometric information about the objects in its field of view. However, detecting an object occluded by a foreground one is difficult. In this paper, criteria are proposed to manage this problem. Simulations are conducted with a vehicle-mounted laser scanner, and a multiple-hypothesis algorithm tracks the candidate objects. We compare the running times as the multi-hypothesis algorithm parameters vary.

Developed Ethernet based image control system for deep-sea ROV (심해용 ROV를 위한 수중 원격 영상제어 시스템 개발)

  • Kim, Hyun-Hee;Jeong, Ki-Min;Park, Chul-Soo;Lee, Kyung-Chang;Hwang, Yeong-Yeun
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.21 no.6
    • /
    • pp.389-394
    • /
    • 2018
  • Remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) have been used for underwater surveys, underwater exploration, resource harvesting, offshore plant maintenance and repair, and underwater construction. It is hard for people to work in the deep sea, so we need a vision control system for underwater submersibles that can replace human eyes. However, developing a deep-sea image control system is difficult because of the special deep-sea environment: high pressure, brine, waterproofing, and communication constraints. In this paper, we develop an Ethernet-based remote image control system that can control the imaging system mounted on an ROV.
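A common pattern for an Ethernet image link of the kind this entry describes is length-prefixed framing of compressed frames over TCP, so the receiver can split a byte stream back into whole images. A minimal sketch of that framing only; the 4-byte big-endian layout is an assumption, not the paper's protocol:

```python
import struct

def pack_frame(payload):
    """Prefix one compressed frame with a 4-byte big-endian length
    so it can be sent over a TCP stream."""
    return struct.pack(">I", len(payload)) + payload

def unpack_frames(buf):
    """Split a receive buffer into complete frames. Returns
    (frames, leftover); leftover holds any partial frame for the
    next recv() call."""
    frames = []
    while len(buf) >= 4:
        n = struct.unpack(">I", buf[:4])[0]
        if len(buf) < 4 + n:
            break                     # frame not fully received yet
        frames.append(buf[4:4 + n])
        buf = buf[4 + n:]
    return frames, buf
```

The same framing works unchanged whether the payload is JPEG, H.264 NAL units, or telemetry.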

A Lane Departure Warning Algorithm Based on an Edge Distribution Function (에지분포함수 기반의 차선이탈경보 알고리즘)

  • 이준웅;이성웅
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.9 no.3
    • /
    • pp.143-154
    • /
    • 2001
  • An algorithm for estimating the lane departure of a vehicle is derived and implemented based on an EDF (edge distribution function) obtained from gray-level images taken by a CCD camera mounted on the vehicle. As a function of edge direction, the EDF shows the distribution of edge directions and makes it possible to estimate lane departure from its symmetric axis and local maxima. The EDF plays three important roles: 1) it reduces noise effects caused by the dynamic road scene; 2) it makes lane identification possible without camera modeling; 3) it reduces the LDW (lane departure warning) problem to a mathematical approach. When a lane departure situation occurs, such that the vehicle approaches the lane marks or runs in their vicinity, the orientation of the lane marks in the image changes, and this change is immediately reflected in the EDF. Accordingly, lane departure is estimated by studying the shape of the EDF. The proposed EDF-based algorithm enhances adaptability to random and dynamic road environments, and eventually leads to a reliable LDW system.

  • PDF
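Under the reading above, the EDF itself is a histogram of gradient directions over strong edge pixels; the magnitude threshold suppresses weak road texture. A minimal numpy sketch (threshold and bin count are illustrative values, not the paper's):

```python
import numpy as np

def edge_direction_function(gray, mag_thresh=10.0, bins=36):
    """EDF: histogram of gradient directions (mod 180 degrees) over
    pixels whose gradient magnitude exceeds mag_thresh."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # edge direction
    strong = mag > mag_thresh                      # suppress weak texture
    hist, bin_edges = np.histogram(ang[strong], bins=bins, range=(0.0, 180.0))
    return hist, bin_edges
```

Lane departure would then be read off from shifts in the histogram's symmetry axis and local maxima as the lane-mark orientation changes.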

Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles

  • Quan, Nguyen Van;Eum, Hyuk-Min;Lee, Jeisung;Hyun, Chang-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.13 no.2
    • /
    • pp.140-146
    • /
    • 2013
  • In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates a path tracking task using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and to detect markers in front of the vehicle. The other camera is attached so that its view is perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of a marker are also obtained using these two cameras. We propose five movement patterns for AGVs to guarantee smooth performance during path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives greater flexibility to AGVs, including easy layout change, autonomy, and even economy. The algorithm was validated in an experiment using a two-wheeled mobile robot.
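The angle and distance from the wheel center to a marker, as measured by the floor-facing camera, can be sketched as simple image-to-floor geometry. The pixel scale and camera-to-axle offset below are hypothetical parameters, not values from the paper:

```python
import numpy as np

def marker_offset(marker_px, img_size, mm_per_px, cam_to_axle_mm):
    """Distance (mm) and heading angle (deg) from the wheel-axle
    center to a floor marker seen by a downward-facing camera.
    Assumes a calibrated pixel scale and known axle offset."""
    cx, cy = img_size[0] / 2.0, img_size[1] / 2.0
    dx = (marker_px[0] - cx) * mm_per_px                  # lateral, mm
    dy = (cy - marker_px[1]) * mm_per_px + cam_to_axle_mm # forward, mm
    dist = float(np.hypot(dx, dy))
    angle = float(np.degrees(np.arctan2(dx, dy)))         # + = to the right
    return dist, angle
```

These two quantities are exactly what a pre-turning or straight-line controller would regulate while following the marker path.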

On-Road Succeeding Vehicle Detection using Characteristic Visual Features (시각적 특징들을 이용한 도로 상의 후방 추종 차량 인식)

  • Adhikari, Shyam Prasad;Cho, Hi-Tek;Yoo, Hyeon-Joong;Yang, Chang-Ju;Kim, Hyong-Suk
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.3
    • /
    • pp.636-644
    • /
    • 2010
  • A method for detecting on-road succeeding vehicles using characteristic visual features such as horizontal edges, shadow, symmetry, and intensity is proposed. The proposed method uses the prominent horizontal edges along with the shadow under the vehicle to generate an initial estimate of the vehicle-road surface contact. Fast symmetry detection, utilizing the edge pixels, is then performed to detect the presence of a vertically symmetric object, possibly a vehicle, in the region above the initially estimated vehicle-road contact. A window defined by the horizontal and vertical lines obtained above, along with local perspective information, provides a narrow region for the final search of the vehicle. A bounding box around the vehicle is extracted from the horizontal edges, a symmetry histogram, and a proposed squared-difference-of-intensity measure. Experiments performed on natural traffic scenes, obtained from a camera mounted on the side-view mirror of a host vehicle, demonstrate the good and reliable performance of the proposed method.
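The symmetry-histogram idea in this entry can be sketched by scoring, for each candidate vertical axis, how well edge pixels on the left mirror those on the right; peaks mark vertically symmetric objects such as vehicle rears. A minimal numpy sketch, not the authors' exact measure:

```python
import numpy as np

def symmetry_histogram(edges, half_width=15):
    """Score each candidate vertical symmetry axis of a boolean edge
    map by counting edge pixels matched with their mirror image."""
    h, w = edges.shape
    scores = np.zeros(w)
    for axis in range(half_width, w - half_width):
        left = edges[:, axis - half_width:axis]
        right = edges[:, axis + 1:axis + 1 + half_width][:, ::-1]
        scores[axis] = np.sum(left & right)   # mirrored edge agreement
    return scores
```

The argmax of the score array gives the most likely vehicle centerline within the search region.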

New Vehicle Verification Scheme for Blind Spot Area Based on Imaging Sensor System

  • Hong, Gwang-Soo;Lee, Jong-Hyeok;Lee, Young-Woon;Kim, Byung-Gyu
    • Journal of Multimedia Information System
    • /
    • v.4 no.1
    • /
    • pp.9-18
    • /
    • 2017
  • Ubiquitous computing is a novel paradigm that is rapidly gaining ground in wireless communications and telecommunications for realizing a smart world. With the rapid development of sensor technology, smart sensor systems have become more popular in automobiles. In this study, a new real-time vehicle detection mechanism for the blind spot area is proposed based on imaging sensors. Determining the position of other vehicles on the road is important for driver assistance systems (DASs) to increase driving safety. Accordingly, blind spot detection of vehicles is addressed using an automobile detection algorithm for blind spots. The proposed vehicle verification utilizes the height and angle of a rear-looking vehicle-mounted camera. Candidate vehicle information is extracted using adaptive shadow detection based on the brightness values of the vehicle area in an image. The vehicle is verified using a training set with Haar-like features of candidate vehicles. Using these processes, moving vehicles can be detected in blind spots. The detection ratio of true vehicles was 91.1% in blind spots across various experiments.
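The Haar-like features used for verification in this entry reduce to rectangle sums over an integral image, which makes each feature O(1) to evaluate. A minimal sketch of one two-rectangle feature; interpreting the top-minus-bottom response as the dark shadow band under a vehicle is illustrative, not the paper's exact feature set:

```python
import numpy as np

def integral(img):
    """Summed-area table with a zero top row and left column, so any
    rectangle sum needs only four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] from the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_edge_feature(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: top half minus bottom half.
    Strongly positive where a bright region sits above a dark band."""
    top = rect_sum(ii, y, x, h // 2, w)
    bottom = rect_sum(ii, y + h // 2, x, h - h // 2, w)
    return top - bottom
```

A trained classifier (as in the paper) would threshold many such features over candidate windows rather than using a single hand-picked one.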