• Title/Summary/Keyword: monocular camera

Efficient Lane Detection for Preceding Vehicle Extraction by Limiting Search Area of Sequential Images (전방의 차량포착을 위한 연속영상의 대상영역을 제한한 효율적인 차선 검출)

  • Han, Sang-Hoon;Cho, Hyung-Je
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.705-717
    • /
    • 2001
  • In this paper, we propose a fast lane detection method for extracting a preceding vehicle from sequential images captured by a single monocular CCD camera. For each image, we detect lane positions within a limited area that is unlikely to be occluded, and from these compute the slopes of the detected lanes. We then determine a search area where vehicles are likely to exist and extract the position of the preceding vehicle within that area from edge components using a structured method. To verify the proposed method, we captured road images with a notebook PC and a PC CCD camera, and we present results including lane-detection processing time, accuracy, and vehicle detection on these images.
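
The central computation this abstract describes — fitting lane slopes from edge points inside a restricted search band — can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function names and the least-squares fit are assumptions.

```python
def limit_search_area(points, x_min, x_max):
    """Keep only edge points inside the restricted horizontal band,
    mimicking the abstract's limited search area."""
    return [(x, y) for x, y in points if x_min <= x <= x_max]

def fit_lane_slope(points):
    """Least-squares fit of a line y = m*x + b through candidate
    lane-edge points; m is the lane slope used downstream."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b
```

Restricting the band before fitting is what keeps the per-frame cost low enough for sequential images.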

Detection of Objects Temporally Stop Moving with Spatio-Temporal Segmentation (시공간 영상분할을 이용한 이동 및 이동 중 정지물체 검출)

  • Kim, Do-Hyung;Kim, Gyeong-Hwan
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.40 no.1
    • /
    • pp.142-151
    • /
    • 2015
  • This paper proposes a method for detecting objects that temporarily stop moving in video sequences taken by a moving camera. Although missing such objects can be catastrophic for application-level requirements, conventional approaches have paid little attention to them. The proposed method introduces three cues for consistent detection and tracking of objects: motion potential, position potential, and color distribution similarity. Integrating the three cues in a graph-cut algorithm makes it possible to detect objects that temporarily stop moving as well as newly appearing objects. Experimental results show that the proposed method can not only detect moving objects but also track objects after they stop moving.
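
The key idea — a stopped object keeps high position and color cues even when its motion cue vanishes — can be illustrated with a toy cue combination. The weights and threshold are illustrative assumptions; the paper integrates the cues inside a graph-cut energy, not a plain weighted sum.

```python
def fuse_cues(motion_potential, position_potential, color_similarity,
              weights=(0.4, 0.3, 0.3)):
    """Combine the three cues into a single foreground likelihood in [0, 1]."""
    w_m, w_p, w_c = weights
    return w_m * motion_potential + w_p * position_potential + w_c * color_similarity

def is_object(likelihood, threshold=0.5):
    """A temporarily stopped object survives because its position potential
    and color similarity stay high while its motion potential drops to zero."""
    return likelihood >= threshold
```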

A Region Depth Estimation Algorithm using Motion Vector from Monocular Video Sequence (단안영상에서 움직임 벡터를 이용한 영역의 깊이추정)

  • 손정만;박영민;윤영우
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.5 no.2
    • /
    • pp.96-105
    • /
    • 2004
  • Recovering a 3D image from 2D requires depth information for each picture element, and manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to estimate the relative depth of every region in a single-view image under camera translation. The approach is based on the fact that, when the camera translates, the image motion of each point depends on its depth. Motion vectors obtained by full-search motion estimation are compensated for camera rotation and zooming. We developed a framework that estimates the average frame depth by analyzing the motion vectors and then calculates each region's depth relative to that average. Simulation results show that the estimated depths of regions belonging to near and far objects are consistent with the relative depths a human observer perceives.
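
For a purely translating camera, image motion magnitude is inversely proportional to depth, so the abstract's region-relative-to-frame-average scheme reduces to inverse motion normalized by its mean. A minimal sketch under that assumption (rotation/zoom compensation already applied):

```python
def relative_region_depths(motion_magnitudes):
    """depth_i is proportional to 1/|v_i| under camera translation; return each
    region's depth relative to the average frame depth."""
    depths = [1.0 / m for m in motion_magnitudes]
    avg = sum(depths) / len(depths)
    return [d / avg for d in depths]
```

Regions with small motion come out deeper than average (ratio > 1); fast-moving regions come out nearer (ratio < 1).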

A Framework for Real Time Vehicle Pose Estimation based on synthetic method of obtaining 2D-to-3D Point Correspondence

  • Yun, Sergey;Jeon, Moongu
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2014.04a
    • /
    • pp.904-907
    • /
    • 2014
  • In this work we present a robust and fast approach to 3D vehicle pose estimation under specific traffic surveillance conditions: a single fixed CCTV camera located relatively high above the ground, with its pitch axis parallel to the reference plane and a known camera focal length. The benefit of our framework is that it requires no prior training or camera calibration and does not rely heavily on a 3D model shape, as most common techniques do. It also copes with poorly shaped objects, since we focus on low-resolution surveillance scenes. Pose estimation is posed as a PnP problem, which we solve with the well-known POSIT algorithm [1]. This algorithm requires at least four non-coplanar point correspondences; to find them, we propose a set of techniques based on model and scene geometry. Our framework can be applied to real-time video sequences, and estimated vehicle poses are shown on real image scenes.
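
POSIT itself is not reproduced here, but the ingredient the title names — synthetically obtaining 2D-to-3D point correspondences — can be sketched with a pinhole projection under the stated assumption of a known focal length. The model points and focal value below are illustrative.

```python
def project_pinhole(point3d, focal):
    """Project a 3D model point (camera coordinates, Z forward) onto the
    image plane: u = f*X/Z, v = f*Y/Z. Pairing each model point with its
    projection yields the 2D-to-3D correspondences a PnP solver consumes."""
    X, Y, Z = point3d
    return focal * X / Z, focal * Y / Z

# Four non-coplanar model points, as POSIT requires at minimum.
model = [(0.0, 0.0, 4.0), (1.0, 0.0, 4.0), (0.0, 1.0, 4.0), (1.0, 1.0, 2.0)]
correspondences = [(p, project_pinhole(p, focal=2.0)) for p in model]
```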

Improved Object Recognition using Multi-view Camera for ADAS (ADAS용 다중화각 카메라를 이용한 객체 인식 향상)

  • Park, Dong-hun;Kim, Hakil
    • Journal of Broadcast Engineering
    • /
    • v.24 no.4
    • /
    • pp.573-579
    • /
    • 2019
  • To achieve fully autonomous driving, perception of the surrounding environment must be superior to that of humans. The $60^{\circ}$ narrow-angle and $120^{\circ}$ wide-angle cameras primarily used in autonomous driving each have disadvantages that depend on the viewing angle. This paper uses a multi-angle object recognition system to overcome the respective disadvantages of wide- and narrow-angle cameras. In addition, the aspect ratios of data acquired with the wide- and narrow-angle cameras were analyzed to modify the SSD (Single Shot Detector) algorithm, and the acquired data were used for training to achieve higher performance than with a single monocular camera.
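
The aspect-ratio analysis mentioned in the abstract is commonly done by histogramming the width/height ratios of ground-truth boxes per camera and using the dominant ratios to retune SSD's default boxes. A minimal sketch of that analysis (the rounding granularity and function name are assumptions):

```python
from collections import Counter

def dominant_aspect_ratios(boxes, k=3):
    """Return the k most frequent width/height ratios among ground-truth
    boxes — candidates for SSD default-box aspect ratios."""
    ratios = [round(w / h, 1) for w, h in boxes]
    return [r for r, _ in Counter(ratios).most_common(k)]
```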

Fast, Accurate Vehicle Detection and Distance Estimation

  • Ma, QuanMeng;Jiang, Guang;Lai, DianZhi;cui, Hua;Song, Huansheng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.2
    • /
    • pp.610-630
    • /
    • 2020
  • A large number of people suffer from traffic accidents each year, so traffic safety receives increasing attention. Traditional methods use laser sensors to measure vehicle distance, at a very high cost. In this paper, we propose a deep-learning-based method to estimate vehicle distance with a monocular camera. Our method is inexpensive and convenient to deploy on mobile platforms. This paper makes two contributions. First, based on Light-Head RCNN, we propose a new vehicle detection framework called Light-Car Detection that can run on mobile platforms. Second, the planar homography of projective geometry is used to calculate the distance between the camera and the vehicles ahead. The results show that our detection system achieves a 13 FPS detection speed and 60.0% mAP on the Adreno 530 GPU of a Samsung Galaxy S7 while requiring only 7.1 MB of storage. Compared with existing methods, the proposed method achieves better performance.
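
The homography step the abstract relies on maps a vehicle's road contact point in the image to metric road-plane coordinates, from which distance follows directly. A minimal sketch; in practice H must be estimated from a road-plane calibration, and the identity matrix below is only a toy stand-in.

```python
def apply_homography(H, u, v):
    """Map an image point (u, v) to road-plane coordinates via
    [x, y, w]^T = H [u, v, 1]^T, then dehomogenize."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

def distance_to_vehicle(H, contact_point):
    """Euclidean distance from the camera to the vehicle's road contact
    point, in the units of the road-plane frame."""
    X, Y = apply_homography(H, *contact_point)
    return (X * X + Y * Y) ** 0.5
```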

Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.1
    • /
    • pp.47-52
    • /
    • 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified using this eigen eye feature space. In our experiments, discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
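
Classifying a query's pose in a trained feature space is often realized as nearest-prototype matching; the sketch below assumes that realization (the eigen projection itself is omitted, and the labels and vectors are made up for illustration).

```python
def classify_pose(feature, prototypes):
    """Assign the pose label of the nearest training prototype in the
    (eigen) feature space built from pupil geometry."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: sq_dist(feature, prototypes[label]))
```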

Implementation of a sensor fusion system for autonomous guided robot navigation in outdoor environments (실외 자율 로봇 주행을 위한 센서 퓨전 시스템 구현)

  • Lee, Seung-H.;Lee, Heon-C.;Lee, Beom-H.
    • Journal of Sensor Science and Technology
    • /
    • v.19 no.3
    • /
    • pp.246-257
    • /
    • 2010
  • Autonomous guided robot navigation, which consists of following unknown paths and avoiding unknown obstacles, is a fundamental technique for unmanned robots in outdoor environments. Following an unknown path requires path recognition, path planning, and robot pose estimation. In this paper, we propose a novel sensor fusion system for autonomous guided robot navigation in outdoor environments. The proposed system consists of three monocular cameras and an array of nine infrared range sensors. The two cameras mounted on the robot's right and left sides are used to recognize unknown paths and estimate the robot's relative pose on these paths through a Bayesian sensor fusion method, and the camera mounted at the front of the robot is used to recognize abrupt curves and unknown obstacles. The infrared range sensor array improves the robustness of obstacle avoidance. The forward camera and the infrared range sensor array are fused through a rule-based method for obstacle avoidance. Experiments in outdoor environments show that a mobile robot with the proposed sensor fusion system successfully performed real-time autonomous guided navigation.
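
The rule-based camera/IR fusion described above can be sketched as a simple OR-style rule set: either sensor alone triggers avoidance, which is what makes the combination robust to a single-sensor miss. The rules, threshold, and return labels are illustrative assumptions.

```python
def fuse_for_avoidance(camera_obstacle, ir_ranges, ir_threshold=0.5):
    """Rule-based fusion of the forward camera and the 9-element IR range
    array: avoid if the camera reports an obstacle OR any IR range reads
    closer than the threshold (ranges in meters, illustrative)."""
    ir_obstacle = any(r < ir_threshold for r in ir_ranges)
    if camera_obstacle or ir_obstacle:
        return "avoid"
    return "follow_path"
```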

Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.3
    • /
    • pp.727-732
    • /
    • 2004
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is then classified using this eigen eye feature space. In our experiments, discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.

Long Distance Vehicle Recognition and Tracking using Shadow (그림자를 이용한 원거리 차량 인식 및 추적)

  • Ahn, Young-Sun;Kwak, Seong-Woo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.1
    • /
    • pp.251-256
    • /
    • 2019
  • This paper presents an algorithm for recognizing and tracking a distant vehicle using a monocular camera installed at the center of the windshield, in order to operate an autonomous vehicle in a race. The vehicle is detected using Haar features, and its size and position are determined by detecting the shadow at its bottom. The region around the recognized vehicle is set as the ROI (Region Of Interest), and the vehicle shadow within the ROI is found and tracked in the next frame. The position, relative speed, and direction of the vehicle are then predicted. Experimental results show that the vehicle is recognized with a recognition rate of over 90% at distances of more than 100 meters.
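
Two pieces of the pipeline lend themselves to a short sketch: locating the under-vehicle shadow as the darkest horizontal band inside the ROI, and deriving relative speed from the tracked distance across frames. Both functions are illustrative reconstructions, not the paper's implementation.

```python
def shadow_row(intensity_rows):
    """Index of the darkest horizontal band below a detected vehicle — the
    under-vehicle shadow used to fix the vehicle's size and position."""
    return min(range(len(intensity_rows)),
               key=lambda i: sum(intensity_rows[i]) / len(intensity_rows[i]))

def relative_speed(prev_distance, curr_distance, dt):
    """Relative speed of the tracked vehicle between two frames
    (negative when the gap is closing)."""
    return (curr_distance - prev_distance) / dt
```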