• Title/Summary/Keyword: Camera localization

3D Environment Perception using Stereo Infrared Light Sources and a Camera (스테레오 적외선 조명 및 단일카메라를 이용한 3차원 환경인지)

  • Lee, Soo-Yong;Song, Jae-Bok
    • Journal of Institute of Control, Robotics and Systems / v.15 no.5 / pp.519-524 / 2009
  • This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is a key issue for mobile robot localization and navigation. Laser and infrared scanners cover $180^{\circ}$ and are accurate, but they are expensive, and because they sweep a rotating beam, their range measurements are confined to a plane. 3D measurements are far more useful for obstacle detection, map building and localization. Stereo vision is a common way of obtaining depth information about a 3D environment, but it requires that correspondences be identified unambiguously and depends heavily on the lighting of the environment. Instead of a stereo camera, this work uses a monocular camera with two projected infrared light sources, reducing the influence of ambient light while still recovering a 3D depth map. Modeling the projected light pattern enables precise range estimation. Capturing two successive images, one under the left and one under the right infrared projection, provides several benefits: a wider depth-measurement area, higher spatial resolution and visibility perception.
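
The range recovery described above rests on projector-camera triangulation. A minimal sketch, assuming a rectified pinhole geometry; the function name and the baseline, focal-length and disparity values are illustrative, not taken from the paper:

```python
def range_from_disparity(baseline_m, focal_px, pixel_offset_px):
    """Triangulated depth for a structured-light spot. The projector and
    camera are separated by a known baseline, and the spot's horizontal
    shift in the image (disparity) encodes range, exactly as in stereo:
    Z = f * B / d under a rectified pinhole model."""
    if pixel_offset_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / pixel_offset_px

# A spot projected from a source 0.10 m beside the camera, observed with
# a 20-pixel disparity through a lens of 500-pixel focal length:
z = range_from_disparity(0.10, 500.0, 20.0)  # 2.5 m
```

The same relation explains why two light sources widen coverage: points shadowed from one projector may still be lit, and hence ranged, by the other.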

A Real-time Audio Surveillance System Detecting and Localizing Dangerous Sounds for PTZ Camera Surveillance (PTZ 카메라 감시를 위한 실시간 위험 소리 검출 및 음원 방향 추정 소리 감시 시스템)

  • Nguyen, Viet Quoc;Kang, HoSeok;Chung, Sun-Tae;Cho, Seongwon
    • Journal of Korea Multimedia Society / v.16 no.11 / pp.1272-1280 / 2013
  • In this paper, we propose an audio surveillance system that detects and localizes dangerous sounds in real time. The location information lets a PTZ camera be steered toward the source area to capture a snapshot image and send it to clients immediately. The proposed system first detects foreground sounds using an adaptive Gaussian mixture background sound model, and classifies each as one of several pre-trained classes of dangerous foreground sounds. For detected dangerous sounds, a source-localization algorithm based on the dual delay-line method estimates the direction of the sound source. Finally, the system orients the PTZ camera toward the source region and takes a snapshot of it. Experiments show that the system detects foreground dangerous sounds stably, classifies them into the correct classes with a precision of 79%, and estimates the orientation of the sound source with acceptably small error.
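
The direction-estimation step can be illustrated with a generic two-microphone time-difference-of-arrival estimator. This is a simplified stand-in for the dual delay-line algorithm the paper uses, and the sampling rate and microphone spacing below are made-up parameters:

```python
import math

def estimate_azimuth(left, right, fs_hz, mic_spacing_m, c=343.0):
    """Estimate the bearing of a sound from two microphone signals.
    The lag maximizing the cross-correlation gives the inter-mic delay
    tau; for a far-field source, azimuth = asin(c * tau / d)."""
    max_lag = int(mic_spacing_m / c * fs_hz) + 1  # physically possible lags
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[i] * right[i - lag]
                    for i in range(max(lag, 0), min(len(left), len(right) + lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    tau = best_lag / fs_hz
    ratio = max(-1.0, min(1.0, c * tau / mic_spacing_m))
    return math.degrees(math.asin(ratio))
```

A dual delay-line implementation would evaluate the same delay hypotheses in parallel taps rather than by explicit correlation, but the recovered bearing is the same quantity.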

Localization of 3D Spatial Information from Single Omni-Directional Image (단일 전방향 영상을 이용한 공간 정보의 측정)

  • Kang Hyun-Deok;Jo Kang-Hyun
    • Journal of Institute of Control, Robotics and Systems / v.12 no.7 / pp.686-692 / 2006
  • This paper presents the calculation of 3D geometric information such as height, direction and distance under the constraints of a catadioptric camera system. The system satisfies the single-viewpoint constraint by adopting a hyperboloidal mirror. To recover 3D information from a single omni-directional image, the measured points are assumed to lie on lines perpendicular to the ground. The plane at infinity is also detected as a circle determined by the structure of the mirror and camera. Analytic experiments using real images taken in indoor environments such as rooms and corridors verify the correctness of the theory and show that 3D geometric information can be computed from a single omni-directional image.
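
The perpendicular-to-ground assumption is what makes metric recovery from one view possible. A hedged sketch of the underlying trigonometry, assuming a known camera height and ideal angle measurements; the function and its arguments are illustrative, not the paper's catadioptric formulation:

```python
import math

def vertical_object_geometry(cam_height_m, angle_to_base_rad, angle_to_top_rad):
    """Distance and height of a vertical object from a single viewpoint.
    angle_to_base is the depression angle to the object's foot on the
    floor; angle_to_top is the elevation angle to its top. Because the
    foot lies on the ground plane, the depression angle alone fixes the
    horizontal distance, and the elevation angle then fixes the height."""
    distance = cam_height_m / math.tan(angle_to_base_rad)
    height = cam_height_m + distance * math.tan(angle_to_top_rad)
    return distance, height
```

In the catadioptric case the two angles are read off from the image radius of the point via the mirror model, but the ground-plane reasoning is the same.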

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems / v.16 no.4 / pp.381-390 / 2010
  • This paper describes a map-based localization procedure for mobile robots that uses sensor fusion in structured environments. Combining sensors with different characteristics and limited individual sensing capability yields complementary, cooperative information about the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured-light sensor, environment information acquired from the two sensors is fused by a Bayesian technique based on a probabilistic reliability function for each sensor, determined in advance through experiments. For self-localization with the monocular camera, the robot extracts vertical edge lines from the input images and uses them as natural landmark points. The laser structured-light sensor instead provides geometric features, corners and planes, extracted from range data taken at a constant height above the floor, which serve as natural landmark shapes. Although either feature group alone can sometimes localize the robot, all features from both sensors are used and fused simultaneously to obtain reliable localization under varied environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the results are discussed in detail.
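
The fusion step can be illustrated for the scalar Gaussian case, where each sensor's reliability is its inverse variance. This is the textbook Bayesian product of Gaussians, not the paper's experimentally determined reliability functions:

```python
def fuse_gaussian(mean_a, var_a, mean_b, var_b):
    """Bayesian fusion of two independent Gaussian estimates of the same
    quantity (e.g. a robot x-position from vision and from structured
    light). The posterior is Gaussian: inverse variances (reliabilities)
    add, and the mean is the reliability-weighted average."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (w_a + w_b)
    mean = var * (w_a * mean_a + w_b * mean_b)
    return mean, var

# Two equally reliable estimates, 0.0 and 2.0, fuse to 1.0 with
# half the variance of either input.
```

The fused variance never exceeds the smaller input variance, which is the formal sense in which adding a second sensor can only help.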

The 3 Dimensional Triangulation Scheme based on the Space Segmentation in WPAN

  • Lee, Dong Myung;Lee, Ho Chul
    • Journal of Engineering Education Research / v.15 no.5 / pp.93-97 / 2012
  • Most ubiquitous computing devices applied to localization today, such as stereo cameras, the ultrasonic-sensor-based MIT Cricket system and other wireless sensor network devices, operate as two-dimensional (2D) systems. A stereo camera cannot estimate the optimal location of a moving node relative to a beacon node in a Wireless Personal Area Network (WPAN) under non-line-of-sight (NLOS) conditions, which is a serious weakness for indoor 2D localization. Moreover, the conventional 2D triangulation scheme adopted in the MIT Cricket system generally cannot estimate the three-dimensional (3D) coordinates needed for the optimal location of the moving node. This paper therefore proposes a 3D triangulation scheme based on space segmentation in WPAN. Simulation results for the proposed scheme are compared with geographic reference measurements produced in the AutoCAD software system. The average error in the moving node's (x, y, z) coordinates is 0.008 m, showing that the location accuracy of the proposed scheme is well suited to WPAN localization systems.
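
Generic 3D position estimation from beacon distances can be sketched as follows. Note that this is the standard sphere-equation linearization, not the authors' space-segmentation scheme, and the beacon layout is hypothetical:

```python
def trilaterate(beacons, dists):
    """Estimate a 3D position from distances to four known beacons.
    Subtracting the first sphere equation |p - b_i|^2 = d_i^2 from the
    others cancels |p|^2 and leaves a 3x3 linear system in p."""
    (x0, y0, z0), d0 = beacons[0], dists[0]
    A, b = [], []
    for (x, y, z), d in zip(beacons[1:], dists[1:]):
        A.append([2 * (x - x0), 2 * (y - y0), 2 * (z - z0)])
        b.append(d0**2 - d**2 + x**2 + y**2 + z**2 - x0**2 - y0**2 - z0**2)
    # Solve the 3x3 system by Gaussian elimination with partial pivoting.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    x = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x
```

With noisy distances one would use more than four beacons and a least-squares solve, but the linearization is identical.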

Localization for Mobile Robot Using Line Segments (라인 세그먼트를 이용한 이동 로봇의 자기 위치 추정)

  • 강창훈;안현식
    • Proceedings of the IEEK Conference / 2003.07c / pp.2581-2584 / 2003
  • In this paper, we propose a self-localization algorithm that uses vertical line segments. Indoor environments consist of horizontal and vertical line features arising from doors, furniture, and so on. Vertical line edges are detected in the input image with an edge operator; line segments are then obtained by projecting the edge image vertically and finding local maxima in the projected histogram. The relation between the horizontal positions of the line segments and the location of the robot yields a set of nonlinear equations, and localization is performed by solving them with Newton's method. Experimental results show that the proposed single-camera algorithm is simple and applicable to indoor environments.
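
The Newton's-method step can be sketched for a hypothetical two-landmark bearing problem. The residual, the landmark layout and the known-heading assumption are illustrative, not the paper's exact equations:

```python
import math

def localize_from_bearings(landmarks, bearings, guess, iters=20):
    """Solve for a robot position (x, y) from measured bearings to two
    known vertical-line landmarks, using Newton's method on the residual
    f_i = atan2(ly_i - y, lx_i - x) - bearing_i."""
    x, y = guess
    for _ in range(iters):
        f = [math.atan2(ly - y, lx - x) - b
             for (lx, ly), b in zip(landmarks, bearings)]
        # Analytic Jacobian of atan2(ly - y, lx - x) w.r.t. (x, y).
        J = []
        for lx, ly in landmarks:
            dx, dy = lx - x, ly - y
            r2 = dx * dx + dy * dy
            J.append([dy / r2, -dx / r2])
        # Newton step: solve the 2x2 system J * delta = -f (Cramer's rule).
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        ddx = (-f[0] * J[1][1] + f[1] * J[0][1]) / det
        ddy = (-f[1] * J[0][0] + f[0] * J[1][0]) / det
        x, y = x + ddx, y + ddy
    return x, y
```

With more than two line segments the same residuals would be stacked and solved in a least-squares (Gauss-Newton) sense.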

Mobile Robot Navigation in an Indoor Environment

  • Choi, Sung-Yug;Lee, Jang-Myung
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2005.06a / pp.1456-1459 / 2005
  • To compensate for the drawbacks of relative localization, a new method is proposed that estimates the global position of a mobile robot using a camera mounted on the ceiling of a corridor. Unlike relative localization, the scheme does not have to suppress accumulating position error algorithmically from noisy sensor data. The effectiveness of the proposed localization scheme is demonstrated by experiments.

The GEO-Localization of a Mobile Mapping System (모바일 매핑 시스템의 GEO 로컬라이제이션)

  • Chon, Jae-Choon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.27 no.5 / pp.555-563 / 2009
  • When a mobile mapping system or a robot carries only a GPS (Global Positioning System) receiver and a multiple stereo camera system, a transformation from the local camera coordinate system to the GPS coordinate system is required to link the camera poses and 3D data produced by V-SLAM (Vision-based Simultaneous Localization And Mapping) to GIS data, or to remove the accumulated error of those camera poses. To meet this requirement, this paper proposes a novel method that computes the camera rotation in the GPS coordinate system from three pairs of camera positions given by GPS and by V-SLAM, respectively. The proposed method consists of four simple steps: 1) calculate a quaternion that makes the normal vectors of the two planes, each defined by one triple of camera positions, parallel; 2) transform the three V-SLAM camera positions with that quaternion; 3) calculate an additional quaternion mapping the second or third transformed position onto the corresponding GPS camera position; and 4) obtain the final quaternion as the product of the two. The final quaternion transforms directly from the local camera coordinate system to the GPS coordinate system. An update of the 3D data of captured objects based on the view angles from object to cameras is also proposed. The method is demonstrated through a simulation and an experiment.
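
Steps 1 and 3 above both need a quaternion that rotates one unit vector onto another. A minimal sketch of that primitive and of quaternion rotation, using the usual shortest-arc construction (the antiparallel input case, which needs special handling, is ignored here):

```python
import math

def quat_between(u, v):
    """Unit quaternion rotating unit vector u onto unit vector v:
    q = normalize(1 + u.v, u x v), the shortest-arc rotation."""
    cross = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
    dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
    w = 1.0 + dot
    n = math.sqrt(w * w + cross[0] ** 2 + cross[1] ** 2 + cross[2] ** 2)
    return (w / n, cross[0] / n, cross[1] / n, cross[2] / n)

def rotate(q, p):
    """Rotate point p by unit quaternion q via q * (0, p) * conj(q)."""
    w, x, y, z = q
    px, py, pz = p
    # r = q * (0, p)
    rw = -x * px - y * py - z * pz
    rx = w * px + y * pz - z * py
    ry = w * py + z * px - x * pz
    rz = w * pz + x * py - y * px
    # vector part of r * conj(q)
    return (-rw * x + rx * w - ry * z + rz * y,
            -rw * y + ry * w - rz * x + rx * z,
            -rw * z + rz * w - rx * y + ry * x)
```

The paper's step 4 is then an ordinary quaternion product of the two rotations obtained this way.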

3D Range Measurement using Infrared Light and a Camera (적외선 조명 및 단일카메라를 이용한 입체거리 센서의 개발)

  • Kim, In-Cheol;Lee, Soo-Yong
    • Journal of Institute of Control, Robotics and Systems / v.14 no.10 / pp.1005-1013 / 2008
  • This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is a key issue for mobile robot localization and navigation. Laser and infrared scanners cover $180^{\circ}$ and are accurate, but they are expensive, and because they sweep a rotating beam, their range measurements are confined to a plane. 3D measurements are far more useful for obstacle detection, map building and localization. Stereo vision is a common way of obtaining depth information about a 3D environment, but it requires that correspondences be identified unambiguously and depends heavily on the lighting of the environment. Instead of a stereo camera, this work uses a monocular camera with a projected infrared light pattern, reducing the influence of ambient light while still recovering a 3D depth map. Modeling the projected light pattern enables precise range estimation. Correctly identifying the cells of the pattern is the key issue in the proposed method; several identification methods are discussed and verified with experiments.

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.2 / pp.165-171 / 2004
  • Robots will be able to coexist with humans and support humans effectively in near future. One of the most important aspects in the development of human-friendly robots is to cooperation between humans and robots. In this paper, we proposed a method for multi-object identification in order to achieve such human-centered system and robot localization in intelligent space. The intelligent space is the space where many intelligent devices, such as computers and sensors, are distributed. The Intelligent Space achieves the human centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module, which includes processing and networking part, has been chosen. The Intelligent Space requires functions of identifying and tracking the multiple objects to realize appropriate services to users under the multi-camera environments. In order to achieve seamless tracking and location estimation many camera modules are distributed. They causes some errors about object identification among different camera modules. This paper describes appearance based object representation for the distributed vision system in Intelligent Space to achieve consistent labeling of all objects. Then, we discuss how to learn the object color appearance model and how to achieve the multi-object tracking under occlusions.