• Title/Summary/Keyword: Camera localization

Real-Time Individual Tracking of Multiple Moving Objects for Projection based Augmented Visualization (다중 동적객체의 실시간 독립추적을 통한 프로젝션 증강가시화)

  • Lee, June-Hyung;Kim, Ki-Hong
    • Journal of Digital Convergence
    • /
    • v.12 no.11
    • /
    • pp.357-364
    • /
    • 2014
  • When the markers being tracked move quickly, AR content flickers while the images captured from the camera are updated. Conventional methods that employ image-based markers and SLAM algorithms for object tracking do not allow more than two objects to be tracked simultaneously and to interact with each other in the same camera scene. In this paper, an improved SLAM-type algorithm for tracking dynamic objects is proposed and investigated to solve this problem. To this end, a method using two virtual cameras for one physical camera is adopted, which lets the two tracked objects interact with each other: this becomes possible because the two objects are perceived separately by the single physical camera. Mobile robots used as the dynamic objects are synchronized with virtual robots in purpose-built content, demonstrating the usefulness of applying individual tracking of multiple moving objects to augmented visualization (see the sketch below).
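The virtual-camera idea can be illustrated in a few lines of OpenCV. This is a minimal sketch under assumed conditions, not the paper's implementation: the two frame halves stand in for the two virtual cameras, and a trivial brightest-pixel routine stands in for the proposed SLAM-type tracker.

```python
import cv2

def track_in_view(view):
    """Placeholder per-view tracker: returns the brightest pixel's location.
    The paper's SLAM-type object tracker would slot in here instead."""
    gray = cv2.cvtColor(view, cv2.COLOR_BGR2GRAY)
    _, _, _, max_loc = cv2.minMaxLoc(gray)
    return max_loc

cap = cv2.VideoCapture(0)                # the single physical camera
ok, frame = cap.read()
if ok:
    h, w = frame.shape[:2]
    left, right = frame[:, :w // 2], frame[:, w // 2:]   # two virtual cameras
    p1 = track_in_view(left)
    p2 = track_in_view(right)
    # each object is tracked in its own view; map back to frame coordinates
    print("object 1:", p1, "object 2:", (p2[0] + w // 2, p2[1]))
cap.release()
```

Because each virtual view keeps its own tracking context, the two objects never compete inside a single tracker state, which is what allows them to be handled independently and then composed for interaction.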

ARVisualizer : A Markerless Augmented Reality Approach for Indoor Building Information Visualization System

  • Kim, Albert Hee-Kwan;Cho, Hyeon-Dal
    • Spatial Information Research
    • /
    • v.16 no.4
    • /
    • pp.455-465
    • /
    • 2008
  • Augmented reality (AR) has tremendous potential for visualizing geospatial information, especially on actual physical scenes. However, to use augmented reality in a mobile system, much research has relied on GPS- or ubiquitous-marker-based approaches. Although several papers address vision-based markerless tracking, previous approaches provide fairly good results only in largely controlled environments. Localization and tracking of the current position become a more complex problem in indoor environments. Many have proposed radio-frequency (RF) based tracking and localization, but this brings deployment problems for large numbers of RF sensors and readers. In this paper, we present a novel markerless AR approach for an indoor (and possibly outdoor) navigation system using only the monoSLAM (Monocular Simultaneous Localization and Map building) algorithm, as part of our broader effort to develop a mobile seamless indoor/outdoor u-GIS system. The paper briefly explains the basic SLAM algorithm and then the implementation of our system (an EKF skeleton of the kind monoSLAM builds on is sketched below).
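For orientation, monoSLAM maintains a joint state of camera pose and feature positions inside an extended Kalman filter. The following is a minimal predict/update skeleton with assumed linear models; real monoSLAM linearizes nonlinear motion and projection models around the current estimate.

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate state x and covariance P through motion model F with noise Q."""
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Correct the prediction with measurement z (observation model H, noise R)."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # innovation correction
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```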

Indoor Localization by Matching of the Types of Vertices (모서리 유형의 정합을 이용한 실내 환경에서의 자기위치검출)

  • Ahn, Hyun-Sik
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.46 no.6
    • /
    • pp.65-72
    • /
    • 2009
  • This paper presents a vision-based localization method for indoor mobile robots that uses the types of vertices extracted from a monocular image. In the images captured by the robot's camera, vertex types are determined by searching for vertical edges and their branch edges under geometric constraints. To obtain correspondences between the corners of a 2-D map and the vertices in the images, the vertex types and geometric constraints are derived from a geometric analysis. The vertices are then matched to the corners by a heuristic method using the types and positions of both. From the matched pairs, nonlinear equations are derived from the perspective and rigid transformations, and the pose of the robot is computed by solving these equations with a least-squares optimization technique (a toy version of this step is sketched below). Experimental results show that the proposed method is effective and applicable to localization in indoor environments.
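To make the final step concrete, here is a minimal planar version of pose estimation by least squares: given hypothetical map corners matched to the image columns of their vertical edges, the robot pose (x, y, theta) is found by minimizing the reprojection residual. The intrinsics and matched pairs are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical matched pairs: corner positions in the 2-D map (metres) and
# the image columns (pixels) of their vertical edges.
map_corners = np.array([[2.0, 1.0], [3.5, -0.5], [5.0, 2.0]])
u_observed = np.array([420.0, 310.0, 365.0])
fx, cx = 600.0, 320.0                  # assumed camera intrinsics

def residuals(pose):
    x, y, theta = pose                 # robot pose in the map frame
    c, s = np.cos(theta), np.sin(theta)
    dx, dy = map_corners[:, 0] - x, map_corners[:, 1] - y
    X_cam = c * dx + s * dy            # depth along the optical axis
    Y_cam = -s * dx + c * dy           # lateral offset (sign convention assumed)
    return fx * (Y_cam / X_cam) + cx - u_observed   # pinhole reprojection error

sol = least_squares(residuals, x0=[0.0, 0.0, 0.0])
print("estimated pose (x, y, theta):", sol.x.round(3))
```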

Object Detection and Localization on Map using Multiple Camera and Lidar Point Cloud

  • Pansipansi, Leonardo John;Jang, Minseok;Lee, Yonsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.422-424
    • /
    • 2021
  • This paper presents an approach that fuses multiple RGB cameras, used for deep-learning-based visual object recognition with a convolutional neural network, with 3D Light Detection and Ranging (LiDAR) to observe the environment and estimate object distance and position in a point-cloud map. The goal of perception with multiple cameras is to extract the crucial static and dynamic objects around the autonomous vehicle, especially in blind spots, which helps the AV navigate toward its goal. Running object detection on numerous cameras tends to slow real-time processing, so the convolutional-neural-network algorithm chosen to mitigate this must also fit the capacity of the hardware. The classified detected objects are localized against a 3D point-cloud environment: the LiDAR point-cloud data are first parsed, and a 3D Euclidean clustering method is then applied, which localizes the objects accurately (a minimal clustering sketch follows below). We evaluated the method on our own dataset collected with a VLP-16 and multiple cameras, and the results demonstrate the method and the multi-sensor fusion strategy.
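As an illustration of the clustering step, the sketch below uses scikit-learn's DBSCAN with its default Euclidean metric as a stand-in for 3D Euclidean clustering, on synthetic points in place of parsed VLP-16 data; all thresholds are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# synthetic stand-in for parsed LiDAR returns: two compact object clusters
rng = np.random.default_rng(0)
points = np.vstack([rng.normal([5.0, 0.0, 0.0], 0.2, (100, 3)),
                    rng.normal([2.0, 3.0, 0.0], 0.2, (80, 3))])

# DBSCAN with a Euclidean metric behaves like classic Euclidean clustering:
# one label per point, -1 marks noise
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(points)

for k in sorted(set(labels) - {-1}):
    cluster = points[labels == k]
    centroid = cluster.mean(axis=0)    # object position estimate on the map
    print(f"object {k}: {len(cluster)} points, centroid {centroid.round(2)}")
```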

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.205-216
    • /
    • 2014
  • Multi-camera systems have been widely used as cost-effective tools for collecting geospatial data for various applications. To fully achieve the potential accuracy of these systems for object-space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, the multi-camera system should be calibrated frequently to confirm the stability of the estimated parameters. Automated techniques are therefore needed to facilitate and speed up the system calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the signalized object-space targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling is presented (a sketch of one possible target-extraction step follows below). The automated system calibration procedure is then implemented for a newly developed multi-camera system while considering the optimum configuration for data collection. Experimental results from the implemented procedure are presented to verify the feasibility of the proposed automated procedure. Qualitative and quantitative evaluation of the estimated system calibration parameters from two calibration sessions is also presented to confirm the stability of the cameras' interior orientation and system mounting parameters.
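The paper's own target detector is not reproduced here; as a hypothetical illustration of automated target extraction, circular signalized targets can be picked up as high-circularity blobs. The file name and thresholds below are assumptions.

```python
import cv2

# configure a blob detector to favor circular targets
params = cv2.SimpleBlobDetector_Params()
params.filterByCircularity = True
params.minCircularity = 0.8
detector = cv2.SimpleBlobDetector_create(params)

img = cv2.imread("calibration_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed file
if img is not None:
    keypoints = detector.detect(img)          # one keypoint per candidate target
    centers = [kp.pt for kp in keypoints]     # sub-pixel centres for labelling
    print(f"{len(centers)} candidate targets found")
```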

Direct Depth and Color-based Environment Modeling and Mobile Robot Navigation (스테레오 비전 센서의 깊이 및 색상 정보를 이용한 환경 모델링 기반의 이동로봇 주행기술)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.3 no.3
    • /
    • pp.194-202
    • /
    • 2008
  • This paper describes a new method for indoor environment mapping and localization with a stereo camera. For environment modeling, we directly use the depth and color information of image pixels as visual features; furthermore, only the depth and color along the horizontal centerline of the image, through which the optical axis passes, are used. The usefulness of this choice is that a matching measure between model and sensing data can easily be built on the horizontal centerline alone, because the vertical working volume between model and sensing data changes with robot motion. We can therefore build a compact and efficient map representation of the indoor environment. Based on such nodes and sensing data, we also suggest a method for estimating the mobile robot's position with a random-sampling stochastic algorithm (a minimal particle-filter sketch follows below). Basic real-world experiments show that the proposed method can serve as an effective visual navigation algorithm.
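The random-sampling localization can be read as a particle filter. Below is a minimal sketch with invented noise levels and a toy likelihood standing in for the centerline depth/color match; it outlines the idea rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
# particle = (x, y, heading), initialized uniformly over a 10 m x 10 m room
particles = rng.uniform([0.0, 0.0, -np.pi], [10.0, 10.0, np.pi], size=(N, 3))

def motion_update(p, dist, dtheta):
    """Propagate particles by an odometry increment plus sampling noise."""
    p = p.copy()
    p[:, 0] += dist * np.cos(p[:, 2])
    p[:, 1] += dist * np.sin(p[:, 2])
    p[:, 2] += dtheta
    return p + rng.normal(0.0, [0.05, 0.05, 0.02], size=p.shape)

def measurement_update(p, likelihood):
    """Weight particles by how well the predicted centerline depth/color
    profile matches the measurement, then resample by importance."""
    w = np.array([likelihood(q) for q in p])
    w /= w.sum()
    return p[rng.choice(N, size=N, p=w)]

# toy likelihood standing in for the centerline match, peaked at pose (2, 3, 0)
toy = lambda q: np.exp(-np.sum((q - np.array([2.0, 3.0, 0.0])) ** 2))
particles = measurement_update(motion_update(particles, 0.1, 0.0), toy)
print("pose estimate:", particles.mean(axis=0).round(2))
```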

Localization and Mapping System using Single Camera and PSD Sensors (단일 카메라와 PSD 센서를 이용한 로봇 위치추적 및 맵핑 시스템)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Proceedings of the KIEE Conference
    • /
    • 2008.10b
    • /
    • pp.339-340
    • /
    • 2008
  • Tracking a robot's current position is a core technology for autonomous navigation of unmanned robots: the robot's position is determined from sensor data and a map of the environment is constructed. Existing approaches find the robot's global position with range sensors such as ultrasonic or laser sensors, or use stereo vision. Localization algorithms that rely only on range sensors require little computation and low cost, but suffer large errors from sensor noise and environmental obstacles. Stereo vision systems, by contrast, can measure 3-D space accurately, but their heavy computation demands high-end hardware and the algorithms are difficult to implement. This paper therefore proposes a system that tracks the robot's current position and builds an environment map using a single camera image together with a PSD (position sensitive device) sensor, enabling autonomous movement (an illustrative range-plus-bearing fusion sketch follows below).
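One way a single camera and a PSD range sensor complement each other is that the camera supplies the bearing to a feature while the PSD supplies its range. The following hypothetical sketch combines the two to place a landmark on a 2-D map; the intrinsics and sign convention are assumptions, not the paper's design.

```python
import numpy as np

def landmark_position(robot_pose, psd_range, pixel_u, fx=600.0, cx=320.0):
    """Place a landmark on the 2-D map by combining the PSD range to a
    feature with the camera bearing to the same feature (illustrative only)."""
    x, y, theta = robot_pose
    bearing = np.arctan2(cx - pixel_u, fx)   # image column -> bearing (sign assumed)
    ang = theta + bearing
    return x + psd_range * np.cos(ang), y + psd_range * np.sin(ang)

# robot at the origin facing +x; feature 1.5 m away, slightly left of center
print(landmark_position((0.0, 0.0, 0.0), 1.5, pixel_u=300.0))
```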

Visual SLAM using Local Bundle Optimization in Unstructured Seafloor Environment (국소 집단 최적화 기법을 적용한 비정형 해저면 환경에서의 비주얼 SLAM)

  • Hong, Seonghun;Kim, Jinwhan
    • The Journal of Korea Robotics Society
    • /
    • v.9 no.4
    • /
    • pp.197-205
    • /
    • 2014
  • As computer vision algorithms continue to advance, visual information from vision sensors has been widely used for simultaneous localization and mapping (SLAM); so-called visual SLAM utilizes relative motion information between images. This research addresses a visual SLAM framework for online localization and mapping in an unstructured seabed environment, applicable to a low-cost unmanned underwater vehicle equipped with a single monocular camera as its main measurement sensor. Typically, an image motion model with a predefined dimensionality can be corrupted by errors due to violations of the model assumptions, which may degrade the visual SLAM estimation. To deal with an erroneous image motion model, this study employs a local bundle optimization (LBO) scheme when a closed loop is detected (a toy version of such a refinement is sketched below). Results comparing visual SLAM estimation with and without LBO are presented to validate the effectiveness of the proposed methodology.
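A local bundle optimization jointly refines a window of keyframe poses and the 3-D points they observe by minimizing reprojection error. The toy sketch below uses synthetic data, holds rotations at identity, and fixes the first camera; a real visual-SLAM backend would parameterize rotations and use robust costs.

```python
import numpy as np
from scipy.optimize import least_squares

fx, cx, cy = 1.0, 0.0, 0.0             # normalized pinhole intrinsics (assumed)
n_cams, n_pts = 3, 8
rng = np.random.default_rng(1)

def project(t, X):
    p = X - t                           # identity rotation, camera centre at t
    return np.array([fx * p[0] / p[2] + cx, fx * p[1] / p[2] + cy])

# synthetic ground truth: three keyframes sliding sideways, points ahead
t_true = np.zeros((n_cams, 3)); t_true[:, 0] = np.arange(n_cams) * 0.3
X_true = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], (n_pts, 3))
obs = [(i, j, project(t_true[i], X_true[j]))
       for i in range(n_cams) for j in range(n_pts)]

def residuals(params):
    # first camera stays fixed to remove the translation gauge freedom
    t = np.vstack([t_true[0], params[:(n_cams - 1) * 3].reshape(-1, 3)])
    X = params[(n_cams - 1) * 3:].reshape(-1, 3)
    return np.concatenate([project(t[i], X[j]) - uv for i, j, uv in obs])

# start from noisy estimates, as tracking would provide before loop closure
params0 = np.concatenate([
    (t_true[1:] + rng.normal(0, 0.05, (n_cams - 1, 3))).ravel(),
    (X_true + rng.normal(0, 0.05, X_true.shape)).ravel()])
sol = least_squares(residuals, params0)
print("final RMS reprojection error:", np.sqrt(np.mean(sol.fun ** 2)))
```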

Development of Localization Sensor System for Intelligent Robots (지능로봇용 위치인식 시스템 개발)

  • You, Ki-Sung;Choi, Chin-Tae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.2
    • /
    • pp.116-124
    • /
    • 2011
  • A service robot can identify its own position relative to landmarks whose locations are known in advance. The main contribution of this research is a set of ways to reduce self-localization error by referring to special landmarks made of high-gain reflective material with coded array associations. In this paper, the authors propose a set of indices to evaluate the accuracy of self-localization methods using the selective-reflection landmarks and an infrared projector; the indices are derived from the sensitivity enhancement obtained by 3-D distortion calibration of the camera (a sketch of such a distortion-correction step follows below). The accuracy of self-localizing a mobile robot with the landmarks is then evaluated based on these indices, and a rational way to reduce the computational cost of selecting the best self-localization method is given. Simulation results show high accuracy and good performance.
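As a small illustration of the distortion-calibration side of such a pipeline, detected landmark image points can be undistorted with OpenCV before being used for pose computation. The intrinsic matrix and distortion coefficients below are placeholders, not the paper's calibration.

```python
import numpy as np
import cv2

# placeholder intrinsics and distortion coefficients (assumed, not calibrated)
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

# raw landmark detections in pixels, shape (N, 1, 2) as OpenCV expects
pts = np.array([[[100.0, 120.0]], [[500.0, 400.0]]])

# undistort, then re-project to pixel coordinates (P=K keeps pixel units)
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
print(undistorted.reshape(-1, 2))
```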

Localization of a Mobile Robot Using Multiple Ceiling Lights (여러 개의 조명등을 이용한 이동 로봇의 위치 추정)

  • Han, Yeon-Ju;Park, Tae-Hyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.4
    • /
    • pp.379-384
    • /
    • 2013
  • We propose a new global positioning method for indoor mobile robots. Multiple indoor lights fixed to the ceiling are used as the landmarks of the positioning system. Ceiling images are acquired by a fisheye-lens camera mounted on the moving robot, and the positions and orientations of the lights are extracted by binarization and labeling techniques (a minimal detection sketch follows below). The boundary lines between the ceiling and the walls are also extracted to identify the order of each light. The robot position is then calculated from the extracted positions and the known positions of the lights. The proposed system increases accuracy and reduces computation time compared with other positioning methods based on natural landmarks. Experimental results are presented to show the performance of the method.
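The binarization-and-labeling step maps directly onto standard OpenCV calls. This is a minimal sketch on a synthetic ceiling image; the intensity and area thresholds are assumptions.

```python
import cv2
import numpy as np

def detect_ceiling_lights(gray, min_area=50):
    """Extract bright ceiling-light blobs by binarization and
    connected-component labeling; returns blob centroids in pixels."""
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [tuple(centroids[i]) for i in range(1, n)        # label 0 = background
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# synthetic ceiling image with two bright "lights"
img = np.zeros((480, 640), np.uint8)
cv2.circle(img, (200, 150), 15, 255, -1)
cv2.circle(img, (450, 300), 15, 255, -1)
print(detect_ceiling_lights(img))
```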