• Title/Abstract/Keyword: Camera localization


특징점 기반 확률 맵을 이용한 단일 카메라의 위치 추정방법 (Localization of a Monocular Camera using a Feature-based Probabilistic Map)

  • 김형진;이동화;오택준;명현
    • 제어로봇시스템학회논문지 / Vol. 21, No. 4 / pp. 367-371 / 2015
  • In this paper, a novel localization method for a monocular camera is proposed using a feature-based probabilistic map. Camera localization is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, a camera pose is estimated by probabilistic approaches even when features are scarce, but an extra system is required because the camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach when the image dataset is insufficient, without any extra system. In our system, features from the probabilistic map are projected onto the image plane using a linear approximation. Starting from an initial pose obtained by the PnP algorithm, the accurate pose of the monocular camera is estimated by minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image. The proposed algorithm is demonstrated through simulations in a 3D space.
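
The refinement step this abstract describes — scoring a candidate pose by the Mahalanobis distance between map features projected into the image and features extracted from the query image — can be sketched as follows. This is a minimal numpy illustration with invented camera values and names, not the paper's implementation:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D map points X (N,3) into the image under pose (R, t)."""
    Xc = X @ R.T + t                    # world -> camera frame
    x = Xc[:, :2] / Xc[:, 2:]           # perspective division
    return x @ K[:2, :2].T + K[:2, 2]   # focal lengths + principal point

def mahalanobis_cost(K, R, t, X, z, Sigma):
    """Sum of squared Mahalanobis distances between projected map
    features and measured image features z (N,2), each with a 2x2
    covariance Sigma[i] taken from the probabilistic map."""
    r = project(K, R, t, X) - z
    return sum(ri @ np.linalg.solve(S, ri) for ri, S in zip(r, Sigma))

# Toy setup: identity pose, two map points in front of the camera.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
X = np.array([[0.0, 0.0, 5.0], [0.5, 0.2, 4.0]])
R, t = np.eye(3), np.zeros(3)
z = project(K, R, t, X)                  # perfect measurements
Sigma = np.stack([np.eye(2)] * 2)
cost_perfect = mahalanobis_cost(K, R, t, X, z, Sigma)
cost_shifted = mahalanobis_cost(K, R, t + np.array([0.1, 0, 0]), X, z, Sigma)
```

A pose optimizer would start from the PnP initial pose and descend on this cost; here the shifted pose scores strictly worse than the true one.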

Landmark를 이용한 localization 문제 접근에 관한 연구 (A study on approach of localization problem using landmarks)

  • 김태우;이쾌희
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997 / pp. 44-47 / 1997
  • Building a reliable mobile robot - one that can navigate without failure for long periods of time - requires that the uncertainty resulting from control and sensing be bounded. This paper proposes a new mobile robot localization method using artificial landmarks. The proposed method uses camera calibration (extrinsic parameters only). We used a FANUC Arc Mate to estimate the posture error, and the results show that the position error is less than 1 cm and the orientation error is less than 1 degree.


실내 환경에서의 이동로봇의 위치추정을 위한 카메라 센서 네트워크 기반의 실내 위치 확인 시스템 (Indoor Positioning System Based on Camera Sensor Network for Mobile Robot Localization in Indoor Environments)

  • 지용훈
    • 제어로봇시스템학회논문지 / Vol. 22, No. 11 / pp. 952-959 / 2016
  • This paper proposes a novel indoor positioning system (IPS) that uses a calibrated camera sensor network and dense 3D map information. The proposed IPS information is obtained by generating a bird's-eye image from multiple camera images; thus, our proposed IPS can provide accurate position information when objects (e.g., the mobile robot or pedestrians) are detected from multiple camera views. We evaluate the proposed IPS in a real environment with moving objects in a wireless camera sensor network. The results demonstrate that the proposed IPS can provide accurate position information for moving objects. This can improve the localization performance for mobile robot operation.
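
As a rough illustration of how a calibrated camera in such a network can report a ground-plane (bird's-eye) position for a detected object: a floor-plane homography from the calibration maps a pixel to world coordinates. The homography values below are invented for the example; a real one comes from the camera calibration the paper describes:

```python
import numpy as np

def ground_position(H, pixel):
    """Map a pixel (u, v) from one calibrated camera to a ground-plane
    coordinate via the homography H for the floor plane (z = 0)."""
    p = H @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

# Toy homography: a pure scale + translation from pixels to metres.
H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.01, -2.4],
              [0.0,  0.0,  1.0]])
pos = ground_position(H, (320, 240))    # image centre -> origin here
pos2 = ground_position(H, (420, 340))
```

Combining such per-camera detections of the same object (e.g. by averaging) is what lets the multi-camera bird's-eye image yield a single position estimate.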

기준 평면의 설정에 의한 확장 칼만 필터 SLAM 기반 카메라 추적 방법 (EKF SLAM-based Camera Tracking Method by Establishing the Reference Planes)

  • 남보담;홍현기
    • 한국게임학회 논문지 / Vol. 12, No. 3 / pp. 87-96 / 2012
  • This paper proposes a stable camera tracking and re-localization method for an Extended Kalman Filter (EKF)-based SLAM (Simultaneous Localization And Mapping) system on an image sequence. Reference planes are established by applying Delaunay triangulation to the 3D feature points obtained by SLAM, and BRISK (Binary Robust Invariant Scalable Keypoints) descriptors are generated for the feature points lying on each plane. When error accumulation in the EKF is detected, camera information is recovered from the homography of a reference plane. In addition, when feature tracking fails due to abrupt camera motion, the camera pose is re-estimated by matching the stored robust descriptors.
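
The plane-establishment step can be illustrated generically: once SLAM map points belonging to one region (e.g. a Delaunay facet) are grouped, a reference plane is a least-squares fit to those 3D points. A standard SVD plane fit, as a sketch rather than the paper's exact procedure:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points (N,3): returns a unit
    normal n and offset d such that n.x + d = 0 on the plane."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]          # direction of least variance = plane normal
    return n, -n @ c

# Four SLAM map points lying exactly on the plane z = 1.
pts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1.0]])
n, d = fit_plane(pts)
residuals = pts @ n + d   # zero for points on the fitted plane
```

The homography that the paper derives camera information from is defined with respect to such a plane.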

옴니 카메라의 전방향 영상을 이용한 이동 로봇의 위치 인식 시스템 (Omni Camera Vision-Based Localization for Mobile Robots Navigation Using Omni-Directional Images)

  • 김종록;임미섭;임준홍
    • 제어로봇시스템학회논문지 / Vol. 17, No. 3 / pp. 206-210 / 2011
  • Vision-based robot localization is challenging due to the vast amount of visual information available, which requires extensive storage and processing time. To deal with these challenges, we propose the use of features extracted from omni-directional panoramic images and present a localization method for a mobile robot equipped with an omni-directional camera. The core of the proposed scheme may be summarized as follows: First, we utilize an omni-directional camera which can capture instantaneous 360° panoramic images around the robot. Second, nodes around the robot are extracted by the correlation coefficients of the circular horizontal line between the landmark and the currently captured image. Third, the robot position is determined by the proposed correlation-based landmark image matching. To accelerate computation, node candidates are assigned using color information, and the correlation values are calculated with Fast Fourier Transforms. Experiments show that the proposed method is effective for the global localization of mobile robots and robust to lighting variations.
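
The FFT-accelerated correlation the abstract mentions can be sketched directly: the rotation between a stored landmark's circular horizontal line and the current one is the peak of their circular cross-correlation, computed via FFT. A minimal example with a synthetic 360-sample line (not the paper's data):

```python
import numpy as np

def circular_shift_estimate(a, b):
    """Estimate the circular shift (in samples) that best aligns two
    360-degree horizontal-line signals, via FFT cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real
    return int(np.argmax(corr))

n = 360
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
line = np.sin(t) + 0.3 * np.cos(3 * t)   # stored landmark line
query = np.roll(line, 25)                # robot rotated by 25 samples
shift = circular_shift_estimate(query, line)
```

One sample corresponds to one degree here, so the peak location gives the robot's heading offset relative to the stored landmark.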

EpiLoc: Deep Camera Localization Under Epipolar Constraint

  • Xu, Luoyuan;Guan, Tao;Luo, Yawei;Wang, Yuesong;Chen, Zhuo;Liu, WenKai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 6 / pp. 2044-2059 / 2022
  • Recent works have shown that geometric constraints can be harnessed to boost the performance of CNN-based camera localization. However, existing strategies are limited to imposing image-level constraints between pose pairs, which is weak and coarse-grained. In this paper, we introduce a pixel-level epipolar geometry constraint into a vanilla localization framework without ground-truth 3D information. Dubbed EpiLoc, our method establishes the geometric relationship between pixels in different images by utilizing epipolar geometry, thus forcing the network to regress more accurate poses. We also propose a variant called EpiSingle to cope with non-sequential training images, which can construct the epipolar geometry constraint based on a single image in a self-supervised manner. Extensive experiments on the public indoor 7Scenes and outdoor RobotCar datasets show that the proposed pixel-level constraint is valuable and helps our EpiLoc achieve state-of-the-art results in the end-to-end camera localization task.
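
The pixel-level constraint rests on the classical epipolar relation x₂ᵀ E x₁ = 0 with E = [t]ₓR: a pixel pair from the same 3D point must satisfy it under the (regressed) relative pose. A toy numpy check with a synthetic pose and point, not EpiLoc's training code:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_residual(R, t, x1, x2):
    """Epipolar error x2^T E x1 with E = [t]_x R, for normalized
    homogeneous image coordinates x1, x2 of one pixel pair."""
    E = skew(t) @ R
    return float(x2 @ E @ x1)

# Toy rig: no rotation, relative translation so that Xc2 = R @ Xc1 + t.
R, t = np.eye(3), np.array([-1.0, 0.0, 0.0])
X = np.array([0.2, -0.1, 4.0])   # a 3D point in camera-1 coordinates
x1 = X / X[2]                    # normalized coordinates in camera 1
Xc2 = R @ X + t
x2 = Xc2 / Xc2[2]                # the same point seen from camera 2
res_true = epipolar_residual(R, t, x1, x2)          # ~0 for a true match
res_bad = epipolar_residual(R, t, x1, np.array([0.3, 0.2, 1.0]))
```

A loss penalizing this residual over many pixel pairs pushes the regressed poses toward geometric consistency, which is the gist of the constraint.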

센서융합을 이용한 모바일로봇 실내 위치인식 기법 (An Indoor Localization of Mobile Robot through Sensor Data Fusion)

  • 김윤구;이기동
    • 로봇학회논문지 / Vol. 4, No. 4 / pp. 312-319 / 2009
  • This paper proposes a low-complexity indoor localization method for a mobile robot in a dynamic environment that fuses landmark image information from an ordinary camera with distance information from sensor nodes in the indoor environment, based on a sensor network. Basically, the sensor network provides an effective means for the mobile robot to adapt to environmental changes and guides it across a geographical network area. To enhance localization performance, we used an ordinary CCD camera and artificial landmarks devised for self-localization. Experimental results show that real-time localization of the mobile robot can be achieved robustly and accurately using the proposed method.

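
A minimal sketch of the fusion idea: combining a camera-landmark position estimate with a sensor-network range estimate by inverse-variance (Kalman-style) weighting. The numbers are illustrative, and the paper's actual filter may differ:

```python
import numpy as np

def fuse(mu_a, var_a, mu_b, var_b):
    """Inverse-variance fusion of two independent position estimates,
    e.g. one from camera landmarks, one from sensor-network ranging."""
    w = var_b / (var_a + var_b)            # weight the lower-variance source more
    mu = w * mu_a + (1 - w) * mu_b
    var = var_a * var_b / (var_a + var_b)  # fused variance shrinks
    return mu, var

cam = (np.array([2.0, 1.0]), 0.04)   # camera estimate, variance in m^2
net = (np.array([2.2, 1.1]), 0.16)   # sensor-network estimate
mu, var = fuse(cam[0], cam[1], net[0], net[1])
```

The fused variance is always smaller than either input variance, which is why combining the two modalities improves on each alone.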

DEM과 산영상을 이용한 비전기반 카메라 위치인식 (Vision-based Camera Localization using DEM and Mountain Image)

  • 차정희
    • 한국컴퓨터정보학회논문지 / Vol. 10, No. 6 / pp. 177-186 / 2005
  • This paper proposes a vision-based camera localization method that generates 3D information by mapping a DEM (Digital Elevation Model) onto mountain images. In general, the features used for recognition change with the camera view, which increases the amount of information to be handled. In this paper, geometric invariant features independent of the camera view are extracted, accurate correspondences are computed using the proposed similarity evaluation function and a Graham search method, and the camera extrinsic parameters are then calculated. A method of generating 3D information using graphics theory and visual cues is also proposed. The proposed method consists of three steps: invariant point feature extraction, 3D information generation, and extrinsic parameter estimation. In the experiments, the superiority of the proposed method is demonstrated by comparing and analyzing it against existing methods.

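
The Graham search mentioned for correspondence selection builds on the classic Graham convex-hull scan; a self-contained version for 2D points, as general background rather than the paper's exact matching code:

```python
import math

def graham_scan(points):
    """Convex hull by Graham scan: sort points by polar angle around
    the lowest point, then keep only counter-clockwise (left) turns."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    pivot = min(pts, key=lambda p: (p[1], p[0]))
    rest = sorted((p for p in pts if p != pivot),
                  key=lambda p: (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                                 (p[0] - pivot[0]) ** 2 + (p[1] - pivot[1]) ** 2))
    hull = [pivot]
    for p in rest:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()     # drop points that would make a right turn
        hull.append(p)
    return hull

# The interior point (1, 1) is excluded from the hull of the square.
hull = graham_scan([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```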

전자 나침반과 적외선 광원 추적을 이용한 이동로봇용 위치 인식 시스템 (Localization System for Mobile Robot Using Electric Compass and Tracking IR Light Source)

  • 손창우;이승희;이민철
    • 제어로봇시스템학회논문지 / Vol. 14, No. 8 / pp. 767-773 / 2008
  • This paper presents a localization system based on an electric compass and the tracking of an IR light source. The digital RGB signal of a CMOS camera is sent to a CPLD, which converts the color image to a binary image at 30 frames per second. The CMOS camera has an IR filter and a UV filter in front of the CMOS cell; the filters cut off light above 720 nm. The binary output of the CPLD is sent to a DSP, which rapidly tracks the IR light source by driving the camera-tilt DC motor. With the robot oriented toward north, the electric compass readings and the IR light source angles are used to compute the localization data. This localization scheme is possible because the geomagnetic field is linear within a local area. Finally, it is shown that the position error of the system is within ±1.3 cm.
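
As a generic illustration of bearing-based localization (the paper's exact geometry with the compass and tilting camera may differ), absolute bearings to two known beacons, obtained by adding the compass heading to the camera angle, determine the 2D position by ray intersection:

```python
import numpy as np

def locate(b1, theta1, b2, theta2):
    """Robot position from two known beacon positions b1, b2 and the
    absolute bearings theta1, theta2 measured from the robot to them.
    Solves b1 - s*d1 = b2 - r*d2 for the common intersection point."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([d1, -d2])
    s, r = np.linalg.solve(A, b1 - b2)
    return b1 - s * d1

# Beacons east and north of the true position (1, 2): bearings 0 and 90 deg.
robot = locate(np.array([4.0, 2.0]), 0.0,
               np.array([1.0, 5.0]), np.pi / 2)
```

With only one beacon, as in the paper's setup, the compass supplies the orientation while the tracked IR angle constrains the position along a known ray.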

다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM (Omni-directional Visual-LiDAR SLAM for Multi-Camera System)

  • 지샨 자비드;김곤우
    • 로봇학회논문지 / Vol. 17, No. 3 / pp. 353-358 / 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multiple-camera setups and large field-of-view cameras are used to address these issues. However, a multiple-camera system increases the computational complexity of the algorithm. Therefore, for multi-camera-assisted visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup. Depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness in various environments.
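
Assigning LiDAR depth to panoramic image features requires projecting 3D points through the panoramic camera model. A sketch using a common equirectangular model (the paper's exact camera model may differ):

```python
import numpy as np

def panoramic_project(points, width, height):
    """Project 3D LiDAR points (N,3, camera frame) into an
    equirectangular panoramic image, returning pixel coordinates and
    ranges so depth can be assigned to nearby visual features."""
    x, y, z = points.T
    r = np.linalg.norm(points, axis=1)
    theta = np.arctan2(x, z)             # azimuth in (-pi, pi]
    phi = np.arcsin(y / r)               # elevation
    u = (theta / (2 * np.pi) + 0.5) * width
    v = (phi / np.pi + 0.5) * height
    return np.column_stack([u, v]), r

pts = np.array([[0.0, 0.0, 5.0],         # straight ahead
                [5.0, 0.0, 0.0]])        # 90 degrees to the right
uv, depth = panoramic_project(pts, 1000, 500)
```

Each projected LiDAR point lands at a panorama pixel; a tracked feature near that pixel can take the LiDAR range as its depth, removing the monocular scale ambiguity the abstract mentions.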