• Title/Abstract/Keywords: Camera pose estimation

122 search results (processing time: 0.02 s)

Camera Calibration and Pose Estimation for Tasks of a Mobile Manipulator

  • 최지훈;김해창;송재복
    • 로봇학회논문지 / Vol. 15, No. 4 / pp. 350-356 / 2020
  • Workers have been replaced by mobile manipulators for factory automation in recent years. One of the typical automation tasks is for a mobile manipulator to move to a target location and pick and place an object on the worktable. However, due to the pose estimation error of the mobile platform, the robot cannot reach the exact target position, which prevents the manipulator from accurately picking and placing the object on the worktable. In this study, we developed an automatic alignment system using a low-cost camera mounted on the end-effector of a collaborative robot. Camera calibration and pose estimation methods were also proposed for the automatic alignment system. This algorithm uses a markerboard composed of markers to calibrate the camera and then precisely estimate the camera pose. Experimental results demonstrate that the mobile manipulator can perform pick-and-place tasks successfully under various conditions.

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal / Vol. 44, No. 1 / pp. 10-23 / 2022
  • Despite the significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image settings in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the connected structures of individual images at different camera pose estimation steps, we can estimate camera poses more accurately on all structured mosaic-based image sets, including omnidirectional scenes.

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in Robotics Research / Vol. 2, No. 2 / pp. 129-140 / 2018
  • With the increasing demand for human pose estimation in applications such as human-computer interaction and human activity recognition, there have been numerous approaches to detecting the 2D poses of people in images more efficiently. Despite many years of research, however, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human body pose estimation method using an RGB camera sensor. Our method is efficient and cost-effective, since an RGB camera is far less expensive than the high-priced sensors more commonly used. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From the acquired RGB images, joint heatmaps are used to estimate the coordinates of each joint. The network architecture was designed to learn and detect the locations of joints via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose. The results reveal the potential of a simple RGB camera sensor for human pose estimation applications.

A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM

  • 이재민;김곤우
    • 로봇학회논문지 / Vol. 11, No. 1 / pp. 33-40 / 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by perspective transformation, is reconstructed by the Coupled Line Camera algorithm. In order to fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be deduced from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then recovers the relative pose between the camera and the feature by obtaining the homography. To evaluate the performance, we compared the results of the proposed method with reference poses in the Gazebo robot simulator.
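The homography step described in the abstract above can be sketched with a direct linear transform (DLT); this is a minimal illustration, not the paper's implementation, and the rectangle and quadrilateral corner coordinates below are hypothetical example values.

```python
import numpy as np

def homography(src, dst):
    # DLT estimate of H such that dst ~ H @ src, from four correspondences.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)  # null vector of A, up to scale

def apply_h(H, p):
    # Apply H to a 2D point in homogeneous coordinates.
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

# Hypothetical rectangle corners in world units and the perspective-warped
# quadrilateral observed in the image.
rect = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0)]
quad = [(100.0, 120.0), (380.0, 140.0), (360.0, 300.0), (90.0, 280.0)]

H = homography(rect, quad)

# The estimated H maps each rectangle corner onto its quadrilateral corner.
err = max(np.abs(apply_h(H, r) - np.array(q)).max() for r, q in zip(rect, quad))
print(err)  # near machine precision
```

With the rectangle's physical size known (from the stereo distance, in the paper's setting), such a homography can then be decomposed into the relative rotation and translation.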

Localization of a Monocular Camera using a Feature-based Probabilistic Map

  • 김형진;이동화;오택준;명현
    • 제어로봇시스템학회논문지 / Vol. 21, No. 4 / pp. 367-371 / 2015
  • In this paper, a novel localization method for a monocular camera is proposed using a feature-based probabilistic map. The pose of a camera is generally estimated from 3D-to-2D correspondences between a 3D map and the image plane through the PnP algorithm. In the computer vision community, an accurate 3D map for camera pose estimation is generated by optimization over a large image dataset. In the robotics community, a camera pose is estimated by probabilistic approaches even when features are scarce, but an extra system is required because the camera alone cannot estimate the full state of the robot pose. We therefore propose an accurate localization method for a monocular camera that uses a probabilistic approach in the case of an insufficient image dataset, without any extra system. In our system, features from a probabilistic map are projected into the image plane using linear approximation. By minimizing the Mahalanobis distance between the features projected from the probabilistic map and the features extracted from a query image, the accurate pose of the monocular camera is estimated from an initial pose obtained by the PnP algorithm. The proposed algorithm is demonstrated through simulations in a 3D space.
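The Mahalanobis-distance minimization can be illustrated on a toy 2D problem; the map features, covariances, and the coarse grid search below are illustrative stand-ins for the paper's projected features and optimizer, not its actual data or method.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    # Squared Mahalanobis distance between a measured feature x and a
    # projected map feature with uncertainty (mean, cov).
    d = x - mean
    return float(d @ np.linalg.solve(cov, d))

# Toy setup: 2D map features projected into the image with covariances,
# and measurements produced by a camera shifted by an unknown 2D offset.
rng = np.random.default_rng(0)
map_means = rng.uniform(0.0, 100.0, size=(20, 2))
covs = [np.eye(2) * 2.0 for _ in range(20)]
true_offset = np.array([3.0, -2.0])
measurements = map_means + true_offset

def cost(offset):
    # Total Mahalanobis distance between measurements and shifted map features.
    return sum(mahalanobis_sq(m, mu + offset, c)
               for m, mu, c in zip(measurements, map_means, covs))

# Coarse grid search over candidate offsets (a stand-in for the paper's
# optimization starting from a PnP initial pose).
grid = np.linspace(-5.0, 5.0, 21)
best = min((cost(np.array([dx, dy])), dx, dy)
           for dx in grid for dy in grid)
print(best[1], best[2])  # recovers the true offset (3.0, -2.0)
```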

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete / Vol. 33, No. 5 / pp. 535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, is essential for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation relied on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed for estimation. As a solution, this study introduces a novel framework that facilitates camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and estimates the camera's pose through the perspective-n-point (PnP) method, which recovers the extrinsic parameters by matching 3D-2D coordinate pairs. This framework streamlines the extrinsic calibration process, thereby potentially enhancing the efficiency of CV applications and data collection at construction sites, and holds promise for expediting various construction-related tasks by automating and simplifying the calibration procedure.
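The 3D-2D matching step can be sketched with a direct linear transform that recovers the full 3x4 projection matrix from correspondences; this is only a related sketch (PnP proper additionally assumes known intrinsics), and the synthetic camera parameters below are hypothetical.

```python
import numpy as np

def dlt_projection(X, x):
    # Direct linear transform: estimate the 3x4 projection matrix P from
    # n >= 6 correspondences between 3D points X and 2D image points x.
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = np.array([Xw, Yw, Zw, 1.0])
        A.append(np.concatenate([p, np.zeros(4), -u * p]))
        A.append(np.concatenate([np.zeros(4), p, -v * p]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # null vector of A, up to scale

def project(P, X):
    # Perspective projection of 3D points with a 3x4 matrix.
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]

# Synthetic ground truth: intrinsics K, identity rotation, translation t.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(10, 3))  # points in front of the camera
x = project(P_true, X)

P_est = dlt_projection(X, x)
err = np.abs(project(P_est, X) - x).max()  # reprojection error
print(err)
```

With known intrinsics, the extrinsics would instead be recovered by a dedicated PnP solver rather than this unconstrained DLT.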

Head Pose Estimation Based on Perspective Projection Using PTZ Camera

  • 김진서;이경주;김계영
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 7, No. 7 / pp. 267-274 / 2018
  • This paper describes a head pose estimation method using a PTZ camera. When the camera's extrinsic parameters change due to rotation or translation, the estimated facial pose also changes. This paper proposes a new method for estimating the head pose independently of the rotation and position changes of a PTZ camera. The proposed method consists of face detection, feature extraction, and pose estimation. Faces are detected using MCT features, facial features are extracted using a regression-tree method, and the head pose is estimated with the POSIT algorithm. The original POSIT algorithm does not consider camera rotation, so to estimate the head pose robustly against changes in the camera's extrinsic parameters, this paper improves POSIT based on perspective projection. Experiments confirmed that the proposed method improves the RMSE by about $0.6^{\circ}$ compared with the existing method.

Particle Filter Based Robust Multi-Human 3D Pose Estimation for Vehicle Safety Control

  • 박준상;박형욱
    • 자동차안전학회지 / Vol. 14, No. 3 / pp. 71-76 / 2022
  • In autonomous vehicles, 3D pose estimation can be an effective way to enhance safety control for out-of-position (OOP) passengers. There have been many studies on camera-based human pose estimation. Previous methods, however, have limitations in automotive applications: CNN methods are unreliable due to unexplainable failures, and other methods perform poorly. This paper proposes a robust, real-time, multi-human 3D pose estimation architecture for use in a vehicle with a monocular RGB camera. Using a particle filter, our approach integrates CNN 2D/3D pose measurements with information available in the vehicle. Computer simulations were performed to confirm the accuracy and robustness of the proposed algorithm.
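The particle-filter fusion can be illustrated on a one-dimensional toy state; the noise levels and the single scalar "joint coordinate" below are hypothetical stand-ins for the paper's full multi-human 3D state and CNN measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D toy state: one joint coordinate tracked over time.
true_x = 0.5
n = 2000
particles = rng.uniform(-2.0, 2.0, n)  # initial belief
meas_std = 0.1                          # assumed measurement noise

for _ in range(30):
    # Noisy CNN-style measurement of the joint position.
    z = true_x + rng.normal(0.0, meas_std)
    # Weight update: Gaussian measurement likelihood.
    weights = np.exp(-0.5 * ((particles - z) / meas_std) ** 2)
    weights /= weights.sum()
    # Resampling plus small diffusion (process noise).
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.02, n)

estimate = particles.mean()
print(estimate)  # close to 0.5
```

In the paper's setting, the same update would fuse 2D/3D pose measurements with other in-vehicle information instead of a single noisy scalar.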

Lane Detection-based Camera Pose Estimation

  • 정호기;서재규
    • 한국자동차공학회논문집 / Vol. 23, No. 5 / pp. 463-470 / 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that such patterns are costly to make and install. Approaches exploiting multiple vanishing points detected in a single image are also unsuitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are hard to find in everyday environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while changing the horizontal angle with respect to the markings. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and the vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the fact that planar motion does not change the vanishing line of the plane and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates. This is verified by checking that the lane markings are upright in the bird's-eye-view image once the pan angle is compensated.
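The per-image vanishing-point computation can be sketched with homogeneous coordinates; the lane-marking endpoints and intrinsics below are made-up example values, and the closing tilt formula assumes zero roll and pan (a simplification of the paper's vanishing-line-based estimate).

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points.
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    # Intersection of two homogeneous lines, back in pixel coordinates.
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Hypothetical intrinsics: focal length f and principal point (cx, cy).
f, cx, cy = 800.0, 320.0, 240.0

# Example endpoints of the left and right lane markings in one image.
vp = intersect(line_through((100.0, 400.0), (280.0, 260.0)),
               line_through((540.0, 400.0), (360.0, 260.0)))

# Assuming zero roll and pan, the tilt angle follows from the vanishing
# point's vertical offset from the principal point: tan(tilt) = (cy - vy) / f.
tilt = np.degrees(np.arctan2(cy - vp[1], f))
print(vp, tilt)
```

Repeating this over images taken at different horizontal angles yields multiple vanishing points, through which the vanishing line is fitted as in the paper.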

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion

  • 박종승;이범종
    • 정보처리학회논문지B / Vol. 13B, No. 5 / pp. 499-506 / 2006
  • This paper proposes a fast and reliable camera pose estimation method for virtual object insertion in real-time augmented reality systems. The camera's rotation matrix and translation vector are estimated by extracting the feature points of a marker in a single frame. A factorization technique under the orthographic projection model is used for camera pose estimation. Since this factorization assumes that all feature points of the object share the same depth, the accuracy of the computed camera pose depends on the choice of the reference point that anchors the depth coordinate and on the distribution of the points. This paper proposes a flexible reference-point selection method and an outlier removal method that work well in typical real environments. Based on the proposed camera pose estimation method, a video augmentation system was implemented that inserts virtual objects at detected marker positions. Experiments on various videos in real environments show that the proposed technique is as fast as existing pose estimation techniques, more stable, and applicable to a variety of augmented reality applications.
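The equal-depth assumption behind the orthographic factorization can be illustrated numerically; the focal length, reference depth, and feature points below are hypothetical, and the sketch only shows why reference-point choice matters, not the paper's factorization itself.

```python
import numpy as np

# Weak-perspective (scaled orthographic) projection assumes all points share
# the depth Z0 of a reference point; the approximation degrades as the depth
# spread grows, which is why reference-point selection affects accuracy.
f = 800.0    # assumed focal length in pixels
Z0 = 10.0    # assumed reference depth
points = np.array([[0.5, 0.2, Z0 + 0.1],
                   [-0.3, 0.4, Z0 - 0.2],
                   [0.1, -0.5, Z0 + 0.3]])

persp = f * points[:, :2] / points[:, 2:3]  # true perspective projection
weak = (f / Z0) * points[:, :2]             # weak-perspective approximation

err = np.abs(persp - weak).max()
print(err)  # small, because the depth variation is much less than Z0
```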