• Title/Summary/Keyword: Camera pose

A Study on the Improvement of Pose Information of Objects by Using Trinocular Vision System (Trinocular Vision System을 이용한 물체 자세정보 인식 향상방안)

  • Kim, Jong Hyeong;Jang, Kyoungjae;Kwon, Hyuk-dong
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.26 no.2, pp.223-229, 2017
  • Recently, robotic bin-picking tasks have drawn considerable attention because flexibility is required in robotic assembly tasks. Stereo camera systems have been widely used for robotic bin-picking, but they have two limitations: first, the computational burden of solving the correspondence problem on stereo images increases calculation time; second, errors in image processing and camera calibration reduce accuracy. Moreover, errors in the robot kinematic parameters directly affect robot gripping. In this paper, we propose a method of correcting the bin-picking error by using a trinocular vision system, which consists of two stereo cameras and one hand-eye camera. First, the two stereo cameras, with a wide viewing angle, measure the object's pose roughly. Then, the third hand-eye camera approaches the object and corrects the previous measurement of the stereo camera system. Experimental results show the usefulness of the proposed method.

The Object 3D Pose Recognition Using Stereo Camera (스테레오 카메라를 이용한 물체의 3D 포즈 인식)

  • Yoo, Sung-Hoon;Kang, Hyo-Seok;Cho, Young-Wan;Kim, Eun-Tai;Park, Mig-Non
    • Proceedings of the IEEK Conference, 2008.06a, pp.1123-1124, 2008
  • In this paper, we develop a program that recognizes the 3D pose of an object using a stereo camera. To detect the object, we apply the Canny edge detection algorithm; we then use the stereo camera to obtain 3D points on the object and apply the iterative closest point (ICP) algorithm to recognize the object's pose.
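The ICP step named in this abstract can be sketched in a few lines. This is a generic point-to-point ICP in NumPy with brute-force matching, a minimal sketch rather than the authors' implementation:

```python
import numpy as np

def best_fit_transform(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B (Kabsch/SVD)."""
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    H = (A - cA).T @ (B - cB)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cB - R @ cA
    return R, t

def icp(src, dst, iters=20):
    """Iterative closest point: align src (Nx3) onto dst (Mx3)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small point sets)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In practice the matching step would use a k-d tree, and the source points would come from the stereo reconstruction rather than a synthetic set.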

The Position/Orientation Determination of a Mobile-Task Robot Using an Active Calibration Scheme

  • Jin, Tae-Seok;Lee, Jang-Myung
    • Journal of Mechanical Science and Technology, v.17 no.10, pp.1431-1442, 2003
  • A new method of estimating the pose of a mobile-task robot is developed based upon an active calibration scheme. A mobile-task robot, formed by the serial connection of a mobile robot and a task robot, is widely recognized as useful. To be efficient and precise, the control uncertainties in the mobile robot must be resolved: unless the mobile robot provides an accurate and stable base, the task robot cannot perform its various tasks. Controlling the mobile robot requires an absolute position sensor; however, because of rolling and slippage of the wheels on the ground, no reliable position sensor exists for the mobile robot. This paper proposes an active calibration scheme to estimate the pose of a mobile robot that carries a task robot on top. The scheme estimates the pose of the mobile robot from its relative position/orientation to an object whose location, size, and shape are known a priori. For this calibration, a camera attached to the top of the task robot captures images of the objects, which are used to estimate the pose of the camera itself with respect to the known objects. Through homogeneous transformation, the absolute position/orientation of the camera is calculated and propagated to obtain the pose of the mobile robot. Two types of objects are used as sample work-pieces: a polygonal object and a cylindrical object. With these two samples, the proposed active calibration scheme is verified experimentally.
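The homogeneous-transformation propagation this abstract describes can be illustrated with a toy chain of frames. The frame names and numeric values below are hypothetical, not taken from the paper:

```python
import numpy as np

def hT(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical frames: the object pose in the world is known a priori; the
# object pose in the camera frame is estimated from the image; the camera
# pose in the robot frame comes from the arm kinematics.
T_world_object = hT(np.eye(3), [2.0, 1.0, 0.0])
T_cam_object   = hT(np.eye(3), [0.0, 0.0, 1.5])   # object seen 1.5 m ahead
T_robot_cam    = hT(np.eye(3), [0.0, 0.0, 0.4])   # camera 0.4 m above base

# Propagate: robot base in the world frame
T_world_robot = T_world_object @ np.linalg.inv(T_cam_object) @ np.linalg.inv(T_robot_cam)
```

With identity rotations, the translations simply accumulate; with real data each factor carries a full rotation as well.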

Accurate Pose Measurement of Label-attached Small Objects Using a 3D Vision Technique (3차원 비전 기술을 이용한 라벨부착 소형 물체의 정밀 자세 측정)

  • Kim, Eung-su;Kim, Kye-Kyung;Wijenayake, Udaya;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems, v.22 no.10, pp.839-846, 2016
  • Bin picking is the task of picking a small object from a bin. Accurate bin picking requires the 3D pose information (position and orientation) of a small object, because the object is mixed with other objects of the same type in the bin. Using this 3D pose information, a robotic gripper can pick an object using exact distance and orientation measurements. In this paper, we propose a 3D vision technique for accurate measurement of the 3D position and orientation of small objects to whose surface a paper label is stuck. We use the maximally stable extremal regions (MSER) algorithm to detect the label areas in the left bin image acquired from a stereo camera. In each label area, image features are detected and their correspondence with the right image is determined by a stereo vision technique. Then, the 3D position and orientation of the objects are measured accurately using a transformation from the camera coordinate system to a new label coordinate system. For stable measurement during a bin-picking task, the pose information is filtered by averaging at fixed time intervals. Our experimental results indicate that the proposed technique yields pose accuracy of 0.4~0.5 mm in position and 0.2~0.6° in angle.
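The stereo back-projection and label-frame construction described above can be sketched under the standard rectified-stereo model; all parameter values below are hypothetical:

```python
import numpy as np

def stereo_to_3d(uL, vL, uR, f, B, cx, cy):
    """Back-project a matched pixel pair into camera coordinates.
    Rectified-stereo model: Z = f * B / disparity."""
    d = uL - uR                      # disparity in pixels
    Z = f * B / d
    X = (uL - cx) * Z / f
    Y = (vL - cy) * Z / f
    return np.array([X, Y, Z])

def label_frame(p0, p1, p2):
    """Right-handed frame on a label from three 3D corner points:
    origin at p0, x-axis along p0->p1, z-axis = plane normal."""
    x = p1 - p0
    x = x / np.linalg.norm(x)
    n = np.cross(x, p2 - p0)
    n = n / np.linalg.norm(n)
    y = np.cross(n, x)
    return np.column_stack([x, y, n]), p0   # rotation, translation
```

In the paper's pipeline the matched features come from MSER label regions; here the inputs are just assumed pixel coordinates.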

Vision-based Small UAV Indoor Flight Test Environment Using Multi-Camera (멀티카메라를 이용한 영상정보 기반의 소형무인기 실내비행시험환경 연구)

  • Won, Dae-Yeon;Oh, Hyon-Dong;Huh, Sung-Sik;Park, Bong-Gyun;Ahn, Jong-Sun;Shim, Hyun-Chul;Tahk, Min-Jea
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.37 no.12, pp.1209-1216, 2009
  • This paper presents pose estimation of a small UAV using visual information from low-cost cameras installed indoors. To overcome the limitations of outdoor flight experiments, an indoor flight test environment based on a multi-camera system is proposed. Computer vision algorithms for the proposed system include camera calibration, color marker detection, and pose estimation. The well-known extended Kalman filter is used to obtain accurate position and pose estimates for the small UAV. The paper finishes with several experimental results illustrating the performance and properties of the proposed vision-based indoor flight test environment.
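The Kalman-filtering step can be sketched with a simplified linear constant-velocity filter; the paper uses an extended Kalman filter with a full camera measurement model, so this 1-D toy only illustrates the predict/update cycle, and all numbers are assumptions:

```python
import numpy as np

# Simplified (linear) Kalman filter for a constant-velocity marker track.
# State x = [position, velocity]; the vision system measures position only.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # camera measures position only
Q = 1e-4 * np.eye(2)                    # process noise (assumed)
R = np.array([[1e-2]])                  # measurement noise (assumed)

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

An EKF replaces F and H with Jacobians of nonlinear motion and camera-projection models, but the predict/update structure is the same.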

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal, v.37 no.4, pp.766-771, 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion, due to the motion ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
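The exo-camera triangulation mentioned above is commonly done with linear (DLT) triangulation from two calibrated views; a minimal sketch, not tied to the paper's specific camera setup:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized pixel coords (u, v)."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy detections the SVD solution minimizes an algebraic (not geometric) error, which is usually refined afterwards; for the exact case it recovers the point precisely.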

Flexible camera series network for deformation measurement of large scale structures

  • Yu, Qifeng;Guan, Banglei;Shang, Yang;Liu, Xiaolin;Li, Zhang
    • Smart Structures and Systems, v.24 no.5, pp.587-595, 2019
  • Deformation measurement of large-scale structures, such as the ground beds of high-rise buildings, tunnels, bridges, and railways, is important for ensuring service quality and safety. The pose-relay videometrics method and the displacement-relay videometrics method have already been presented to measure the pose of non-intervisible objects and the vertical subsidence of unstable areas, respectively. Both methods combine cameras and cooperative markers to form camera series networks. Based on these two networks, we propose two novel videometrics methods with a closed-loop camera series network for deformation measurement of large-scale structures. The closed-loop camera series network offers "closed-loop constraints" for the camera series network: the deformation of the reference points observed by different measurement stations must be identical. These closed-loop constraints improve the measurement accuracy of the camera series network. Furthermore, multiple closed loops and flexible combinations of camera series networks are introduced to facilitate more complex deformation measurement tasks. Simulation results show that the closed-loop constraints can effectively enhance the measurement accuracy of a camera series network.
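The idea of a closed-loop constraint can be illustrated with a toy one-dimensional adjustment: relative measurements around a loop must sum to zero, and the misclosure is distributed by least squares. The numbers below are hypothetical, not from the paper:

```python
import numpy as np

# Three stations measure relative vertical subsidence d01, d12, d20 around a
# loop; the loop must close (d01 + d12 + d20 = 0). Hypothetical noisy values
# whose true values are 1.0, -0.5, -0.5:
d = np.array([1.03, -0.49, -0.51])

# Enforce the closed-loop constraint by distributing the misclosure equally
# (least-squares adjustment with equal weights).
misclosure = d.sum()
d_adj = d - misclosure / len(d)
```

The same principle, applied to camera poses instead of scalar heights, is what tightens the estimates in the closed-loop camera series network.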

Real-time Monocular Camera Pose Estimation using a Particle Filter Integrated with UKF (UKF와 연동된 입자필터를 이용한 실시간 단안시 카메라 추적 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.16 no.5, pp.315-324, 2023
  • In this paper, we propose a real-time pose estimation method for a monocular camera using a particle filter integrated with an unscented Kalman filter (UKF). While conventional camera tracking techniques combine camera images with data from additional devices such as gyroscopes and accelerometers, the proposed method uses only two-dimensional visual information from the camera, without additional sensors. This leads to a significant simplification of the hardware configuration. The approach is based on a particle filter integrated with a UKF: the pose of the camera is estimated by a UKF defined individually for each particle, and statistics over all particles yield the real-time camera pose. The proposed method demonstrates robust tracking, even under rapid camera shake and severe scene occlusion; experiments show that it remains robust even when most of the feature points in the image are obscured. In addition, we verify that with 35 particles the processing time per frame is approximately 25 ms, confirming real-time capability.
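The outer loop of such a method is a bootstrap particle filter; the paper additionally runs a UKF inside every particle, which is omitted in this 1-D toy sketch with assumed noise parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(zs, n=500, q=0.05, r=0.1):
    """Bootstrap particle filter for a 1-D camera coordinate (toy stand-in:
    the paper's method maintains a full UKF inside each particle)."""
    particles = rng.normal(0.0, 1.0, n)      # initial belief
    estimates = []
    for z in zs:
        particles += rng.normal(0.0, q, n)               # propagate (random walk)
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)    # measurement likelihood
        w /= w.sum()
        estimates.append(np.sum(w * particles))          # weighted posterior mean
        idx = rng.choice(n, n, p=w)                      # multinomial resampling
        particles = particles[idx]
    return estimates
```

Replacing the random-walk propagation with a per-particle UKF prediction, and the scalar state with a 6-DoF camera pose, gives the structure the abstract describes.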

Efficient Circular Object Pose Determination

  • Kim, Sungbok;Kim, Byungho
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2000.10a, pp.276-276, 2000
  • This paper presents efficient algorithms for determining the pose of a circular object with and without a priori knowledge of the object's radius. The algorithms, valid for a circular object, result from elaborating Ma's work [2], which determines the pose of a conic object from two perspective views. First, the geometric constraint between a circular object and its projection on the image plane of a camera is described, and the number of perspective views required for pose determination with and without a priori knowledge of the object radius is discussed. Second, with a priori knowledge of the object radius, the pose of a circular object is determined from a single perspective view; the pose information, expressed by two surface normal vectors and one position vector, is given in closed form and with no ambiguity. Third, without a priori knowledge of the object radius, the pose is determined from two perspective views: the surface normal vectors are obtained from the first view, and the position vector from the two views.
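For intuition, the fronto-parallel special case of single-view circle pose with known radius reduces to similar triangles; the general algorithm handles oblique views via conic geometry, and the values below are hypothetical:

```python
# Fronto-parallel special case: a circle of known radius R (metres) images as
# a circle of radius r (pixels) under focal length f (pixels); the depth of
# its centre is then Z = f * R / r by similar triangles.
f, R, r = 800.0, 0.05, 20.0   # hypothetical focal length, radius, image radius
Z = f * R / r
```

For a tilted circle the image is an ellipse, and recovering the surface normal requires the conic-based derivation the paper builds on.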

2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang;Hua, Guoguang;Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.4, pp.2042-2059, 2019
  • Digital shadow puppetry has traditionally relied on expensive motion-capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose estimation data with a simple camera in real scenarios and uses them to drive a virtual Chinese shadow play in a 2.5D scene. We propose a special method, called 2.5D human pose estimation, for extracting human pose data to drive the virtual Chinese shadow play. First, we use a 3D human pose estimation method to obtain the initial data. In the following transformation, we treat the depth feature as an implicit feature and map the body joints to a constrained range; we call the resulting pose data 2.5D pose data. However, the 2.5D pose data cannot directly control the shadow puppet well, due to the differences in motion pattern and composition structure between real poses and shadow puppets. To this end, the 2.5D pose data are transformed in an implicit pose mapping space based on a self-network, and the final 2.5D pose expression data are produced for animating the shadow puppets. Experimental results have demonstrated the effectiveness of our new method.
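The depth-band constraint that turns 3D joints into "2.5D" data can be sketched as a simple normalization; the paper's actual mapping is learned with a network, so this is only an illustrative assumption:

```python
import numpy as np

def to_2p5d(joints3d, z_lo=-0.1, z_hi=0.1):
    """Map 3D joints (Nx3) to a '2.5D' layered representation: keep x and y,
    squash depth z into a narrow band [z_lo, z_hi] so it acts only as an
    implicit layer-ordering feature (band limits are hypothetical)."""
    x, y, z = joints3d.T
    z_n = (z - z.min()) / (np.ptp(z) + 1e-9)          # normalize depth to [0, 1]
    return np.column_stack([x, y, z_lo + z_n * (z_hi - z_lo)])
```

This preserves the front/behind ordering of limbs, which is what a flat shadow-puppet rig needs, while discarding absolute depth.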