• Title/Summary/Keyword: camera pose

Search Results: 270

An Efficient Camera Calibration Method for Head Pose Tracking (머리의 자세를 추적하기 위한 효율적인 카메라 보정 방법에 관한 연구)

  • Park, Gyeong-Su;Im, Chang-Ju;Lee, Gyeong-Tae
    • Journal of the Ergonomics Society of Korea / v.19 no.1 / pp.77-90 / 2000
  • The aim of this study is to develop and evaluate an efficient camera calibration method for vision-based head tracking. Tracking head movements is important in the design of an eye-controlled human/computer interface, and a vision-based head tracking system was proposed to accommodate the user's head movements in such an interface. We propose an efficient camera calibration method to accurately track the 3D position and orientation of the user's head, and evaluate its performance. The experimental error analysis showed that the proposed method provides a more accurate and stable camera pose (i.e. position and orientation) than the conventional direct linear transformation (DLT) method commonly used for camera calibration. The results of this study can be applied to head tracking for eye-controlled human/computer interfaces and virtual reality technology.

  • PDF
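The direct linear transformation baseline that this study benchmarks against can be sketched in a few lines of numpy: each 3D-2D correspondence contributes two homogeneous equations, and the 3x4 projection matrix is the null vector of the stacked system (a minimal, noise-free sketch of textbook DLT, not the paper's proposed calibration method; function names are illustrative):

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate the 3x4 projection matrix from six or more 3D-2D point
    correspondences by the direct linear transformation: stack two
    homogeneous equations per point and take the null vector via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, world_pts):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]
```

With six or more non-coplanar points and no noise, the recovered matrix reproduces the input projections exactly; the paper's contribution is a calibration that remains more stable than this baseline under real measurement error.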

Lane Detection-based Camera Pose Estimation (차선검출 기반 카메라 포즈 추정)

  • Jung, Ho Gi;Suhr, Jae Kyu
    • Transactions of the Korean Society of Automotive Engineers / v.23 no.5 / pp.463-470 / 2015
  • When a camera installed on a vehicle is used, estimating the camera pose, including the tilt, roll, and pan angles with respect to the world coordinate system, is important for associating camera coordinates with world coordinates. Previous approaches using huge calibration patterns have the disadvantage that the patterns are costly to make and install, while approaches exploiting multiple vanishing points detected in a single image are not suitable for automotive applications, since scenes in which a front camera can capture multiple vanishing points are rare in everyday driving environments. This paper proposes a camera pose estimation method that collects multiple images of lane markings while changing the horizontal angle with respect to the markings. One vanishing point, the intersection of the left and right lane markings, is detected in each image, and the vanishing line is estimated from the detected vanishing points. Finally, the camera pose is estimated from the vanishing line. The proposed method is based on the facts that planar motion does not change the vanishing line of the plane and that the normal vector of the plane can be estimated from the vanishing line. Experiments with both large and small tilt and roll angles show that the proposed method produces accurate estimates in each case. This is verified by checking that the lane markings are upright in the bird's eye view image once the pan angle is compensated.
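The projective relations the method relies on — lane lines intersecting at a vanishing point, a vanishing line fitted through several such points, and the ground-plane normal recovered as n ∝ K^T l — can be sketched as follows (a numpy illustration of the standard geometry, assuming homogeneous line and point representations; not the paper's full pipeline):

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersection of two image lines in homogeneous form (a, b, c)."""
    vp = np.cross(line_a, line_b)
    return vp / vp[2]

def vanishing_line(vps):
    """Least-squares line through a set of vanishing points: the null
    vector of the stacked homogeneous points."""
    _, _, Vt = np.linalg.svd(np.asarray(vps, dtype=float))
    return Vt[-1]

def plane_normal(K, l):
    """Normal of the world plane inducing vanishing line l: n ~ K^T l."""
    n = K.T @ l
    return n / np.linalg.norm(n)
```

A homogeneous line through two image points is just their cross product, so each pair of detected lane markings yields one vanishing point; the normal then gives the tilt and roll of the camera relative to the road plane.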

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung;Lee, Bum-Jong
    • The KIPS Transactions: Part B / v.13B no.5 s.108 / pp.499-506 / 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature tracking results of a marker in a single frame, we estimate the camera rotation matrix and translation vector. For camera pose estimation, we use the shape factorization method based on the scaled orthographic projection model. In scaled orthographic factorization, all feature points of an object are assumed to lie at roughly the same distance from the camera, so the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable selection method for the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results showed that the proposed camera pose estimation method is fast and robust relative to previous methods and is applicable to various augmented reality applications.
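The scaled orthographic model underlying the factorization step can be illustrated with a minimal numpy sketch: after centering, the image points are approximately a scale factor times the first two rows of the rotation applied to the world points, so a least-squares fit recovers both scale and rotation (an illustration of the projection model only, not the paper's reference-point selection method):

```python
import numpy as np

def scaled_orthographic_pose(world_pts, image_pts):
    """Recover scale and rotation under the scaled orthographic model:
    centered image points ~ s * R[:2] @ centered world points."""
    X = np.asarray(world_pts, dtype=float)
    x = np.asarray(image_pts, dtype=float)
    Xc = X - X.mean(axis=0)
    xc = x - x.mean(axis=0)
    # Least-squares 2x3 projection M with xc ~ Xc @ M.T
    M, *_ = np.linalg.lstsq(Xc, xc, rcond=None)
    M = M.T
    s = 0.5 * (np.linalg.norm(M[0]) + np.linalg.norm(M[1]))
    r1 = M[0] / np.linalg.norm(M[0])
    r2 = M[1] / np.linalg.norm(M[1])
    R = np.vstack([r1, r2, np.cross(r1, r2)])  # complete the rotation
    return s, R
```

The equal-distance assumption the abstract mentions is what lets depth collapse into the single scale s; when it is violated, the choice of reference point starts to matter, which is the problem the paper addresses.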

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete / v.33 no.5 / pp.535-544 / 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the camera's position and rotation. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditionally, camera pose estimation has relied on target objects such as markers or patterns. However, such marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution to this challenge, this study introduces a novel framework that performs camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D image-plane coordinates through keypoint detection, and derives the camera's pose through the perspective-n-point (PnP) method, which obtains the extrinsic parameters by matching 3D-2D coordinate pairs. This framework represents a substantial advancement, as it streamlines the extrinsic calibration process and can thereby enhance the efficiency of CV applications and data collection on construction sites. The approach holds promise for expediting and optimizing various construction-related tasks by automating and simplifying the calibration procedure.
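The PnP step the framework relies on — matching 3D material coordinates to detected 2D keypoints and solving for the extrinsics — can be illustrated with a plain numpy stand-in that estimates the projection matrix by DLT and then factors out R and t given the intrinsics K (a sketch under noise-free assumptions; a production pipeline would typically use a robust PnP solver):

```python
import numpy as np

def pnp_from_dlt(world_pts, image_pts, K):
    """Recover camera rotation R and translation t from 3D-2D pairs given
    the intrinsic matrix K: estimate the projection matrix by DLT, then
    factor out the extrinsics (plain-numpy stand-in for a PnP solver)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    P = Vt[-1].reshape(3, 4)              # projection matrix, up to scale
    M = np.linalg.inv(K) @ P              # ~ s * [R | t]
    s = np.cbrt(np.linalg.det(M[:, :3]))  # recover scale (and its sign)
    M /= s
    U, _, Vt2 = np.linalg.svd(M[:, :3])
    R = U @ Vt2                           # project onto the nearest rotation
    return R, M[:, 3]
```

Here the known dimensions of a standardized material (e.g. a concrete form) supply the `world_pts`, and the keypoint detector supplies the matching `image_pts`.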

A Study on the Gesture Matching Method for the Development of Gesture Contents (체감형 콘텐츠 개발을 위한 연속동작 매칭 방법에 관한 연구)

  • Lee, HyoungGu
    • Journal of Korea Game Society / v.13 no.6 / pp.75-84 / 2013
  • This paper introduces a method for recording and matching poses and gestures on the Windows PC platform. The method uses the Xtion gesture detection camera for Windows PCs. To develop the method, we first built an API that processes and compares the depth data, RGB image data, and skeleton data obtained from the camera. We developed a pose matching method that selectively compares only valid joints, and, for gesture matching, a recognition method that can reject incorrect poses occurring between the poses of a gesture. We also developed a tool that records and tests sample data to extract the specified poses and gestures. Six different poses and gestures were captured and tested: poses were recognized with 100% accuracy and gestures with 99%, validating the proposed method.
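The selective joint comparison described above can be sketched as follows (an illustrative numpy version; the joint layout, the translation normalization, and the tolerance are assumptions, not the paper's exact matching rule):

```python
import numpy as np

def pose_match(ref_joints, cur_joints, valid, tol=0.1):
    """Match two skeletons by comparing only the joints flagged as valid,
    after removing the overall body translation (illustrative sketch;
    tolerance and joint layout are assumptions)."""
    mask = np.asarray(valid, dtype=bool)
    ref = np.asarray(ref_joints, dtype=float)[mask]
    cur = np.asarray(cur_joints, dtype=float)[mask]
    ref = ref - ref.mean(axis=0)   # compare relative pose, not position
    cur = cur - cur.mean(axis=0)
    return bool(np.all(np.linalg.norm(ref - cur, axis=1) <= tol))
```

Masking out joints the sensor could not track reliably is what makes the comparison robust to partial occlusion.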

Pose-invariant Face Recognition using a Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • 노진우;홍정화;고한석
    • Journal of KIISE: Software and Applications / v.31 no.7 / pp.929-938 / 2004
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera, and considers two cases: a single input image and a stereo input image. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we normalize the face's pitch pose using the cylindrical model with the pitch angle previously estimated from the stereo geometry. Since two images are acquired at the same time, overall recognition performance can also be increased by decision-level fusion. In representative experiments, the yaw pose transform increased the recognition rate from 61.43% to 94.76%, and the proposed method performs as well as a more complicated 3D face model. Using the stereo camera system further increased the recognition rate by 5.24% for upper face poses, and by 3.34% through decision-level fusion.
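The yaw normalization step can be pictured as resampling image columns on a cylindrical surface: each output column maps to an angle on the cylinder, the estimated yaw is added, and the corresponding input column is fetched (a nearest-neighbor numpy sketch of the idea, not the paper's exact warping):

```python
import numpy as np

def normalize_yaw(image, yaw, radius):
    """Resample image columns on a cylinder of the given radius so that a
    face rotated by `yaw` (radians) is mapped back toward a frontal view
    (nearest-neighbor sketch; a deliberate simplification of the model)."""
    h, w = image.shape[:2]
    cx = (w - 1) / 2.0
    out = np.zeros_like(image)
    for u in range(w):
        s = (u - cx) / radius
        if abs(s) > 1.0:
            continue                      # column falls off the cylinder
        theta = np.arcsin(s) + yaw        # cylinder angle before rotation
        if abs(theta) <= np.pi / 2:
            src = int(round(cx + radius * np.sin(theta)))
            if 0 <= src < w:
                out[:, u] = image[:, src]
    return out
```

With yaw set to zero the resampling is the identity, which is a convenient sanity check on the column mapping.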

Robot Posture Estimation Using Circular Image of Inner-Pipe (원형관로 영상을 이용한 관로주행 로봇의 자세 추정)

  • Yoon, Ji-Sup;Kang, E-Sok
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.6 / pp.258-266 / 2002
  • This paper proposes an image processing algorithm that estimates the pose of an inner-pipe crawling robot. Such a robot is usually equipped with a lighting device and a camera on its head for monitoring and inspecting defects on the pipe wall and/or for maintenance operations. The proposed methodology uses these existing devices without introducing extra sensors, and is based on the fact that the position and intensity of the light reflected from the inner wall of the pipe vary with the posture of the robot and its camera. The proposed algorithm is divided into two parts: estimating the translation and rotation angle of the camera, followed by the actual pose estimation of the robot. The camera rotation angle can be estimated from the fact that the vanishing point of the reflected light moves in the direction opposite to the camera rotation, and the camera position can be obtained from the fact that the brightest part of the reflected light moves in the same direction as the camera translation. To investigate its performance, the algorithm is applied to a sewage maintenance robot.
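The rotation-from-vanishing-point relation can be sketched for the pan axis of a pinhole camera: a pan of α shifts the vanishing point of the pipe axis by -f·tan(α) pixels, so the angle can be read back from the displacement (the pinhole model and the sign convention are assumptions, not the paper's derivation):

```python
import numpy as np

def pan_from_vanishing_point(u_vp, cx, f):
    """Camera pan angle from the horizontal shift of the pipe-axis
    vanishing point: the point moves opposite to the rotation, by
    f*tan(pan) pixels under a pinhole model (sign convention assumed)."""
    return -np.arctan((u_vp - cx) / f)
```

The same displacement-reading idea applies to the brightest-spot translation cue, with a linear rather than tangent relation.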

Hand Raising Pose Detection in the Images of a Single Camera for Mobile Robot (주행 로봇을 위한 단일 카메라 영상에서 손든 자세 검출 알고리즘)

  • Kwon, Gi-Il
    • The Journal of Korea Robotics Society / v.10 no.4 / pp.223-229 / 2015
  • This paper proposes a novel method for detecting hand raising poses in images acquired from a single camera attached to a mobile robot navigating unknown dynamic environments. Due to unconstrained illumination, high variance in human appearance, and unpredictable backgrounds, detecting hand raising gestures in such images is very challenging. The proposed method first detects faces to determine the region of interest (ROI), and within this ROI detects hands using a HOG-based hand detector. Each candidate in the detected hand region is then evaluated using the color distribution of the face region. To deal with failures in face detection, we also use a HOG-based hand raising pose detector. Unlike other hand raising pose detection systems, we evaluate our algorithm both on images acquired from the camera and on images obtained from the Internet, which contain unknown backgrounds, unconstrained illumination, and highly varied hand raising poses. Our experimental results show that the proposed method robustly detects hand raising poses against complex backgrounds and under unknown lighting conditions.
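The candidate evaluation step — scoring a hand region by how well its color distribution matches the detected face region's — can be sketched with a histogram intersection (a numpy illustration; the color space, bin count, and scoring rule are assumptions, not the paper's exact measure):

```python
import numpy as np

def color_score(face_patch, hand_patch, bins=8):
    """Histogram-intersection score between the color distributions of a
    hand candidate and the detected face region (illustrative; the bin
    count and RGB color space are assumptions)."""
    def hist(patch):
        h, _ = np.histogramdd(patch.reshape(-1, 3), bins=bins,
                              range=[(0, 256)] * 3)
        return h.ravel() / h.sum()
    # Intersection is 1.0 for identical distributions, 0.0 for disjoint ones.
    return float(np.minimum(hist(face_patch), hist(hand_patch)).sum())
```

Because the reference distribution comes from the same person's face in the same frame, the score adapts to the scene's lighting without a fixed skin-color model.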

Mobile Augmented Visualization Technology Using Vive Tracker (포즈 추적 센서를 활용한 모바일 증강 가시화 기술)

  • Lee, Dong-Chun;Kim, Hang-Kee;Lee, Ki-Suk
    • Journal of Korea Game Society / v.21 no.5 / pp.41-48 / 2021
  • This paper introduces a mobile augmented visualization technology that augments a three-dimensional virtual human body on a mannequin model using two pose (position and rotation) tracking sensors. Conventional camera-based tracking for augmented visualization fails to compute the camera pose when the camera shakes or moves quickly, because it relies on the camera image; using a pose tracking sensor overcomes this disadvantage. Moreover, even if the mannequin is moved or rotated, augmented visualization remains possible using the data from the pose tracking sensor attached to the mannequin, and, above all, there is no computational load for camera tracking.

Visual Servoing of Robotic Manipulators for Moving Objects (동적 물체에 대한 로봇 매니퓰레이터의 Visual Servoing)

  • Sim, Kwee-Bo;Oh, Seung-Wook
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.1 / pp.15-24 / 1996
  • This paper presents a new visual servoing method for controlling the pose (position and orientation) of a robotic manipulator grasping a 3D moving object whose initial pose and motion are unknown, using a stereo camera mounted on the manipulator's end-effector. To drive the current pose of the manipulator to the desired pose, we use the image Jacobian, described by the differential transform, which relates changes in the image feature points to changes in the object's pose with respect to the camera. A simple PD controller is adopted to track the desired pose. Finally, the effectiveness of the proposed method is confirmed by computer simulations.

  • PDF
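The image Jacobian for a point feature, and the resulting servo command, can be sketched in the standard form (shown here as a proportional law for brevity, whereas the paper adopts a PD controller; the interaction-matrix notation is the common textbook one rather than the paper's differential-transform derivation):

```python
import numpy as np

def interaction_matrix(u, v, Z, f):
    """Image Jacobian of a point feature (u, v) at depth Z for focal
    length f: maps camera velocity (vx, vy, vz, wx, wy, wz) to image
    feature velocity (standard image-based visual servoing form)."""
    return np.array([
        [-f / Z, 0.0, u / Z, u * v / f, -(f + u * u / f), v],
        [0.0, -f / Z, v / Z, f + v * v / f, -u * v / f, -u],
    ])

def servo_velocity(features, depths, targets, f, gain=0.5):
    """Proportional servo command: minus gain times the pseudo-inverse of
    the stacked Jacobian applied to the feature error."""
    L = np.vstack([interaction_matrix(u, v, Z, f)
                   for (u, v), Z in zip(features, depths)])
    e = (np.asarray(features, dtype=float)
         - np.asarray(targets, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

The stereo camera's role in the paper is to supply the depths Z that this Jacobian needs; with three or more non-collinear features, the stacked system constrains all six velocity components.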