• Title/Summary/Keyword: camera pose

Search results: 270

Optimal Camera Arrangement for Automatic Recognition of Steel Material based on Augmented Reality in Outdoor Environment (실외 환경에서의 증강 현실 기반의 자재 인식을 위한 최적의 카메라 배치)

  • Do, Hyun-Min;Kim, Bong-Keun
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.2
    • /
    • pp.143-151
    • /
    • 2010
  • Automation and robotization have been required in construction for several decades, and the construction industry has become one of the important research areas in the field of service robotics. In steel construction in particular, automatic recognition of structural steel members in the stockyard is emphasized. However, since the pose of a steel frame in the stockyard is site dependent and the stockyard is usually outdoors, it is difficult to determine the pose automatically. This paper adopts a recognition method based on augmented reality to cope with this problem. Focusing particularly on the lighting conditions of the outdoor environment, we formulate a constrained optimization problem and suggest a methodology for evaluating the optimal camera arrangement. From simulation results, a sub-optimal solution for the position of the camera can be obtained.
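The abstract formulates camera placement as a constrained optimization problem but does not state its cost function or constraints. The sketch below only illustrates that general pattern with scipy.optimize.minimize, using a hypothetical viewing-distance cost and an illumination-angle constraint; `target`, `sun_dir`, and all bounds are assumptions, not values from the paper.

```python
# Hypothetical sketch of a constrained camera-placement optimization,
# loosely following the pattern described in the abstract.
import numpy as np
from scipy.optimize import minimize

target = np.array([0.0, 0.0, 0.0])    # assumed steel-member location (m)
sun_dir = np.array([0.5, 0.0, 0.87])  # assumed sun direction (unit vector)

def view_cost(cam_pos):
    """Penalize long viewing distance from the camera to the target."""
    return np.linalg.norm(cam_pos - target)

def glare_constraint(cam_pos):
    """Keep the viewing direction away from the sun direction.
    scipy treats >= 0 as feasible for inequality constraints."""
    view_dir = (target - cam_pos) / np.linalg.norm(target - cam_pos)
    return 0.5 - np.dot(view_dir, -sun_dir)  # assumed cos(angle) bound

res = minimize(view_cost,
               x0=np.array([3.0, 3.0, 2.0]),             # initial camera guess
               constraints=[{"type": "ineq", "fun": glare_constraint}],
               bounds=[(-10, 10), (-10, 10), (0.5, 5)])  # allowed placement volume
print("sub-optimal camera position:", res.x)
```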

Camera Parameter Extraction Method for Virtual Studio Applications by Tracking the Location of TV Camera (가상스튜디오에서 실사 TV 카메라의 3-D 기준 좌표와 추적 영상을 이용한 카메라 파라메타 추출 방법)

  • 한기태;김회율
    • Journal of Broadcast Engineering
    • /
    • v.4 no.2
    • /
    • pp.176-186
    • /
    • 1999
  • In order to produce an image that lends realism to the audience in a virtual studio system, it is important to precisely synchronize foreground objects with the background image provided by computer graphics. In this paper, we propose a method of camera parameter extraction for this synchronization by tracking the pose of the TV camera. We derive an equation for extracting camera parameters from inverse perspective equations for tracking the pose of the camera and from the 3-D transformation between the base coordinates and the estimated coordinates. We show the validity of the proposed method in terms of the accuracy ratio between the parameters computed from the equation and the real parameters applied to a TV camera.
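The paper's own inverse-perspective derivation is not reproduced in the abstract. As a stand-in for the same class of computation, the sketch below recovers extrinsic camera parameters from known 3-D reference coordinates and their tracked 2-D image positions using OpenCV's solvePnP; all marker coordinates and the intrinsic matrix are illustrative assumptions.

```python
# Sketch: camera extrinsics from known 3-D studio markers and their
# tracked 2-D image positions (all numbers are illustrative).
import numpy as np
import cv2

# Assumed 3-D base coordinates of studio reference markers (meters).
object_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=np.float64)
# Assumed tracked 2-D locations of those markers in the TV camera image (pixels).
image_pts = np.array([[320, 400], [520, 395], [515, 250], [325, 255]], dtype=np.float64)

# Assumed intrinsic matrix (focal length and principal point in pixels).
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the camera pose
print("camera rotation:\n", R)
print("camera translation:\n", tvec.ravel())
```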


A New 3D Active Camera System for Robust Face Recognition by Correcting Pose Variation

  • Kim, Young-Ouk;Jang, Sung-Ho;Park, Chang-Woo;Sung, Ha-Gyeong;Kwon, Oh-Yun;Paik, Joon-Ki
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 2004.08a
    • /
    • pp.1485-1490
    • /
    • 2004
  • Recently, there have been remarkable developments in intelligent robot systems. Notable features of an intelligent robot are that it can track a user and perform face recognition, which is vital for many surveillance-based systems. The advantage of face recognition compared with other biometric recognition is that the coerciveness and contact usually required when acquiring characteristics do not exist in face recognition. However, the accuracy of face recognition is lower than that of other biometric recognition due to the loss of dimension in the image acquisition step and various changes associated with face pose and background. Many factors deteriorate face recognition performance, such as the distance from the camera to the face, lighting changes, pose changes, and changes in facial expression. In this paper, we implement a new 3D active camera system to compensate for the pose variations that degrade face recognition performance, and we propose a face recognition algorithm for intelligent surveillance systems and mobile robot systems.


A Moving Camera Localization using Perspective Transform and Klt Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.163-170
    • /
    • 2007
  • In the autonomous navigation of a mobile vehicle or mobile robot, localization computed by recognizing the environment is the most important factor. In general, the position and pose of a camera-equipped mobile vehicle or mobile robot can be determined using INS and GPS, but in this case enough known ground landmarks are required for accurate localization. In contrast with the homography method, which calculates the position and pose of a camera using only the relation of two-dimensional feature points between two frames, this paper proposes a method that calculates the position and pose of a camera from the relation between the locations predicted through perspective transform of 3D feature points, obtained by overlaying a 3D model onto the previous frame using GPS and INS input, and the locations of the corresponding feature points computed by KLT tracking in the current frame. For performance evaluation, we used a wireless-controlled vehicle mounted with a CCD camera, GPS, and INS, and performed tests to calculate the location and rotation angle of the camera on a video sequence captured at a 15 Hz frame rate.
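A minimal sketch of the tracking-plus-pose step described above, assuming OpenCV: pyramidal Lucas-Kanade (KLT) tracks feature points from the previous frame into the current one, and the camera pose follows from the resulting 2D-3D correspondences via solvePnP. The frame sources, 3-D model points, and intrinsic matrix K are placeholders, not the paper's data.

```python
# Sketch: KLT tracking followed by pose recovery from 2D-3D correspondences.
import numpy as np
import cv2

def track_and_localize(prev_gray, curr_gray, prev_pts_2d, model_pts_3d, K):
    """prev_pts_2d: (N, 2) image points in the previous frame.
    model_pts_3d: (N, 3) corresponding 3-D model points."""
    prev_pts = prev_pts_2d.astype(np.float32).reshape(-1, 1, 2)
    # KLT (pyramidal Lucas-Kanade) tracking into the current frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    pts_2d = curr_pts[good].reshape(-1, 2).astype(np.float64)
    pts_3d = np.asarray(model_pts_3d, dtype=np.float64)[good]

    # Camera pose in the current frame from the surviving correspondences.
    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, None)
    return rvec, tvec
```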

3D Map Generation System for Indoor Autonomous Navigation (실내 자율 주행을 위한 3D Map 생성 시스템)

  • Moon, SungTae;Han, Sang-Hyuck;Eom, Wesub;Kim, Youn-Kyu
    • Aerospace Engineering and Technology
    • /
    • v.11 no.2
    • /
    • pp.140-148
    • /
    • 2012
  • Autonomous navigation requires a map, pose tracking, and shortest-path finding. Because there is no GPS signal in an indoor environment, the current position must be recognized within the 3D map, for example by image processing. In this paper, we explain 3D map creation using a depth camera such as Kinect, and pose tracking in the 3D map using 2D images taken from the camera. In addition, a mechanism for avoiding obstacles is discussed.
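The abstract does not detail how depth frames become a 3D map. The sketch below shows the usual first step, back-projecting a Kinect-style depth image into a point cloud with a pinhole model; the intrinsic values are assumptions, not taken from the paper.

```python
# Sketch: back-project a depth image into a 3-D point cloud.
import numpy as np

fx, fy = 525.0, 525.0   # assumed focal lengths in pixels
cx, cy = 319.5, 239.5   # assumed principal point

def depth_to_points(depth_m):
    """depth_m: HxW array of depth values in meters (0 = no measurement)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]   # drop pixels with no depth reading

# Example: a synthetic flat wall 2 m in front of the camera.
cloud = depth_to_points(np.full((480, 640), 2.0))
print(cloud.shape)
```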

Augmented Reality Service Based on Object Pose Prediction Using PnP Algorithm

  • Kim, In-Seon;Jung, Tae-Won;Jung, Kye-Dong
    • International Journal of Advanced Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.295-301
    • /
    • 2021
  • Digital media technology is gradually advancing with the development of Fourth Industrial Revolution convergence technologies and mobile devices. The combination of deep learning and augmented reality can provide more convenient and lively services through the interaction of 3D virtual images with the real world. We combine deep learning-based pose prediction with augmented reality technology. We predict the eight vertices of the bounding box of the object in the image. Using the eight predicted 2D vertices (x, y), the eight 3D vertices (x, y, z) of the mesh, and the intrinsic parameters of the smartphone camera, we compute the extrinsic parameters of the camera through the PnP algorithm. We calculate the distance to the object and its degree of rotation using the extrinsic parameters and apply them to AR content. Our method provides services in a web environment, making it highly accessible to users and easy to maintain. As we provide augmented reality services using consumers' smartphone cameras, the method can be applied to various business fields.
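A minimal sketch of the PnP step described above, assuming OpenCV: the eight 3-D bounding-box vertices of the object, eight 2-D vertices (here simulated by projecting a known pose in place of the network's prediction), and an assumed smartphone intrinsic matrix yield the camera extrinsics, from which distance and rotation are read off.

```python
# Sketch: recover camera extrinsics from eight bounding-box vertices with PnP.
import numpy as np
import cv2

# Assumed 3-D bounding-box corners of the object (object frame, meters).
w, h, d = 0.2, 0.1, 0.15
box_3d = np.array([[x, y, z] for x in (0, w) for y in (0, h) for z in (0, d)],
                  dtype=np.float64)

# Assumed smartphone intrinsic matrix (pixels).
K = np.array([[1500, 0, 360], [0, 1500, 640], [0, 0, 1]], dtype=np.float64)

# Simulate the "predicted" 2-D vertices by projecting the box with a known
# pose; in practice these come from the deep network's prediction.
true_rvec = np.array([0.1, 0.4, 0.05])
true_tvec = np.array([0.05, -0.02, 0.6])
box_2d, _ = cv2.projectPoints(box_3d, true_rvec, true_tvec, K, None)
box_2d = box_2d.reshape(-1, 2)

ok, rvec, tvec = cv2.solvePnP(box_3d, box_2d, K, None)
distance = float(np.linalg.norm(tvec))                  # distance to the object (m)
rotation_deg = float(np.degrees(np.linalg.norm(rvec)))  # total rotation angle
print(distance, rotation_deg)
```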

Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing (수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법)

  • Lee, Sang-Hoon;Song, Jin-Mo;Bae, Jong-Sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.18 no.3
    • /
    • pp.226-233
    • /
    • 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem. We propose to combine extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and an inertial estimate of the camera's 6-DOF (degrees of freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
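A minimal sketch of the linear-algebra core described above: pose constraints are stacked into one inhomogeneous system Ax = b and solved in the least-squares sense via SVD. Here A and b are random stand-ins for the constraints the paper builds from feature points and inertial measurements.

```python
# Sketch: least-squares solution of a stacked inhomogeneous system via SVD.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 6))                   # 40 stacked constraint rows, 6-DOF unknowns
x_true = rng.normal(size=6)                    # hypothetical true pose parameters
b = A @ x_true + 0.01 * rng.normal(size=40)    # noisy right-hand side

# Pseudo-inverse solution: x = V * S^+ * U^T * b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x_est = Vt.T @ np.diag(1.0 / s) @ U.T @ b

print(np.allclose(x_est, x_true, atol=0.05))   # recovers the pose up to noise
```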

Combining Shape and SIFT Features for 3-D Object Detection and Pose Estimation (효과적인 3차원 객체 인식 및 자세 추정을 위한 외형 및 SIFT 특징 정보 결합 기법)

  • Tak, Yoon-Sik;Hwang, Een-Jun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.59 no.2
    • /
    • pp.429-435
    • /
    • 2010
  • Three-dimensional (3-D) object detection and pose estimation from a single-view query image has been an important issue in various fields such as medical applications, robot vision, and manufacturing automation. However, most existing methods are not appropriate for a real-time environment, since object detection and pose estimation require extensive information and computation. In this paper, we present a fast 3-D object detection and pose estimation scheme based on surrounding view-changed camera images of objects. Our scheme has two parts. First, we detect images similar to the query image in the database based on the shape feature and calculate candidate poses. Second, we perform accurate pose estimation for the candidate poses using the scale-invariant feature transform (SIFT) method. We carried out extensive experiments on our prototype system, achieved excellent performance, and report some of the results.
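A minimal sketch of the SIFT refinement step, assuming OpenCV: match SIFT keypoints between the query image and a candidate-pose database image, keeping the good matches with Lowe's ratio test. The file names are placeholders.

```python
# Sketch: SIFT matching between a query image and a candidate-pose view.
import cv2

query = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread("candidate_pose_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(query, None)
kp2, des2 = sift.detectAndCompute(candidate, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test: keep matches clearly better than their second-best.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good SIFT matches for this candidate pose")
```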

Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.3 no.1
    • /
    • pp.47-52
    • /
    • 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under the IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose for an input query image is then classified using the eigen eye feature space. In the experiments, the discrimination rate for subjects close to the camera ranged from 94.67% at minimum to 100% at maximum.
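A minimal sketch of an eigen-feature-space classifier of the kind described above: pupil-geometry feature vectors are projected with PCA, and a query is labeled by its nearest training sample in that subspace. The feature vectors and pose labels below are random stand-ins, not data from the paper.

```python
# Sketch: PCA "eigen feature space" plus nearest-neighbor pose classification.
import numpy as np

rng = np.random.default_rng(1)
train_feats = rng.normal(size=(200, 12))     # assumed pupil-geometry feature vectors
train_poses = rng.integers(0, 5, size=200)   # assumed discrete face-pose labels

# Build the eigen feature space (PCA) from the training features.
mean = train_feats.mean(axis=0)
U, s, Vt = np.linalg.svd(train_feats - mean, full_matrices=False)
basis = Vt[:4]                               # keep the top 4 eigen directions
train_proj = (train_feats - mean) @ basis.T  # training set in the eigen space

def classify(query_feat):
    """Return the pose label of the nearest training sample in eigen space."""
    q = (query_feat - mean) @ basis.T
    nearest = np.argmin(np.linalg.norm(train_proj - q, axis=1))
    return train_poses[nearest]

print(classify(rng.normal(size=12)))
```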


Real Time 3D Face Pose Discrimination Based On Active IR Illumination (능동적 적외선 조명을 이용한 실시간 3차원 얼굴 방향 식별)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.3
    • /
    • pp.727-732
    • /
    • 2004
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under the IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose for an input query image is then classified using the eigen eye feature space. In the experiments, the discrimination rate for subjects close to the camera ranged from 94.67% at minimum to 100% at maximum.