• Title/Summary/Keyword: Paraperspective Camera Model


3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.10B no.2 / pp.135-142 / 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object with a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. The inherent depth-recovery error of the paraperspective camera model is removed by stereo image analysis. A 3D synthetic object with 21 features at various positions was designed and tested to demonstrate the performance of the proposed algorithm by comparing the recovered shape and motion data with the measured values.
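
The abstract states that the depth-recovery error inherent in the paraperspective approximation is removed by stereo image analysis, but the stereo formulation itself is not given. The following is a minimal sketch under common rectified-stereo assumptions (known focal length in pixels and known baseline); the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def stereo_depth(x_left, x_right, focal_px, baseline):
    """Per-feature depth from a rectified stereo pair via disparity.

    x_left, x_right : arrays of horizontal image coordinates (pixels)
                      of matched features in the left and right views.
    focal_px        : focal length expressed in pixels.
    baseline        : distance between the two camera centers.
    """
    disparity = np.asarray(x_left, dtype=float) - np.asarray(x_right, dtype=float)
    depth = np.full_like(disparity, np.inf)     # zero disparity -> point at infinity
    valid = np.abs(disparity) > 1e-9
    depth[valid] = focal_px * baseline / disparity[valid]   # Z = f * B / d
    return depth
```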

3-D shape and motion recovery using SVD from image sequence (동영상으로부터 3차원 물체의 모양과 움직임 복원)

  • 정병오;김병곤;고한석
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.3 / pp.176-184 / 1998
  • We present a sequential factorization method using singular value decomposition (SVD) for recovering both the three-dimensional shape of an object and the motion of the camera from a sequence of images. We employ paraperspective projection [6] as the camera model to handle significant translational motion toward the camera or across the image. The proposed method not only gives robust and accurate results quickly, but also provides results at each frame because it is a sequential method. These properties make our method practically applicable to real-time applications. Considerable research has been devoted to the problem of recovering the motion and shape of an object from images [2][3][4][5][6][7][8][9]. Among the many different approaches, we adopt a factorization method using SVD because of its robustness and computational efficiency. The factorization method, originally proposed by Tomasi and Kanade [1], is based on batch-type computation and recovers shape and motion from feature trajectory information using singular value decomposition (SVD). Morita and Kanade [10] extended [1] to a sequential-type solution. However, both methods use an orthographic projection and cannot be applied to image sequences containing significant translational motion toward the camera or across the image. Poelman and Kanade [11] developed a batch-type factorization method using the paraperspective camera model; although it is a useful technique, it cannot be employed for real-time applications because it is based on batch-type computation. This work presents a sequential factorization method using SVD for paraperspective projection. Initial experimental results show that the performance of our method is almost equivalent to that of [11], although it is sequential.
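
Both this entry and the one above build on the same core computation: a rank-3 SVD factorization of the registered feature-trajectory (measurement) matrix, originally formulated by Tomasi and Kanade [1]. The sketch below shows only that generic batch step; the paraperspective metric constraints and the sequential per-frame update described in the abstract are omitted, and the function name and the even split of the singular values between motion and shape are illustrative choices.

```python
import numpy as np

def factorize_shape_motion(W):
    """Rank-3 factorization of a registered measurement matrix.

    W : (2F, P) array stacking the x- and y-coordinates of P feature
        points tracked over F frames.
    Returns an affine motion matrix M (2F, 3), a shape matrix S (3, P),
    and the per-row image translations t; the metric upgrade required by
    orthographic or paraperspective models is not applied here.
    """
    # Register the measurements: subtract each row's mean (image centroid).
    t = W.mean(axis=1, keepdims=True)
    W_reg = W - t

    # SVD and rank-3 truncation: W_reg ~ U3 @ diag(s3) @ Vt3.
    U, s, Vt = np.linalg.svd(W_reg, full_matrices=False)
    U3, s3, Vt3 = U[:, :3], s[:3], Vt[:3, :]

    # Split the singular values evenly between motion and shape.
    M = U3 * np.sqrt(s3)             # two camera rows per frame
    S = np.sqrt(s3)[:, None] * Vt3   # 3D shape, up to a 3x3 affine ambiguity
    return M, S, t
```

For example, F frames of the 21 tracked features in the first entry would give W of shape (2F, 21); the factors M and S are then refined to a metric solution using the normalization constraints of the chosen camera model.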

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.9B no.5 / pp.563-570 / 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. 23 facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using the paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the FAP (Face Animation Parameters) of MPEG-4 to synchronize a generic face model with a real face.
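
The recovered global rotation has to be expressed as MPEG-4 FAP global motion parameters. One plausible intermediate step, assuming the rotation is available as a 3x3 matrix, is a decomposition into yaw/pitch/roll angles; the sketch below uses a generic Z-Y-X Euler convention, and the mapping of these angles onto particular FAP fields, their units, and the axis conventions are assumptions not detailed in the abstract.

```python
import numpy as np

def rotation_to_euler_zyx(R):
    """Yaw, pitch, roll (radians) from a 3x3 rotation matrix R = Rz @ Ry @ Rx.

    Gimbal-lock handling is omitted; which axis corresponds to head yaw,
    pitch, and roll depends on the camera/model coordinate convention.
    """
    yaw   = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(np.clip(-R[2, 0], -1.0, 1.0))
    roll  = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll
```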