• Title/Summary/Keyword: Camera Model

1,509 search results

3D Active Appearance Model for Face Recognition (얼굴인식을 위한 3D Active Appearance Model)

  • Cho, Kyoung-Sic;Kim, Yong-Guk
    • Proceedings of the Korean HCI Society Conference / 2007.02a / pp.1006-1011 / 2007
  • Active Appearance Models are widely used for object modeling; face models in particular are widely used for face tracking, pose recognition, expression recognition, and face recognition. The first AAM was the Combined AAM, in which shape and appearance are generated from a single set of coefficients; later, the Independent AAM, with separate shape and appearance coefficients, and the Combined 2D+3D AAM, which can represent 3D, were developed. Although the Combined 2D+3D AAM can represent 3D, all of these models are built from 2D images. In this paper we propose a 3D AAM based on 3D data acquired with a stereo-camera based 3D face capturing device. Because our 3D AAM builds the model from 3D information, it can represent 3D more accurately than existing AAMs, and it can generate a model instance quickly by using Inverse Compositional Image Alignment (ICIA) as the alignment algorithm. To evaluate the 3D AAM, we performed face recognition on a Korean face database [9] collected with the stereo-camera based 3D face capturing device. (A minimal sketch of the model-instance computation follows below.)

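A minimal numpy sketch of the structure this abstract refers to: in an independent AAM, a model instance is produced from separate shape and appearance coefficients applied to PCA bases. All names and the toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def aam_instance(shape_mean, shape_modes, p, app_mean, app_modes, lam):
    """Generate an AAM model instance (independent formulation):
    shape from coefficients p, appearance (texture) from coefficients lam."""
    shape = shape_mean + shape_modes.T @ p        # s = s0 + sum_i p_i * s_i
    appearance = app_mean + app_modes.T @ lam     # A = A0 + sum_i lam_i * A_i
    return shape.reshape(-1, 3), appearance       # 3D vertices, texture vector

# toy usage with random "PCA" bases: 10 vertices, 4 shape modes, 3 appearance modes
rng = np.random.default_rng(0)
s0, S = rng.normal(size=30), rng.normal(size=(4, 30))
a0, A = rng.normal(size=100), rng.normal(size=(3, 100))
verts, tex = aam_instance(s0, S, rng.normal(size=4), a0, A, rng.normal(size=3))
print(verts.shape, tex.shape)   # (10, 3) (100,)
```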

Geometric Assessment and Correction of SPOT5 Imagery

  • Kwoh, Leong Keong;Xiong, Zhen;Shi, Fusheng
    • Proceedings of the KSRS Conference / 2003.11a / pp.286-288 / 2003
  • In this paper, we present our implementation of the direct camera model (image to ground) for SPOT5 and use it to assess the geometric accuracy of SPOT5 imagery. Our assessment confirms that the location accuracy of SPOT5 imagery, without the use of GCPs, is within 50 m. We further introduce a few attitude parameters to refine the camera model with GCPs. The model is applied to two SPOT5 supermode images, one near vertical (incidence angle of 3 degrees) and one far oblique (incidence angle of 27 degrees). The results show that an accuracy (RMS of check points) of about one pixel (2.5 m) can be achieved with about 4 GCPs, using only 3 parameters to correct the yaw, pitch, and roll of the satellite. (A least-squares attitude-refinement sketch follows below.)

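A hedged sketch of the refinement step the abstract describes: estimate three attitude bias angles (yaw, pitch, roll) from GCP image residuals by least squares. The projection below is a toy frame-camera stand-in, not SPOT5's pushbroom sensor model, and every name is illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def rot(yaw, pitch, roll):
    """Rotation matrix from yaw/pitch/roll bias angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def project(ground, cam_pos, R_nominal, bias, focal):
    """Toy perspective projection with the nominal attitude perturbed by the bias angles."""
    v = ((rot(*bias) @ R_nominal) @ (ground - cam_pos).T).T
    return focal * v[:, :2] / v[:, 2:3]

def refine_attitude(gcp_ground, gcp_image, cam_pos, R_nominal, focal):
    """Least-squares estimate of the three attitude biases from GCP image residuals."""
    resid = lambda b: (project(gcp_ground, cam_pos, R_nominal, b, focal) - gcp_image).ravel()
    return least_squares(resid, x0=np.zeros(3)).x

# toy demo: recover a synthetic 0.001-rad pitch bias from 4 GCPs
gcps = np.array([[1000., 2000., 50.], [1500., 2500., 60.],
                 [2000., 1800., 40.], [1200., 2200., 55.]])
pos, Rn, f = np.array([1500., 2000., 800000.]), np.eye(3), 100000.
obs = project(gcps, pos, Rn, [0.0, 0.001, 0.0], f)
print(refine_attitude(gcps, obs, pos, Rn, f))   # approx [0, 0.001, 0]
```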

Automated texture mapping for 3D modeling of objects with complex shapes --- a case study of archaeological ruins

  • Fujiwara, Hidetomo;Nakagawa, Masafumi;Shibasaki, Ryosuke
    • Proceedings of the KSRS Conference / 2003.11a / pp.1177-1179 / 2003
  • Recently, ground-based laser profilers have been used to acquire 3D spatial information about archaeological objects. However, it is very difficult to measure complicated objects because of the relatively low resolution. Texture mapping can compensate for the low resolution and produce 3D models with higher fidelity, but constructing a textured 3D model is costly: it demands a great deal of manual labor, the work depends on the editor's experience and skill, and accuracy can be lost during editing. In this research, a method is proposed for automatically generating a 3D model by integrating data from a laser profiler and a non-calibrated digital camera. First, region segmentation is applied to the laser range data to extract geometric features of the object, using information such as plane normal vectors, distances from the sensor, and the sun direction. Next, image segmentation is applied to the digital camera images of the same object. Geometric relations are then determined by matching the features extracted from the laser range data and from the camera images. Finally, the digital camera images are projected onto the surface reconstructed from the laser range data, and the textured 3D model is generated automatically. (The projection step is sketched below.)

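The final projection step lends itself to a short sketch: each laser range point is mapped into the camera image and colored by the pixel it lands on. The pinhole intrinsics K and pose R, t are assumed to come from the feature correspondences described above; the code and names are illustrative, not the authors' implementation.

```python
import numpy as np

def texture_map(points, image, K, R, t):
    """Color 3D laser-range points by projecting them into a camera image.
    points: (N, 3) world coordinates; image: (H, W, 3); K: 3x3 intrinsics;
    R, t: camera pose estimated from the laser/image feature correspondences."""
    cam = (R @ points.T + t.reshape(3, 1)).T           # world -> camera frame
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective divide
    h, w = image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    in_front = cam[:, 2] > 0                           # ignore points behind the camera
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[in_front] = image[v[in_front], u[in_front]]
    return colors                                      # per-point RGB texture
```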

Assessing the Applicability of Sea Cliff Monitoring Using Multi-Camera and SfM Method (멀티 카메라와 SfM 기법을 활용한 해식애 모니터링 적용가능성 평가)

  • Yu, Jae Jin;Park, Hyun-Su;Kim, Dong Woo;Yoon, Jeong-Ho;Son, Seung-Woo
    • Journal of The Geomorphological Association of Korea / v.25 no.1 / pp.67-80 / 2018
  • This study used aerial and terrestrial images to build a three-dimensional model of sea cliffs at Pado beach with SfM (Structure from Motion) techniques. By combining both image sets, the study aimed to reduce the shadow (occluded) areas that appear when only aerial images are used. The accuracy of the two survey campaigns was assessed by root mean square error, and change was monitored with the M3C2 (Multiscale Model to Model Cloud Comparison) method. In enclosed areas such as sea caves and notches, the M3C2 result partly failed to express the landforms. However, debris eroded from the sea cliffs was detected as an eroded area by M3C2 and was also visible in the pictures captured by the multi-camera setup. The results show the applicability of multi-camera imaging and SfM for monitoring changes of sea cliffs. (A simplified M3C2 distance computation is sketched below.)
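
A much-simplified, single-core-point version of the M3C2 idea, as a sketch only (the published method additionally estimates normals at multiple scales and reports confidence intervals): average each epoch's points inside a cylinder along the surface normal and take the signed offset between the averages.

```python
import numpy as np

def m3c2_distance(core, cloud_t1, cloud_t2, normal, radius=0.5):
    """Signed change at one core point between two survey epochs (simplified M3C2).
    core: (3,), cloud_t1/cloud_t2: (N, 3), normal: unit surface normal at core."""
    def mean_along_normal(cloud):
        d = cloud - core
        along = d @ normal                                     # offset along the normal
        radial = np.linalg.norm(d - np.outer(along, normal), axis=1)
        inside = radial < radius                               # points in the search cylinder
        return along[inside].mean() if inside.any() else np.nan
    return mean_along_normal(cloud_t2) - mean_along_normal(cloud_t1)
```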

MTF Measurement for Flight Model of MAC, a 2.5m GSD Earth Observation Camera (2.5m 해상도 지구관측 카메라 MAC 비행모델의 지상 MTF 성능 측정)

  • Kim, Eugene-D.;Choi, Young-Wan;Yang, Ho-Soon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.33 no.10 / pp.98-103 / 2005
  • The Flight Model of MAC (Medium-sized Aperture Camera), a 2.5 m GSD class earth observation camera, has been aligned and assembled. Topics discussed in this paper include the ground MTF performance of the MAC system and the alignment of the focal plane assembly. MTF was measured by a knife-edge scanning technique, and a 450 mm diameter Cassegrain collimator with diffraction-limited performance was built and used for the MTF measurements. The system MTF was used as the figure of merit to find the best focus of the focal plane assembly. (The knife-edge MTF computation is sketched below.)
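
A textbook sketch of the knife-edge computation named above (not the MAC test procedure itself): differentiate the measured edge spread function (ESF) to obtain the line spread function (LSF), then take the normalized magnitude of its Fourier transform.

```python
import numpy as np

def mtf_from_edge(esf):
    """Knife-edge MTF estimate: ESF -> LSF (derivative) -> |FFT|, normalized to 1 at DC."""
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(len(lsf))      # window to limit noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# toy edge profile (a blurred step) and its frequency axis in cycles/sample
x = np.linspace(-10, 10, 256)
esf = 0.5 * (1 + np.tanh(x / 1.5))
freq = np.fft.rfftfreq(len(esf), d=1.0)
print(freq[:4], mtf_from_edge(esf)[:4])
```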

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.24 no.5 / pp.765-774 / 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with the problem of precisely estimating camera position. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional approach of obtaining camera extrinsic parameters from two-dimensional images suffers from large estimation errors. We propose a method that obtains the coordinate transformation parameters with an error within a valid range, using depth images and a function optimization method, in order to generate an omnidirectional three-dimensional model from eight low-cost RGB-D cameras. (A closed-form alignment of corresponding depth points is sketched below as a generic stand-in.)
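
The paper's own function-optimization step is not reproduced here; as a generic stand-in for the same registration idea, the sketch below computes the rigid coordinate transformation between two cameras in closed form (Kabsch/SVD) from corresponding 3D points taken from their depth images. Names are illustrative.

```python
import numpy as np

def rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rigid alignment: find R, t with dst ~ R @ src + t.
    src, dst: (N, 3) corresponding 3D points from two RGB-D cameras."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```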

Geometric Modelling and Coordinate Transformation of Satellite-Based Linear Pushbroom-Type CCD Camera Images (선형 CCD카메라 영상의 기하학적 모델 수립 및 좌표 변환)

  • 신동석;이영란
    • Korean Journal of Remote Sensing / v.13 no.2 / pp.85-98 / 1997
  • A geometric model of pushbroom-type linear CCD camera images is proposed in this paper. At present, this type of camera is used to obtain almost all kinds of high-resolution optical satellite images. The proposed geometric model includes not only a forward (image-to-ground) transformation but also an inverse (ground-to-image) transformation. The inverse transformation cannot be derived analytically in closed form because the focal point of the image varies with time; therefore, an iterative algorithm in which the focal point converges to a given pixel position is proposed. Although the model can be applied to any pushbroom-type linear CCD camera images, the geometric model of the high-resolution multi-spectral camera on board KITSAT-3 is used as an example. The flight model of KITSAT-3 is currently in development and is due to be launched in late 1998. (An iterative ground-to-image inversion is sketched below.)
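
The iterative inverse can be sketched generically as Newton iteration with a finite-difference Jacobian over a forward (image-to-ground) model supplied as a function. The `forward(line, sample)` interface and the toy forward model are assumptions for illustration, not the KITSAT-3 sensor model.

```python
import numpy as np

def inverse_transform(ground_xy, forward, start=(0.0, 0.0), tol=1e-3, max_iter=50):
    """Ground-to-image mapping for a pushbroom sensor: since the focal point moves
    with every scan line, invert the forward model iteratively, converging on the
    pixel whose forward projection matches the requested ground point."""
    pix = np.asarray(start, dtype=float)               # (line, sample)
    target = np.asarray(ground_xy, dtype=float)
    for _ in range(max_iter):
        f = np.asarray(forward(*pix)) - target
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((2, 2))
        for j, h in enumerate(np.eye(2)):              # finite-difference Jacobian
            J[:, j] = np.asarray(forward(*(pix + h))) - np.asarray(forward(*pix))
        pix = pix - np.linalg.solve(J, f)              # Newton step
    return pix

# toy forward model (linear scan geometry), just to show the iteration converging
fwd = lambda line, sample: (100.0 + 2.5 * line, 50.0 + 2.5 * sample)
print(inverse_transform((350.0, 175.0), fwd))          # approx [100. 50.]
```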

Omnidirectional Camera Motion Estimation Using Projected Contours (사영 컨투어를 이용한 전방향 카메라의 움직임 추정 방법)

  • Hwang, Yong-Ho;Lee, Jae-Man;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.5 / pp.35-44 / 2007
  • Since an omnidirectional camera system with a very large field of view can capture a great deal of information about the environment from only a few images, calibration and 3D reconstruction using omnidirectional images have been actively researched. Under the omnidirectional camera model, most line segments of man-made objects are projected to contours, so corresponding contours across an image sequence are useful for computing the camera transformation, including rotation and translation. This paper presents a novel two-step minimization method to estimate the extrinsic parameters of the camera from corresponding contours. In the first step, coarse camera parameters are estimated by minimizing an angular error between the epipolar planes and the back-projected vectors of each corresponding point. The final parameters are then computed by minimizing the distance error between the projected contours and the actual contours. Simulation results on synthetic and real images demonstrate that the algorithm achieves precise contour matching and camera motion estimation. (The first-step angular cost is sketched below.)
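
A sketch of the first-step cost as described in the abstract: for each correspondence, measure the angle between the back-projected unit ray in the second view and the epipolar plane spanned by the baseline and the rotated ray from the first view. Conventions and names below are assumptions, not taken from the paper.

```python
import numpy as np

def angular_epipolar_error(rays1, rays2, R, t):
    """Sum of angles between each ray in view 2 and its epipolar plane.
    rays1, rays2: (N, 3) unit back-projected rays; R, t: candidate camera motion."""
    n = np.cross(t, (R @ rays1.T).T)                   # epipolar plane normals
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    sin_err = np.abs(np.sum(rays2 * n, axis=1))        # |ray . normal| = sin(angle to plane)
    return np.sum(np.arcsin(np.clip(sin_err, 0.0, 1.0)))
```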

Low Cost Digital X-Ray Image Capture System Using CCD Camera (CCD 카메라를 사용한 저가형 Digital X-Ray 영상취득 시스템)

  • Kang, Yong-Chul
    • The Transactions of the Korean Institute of Electrical Engineers P / v.56 no.1 / pp.19-22 / 2007
  • We developed a low-cost digital X-ray image capture system that uses a CCD camera instead of an expensive image plate or image intensifier. To reduce the system volume, we made the dark box shorter than in the previous model. Using a graphical programming language, we developed a program for post-processing the images captured by the CCD camera; the program improves the image resolving power.

The Camera Tracking of Real-Time Moving Object on UAV Using the Color Information (컬러 정보를 이용한 무인항공기에서 실시간 이동 객체의 카메라 추적)

  • Hong, Seung-Beom
    • Journal of the Korean Society for Aviation and Aeronautics / v.18 no.2 / pp.16-22 / 2010
  • This paper proposes a real-time moving-object tracking system for a UAV that uses color information. Previous work on object tracking has focused on recognizing one or more moving objects with a fixed camera, including objects against complex backgrounds. In contrast, this paper implements a tracking system that drives the camera's pan/tilt function after extracting the object's region. First, the system detects the moving object using RGB/HSI color models and obtains the object's coordinates in the acquired image with a compact bounding box. Second, the camera origin is aligned to the top-left coordinate of the compact bounding box, and the moving object is then tracked using the camera's pan/tilt function. The system is implemented with LabVIEW 8.6 and NI Vision Builder AI from National Instruments, and it shows good camera-tracking performance in a laboratory environment. (One tracking iteration is sketched below.)
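
The original system was built in LabVIEW 8.6 with NI Vision Builder AI; the Python sketch below (thresholds and the pixel-to-angle gain are assumed values) only illustrates one iteration of the described loop: HSI color thresholding, a compact bounding box, and pan/tilt commands derived from the box's top-left offset from the image origin.

```python
import numpy as np

def track_step(hsi_image, h_range, s_range, i_range, deg_per_pixel=0.05):
    """One tracking iteration: segment the target by HSI thresholds, take the
    compact bounding box, and convert its top-left corner into pan/tilt commands."""
    h, s, i = hsi_image[..., 0], hsi_image[..., 1], hsi_image[..., 2]
    mask = ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1]) &
            (i >= i_range[0]) & (i <= i_range[1]))
    if not mask.any():
        return None                                    # target not found in this frame
    rows, cols = np.where(mask)
    top, left = rows.min(), cols.min()                 # top-left of the bounding box
    pan_cmd = left * deg_per_pixel                     # offset from image origin -> pan
    tilt_cmd = top * deg_per_pixel                     # offset from image origin -> tilt
    return pan_cmd, tilt_cmd
```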