• Title/Summary/Keyword: camera projective modeling


3D Motion of Objects in an Image Using Vanishing Points (소실점을 이용한 2차원 영상의 물체 변환)

  • 김대원;이동훈;정순기
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.30 no.11
    • /
    • pp.621-628
    • /
    • 2003
  • This paper addresses a method of enabling objects in an image to exhibit apparent 3D motion. Many researchers have approached this problem by reconstructing a 3D model from several images using image-based modeling techniques, or by building a cube-modeled scene from camera calibration using vanishing points. This paper, however, presents the possibility of image-based motion without exact 3D information about the scene geometry and without camera calibration. The proposed system treats the image plane as a projective plane with respect to a viewpoint and models the 2D frame of a projected 3D object using only lines and points. When a modeled frame is transformed, it refers to its vanishing points as local coordinates.
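The geometric primitive underlying this approach, intersecting image lines that are parallel in 3D to obtain their vanishing point, reduces to a few lines of homogeneous-coordinate algebra. The sketch below is an illustrative NumPy version, not the authors' code, and the example coordinates are placeholders.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points given as (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines; None if they are parallel in the image."""
    p = np.cross(l1, l2)
    if abs(p[2]) < 1e-9:          # the lines meet at infinity in the image plane
        return None
    return p[:2] / p[2]

# Two edges of a box that are parallel in 3D but converge in the image
# (placeholder coordinates); their intersection is the vanishing point that
# serves as a local "axis" when the modeled frame is transformed.
l1 = line_through((100, 400), (300, 320))
l2 = line_through((120, 500), (340, 380))
print("vanishing point:", intersection(l1, l2))
```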

Projective Reconstruction Method for 3D modeling from Un-calibrated Image Sequence (비교정 영상 시퀀스로부터 3차원 모델링을 위한 프로젝티브 재구성 방법)

  • Hong Hyun-Ki;Jung Yoon-Yong;Hwang Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.113-120
    • /
    • 2005
  • 3D reconstruction of a scene structure from un-calibrated image sequences has long been one of the central problems in computer vision. For 3D reconstruction in Euclidean space, projective reconstruction, which is classified into the merging method and the factorization method, is needed as a preceding step. By calculating all camera projection matrices and structures at the same time, the factorization method suffers less from error accumulation than the merging method. However, the factorization method has difficulty analyzing long sequences precisely, because it is based on the assumption that all correspondences must remain visible in every view from the first frame to the last. This paper presents a new projective reconstruction method for recovery of 3D structure over long sequences. We break a full sequence into sub-sequences based on a quantitative measure that considers the number of matching points between frames, the homography error, and the distribution of matching points on the frame. All of the projective reconstructions of the sub-sequences are then registered into the same coordinate frame for a complete description of the scene. The experimental results showed that the proposed method can recover more precise 3D structure than the merging method.
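The factorization step the abstract contrasts with merging, solving for all projective cameras and all points at once, amounts to a rank-4 decomposition of a measurement matrix. The following is a minimal sketch with all projective depths initialized to 1 (the first pass of a Sturm-Triggs style scheme); it is not the authors' implementation, and their sub-sequence scoring and registration steps are omitted.

```python
import numpy as np

def projective_factorization(points):
    """points: (n_views, n_points, 2) image coordinates of features visible in
    every view. Returns (cameras, structure): cameras is (n_views, 3, 4) and
    structure is (4, n_points), both defined only up to a common projective
    transformation."""
    n_views, n_points, _ = points.shape
    # Measurement matrix W (3*n_views x n_points); projective depths set to 1.
    W = np.empty((3 * n_views, n_points))
    for i in range(n_views):
        W[3 * i:3 * i + 2] = points[i].T
        W[3 * i + 2] = 1.0
    # Rank-4 approximation via SVD: column space -> cameras, row space -> points.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :4] * s[:4]
    X = Vt[:4]
    return P.reshape(n_views, 3, 4), X

# Tiny synthetic run: 3 random projective cameras observing 8 points.
rng = np.random.default_rng(0)
cams_true = rng.normal(size=(3, 3, 4))
X_true = np.vstack([rng.normal(size=(3, 8)), np.ones((1, 8))])
obs = []
for P in cams_true:
    x = P @ X_true
    obs.append((x[:2] / x[2]).T)
cams, X = projective_factorization(np.stack(obs))
print(cams.shape, X.shape)   # (3, 3, 4) (4, 8)
```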

Video-based Height Measurements of Multiple Moving Objects

  • Jiang, Mingxin;Wang, Hongyu;Qiu, Tianshuang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.9
    • /
    • pp.3196-3210
    • /
    • 2014
  • This paper presents a novel video metrology approach based on robust tracking. From videos acquired by an uncalibrated stationary camera, the foreground likelihood map is obtained using the Codebook background modeling algorithm, and the multiple moving objects are tracked by a combined tracking algorithm. Then, we compute the vanishing line of the ground plane and the vertical vanishing point of the scene, and extract the head and feet feature points in each frame of the video sequence. Finally, we apply a single-view mensuration algorithm to each of the frames to obtain height measurements and fuse the multi-frame measurements using the RANSAC algorithm. Compared with other popular methods, our proposed algorithm does not require calibrating the camera and can track multiple moving objects when occlusion occurs. Therefore, it reduces the complexity of calculation and improves the accuracy of measurement simultaneously. The experimental results demonstrate that our method is effective and robust to occlusion.
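The per-frame, single-view mensuration step is usually written as a cross-ratio involving the ground-plane vanishing line, the vertical vanishing point, and one known reference height. The sketch below follows that standard (Criminisi-style) formulation; it is not the paper's code, the coordinates are made-up placeholders, and the multi-frame RANSAC fusion is omitted.

```python
import numpy as np

def to_h(p):
    """Lift an (x, y) image point to homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def height_ratio(base, top, vline, v_vert):
    """Value proportional to the metric length of the vertical segment base->top,
    assuming 'base' lies on the ground plane."""
    b, t = to_h(base), to_h(top)
    return np.linalg.norm(np.cross(b, t)) / (
        abs(vline @ b) * np.linalg.norm(np.cross(v_vert, t)))

def measure_height(base, top, ref_base, ref_top, ref_height, vline, v_vert):
    """Scale the unknown object's ratio by a reference object of known height."""
    scale = ref_height / height_ratio(ref_base, ref_top, vline, v_vert)
    return scale * height_ratio(base, top, vline, v_vert)

# Placeholder geometry: horizon (ground-plane vanishing line), vertical vanishing
# point far below the image, a 1.80 m reference person, and a tracked target.
vline = np.array([0.0, 1.0, -300.0])      # image line a*x + b*y + c = 0
v_vert = np.array([320.0, 8000.0, 1.0])   # vertical vanishing point (homogeneous)
h = measure_height(base=(400, 700), top=(400, 430),
                   ref_base=(200, 720), ref_top=(200, 450), ref_height=1.80,
                   vline=vline, v_vert=v_vert)
print(f"estimated height: {h:.2f} m")
```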

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Proceedings of the Korea Society of Information Technology Applications Conference
    • /
    • 2005.11a
    • /
    • pp.229-232
    • /
    • 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a binocular stand-alone system. The basic idea is to consider the vision system as a specific sensor dedicated to a task and included in a servo control loop; automatic grasping then follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing control scheme ensures the convergence of the image features to the desired trajectories by using the Jacobian matrix, which is proved by Lyapunov stability theory. We also stress the importance of projective-invariant object/gripper alignment. The alignment between two solids in 3-D projective space can be represented in a view-invariant way; more precisely, it can be mapped into an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy associated with the task to be performed is not affected by discrepancies between the Euclidean setups at the preparation and task-execution stages. According to the projective alignment, the set-point can then be computed, and the robot gripper moves to the desired position under the image-based control law. In this paper we adopt a constant Jacobian online. The method described herein integrates vision, robotics, and automatic control to achieve its goal: it overcomes the disadvantages of discrepancies between the different Euclidean setups and proposes a control law for the binocular stand-alone vision case. The experimental simulation shows that such an image-based approach is effective in performing the precise alignment between the robot end-effector and the object.
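The control law behind such a scheme is the standard image-based visual servoing update: the camera (or gripper) velocity is the pseudo-inverse of an image Jacobian applied to the feature error, here with the Jacobian held constant as in the abstract. The sketch below uses the textbook interaction matrix for point features; it is an illustrative single-camera version, not the authors' binocular controller, and the feature values are placeholders.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x),  y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y,    -x],
    ])

def ibvs_step(s, s_des, J, gain=0.5):
    """One servo iteration: returns the 6-vector camera velocity (v, omega)."""
    return -gain * np.linalg.pinv(J) @ (s - s_des)

# Four point features (normalized coordinates) and their desired positions.
s_des = np.array([-0.1, -0.1,  0.1, -0.1,  0.1, 0.1,  -0.1, 0.1])
s     = np.array([-0.15, -0.05, 0.12, -0.12, 0.08, 0.14, -0.13, 0.09])
Z_des = 0.5  # assumed constant depth at the goal configuration
# Constant Jacobian evaluated once at the desired feature positions.
J = np.vstack([point_interaction_matrix(s_des[2*i], s_des[2*i+1], Z_des)
               for i in range(4)])
print("camera velocity command:", ibvs_step(s, s_des, J))
```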

Realistic 3D Scene Reconstruction from an Image Sequence (연속적인 이미지를 이용한 3차원 장면의 사실적인 복원)

  • Jun, Hee-Sung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.3
    • /
    • pp.183-188
    • /
    • 2010
  • A factorization-based 3D reconstruction system is realized to recover a 3D scene from an image sequence. The image sequence is captured with an uncalibrated perspective camera from several views. Many matched feature points over all images are obtained by a feature-tracking method. These data are then supplied to the 3D reconstruction module to obtain a projective reconstruction, which is converted to a Euclidean reconstruction by enforcing several metric constraints. After many triangular meshes are obtained, the realistic reconstruction of the 3D models is completed by texture mapping. The developed system is implemented in C++, and the Qt library is used to implement the system user interface. The OpenGL graphics library is used to realize the texture-mapping routine and the model visualization program. Experimental results using synthetic and real image data are included to demonstrate the effectiveness of the developed system.
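The front end of such a pipeline, tracking many feature points across the whole sequence before factorization and the metric upgrade, can be sketched with a KLT tracker. The snippet below is an illustrative OpenCV/Python version, not the paper's C++/Qt implementation; the video filename is a placeholder and the frames are assumed to be grayscale.

```python
import cv2
import numpy as np

def track_features(frames, max_corners=400):
    """Track corners through the whole sequence; returns an array of shape
    (n_frames, n_tracks, 2) containing only tracks that survive every frame."""
    prev = frames[0]
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    tracks = [pts.reshape(-1, 2)]
    alive = np.ones(len(pts), dtype=bool)
    for frame in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, frame, pts, None)
        alive &= status.reshape(-1).astype(bool)
        tracks.append(nxt.reshape(-1, 2))
        pts, prev = nxt, frame
    tracks = np.stack(tracks)            # (n_frames, n_corners, 2)
    return tracks[:, alive]              # keep only points tracked in all frames

# Usage (assuming "sequence.mp4" holds the captured image sequence):
# cap = cv2.VideoCapture("sequence.mp4")
# frames = []
# while True:
#     ok, img = cap.read()
#     if not ok:
#         break
#     frames.append(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))
# measurements = track_features(frames)   # input to projective factorization
```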

X3D Based Web Visualization by Data Fusion of 3D Spatial Information and Video Sequence (3D 공간정보와 비디오 융합에 의한 X3D기반 웹 가시화)

  • Sohn, Hong-Gyoo;Kim, Seong-Sam;Yoo, Byoung-Hyun;Kim, Sang-Min
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.17 no.4
    • /
    • pp.95-103
    • /
    • 2009
  • Global interest in the construction of 3-dimensional spatial information has risen due to the development of measurement sensors and data-processing technologies. In spite of criticism over the violation of personal privacy, CCTV cameras installed in outdoor public spaces of urban areas are used as fundamental sensors for traffic management, crime prevention, and hazard monitoring. To guarantee safety in the urban environment and prevent disasters, a surveillance system that integrates pre-constructed 3-dimensional spatial information with CCTV data or video sequences is needed for monitoring and observing emergent situations interactively in real time. In this study, we propose a prototype system for web visualization based on X3D, an international standard for real-time web visualization, and demonstrate its applicability by integrating 3-dimensional spatial information with video sequences.

Integrated editing system for 3D stereoscopic contents production (3차원 입체 콘텐츠 제작을 위한 통합 저작 시스템)

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.1
    • /
    • pp.11-21
    • /
    • 2008
  • Recently, interest in 3D stereoscopic content has increased due to the development of digital image media. Therefore, many techniques for 3D stereoscopic image generation have been researched and developed. However, it is difficult to generate highly immersive and natural 3D stereoscopic content, because the lack of 3D geometric information in a 2D image imposes restrictions. In addition, controlling the camera interval and rendering both eyes' views must be accomplished repeatedly to achieve a strong stereo effect. Therefore, we propose an integrated editing system for 3D stereoscopic content production using a variety of images. We generate a 3D model from projective geometric information in a single 2D image using an image-based modeling method, and we offer a real-time interactive 3D stereoscopic preview function for determining a highly immersive 3D stereo view. Finally, high-quality 3D stereoscopic content can be generated intuitively through a stereoscopic LCD monitor and polarizing-filter glasses.
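The camera-interval control mentioned in the abstract is typically realized by giving each eye a small horizontal offset plus an asymmetric (off-axis) projection frustum, so that objects at the chosen convergence distance appear at zero parallax. The sketch below follows that standard parallel-axis, asymmetric-frustum formulation; it is an illustrative NumPy version, not the authoring system described in the paper, and the separation and convergence values are placeholders.

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """Standard OpenGL-style perspective frustum matrix."""
    return np.array([
        [2*n/(r-l), 0.0, (r+l)/(r-l), 0.0],
        [0.0, 2*n/(t-b), (t+b)/(t-b), 0.0],
        [0.0, 0.0, -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def stereo_eye(eye, separation, convergence, fovy_deg=45.0, aspect=16/9,
               near=0.1, far=100.0):
    """eye: -1 for the left eye, +1 for the right eye.
    Returns (projection_matrix, horizontal_camera_offset)."""
    top = near * np.tan(np.radians(fovy_deg) / 2.0)
    shift = 0.5 * separation * near / convergence   # frustum skew toward the other eye
    left  = -aspect * top - eye * shift
    right =  aspect * top - eye * shift
    return frustum(left, right, -top, top, near, far), eye * separation / 2.0

# Example: 6.5 cm interaxial distance, convergence plane 3 m from the camera.
for eye, name in ((-1, "left"), (+1, "right")):
    P, offset = stereo_eye(eye, separation=0.065, convergence=3.0)
    print(name, "eye: camera offset =", offset, ", frustum skew =", P[0, 2])
```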
