• Title/Abstract/Keywords: 3D projection mapping


VR Visualization of Casting Flow Simulation (주물 유동해석의 VR 가시화)

  • Park, Ji-Young; Suh, Ji-Hyun; Kim, Sung-Hee; Kim, Myoung-Hee
    • Proceedings of the Korean HCI Society Conference / 2008.02a / pp.813-816 / 2008
  • In this research we present a method to reconstruct a casting flow simulation result as a 3D model and visualize it on a VR display. First, numerical analysis of the heat flow is performed using existing commercial CAE simulation software. In this process the shape of the original design model is approximated by a regular rectangular grid. The filling ratio and temperature of each voxel are recorded iteratively over a predefined number of steps, from the moment the melted metal is poured into the mold until the mold is entirely filled. Next, we reconstruct the casting from voxels, using the simulation result as input. The color of each voxel is determined by mapping its temperature and filling ratio to colors at each step as the flow proceeds. The reconstructed model is visualized on the Projection Table, a horizontal-type VR display that provides active stereoscopic images.
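The per-voxel color mapping described in this abstract can be illustrated with a minimal Python sketch. Everything below is an assumption for illustration only: the 64³ grid, the temperature range, and the names `voxel_colors`, `temperature`, and `fill_ratio` are not taken from the paper.

```python
# Minimal sketch of a per-voxel color mapping: temperature drives hue,
# filling ratio drives opacity. Ranges and names are assumptions.
import numpy as np

T_MIN, T_MAX = 300.0, 1800.0   # assumed temperature range in Kelvin

def voxel_colors(temperature: np.ndarray, fill_ratio: np.ndarray) -> np.ndarray:
    """Map per-voxel temperature to a blue-to-red ramp and use the
    filling ratio as opacity, so empty voxels stay invisible."""
    t = np.clip((temperature - T_MIN) / (T_MAX - T_MIN), 0.0, 1.0)
    rgba = np.empty(temperature.shape + (4,), dtype=np.float32)
    rgba[..., 0] = t               # red grows with temperature
    rgba[..., 1] = 0.0
    rgba[..., 2] = 1.0 - t         # blue fades as the metal heats
    rgba[..., 3] = np.clip(fill_ratio, 0.0, 1.0)  # opacity = fill ratio
    return rgba

# One frame of an assumed 64^3 grid at a given simulation step:
temp = np.random.uniform(T_MIN, T_MAX, (64, 64, 64))
fill = np.random.uniform(0.0, 1.0, (64, 64, 64))
colors = voxel_colors(temp, fill)  # feed to a volume/voxel renderer
```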


View Morphing for Generation of In-between Scenes from Un-calibrated Images (비보정 (un-calibrated) 영상으로부터 중간영상 생성을 위한 뷰 몰핑)

  • Song, Jin-Young; Hwang, Yong-Ho; Hong, Hyun-Ki
    • Journal of KIISE: Computer Systems and Theory / v.32 no.1 / pp.1-8 / 2005
  • Image morphing, which generates 2D transitions between images, can fail to convey even simple 3D transformations. In addition, the previous view morphing method requires control points for postwarping and is strongly affected by self-occlusion. This paper presents a new morphing algorithm that automatically generates in-between scenes from un-calibrated images. Our algorithm rectifies the input images based on the fundamental matrix, then performs linear interpolation using a bilinear disparity map. Finally, we generate in-between views by inverse mapping of the homography between the rectified images. The proposed method can be applied to photographs and drawings, because it requires neither knowledge of 3D shape nor camera calibration, which is generally a complex process. The generated in-between views can be used in various application areas such as virtual-environment simulation systems and image communication.
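The rectify-interpolate-unwarp pipeline in this abstract can be sketched with OpenCV, under heavy simplification: the disparity-guided per-pixel interpolation is replaced here by a plain cross-dissolve, and the postwarp is approximated by interpolating the inverse rectifying homographies. The function name `morph_views` and the matched point arrays `pts1`/`pts2` are assumptions, not the paper's notation.

```python
# Hedged sketch of uncalibrated view morphing with OpenCV.
# Feature matching, disparity interpolation, and hole filling
# are simplified away; this is not the paper's exact method.
import cv2
import numpy as np

def morph_views(img1, img2, pts1, pts2, s=0.5):
    h, w = img1.shape[:2]
    # Fundamental matrix from matched points (pts*: Nx2 float arrays).
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    # Rectifying homographies, no camera calibration needed.
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    r1 = cv2.warpPerspective(img1, H1, (w, h))
    r2 = cv2.warpPerspective(img2, H2, (w, h))
    # Placeholder for disparity-guided interpolation: cross-dissolve.
    blend = cv2.addWeighted(r1, 1.0 - s, r2, s, 0)
    # Inverse mapping back from the rectified plane; interpolating the
    # inverse homographies is a rough stand-in for postwarping.
    H_inv = (1.0 - s) * np.linalg.inv(H1) + s * np.linalg.inv(H2)
    return cv2.warpPerspective(blend, H_inv, (w, h))
```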

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim, Sehwan; Woo, Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.3 s.303 / pp.39-52 / 2005
  • In this paper, a registration method is presented for registering partial 3D point clouds, acquired from a multi-view camera, to reconstruct an indoor environment in 3D. In general, conventional registration methods have high computational complexity and require much time for registration. Moreover, they are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined using a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired from two views are projected onto the same image plane, and two-step integer mapping is applied so that a modified KLT (Kanade-Lucas-Tomasi) tracker can find correspondences. Fine registration is then carried out by minimizing distance errors within an adaptive search range. Finally, we compute a final color from the colors of corresponding points and reconstruct an indoor environment by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity, by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
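The core idea, searching for correspondences on a 2D image plane instead of in 3D, can be sketched as below. The pinhole intrinsics, the initial guess `(R, t)`, and all names are assumptions; the modified KLT matching, integer mapping, and adaptive search range are omitted.

```python
# Sketch of the projection step that makes correspondence search 2D:
# project each view's 3D points onto a shared image plane with a
# pinhole model. Intrinsics below are assumed, not from the paper.
import numpy as np

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(points, R=np.eye(3), t=np.zeros(3)):
    """Project Nx3 camera-space points to Nx2 pixel coordinates."""
    cam = points @ R.T + t        # rigid transform into the view
    uv = cam @ K.T                # apply intrinsics
    return uv[:, :2] / uv[:, 2:3] # perspective divide

pts = np.random.rand(100, 3) + [0, 0, 2]            # synthetic points, view A
pix_a = project(pts)
pix_b = project(pts, t=np.array([0.05, 0.0, 0.0]))  # view B, initial guess
residual = np.linalg.norm(pix_a - pix_b, axis=1)    # 2D distance errors to minimize
```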

Automatic Building Extraction Using LIDAR and Aerial Image (LIDAR 데이터와 수치항공사진을 이용한 건물 자동추출)

  • Jeong, Jae-Wook; Jang, Hwi-Jeong; Kim, Yu-Seok; Cho, Woo-Sug
    • Journal of Korean Society for Geospatial Information Science / v.13 no.3 s.33 / pp.59-67 / 2005
  • Building information is a primary source in many applications such as mapping, telecommunication, car navigation and virtual city modeling. While aerial CCD images captured by a passive sensor (digital camera) provide horizontal positioning with high accuracy, they are difficult to process automatically because of inherent properties such as perspective projection and occlusion. On the other hand, a LIDAR system offers 3D information about each surface rapidly and accurately, in the form of irregularly distributed point clouds; in contrast to optical images, however, it is much harder to obtain semantic information from it, such as building boundaries and object segmentation. Photogrammetry and LIDAR thus each have major advantages and drawbacks for reconstructing the earth's surface. The purpose of this investigation is to obtain spatial information of 3D buildings automatically by fusing LIDAR data with aerial CCD images. The experimental results show that most complex buildings are extracted efficiently by the proposed method, indicating that fusing LIDAR data with aerial CCD imagery improves the feasibility of automatic building detection and extraction.
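As a rough illustration of one common step in LIDAR/image fusion (not necessarily this paper's procedure), the sketch below rasterizes irregular LIDAR returns into a height grid and flags cells rising above the ground as building candidates, which could then be verified against the aerial CCD image. The cell size, ground height, and 3 m threshold are assumptions.

```python
# Hedged sketch: irregular LIDAR points -> simple digital surface
# model -> candidate building mask. All parameters are assumed.
import numpy as np

def building_mask(points, cell=1.0, ground=0.0, min_height=3.0):
    """points: Nx3 array of (x, y, z) LIDAR returns."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = ((xy - origin) / cell).astype(int)   # grid cell per point
    shape = idx.max(axis=0) + 1
    dsm = np.full(shape, -np.inf)
    # Keep the highest return per cell (a simple surface model).
    np.maximum.at(dsm, (idx[:, 0], idx[:, 1]), points[:, 2])
    return dsm - ground > min_height   # True where a tall object stands

pts = np.random.rand(1000, 3) * [100, 100, 10]  # synthetic LIDAR returns
mask = building_mask(pts)  # candidates to verify against the CCD image
```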


Comparison Among Sensor Modeling Methods in High-Resolution Satellite Imagery (고해상도 위성영상의 센서모형과 방법 비교)

  • Kim, Eui Myoung; Lee, Suk Kun
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.6D / pp.1025-1032 / 2006
  • Sensor modeling of high-resolution satellites is a prerequisite for mapping and GIS applications. Sensor models, which describe the geometric relationship between scene and object, fall into two main categories: rigorous and approximate. A rigorous model is based on the actual geometry of the image formation process, involving the internal and external characteristics of the sensor. Approximate models, by contrast, require neither a comprehensive understanding of the imaging geometry nor the internal and external characteristics of the imaging sensor, which has attracted great interest within the photogrammetric community. This paper compares the rigorous model with various approximate sensor models that have been used to determine three-dimensional positions, and recommends appropriate sensor models according to the intended use of the satellite imagery. In a case study using IKONOS satellite scenes, the rigorous and approximate sensor models were compared and evaluated for positional accuracy as a function of the number of available ground control points. The bias-compensated RFM (Rational Function Model) turned out to be the best among the compared approximate sensor models, and both the modified parallel projection and the parallel-perspective models could be established with a small number of control points. The affine transformation, another of the approximate sensor models, can also be used to determine the planimetric position of high-resolution satellite imagery and to perform image registration between scenes.
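The affine transformation named at the end, the simplest of the approximate sensor models, can be sketched as a least-squares fit from ground control points: eight parameters map object space (X, Y, Z) to image space (row, col), so at least four GCPs are required. The data and names below are synthetic assumptions, not the paper's values.

```python
# Minimal sketch of a 3D affine sensor model fitted by least squares.
# Eight parameters (a 4x2 matrix over [X Y Z 1]) map ground to image.
import numpy as np

def fit_affine(obj_xyz, img_rc):
    """obj_xyz: Nx3 ground coordinates; img_rc: Nx2 image coordinates."""
    A = np.hstack([obj_xyz, np.ones((len(obj_xyz), 1))])  # [X Y Z 1]
    params, *_ = np.linalg.lstsq(A, img_rc, rcond=None)   # 4x2 parameters
    return params

def apply_affine(params, obj_xyz):
    A = np.hstack([obj_xyz, np.ones((len(obj_xyz), 1))])
    return A @ params

# Six synthetic GCPs generated from an assumed "true" affine model:
gcp_xyz = np.random.rand(6, 3) * [1000, 1000, 100]
true = np.array([[0.01, 0.002], [-0.003, 0.012], [0.001, -0.004], [5.0, 8.0]])
gcp_rc = np.hstack([gcp_xyz, np.ones((6, 1))]) @ true
params = fit_affine(gcp_xyz, gcp_rc)
rmse = np.sqrt(np.mean((apply_affine(params, gcp_xyz) - gcp_rc) ** 2))
```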