• Title/Summary/Keyword: 3차원 워핑 (3-D warping)


High-quality 3-D Video Generation using Scale Space (계위 공간을 이용한 고품질 3차원 비디오 생성 방법 -다단계 계위공간 개념을 이용해 깊이맵의 경계영역을 정제하는 고화질 복합형 카메라 시스템과 고품질 3차원 스캐너를 결합하여 고품질 깊이맵을 생성하는 방법-)

  • Lee, Eun-Kyung;Jung, Young-Ki;Ho, Yo-Sung
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.620-624 / 2009
  • In this paper, we present a new camera system combining a high-quality 3-D scanner and a hybrid camera system to generate multiview video-plus-depth. In order to obtain the 3-D video using the hybrid camera system and the 3-D scanner, we first obtain depth information for the background region from the 3-D scanner. Then, we get the depth map for the foreground area from the hybrid camera system. Initial depths for each view image are estimated by performing 3-D warping with this depth information. Thereafter, multiview depth estimation using the initial depths is carried out to obtain an initial disparity map for each view. We correct the initial disparity map using a belief propagation algorithm so that we can generate a high-quality multiview disparity map. Finally, we refine the depths of the foreground boundary using extracted edge information. Experimental results show that the proposed depth map generation method produces a 3-D video with more accurate multiview depths and supports more natural 3-D views than previous works.

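The 3-D warping step described above (projecting known depth into each camera view to obtain initial depths) can be sketched as follows. This is a minimal illustration assuming pinhole cameras with known intrinsics `K_ref`, `K_tgt` and a relative pose `R`, `t`; it is not the authors' implementation.

```python
import numpy as np

def warp_depth_to_view(depth_ref, K_ref, K_tgt, R, t):
    """Forward 3-D warping: project the reference depth map into a target view.

    depth_ref : (H, W) depth in the reference camera (0 = unknown)
    K_ref, K_tgt : (3, 3) camera intrinsic matrices
    R, t : rotation (3, 3) and translation (3,) from reference to target camera
    Returns an (H, W) depth map in the target view (0 where nothing projects).
    """
    H, W = depth_ref.shape
    v, u = np.mgrid[0:H, 0:W]
    z = depth_ref.ravel()
    valid = z > 0

    # Back-project reference pixels to 3-D points in the reference camera frame.
    pix = np.stack([u.ravel(), v.ravel(), np.ones(H * W)], axis=0)
    pts_ref = np.linalg.inv(K_ref) @ pix * z

    # Transform into the target camera frame and project.
    pts_tgt = R @ pts_ref[:, valid] + t[:, None]
    proj = K_tgt @ pts_tgt
    u_t = np.round(proj[0] / proj[2]).astype(int)
    v_t = np.round(proj[1] / proj[2]).astype(int)
    z_t = proj[2]

    # Keep projections inside the image; resolve collisions with a z-buffer.
    inside = (u_t >= 0) & (u_t < W) & (v_t >= 0) & (v_t < H) & (z_t > 0)
    depth_tgt = np.full((H, W), np.inf)
    np.minimum.at(depth_tgt, (v_t[inside], u_t[inside]), z_t[inside])
    depth_tgt[np.isinf(depth_tgt)] = 0.0
    return depth_tgt
```

Pixels visible only in the target view come out as holes (zeros) in the warped map; filling such disocclusions is the subject of several of the entries below.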

Consider the directional hole filling method for virtual view point synthesis (가상 시점 영상 합성을 위한 방향성 고려 홀 채움 방법)

  • Mun, Ji Hun;Ho, Yo Sung
    • Smart Media Journal / v.3 no.4 / pp.28-34 / 2014
  • Recently, the depth-image-based rendering (DIBR) method has been widely used in the 3D imaging field. A virtual view image is created from a known view and its associated depth map to synthesize a viewpoint that was not captured by a camera. However, disocclusion areas occur because the virtual viewpoint is created by depth-image-based 3D warping. To remove such disocclusion regions, many hole filling methods have been proposed, including constant color region searching, horizontal interpolation, horizontal extrapolation, and variational inpainting. However, these methods cause problems: different types of annoying artifacts appear when filling holes in textured regions. In this paper, to solve these problems, a multi-directional extrapolation method is newly proposed to improve hole filling performance. The proposed method is effective when filling holes over complex textured background regions. The direction-aware hole filling method uses neighboring texture pixel values when estimating each hole pixel value. Experiments confirm that the proposed method fills the hole regions generated by virtual view synthesis more effectively.
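
A toy sketch of direction-aware hole filling in the spirit described above: for each hole pixel, the nearest known pixel is sampled along several directions and the candidates are blended, favoring background (larger-depth) samples. The weighting and parameters are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Eight search directions (dy, dx): horizontal, vertical, and diagonals.
DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def directional_fill(color, depth, hole_mask, max_steps=50):
    """Fill hole pixels by extrapolating along several directions.

    For each hole pixel, the nearest known pixel in each direction is found and
    the candidates are blended with inverse-distance weights, biased toward
    background (larger depth) samples so foreground colors do not bleed in.
    """
    H, W, _ = color.shape
    out = color.copy()
    for y, x in zip(*np.nonzero(hole_mask)):
        samples, weights = [], []
        for dy, dx in DIRECTIONS:
            for step in range(1, max_steps):
                yy, xx = y + dy * step, x + dx * step
                if not (0 <= yy < H and 0 <= xx < W):
                    break
                if not hole_mask[yy, xx]:
                    # Weight by inverse distance, favoring larger depth (background).
                    samples.append(out[yy, xx].astype(float))
                    weights.append((1.0 + depth[yy, xx]) / step)
                    break
        if samples:
            w = np.asarray(weights)[:, None]
            out[y, x] = (np.sum(np.asarray(samples) * w, axis=0) / w.sum()).astype(color.dtype)
    return out
```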

A Study on the Synthesis of Facial Poses based on Warping (워핑 기법에 의한 얼굴의 포즈 합성에 관한 연구)

  • 오승택;서준원;전병환
    • Proceedings of the Korean Information Science Society Conference / 2001.04b / pp.499-501 / 2001
  • In this paper, for the stereoscopic facial representation that is at the core of implementing a realistic avatar, we propose a method of realizing IBR (Image Based Rendering) using an improved mesh warp algorithm that allows overlapping meshes without using […] geometric information. In place of a 3-D model, we use meshes created for the frontal, left/right half-profile, and left/right profile face images of a person. For the frontal face image to be synthesized, only the frontal mesh is created, while the half-profile and profile meshes are derived from a standard mesh. To evaluate the performance of facial pose synthesis, we measured normalized position errors of major feature points between real pose images of a face rotating horizontally and the synthesized pose images; on average, the position error was only about 5% of the distance from the center of the eyes to the mouth.

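Mesh-based warping of a face image toward a new pose can be approximated with a piecewise-affine warp over the mesh vertices. The sketch below uses scikit-image and is a generic stand-in; it does not implement the paper's improved overlapping-mesh warp.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def mesh_warp(image, src_mesh, dst_mesh):
    """Warp `image` so that control points move from src_mesh to dst_mesh.

    src_mesh, dst_mesh : (N, 2) arrays of (x, y) mesh vertex positions.
    A piecewise-affine transform triangulates the control points and applies
    an affine map inside each triangle, a common stand-in for a two-pass
    mesh warp.
    """
    tform = PiecewiseAffineTransform()
    # estimate(dst, src) builds the mapping from output coordinates back to
    # input coordinates, which is what skimage.transform.warp expects.
    tform.estimate(dst_mesh, src_mesh)
    return warp(image, tform)
```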

A Moving Synchronization Technique for Virtual Target Overlay (가상표적 전시를 위한 이동 동기화 기법)

  • Kim Gye-Young;Jang Seok-Woo
    • Journal of Internet Computing and Services / v.7 no.4 / pp.45-55 / 2006
  • This paper proposes a virtual target overlay technique for a realistic training simulation which projects a virtual target onto ground-based CCD images according to an appointed scenario. The method creates a realistic 3D model for instructors by using high-resolution GeoTIFF (Geographic Tag Image File Format) satellite images and DTED (Digital Terrain Elevation Data), and it extracts road areas from the given CCD images for both instructors and trainees. Since there are large differences in observation position, resolution, and scale between satellite images and ground-based sensor images, feature-based matching is difficult. Hence, we propose a moving synchronization technique that projects the targets onto sensor images according to the moving paths marked on the 3D satellite images. Experimental results with satellite and sensor images of Daejeon show the effectiveness of the proposed algorithm.

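Building the terrain model over which the satellite image is draped can be sketched as below, assuming the DTED elevation grid is already loaded as a NumPy array and co-registered with the GeoTIFF image; this is purely illustrative, not the authors' modeling pipeline.

```python
import numpy as np

def terrain_vertices(elevation, post_spacing_m):
    """Turn a DTED-style elevation grid into a 3-D vertex grid.

    elevation : (H, W) terrain heights in metres
    post_spacing_m : ground distance between neighbouring grid posts
    Returns (H, W, 3) vertices (x, y, z) in a local metric frame; a satellite
    image of the same extent can then be draped over the grid as a texture.
    """
    H, W = elevation.shape
    y, x = np.mgrid[0:H, 0:W].astype(float) * post_spacing_m
    return np.dstack([x, y, elevation])

def grid_triangles(H, W):
    """Triangle index list for the vertex grid (two triangles per quad)."""
    idx = np.arange(H * W).reshape(H, W)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    return np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
```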

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.36-42 / 2012
  • Recently, smart applications such as smart phones and smart TVs have become a hot issue in IT consumer markets. In particular, the smart TV provides 3D video services; hence, efficient coding methods for 3D video data are required. Three-dimensional (3D) video involves stereoscopic or multi-view images to provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints of the multi-view video is limited, 3D display devices should generate arbitrary viewpoint images using available adjacent view images. In this paper, after we explain a view synthesis method briefly, we propose a new algorithm to compensate view synthesis errors around object boundaries. We describe a 3D warping technique exploiting the depth map for viewpoint shifting and a hole filling method using multi-view images. Then, we propose an algorithm to remove boundary noises that are generated due to mismatches of object edges in the color and depth images. The proposed method reduces annoying boundary noises near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.

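The boundary-noise compensation idea (replacing erroneous textures near object edges with textures from the other reference view) might be sketched as follows, assuming the two warped reference views and their hole masks are already available; the band width and masking rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def replace_boundary_noise(synth, holes, other_view, other_holes, band=3):
    """Replace texture in a thin band around disocclusion holes.

    Boundary noise typically appears just outside holes where color and depth
    edges are misaligned, so pixels within `band` pixels of a hole are taken
    from the other warped reference view when that view has valid texture there.
    """
    near_hole = ndimage.binary_dilation(holes, iterations=band) & ~holes
    use_other = near_hole & ~other_holes
    out = synth.copy()
    out[use_other] = other_view[use_other]
    return out
```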

H.264 Encoding Technique of Multi-view Video expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 비디오에 대한 H.264 부호화 기술)

  • Shin, Jong-Hong;Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.2 / pp.43-51 / 2014
  • Because multi-view video including depth images involves a huge amount of data, a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation method for multi-view video data; it builds a single data structure by synthesizing the multi-view color and depth images. As an efficient way to compress such content, this paper suggests using the layered depth image representation and applying video compression encoding based on 3D warping. We propose an enhanced compression method that combines the layered depth image representation with H.264/AVC video coding technology. Experimental results confirm high compression performance and good quality of the reconstructed images.
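
A layered depth image stores, at each pixel of a reference viewpoint, a list of (depth, color) samples gathered from all views after 3-D warping. Below is a minimal toy construction, assuming the per-view warping into the reference viewpoint has already been done; it is not the paper's encoder.

```python
from collections import defaultdict

def build_ldi(warped_views):
    """Build a toy layered depth image (LDI).

    warped_views : iterable of (color, depth) image pairs already 3-D-warped
                   into the reference viewpoint; depth == 0 marks empty pixels.
    Returns a dict mapping (row, col) -> list of (depth, color) layers sorted
    front-to-back, with near-duplicate depths merged.
    """
    ldi = defaultdict(list)
    for color, depth in warped_views:
        h, w = depth.shape
        for y in range(h):
            for x in range(w):
                z = float(depth[y, x])
                if z <= 0:
                    continue
                layers = ldi[(y, x)]
                # Merge samples closer than a small depth tolerance.
                if not any(abs(z - lz) < 1e-3 for lz, _ in layers):
                    layers.append((z, color[y, x].copy()))
    for layers in ldi.values():
        layers.sort(key=lambda s: s[0])
    return ldi
```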

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, the virtual view generation method using depth data has been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view image. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system using one TOF camera at the center and two high-resolution cameras at both sides. Since we need depth data for both color cameras, we obtain the two views' depth data from the center view using the 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. In order to reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generated 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed capturing system captured two viewpoints' color and depth videos in real time and generated 10 additional views at 7 fps.
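
The joint bilateral filtering step, which smooths the warped depth with weights driven by the high-resolution color image so that depth edges follow color edges, could look roughly like the unoptimized sketch below; parameter names and values are assumptions, not the authors' settings.

```python
import numpy as np

def joint_bilateral_depth(depth, color, radius=5, sigma_s=3.0, sigma_r=10.0):
    """Joint bilateral filter: smooth `depth` guided by `color`.

    Spatial weights use pixel distance (sigma_s); range weights use intensity
    differences in the guidance color image (sigma_r), so filtering does not
    cross strong color edges. Pixels with depth == 0 (holes) are ignored.
    """
    H, W = depth.shape
    guide = color.astype(float).mean(axis=2)     # grey-level guidance image
    out = np.zeros_like(depth, dtype=float)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            d = depth[y0:y1, x0:x1].astype(float)
            g = guide[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w = (np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
                 * np.exp(-((g - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
                 * (d > 0))
            if w.sum() > 0:
                out[y, x] = (w * d).sum() / w.sum()
    return out
```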

Generation of ROI Enhanced High-resolution Depth Maps in Hybrid Camera System (복합형 카메라 시스템에서 관심영역이 향상된 고해상도 깊이맵 생성 방법)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.13 no.5 / pp.596-601 / 2008
  • In this paper, we propose a new scheme to generate region-of-interest (ROI) enhanced depth maps in a hybrid camera system, which is composed of a low-resolution depth camera and a high-resolution stereoscopic camera. The proposed method creates an ROI depth map for the left image by carrying out a three-dimensional (3-D) warping operation on the depth information obtained from the depth camera. Then, we generate a background depth map for the left image by applying a stereo matching algorithm to the left and right images captured by the stereoscopic camera. Finally, we merge the ROI depth map with the background one to create the final depth map. The proposed method provides higher-quality depth information for the ROI than previous methods.
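
Merging the warped ROI depth from the depth camera with the stereo-matched background depth can be sketched as below, assuming both maps are already aligned to the left image; the simple "depth-camera-wins" rule is an illustrative assumption rather than the paper's merging step.

```python
import numpy as np

def merge_depth_maps(roi_depth, background_depth):
    """Combine an ROI depth map with a background depth map.

    roi_depth        : (H, W) depth warped from the depth camera; 0 outside the ROI
    background_depth : (H, W) depth from stereo matching of the left/right pair
    Wherever the depth camera provides a measurement it is kept, since it is
    assumed more reliable; elsewhere the stereo estimate fills in the background.
    """
    merged = background_depth.astype(float).copy()
    roi = roi_depth > 0
    merged[roi] = roi_depth[roi]
    return merged
```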

View Morphing for Generation of In-between Scenes from Un-calibrated Images (비보정 (un-calibrated) 영상으로부터 중간영상 생성을 위한 뷰 몰핑)

  • Song Jin-Young;Hwang Yong-Ho;Hong Hyun-Ki
    • Journal of KIISE:Computer Systems and Theory / v.32 no.1 / pp.1-8 / 2005
  • Image morphing, which generates 2D transitions between images, may fail to express even simple 3D transformations. In addition, the previous view morphing method requires control points for postwarping and is strongly affected by self-occlusion. This paper presents a new morphing algorithm that can automatically generate in-between scenes from un-calibrated images. Our algorithm rectifies the input images based on the fundamental matrix, which is followed by linear interpolation with a bilinear disparity map. Finally, we generate in-between views by inverse mapping of the homography between the rectified images. The proposed method may be applied to photographs and drawings, because neither knowledge of 3D shape nor camera calibration, which is generally a complex process, is required. The generated in-between views can be used in various application areas such as virtual environment simulation systems and image communication.
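
The rectify, interpolate, and postwarp pipeline can be outlined with OpenCV as below, with the matched points and fundamental matrix assumed given. The simple blending and homography interpolation stand in for the paper's disparity-based interpolation and are only illustrative.

```python
import cv2
import numpy as np

def morph_views(img1, img2, pts1, pts2, F, alpha=0.5):
    """Sketch of view morphing between two un-calibrated images.

    pts1, pts2 : (N, 2) float arrays of matched points, F : fundamental matrix.
    1) rectify both images with homographies derived from the fundamental matrix,
    2) blend the rectified images (stand-in for disparity-based interpolation),
    3) postwarp with the inverse of an interpolated homography.
    """
    h, w = img1.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    if not ok:
        raise RuntimeError("rectification failed")
    r1 = cv2.warpPerspective(img1, H1, (w, h))
    r2 = cv2.warpPerspective(img2, H2, (w, h))

    # Placeholder for per-pixel disparity interpolation along rectified rows.
    blended = cv2.addWeighted(r1, 1.0 - alpha, r2, alpha, 0)

    # Interpolate the rectifying homographies and map back (postwarp).
    H_mid = (1.0 - alpha) * H1 + alpha * H2
    return cv2.warpPerspective(blended, np.linalg.inv(H_mid), (w, h))
```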

Virtual Target Overlay Technique by Matching 3D Satellite Image and Sensor Image (3차원 위성영상과 센서영상의 정합에 의한 가상표적 Overlay 기법)

  • Cha, Jeong-Hee;Jang, Hyo-Jong;Park, Yong-Woon;Kim, Gye-Young;Choi, Hyung-Il
    • The KIPS Transactions:PartD / v.11D no.6 / pp.1259-1268 / 2004
  • To organize training for actual combat in a limited training area, realistic training simulation that reflects various battle conditions is essential. In this paper, we propose a virtual target overlay technique which does not use a virtual image, but projects a virtual target onto ground-based CCD images according to an appointed scenario for a realistic training simulation. In the proposed method, we create a realistic 3D model (for an instructor) by using a high-resolution Geographic Tag Image File Format (GeoTIFF) satellite image and Digital Terrain Elevation Data (DTED), and extract the road area from a given CCD image (for both an instructor and a trainee). Satellite images and ground-based sensor images differ greatly in observation position, resolution, and scale, which makes feature-based matching difficult. Hence, we propose a moving synchronization technique that projects the target onto the sensor image according to the moving path marked on the 3D satellite image, by applying the Thin-Plate Spline (TPS) interpolation function, an image warping function, to the two given sets of corresponding control point pairs. For the experiments, we employed two Pentium 4 1.8 GHz personal computers equipped with 512 MB of RAM, together with satellite and sensor images of the Daejeon area. The experimental results demonstrate the effectiveness of the proposed algorithm.
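
The TPS mapping estimated from corresponding control-point pairs, used to transfer the moving path marked on the 3-D satellite view into sensor-image coordinates, can be sketched with SciPy's thin-plate-spline RBF interpolator; this is a stand-in under stated assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_mapping(src_pts, dst_pts):
    """Fit a 2-D thin-plate-spline map from src_pts to dst_pts.

    src_pts, dst_pts : (N, 2) corresponding control points, e.g. points picked
    on the 3-D satellite view and their matches in the ground sensor image.
    Returns a function mapping (M, 2) source coordinates to destination ones.
    """
    tps = RBFInterpolator(src_pts, dst_pts, kernel="thin_plate_spline")
    return lambda pts: tps(np.atleast_2d(pts))

# Hypothetical usage: transfer a target's waypoint path into sensor coordinates.
# src = np.array([...]); dst = np.array([...])   # control point pairs
# warp = tps_mapping(src, dst)
# sensor_path = warp(path_points_on_satellite_view)
```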