• Title/Summary/Keyword: Image-Based Rendering


Camera Identification of DIBR-based Stereoscopic Image using Sensor Pattern Noise (센서패턴잡음을 이용한 DIBR 기반 입체영상의 카메라 판별)

  • Lee, Jun-Hee
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.19 no.1
    • /
    • pp.66-75
    • /
    • 2016
  • Stereoscopic images generated by depth image-based rendering (DIBR) for surveillance robots and cameras are appropriate for low-bandwidth networks. The image is very important data for the decision-making of a commander, and thus its integrity has to be guaranteed. One of the methods used to detect manipulation is to check whether the stereoscopic image was taken by the original camera. Sensor pattern noise (SPN), widely used for camera identification, cannot be directly applied to a stereoscopic image due to the stereo warping in DIBR. To solve this problem, we find a shifted object in the stereoscopic image and relocate the object to its original location in the center image. Then the similarity between the SPNs extracted from the stereoscopic image and from the original camera is measured only for the object area. Thus we can identify the camera that was used as the source.
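The SPN matching step described above boils down to comparing noise residuals by normalized correlation. The following is a minimal sketch of that idea, with a simple local-mean filter standing in for the wavelet denoiser usually used in SPN work; all names and the synthetic data are illustrative, not the paper's implementation.

```python
import numpy as np

def noise_residual(img, k=3):
    """Approximate sensor pattern noise as the image minus a local-mean
    denoised version (a stand-in for the usual wavelet denoiser)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    denoised = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            denoised += padded[dy:dy + h, dx:dx + w]
    denoised /= k * k
    return img - denoised

def ncc(a, b):
    """Normalized cross-correlation between two noise residuals."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# A residual from an image taken with the camera correlates with the
# camera's reference SPN; an unrelated residual correlates near zero.
rng = np.random.default_rng(0)
cam_noise = rng.normal(size=(64, 64))          # the camera's fixed pattern
scene = rng.normal(size=(64, 64)) * 5.0        # scene content
r_img = noise_residual(scene + cam_noise)      # residual of a test image
r_ref = noise_residual(cam_noise)              # reference SPN estimate
score_match = ncc(r_img, r_ref)
score_other = ncc(r_img, noise_residual(rng.normal(size=(64, 64))))
```

In the paper's setting this correlation would be computed only over the relocated object area rather than the whole frame.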

Research on Reconstruction Technology of Biofilm Surface Based on Image Stacking

  • Zhao, Yuyang;Tao, Xueheng;Lee, Eung-Joo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1472-1480
    • /
    • 2021
  • The image stacking technique is one of the key techniques for complex surface reconstruction. The process includes sample collection, image processing, algorithm editing, surface reconstruction, and finally reaching reliable conclusions. Since this experiment uses a laser scanning confocal microscope to collect the original contour information of the sample, the relevant principle and operation method of the laser scanning confocal microscope are briefly introduced. After that, the original image is collected and processed, and the data are expanded by an interpolation method. Meanwhile, several methods of surface reconstruction are listed. After comparing the advantages and disadvantages of each method, one-dimensional interpolation and volume rendering are finally used to reconstruct the 3D model. The experimental results show that the final 3D surface model is more consistent with the appearance information of the original samples. At the same time, the algorithm is simple, easy to understand, and highly operable, and can meet the requirements of surface reconstruction for different types of samples.
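The one-dimensional interpolation step used to densify the contour data before stacking can be sketched in a few lines; the sample heights below are made up for illustration and do not come from the paper.

```python
import numpy as np

# Sparse height samples along one scan line of the confocal profile
# (illustrative values, not measured data).
x_sparse = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
z_sparse = np.array([0.0, 0.8, 1.5, 1.1, 0.3])

# Densify by one-dimensional linear interpolation; repeating this per
# scan line and stacking the lines yields the surface grid that the
# volume-rendering stage then visualizes.
x_dense = np.linspace(0.0, 4.0, 17)
z_dense = np.interp(x_dense, x_sparse, z_sparse)
```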

A Study on Production of Optimum Profile Considered Color Rendering in Input Device (입력 장치에서 컬러 랜더링을 고려한 최적의 프로파일 제작에 관한 연구)

  • Koo, Chul-Whoi;Cho, Ga-Ram;Lee, Sung-Hyung
    • Journal of the Korean Graphic Arts Communication Society
    • /
    • v.28 no.2
    • /
    • pp.117-128
    • /
    • 2010
  • Advancements in digital imaging have put high-quality digital cameras into the hands of many image professionals and consumers alike. High-quality digital camera images originate as raw files, to which a set of color rendering operations is applied to produce good images. With color rendering, the raw file was converted to the Adobe RGB and sRGB color spaces. Color rendering can also incorporate factors such as white balance, contrast, and saturation. Therefore, in this paper we study the production of an optimum profile that considers color rendering in a digital camera. For the experiment, the test images were the Digital ColorChecker SG and ColorChecker DC targets, and the profiling tool was ProfileMaker 5.03. The results were analyzed by comparing gamuts in the $CIEL^*a^*b^*$ color space and calculating ${\Delta}E^*_{ab}$. The results were also analyzed in terms of different $CIEL^*a^*b^*$ color space quadrants based on lightness and chroma.
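The ${\Delta}E^*_{ab}$ metric used in this evaluation is the Euclidean distance between two CIELAB triples (the CIE76 definition); a minimal sketch with illustrative patch values:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 color difference Delta E*ab between two CIELAB triples
    (L*, a*, b*): the Euclidean distance between them."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Measured vs. profiled patch values (illustrative numbers only).
measured = (52.0, 10.0, -6.0)
profiled = (50.0, 12.0, -5.0)
diff = delta_e_ab(measured, profiled)  # sqrt(4 + 4 + 1) = 3.0
```

Averaging this difference over all target patches gives the profile-accuracy figure such studies typically report.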

A Fast Volume Rendering Algorithm for Virtual Endoscopy

  • Ra Jong Beom;Kim Sang Hun;Kwon Sung Min
    • Journal of Biomedical Engineering Research
    • /
    • v.26 no.1
    • /
    • pp.23-30
    • /
    • 2005
  • 3D virtual endoscopy has been used as an alternative non-invasive procedure for visualization of hollow organs. However, due to computational complexity, it is a time-consuming procedure. In this paper, we propose a fast volume rendering algorithm based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them as opaque or transparent. Then, in the first step, we perform ray casting only for sub-sampled pixels on the image plane and determine their pixel values and depth information. In the next step, by reducing the sub-sampling factor by half, we repeat ray casting for newly added pixels and determine their pixel values and depth information. Here, the previously obtained depth information is utilized to reduce the processing time. This step is performed recursively until a full-size rendering image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy, compared with the brute-force ray casting scheme. Using the proposed algorithm, interactive volume rendering becomes more realizable in a PC environment without any special hardware.
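The coarse-to-fine loop described above can be sketched as follows; this toy version only shows the stride-halving refinement and pixel reuse, leaving out the paper's block classification and depth-based acceleration. All names are hypothetical.

```python
import numpy as np

def render(width, height, cast_ray):
    """Coarse-to-fine rendering: cast rays on a sub-sampled grid first,
    then halve the stride and cast only the newly added pixels, so no
    pixel is ever cast twice."""
    image = np.full((height, width), np.nan)
    stride = 4
    casts = 0
    while stride >= 1:
        for y in range(0, height, stride):
            for x in range(0, width, stride):
                if np.isnan(image[y, x]):  # skip pixels done at a coarser level
                    image[y, x] = cast_ray(x, y)
                    casts += 1
        stride //= 2
    return image, casts

# A trivial "ray caster" for demonstration; a real one would march
# through the volume with perspective rays.
img, casts = render(8, 8, lambda x, y: x + y)
```

The speed-up in the paper comes from additionally bounding each refined ray's march by the depth found at the coarser level, which this sketch omits.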

MRBR-based JPEG2000 Codec for Stereoscopic Image Compression of 3-Dimensional Digital Cinema (3차원 디지털 시네마의 스테레오 영상 압축을 위한 MRBR기반의 JPEG2000 코덱)

  • Seo, Young-Ho;Sin, Wan-Soo;Choi, Hyun-Jun;Yoo, Ji-Sang;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.12
    • /
    • pp.2146-2152
    • /
    • 2008
  • In this paper, we propose a new JPEG2000 codec using a multiresolution-based rendering (MRBR) technique for video compression in 3-dimensional digital cinema. We introduce the discrete wavelet transform (DWT) for stereoscopic images and a stereo matching technique in the wavelet domain. The disparity is extracted using stereo matching and transmitted with the reference (left) image. Since the generated right image is degraded in the occluded regions, the residual image, generated from the difference between the original right image and the generated one, is transmitted at the same time. The disparity data are extracted using a dynamic programming method in the disparity domain. Since there is high correlation between the higher and lower subbands, we decrease the amount of calculation and enhance accuracy by restricting the search window and applying the disparity information generated from the higher subband.
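The disparity-plus-residual scheme above can be illustrated with a toy disparity warp: the left image predicts the right image, and only the prediction error is coded. The arrays and the one-pixel disparity below are hypothetical, not the paper's wavelet-domain data.

```python
import numpy as np

# Toy left (reference) image and a constant per-pixel horizontal disparity.
left = np.arange(16.0).reshape(4, 4)
disparity = np.ones((4, 4), dtype=int)   # each right pixel maps 1 column right in left

# Predict the right view by warping the left view with the disparity;
# pixels whose source falls outside the image are occlusion holes.
h, w = left.shape
predicted = np.zeros_like(left)
for y in range(h):
    for x in range(w):
        src = x + disparity[y, x]
        predicted[y, x] = left[y, src] if src < w else 0.0  # hole at the edge

# Pretend the true right view differs at one pixel; only that difference
# (the residual) needs to be transmitted alongside disparity + left image.
right = predicted.copy()
right[0, 0] += 2.0
residual = right - predicted
```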

Development of Mobile 3D Urban Landscape Authoring and Rendering System

  • Lee Ki-Won;Kim Seung-Yub
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.3
    • /
    • pp.221-228
    • /
    • 2006
  • In this study, an integrated 3D modeling and rendering system dealing with 3D urban landscape features such as terrain, buildings, roads, and user-defined geometric features was designed and implemented using the OpenGL ES (Embedded System) API for PDA mobile devices. In this system, the authoring functions are composed of several parts handling urban landscape features: vertex-based geometry modeling, editing and manipulating 3D landscape objects, generating geometrically complex features with attributes for 3D objects, and texture mapping of complex types using an image library. It is a feature-based system, linked with 3D geo-based spatial feature attributes. As for the rendering process, functions are provided for optimizing integrated multiple 3D landscape objects and for rendering texture-mapped 3D landscape objects. Through the active synchronization process among the desktop system, the OpenGL-based 3D visualization system, and the mobile system, 3D feature models can be transferred and disseminated through both systems. In this mobile 3D urban processing system, the main graphical user interface and core components are implemented under EVC 4.0 MFC and tested on PDAs running Windows Mobile and Pocket PC. It is expected that mobile 3D geo-spatial information systems supporting registration, modeling, and rendering functions can be effectively utilized for real-time 3D urban planning and 3D mobile mapping on site.

Image-Based Relighting - Luminance Mapping Based on Lighting Functions

  • Manabe, Tomohisa;Raytchev, Bisser;Tamaki, Toru;Kaneda, Kazufumi
    • International Journal of CAD/CAM
    • /
    • v.12 no.1
    • /
    • pp.38-47
    • /
    • 2012
  • The paper proposes a method for generating a sequence of images with a smooth transition of illumination from two input images with different lighting conditions. Our relighting approach is image-based, similar to light field rendering. We store the luminances (pixel RGB values) in "lighting functions" consisting of a couple of parameters related to normal vectors. Images with different light positions are rendered by interpolating the luminances retrieved from the lighting functions. The proposed method is a promising technique for many applications requiring a scene with a variety of lighting effects, such as movies, TV games, and so on.
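At its core, the relighting above interpolates pixel values between the two input lighting conditions. The sketch below collapses the paper's normal-vector-parameterized lighting functions into direct per-pixel blending, so it is a simplification under that assumption, not the paper's method.

```python
import numpy as np

def relight(img_a, img_b, t):
    """Blend pixel RGB values between two input lighting conditions.
    The paper interpolates through per-pixel 'lighting functions'
    parameterized by normal vectors; this sketch blends directly."""
    return (1.0 - t) * img_a + t * img_b

# Two toy 2x2 RGB images under different lights, and a 5-frame
# transition between them.
dark = np.zeros((2, 2, 3))
bright = np.full((2, 2, 3), 200.0)
frames = [relight(dark, bright, t) for t in np.linspace(0.0, 1.0, 5)]
```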

Real-Time Shadow Generation Using Image-Based Rendering Technique (영상기반 렌더링 기법을 이용한 실시간 그림자 생성)

  • Lee, Jung-Yeon;Im, In-Seong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.7 no.1
    • /
    • pp.27-35
    • /
    • 2001
  • Shadows are important elements in producing a realistic image. In rendering, generation of the exact shape and position of a shadow is crucial in providing the user with visual cues on the scene. While the shadow map technique quickly generates a shadow for a scene wherein objects and light sources are fixed, it slows down as they start to move. In this paper, we apply an image-based rendering technique to generate shadows in real time using graphics hardware. Due to the heavy storage requirement of a shadow map repository, we use a wavelet-based compression scheme for effective compression. Our method will be efficiently used in generating realistic scenes in many real-time applications such as 3D games and virtual reality systems.
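The shadow-map test that such repositories store and replay is a simple per-pixel depth comparison from the light's viewpoint; a minimal sketch with made-up depth values (the wavelet compression of the map repository is not shown):

```python
def in_shadow(depth_map, point_depth, px, py, bias=1e-3):
    """Classic shadow-map test: a point is shadowed when its depth as
    seen from the light exceeds the nearest depth stored at its texel
    (the bias avoids self-shadowing artifacts)."""
    return point_depth > depth_map[py][px] + bias

# A 2x2 depth map rendered from the light (smaller = closer to the light).
depth = [[0.2, 0.2],
         [0.2, 0.9]]
blocked = in_shadow(depth, 0.5, 0, 0)     # an occluder at 0.2 is nearer
lit = not in_shadow(depth, 0.5, 1, 1)     # nearest surface there is at 0.9
```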


Near-lossless Coding of Multiview Texture and Depth Information for Graphics Applications (그래픽스 응용을 위한 다시점 텍스처 및 깊이 정보의 근접 무손실 부호화)

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.1
    • /
    • pp.41-48
    • /
    • 2009
  • This paper introduces representation and coding schemes for multiview texture and depth data of complex three-dimensional scenes. We represent input color and depth images using compressed texture and depth map pairs. The proposed X-codec encodes them further to increase the compression ratio in a near-lossless way. Our system resolves two problems. First, rendering time and output visual quality depend on input image resolutions rather than scene complexity, since a depth image-based rendering technique is used. Second, the random access problem of conventional image-based rendering can be effectively solved using our image block-based compression schemes. Experimental results show that the proposed approach is useful for graphics applications because it provides multiview rendering, selective decoding, and scene manipulation functionalities.

GPU-based Image-space Collision Detection among Closed Objects (GPU를 이용한 이미지 공간 충돌 검사 기법)

  • Jang, Han-Young;Jeong, Taek-Sang;Han, Jung-Hyun
    • Journal of the HCI Society of Korea
    • /
    • v.1 no.1
    • /
    • pp.45-52
    • /
    • 2006
  • This paper presents an image-space algorithm for real-time collision detection that runs completely on the GPU. For a single object, or for multiple objects with no collision, front and back faces appear alternately along the view direction. This alternation is violated when objects collide. Based on these observations, the algorithm proposes a depth peeling method that renders only the minimal surfaces of objects, not the whole surfaces, to find collisions. The depth peeling method utilizes state-of-the-art GPU functionalities such as framebuffer objects, vertex buffer objects, and occlusion queries. Combining these functions, multi-pass rendering and context switches can be done with low overhead. Therefore the proposed approach requires fewer rendering passes and less rendering overhead than previous image-space collision detection methods. The algorithm can handle deformable and complex objects, and its precision is governed by the resolution of the render-target texture. The experimental results show the feasibility of GPU-based collision detection and its performance gain in real-time applications such as 3D games.
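The front/back alternation criterion above can be checked per ray as a parity test; this CPU sketch states the geometric idea only, whereas the paper evaluates it on the GPU via depth peeling and occlusion queries. The face lists below are illustrative.

```python
def collision_by_parity(rays):
    """For closed, non-colliding objects, faces along a view ray sorted
    by depth alternate front, back, front, back, ... A violation of
    this alternation signals interpenetrating objects."""
    for faces in rays:                     # each ray: list of (depth, is_front)
        expect_front = True
        for _depth, is_front in sorted(faces):
            if is_front != expect_front:
                return True                # alternation violated -> collision
            expect_front = not expect_front
    return False

# Two disjoint intervals along one ray: front, back, front, back.
ok_ray = [(0.1, True), (0.2, False), (0.5, True), (0.6, False)]
# Overlapping intervals: front, front, back, back -> collision.
bad_ray = [(0.1, True), (0.3, True), (0.4, False), (0.6, False)]
```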
