• Title/Summary/Keyword: 텍스쳐 정보 (texture information)


GPU-based Image-space Collision Detection among Closed Objects (GPU를 이용한 이미지 공간 충돌 검사 기법)

  • Jang, Han-Young;Jeong, Taek-Sang;Han, Jung-Hyun
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.45-52 / 2006
  • This paper presents an image-space algorithm for real-time collision detection that runs entirely on the GPU. For a single closed object, or for multiple objects that do not collide, front and back faces appear alternately along the view direction; this alternation is violated when objects collide. Based on this observation, the algorithm uses a depth peeling method that renders only the minimal surfaces of the objects, rather than their whole surfaces, to find collisions. The depth peeling method exploits state-of-the-art GPU functionalities such as framebuffer objects, vertex buffer objects, and occlusion queries. By combining these functions, multi-pass rendering and context switching can be done with low overhead, so the proposed approach requires fewer renderings and less rendering overhead than previous image-space collision detection methods. The algorithm can handle deformable and complex objects, and its precision is governed by the resolution of the render-target texture. The experimental results show the feasibility of GPU-based collision detection and its performance gain in real-time applications such as 3D games.

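The alternation test described above can be illustrated on the CPU with a few lines of Python. This is only a sketch of the idea, not the authors' GPU depth-peeling implementation; the per-pixel layer lists below are synthetic placeholders for what the GPU would produce per render-target pixel.

    # CPU illustration of the image-space alternation test (hypothetical data,
    # not the paper's GPU depth-peeling implementation).

    def pixel_collides(layers):
        """layers: list of (depth, is_front) tuples for one pixel,
        i.e. the surfaces peeled along the view ray."""
        ordered = sorted(layers, key=lambda s: s[0])   # near to far
        expected_front = True                          # closed, separated objects
        for _, is_front in ordered:                    # should alternate F, B, F, B, ...
            if is_front != expected_front:
                return True                            # alternation violated -> collision
            expected_front = not expected_front
        return False

    # Two separated closed objects along the ray: F, B, F, B -> no collision.
    print(pixel_collides([(0.2, True), (0.4, False), (0.6, True), (0.9, False)]))
    # Interpenetrating objects: F, F, B, B -> alternation violated -> collision.
    print(pixel_collides([(0.2, True), (0.3, True), (0.5, False), (0.7, False)]))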

MPEG-I RVS Software Speed-up for Real-time Application (실시간 렌더링을 위한 MPEG-I RVS 가속화 기법)

  • Ahn, Heejune;Lee, Myeong-jin
    • Journal of Broadcast Engineering / v.25 no.5 / pp.655-664 / 2020
  • Free viewpoint image synthesis is one of the important technologies in the MPEG-I (Immersive) standard. RVS (Reference View Synthesizer), developed and used by the MPEG group, is a DIBR (depth image-based rendering) program that generates an image at a virtual (intermediate) viewpoint from multiple input viewpoints. RVS uses a mesh-surface method based on computer graphics and outperforms previous pixel-based methods by 2.5 dB or more. Even though its OpenGL version provides a tenfold speed-up over the non-OpenGL version, it still runs at non-real-time speed, i.e., 0.75 fps on two 2K-resolution input images. In this paper, we analyze the internals of the RVS implementation and modify its structure, achieving a 34-fold speed-up and therefore real-time performance (22-26 fps), through three key improvements: 1) reuse of OpenGL buffers and texture objects, 2) parallelization of file I/O and OpenGL execution, and 3) parallelization of the GPU shader programs and buffer transfers.
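
The second of the three improvements, overlapping file I/O with rendering, follows the familiar producer/consumer pattern. The sketch below is only an assumption-laden illustration of that pattern: the loader and renderer bodies are simulated with sleeps rather than real RVS file reads or OpenGL calls.

    # Producer/consumer sketch of overlapping frame loading with rendering.
    # Timings and function bodies are placeholders, not RVS internals.
    import queue
    import threading
    import time

    frames = queue.Queue(maxsize=2)   # small queue = double buffering

    def loader(n_frames):
        for i in range(n_frames):
            time.sleep(0.03)          # simulated file I/O (e.g. reading texture + depth)
            frames.put(i)
        frames.put(None)              # sentinel: no more frames

    def renderer():
        while True:
            frame = frames.get()
            if frame is None:
                break
            time.sleep(0.02)          # simulated view synthesis / draw call
            print(f"synthesized view for frame {frame}")

    t = threading.Thread(target=loader, args=(5,))
    t.start()
    renderer()                        # I/O of frame i+1 overlaps rendering of frame i
    t.join()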

Piecewise Image Denoising with Multi-scale Block Region Detector based on Quadtree Structure (쿼드트리 기반의 다중 스케일 블록 영역 검출기를 통한 구간적 영상 잡음 제거 기법)

  • Lee, Jeehyun;Jeong, Jechang
    • Journal of Broadcast Engineering / v.20 no.4 / pp.521-532 / 2015
  • This paper presents a piecewise image denoising method with a multi-scale block region detector based on a quadtree structure for effective image restoration. The proposed method introduces a multi-scale block region detector (MBRD) that divides the pixels of a noisy image into three parts according to their regional characteristics: strong-variation regions, weak-variation regions, and flat regions. These regions are classified according to the total pixel variation between multi-scale blocks, and principal component analysis with local pixel grouping, bilateral filtering, and a structure-preserving image decomposition operator called relative total variation are applied to them, respectively. The performance of the proposed method is evaluated experimentally. The region detection results generated by the detector are well classified according to the characteristics of the regions, and the piecewise denoising provides a positive gain in PSNR. In the visual evaluation, details and edges are preserved efficiently in each region; the proposed method therefore effectively reduces noise and shows that restoring each region according to its characteristics improves denoising performance.
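
A much-reduced sketch of the region classification idea is given below: pixels are labeled flat, weak-variation, or strong-variation from block statistics measured at two scales. The block sizes and thresholds are illustrative guesses, not the paper's MBRD parameters, and the per-class denoising operators (LPG-PCA, bilateral filtering, relative total variation) are omitted.

    # Toy multi-scale block classifier: flat / weak / strong variation.
    # Block sizes and thresholds are illustrative, not the paper's settings.
    import numpy as np

    def block_std(img, size):
        h, w = img.shape
        out = np.zeros_like(img, dtype=float)
        for y in range(0, h, size):
            for x in range(0, w, size):
                out[y:y+size, x:x+size] = img[y:y+size, x:x+size].std()
        return out

    def classify(img, t_flat=5.0, t_strong=20.0):
        # combine variation measured on coarse and fine blocks
        variation = 0.5 * block_std(img, 16) + 0.5 * block_std(img, 4)
        labels = np.full(img.shape, 1)            # 1 = weak variation
        labels[variation < t_flat] = 0            # 0 = flat region
        labels[variation > t_strong] = 2          # 2 = strong variation (edges/texture)
        return labels

    rng = np.random.default_rng(0)
    noisy = rng.normal(128, 2, (64, 64))                  # mostly flat background
    noisy[16:32, 16:32] += rng.normal(0, 12, (16, 16))    # weakly textured patch
    noisy[:, 42:] += 60                                   # strong edge
    print(np.bincount(classify(noisy).ravel(), minlength=3))   # pixels per region class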

Development of Digital Leaf Authoring Tool for Virtual Landscape Production (가상 조경 생성을위한 디지털 잎 저작도구 개발)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.21 no.5 / pp.1-10 / 2015
  • This study proposes an authoring tool that can easily and intuitively generate the diverse digital leaves that compose a virtual landscape. The main system of the proposed authoring tool consists of a deformation method for the leaf blade contour based on image warping, procedural modeling of leaf veins, and a visualization method based on a mathematical model that expresses the color and brightness of leaves. First, the tool receives a leaf image as input and extracts the contour of the leaf blade. It then applies a leaf blade deformation method that can generate diverse blade shapes in an intuitive way using feature-based image warping. Based on the computed blade contour, the system implements a generalized procedural modeling method, suited to the authoring tool, that generates natural vein patterns appropriate for the blade shape. Finally, the system applies a visualization function that expresses the color and brightness of leaves and their changes over time using a mathematical model based on convolution sums of divisor functions. The tool also provides a texture support function so that the digital leaves generated with it can be used in a variety of three-dimensional digital content fields.
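
The visualization model cited at the end rests on convolution sums of divisor functions. The sketch below shows only that mathematical ingredient: it computes sigma(n), the convolution sum over k, and a hypothetical normalization into a 0-1 brightness value; the actual color/brightness mapping used by the authoring tool is not specified in the abstract.

    # Divisor function sigma(n) and its convolution sum, the mathematical
    # building block cited for the leaf color/brightness model. The mapping
    # to a brightness value below is a made-up normalization.

    def sigma(n):
        return sum(d for d in range(1, n + 1) if n % d == 0)

    def divisor_convolution(n):
        # sum_{k=1}^{n-1} sigma(k) * sigma(n - k)
        return sum(sigma(k) * sigma(n - k) for k in range(1, n))

    def brightness(day, period=30):
        # hypothetical periodic brightness driven by the convolution sum
        n = day % period + 2
        values = [divisor_convolution(m) for m in range(2, period + 2)]
        return divisor_convolution(n) / max(values)   # normalized to [0, 1]

    print([round(brightness(d), 3) for d in range(5)])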

Directional Interpolation of Lost Block Using Difference of DC values and Similarity of AC Coefficients (DC값 차이와 AC계수 유사성을 이용한 방향성 블록 보간)

  • Lee Hong Yub;Eom Il Kyu;Kim Yoo Shin
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.6C / pp.465-474 / 2005
  • In this paper, a directional reconstruction method for blocks lost in images transmitted over a noisy channel is presented. The DCT coefficients or pixel values in a lost block are recovered by linear interpolation from available neighboring blocks, which are adaptively selected by a directional measure composed of the DDC (difference of DC values between opposite blocks) and the SAC (similarity of AC coefficients between opposite blocks) around the lost block. The proposed directional recovery method is effective for strong edge and texture regions because it does not rely on the fixed four neighboring blocks but adaptively exploits varying neighboring blocks according to the directional information in the local image. We describe the new directional measure (CDS: combination of DDC and SAC) and use it to select the blocks employed to recover the lost block. The proposed method shows about 0.6 dB average PSNR improvement over conventional methods.
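
A simplified sketch of the directional selection is shown below: for each candidate direction, DDC is approximated by the difference of the block means (DC) of the two opposite neighbors and SAC by the correlation of their mean-removed content (AC), and the lost block is linearly interpolated from the best-scoring pair. The combination rule used here is a placeholder, not the paper's CDS formula.

    # Toy directional interpolation of a lost 8x8 block from opposite neighbors.
    # DDC ~ difference of block means, SAC ~ correlation of mean-removed blocks.
    # The combination rule below is a placeholder, not the paper's CDS formula.
    import numpy as np

    def cds_score(a, b):
        ddc = abs(a.mean() - b.mean())
        aa, bb = a - a.mean(), b - b.mean()
        sac = (aa * bb).sum() / (np.linalg.norm(aa) * np.linalg.norm(bb) + 1e-9)
        return sac - 0.05 * ddc        # higher = opposite blocks agree along this direction

    def recover(pairs):
        """pairs: {'vertical': (top, bottom), 'horizontal': (left, right), ...}"""
        best = max(pairs, key=lambda d: cds_score(*pairs[d]))
        a, b = pairs[best]
        return best, 0.5 * (a + b)     # linear interpolation of the chosen pair

    rng = np.random.default_rng(1)
    top = np.tile(np.linspace(0, 255, 8), (8, 1))      # top and bottom share a gradient
    bottom = top + rng.normal(0, 2, (8, 8))
    left, right = rng.uniform(0, 255, (8, 8)), rng.uniform(0, 255, (8, 8))
    direction, block = recover({'vertical': (top, bottom), 'horizontal': (left, right)})
    print(direction, block.shape)                      # 'vertical' is selected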

Face Tracking for Multi-view Display System (다시점 영상 시스템을 위한 얼굴 추적)

  • Han, Chung-Shin;Jang, Se-Hoon;Bae, Jin-Woo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.30 no.2C / pp.16-24 / 2005
  • In this paper, we propose a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometric transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue through different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face, initially detected from the camera image using the statistical characteristics of face colors and deformable templates, is tracked. As a result, we can provide the motion parallax cue by detecting and tracking the viewer's dominant face area even against a heterogeneous background, and can successfully display the synthesized sequences.
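
As an illustration of the statistical skin-color step only (deformable-template matching and the multi-view synthesis itself are outside a short sketch), the code below thresholds an image in YCbCr space and returns the centroid of the candidate face region, which could then be tracked from frame to frame. The Cb/Cr bounds are generic textbook values, not the statistics learned in the paper.

    # Skin-color face localization sketch (YCbCr thresholding + centroid).
    # The Cb/Cr ranges are generic textbook values, not the paper's statistics.
    import numpy as np

    def rgb_to_cbcr(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return cb, cr

    def face_centroid(rgb):
        cb, cr = rgb_to_cbcr(rgb.astype(float))
        mask = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)   # skin-like pixels
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean()          # track this point across frames

    frame = np.zeros((120, 160, 3), dtype=np.uint8)
    frame[40:80, 60:100] = (200, 150, 130)   # synthetic skin-colored patch
    print(face_centroid(frame))              # approx (79.5, 59.5)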

Image Discriminal Analysis for Detecting a Esophagitis (식도염 진단을 위한 영상 판별분석)

  • Seo K. W.;Lee C. W.;Kim W.;Lee S. Y.;Lee D. W.
    • Journal of Biomedical Engineering Research / v.25 no.6 / pp.545-550 / 2004
  • An image processing algorithm was developed and tested to detect abnormal regions, such as esophagitis, using the color and texture information in digital clinical endoscopic images by means of discriminant analysis. To develop the algorithm, the critical parameters for distinguishing between normal and abnormal regions in various images were identified from a large set of candidate parameters. Inflammation and ulceration, which are very important diagnostic indexes, were detected by the algorithm. The algorithm proved to be a reliable program for detecting abnormal regions in a test with 20 images, with success rates of 92.8% and 92.4% in the calibration and validation stages, respectively.
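
A compact sketch of the discriminant-analysis step is given below using a two-class Fisher linear discriminant on synthetic color/texture features; the feature definitions and the data are invented stand-ins for the endoscopic measurements, and the paper's calibration/validation protocol is not reproduced.

    # Fisher linear discriminant on synthetic color/texture features
    # (stand-ins for the endoscopic measurements; not the paper's data).
    import numpy as np

    rng = np.random.default_rng(2)
    # columns: mean redness, texture contrast (hypothetical features)
    normal = rng.normal([0.45, 0.20], 0.05, (40, 2))
    abnormal = rng.normal([0.65, 0.35], 0.05, (40, 2))     # esophagitis-like class

    def fisher_lda(x0, x1):
        m0, m1 = x0.mean(0), x1.mean(0)
        sw = np.cov(x0.T) * (len(x0) - 1) + np.cov(x1.T) * (len(x1) - 1)
        w = np.linalg.solve(sw, m1 - m0)                   # discriminant direction
        threshold = 0.5 * (x0 @ w).mean() + 0.5 * (x1 @ w).mean()
        return w, threshold

    w, t = fisher_lda(normal, abnormal)
    test = np.vstack([normal[:5], abnormal[:5]])
    pred = (test @ w > t).astype(int)                      # 1 = abnormal
    print(pred)                                            # expect [0 0 0 0 0 1 1 1 1 1]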

Compensation Method for Occluded-region of Arbitrary-view Image Synthesized from Multi-view Video (다시점 동영상에서 임의시점영상 생성을 위한 가려진 영역 보상기법)

  • Park, Se-Hwan;Song, Hyuk;Jang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Kim, Jin-Soo;Lee, Sang-Hun;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.12C / pp.1029-1038 / 2008
  • In this paper, we propose a method for arbitrary-view image generation from multi-view video, together with pre- and post-processing methods that compensate the occluded regions in the generated image. To generate an arbitrary-view image, camera geometry is used: the three-dimensional coordinates of image pixels are obtained from the depth information of the multi-view video and the parameters of the multi-view cameras, and by projecting these three-dimensional coordinates onto the two-dimensional image plane of another view, an arbitrary-view image can be reconstructed. However, the generated arbitrary-view image contains many uncovered regions. We therefore also propose a method for compensating these regions that considers the temporal redundancy and spatial direction of the image as well as errors in the acquired multi-view images and depth information. Test results show that we can obtain reliably synthesized view images, with an objective PSNR of more than 30 dB and a subjective DSCQS (double stimulus continuous quality scale) score of more than 3.5 points.
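
The core warping step, back-projecting a pixel with its depth through the source camera and reprojecting it into the target camera, can be summarized in a few lines. The intrinsic and extrinsic parameters below are hypothetical, and the occluded-region compensation itself is not shown.

    # Depth-based 3D warping of one pixel from a source view to a target view.
    # Camera parameters are hypothetical placeholders.
    import numpy as np

    K = np.array([[1000.0, 0.0, 640.0],      # shared intrinsics (fx, fy, cx, cy)
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)                            # target rotation w.r.t. source
    t = np.array([0.1, 0.0, 0.0])            # 10 cm horizontal baseline

    def warp_pixel(u, v, depth):
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project to a ray
        X = depth * ray                                  # 3D point in source coordinates
        x = K @ (R @ X + t)                              # project into the target view
        return x[0] / x[2], x[1] / x[2]

    print(warp_pixel(640, 360, depth=2.0))   # shifts by the disparity f*t/z = 50 px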

A Vanishing Point Detection Method Based on the Empirical Weighting of the Lines of Artificial Structures (인공 구조물 내 직선을 찾기 위한 경험적 가중치를 이용한 소실점 검출 기법)

  • Kim, Hang-Tae;Song, Wonseok;Choi, Hyuk;Kim, Taejeong
    • Journal of KIISE / v.42 no.5 / pp.642-651 / 2015
  • A vanishing point is a point where parallel lines converge, and it becomes evident when a camera lens projects 3D space onto a 2D image plane. Vanishing point detection uses the information contained within an image to locate the vanishing point, and can be utilized to infer the relative distance between points in the image or to understand the geometry of a 3D scene. Since parallel lines generally exist on the artificial structures within images, line-detection-based vanishing point detection techniques aim to find the point where the parallel lines of artificial structures converge. To detect these lines, edge pixels are found through edge detection and lines are then extracted using the Hough transform. However, the various textures and noise in an image can hamper line detection, so not all of the detected lines clearly converge toward the vanishing point. To overcome this difficulty, each line must be assigned a weight according to how likely it is to pass through the vanishing point. Whereas previous studies assigned equal weights or adopted a simple weighting calculation, in this paper we propose a new method of assigning weights to lines, based on the observation that the lines passing through vanishing points typically belong to artificial structures. Experimental results show that the proposed method reduces the vanishing point estimation error rate by 65% compared to existing methods.
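
To make the weighting idea concrete, the sketch below estimates a vanishing point as the weighted least-squares intersection of lines written in normal form ax + by = c. The weights here are placeholders; the paper's contribution is precisely the empirical weighting that favors lines belonging to artificial structures.

    # Weighted least-squares intersection of lines a*x + b*y = c.
    # Line weights are placeholders for the paper's empirical weighting scheme.
    import numpy as np

    def vanishing_point(lines, weights):
        # lines: (n, 3) rows [a, b, c] with (a, b) unit normals
        A = lines[:, :2] * weights[:, None]
        c = lines[:, 2] * weights
        vp, *_ = np.linalg.lstsq(A, c, rcond=None)
        return vp

    # Three lines through (2, 3) plus one noisy outlier line, down-weighted.
    lines = np.array([
        [1.0, 0.0, 2.0],                      # x = 2
        [0.0, 1.0, 3.0],                      # y = 3
        [0.7071, 0.7071, 0.7071 * 5.0],       # x + y = 5 (normalized)
        [1.0, 0.0, 4.0],                      # outlier: x = 4
    ])
    weights = np.array([1.0, 1.0, 1.0, 0.1])  # low weight for the suspect line
    print(vanishing_point(lines, weights))    # close to [2, 3]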

Estimation of PM concentrations at night time using CCTV images in the area around the road (도로 주변 지역의 CCTV영상을 이용한 야간시간대 미세먼지 농도 추정)

  • Won, Taeyeon;Eo, Yang Dam;Jo, Su Min;Song, Junyoung;Youn, Junhee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.393-399 / 2021
  • In this study, experiments were conducted to estimate PM concentrations by training on nighttime CCTV images captured under environments with various PM concentrations. For daytime images there have been many related studies, since the rich texture and brightness information of such images is well expressed and the information affecting learning is clear. Nighttime images, however, contain less information than daytime images, and studies using only nighttime images are rare. Therefore, we conducted an experiment that combines whole nighttime images, whose characteristics are non-uniform because of light sources such as vehicles and streetlights, with ROIs (regions of interest) that have relatively constant light sources, such as building roofs, building walls, and streetlights. The correlation was then analyzed and compared with the daytime experiment to determine whether deep-learning-based PM concentration estimation is possible with nighttime images. In the experiments, learning from the roof ROI gave the highest result, and a combined learning model with the entire image showed further improvement. Overall, R² exceeded 0.9, indicating that PM estimation is possible from nighttime CCTV images, and additional combined learning with weather data did not significantly affect the experimental results.
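
A minimal sketch of the regression set-up, a small CNN that maps an ROI crop to a single PM value and is trained with a mean-squared-error loss, is shown below in PyTorch. The architecture, input size, and random training batch are assumptions for illustration, not the study's model or data.

    # Minimal CNN regressor sketch: ROI crop -> single PM concentration value.
    # Architecture, input size, and the random training data are illustrative only.
    import torch
    import torch.nn as nn

    class PMRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)          # regress one PM value

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = PMRegressor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # stand-in batch: 8 nighttime ROI crops (64x64) and their PM labels
    images = torch.rand(8, 3, 64, 64)
    pm = torch.rand(8, 1) * 100.0

    for step in range(5):                         # a few illustrative steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), pm)
        loss.backward()
        optimizer.step()
        print(f"step {step}: MSE {loss.item():.2f}")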