• Title/Summary/Keyword: Texture coordinates

Three-dimensional Texture Coordinate Coding Using Texture Image Rearrangement (텍스처 영상 재배열을 이용한 삼차원 텍스처 좌표 부호화)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.6 s.312
    • /
    • pp.36-45
    • /
    • 2006
  • Three-dimensional (3-D) texture coordinates specify the positions of texture segments that are mapped onto the polygons of a 3-D mesh model. In order to compress texture coordinates, previous works reused the same linear predictor that had already been employed to code geometry data. However, these approaches could not carry out linear prediction efficiently, since texture coordinates are discontinuous along the coding order. Such discontinuities become especially serious in 3-D mesh models with a non-atlas texture. In this paper, we propose a new scheme to code 3-D texture coordinates using texture image rearrangement. The proposed scheme first extracts texture segments from the texture image. We then rearrange the texture segments consecutively along the coding order and apply linear prediction to compress the texture coordinates. Since the proposed scheme minimizes discontinuities in the texture coordinates, it improves their coding efficiency. Experimental results show that the proposed scheme outperforms the MPEG-4 3DMC standard in terms of coding efficiency.
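
As a rough illustration of the linear prediction the abstract refers to (a sketch, not the authors' implementation), the snippet below predicts each texture coordinate from the previously coded ones with a simple parallelogram-style extrapolation and keeps only the residuals; the traversal order and predictor weights are assumptions. A discontinuity in the coordinate sequence produces a large residual, which is exactly what the proposed rearrangement is meant to avoid.

```python
import numpy as np

def predict_and_code(tex_coords):
    """Code 2-D texture coordinates with a simple linear predictor.

    tex_coords: (N, 2) array of (u, v) pairs in coding order.
    Returns the residuals that would be entropy-coded; the decoder
    rebuilds the coordinates by running the same predictor.
    """
    residuals = np.zeros_like(tex_coords)
    for i, uv in enumerate(tex_coords):
        if i == 0:
            pred = np.zeros(2)                       # no history yet
        elif i == 1:
            pred = tex_coords[0]                     # previous coordinate
        else:
            # parallelogram-style extrapolation from the two previous coords
            pred = 2 * tex_coords[i - 1] - tex_coords[i - 2]
        residuals[i] = uv - pred
    return residuals

def decode(residuals):
    """Inverse of predict_and_code: rebuild coordinates from residuals."""
    coords = np.zeros_like(residuals)
    for i, r in enumerate(residuals):
        if i == 0:
            pred = np.zeros(2)
        elif i == 1:
            pred = coords[0]
        else:
            pred = 2 * coords[i - 1] - coords[i - 2]
        coords[i] = pred + r
    return coords

uv = np.array([[0.10, 0.10], [0.12, 0.11], [0.14, 0.12], [0.70, 0.80]])
res = predict_and_code(uv)
print(res)            # small residuals while coordinates are continuous,
                      # a large jump where a discontinuity occurs
assert np.allclose(decode(res), uv)
```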

Coordinate Determination for Texture Mapping using Camera Calibration Method (카메라 보정을 이용한 텍스쳐 좌표 결정에 관한 연구)

  • Jeong K. W.;Lee Y.Y.;Ha S.;Park S.H.;Kim J. J.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.4
    • /
    • pp.397-405
    • /
    • 2004
  • Texture mapping is the process of covering 3D models with texture images in order to increase the visual realism of the models. For proper mapping, the coordinates of the texture images need to coincide with those of the 3D models. When projective images from a camera are used as texture images, the texture image coordinates are defined by a camera calibration method; they are determined by the relation between the coordinate systems of the camera image and the 3D object. With projective camera images, the distortion caused by the camera lens should be compensated in order to obtain accurate texture coordinates. The distortion problem has previously been dealt with by iterative methods, in which the camera calibration coefficients are computed first without considering the distortion and then modified. These methods not only shift the position of the camera perspective line in the image plane, but also require more control points. In this paper, a new iterative method is suggested that reduces the error by fixing the principal point in the image plane. The method considers the image distortion independently and fixes the values of the correction coefficients, so that the distortion coefficients can be computed with fewer control points. It is shown that the camera distortion is compensated with fewer control points than in previous methods, and that the projective texture mapping produces a more realistic image.
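
The distortion compensation described above can be pictured with a minimal one-coefficient radial model in which the principal point is held fixed; the model form, the coefficient value, and the one-step correction below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def undistort_points(pts, k1, principal_point, focal):
    """Correct radial lens distortion for 2-D image points.

    Uses a one-coefficient radial model x_u = x_d * (1 + k1 * r^2)
    with the principal point held fixed, in the spirit of the
    fixed-principal-point iteration the abstract describes.
    """
    pts = np.asarray(pts, dtype=float)
    cx, cy = principal_point
    # normalise to the (fixed) principal point and focal length
    x = (pts[:, 0] - cx) / focal
    y = (pts[:, 1] - cy) / focal
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2
    return np.column_stack([x * factor * focal + cx,
                            y * factor * focal + cy])

# points near the image border move more than points near the centre
pts = np.array([[320.0, 240.0], [600.0, 60.0]])
print(undistort_points(pts, k1=-0.12, principal_point=(320, 240), focal=500.0))
```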

Texture Image Rearrangement for Texture Coordinate Coding of Three-dimensional Mesh Models (삼차원 메쉬 모델의 텍스처 좌표 부호화를 위한 텍스처 영상의 재배열 방법)

  • Kim, Sung-Yeol;Ho, Yo-Sung
    • Proceedings of the IEEK Conference
    • /
    • 2005.11a
    • /
    • pp.963-966
    • /
    • 2005
  • Previous works on texture coordinate coding of three-dimensional (3-D) mesh models employed the same predictor as the geometry coder. However, discontinuities in the texture coordinates lead to poor prediction, and they become more serious for 3-D mesh models with a non-atlas texture image. In this paper, we propose a new coding scheme that removes discontinuities in the texture coordinates by reallocating texture segments according to the coding order. Experimental results show that the proposed coding scheme outperforms the MPEG-4 3DMC standard in terms of compression efficiency. The proposed scheme not only overcomes the discontinuity problem by regenerating the texture image, but also improves the coding efficiency of texture coordinate compression.
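
To make the reallocation step concrete, a hypothetical sketch: given the extracted texture segments and the position at which each segment is first referenced during coding, reorder the segments by that position and record the index remapping needed to update the texture coordinates. The data layout below is an assumption, not the paper's representation.

```python
import numpy as np

def rearrange_segments(segments, first_use_order):
    """Reorder texture segments so that they follow the coding order.

    segments:        list of (H, W, 3) image patches cut from the texture
    first_use_order: first_use_order[i] is the position at which segment i
                     is first referenced while coding the mesh
    Returns the segments sorted by that position, plus the mapping from
    old segment index to new index so texture coordinates can be remapped.
    """
    order = sorted(range(len(segments)), key=lambda i: first_use_order[i])
    remap = {old: new for new, old in enumerate(order)}
    return [segments[i] for i in order], remap

segments = [np.zeros((8, 8, 3)), np.ones((8, 8, 3)), np.full((8, 8, 3), 2)]
reordered, remap = rearrange_segments(segments, first_use_order=[2, 0, 1])
print(remap)   # {1: 0, 2: 1, 0: 2}
```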

Texture superpixels merging by color-texture histograms for color image segmentation

  • Sima, Haifeng;Guo, Ping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.7
    • /
    • pp.2400-2419
    • /
    • 2014
  • Pre-segmented pixels can reduce the difficulty of segmentation and improve segmentation performance. This paper proposes a novel segmentation method based on merging texture superpixels by computing their inner similarity. First, we design a set of Gabor filters to compute the amplitude responses of the original image and compute a texture map with a salience model. Second, we employ simple clustering to extract superpixels by affinity of color, coordinates, and the texture map. We then design a normalized histogram descriptor for each superpixel that integrates the color and texture information of its inner pixels. To obtain the final segmentation result, adjacent superpixels are merged by comparing the homogeneity of their normalized color-texture features until the stopping criterion is satisfied. Experiments conducted on natural scene images and synthetic texture images demonstrate that the proposed segmentation algorithm can achieve ideal segmentation of complex texture regions.
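
A simplified, single-pass rendition of the merging stage might look like the following: adjacent superpixels are merged whenever the intersection of their normalized color-texture histograms exceeds a threshold. The similarity measure, the threshold, and the union-find formulation are assumptions; the paper's criterion and iteration scheme may differ.

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return np.minimum(h1, h2).sum()

def merge_superpixels(histograms, adjacency, threshold=0.8):
    """Greedily merge adjacent superpixels with similar color-texture histograms.

    histograms: dict {superpixel_id: normalized 1-D feature histogram}
    adjacency:  set of frozensets {a, b} for adjacent superpixels
    Returns a label map {superpixel_id: region_id}.
    """
    parent = {sp: sp for sp in histograms}       # union-find over superpixel ids

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in (tuple(e) for e in adjacency):
        ra, rb = find(a), find(b)
        if ra != rb and histogram_intersection(histograms[a], histograms[b]) >= threshold:
            parent[rb] = ra                      # merge the two regions

    return {sp: find(sp) for sp in histograms}

h = lambda *v: np.array(v, dtype=float) / sum(v)
hists = {0: h(5, 1), 1: h(4, 2), 2: h(1, 5)}
print(merge_superpixels(hists, {frozenset({0, 1}), frozenset({1, 2})}))
# superpixels 0 and 1 merge; superpixel 2 stays a separate region
```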

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.26 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first 3D texture holds the light color and the second holds the light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering. The particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting the corresponding voxel, while the second 3D texture accumulates the sum of the direction vectors from that voxel to the particle lights. The second step operates in the general rendering pipeline. From the world position of the polygon to be rendered, the exact sampling coordinates of the 3D texture updated in the first step are calculated. Since the sampling coordinates correspond 1:1 between the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then carried out based on the sampled light color and light direction vector. The 3D texture corresponds 1:1 to the actual game world and assumes a minimum unit of 1 m, so in areas smaller than 1 m, staircase artifacts caused by the resolution limit occur. Interpolation and supersampling are performed during texture sampling to alleviate these problems. Measurements of the time taken to render a frame showed that 146 ms was spent on the forward lighting pipeline and 46 ms on the deferred lighting pipeline when the number of particles was 262,144, and 214 ms on the forward lighting pipeline and 104 ms on the deferred lighting pipeline when the number of particle lights was 1,024766.
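
A CPU stand-in for the first (compute-shader) step could look like the sketch below: each particle light is binned into the voxel containing it, accumulating the color sum in one 3D texture and the summed direction from the voxel centre in the other. The grid resolution, world origin, and 1 m voxel size follow the abstract's 1:1 world mapping but are otherwise assumptions.

```python
import numpy as np

GRID = 64                                  # 3-D texture resolution (assumed)
VOXEL_SIZE = 1.0                           # one voxel per metre, as in the abstract
WORLD_ORIGIN = np.zeros(3)                 # assumed world-space origin of the grid

def world_to_texel(position):
    """Map a world-space position to integer 3-D texture coordinates."""
    texel = ((position - WORLD_ORIGIN) / VOXEL_SIZE).astype(int)
    return np.clip(texel, 0, GRID - 1)

def accumulate_particle_lights(positions, colors):
    """CPU stand-in for the compute-shader pass described in the abstract.

    Builds the two 3-D textures: the summed light color per voxel and the
    summed direction from each voxel centre to the lights inside it.
    """
    color_tex = np.zeros((GRID, GRID, GRID, 3))
    dir_tex = np.zeros((GRID, GRID, GRID, 3))
    for pos, col in zip(positions, colors):
        x, y, z = world_to_texel(pos)
        voxel_centre = (np.array([x, y, z]) + 0.5) * VOXEL_SIZE + WORLD_ORIGIN
        direction = pos - voxel_centre
        norm = np.linalg.norm(direction)
        color_tex[x, y, z] += col
        if norm > 0:
            dir_tex[x, y, z] += direction / norm
    return color_tex, dir_tex

positions = np.random.uniform(0, GRID, size=(1000, 3))
colors = np.random.uniform(0, 1, size=(1000, 3))
color_tex, dir_tex = accumulate_particle_lights(positions, colors)
print(color_tex.sum(axis=(0, 1, 2)))       # total accumulated color is preserved
```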

Iris Recognition using MPEG-7 Homogeneous Texture Descriptor (MPEG-7 Homogeneous Texture 기술자를 이용한 홍채인식)

  • 이종민;한일호;김희율
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.45-48
    • /
    • 2002
  • In this paper, we propose an iris recognition system using the Homogeneous Texture descriptor of the MPEG-7 standard. The texture of the iris is generally used in iris recognition systems. We segment the pupil with the Hough transform and find the boundary of the iris from its gray-level difference with the white of the eye. To extract the Homogeneous Texture descriptor, the iris image is transformed into polar coordinates. The extracted descriptor is then compared with the reference in the DB; if their distance is larger than a threshold, they are recognized as different irises. Test results show that the Homogeneous Texture descriptor can be a good measure for an iris recognition system.
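
The polar-coordinate transform mentioned in the abstract can be sketched as unwrapping the annular iris region between the pupil and the outer iris boundary into a rectangular image; the radii, sampling resolution, and nearest-neighbour sampling below are assumptions.

```python
import numpy as np

def iris_to_polar(image, center, r_pupil, r_iris, n_radial=64, n_angular=256):
    """Unwrap the annular iris region into a rectangular polar image.

    image:   2-D grayscale array
    center:  (cx, cy) of the pupil
    r_pupil: pupil radius, r_iris: outer iris radius (in pixels)
    Samples nearest-neighbour along rays; a real system would interpolate.
    """
    cx, cy = center
    h, w = image.shape
    polar = np.zeros((n_radial, n_angular), dtype=image.dtype)
    radii = np.linspace(r_pupil, r_iris, n_radial)
    angles = np.linspace(0, 2 * np.pi, n_angular, endpoint=False)
    for i, r in enumerate(radii):
        for j, theta in enumerate(angles):
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < w and 0 <= y < h:
                polar[i, j] = image[y, x]
    return polar

eye = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
print(iris_to_polar(eye, center=(160, 120), r_pupil=30, r_iris=80).shape)
```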

A Study on 3D Face Modelling based on Dynamic Muscle Model for Face Animation (얼굴 애니메이션을 위한 동적인 근육모델에 기반한 3차원 얼굴 모델링에 관한 연구)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.2
    • /
    • pp.322-327
    • /
    • 2003
  • In this paper, we propose a 3D face modelling technique based on a dynamic muscle model for constructing efficient face animation. The facial muscles are composed along face lines that connect 256 points defined by the dynamic muscle model, and a wireframe is constructed from these points. After composing a standard model from the wireframe, texture mapping is applied using front and side 2D pictures to create an individual 3D face model. Feature points of the front and side views are used for correct mapping: a face with texture coordinates is first built from the 2D coordinates of the front image and the front feature points, and then completed using the 2D coordinates of the side image and the side feature points.
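
As a small illustration of deriving texture coordinates from 2D photograph coordinates (a hypothetical helper, not the paper's procedure), feature-point positions in the front or side image can be normalized into UV space as follows.

```python
import numpy as np

def image_points_to_uv(points_2d, image_width, image_height):
    """Turn 2-D feature-point positions in a photograph into texture coordinates.

    points_2d: (N, 2) pixel coordinates of facial feature points
    Returns (N, 2) UV coordinates in [0, 1], with V flipped so that the top
    of the image maps to V = 1 (a common texture convention).
    """
    pts = np.asarray(points_2d, dtype=float)
    u = pts[:, 0] / image_width
    v = 1.0 - pts[:, 1] / image_height
    return np.column_stack([u, v])

front_features = np.array([[160, 80], [120, 140], [200, 140]])   # e.g. nose, eyes
print(image_points_to_uv(front_features, image_width=320, image_height=240))
```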

Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan;Seo, Heesuk;Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.4
    • /
    • pp.107-114
    • /
    • 2016
  • Recently, 3D-related technology has become a hot topic in IT. 3D technologies such as 3DTV, Kinect, and 3D printers are becoming more and more popular. Following this trend, the goal of this study is to make 3D technology easily accessible to the general public. We have developed a web-based application that performs 3D modeling from front and side facial photographs taken with a mobile phone. To realize the 3D modeling, two photographs (front and side) are taken with a mobile camera, and ASM (Active Shape Model) and a skin binarization technique are used to extract facial features and facial height, such as the nose, from the front and side photographs. Three-dimensional coordinates are generated using the face extracted from the front photograph and the face height obtained from the side photograph. Using these 3D coordinates as control points for a standard face model, the model is deformed into the subject's face by RBF (Radial Basis Function) interpolation. In addition, to cover the deformed face model with the subject's face, the control points found in the front photograph are mapped to texture map coordinates to generate a texture image. Finally, the deformed face model is covered with the texture image, and the 3D modeled result is displayed to the user.
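
The RBF deformation step can be sketched as follows: displacements known at the control points are interpolated over all vertices of the standard model. The Gaussian kernel and its width are assumptions; the abstract does not specify which radial basis function is used.

```python
import numpy as np

def rbf_deform(vertices, control_src, control_dst, epsilon=0.1):
    """Deform mesh vertices with Gaussian RBF interpolation.

    vertices:    (N, 3) standard-model vertex positions
    control_src: (M, 3) control points on the standard model
    control_dst: (M, 3) corresponding points measured from the photographs
    Solves for RBF weights that reproduce the control displacements exactly,
    then applies the interpolated displacement to every vertex.
    """
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-epsilon * d2)

    displacements = control_dst - control_src                 # (M, 3)
    weights = np.linalg.solve(kernel(control_src, control_src), displacements)
    return vertices + kernel(vertices, control_src) @ weights

verts = np.random.rand(1000, 3)
src = np.array([[0.2, 0.2, 0.5], [0.8, 0.2, 0.5], [0.5, 0.8, 0.6]])
dst = src + np.array([[0.0, 0.0, 0.1], [0.0, 0.0, 0.1], [0.0, 0.05, 0.0]])
print(rbf_deform(verts, src, dst).shape)
```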

Compression of 3D Mesh Geometry and Vertex Attributes for Mobile Graphics

  • Lee, Jong-Seok;Choe, Sung-Yul;Lee, Seung-Yong
    • Journal of Computing Science and Engineering
    • /
    • v.4 no.3
    • /
    • pp.207-224
    • /
    • 2010
  • This paper presents a compression scheme for mesh geometry that is suitable for mobile graphics. The main focus is to enable real-time decoding of compressed vertex positions while providing reasonable compression ratios. Our scheme is based on local quantization of vertex positions with mesh partitioning. To prevent visual seams along the partition boundaries, we constrain the locally quantized cells of all mesh partitions to have the same size and aligned local axes. We propose a mesh partitioning algorithm that minimizes the size of the locally quantized cells, which relates to the distortion of the restored mesh. Vertex coordinates are stored in main memory and transmitted to the graphics hardware for rendering in quantized form, saving memory space and system bus bandwidth. The decoding operation is combined with the model geometry transformation, and the only overhead to restore vertex positions is one matrix multiplication per mesh partition. In our experiments, a 32-bit floating-point vertex coordinate is quantized into an 8-bit integer, the smallest data size supported in a mobile graphics library. With this setting, the distortion of the restored meshes is comparable to that of 11-bit global quantization of vertex coordinates. We also apply the proposed approach to the compression of vertex attributes, such as vertex normals and texture coordinates, and show that gains similar to those for vertex geometry can be obtained through local quantization with mesh partitioning.
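
A minimal sketch of the local quantization idea, assuming a shared cell size across partitions: each partition's vertices are stored as 8-bit integers together with a 4x4 restore matrix, so decoding amounts to one matrix multiplication that can be folded into the model transform. Details such as axis alignment between partitions are omitted.

```python
import numpy as np

def quantize_partition(vertices, cell_size):
    """Quantize one mesh partition's vertices into 8-bit integers.

    All partitions are assumed to share the same cell_size so that no visual
    seams appear along partition boundaries. Returns the 8-bit coordinates and
    a 4x4 matrix that restores them; the matrix can be folded into the model
    transform at render time.
    """
    origin = vertices.min(axis=0)
    q = np.clip(np.round((vertices - origin) / cell_size), 0, 255).astype(np.uint8)

    restore = np.eye(4)
    restore[:3, :3] *= cell_size        # scale quantized cells back to model units
    restore[:3, 3] = origin             # translate back to the partition's origin
    return q, restore

def restore_partition(q, restore):
    """Undo quantization: one matrix multiplication per partition."""
    homo = np.column_stack([q.astype(float), np.ones(len(q))])
    return (restore @ homo.T).T[:, :3]

verts = np.random.rand(500, 3) * 2.0
q, M = quantize_partition(verts, cell_size=2.0 / 255)
err = np.abs(restore_partition(q, M) - verts).max()
print(q.dtype, err)                     # uint8 coordinates, error below one cell
```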

Photometry Data Compression for Three-dimensional Mesh Models Using Connectivity and Geometry Information (연결성 정보와 기하학 정보를 이용한 삼차원 메쉬 모델의 광학성 정보 압축 방법)

  • Yoon, Young-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.3
    • /
    • pp.160-174
    • /
    • 2008
  • In this paper, we propose new coding techniques for the photometry data of three-dimensional (3-D) mesh models. We make good use of geometry and connectivity information to improve the coding efficiency of color, normal vector, and texture data. First, we determine the coding order of the photometry data by exploiting connectivity information. Then, we exploit the geometry information of neighboring vertices obtained through the previous process to predict the photometry data. For color coding, the predicted color of the current vertex is computed as a weighted sum of the colors of adjacent vertices, where the geometry predictor derives the weights from the geometric relationship between the current vertex and its adjacent vertices. For normal vector coding, the normal vector of the current vertex is predicted as that of the optimal plane produced by the optimal plane generator with a distance equalizer, owing to the property of an isosceles triangle. For texture coding, the proposed method removes discontinuities in the texture coordinates and reallocates texture image segments according to the coding order. Simulation results show that the proposed compression schemes provide improved performance over previous works for various 3-D mesh models.
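
The color prediction step can be pictured with a simple geometry-weighted sketch: the current vertex's color is predicted as an inverse-distance-weighted sum of the colors of its already-decoded neighbors, and only the residual is coded. The inverse-distance weighting is an assumption standing in for the paper's geometric weights.

```python
import numpy as np

def predict_vertex_color(current_pos, neighbor_pos, neighbor_colors):
    """Predict a vertex color from already-decoded neighboring vertices.

    Weights each neighbor's color by the inverse of its geometric distance
    to the current vertex, so closer vertices contribute more. Only the
    (small) prediction residual then needs to be coded.
    """
    d = np.linalg.norm(neighbor_pos - current_pos, axis=1)
    w = 1.0 / np.maximum(d, 1e-8)
    w /= w.sum()
    return w @ neighbor_colors

pos = np.array([0.0, 0.0, 0.0])
nbr_pos = np.array([[0.1, 0.0, 0.0], [0.0, 0.5, 0.0], [0.0, 0.0, 1.0]])
nbr_col = np.array([[200, 10, 10], [180, 20, 20], [90, 90, 90]], dtype=float)
prediction = predict_vertex_color(pos, nbr_pos, nbr_col)
residual = np.array([195.0, 12.0, 11.0]) - prediction    # residual to entropy-code
print(prediction, residual)
```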