• Title/Summary/Keyword: super-pixel


SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng; Jiang, Yifeng; Huang, Zhuandi; Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.10, pp.4968-4986, 2017
  • In this paper, we address the difficulty of automatically generating a plausible depth map from a single image of an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, which builds on the existing DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A key feature of the superpixel representation that improves matching precision is the subsequent incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified Cross Bilateral Filter is applied to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both accuracy and computational efficiency. The approach can be used to automatically convert 2D images to stereo for 3D visualization, producing anaglyph images that are more realistic and more immersive.
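The transfer step described above amounts to matching each superpixel of the query image against a bank of superpixels drawn from RGB-D exemplars, copying their depths, and then filtering. The Python sketch below illustrates that superpixel-level, instance-based idea under strong simplifications: the descriptor, candidate retrieval, semantic-label weighting, and the modified Cross Bilateral Filter of the paper are replaced by minimal stand-ins (`superpixel_features` and `transfer_depth` are hypothetical names), so this is a conceptual sketch rather than the authors' implementation.

```python
# Simplified sketch of superpixel-level depth transfer (not the paper's code).
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2lab

def superpixel_features(image_lab, labels):
    """Mean Lab color and normalized centroid per superpixel (deliberately simple)."""
    feats = []
    h, w = labels.shape
    for sp in np.unique(labels):
        mask = labels == sp
        ys, xs = np.nonzero(mask)
        color = image_lab[mask].mean(axis=0)
        pos = np.array([ys.mean() / h, xs.mean() / w])
        feats.append(np.concatenate([color, pos]))
    return np.array(feats)

def transfer_depth(query_rgb, candidates, n_segments=300):
    """candidates: list of (rgb, depth) exemplar pairs with known depth."""
    q_labels = slic(query_rgb, n_segments=n_segments, start_label=0)
    q_feats = superpixel_features(rgb2lab(query_rgb), q_labels)

    # Pool (feature, median depth) pairs from the superpixels of all exemplars.
    bank_feats, bank_depths = [], []
    for rgb, depth in candidates:
        labels = slic(rgb, n_segments=n_segments, start_label=0)
        bank_feats.append(superpixel_features(rgb2lab(rgb), labels))
        bank_depths.append(np.array(
            [np.median(depth[labels == sp]) for sp in np.unique(labels)]))
    bank_feats = np.vstack(bank_feats)
    bank_depths = np.concatenate(bank_depths)

    # Instance-based learning: nearest-neighbour match per query superpixel.
    depth_map = np.zeros(q_labels.shape, dtype=float)
    for sp, f in zip(np.unique(q_labels), q_feats):
        nn = np.argmin(np.linalg.norm(bank_feats - f, axis=1))
        depth_map[q_labels == sp] = bank_depths[nn]
    return depth_map
```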

A study on character change of game graphics according to media coverage and diversity of technology (매체의 포괄성과 기술의 다양성에 따른 게임그래픽의 캐릭터변화 연구분석)

  • Lee, Dong-Lyeor
    • Journal of Digital Convergence, v.16 no.8, pp.287-292, 2018
  • Games hailed as masterpieces attract large communities of enthusiasts by combining varied configurations, high game quality, and colorful, realistic graphics. Many classic games that once disappeared from game companies' line-ups have been revived in recent years on mobile platforms. Although this phenomenon may have various hardware and software causes, this study compares and analyzes the graphical issues of characters, one of the factors a user can freely choose that increases immersion in a game. Focusing on Super Mario, one of the most loved characters in the world and one whose graphics and expressive production have developed further than most other characters in game history, the study presents various possibilities for the development of game characters through comparative analysis and research.

Multiple Shortfall Estimation Method for Image Resolution Enhancement (영상 해상도 개선을 위한 다중 부족분 추정 방법)

  • Kim, Won-Hee; Kim, Jong-Nam; Jeong, Shin-Il
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.3, pp.105-111, 2014
  • Image resolution enhancement is a technique that generates a high-resolution image by improving the resolution of a low-resolution observed image. Correctly estimating the missing pixel values of the low-resolution image is essential for resolution enhancement. In this paper, a multiple shortfall estimation method for image resolution enhancement is proposed. The proposed method estimates separate multiple shortfalls by a predictive degradation-restoration process applied to sub-images of the observed image, and generates a result image by combining the estimated shortfalls with the interpolated observed image. Finally, the reconstructed image is obtained by deblurring the result image. The experimental results demonstrate that the proposed method achieves the best results among all compared methods on the objective image quality indices PSNR, SSIM, and FSIM. The quality of the reconstructed image is superior to that of all compared methods, and the proposed method also has lower computational complexity. The proposed method can be useful for image resolution enhancement.
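The abstract describes estimating the "shortfall" that plain interpolation misses by predicting the degradation and feeding the estimated residual back before a final deblurring step. The Python sketch below illustrates that general residual-feedback idea with an assumed blur-and-decimate degradation model and an unsharp-mask deblurring step; it is a rough stand-in, not the paper's multiple-shortfall algorithm.

```python
# Rough illustration of shortfall-style resolution enhancement: interpolate,
# simulate the degradation, and feed the predicted residual ("shortfall")
# back into the interpolated image before deblurring.
import numpy as np
from scipy import ndimage

def degrade(hr, scale):
    """Assumed degradation model: Gaussian blur followed by decimation."""
    blurred = ndimage.gaussian_filter(hr, sigma=scale / 2.0)
    return blurred[::scale, ::scale]

def enhance(lr, scale=2, iters=5):
    hr = ndimage.zoom(lr.astype(float), scale, order=3)    # initial bicubic interpolation
    for _ in range(iters):
        simulated_lr = degrade(hr, scale)                   # predictive degradation
        shortfall = lr - simulated_lr                       # what the current estimate misses
        hr += ndimage.zoom(shortfall, scale, order=3)       # restore the estimated shortfall
    # Simple unsharp-mask deblurring of the combined result.
    return hr + 0.5 * (hr - ndimage.gaussian_filter(hr, sigma=1.0))
```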

Multi-stage Image Restoration for High Resolution Panchromatic Imagery (고해상도 범색 영상을 위한 다중 단계 영상 복원)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing, v.32 no.6, pp.551-566, 2016
  • In satellite remote sensing, the operational environment of the satellite sensor causes image degradation during acquisition. The degradation results in noise and blurring, which hinder the identification and extraction of useful information from the image data. The degradation is especially harmful when analyzing images collected over scenes with complicated surface structure, such as urban areas. This study proposes a multi-stage image restoration method to improve the accuracy of detailed analysis of images collected over such complicated scenes. The proposed method assumes Gaussian additive noise, a Markov random field enforcing spatial continuity, and blurring proportional to the distance between pixels. Point-Jacobian Iteration Maximum A Posteriori (PJI-MAP) estimation is employed to restore the degraded image. The multi-stage process includes image segmentation that performs region merging after pixel linking, and a dissimilarity coefficient combining homogeneity and contrast is proposed for the segmentation. The proposed method was quantitatively evaluated using simulation data and was also applied to two super-high-resolution panchromatic images: DubaiSat-2 data of 1 m resolution over Los Angeles, USA, and KOMPSAT-3 data of 0.7 m resolution over Daejeon on the Korean peninsula. The experimental results imply that the method can improve analytical accuracy in applications of high-resolution panchromatic remote sensing imagery.
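Under the stated assumptions (Gaussian additive noise, a spatial-continuity MRF prior, distance-dependent blur), the restoration stage can be viewed as minimizing a quadratic MAP objective. The sketch below uses a plain gradient iteration with a Gaussian point-spread function and a 4-neighbour quadratic prior as stand-ins for the paper's Point-Jacobian Iteration MAP (PJI-MAP) estimator and its blur model; the parameter values are illustrative only.

```python
# Generic quadratic-MAP restoration sketch (stand-in for PJI-MAP).
import numpy as np
from scipy import ndimage

def map_restore(y, psf_sigma=1.5, lam=0.05, iters=200, step=0.5):
    """Iterative MAP deblurring of a degraded image y under a Gaussian blur/noise
    model with a quadratic (Gaussian MRF) smoothness prior."""
    blur = lambda img: ndimage.gaussian_filter(img, psf_sigma)   # assumed blur operator (self-adjoint)
    lap = np.array([[0., -1., 0.],
                    [-1., 4., -1.],
                    [0., -1., 0.]])                              # 4-neighbour MRF (graph Laplacian) kernel
    x = y.astype(float).copy()
    for _ in range(iters):
        data_grad = blur(blur(x) - y)                 # gradient of the Gaussian data term
        prior_grad = lam * ndimage.convolve(x, lap)   # gradient of the MRF smoothness term
        x -= step * (data_grad + prior_grad)          # damped gradient update
    return x
```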

Enhancing CT Image Quality Using Conditional Generative Adversarial Networks for Applying Post-mortem Computed Tomography in Forensic Pathology: A Phantom Study (사후전산화단층촬영의 법의병리학 분야 활용을 위한 조건부 적대적 생성 신경망을 이용한 CT 영상의 해상도 개선: 팬텀 연구)

  • Yebin Yoon; Jinhaeng Heo; Yeji Kim; Hyejin Jo; Yongsu Yoon
    • Journal of radiological science and technology, v.46 no.4, pp.315-323, 2023
  • Post-mortem computed tomography (PMCT) is commonly employed in forensic pathology. PMCT is mainly performed as a whole-body scan with a wide field of view (FOV), which decreases spatial resolution because of the increased pixel size. This study aims to evaluate the potential of a super-resolution model based on conditional generative adversarial networks (CGAN) to enhance CT image quality. 1761 low-resolution images were obtained from a whole-body scan of a head phantom with a wide FOV, and 341 high-resolution images were obtained using the FOV appropriate for the head phantom. From the total dataset, 150 paired images were divided into a training set (96 pairs) and a validation set (54 pairs). Data augmentation with rotations and flips was performed to improve the effectiveness of training. To evaluate the performance of the proposed model, the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Deep Image Structure and Texture Similarity (DISTS) were used; these values were obtained for the entire image and for the medial orbital wall, the zygomatic arch, and the temporal bone, where fractures often occur during head trauma. Compared with the low-resolution images, the proposed method improved PSNR by 13.14%, SSIM by 13.10%, and DISTS by 45.45%. The image quality of the three areas where fractures commonly occur during head trauma also improved compared with the low-resolution images.
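The evaluation protocol above compares each reconstructed slice against its high-resolution reference with PSNR, SSIM, and DISTS, both over the whole image and over fracture-prone regions. A minimal sketch of that paired-image evaluation is given below for PSNR and SSIM using scikit-image; DISTS requires a pretrained deep network and is omitted, and the `roi` argument is a hypothetical stand-in for the anatomical regions named in the abstract.

```python
# Paired-image evaluation sketch: PSNR and SSIM between a super-resolved slice
# and its high-resolution reference, optionally restricted to a region of interest.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(reference, restored, roi=None):
    """reference/restored: 2D arrays on the same grid; roi: optional boolean mask
    (e.g. a hypothetical medial-orbital-wall region), applied via its bounding box."""
    if roi is not None:
        ys, xs = np.nonzero(roi)
        sl = (slice(ys.min(), ys.max() + 1), slice(xs.min(), xs.max() + 1))
        reference, restored = reference[sl], restored[sl]
    rng = reference.max() - reference.min()             # dynamic range of the reference
    psnr = peak_signal_noise_ratio(reference, restored, data_range=rng)
    ssim = structural_similarity(reference, restored, data_range=rng)
    return psnr, ssim
```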

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin; Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society, v.26 no.3, pp.123-131, 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first 3D texture holds light color and the second holds light direction information. Each frame goes through two steps. The first step, based on a compute shader, updates the particle information required for 3D texture initialization and rendering: each particle position is converted to 3D texture sampling coordinates, and based on these coordinates the first 3D texture accumulates the color sum of the particle lights affecting each voxel, while the second accumulates the sum of direction vectors from each voxel to the particle lights. The second step operates in the general rendering pipeline. From the world position of the polygon being rendered, the sampling coordinates of the 3D texture updated in the first step are computed; since the sampling coordinates correspond 1:1 between the size of the 3D texture and the size of the game world, the world coordinates of the pixel are used directly as sampling coordinates. Lighting is then carried out based on the sampled color and the light direction vector. The 3D texture corresponds 1:1 to the actual game world with an assumed minimum unit of 1 m, so in regions smaller than 1 m staircase artifacts appear because of the limited resolution; interpolation and supersampling during texture sampling are used to reduce these artifacts. Measurements of per-frame rendering time showed that with 262,144 particle lights, 146 ms was spent in the forward lighting pipeline and 46 ms in the deferred lighting pipeline, and with 1,024,766 particle lights, 214 ms in the forward pipeline and 104 ms in the deferred pipeline.
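The two-3D-texture scheme above accumulates, per voxel, the summed color of the particle lights within range and the summed direction vectors toward those lights, and a shaded pixel then samples both grids at its world position. The numpy sketch below illustrates that accumulation and sampling logic on the CPU; the grid size, light radius, and Lambert shading are illustrative assumptions, and the compute-shader / rendering-pipeline split of the paper is not reproduced.

```python
# CPU sketch of the two-3D-texture light accumulation and sampling idea.
import numpy as np

GRID = 64                                   # voxels per axis; 1 voxel = 1 m of game world (assumed)
color_tex = np.zeros((GRID, GRID, GRID, 3))  # per-voxel summed light color
dir_tex = np.zeros((GRID, GRID, GRID, 3))    # per-voxel summed voxel-to-light directions

def splat_lights(positions, colors, radius=2.0):
    """Accumulate every particle light into the voxels within its range of influence."""
    centres = np.indices((GRID, GRID, GRID)).transpose(1, 2, 3, 0) + 0.5
    for p, c in zip(positions, colors):
        offset = p - centres                            # voxel-to-light vectors
        dist = np.linalg.norm(offset, axis=-1)
        mask = (dist < radius) & (dist > 1e-6)          # range of influence
        color_tex[mask] += c
        dir_tex[mask] += offset[mask] / dist[mask, None]

def shade(world_pos, normal, albedo):
    """Sample both 3D textures at the pixel's world position (nearest voxel)."""
    v = tuple(np.clip(world_pos.astype(int), 0, GRID - 1))
    light_dir = dir_tex[v]
    n = np.linalg.norm(light_dir)
    if n == 0:
        return np.zeros(3)                              # no light reaches this voxel
    lambert = max(0.0, float(np.dot(normal, light_dir / n)))
    return albedo * color_tex[v] * lambert
```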

The Usefulness of LEUR Collimator for 1-Day Basal/Acetazolamide Brain Perfusion SPECT (1-Day Protocol을 사용하는 Brain Perfusion SPECT에서 LEUR 콜리메이터의 유용성)

  • Choi, Jin-Wook; Kim, Soo-Mee; Lee, Hyung-Jin; Kim, Jin-Eui; Kim, Hyun-Joo; Lee, Jae-Sung; Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology, v.15 no.1, pp.94-100, 2011
  • Purpose: Basal/acetazolamide-challenged brain perfusion SPECT is very useful for assessing cerebral perfusion and vascular reserve. However, since there is a trade-off between sensitivity and spatial resolution in collimator selection, choosing the optimal collimator is crucial. In this study, we examined three collimators to select the optimal one for 1-day brain perfusion SPECT. Materials and Methods: Three collimators, low energy high resolution parallel beam (LEHR-par), ultra resolution fan beam (LEUR-fan), and super fine fan beam (LESFR-fan), were tested for 1-day imaging on a Triad XLT 9 system (TRIONIX). SPECT images of a Hoffman 3D brain phantom filled with 170 MBq of 99mTc and of a normal volunteer were acquired with a protocol of 50 kcts/frame and a detector rotation step of 3 degrees. Filtered backprojection (FBP) reconstruction with a Butterworth filter (cutoff frequencies of 0.3 to 0.5) was performed, and quantitative and qualitative assessments of the three collimators were carried out. Results: Blind tests showed that LESFR-fan provided the best image quality for the Hoffman brain phantom and the volunteer; however, images from all the collimators were rated 'acceptable'. On the other hand, to reach an equivalent signal-to-noise ratio (SNR), the total acquisition time or administered activity for LESFR-fan would have to be increased to almost twice that of LEUR-fan and LEHR-par. The volunteer test indicated that, compared with LESFR-fan, total acquisition time in clinical practice could be reduced by approximately 10 to 14 min using LEUR-fan or LEHR-par without significant loss of image quality. Conclusion: Although LESFR-fan provides the best image quality, it requires significantly more acquisition time than LEUR-fan and LEHR-par to reach a reasonable SNR. Since there is no significant clinical difference among the three collimators, LEUR-fan and LEHR-par can be recommended as optimal collimators for 1-day brain perfusion imaging with respect to image quality and SNR.
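For context, the reconstruction named above is filtered backprojection with a Butterworth window on the ramp filter (cutoff on the order of 0.3 to 0.5 cycles/pixel). The sketch below shows one conventional way to realize such a reconstruction, applying a ramp-times-Butterworth filter in the frequency domain and then backprojecting with scikit-image; the filter order and the use of `iradon` are illustrative assumptions, not details taken from the study.

```python
# Butterworth-windowed filtered backprojection sketch (illustrative, not the scanner's code).
import numpy as np
from skimage.transform import iradon

def butterworth_fbp(sinogram, theta, cutoff=0.4, order=10):
    """sinogram: (detector_bins, n_angles); theta: projection angles in degrees."""
    n = sinogram.shape[0]
    freqs = np.fft.fftfreq(n)                                        # cycles per detector bin
    ramp = np.abs(freqs)                                             # standard ramp filter
    window = 1.0 / (1.0 + (np.abs(freqs) / cutoff) ** (2 * order))   # Butterworth low-pass window
    fbp_filter = (ramp * window)[:, None]
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=0) * fbp_filter, axis=0))
    # Backproject the pre-filtered projections without additional filtering.
    return iradon(filtered, theta=theta, filter_name=None)
```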
