• Title/Summary/Keyword: pixel-based image similarity

Efficient Superpixel Generation Method Based on Image Complexity

  • Park, Sanghyun
    • Journal of Multimedia Information System
    • /
    • v.7 no.3
    • /
    • pp.197-204
    • /
    • 2020
  • Superpixel methods are widely used in the preprocessing stage of computer vision applications to reduce computational complexity by simplifying images while preserving their characteristics. Superpixels of similar size and shape are commonly generated from pixel values alone, without considering the characteristics of the image. In this paper, we propose a method to control the sizes and shapes of the generated superpixels according to the contents of an image. The proposed method consists of two steps. The first step over-segments the image so that its boundary information is well preserved. The second step merges the generated superpixels based on similarity to produce the target number of superpixels, where the shapes of the superpixels are controlled by limiting the maximum size and by the proposed roundness metric. Experimental results show that the proposed method preserves the boundaries of objects in an image more accurately than the existing method.
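
As an illustration of the merging step, here is a minimal sketch assuming Python/NumPy, an over-segmented label map from any method, and mean-colour distance as the similarity measure; the paper's roundness metric and exact similarity measure are not specified in the abstract and are therefore omitted.

```python
import numpy as np

def merge_superpixels(labels, image, target_count, max_size):
    """Greedily merge an over-segmented label map down to `target_count` regions.

    labels : (H, W) int array of superpixel ids from any over-segmentation.
    image  : (H, W, 3) float array used for mean-colour similarity.
    At each step the most similar pair of adjacent superpixels is merged,
    skipping merges that would exceed `max_size` pixels (the paper's
    roundness constraint is not reproduced in this sketch).
    """
    labels = labels.copy()
    ids = np.unique(labels)
    mean = {i: image[labels == i].mean(axis=0) for i in ids}
    size = {i: int((labels == i).sum()) for i in ids}

    def adjacent_pairs(lab):
        # Collect ids of horizontally and vertically neighbouring regions.
        h = np.stack([lab[:, :-1].ravel(), lab[:, 1:].ravel()], axis=1)
        v = np.stack([lab[:-1, :].ravel(), lab[1:, :].ravel()], axis=1)
        return {(min(a, b), max(a, b)) for a, b in np.concatenate([h, v]) if a != b}

    while len(mean) > target_count:
        best, best_d = None, np.inf
        for a, b in adjacent_pairs(labels):
            if size[a] + size[b] > max_size:
                continue
            d = np.linalg.norm(mean[a] - mean[b])
            if d < best_d:
                best, best_d = (a, b), d
        if best is None:          # no remaining merge satisfies the size limit
            break
        a, b = best
        labels[labels == b] = a
        mean[a] = image[labels == a].mean(axis=0)
        size[a] += size.pop(b)
        del mean[b]
    return labels
```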

Real-Time Hierarchical Techniques for Rendering of Translucent Materials and Screen-Space Interpolation (반투명 재질의 렌더링과 화면 보간을 위한 실시간 계층화 알고리즘)

  • Ki, Hyun-Woo;Oh, Kyoung-Su
    • Journal of Korea Game Society
    • /
    • v.7 no.1
    • /
    • pp.31-42
    • /
    • 2007
  • In the natural world, most materials such as skin, marble, and cloth are translucent; their appearance is smooth and soft compared with metals or mirrors. In this paper, we propose a new GPU-based hierarchical rendering technique for translucent materials, based on the dipole diffusion approximation, at interactive rates. Incident light information (position, normal, and irradiance) on the surfaces is stored in 2D textures by rendering from the primary light's view. Huge numbers of pixel photons are clustered into quad-tree image pyramids. For each pixel, we select clusters (sets of photons) and approximate the multiple subsurface scattering term with those clusters. We also introduce a novel hierarchical screen-space interpolation technique that exploits spatial coherence with early-z culling on the GPU, building image pyramids of the screen using mipmaps and the pixel shader. Each pixel of the pyramids stores the position, normal, and spatial similarity of its children pixels. If a pixel's similarity is high, we render that pixel once and interpolate the result across multiple pixels. Result images show that our method can interactively render deformable translucent objects by approximating hundreds of thousands of photons with only hundreds of clusters, without any preprocessing. The entire process runs in image space on the GPU, so our method is less dependent on scene complexity.
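
As a rough illustration of the screen-space interpolation idea (not the paper's GPU implementation), the sketch below processes one pyramid level in Python/NumPy: a 2x2 block whose children have similar normals and nearby positions is shaded once and the value replicated, otherwise every pixel is shaded individually. The `shade` placeholder, the thresholds, and the G-buffer layout are assumptions.

```python
import numpy as np

def shade(position, normal):
    """Placeholder for the expensive per-pixel subsurface-scattering shade."""
    # Hypothetical stand-in: a simple diffuse term toward a fixed light.
    light = np.array([0.0, 0.0, 1.0])
    return max(float(normal @ light), 0.0)

def render_with_interpolation(positions, normals, normal_thresh=0.99,
                              pos_thresh=0.05):
    """Shade each 2x2 block once when its children are spatially similar.

    positions, normals : (H, W, 3) G-buffer arrays (H, W even).
    A block whose four normals agree and whose positions are close is shaded
    once and the value replicated; otherwise each pixel is shaded separately
    (one level of the paper's pyramid, as a sketch).
    """
    H, W, _ = positions.shape
    out = np.zeros((H, W))
    for y in range(0, H, 2):
        for x in range(0, W, 2):
            n = normals[y:y+2, x:x+2].reshape(4, 3)
            p = positions[y:y+2, x:x+2].reshape(4, 3)
            similar = (n @ n.mean(axis=0) > normal_thresh).all() and \
                      np.linalg.norm(p - p.mean(axis=0), axis=1).max() < pos_thresh
            if similar:
                out[y:y+2, x:x+2] = shade(p.mean(axis=0), n.mean(axis=0))
            else:
                for i in range(4):
                    out[y + i // 2, x + i % 2] = shade(p[i], n[i])
    return out
```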

Generation of Simulated Geospatial Images from Global Elevation Model and SPOT Ortho-Image

  • Park, Wan Yong;Eo, Yang Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.217-223
    • /
    • 2014
  • With a precise sensor position, attitude elements, and imaging resolution, a simulated geospatial image can be generated. In this study, a satellite image is simulated using a SPOT ortho-image and global elevation data, and the geometric similarity between the original and simulated images is analyzed. Using a SPOT panchromatic image and high-density elevation data from 1/5,000-scale digital topographic maps, an ortho-image with 10-meter resolution was produced. The simulated image was then generated from the exterior orientation parameters and global elevation data (SRTM1, GDEM2). Experimental results showed that (1) the pixel locations of images simulated from SRTM1/GDEM2 agree with those simulated from the high-resolution elevation data to within one pixel in more than 99% of cases; (2) SRTM1 is closer than GDEM2 to the high-resolution elevation data; (3) errors occur where topographical objects differ in elevation between the high-density elevation data, which derive from a Digital Terrain Model (DTM), and the Digital Surface Model (DSM)-based global elevation data. Errors were typically found at river boundaries, in urban areas, and in forests. In conclusion, this study showed that global elevation data are of practical use in generating simulated images with 10-meter resolution.
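
The projection at the heart of such image simulation can be sketched with the collinearity equations. The function below is a hypothetical frame-camera illustration in Python/NumPy; SPOT is in fact a pushbroom sensor and sign/axis conventions vary with the sensor model, so this is only a sketch of the principle of projecting DEM points with exterior orientation parameters.

```python
import numpy as np

def collinearity_project(ground_xyz, camera_xyz, R, focal, principal=(0.0, 0.0)):
    """Project ground points into image space with the collinearity equations.

    ground_xyz : (N, 3) object-space points (e.g. DEM cells with elevations).
    camera_xyz : (3,) perspective-centre position from the exterior orientation.
    R          : (3, 3) rotation matrix of the exterior orientation (world->camera).
    focal      : focal length in the same units as the image coordinates.
    Returns (N, 2) image coordinates; sampling the ortho-image at the DEM cells
    and writing the values to these coordinates yields a simulated view.
    """
    d = ground_xyz - camera_xyz            # vectors from the projection centre
    u = d @ R.T                            # rotate into the camera frame
    x = principal[0] - focal * u[:, 0] / u[:, 2]
    y = principal[1] - focal * u[:, 1] / u[:, 2]
    return np.stack([x, y], axis=1)
```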

A Novel Text to Image Conversion Method Using Word2Vec and Generative Adversarial Networks

  • Liu, Xinrui;Joe, Inwhee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.05a
    • /
    • pp.401-403
    • /
    • 2019
  • In this paper, we propose a generative adversarial network (GAN) based text-to-image generation method. In many natural language processing tasks, word representations are determined by their term frequency-inverse document frequency (TF-IDF) scores. Word2Vec is a neural network model that, given an unlabeled corpus, produces vectors expressing the semantics of the words in that corpus; an image is then generated by GAN training conditioned on the obtained vector. Thanks to this semantic understanding of the words, we can generate higher-quality and more realistic images. Our GAN structure is based on deep convolutional neural networks and pixel recurrent neural networks. Comparing the generated images with the real images, we obtain about 88% similarity on the Oxford-102 flowers dataset.
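
A minimal sketch of the conditioning idea, assuming PyTorch, a 100-dimensional noise vector, and a 300-dimensional averaged Word2Vec sentence embedding (the abstract does not give the exact architecture, so the layer sizes below are illustrative):

```python
import torch
import torch.nn as nn

class TextConditionedGenerator(nn.Module):
    """Minimal DCGAN-style generator conditioned on a Word2Vec sentence vector.

    The noise vector is concatenated with the (assumed) averaged Word2Vec
    embedding and upsampled to a 64x64 RGB image; this only illustrates the
    conditioning step, not the paper's full architecture.
    """
    def __init__(self, noise_dim=100, text_dim=300):
        super().__init__()
        self.fc = nn.Linear(noise_dim + text_dim, 512 * 4 * 4)
        self.net = nn.Sequential(
            nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, noise, text_vec):
        h = self.fc(torch.cat([noise, text_vec], dim=1))
        return self.net(h.view(-1, 512, 4, 4))

# Usage sketch: fake = TextConditionedGenerator()(torch.randn(8, 100), torch.randn(8, 300))
```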

S&P Noise Removal Filter Algorithm using Plane Equations (평면 방정식을 이용한 S&P 잡음제거 필터 알고리즘)

  • Chung, Young-Su;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.27 no.1
    • /
    • pp.47-53
    • /
    • 2023
  • Devices such as X-ray, CT, and MRI scanners can introduce S&P (salt-and-pepper) noise from several sources during the image acquisition process. Since S&P noise degrades image quality, noise reduction is essential in the image processing pipeline. Various methods for S&P noise removal have already been proposed, but all of them leave residual noise in environments with high noise density. Therefore, this paper proposes a filtering algorithm based on a three-dimensional plane equation, obtained by treating the grayscale value of the image as a new axis. The proposed algorithm subdivides the local mask to select the three closest non-noisy pixels as effective pixels, and applies cosine similarity to regions containing multiple candidate pixels. In addition, when the input pixel cannot form a plane, it is handled as an exception pixel, achieving excellent restoration without residual noise.
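
The core plane-fitting idea can be sketched as follows, assuming Python/NumPy and 8-bit grayscale input. Each corrupted pixel is restored from the plane through its three nearest clean neighbours, with a fallback to their mean in degenerate cases; the paper's cosine-similarity weighting for regions with several candidate pixels is omitted.

```python
import numpy as np

def plane_fit_sp_filter(img, window=5):
    """Restore salt-and-pepper pixels from a plane fitted to clean neighbours.

    For each corrupted pixel (value 0 or 255) the three nearest non-noisy
    pixels in a local window are treated as points (x, y, grey), and the
    plane through them is evaluated at the centre.  Degenerate cases fall
    back to the mean of the clean neighbours.
    """
    noisy = (img == 0) | (img == 255)
    out = img.astype(np.float64).copy()
    r = window // 2
    H, W = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        y0, y1 = max(0, y - r), min(H, y + r + 1)
        x0, x1 = max(0, x - r), min(W, x + r + 1)
        ys, xs = np.nonzero(~noisy[y0:y1, x0:x1])
        if len(ys) < 3:
            continue                        # too few clean neighbours; leave as is
        ys, xs = ys + y0, xs + x0
        dist = (ys - y) ** 2 + (xs - x) ** 2
        near = np.argsort(dist)[:3]
        A = np.stack([xs[near], ys[near], np.ones(3)], axis=1)
        z = out[ys[near], xs[near]]
        try:
            a, b, c = np.linalg.solve(A, z)   # plane: grey = a*x + b*y + c
            out[y, x] = a * x + b * y + c
        except np.linalg.LinAlgError:         # collinear points: no unique plane
            out[y, x] = z.mean()
    return np.clip(out, 0, 255)
```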

Image Denoising Using Nonlocal Similarity and 3D Filtering (비지역적 유사성 및 3차원 필터링 기반 영상 잡음제거)

  • Kim, Seehyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.10
    • /
    • pp.1886-1891
    • /
    • 2017
  • Denoising, one of the major research topics in image processing, deals with recovering images corrupted by noise. Natural images are well known for their nonlocal as well as local similarity: patterns of distinctive edges and texture, which are crucial for understanding the image, are repeated across nonlocal regions. In this paper, a nonlocal-similarity-based denoising algorithm is proposed. First, for every block of the noisy image, nonlocally similar blocks are gathered to construct an overcomplete data set, which is inherently sparse in the transform domain due to the characteristics of natural images. Then, the sparse transform coefficients are filtered to suppress the non-sparse additive noise. Finally, the image is recovered by aggregating the overcomplete estimates of each pixel. Performance experiments on several images show that the proposed algorithm outperforms conventional methods in removing additive Gaussian noise effectively while preserving image details.
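
A compact sketch of this block-matching and 3D transform-domain filtering pipeline (in the spirit of BM3D) is shown below, assuming Python/NumPy, an unnormalised 3D FFT as the sparsifying transform, and illustrative constants rather than the paper's settings.

```python
import numpy as np

def nonlocal_3d_denoise(img, sigma=25.0, block=8, step=8, search=16, n_match=16):
    """Block matching + 3D transform-domain hard thresholding (BM3D-style sketch).

    For each reference block the most similar blocks in a local search window
    are stacked into a 3D group, hard-thresholded in the 3D FFT domain, and
    the filtered blocks are averaged back into the image.  `sigma` is the
    assumed noise standard deviation; constants are illustrative only.
    """
    H, W = img.shape
    acc = np.zeros((H, W))
    wgt = np.zeros((H, W))
    for y in range(0, H - block + 1, step):
        for x in range(0, W - block + 1, step):
            ref = img[y:y + block, x:x + block].astype(np.float64)
            cands = []
            for yy in range(max(0, y - search), min(H - block, y + search) + 1, 2):
                for xx in range(max(0, x - search), min(W - block, x + search) + 1, 2):
                    blk = img[yy:yy + block, xx:xx + block].astype(np.float64)
                    cands.append((np.sum((blk - ref) ** 2), yy, xx, blk))
            cands.sort(key=lambda c: c[0])               # most similar blocks first
            group = np.stack([c[3] for c in cands[:n_match]])
            coeff = np.fft.fftn(group)
            thresh = 2.7 * sigma * np.sqrt(group.size)   # scale for unnormalised FFT
            dc = coeff[0, 0, 0]
            coeff[np.abs(coeff) < thresh] = 0            # hard thresholding
            coeff[0, 0, 0] = dc                          # never discard the mean
            filtered = np.real(np.fft.ifftn(coeff))
            for (_, yy, xx, _), blk in zip(cands[:n_match], filtered):
                acc[yy:yy + block, xx:xx + block] += blk
                wgt[yy:yy + block, xx:xx + block] += 1
    return acc / np.maximum(wgt, 1)
```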

Design of high speed weighted FDNN applied DWW algorithm (DWW 알고리즘을 적용한 고속 가중 FDNN의 설계)

  • 이철희;변오성;문성룡
    • Journal of the Korean Institute of Telematics and Electronics C
    • /
    • v.35C no.7
    • /
    • pp.101-108
    • /
    • 1998
  • In this paper, we realize an FDNN (fuzzy decision neural network) by applying a quantized triangular fuzzy function to a hierarchically structured DBNN (decision-based neural network) for image processing, and we design hardware for the realized FDNN. The standard image and the input image are normalized to the same size. We apply to the FDNN the DWW algorithm, which selects the closest value by measuring the similarity between image intervals through this distance. The weights of the pixels composing the two images are thus computed in terms of distance, which removes image noise, minimizes the loss of information, and yields the optimal information. The hardware of the high-speed weighted FDNN is designed using the COMPASS tool. The total circuit is realized with about 61,000 gates, and simulations demonstrate the superiority of the FDNN.

Image Dehazing Enhancement Algorithm Based on Mean Guided Filtering

  • Weimin Zhou
    • Journal of Information Processing Systems
    • /
    • v.19 no.4
    • /
    • pp.417-426
    • /
    • 2023
  • To improve image restoration and address the loss of image detail, an image dehazing enhancement algorithm based on mean guided filtering is proposed. A superpixel method is used to pre-segment the original foggy image into sub-regions. The Ncut algorithm then segments the original image and outputs the segmented result once no further region merging occurs. Using the mean guided filtering method, the minimum value within a small local block of the dark image is selected as the value of the current pixel to obtain the dark-channel image, and its transmittance is calculated to obtain the image edge detection result. Based on the dark channel prior, a classic image dehazing enhancement model is established and combined with a median filter of low computational complexity to denoise the image in real time while preserving sharp transitions, thereby achieving image dehazing enhancement. The experimental results show that the proposed algorithm has clear advantages in dehazing and enhancement, retains a large amount of image detail, and achieves high values of information entropy, peak signal-to-noise ratio, and structural similarity. The research combines several methods to achieve dehazing and quality improvement; through segmentation, filtering, denoising, and other operations, image quality is effectively improved, providing a useful reference for the advancement of image processing technology.
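
The dark-channel stage of such a pipeline can be sketched as follows. This hypothetical Python version uses SciPy's minimum filter and omits the paper's superpixel pre-segmentation, Ncut step, mean-guided-filter refinement, and median filtering.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_dehaze(img, patch=15, omega=0.95, t_min=0.1):
    """Classic dark-channel-prior dehazing (sketch only).

    img : (H, W, 3) float image scaled to [0, 1].
    """
    # Dark channel: per-pixel channel minimum followed by a local minimum filter.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light: mean colour of the brightest 0.1% dark-channel pixels.
    flat = dark.ravel()
    idx = np.argsort(flat)[-max(1, flat.size // 1000):]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Transmission estimated from the dark channel of the normalised image.
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t_min, 1.0)[..., None]
    # Scene radiance recovery J = (I - A) / t + A.
    return np.clip((img - A) / t + A, 0.0, 1.0)
```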

Modified Weight Filter Algorithm using Pixel Matching in AWGN Environment (AWGN 환경에서 화소매칭을 이용한 변형된 가중치 필터 알고리즘)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.10
    • /
    • pp.1310-1316
    • /
    • 2021
  • Recently, with the development of artificial intelligence and IoT technology, the importance of image processing tasks such as object tracking, medical imaging, and object recognition has been increasing. In particular, the noise removal techniques used in the preprocessing stage must remove noise effectively while maintaining detailed features as the importance of system images grows. In this paper, we propose a modified weighted filter based on pixel matching for AWGN environments. The proposed algorithm uses pixel matching to preserve high-frequency components where pixel values change significantly: it detects regions with highly correlated patterns in the surrounding area and classifies the matched pixel values required for the output calculation. The final output is obtained by weighting the matched pixels according to their similarity to and spatial distance from the center pixel, so that edge components are taken into account during filtering.
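
A simplified sketch of weighting neighbours by patch similarity and spatial distance is given below in Python/NumPy. The paper's matching and classification rules are more elaborate; the Gaussian weighting and window sizes here are assumptions for illustration only.

```python
import numpy as np

def pixel_matching_filter(img, window=5, patch=3, sigma_s=2.0, sigma_r=20.0):
    """Weighted filter whose weights combine patch similarity and distance.

    Pixels in a local window are matched to the centre through small-patch
    differences, and the output is their average weighted by similarity
    (sigma_r) and spatial distance (sigma_s).
    """
    r, p = window // 2, patch // 2
    pad = r + p
    padded = np.pad(img.astype(np.float64), pad, mode='reflect')
    H, W = img.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            centre = padded[cy-p:cy+p+1, cx-p:cx+p+1]
            num = den = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    cand = padded[cy+dy-p:cy+dy+p+1, cx+dx-p:cx+dx+p+1]
                    sim = np.mean((cand - centre) ** 2)      # patch similarity
                    w = np.exp(-sim / (2 * sigma_r**2)
                               - (dy*dy + dx*dx) / (2 * sigma_s**2))
                    num += w * padded[cy+dy, cx+dx]
                    den += w
            out[y, x] = num / den
    return out
```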

Deep Learning-Based Low-Light Imaging Considering Image Signal Processing

  • Kwon, Minsu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.2
    • /
    • pp.19-25
    • /
    • 2023
  • In this paper, we propose a deep learning based method for improving raw images captured in low-light conditions that takes image signal processing into account. Compared to a DSLR camera, a smartphone camera has a limited lens and sensor size, so noise increases and image quality is reduced in low-light conditions. Existing deep learning based low-light image processing methods sometimes create unnatural images because they do not consider the lens shading effect and white balance, which are major factors in image signal processing. In this paper, pixel distances from the image center and channel average values are used so that a deep learning model can account for the lens shading effect and white balance. Experiments with low-light images taken with a smartphone demonstrate that the proposed method achieves a higher peak signal-to-noise ratio and structural similarity index measure than the existing method, creating high-quality low-light images.
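
The two cues described, pixel distance from the image centre (a lens shading proxy) and per-channel averages (a white balance proxy), can be sketched as extra input channels for a network; the packed-Bayer layout and channel ordering below are assumptions for illustration.

```python
import numpy as np

def build_lowlight_input(raw):
    """Augment a packed raw image with lens-shading and white-balance cues.

    raw : (H, W, 4) packed Bayer planes, values in [0, 1] (assumed layout).
    Adds (i) the normalised distance of every pixel from the image centre
    (a proxy for lens shading) and (ii) per-channel mean maps (a proxy for
    white balance), giving a (H, W, 4 + 1 + 4) array to feed an
    enhancement network.
    """
    H, W, C = raw.shape
    yy, xx = np.mgrid[0:H, 0:W]
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    dist = (dist / dist.max())[..., None]                      # lens-shading cue
    means = np.broadcast_to(raw.mean(axis=(0, 1)), raw.shape)  # white-balance cue
    return np.concatenate([raw, dist, means], axis=2)
```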