• Title/Summary/Keyword: Image information measure


Research for 3-D Information Reconstruction by Applying a Composition Focus Measure Function to Time-series Images (복합초점함수의 시간열 영상적용을 통한 3 차원정보복원에 관한 연구)

  • 김정길;한영준;한헌수
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.426-429
    • /
    • 2004
  • To reconstruct the 3-D information of an irregular object, this paper proposes a new method that applies a composite focus measure to a time-series image. The focus measure function must be selected carefully, because a focus measure is easily affected by the working environment and by the characteristics of the object. The proposed focus measure function combines the variance measure, which is robust to noise, with the Laplacian measure, which performs well in calculating focus regardless of object shape. A time-series image that takes the object shape into account is also proposed so that the window of interest can be applied efficiently. The method first divides the image frame into windows; second, the composite focus measure function is applied to the windows and the time-series image is constructed; finally, the 3-D information of the object is reconstructed from the time-series images while considering the object shape. The experimental results show that the proposed method is a suitable algorithm for 3-D reconstruction of an irregular object.

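The variance/Laplacian combination the abstract describes can be sketched in plain NumPy. The 5-point Laplacian stencil and the equal weighting `alpha=0.5` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def variance_measure(window):
    """Variance focus measure: gray-level contrast (robust to noise)."""
    return float(np.var(window))

def laplacian_measure(window):
    """Sum of absolute 5-point Laplacian responses (sharpness-sensitive)."""
    lap = (-4 * window[1:-1, 1:-1]
           + window[:-2, 1:-1] + window[2:, 1:-1]
           + window[1:-1, :-2] + window[1:-1, 2:])
    return float(np.abs(lap).sum())

def composite_focus_measure(window, alpha=0.5):
    """Weighted combination of the two measures (alpha is a free weight)."""
    return alpha * variance_measure(window) + (1 - alpha) * laplacian_measure(window)
```

A sharply focused (high-contrast) window then scores strictly higher than a defocused, near-uniform one, which is the property the depth-from-focus reconstruction relies on.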

Statistical Image Quality Measure (통계적 영상 품질 측정)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.13 no.4
    • /
    • pp.79-90
    • /
    • 2007
  • Image quality measurement is an important issue in image processing. Several methods that measure image quality have been proposed, and these are based on a mathematical point of view. However, there is a difference between such mathematical measures and measures based on the human visual system, and a new measure is needed because the final target of an image is the human visual system. In this paper, a statistical image quality measure that considers human visual features is suggested. The human visual system uses both the global quality and the local quality of an image, and the local quality is the more important of the two. In this paper, the image is divided into several segments and the quality of each segment is calculated separately. A statistical scoring method is then applied to the segment qualities. The result of the proposed image quality measure was similar to that of a measure based on the human visual system.

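A minimal sketch of the segment-wise idea, assuming per-block PSNR as the local quality and a worst-case-weighted score as the statistical aggregation; both choices are hypothetical stand-ins for the paper's scoring method:

```python
import numpy as np

def block_psnr(ref, test, block=8):
    """Split both images into blocks and compute a PSNR per block."""
    scores = []
    for i in range(0, ref.shape[0] - block + 1, block):
        for j in range(0, ref.shape[1] - block + 1, block):
            mse = np.mean((ref[i:i+block, j:j+block].astype(float)
                           - test[i:i+block, j:j+block].astype(float)) ** 2)
            scores.append(10 * np.log10(255.0 ** 2 / mse) if mse > 0 else np.inf)
    return scores

def statistical_quality(scores, weight_worst=0.7):
    """Weight the worst local quality heavily, reflecting that local
    degradation dominates human judgement of overall quality."""
    finite = [s for s in scores if np.isfinite(s)]
    if not finite:
        return float('inf')
    return weight_worst * min(finite) + (1 - weight_worst) * float(np.mean(finite))
```

Under this score, an image with one badly damaged segment rates worse than one with mild distortion spread evenly, matching the stated emphasis on local quality.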

A Study on the Step Edge Detection Method Based on Image Information Measure and Neural Network (영상의 정보척도와 신경회로망을 이용한 계단에지 검출에 관한 연구)

  • Lee, S.B.;Kim, S.G.
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.3
    • /
    • pp.549-555
    • /
    • 2006
  • Edge detection is a very important area in image processing and computer vision. General edge detection methods (the Roberts, Sobel, and Kirsch masks, etc.) perform well at detecting step edges in a clean image but perform poorly in a noisy image. We suggest a step edge detection method based on an image information measure and a neural network. Using the essential properties of step edges, which are directional and structural, together with the gray-level distribution in the neighborhood, as the input vector to a BP (back-propagation) neural network, the proposed algorithm obtains good results. We also obtained satisfactory experimental results using rose and cell images as the test images.
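The directional/structural input features could look roughly like the following sketch; the exact feature vector fed to the BP network is not specified in the abstract, so these four directional contrasts of a 3x3 neighborhood are hypothetical stand-ins:

```python
import numpy as np

def edge_feature_vector(patch):
    """Directional gray-level contrasts of a 3x3 patch, a hypothetical
    stand-in for the paper's input vector to the BP neural network."""
    p = patch.astype(float)
    horiz = abs(p[:, 2].mean() - p[:, 0].mean())  # left vs right columns
    vert = abs(p[2, :].mean() - p[0, :].mean())   # top vs bottom rows
    diag1 = abs(p[0, 0] - p[2, 2])                # main diagonal
    diag2 = abs(p[0, 2] - p[2, 0])                # anti-diagonal
    return np.array([horiz, vert, diag1, diag2])
```

A vertical step edge produces a strong horizontal contrast while a flat patch yields an all-zero vector, so such features separate step edges from smooth regions before any learning happens.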

Fractal image compression with perceptual distortion measure (인지 왜곡 척도를 사용한 프랙탈 영상 압축)

  • 문용호;박기웅;손경식;김윤수;김재호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.3
    • /
    • pp.587-599
    • /
    • 1996
  • In general fractal image compression, each range block is approximated by a contractive transform of the matching domain block under the mean squared error criterion. In this paper, a distortion measure reflecting the properties of the human visual system is defined and applied to fractal image compression. The perceptual distortion measure is obtained by multiplying the mean squared error by a noise sensitivity modeled using background brightness and spatial masking. To compare the performance of the mean squared error and the perceptual distortion measure, a simulation is carried out using the 512×512 Lena and Pepper gray images. Compared with the mean squared error, the perceptual distortion measure achieves a 6%-10% compression ratio improvement at the same image quality.

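A hedged sketch of the "MSE times noise sensitivity" construction. The sensitivity model below, with its brightness and activity terms and weights `k_bright` and `k_mask`, is an assumed placeholder for the paper's actual background-brightness and spatial-masking model:

```python
import numpy as np

def noise_sensitivity(block, k_bright=0.01, k_mask=0.01):
    """Hypothetical sensitivity model: distortion is less visible in
    bright, spatially busy blocks, so sensitivity decreases with both."""
    brightness = block.mean()
    activity = (np.abs(np.diff(block, axis=0)).mean()
                + np.abs(np.diff(block, axis=1)).mean())
    return 1.0 / (1.0 + k_bright * brightness + k_mask * activity)

def perceptual_distortion(range_block, approx_block):
    """Perceptual distortion = MSE weighted by the block's sensitivity."""
    mse = np.mean((range_block - approx_block) ** 2)
    return mse * noise_sensitivity(range_block)
```

With equal MSE, a busy checkerboard block yields a lower perceptual distortion than a flat dark block, which is exactly the masking effect that lets the coder spend fewer bits in busy regions.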

Image compression using K-mean clustering algorithm

  • Munshi, Amani;Alshehri, Asma;Alharbi, Bayan;AlGhamdi, Eman;Banajjar, Esraa;Albogami, Meznah;Alshanbari, Hanan S.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.9
    • /
    • pp.275-280
    • /
    • 2021
  • With the development of communication networks, the processes of exchanging and transmitting information have developed rapidly. Millions of images are sent via social media every day, and wireless sensor networks are now used in many applications to capture images, such as those in traffic lights, roads, and malls. There is therefore a need to reduce the size of these images while maintaining an acceptable degree of quality. In this paper, we use Python to apply the K-means clustering algorithm to compress RGB images. PSNR, MSE, and SSIM are utilized to measure image quality after compression. The compression reduced the images to nearly half the size of the originals using k = 64. In the SSIM measure, the higher the k, the greater the similarity between the two images, which, together with the significant reduction in image size, is a good indicator. The proposed compression technique powered by the K-means clustering algorithm is useful for compressing images and reducing their size.
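The K-means color-quantization step described above can be reproduced in plain NumPy (the paper itself works in Python); each pixel is replaced by its nearest cluster centroid, so the compressed image needs only k palette colors plus an index map. The small `k`, the iteration count, and the PSNR helper here are illustrative:

```python
import numpy as np

def kmeans_quantize(pixels, k=4, iters=10, seed=0):
    """Plain-NumPy k-means over an (N, 3) array of RGB pixels; returns
    the quantized pixels (each replaced by its cluster centroid)."""
    rng = np.random.default_rng(seed)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every pixel to its nearest centroid, then re-estimate.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = pixels[labels == c].mean(axis=0)
    return centroids[labels]

def psnr(a, b):
    """Peak signal-to-noise ratio for 8-bit-range images, in dB."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

For well-separated color clusters the quantized output stays close to the input (high PSNR) while using at most k distinct colors, which is the size/quality trade-off the paper measures.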

Segment-based Image Classification of Multisensor Images

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.28 no.6
    • /
    • pp.611-622
    • /
    • 2012
  • This study proposed two multisensor fusion methods for segment-based image classification utilizing region-growing segmentation. The proposed algorithms employ a Gaussian-PDF measure and an evidential measure, respectively. In remote sensing applications, segment-based approaches are used to extract more explicit information on spatial structure than pixel-based methods. Data from a single sensor may be insufficient to describe a ground scene accurately in image classification. Owing to the redundant and complementary nature of multisensor data, combining information from multiple sensors can reduce the classification error rate. The Gaussian-PDF method defines a regional measure as the average PDF over the pixels belonging to the region, and assigns a region to the class with the maximum regional measure. The evidential fusion method uses two measures, plausibility and belief, which are derived from a mass function of the Beta distribution for the basic probability assignment of every hypothesis about the region classes. The proposed methods were applied to SPOT XS and ENVISAT data acquired over the Iksan area of the Korean peninsula. The experimental results showed that the segment-based method with the evidential measure is highly effective in improving classification via multisensor fusion.
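The Gaussian-PDF regional measure maps directly to code: average the class-conditional PDF over a region's pixels and take the arg-max class. The class parameters below are invented for illustration, not from the paper:

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density, evaluated elementwise."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def classify_region(region_pixels, class_params):
    """Regional measure = average class PDF over the region's pixels;
    the region is assigned to the class maximizing that average."""
    scores = {c: float(np.mean(gaussian_pdf(region_pixels, m, v)))
              for c, (m, v) in class_params.items()}
    return max(scores, key=scores.get)
```

Because the decision is taken once per segment rather than once per pixel, a few noisy pixels cannot flip the label of a whole region, which is the advantage segment-based classification claims over pixel-based methods.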

Learning Discriminative Fisher Kernel for Image Retrieval

  • Wang, Bin;Li, Xiong;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.7 no.3
    • /
    • pp.522-538
    • /
    • 2013
  • Content-based image retrieval has become an increasingly important research topic owing to its wide application. It is highly challenging when facing a large-scale database with large variance. Retrieval systems rely on a key component: the predefined or learned similarity measures over images. We note that similarity measures can potentially be improved if the data distribution information is exploited in a more sophisticated way. In this paper, we propose a similarity measure learning approach for image retrieval. The similarity measure, the so-called Fisher kernel, is derived from the probabilistic distribution of images and is a function over the observed data, hidden variables, and model parameters, where the hidden variables encode high-level information that is powerful for discrimination but was not exploited by previous methods. We further propose a discriminative learning method for the similarity measure, i.e., encouraging the learned similarity to take a large value for a pair of images with the same label and a small value for a pair of images with distinct labels. The learned similarity measure, fully exploiting the data distribution, is well adapted to the dataset and improves the retrieval system. We evaluate the proposed method on the Corel-1000, Corel5k, Caltech101, and MIRFlickr 25,000 databases. The results show the competitive performance of the proposed method.

Learning Free Energy Kernel for Image Retrieval

  • Wang, Cungang;Wang, Bin;Zheng, Liping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.8
    • /
    • pp.2895-2912
    • /
    • 2014
  • Content-based image retrieval has been the most important technique for managing huge numbers of images. The fundamental yet highly challenging problem in this field is how to measure content-level similarity based on low-level image features. The primary difficulties lie in the great variance within images, e.g., background, illumination, viewpoint, and pose. Intuitively, an ideal similarity measure should be able to adapt to the data distribution, discover and highlight the content-level information, and be robust to those variances. Motivated by these observations, in this paper we propose a probabilistic similarity learning approach. We first model the distribution of low-level image features and derive the free energy kernel (FEK), i.e., a similarity measure, based on the distribution. Then, we propose a learning approach for the derived kernel, under the criterion that the kernel outputs high similarity for images sharing the same class label and low similarity for those without the same label. The advantages of the proposed approach, in comparison with previous approaches, are threefold. (1) With the ability inherited from probabilistic models, the similarity measure can adapt well to the data distribution. (2) Benefitting from the content-level hidden variables within the probabilistic models, the similarity measure is able to capture content-level cues. (3) It fully exploits class labels in the supervised learning procedure. The proposed approach is extensively evaluated on two well-known databases. It achieves highly competitive performance in most experiments, which validates its advantages.

Similarity Measurement using Gabor Energy Feature and Mutual Information for Image Registration

  • Ye, Chul-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.6
    • /
    • pp.693-701
    • /
    • 2011
  • Image registration is an essential process for analyzing time series of satellite images for the purposes of image fusion and change detection. Mutual Information (MI) is commonly used as a similarity measure for image registration because of its robustness to noise. Due to radiometric differences, it is not easy to apply MI to multi-temporal satellite images directly using pixel intensity. Image features for MI are more abundantly obtained by employing a Gabor filter, which varies adaptively with filter characteristics such as size, frequency, and orientation for each pixel. In this paper we employed the Bidirectional Gabor Filter Energy (BGFE), defined from Gabor filter features, and applied the BGFE to the similarity measure calculation as an image feature for MI. The experimental results show that the proposed method is more robust than the conventional MI method combined with intensity or gradient magnitude.
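MI itself, the similarity measure at the core of this approach, can be computed from a joint gray-level histogram. This sketch omits the Gabor/BGFE feature extraction and applies MI to raw intensities, i.e., the conventional baseline the paper compares against:

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information between two equally sized images, estimated
    from their joint histogram: sum p(x,y) * log(p(x,y) / (p(x) p(y)))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration search maximizes this value over candidate shifts: a perfectly aligned pair gives a concentrated joint histogram (MI equal to the marginal entropy), while a misaligned or independent pair gives MI near zero.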

Wavelet Based Watermarking Technique Using PIM(Picture information measure) (PIM(Picture information measure)을 이용한 Wavelet기반 워터마킹 기법)

  • 김윤평;김영준;이동규;한수영;이두수
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1811-1814
    • /
    • 2003
  • In this paper, a novel watermarking technique is proposed to authenticate the ownership of copyright for digital contents. Using a 2-level DWT (Discrete Wavelet Transform), we divide a specific frequency band into detailed blocks and apply the PIM (picture information measure). After the complexity is calculated, the watermark is embedded only in high-complexity areas. Conventional watermarking techniques damage the original image because they do not consider the features of the whole area or of a specific frequency band; being easily affected by noise and compression, they also make it difficult to extract the watermark. The proposed watermarking technique, by considering the complexity of the input image, does not damage the original image. Simulation results show that the proposed technique is robust to JPEG compression, noise, and filtering, i.e., general signal processing.

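PIM has a standard closed form: the number of pixels that fall outside the modal gray level of a block. A sketch of using it as the block-complexity test, leaving out the DWT and embedding steps:

```python
import numpy as np

def pim(block, levels=256):
    """Picture Information Measure: PIM = sum(h) - max(h) over the block's
    gray-level histogram h. A flat block scores 0; a busy block scores
    close to its pixel count, so high-PIM blocks can hide a watermark."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    return int(hist.sum() - hist.max())

def complex_blocks(image, threshold, block=8):
    """Return the top-left corners of blocks complex enough for embedding."""
    return [(i, j)
            for i in range(0, image.shape[0] - block + 1, block)
            for j in range(0, image.shape[1] - block + 1, block)
            if pim(image[i:i+block, j:j+block]) > threshold]
```

Embedding only where PIM is high follows the masking intuition from the abstract: changes in busy blocks are perceptually hidden, while flat blocks (PIM near 0) are left untouched.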