• Title/Summary/Keyword: histogram data

Search Result 492, Processing Time 0.024 seconds

Salt & Pepper Noise Removal Using Histogram and Spline Interpolation (히스토그램 및 Spline 보간법을 이용한 Salt & Pepper 잡음 제거)

  • Ko, You-Hak;Kwon, Se-Ik;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.691-693
    • /
    • 2017
  • As modern society develops into the digital information age, image applications are expanding into increasingly important fields. Image data are degraded by various causes during transmission, a typical example being salt & pepper noise. Conventional methods for removing salt & pepper noise have somewhat insufficient noise-removal characteristics. In this paper, we propose a weighted filter that uses the histogram of an image corrupted by salt & pepper noise, together with a spline interpolation method applied according to the direction of the local mask.

  • PDF
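The paper's weighted filter and directional spline interpolation are not reproduced below; as a minimal sketch of the general approach, noise candidates are detected at the histogram extremes (0 and 255) and repaired from uncorrupted neighbors, with a plain median standing in for the interpolation step:

```python
# Sketch only: histogram-extreme detection + median-of-clean-neighbors repair.
# A legitimate pure-black or pure-white pixel is also treated as a noise
# candidate -- a known limitation of extreme-value detection.

def denoise_salt_pepper(img):
    """img: 2-D list of grayscale values in 0..255."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] not in (0, 255):   # histogram extremes = noise candidates
                continue
            # gather 3x3 neighbors that are not themselves noise candidates
            clean = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))
                     if (j, i) != (y, x) and img[j][i] not in (0, 255)]
            if clean:
                clean.sort()
                out[y][x] = clean[len(clean) // 2]  # median of clean neighbors
    return out
```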

Identification of Transformed Image Using the Composition of Features

  • Yang, Won-Keun;Cho, A-Young;Cho, Ik-Hwan;Oh, Weon-Geun;Jeong, Dong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.6
    • /
    • pp.764-776
    • /
    • 2008
  • Image identification is the process of checking whether a query image is a transformed version of a specific original image. In this paper, an image identification method based on feature composition is proposed. The features used include color distance, texture information, and average pixel intensity: color characteristics are extracted using color distance, texture information using the Modified Generalized Symmetry Transform, and the average intensity of each pixel as a third feature. Each feature is adaptively quantized into histogram bins. The histogram is normalized according to data type and used as a signature when comparing the query image with database images. In the matching step, the Manhattan distance measures the distance between two signatures. To evaluate the performance of the proposed method, an independence test and an accuracy test were performed. In the independence test, 60,433 images were used to evaluate the ability to discriminate between different images. In the accuracy test, 4,002 original images and their 29 transformed versions were used to evaluate whether the proposed algorithm can find the original image correctly when transforms have been applied to it. Experimental results show that the proposed identification method performs well in the accuracy test, and its high accuracy and fast matching make it very useful in real environments.

  • PDF
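The matching step described in the abstract (normalized histogram signatures compared by Manhattan distance) can be sketched as follows; the bin count, value range, and uniform quantization here are illustrative assumptions, not the paper's adaptive quantization:

```python
def signature(values, bins=8, lo=0.0, hi=1.0):
    """Quantize feature values into a normalized histogram signature.
    Uniform binning over [lo, hi) is a simplifying assumption."""
    hist = [0] * bins
    for v in values:
        b = min(bins - 1, int((v - lo) / (hi - lo) * bins))
        hist[b] += 1
    total = sum(hist) or 1
    return [c / total for c in hist]

def manhattan(sig_a, sig_b):
    """L1 distance between two signatures, as used in the matching step."""
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b))
```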

Copyright Protection of Digital Image Information based on Multiresolution and Adaptive Spectral Watermark (다중 해상도와 적응성 스펙트럼 워터마크를 기반으로 한 디지털 영상 정보의 소유권 보호)

  • 서정희
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.10 no.4
    • /
    • pp.13-19
    • /
    • 2000
  • With the rapid development of information and communication technology, the growing distribution of multimedia data and electronic publishing on the web has created a need for copyright protection and authentication of digital information. In this paper, we propose a multi-watermarking scheme with an adaptive spectral watermark algorithm that adapts to the frequency domain of each level of the hierarchy obtained by the orthogonal forward wavelet transform (FWT). Numerical test results show that the generated watermarked image is robust not only to image transforms such as low-pass filtering, blurring, sharpening, and wavelet compression, but also to brightness, contrast, and gamma correction, histogram equalization, and cropping.
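As a rough illustration of spectral watermarking in a wavelet domain (not the paper's multiresolution 2-D FWT scheme), the sketch below embeds a watermark multiplicatively into the detail coefficients of a single-level 1-D Haar transform; `alpha` and the watermark sequence are illustrative assumptions:

```python
def haar_1d(signal):
    """One level of the orthonormal forward Haar wavelet transform."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def inv_haar_1d(approx, detail):
    """Exact inverse of haar_1d."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s)
        out.append((a - d) / s)
    return out

def embed(signal, watermark, alpha=0.1):
    """Multiplicative spectral watermark on the detail coefficients:
    d_i' = d_i * (1 + alpha * w_i)."""
    approx, detail = haar_1d(signal)
    detail = [d * (1 + alpha * w) for d, w in zip(detail, watermark)]
    return inv_haar_1d(approx, detail)
```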

Estimation of evapotranspiration using NOAA-AVHRR data (NOAA-AVHRR data를 이용한 증발산량추정)

  • Shin, Sha-Chul;Sawamoto, Masaki;Kim, Chi-Hong
    • Water for future
    • /
    • v.28 no.1
    • /
    • pp.71-80
    • /
    • 1995
  • The purpose of this study is to estimate evapotranspiration and its spatial distribution using NOAA-AVHRR data. Evapotranspiration phenomena are exceedingly complex, but the factors that control evapotranspiration can be regarded as reflected in the condition of the vegetation. To evaluate the vegetation condition quantitatively, the NDVI (Normalized Difference Vegetation Index) calculated from NOAA data is utilized. In this study, land cover classification of the Korean peninsula is performed using properties of the NDVI. Also, from the relationship between evapotranspiration and NDVI histograms, evapotranspiration and its distribution over the Han River basin are estimated.

  • PDF
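The NDVI used in the study is a standard quantity computed from the visible red (AVHRR channel 1) and near-infrared (channel 2) reflectances; the study's regression from NDVI histograms to evapotranspiration is its own and is not reproduced here:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared
    (AVHRR channel 2) and visible red (channel 1) reflectances.
    Dense vegetation gives values near 1; bare soil near 0."""
    if nir + red == 0:
        return 0.0  # guard against division by zero over dark pixels
    return (nir - red) / (nir + red)
```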

Symbolic Cluster Analysis for Distribution Valued Dissimilarity

  • Matsui, Yusuke;Minami, Hiroyuki;Mizuta, Masahiro
    • Communications for Statistical Applications and Methods
    • /
    • v.21 no.3
    • /
    • pp.225-234
    • /
    • 2014
  • We propose a novel hierarchical clustering method for distribution-valued dissimilarities. The analysis of large and complex data has attracted significant interest. Symbolic Data Analysis (SDA), proposed by Diday in the 1980s, provides a new framework for statistical analysis. In SDA, we analyze objects with internal variation, such as intervals, histograms, and distributions, called symbolic objects. In this study, we focus on cluster analysis for distribution-valued dissimilarities, one type of symbolic object. Hierarchical clustering generally has two steps: a find-out step and an update step. In the find-out step, we find the nearest pair of clusters; we extend this to distribution-valued dissimilarities by introducing a measure on their order relations. In the update step, dissimilarities between clusters are redefined as mixtures of distributions with a mixing ratio. We show an actual example of the proposed method and a simulation study.
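One find-out/update iteration can be sketched as below, with distributions represented by sample lists and clusters labeled by strings. Using the smallest mean as the order relation and size-weighted pooling as the mixture are simplifying assumptions; the paper defines a more refined order relation on distributions:

```python
def mean(xs):
    return sum(xs) / len(xs)

def merge_step(diss, sizes):
    """One agglomeration step for distribution-valued dissimilarities.
    `diss` maps frozenset({i, j}) -> list of dissimilarity samples;
    `sizes` maps cluster label (a string) -> number of members."""
    # Find-out step: the order relation on distributions is taken to be
    # "smaller mean" here (a simplifying assumption).
    pair = min(diss, key=lambda p: mean(diss[p]))
    a, b = sorted(pair)
    merged = f"{a}+{b}"
    others = {c for p in diss for c in p} - {a, b}
    new_diss = {}
    for c in others:
        da = diss[frozenset((a, c))]
        db = diss[frozenset((b, c))]
        # Update step: mixture of the two distributions with mixing ratio
        # proportional to cluster sizes, realized by weighted pooling.
        new_diss[frozenset((merged, c))] = da * sizes[a] + db * sizes[b]
    for p, d in diss.items():
        if a not in p and b not in p:
            new_diss[p] = d
    new_sizes = {k: v for k, v in sizes.items() if k not in (a, b)}
    new_sizes[merged] = sizes[a] + sizes[b]
    return new_diss, new_sizes
```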

Cubic normal distribution and its significance in structural reliability

  • Zhao, Yan-Gang;Lu, Zhao-Hui
    • Structural Engineering and Mechanics
    • /
    • v.28 no.3
    • /
    • pp.263-280
    • /
    • 2008
  • Information on the distribution of the basic random variable is essential for the accurate analysis of structural reliability. The usual method for determining the distributions is to fit a candidate distribution to the histogram of available statistical data of the variable and perform approximate goodness-of-fit tests. Generally, such candidate distribution would have parameters that may be evaluated from the statistical moments of the statistical data. In the present paper, a cubic normal distribution, whose parameters are determined using the first four moments of available sample data, is investigated. A parameter table based on the first four moments, which simplifies parameter estimation, is given. The simplicity, generality, flexibility and advantages of this distribution in statistical data analysis and its significance in structural reliability evaluation are discussed. Numerical examples are presented to demonstrate these advantages.
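Since the distribution's parameters are determined from the first four moments of the sample data, a minimal sketch of that first step is shown below; the mapping from these moments to the cubic-normal coefficients uses the paper's parameter table and is not reproduced:

```python
def first_four_moments(data):
    """Sample mean, standard deviation, skewness, and kurtosis -- the four
    statistics from which the cubic-normal parameters are determined
    (via the paper's parameter table, not reproduced here)."""
    n = len(data)
    m = sum(data) / n
    devs = [x - m for x in data]
    m2 = sum(d ** 2 for d in devs) / n   # central moments (population form)
    m3 = sum(d ** 3 for d in devs) / n
    m4 = sum(d ** 4 for d in devs) / n
    sd = m2 ** 0.5
    return m, sd, m3 / sd ** 3, m4 / sd ** 4
```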

Reversible data hiding technique applying triple encryption method (삼중 암호화 기법을 적용한 가역 데이터 은닉기법)

  • Jung, Soo-Mok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.1
    • /
    • pp.36-44
    • /
    • 2022
  • Reversible data hiding techniques have been developed to hide confidential data in an image by shifting the image's histogram. These techniques have a weakness: the security of the hidden confidential data is low. In this paper, to address this drawback, we propose a technique that triple-encrypts confidential data using pixel value information and hides it in the cover image. When confidential data is triple-encrypted with the proposed technique and hidden in the cover image to generate a stego-image, encryption based on pixel information is performed three times, so the security of the hidden data is greatly improved. In experiments measuring the performance of the proposed technique, even when the triple-encrypted confidential data was extracted from the stego-image, the original confidential data could not be recovered without the encryption keys. Since the image quality of the stego-image is 48.39 dB or higher, it was not possible to tell whether confidential data was hidden in it, and more than 30,487 bits of confidential data could be hidden. The proposed technique extracts the original confidential data from the stego-image without loss and restores the original cover image without distortion. Therefore, it can be used effectively in applications such as military, medical, and digital-library systems, where security is important and complete restoration of the original cover image is required.
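The underlying reversible mechanism, histogram shifting around a peak/zero bin pair, can be sketched as below; the paper's triple encryption of the payload is deliberately omitted, and the sketch assumes an empty bin exists above the peak (overflow handling is skipped):

```python
def embed_hs(pixels, bits):
    """Histogram-shifting embedding (peak/zero-bin pair, peak-right variant).
    Assumes an empty bin exists above the peak; overflow handling omitted."""
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    peak = max(hist, key=hist.get)                           # most frequent level
    zero = min(range(peak + 1, 256), key=lambda v: hist.get(v, 0))
    out, it = [], iter(bits)
    for p in pixels:
        if peak < p < zero:
            out.append(p + 1)            # shift to free the bin next to the peak
        elif p == peak:
            out.append(p + next(it, 0))  # each peak pixel carries one payload bit
        else:
            out.append(p)
    return out, peak, zero

def extract_hs(stego, peak, zero):
    """Recover the bits and the original pixels exactly (reversibility)."""
    bits, pixels = [], []
    for p in stego:
        if p == peak:
            bits.append(0); pixels.append(peak)
        elif p == peak + 1:
            bits.append(1); pixels.append(peak)
        elif peak + 1 < p <= zero:
            pixels.append(p - 1)         # undo the shift
        else:
            pixels.append(p)
    return bits, pixels
```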

Reversible Watermarking based on Predicted Error Histogram for Medical Imagery (의료 영상을 위한 추정오차 히스토그램 기반 가역 워터마킹 알고리즘)

  • Oh, Gi-Tae;Jang, Han-Byul;Do, Um-Ji;Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.5
    • /
    • pp.231-240
    • /
    • 2015
  • Medical imagery requires privacy protection while preserving the quality of the original content, and reversible watermarking is a solution for this purpose. Previous research has focused on general imagery and achieved high capacity and high quality, but it introduces distortion over the entire image and hence is not applicable to medical imagery, where the quality of the objects must be preserved. In this paper, we propose a novel reversible watermarking scheme for medical imagery that preserves the quality of the objects and achieves high capacity. First, the object and background regions are segmented, and then predicted-error-histogram-based reversible watermarking is applied to each region. For efficient watermark embedding with small distortion in the object region, the embedding level in the object region is set low while the embedding level in the background region is set high. In experiments, the proposed algorithm is compared with a previous predicted-error-histogram-based algorithm in terms of embedding capacity and perceptual quality. The results show that the proposed algorithm outperforms the previous one.
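A minimal sketch of predicted-error embedding on one image row is shown below, using the left neighbor as the predictor and the zero-error bin as the histogram peak; the paper's object/background segmentation and per-region embedding levels are not reproduced:

```python
def embed_row(row, bits):
    """Embed bits into the prediction errors of one row (left-neighbor
    predictor, zero-error bin as the histogram peak). Positive errors are
    shifted by one to free the bin that carries the '1' bits."""
    out, it = [row[0]], iter(bits)
    for i in range(1, len(row)):
        e = row[i] - row[i - 1]
        if e == 0:
            e = next(it, 0)            # peak bin of the error histogram
        elif e > 0:
            e += 1                     # shift to keep embedding reversible
        out.append(out[-1] + e)
    return out

def extract_row(stego):
    """Recover the bits and the original row exactly."""
    bits, row = [], [stego[0]]
    for i in range(1, len(stego)):
        e = stego[i] - stego[i - 1]
        if e in (0, 1):
            bits.append(e)             # error 0/1 encodes the hidden bit
            e = 0
        elif e > 1:
            e -= 1                     # undo the shift
        row.append(row[-1] + e)
    return bits, row
```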

Percentile-Based Analysis of Non-Gaussian Diffusion Parameters for Improved Glioma Grading

  • Karaman, M. Muge;Zhou, Christopher Y.;Zhang, Jiaxuan;Zhong, Zheng;Wang, Kezhou;Zhu, Wenzhen
    • Investigative Magnetic Resonance Imaging
    • /
    • v.26 no.2
    • /
    • pp.104-116
    • /
    • 2022
  • The purpose of this study is to systematically determine an optimal percentile cut-off in histogram analysis for calculating the mean parameters obtained from a non-Gaussian continuous-time random-walk (CTRW) diffusion model for differentiating individual glioma grades. This retrospective study included 90 patients with histopathologically proven gliomas (42 grade II, 19 grade III, and 29 grade IV). We performed diffusion-weighted imaging using 17 b-values (0-4000 s/mm2) at 3T, and analyzed the images with the CTRW model to produce an anomalous diffusion coefficient (Dm) along with temporal (𝛼) and spatial (𝛽) diffusion heterogeneity parameters. Given the tumor ROIs, we created a histogram of each parameter; computed the P-values (using a Student's t-test) for the statistical differences in the mean Dm, 𝛼, or 𝛽 for differentiating grade II vs. grade III gliomas and grade III vs. grade IV gliomas at different percentiles (1% to 100%); and selected the highest percentile with P < 0.05 as the optimal percentile. We used the mean parameter values calculated from the optimal percentile cut-offs to do a receiver operating characteristic (ROC) analysis based on individual parameters or their combinations. We compared the results with those obtained by averaging data over the entire region of interest (i.e., 100th percentile). We found the optimal percentiles for Dm, 𝛼, and 𝛽 to be 68%, 75%, and 100% for differentiating grade II vs. III and 58%, 19%, and 100% for differentiating grade III vs. IV gliomas, respectively. The optimal percentile cut-offs outperformed the entire-ROI-based analysis in sensitivity (0.761 vs. 0.690), specificity (0.578 vs. 0.526), accuracy (0.704 vs. 0.639), and AUC (0.671 vs. 0.599) for grade II vs. III differentiations and in sensitivity (0.789 vs. 0.578) and AUC (0.637 vs. 0.620) for grade III vs. IV differentiations, respectively. 
Percentile-based histogram analysis, coupled with the multi-parametric approach enabled by the CTRW diffusion model using high b-values, can improve glioma grading.
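The percentile cut-off itself is a simple operation: the parameter mean is taken over only the lowest p% of voxel values in the ROI rather than the whole ROI (the 100th percentile reduces to the entire-ROI mean). A sketch, assuming the cut-off keeps the lower tail of the histogram:

```python
def percentile_mean(values, pct):
    """Mean over only the lowest `pct` percent of the values in an ROI,
    e.g. a 68th-percentile mean of Dm instead of the full-ROI mean.
    That the cut-off keeps the lower tail is an assumption of this sketch."""
    xs = sorted(values)
    k = max(1, round(len(xs) * pct / 100))  # number of voxels kept
    return sum(xs[:k]) / k
```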

Studies on the Computer Programming of Statistical Methods (II) (품질관리기법(品質管理技法)의 전산화(電算化)에 관(關)한 연구(硏究)(II))

  • Jeong, Su-Il
    • Journal of Korean Society for Quality Management
    • /
    • v.14 no.1
    • /
    • pp.19-25
    • /
    • 1986
  • This paper studies the computer programming of statistical methods. A few computer programs are developed for computing the basic statistics and the coefficients of process capability for raw and grouped data; drawing the frequency table and histogram; and goodness-of-fit testing for normality, with analyses for stratification if necessary. Special emphasis is laid on significant digits and rounding-off in the output. A running result for a hypothetical example appears in the Appendix.

  • PDF
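The process-capability part of the program list can be sketched with the standard Cp/Cpk definitions; the frequency-table and normality-test routines are not reproduced here:

```python
def process_capability(data, lsl, usl):
    """Cp and Cpk from raw data and the lower/upper specification limits,
    using the sample standard deviation (n - 1 denominator)."""
    n = len(data)
    m = sum(data) / n
    s = (sum((x - m) ** 2 for x in data) / (n - 1)) ** 0.5
    cp = (usl - lsl) / (6 * s)                 # potential capability
    cpk = min(usl - m, m - lsl) / (3 * s)      # capability with centering
    return cp, cpk
```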