• Title/Summary/Keyword: Information Density

A Note on Support Vector Density Estimation with Wavelets

  • Lee, Sung-Ho
    • Journal of the Korean Data and Information Science Society / v.16 no.2 / pp.411-418 / 2005
  • We review support vector and wavelet density estimation. The relationship between support vector and wavelet density estimation in a reproducing kernel Hilbert space (RKHS) is investigated so that wavelets can be used as a family of support vector kernels in support vector density estimation.
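
For readers who want a concrete picture of how a wavelet can play the role of a kernel in density estimation, here is a minimal sketch (not taken from the paper): it plugs the Ricker ("Mexican hat") wavelet into a weighted kernel density estimate. In a support vector formulation the weights would come from a constrained optimization; here they are simply taken as given, and the function names and bandwidth are illustrative choices.

```python
import numpy as np

def ricker_kernel(u):
    """Ricker ("Mexican hat") wavelet, a common wavelet kernel choice."""
    return (1.0 - u**2) * np.exp(-0.5 * u**2) * 2.0 / (np.sqrt(3.0) * np.pi**0.25)

def wavelet_density_estimate(x_grid, samples, weights=None, bandwidth=0.5):
    """Weighted kernel density estimate with a wavelet kernel.
    Wavelet kernels can take negative values, so the raw estimate is not
    guaranteed to be non-negative everywhere."""
    n = len(samples)
    if weights is None:
        weights = np.full(n, 1.0 / n)                 # plain (unweighted) estimate
    u = (x_grid[:, None] - samples[None, :]) / bandwidth
    return (ricker_kernel(u) @ weights) / bandwidth

# Toy usage on a standard normal sample
rng = np.random.default_rng(0)
data = rng.normal(size=200)
grid = np.linspace(-4, 4, 81)
f_hat = wavelet_density_estimate(grid, data, bandwidth=0.6)
```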

Calibration Technique of Liquid Density Measurement using Magnetostriction Technology (자기 변형 기술을 이용한 액체 밀도 측정의 보정 기술)

  • Seo, Moogyo;Hong, Youngho;Choi, Inseoup
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.8 / pp.178-184 / 2014
  • In this study, we develop a liquid density sensor that measures the equilibrium position at which gravity and buoyancy balance, which corresponds to the liquid density, using distance measurement based on magnetostriction technology. To improve the accuracy of the sensor system, we derive the equation relating liquid density to the displacement of the density sensor and devise a calibration method for the magnetostrictive liquid density sensor. Using the fabricated sensing system and the derived equation, we measured the density of several liquids and compared the results with those of a high-accuracy oscillating U-tube density meter with a resolution of 0.000001 g/cc. The deviation between the two density measuring systems was less than 0.001 g/cc.
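
The abstract does not reproduce the calibration equation itself, so the following is only an illustrative sketch of the underlying physics: for a hypothetical cylindrical float of known mass and cross-section, the balance between gravity and buoyancy ties the liquid density to the submerged depth that a magnetostrictive position sensor would measure. All names and numbers below are made up for illustration.

```python
def liquid_density(float_mass_kg, float_area_m2, submerged_depth_m):
    """At equilibrium, buoyancy equals weight: rho * g * (A * d) = m * g,
    so rho = m / (A * d).  Returns the liquid density in kg/m^3."""
    displaced_volume_m3 = float_area_m2 * submerged_depth_m
    return float_mass_kg / displaced_volume_m3

# Example: a 0.050 kg float with 5 cm^2 cross-section floating 0.100 m deep
rho = liquid_density(0.050, 5e-4, 0.100)   # 1000.0 kg/m^3, i.e. 1.000 g/cc
```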

A Density Peak Clustering Algorithm Based on Information Bottleneck

  • Yongli Liu;Congcong Zhao;Hao Chao
    • Journal of Information Processing Systems / v.19 no.6 / pp.778-790 / 2023
  • Although density peak clustering can often yield excellent results, there is still room for improvement when dealing with complex, high-dimensional datasets. One of the main limitations of this algorithm is its reliance on geometric distance as the sole similarity measurement. To address this limitation, we draw inspiration from information bottleneck theory and propose a novel density peak clustering algorithm that incorporates this theory as a similarity measure. Specifically, our algorithm utilizes the joint probability distribution between data objects and feature information, and employs the loss of mutual information as the measurement standard. This approach not only eliminates the potential for subjective error in selecting a similarity method, but also enhances performance on datasets with multiple centers and high dimensionality. To evaluate the effectiveness of our algorithm, we conducted experiments using ten carefully selected datasets and compared the results with three other algorithms. The experimental results demonstrate that our information bottleneck-based density peaks clustering (IBDPC) algorithm consistently achieves high levels of accuracy, highlighting its potential as a valuable tool for data clustering tasks.
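
To make the density-peak step concrete, here is a minimal sketch of the standard decision-graph quantities (local density rho and separation delta) with a pluggable pairwise dissimilarity. Plain Euclidean distance is used as a stand-in below; the IBDPC algorithm described above would substitute its information-bottleneck (mutual-information loss) measure, which is not reproduced here.

```python
import numpy as np

def density_peaks(X, d_c=1.0, dissimilarity=None):
    """Compute the density-peak quantities for each point:
    rho   - local density (Gaussian kernel with scale d_c)
    delta - dissimilarity to the nearest point of higher density
    Cluster centers are then the points with large rho * delta."""
    n = len(X)
    if dissimilarity is None:
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # Euclidean stand-in
    else:
        D = np.array([[dissimilarity(a, b) for b in X] for a in X])
    rho = np.exp(-(D / d_c) ** 2).sum(axis=1) - 1.0    # subtract self-contribution
    delta = np.empty(n)
    for i in range(n):
        higher = np.flatnonzero(rho > rho[i])
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta

# Toy usage on two well-separated blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
rho, delta = density_peaks(X, d_c=0.5)
centers = np.argsort(rho * delta)[-2:]       # indices of the two peak points
```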

Evaluation of Information Systems Using Intelligence Density (지능밀도를 이용한 정보시스템의 평가)

  • Kim, Guk;Song, Gi-Won
    • Proceedings of the Korean Society for Quality Management Conference / 2006.04a / pp.86-91 / 2006
  • Companies must become more intelligent in order to survive in rapidly changing environments, and they must decide whether to build information systems to support their decision making. But how can we know whether a new system will make us more intelligent than the old one? One answer lies in the concept of Intelligence Density. In this study, the Intelligence Density concept is introduced and a way to apply it to the evaluation of information systems is presented. Intelligence Density should be studied further to help managers make the right decisions.

A note on nonparametric density deconvolution by weighted kernel estimators

  • Lee, Sungho
    • Journal of the Korean Data and Information Science Society / v.25 no.4 / pp.951-959 / 2014
  • Hazelton and Turlach (2009) recently proposed a weighted kernel density estimator for the deconvolution problem. In the case of Gaussian kernels and measurement error, they argued that the weighted kernel density estimator is competitive with the classical deconvolution kernel estimator. In this paper we consider weighted kernel density estimators when the sample observations are contaminated by double exponentially distributed errors. Their performance is compared with that of the classical deconvolution kernel estimator and of a kernel density estimator based on support vector regression by means of a simulation study. The weighted density estimator with the Gaussian kernel shows numerical instability when the optimization function is implemented in practice, whereas the weighted density estimates with the double exponential kernel follow patterns very similar to those of the classical kernel density estimates in the simulations, although their shape is less satisfactory than that of the classical kernel density estimator with the Gaussian kernel.
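
As background, the classical deconvolution kernel estimator has a simple closed form when the kernel is Gaussian and the measurement error is Laplace (double exponential) with scale sigma: the deconvoluting kernel becomes K*(u) = phi(u) * (1 + (sigma/h)^2 * (1 - u^2)), where phi is the standard normal density. The sketch below implements only this classical estimator; the weighted estimator studied in the paper additionally re-weights the observations and is not shown.

```python
import numpy as np

def deconv_kde_laplace(x_grid, obs, h, sigma):
    """Classical deconvolution KDE with a standard Gaussian kernel when the
    additive measurement error is Laplace with scale sigma
    (error characteristic function 1 / (1 + sigma**2 * t**2))."""
    u = (x_grid[:, None] - obs[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)          # Gaussian kernel
    k_star = phi * (1.0 + (sigma / h) ** 2 * (1.0 - u**2))    # deconvoluting kernel
    return k_star.mean(axis=1) / h

# Toy usage: X ~ N(0, 1) observed with Laplace(scale=0.3) noise
rng = np.random.default_rng(2)
x_true = rng.normal(size=300)
y_obs = x_true + rng.laplace(scale=0.3, size=300)
grid = np.linspace(-4, 4, 81)
f_hat = deconv_kde_laplace(grid, y_obs, h=0.5, sigma=0.3)
```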

Reducing Bias of the Minimum Hellinger Distance Estimator of a Location Parameter

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / v.17 no.1 / pp.213-220 / 2006
  • Since Beran (1977) developed minimum Hellinger distance estimation, the method has been a popular topic in the field of robust estimation. In defining the distance, a kernel density estimator is widely used as the nonparametric density estimate. In this article, however, we show that combining a kernel density estimator with an empirical density can yield a smaller bias of the minimum Hellinger distance estimator of a location parameter than using a kernel density estimator alone.
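
For context, a minimal sketch of the classical minimum Hellinger distance estimate of a location parameter is given below; it uses only a kernel density estimate, so it does not implement the bias-reduced kernel-plus-empirical combination proposed in the paper. The normal location model and grid settings are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde, norm

def mhde_location(data):
    """Minimize the squared Hellinger distance between a Gaussian KDE of the
    data and the model density N(theta, 1) over the location theta."""
    kde = gaussian_kde(data)
    grid = np.linspace(data.min() - 3.0, data.max() + 3.0, 400)
    step = grid[1] - grid[0]
    f_hat = np.clip(kde(grid), 0.0, None)

    def hellinger_sq(theta):
        f_model = norm.pdf(grid, loc=theta, scale=1.0)
        affinity = np.sum(np.sqrt(f_hat * f_model)) * step   # numerical integral
        return 2.0 - 2.0 * affinity

    return minimize_scalar(hellinger_sq, bounds=(grid[0], grid[-1]),
                           method="bounded").x

# Toy usage: 5% gross outliers barely move the estimate away from 0
rng = np.random.default_rng(3)
sample = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])
theta_hat = mhde_location(sample)
```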

Piecewise Continuous Linear Density Estimator

  • Jang, Dae-Heung
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.959-968 / 2005
  • The piecewise linear histogram can be used as a simple and efficient tool for density estimation, but it is a discontinuous function. We propose the piecewise continuous linear histogram as a simple and efficient density estimator and as an alternative to the piecewise linear histogram.
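
The abstract does not spell out the construction, so the sketch below shows one standard way to obtain a continuous piecewise linear density estimate from a histogram: join the bin heights at the bin midpoints and pad with zero-height knots on either side (a frequency polygon). It is offered only as an illustration of the idea, not as the paper's estimator.

```python
import numpy as np

def continuous_piecewise_linear_density(data, bins=20):
    """Histogram-based, continuous piecewise linear density estimate
    (frequency-polygon style).  Padding the midpoints with zero-height
    knots one bin width outside keeps the estimate continuous and makes
    it integrate to one."""
    heights, edges = np.histogram(data, bins=bins, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    knots_x = np.concatenate([[mids[0] - width], mids, [mids[-1] + width]])
    knots_y = np.concatenate([[0.0], heights, [0.0]])
    return lambda x: np.interp(x, knots_x, knots_y)

# Toy usage
rng = np.random.default_rng(4)
f_hat = continuous_piecewise_linear_density(rng.normal(size=500))
values = f_hat(np.linspace(-4.0, 4.0, 41))
```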

Adaptive Signal Separation with Maximum Likelihood

  • Zhao, Yongjian;Jiang, Bin
    • Journal of Information Processing Systems / v.16 no.1 / pp.145-154 / 2020
  • Maximum likelihood (ML) is asymptotically the best estimator as the number of training samples approaches infinity. This paper derives an adaptive algorithm for the blind signal processing problem based on a gradient optimization criterion. A parametric density model is introduced through a parameterized generalized distribution family in the ML framework. After specifying a limited number of parameters, the density of a specific original signal can be approximated automatically by the constructed density function. Consequently, signal separation can be conducted without any prior information about the probability density of the desired original signal. Simulations on classical biomedical signals confirm the performance of the derived technique.
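
The abstract does not give the update rule or the exact density family, so the following is only a generic sketch of adaptive ML-based separation: a natural-gradient update of the unmixing matrix with a score function taken from a generalized Gaussian source model p(s) proportional to exp(-|s|^alpha). The shape parameter alpha here stands in for the paper's parameterized distribution family.

```python
import numpy as np

def ml_separation(X, lr=0.01, iters=300, alpha=1.5):
    """Adaptive ML blind separation (natural-gradient sketch).
    X: mixed signals of shape (n_sources, n_samples).
    phi(y) = sign(y) * |y|**(alpha - 1) is the score of a generalized
    Gaussian source model; W is the unmixing matrix being adapted."""
    n, T = X.shape
    W = np.eye(n)
    for _ in range(iters):
        Y = W @ X                                          # current source estimates
        phi = np.sign(Y) * np.abs(Y) ** (alpha - 1.0)
        W += lr * (np.eye(n) - (phi @ Y.T) / T) @ W        # natural-gradient ML step
    return W

# Toy usage: two Laplace-distributed sources mixed by a fixed matrix
rng = np.random.default_rng(5)
S = rng.laplace(size=(2, 2000))
A = np.array([[1.0, 0.6], [0.4, 1.0]])
W_hat = ml_separation(A @ S)
recovered = W_hat @ (A @ S)
```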

Application of Fuzzy Information Representation Using Frequency Ratio and Non-parametric Density Estimation to Multi-source Spatial Data Fusion for Landslide Hazard Mapping

  • Park No-Wook;Chi Kwang-Hoon;Kwon Byung-Doo
    • Journal of the Korean earth science society / v.26 no.2 / pp.114-128 / 2005
  • Fuzzy information representation of multi-source spatial data is applied to landslide hazard mapping. Information representation based on frequency ratio and non-parametric density estimation is used to construct fuzzy membership functions. Of particular interest is the representation of continuous data to prevent loss of information. The non-parametric density estimation method applied here is Parzen window estimation, which can use continuous data directly without any categorization procedure. The effect of the new continuous data representation method on the final integrated result is evaluated by a validation procedure. To illustrate the proposed scheme, a case study from Jangheung, Korea for landslide hazard mapping is presented. Analysis of the results indicates that the proposed methodology considerably improves prediction capabilities compared with the traditional representation of continuous data.
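
The abstract does not give the membership construction explicitly, so the sketch below shows one plausible way to turn Parzen window estimates into a fuzzy membership function for a continuous layer without categorizing it: form a frequency-ratio-like likelihood ratio between the layer's density at landslide locations and over the whole study area, then rescale it to [0, 1]. Variable names and the synthetic slope data are purely illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fuzzy_membership_from_parzen(all_values, landslide_values, grid):
    """Fuzzy membership for one continuous layer (e.g. slope angle):
    Gaussian Parzen window densities at landslide cells and over all cells,
    combined into a frequency-ratio-like likelihood ratio and rescaled to [0, 1]."""
    f_slide = gaussian_kde(landslide_values)(grid)
    f_all = gaussian_kde(all_values)(grid)
    ratio = f_slide / np.maximum(f_all, 1e-12)
    return (ratio - ratio.min()) / (ratio.max() - ratio.min())

# Toy usage with synthetic slope angles (degrees)
rng = np.random.default_rng(6)
all_slopes = rng.uniform(0, 45, 5000)
slide_slopes = rng.normal(30, 5, 200)        # landslides cluster on steeper slopes
grid = np.linspace(0, 45, 91)
membership = fuzzy_membership_from_parzen(all_slopes, slide_slopes, grid)
```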

Mutual Information in Naive Bayes with Kernel Density Estimation (나이브 베이스에서의 커널 밀도 측정과 상호 정보량)

  • Xiang, Zhongliang;Yu, Xiangru;Kang, Dae-Ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.86-88 / 2014
  • The Naive Bayes (NB) independence assumption has some harmful effects when classifying real-world data. To relax this assumption, we propose an approach called Naive Bayes Mutual Information Attribute Weighting with Smooth Kernel Density Estimation (NBMIKDE), which combines smooth kernel density estimates for the attributes with an attribute weighting method based on a mutual information measure.
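
The abstract gives only the ingredients, so the class below is an illustrative combination rather than the paper's NBMIKDE formulation: Gaussian kernel density estimates for each attribute within each class, with each attribute's log-likelihood weighted by its mutual information with the class label. The class name and the exact weighting scheme are assumptions made for the sketch.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.feature_selection import mutual_info_classif

class KdeMiNaiveBayes:
    """Naive Bayes with kernel density class-conditionals and
    mutual-information attribute weights:
    score(c | x) = log P(c) + sum_j w_j * log f_{c,j}(x_j)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = {c: float(np.mean(y == c)) for c in self.classes_}
        self.kdes_ = {c: [gaussian_kde(X[y == c, j]) for j in range(X.shape[1])]
                      for c in self.classes_}
        self.weights_ = mutual_info_classif(X, y)      # one weight per attribute
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            log_lik = np.column_stack(
                [np.log(self.kdes_[c][j](X[:, j]) + 1e-300) for j in range(X.shape[1])])
            scores.append(np.log(self.priors_[c]) + log_lik @ self.weights_)
        return self.classes_[np.argmax(np.column_stack(scores), axis=1)]

# Toy usage on two synthetic classes
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(1.5, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)
preds = KdeMiNaiveBayes().fit(X, y).predict(X)
```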
