• Title/Abstract/Keyword: Information Density


A Note on Support Vector Density Estimation with Wavelets

  • Lee, Sung-Ho
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 2 / pp.411-418 / 2005
  • We review support vector and wavelet density estimation. The relationship between support vector and wavelet density estimation in reproducing kernel Hilbert space (RKHS) is investigated in order to use wavelets as a variety of support vector kernels in support vector density estimation.

Calibration Technique of Liquid Density Measurement using Magnetostriction Technology

  • 서무교;홍영호;최인섭
    • Journal of the Institute of Electronics and Information Engineers / Vol. 51, No. 8 / pp.178-184 / 2014
  • Applying distance measurement based on magnetostriction technology, we developed a liquid density sensor that measures the equilibrium position where buoyancy, which depends on the liquid density, balances gravity. To improve the precision of this system, we derived a relation between the change in liquid density and the travel distance of the density sensor, and used it to establish a two-point calibration method for the sensor. Liquid densities were measured using the fabricated sensor system and the derived relation, and the results were compared with those of a high-precision oscillating U-tube density meter (resolution 0.000001 g/cc). The deviation between the two density measurement systems was confirmed to be less than 0.001 g/cc.
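
The two-point calibration described above can be sketched as a linear map from sensor position to density. The reference positions and densities below are hypothetical, and the linear form is a simplification of the paper's derived relation:

```python
def two_point_calibration(x1, rho1, x2, rho2):
    """Build a position-to-density map from two reference liquids,
    assuming the position-density relation is linear between them."""
    slope = (rho2 - rho1) / (x2 - x1)
    return lambda x: rho1 + slope * (x - x1)

# Hypothetical references: water at 1.000 g/cc and an oil at 0.800 g/cc.
density_of = two_point_calibration(x1=10.0, rho1=1.000, x2=25.0, rho2=0.800)
print(round(density_of(17.5), 3))  # midway position -> 0.9
```

In practice the two reference liquids would bracket the densities of interest, since the linear approximation is only local.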

A Density Peak Clustering Algorithm Based on Information Bottleneck

  • Yongli Liu;Congcong Zhao;Hao Chao
    • Journal of Information Processing Systems / Vol. 19, No. 6 / pp.778-790 / 2023
  • Although density peak clustering often yields excellent results, there is still room for improvement when dealing with complex, high-dimensional datasets. One of the main limitations of the algorithm is its reliance on geometric distance as the sole similarity measure. To address this limitation, we draw inspiration from information bottleneck theory and propose a novel density peak clustering algorithm that incorporates the theory as a similarity measure. Specifically, our algorithm utilizes the joint probability distribution between data objects and feature information, and employs the loss of mutual information as the measurement standard. This approach not only eliminates the potential for subjective error in selecting a similarity method, but also enhances performance on datasets with multiple centers and high dimensionality. To evaluate the effectiveness of our algorithm, we conducted experiments using ten carefully selected datasets and compared the results with three other algorithms. The experimental results demonstrate that our information bottleneck-based density peak clustering (IBDPC) algorithm consistently achieves high levels of accuracy, highlighting its potential as a valuable tool for data clustering tasks.
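
As background, the density-peak step that IBDPC builds on can be sketched as follows. This is the standard Rodriguez-Laio formulation with geometric distance, not the information-bottleneck similarity the paper substitutes for it:

```python
import numpy as np

def density_peaks(X, dc, n_clusters):
    """Classic density-peak clustering: local density rho within cutoff dc,
    and delta = distance to the nearest point of higher density.
    Cluster centers are points where both rho and delta are large."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1.0            # neighbours inside the cutoff
    order = np.argsort(-rho)                    # descending density, ties by index
    delta = np.empty(n)
    nneigh = np.zeros(n, dtype=int)
    delta[order[0]] = D[order[0]].max()
    for k in range(1, n):
        i, earlier = order[k], order[:k]
        j = earlier[np.argmin(D[i, earlier])]   # nearest higher-density point
        delta[i], nneigh[i] = D[i, j], j
    centers = np.argsort(-(rho * delta))[:n_clusters]
    labels = -np.ones(n, dtype=int)
    labels[centers] = np.arange(n_clusters)
    for i in order:                             # in descending density, the
        if labels[i] == -1:                     # neighbour is already labelled
            labels[i] = labels[nneigh[i]]
    return labels
```

Replacing the distance matrix `D` with an information-theoretic dissimilarity is the kind of substitution the paper makes.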

Evaluation of Information Systems Using Intelligence Density

  • 김국;송기원
    • Korean Society for Quality Management Conference Proceedings / 2006 Spring Conference / pp.86-91 / 2006
  • Companies must become more intelligent to survive in rapidly changing environments, and they must decide whether to build information systems that support their decision making. But how can we know that a new system will make us more intelligent than the old one? The answer lies in the concept of Intelligence Density. In this study, the Intelligence Density concept is introduced, and a way to apply it to information systems is presented. Intelligence Density deserves further study to help managers make the right decisions.

A note on nonparametric density deconvolution by weighted kernel estimators

  • Lee, Sungho
    • Journal of the Korean Data and Information Science Society / Vol. 25, No. 4 / pp.951-959 / 2014
  • Recently, Hazelton and Turlach (2009) proposed a weighted kernel density estimator for the deconvolution problem. In the case of Gaussian kernels and measurement error, they argued that the weighted kernel density estimator is competitive with the classical deconvolution kernel estimator. In this paper we consider weighted kernel density estimators when the sample observations are contaminated by double exponentially distributed errors. The performance of the weighted kernel density estimators is compared with that of the classical deconvolution kernel estimator and a kernel density estimator based on the support vector regression method by means of a simulation study. The weighted density estimator with the Gaussian kernel shows numerical instability in the practical implementation of the optimization function, while the weighted density estimates with the double exponential kernel follow patterns very similar to the classical kernel density estimates in the simulations, although their shape is less satisfactory than that of the classical kernel density estimator with the Gaussian kernel.
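
For reference, the classical Gaussian kernel density estimator that serves as the baseline here can be sketched as below; the weighted estimators the paper studies replace the uniform 1/n weights with weights chosen to compensate for the measurement error:

```python
import numpy as np

def gaussian_kde(x, data, h):
    """Classical kernel density estimate with a Gaussian kernel and
    bandwidth h, evaluated at the points in x."""
    u = (x[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return k.sum(axis=1) / (len(data) * h)
```

Because each kernel integrates to one, the estimate integrates to one as well, which is what the weighted variants must preserve when they reweight the terms.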

Reducing Bias of the Minimum Hellinger Distance Estimator of a Location Parameter

  • Pak, Ro-Jin
    • Journal of the Korean Data and Information Science Society / Vol. 17, No. 1 / pp.213-220 / 2006
  • Since Beran (1977) developed minimum Hellinger distance estimation, this method has been a popular topic in the field of robust estimation. In the process of defining a distance, a kernel density estimator has been widely used. In this article, however, we show that a combination of a kernel density estimator and an empirical density can result in a smaller bias of the minimum Hellinger distance estimator of a location parameter than using a kernel density estimator alone.
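
A minimal grid-search sketch of minimum Hellinger distance estimation for a normal location parameter, using a plain Gaussian KDE as the density estimate (the bias reduction studied in the paper additionally mixes in an empirical density, which is not reproduced here):

```python
import numpy as np

def mhd_location(data, candidates, h):
    """Pick theta minimizing the squared Hellinger distance between a
    Gaussian KDE of the data and the N(theta, 1) model density."""
    xs = np.linspace(data.min() - 5, data.max() + 5, 2001)
    dx = xs[1] - xs[0]
    u = (xs[:, None] - data[None, :]) / h
    fhat = np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

    def hellinger2(theta):
        f = np.exp(-0.5 * (xs - theta)**2) / np.sqrt(2 * np.pi)
        return ((np.sqrt(fhat) - np.sqrt(f))**2).sum() * dx  # Riemann sum

    return min(candidates, key=hellinger2)
```

Minimizing the Hellinger distance rather than maximizing the likelihood is what gives the estimator its robustness to outliers.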

Piecewise Continuous Linear Density Estimator

  • Jang, Dae-Heung
    • Journal of the Korean Data and Information Science Society / Vol. 16, No. 4 / pp.959-968 / 2005
  • The piecewise linear histogram can be used as a simple and efficient density estimator. But this piecewise linear histogram is a discontinuous function. We propose the piecewise continuous linear histogram as a simple and efficient density estimator and as an alternative to the piecewise linear histogram.
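
In the same spirit, a continuous piecewise-linear density can be obtained by linearly interpolating histogram heights between bin midpoints (the classical frequency polygon); this sketch illustrates the idea, not the paper's exact construction:

```python
import numpy as np

def frequency_polygon(data, bins):
    """Continuous piecewise-linear density estimator: linear interpolation
    of the density-normalized histogram heights at the bin midpoints."""
    heights, edges = np.histogram(data, bins=bins, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    return lambda x: np.interp(x, mids, heights, left=0.0, right=0.0)
```

Unlike the raw histogram, this estimate has no jumps at the bin edges, though a careful construction must also adjust the endpoints so the estimate still integrates to one.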

Adaptive Signal Separation with Maximum Likelihood

  • Zhao, Yongjian;Jiang, Bin
    • Journal of Information Processing Systems / Vol. 16, No. 1 / pp.145-154 / 2020
  • Maximum likelihood (ML) is asymptotically the best estimator as the number of training samples approaches infinity. This paper derives an adaptive algorithm for the blind signal separation problem based on a gradient optimization criterion. A parametric density model is introduced through a parameterized generalized distribution family in the ML framework. After specifying a limited number of parameters, the density of a specific original signal can be approximated automatically by the constructed density function. Consequently, signal separation can be conducted without any prior information about the probability density of the desired original signal. Simulations on classical biomedical signals confirm the performance of the derived technique.
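
A compact sketch of the gradient-based ML separation idea, using a fixed tanh score function in a natural-gradient update; the paper instead parameterizes a generalized density family so the score adapts to the source, and the mixing matrix and sources here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
S = rng.laplace(size=(2, n))                   # independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])         # unknown mixing matrix
X = A @ S                                      # observed mixtures

W = np.eye(2)                                  # demixing matrix estimate
lr = 0.05
for _ in range(500):
    Y = W @ X
    g = np.tanh(Y)                             # score for super-Gaussian densities
    W = W + lr * (np.eye(2) - (g @ Y.T) / n) @ W   # natural-gradient ML step
Y = W @ X                                      # recovered sources (up to scale/order)
```

At convergence, each row of `Y` matches one source up to scaling and sign, which is the usual indeterminacy of blind separation.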

Application of Fuzzy Information Representation Using Frequency Ratio and Non-parametric Density Estimation to Multi-source Spatial Data Fusion for Landslide Hazard Mapping

  • Park No-Wook;Chi Kwang-Hoon;Kwon Byung-Doo
    • Journal of the Korean Earth Science Society / Vol. 26, No. 2 / pp.114-128 / 2005
  • Fuzzy information representation of multi-source spatial data is applied to landslide hazard mapping. Information representation based on frequency ratio and non-parametric density estimation is used to construct fuzzy membership functions. Of particular interest is the representation of continuous data for preventing loss of information. The non-parametric density estimation method applied here is Parzen window estimation, which can use continuous data directly without any categorization procedure. The effect of the new continuous data representation method on the final integrated result is evaluated by a validation procedure. To illustrate the proposed scheme, a case study from Jangheung, Korea for landslide hazard mapping is presented. Analysis of the results indicates that the proposed methodology considerably improves prediction capability compared with the traditional continuous data representation.
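
A minimal sketch of how a Parzen window estimate can turn a continuous layer into a fuzzy membership function without categorization; the variable names and the max-rescaling to [0, 1] are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def parzen_membership(x, occurrences, h):
    """Gaussian Parzen window density of landslide occurrence values for a
    continuous layer (e.g. slope angle), rescaled so the peak equals 1."""
    u = (x[:, None] - occurrences[None, :]) / h
    dens = np.exp(-0.5 * u**2).sum(axis=1) / (len(occurrences) * h * np.sqrt(2 * np.pi))
    return dens / dens.max()
```

The key property is that nearby attribute values receive similar memberships, avoiding the information loss of hard class boundaries.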

Mutual Information in Naive Bayes with Kernel Density Estimation

  • 샹총량;유샹루;강대기
    • Korea Institute of Information and Communication Engineering Conference Proceedings / 2014 Spring Conference / pp.86-88 / 2014
  • The independence assumption underlying naive Bayes often has a harmful effect when classifying real-world data. To relax this assumption, we introduce the Naive Bayes Mutual Information Attribute Weighting with Smooth Kernel Density Estimation (NBMIKDE) approach, which combines a smooth kernel for attributes with an attribute-weighting scheme based on mutual information measures.
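
The mutual-information attribute weight at the heart of NBMIKDE can be sketched for a discretized attribute as follows (a plain plug-in estimate; the smooth-kernel part of the method, which handles continuous attributes, is not shown):

```python
import numpy as np

def mi_weight(x, y):
    """Plug-in estimate of the mutual information I(X; Y) between a
    discrete attribute x and the class labels y, in nats."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi
```

Attributes that carry more information about the class receive larger weights, which is how the scheme softens the naive independence assumption.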
