• Title/Summary/Keyword: Sample entropy


Efficient Controlled Selection

  • Ryu, Jea-Bok;Lee, Seung-Joo
    • Communications for Statistical Applications and Methods / v.4 no.1 / pp.151-159 / 1997
  • In sample surveys, we expect to select preferred samples that reduce the survey cost and increase the precision of estimators. Goodman and Kish (1950) introduced controlled selection as a method of sample selection that increases the probability of drawing preferred samples while decreasing the probability of drawing nonpreferred samples. In this paper, we obtain controlled plans using the maximum entropy principle and, when the order of nonpreferred samples is considered, propose an algorithm to obtain a controlled plan.


A Comparison on the Empirical Power of Some Normality Tests

  • Kim, Dae-Hak;Eom, Jun-Hyeok;Jeong, Heong-Chul
    • Journal of the Korean Data and Information Science Society / v.17 no.1 / pp.31-39 / 2006
  • In many cases, we obtain desired information through appropriate statistical analysis of collected data sets. Much statistical theory relies on the assumption that the data are normally distributed. In this paper, we compare the empirical power of several normality tests, including one based on a sample entropy quantity. A Monte Carlo simulation is conducted to calculate the empirical power of the considered normality tests, varying the sample size over various distributions.
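    As an illustration of how such an empirical power comparison works, here is a minimal Monte Carlo sketch in Python. It uses the Shapiro-Wilk test as a stand-in for the tests compared in the paper; the function names and default parameters are illustrative, not the authors' setup.

    ```python
    import numpy as np
    from scipy import stats

    def empirical_power(pvalue_test, sampler, n=50, alpha=0.05, reps=2000, seed=0):
        """Monte Carlo estimate of a test's empirical power: the fraction of
        simulated samples from `sampler` rejected at significance level alpha."""
        rng = np.random.default_rng(seed)
        rejected = sum(pvalue_test(sampler(rng, n)) < alpha for _ in range(reps))
        return rejected / reps

    # Shapiro-Wilk as the reference normality test; any p-value test plugs in.
    shapiro_p = lambda x: stats.shapiro(x).pvalue

    # Power against an exponential alternative, and size under the normal null.
    power_exp = empirical_power(shapiro_p, lambda rng, n: rng.exponential(size=n))
    size_norm = empirical_power(shapiro_p, lambda rng, n: rng.standard_normal(n))
    print(power_exp, size_norm)
    ```

    Repeating this over several sample sizes and alternative distributions reproduces the kind of power table such comparison studies report.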


Magnetic Properties and Magnetocaloric Effect in Ordered Double Perovskites Sr1.8Pr0.2FeMo1-xWxO6

  • Hussain, Imad;Anwar, Mohammad Shafique;Khan, Saima Naz;Lee, Chan Gyu;Koo, Bon Heun
    • Korean Journal of Materials Research / v.28 no.8 / pp.445-451 / 2018
  • We report the structural, magnetic and magnetocaloric properties of $Sr_{1.8}Pr_{0.2}FeMo_{1-x}W_xO_6$ ($0.0{\leq}x{\leq}0.4$) samples prepared by the conventional solid state reaction method. X-ray diffraction analysis confirms the formation of the tetragonal double perovskite structure with the I4/mmm space group in all the synthesized samples. Temperature-dependent magnetization measurements reveal that all the samples go through a ferromagnetic-to-paramagnetic phase transition with increasing temperature. The Arrott plot obtained for each synthesized sample demonstrates the second-order nature of the magnetic phase transition. The magnetic entropy change is obtained from the magnetic isotherms. The maximum magnetic entropy change and relative cooling power at an applied field of 2.5 T are found to be $0.40Jkg^{-1}K^{-1}$ and $69Jkg^{-1}$, respectively, for the $Sr_{1.8}Pr_{0.2}FeMoO_6$ sample. The tunability of magnetization and excellent magnetocaloric features at low applied magnetic field make these materials attractive for use in magnetic refrigeration technology.
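    Extracting the entropy change from magnetization isotherms typically uses the Maxwell relation $\Delta S_M(T) = \int_0^{H_{max}} (\partial M/\partial T)_H\, dH$. The following is a minimal numerical sketch of that step, using synthetic Curie-law magnetization purely for illustration; the paper's own data and processing details are not reproduced here.

    ```python
    import numpy as np

    def magnetic_entropy_change(T, H, M):
        """Magnetic entropy change ΔS_M(T) from magnetization isotherms via the
        Maxwell relation ΔS_M = ∫ (∂M/∂T)_H dH, using central finite
        differences in T and the trapezoid rule in H.

        T : (nT,) temperatures; H : (nH,) applied fields; M : (nT, nH) moments.
        Returns ΔS_M at the nT-2 interior temperatures.
        """
        dMdT = (M[2:] - M[:-2]) / (T[2:, None] - T[:-2, None])   # (nT-2, nH)
        return 0.5 * ((dMdT[:, 1:] + dMdT[:, :-1]) * np.diff(H)).sum(axis=1)

    # Synthetic isotherms from a Curie law M = C H / T (illustrative only),
    # for which the exact result is ΔS_M(T) = -C Hmax^2 / (2 T^2).
    C = 50.0
    T = np.linspace(100.0, 120.0, 21)
    H = np.linspace(0.0, 2.5, 26)
    M = C * H[None, :] / T[:, None]
    dS = magnetic_entropy_change(T, H, M)
    ```

    With real isotherm data the returned curve peaks near the magnetic transition temperature, which is where the maximum entropy change quoted above is read off.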

A Method for Optimizing the Structure of Neural Networks Based on Information Entropy

  • Yuan Hongchun;Xiong Fanlnu;Kei, Bai-Shi
    • Proceedings of the Korea Intelligent Information System Society Conference / 2001.01a / pp.30-33 / 2001
  • The number of hidden neurons in a feed-forward neural network is generally decided on the basis of experience. This usually results in a lack or redundancy of hidden neurons and limits the network's capacity to store what has been learned. This research proposes a new method for optimizing the number of hidden neurons based on information entropy. First, an initial neural network with enough hidden neurons is trained on a set of training samples. Second, the activation values of the hidden neurons are calculated by feeding in the training samples that the trained network identifies correctly. Third, candidate partitions of these activation values are tried and their information gain is calculated, and a decision tree that correctly divides the whole sample space is constructed. Finally, the important and related hidden neurons included in the tree are found by searching the whole tree, and the other, redundant hidden neurons are deleted. Thus the number of hidden neurons is decided. The proposed method is applied to building a neural network with the best number of hidden units for tea quality evaluation, and the result shows that the method is effective.
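    The core ingredient of the tree-building step is the information-gain criterion applied to each hidden unit's activations. A minimal sketch of just that criterion (the full decision-tree construction is not shown; the unit names and synthetic activations are hypothetical):

    ```python
    import numpy as np

    def entropy(labels):
        """Shannon entropy (bits) of a label array."""
        if len(labels) == 0:
            return 0.0
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    def information_gain(activations, labels, threshold):
        """Entropy reduction from splitting the samples on one hidden unit's
        activation value at the given threshold."""
        left = labels[activations <= threshold]
        right = labels[activations > threshold]
        weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
        return entropy(labels) - weighted

    # Hypothetical activations: unit A separates the two classes, unit B is noise.
    rng = np.random.default_rng(0)
    labels = np.repeat([0, 1], 100)
    unit_a = labels + 0.1 * rng.standard_normal(200)   # informative
    unit_b = rng.uniform(size=200)                     # redundant
    gain_a = information_gain(unit_a, labels, 0.5)
    gain_b = information_gain(unit_b, labels, 0.5)
    print(gain_a, gain_b)
    ```

    Units whose best split yields near-zero gain never enter the tree and are the ones the method deletes as redundant.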


Nonlinear Quality Indices Based on a Novel Lempel-Ziv Complexity for Assessing Quality of Multi-Lead ECGs Collected in Real Time

  • Zhang, Yatao;Ma, Zhenguo;Dong, Wentao
    • Journal of Information Processing Systems / v.16 no.2 / pp.508-521 / 2020
  • We compared a novel encoding Lempel-Ziv complexity (ELZC) with three common complexity algorithms, i.e., approximate entropy (ApEn), sample entropy (SampEn), and classic Lempel-Ziv complexity (CLZC), to determine a satisfactory complexity measure and corresponding quality indices for assessing the quality of multi-lead electrocardiograms (ECGs). First, we calculated the aforementioned algorithms on six artificial time series to compare their performance in discerning randomness and the inherent irregularity within time series. Then, to analyze the sensitivity of the algorithms to the content level of different noises within the ECG, we investigated their trends in five artificial synthetic noisy ECGs containing different noises at several signal-to-noise ratios. Finally, three quality indices based on the ELZC of the multi-lead ECG were proposed to assess the quality of 862 real 12-lead ECGs from the MIT databases. The results showed that the ELZC could discern randomness and the inherent irregularity within the six artificial time series, and could also reflect the content level of different noises within the five artificial synthetic ECGs. The AUCs of the three ELZC-based quality indices were statistically significant (>0.500). The ELZC and its three corresponding indices were more suitable for multi-lead ECG quality assessment than the other three algorithms.
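    Of the compared algorithms, sample entropy is the one most directly tied to this page's keyword. A textbook SampEn sketch follows (this is not the authors' ELZC; the parameter choices m = 2 and r = 0.2·SD are common defaults, not values from the paper):

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """Sample entropy SampEn(m, r) of a 1-D series.

        Counts pairs of length-m template vectors whose Chebyshev distance is
        below r (scaled by the series' standard deviation), repeats for length
        m+1, and returns -ln(A / B). Self-matches are excluded.
        """
        x = np.asarray(x, dtype=float)
        r = r * np.std(x)

        def count_matches(mm):
            # Overlapping length-mm template vectors.
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
            count = 0
            for i in range(len(templates) - 1):
                # Chebyshev distance from template i to all later templates.
                d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(d < r)
            return count

        b = count_matches(m)      # length-m matches
        a = count_matches(m + 1)  # length-(m+1) matches
        return -np.log(a / b)

    # A regular (periodic) series should score lower than a random one.
    rng = np.random.default_rng(0)
    regular = np.sin(np.linspace(0, 8 * np.pi, 300))
    noisy = rng.standard_normal(300)
    print(sample_entropy(regular), sample_entropy(noisy))
    ```

    Lower SampEn indicates more regularity, which is why noise-corrupted ECG segments score higher than clean ones in quality-assessment settings like the one above.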

Improving a Test for Normality Based on Kullback-Leibler Discrimination Information (쿨백-라이블러 판별정보에 기반을 둔 정규성 검정의 개선)

  • Choi, Byung-Jin
    • The Korean Journal of Applied Statistics / v.20 no.1 / pp.79-89 / 2007
  • A test for normality introduced by Arizono and Ohta (1989) is based on Kullback-Leibler discrimination information. The test statistic is derived from the discrimination information estimated using the sample entropy of Vasicek (1976) and the maximum likelihood estimator of the variance. However, these estimators are biased, so it is reasonable to use unbiased estimators to estimate the discrimination information accurately. In this paper, the Arizono-Ohta test for normality is improved. The derived test statistic is based on the bias-corrected entropy estimator and the uniformly minimum variance unbiased estimator of the variance. The properties of the improved KL test are investigated, and a Monte Carlo simulation is performed for power comparison.
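    Vasicek's spacing-based entropy estimator and the resulting KL-type statistic can be sketched as follows. This is the simple uncorrected version (plain MLE variance), not the paper's bias-corrected improvement; function names are illustrative.

    ```python
    import numpy as np

    def vasicek_entropy(x, m=None):
        """Vasicek (1976) sample-spacing estimate of differential entropy."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        if m is None:
            m = int(round(np.sqrt(n) + 0.5))
        # Clip indices so X_(i) = X_(1) below the range and X_(n) above it.
        upper = x[np.minimum(np.arange(n) + m, n - 1)]
        lower = x[np.maximum(np.arange(n) - m, 0)]
        return float(np.mean(np.log(n / (2.0 * m) * (upper - lower))))

    def kl_normality_stat(x):
        """Estimated KL discrimination of the sample against a fitted normal:
        log sqrt(2*pi*e*var) - H_mn.  Near 0 for normal data, larger for
        non-normal data; large values reject normality."""
        x = np.asarray(x, dtype=float)
        var = np.var(x)  # the paper refines this with an unbiased estimator
        return 0.5 * np.log(2.0 * np.pi * np.e * var) - vasicek_entropy(x)

    rng = np.random.default_rng(1)
    print(kl_normality_stat(rng.standard_normal(500)))   # small
    print(kl_normality_stat(rng.exponential(size=500)))  # larger
    ```

    Because the normal distribution maximizes entropy for a given variance, the statistic is non-negative in the limit, and departures from normality push it up.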

A cross-entropy algorithm based on Quasi-Monte Carlo estimation and its application in hull form optimization

  • Liu, Xin;Zhang, Heng;Liu, Qiang;Dong, Suzhen;Xiao, Changshi
    • International Journal of Naval Architecture and Ocean Engineering / v.13 no.1 / pp.115-125 / 2021
  • Simulation-based hull form optimization is a typical HEB (high-dimensional, computationally expensive, black-box) problem. Conventional optimization algorithms easily fall into the "curse of dimensionality" when dealing with HEB problems. The recently proposed Cross-Entropy (CE) optimization algorithm is an advanced stochastic optimization algorithm based on a probability model, which has the potential to deal with high-dimensional optimization problems. Currently, the CE algorithm is still in the theoretical research stage and is rarely applied to actual engineering optimization. One reason is that the Monte Carlo (MC) method is used to estimate the high-dimensional integrals in the parameter update, leading to a large sample size. This paper proposes an improved CE algorithm based on quasi-Monte Carlo (QMC) estimation using a high-dimensional truncated Sobol subsequence, referred to as the QMC-CE algorithm. The optimization performance of the proposed algorithm is better than that of the original CE algorithm. With a set of identical control parameters, tests on six standard test functions and a hull form optimization problem show that the proposed algorithm not only converges faster but can also be applied to complex simulation optimization problems.
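    A generic sketch of the QMC-CE idea for continuous minimization follows; it is not the authors' implementation. The sampling model, smoothing constant, and population sizes are illustrative assumptions, and SciPy's scrambled Sobol' generator stands in for the paper's truncated Sobol subsequence.

    ```python
    import numpy as np
    from scipy.stats import norm, qmc

    def qmc_ce_minimize(f, dim, iters=80, pop=64, elite_frac=0.125, seed=0):
        """Cross-entropy minimization with Sobol' quasi-Monte Carlo sampling.

        Keeps a diagonal Gaussian sampling model (mu, sigma); each iteration
        pushes a scrambled Sobol' batch through the normal inverse CDF, then
        refits (mu, sigma) to the elite (lowest-scoring) candidates, with
        smoothing on sigma to avoid premature collapse.
        """
        mu, sigma = np.zeros(dim), np.full(dim, 2.0)
        sobol = qmc.Sobol(d=dim, scramble=True, seed=seed)
        n_elite = max(2, int(pop * elite_frac))
        for _ in range(iters):
            u = np.clip(sobol.random(pop), 1e-12, 1 - 1e-12)
            x = mu + sigma * norm.ppf(u)               # candidate solutions
            elite = x[np.argsort([f(xi) for xi in x])[:n_elite]]
            mu = elite.mean(axis=0)
            sigma = 0.7 * sigma + 0.3 * elite.std(axis=0)
        return mu

    # Minimize a shifted sphere function as a smoke test.
    shifted_sphere = lambda x: float(np.sum((x - 3.0) ** 2))
    best = qmc_ce_minimize(shifted_sphere, dim=5)
    print(best)  # approaches [3, 3, 3, 3, 3]
    ```

    The QMC substitution matters because the low-discrepancy batch estimates the elite mean and variance with less sampling noise than plain Monte Carlo, which is the mechanism behind the faster convergence the paper reports.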

Sample-spacing Approach for the Estimation of Mutual Information (SAMPLE-SPACING 방법에 의한 상호정보의 추정)

  • Huh, Moon-Yul;Cha, Woon-Ock
    • The Korean Journal of Applied Statistics / v.21 no.2 / pp.301-312 / 2008
  • Mutual information is a measure of the association of an explanatory variable with a target variable, used for variable ranking and variable subset selection. This study concerns the sample-spacing approach, which can be used to estimate mutual information from data consisting of continuous explanatory variables and a categorical target variable without estimating a joint probability density function. The results of Monte Carlo simulation and experiments with real-world data show that m = 1 is preferable when using sample-spacing.
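    The decomposition I(X; Y) = H(X) − Σ_y p(y) H(X | Y = y), with each entropy estimated from sample spacings, can be sketched as follows. This uses a plain 1-spacing entropy estimator with no bias corrections, so it is only an illustrative version of the approach; the synthetic features are hypothetical.

    ```python
    import numpy as np

    def spacing_entropy(x, m=1):
        """m-spacing estimate of differential entropy (the study favors m = 1)."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        gaps = np.maximum(x[m:] - x[:-m], 1e-12)   # guard against tied values
        return float(np.mean(np.log(n / m * gaps)))

    def spacing_mutual_info(x, y):
        """I(X; Y) = H(X) - sum_y p(y) H(X | Y = y) for a continuous x and a
        categorical y, with no joint density estimation required."""
        x, y = np.asarray(x, dtype=float), np.asarray(y)
        h_cond = sum((y == c).mean() * spacing_entropy(x[y == c])
                     for c in np.unique(y))
        return spacing_entropy(x) - h_cond

    # An informative feature shifts with the class label; a noise feature doesn't.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)
    informative = np.where(y == 1, 3.0, 0.0) + rng.standard_normal(1000)
    noise = rng.standard_normal(1000)
    print(spacing_mutual_info(informative, y), spacing_mutual_info(noise, y))
    ```

    Ranking features by this estimate, and keeping the highest-scoring ones, is the variable-selection use the abstract describes.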

THE EXISTENCE OF PRODUCT BROWNIAN PROCESSES

  • Kwon, Joong-Sung
    • Journal of the Korean Mathematical Society / v.33 no.2 / pp.319-332 / 1996
  • Many authors have studied multiple stochastic integrals in pursuit of the existence of product processes in terms of multiple integrals. But there has not been much research into the structure of the product processes themselves. In this direction, a study emphasizing sample path continuity and boundedness properties was initiated in Pyke [9]. For details of the problem set-up and necessary notation, see [9]. Recently the weak limits of U-processes have been shown to be chaos processes, which are products of the same Brownian measures; see [2] and [7].


On the Length of Sample Sequence in Universal Statistical Test (유니버설 통계적 검정에서 표본 수열의 길이에 대한 분석)

  • Kang, Ju-Sung
    • Journal of the Korea Institute of Information Security & Cryptology / v.8 no.3 / pp.105-114 / 1998
  • We introduce the universal statistical test proposed by Maurer and analyze the meaning of the statistic used in the test. This test, which subsumes existing tests, can detect a wider class of statistical defects. Moreover, the test statistic is closely related to entropy, and in cryptographic applications it detects factors that affect the security of a system. Along with these features, the fact that the universal test requires a considerably longer sample sequence than existing tests has been pointed out as its drawback. In this paper, however, we show through a comparison with the frequency test that the universal test is in fact the more efficient tool for detecting a small bias.
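    A compact sketch of Maurer's statistic follows: the bit stream is split into L-bit blocks, the first Q blocks initialize a table of last-occurrence positions, and the test averages log2 of the gap since each subsequent block's pattern was last seen. The parameters below follow Maurer's usual recommendation Q ≥ 10·2^L; this is an illustrative implementation, not a full test with pass/fail thresholds.

    ```python
    import numpy as np

    def maurer_universal_stat(bits, L=6, Q=640):
        """Maurer's universal test statistic f (in bits per L-bit block).

        For a good generator f approaches a known constant (about 5.2177 for
        L = 6); redundancy in the source, such as bit bias, lowers it.
        """
        n_blocks = len(bits) // L
        blocks = np.reshape(bits[:n_blocks * L], (n_blocks, L))
        # Encode each L-bit block as an integer pattern.
        patterns = blocks @ (1 << np.arange(L - 1, -1, -1))
        last_seen = np.zeros(1 << L, dtype=int)
        for i in range(Q):                      # initialization segment
            last_seen[patterns[i]] = i + 1      # positions are 1-based
        total = 0.0
        for i in range(Q, n_blocks):            # test segment
            pos = i + 1
            total += np.log2(pos - last_seen[patterns[i]])
            last_seen[patterns[i]] = pos
        return total / (n_blocks - Q)

    rng = np.random.default_rng(42)
    bits = rng.integers(0, 2, size=6 * (640 + 10000))
    print(maurer_universal_stat(bits))  # near 5.2177 for unbiased bits
    ```

    The required sequence length grows quickly with L (Q alone needs at least 10·2^L blocks), which is the drawback the abstract weighs against the test's sensitivity to small biases.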