• Title/Summary/Keyword: Sample entropy

71 search results

Kullback-Leibler Information-Based Tests of Fit for Inverse Gaussian Distribution (역가우스분포에 대한 쿨백-라이블러 정보 기반 적합도 검정)

  • Choi, Byung-Jin
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.6
    • /
    • pp.1271-1284
    • /
    • 2011
  • The entropy-based test of fit for the inverse Gaussian distribution presented by Mudholkar and Tian (2002) can only be applied to the composite hypothesis that a sample is drawn from an inverse Gaussian distribution with both the location and scale parameters unknown. In application, however, a researcher may want a test of fit for an inverse Gaussian distribution with either one parameter known or both parameters known. In this paper, we introduce tests of fit for the inverse Gaussian distribution based on the Kullback-Leibler information as an extension of the entropy-based test. A window size must be chosen to implement the proposed tests. By means of Monte Carlo simulations, window sizes are determined for a wide range of sample sizes and the corresponding critical values of the test statistics are estimated. Power analysis for various alternatives shows that the Kullback-Leibler information-based goodness-of-fit tests have good power.
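The window size in spacing-based tests of this family enters through a Vasicek-type entropy estimate computed from order statistics. As a rough illustration only (not the paper's exact statistic; the boundary clamping and names below are assumptions):

```python
import numpy as np

def vasicek_entropy(x, m):
    """Vasicek-style spacing entropy estimate with window size m:
    mean of log(n/(2m) * (x_(i+m) - x_(i-m))), indices clamped at the ends."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    upper = np.minimum(np.arange(n) + m, n - 1)  # clamp i+m at n-1
    lower = np.maximum(np.arange(n) - m, 0)      # clamp i-m at 0
    spacings = x[upper] - x[lower]
    return np.mean(np.log(n / (2.0 * m) * spacings))
```

A useful sanity check is the scaling property H(cX) = H(X) + log c, which this estimator satisfies exactly.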

Large Magnetic Entropy Change in La0.55Ce0.2Ca0.25MnO3 Perovskite

  • Anwar, M.S.;Kumar, Shalendra;Ahmed, Faheem;Arshi, Nishat;Kim, G.W.;Lee, C.G.;Koo, Bon-Heun
    • Journal of Magnetics
    • /
    • v.16 no.4
    • /
    • pp.457-460
    • /
    • 2011
  • In this paper, the magnetic properties and magnetocaloric effect (MCE) in perovskite manganites of the type $La_{(0.75-X)}Ce_XCa_{0.25}MnO_3$ (x = 0.0, 0.2, 0.3 and 0.5), synthesized by the standard solid-state reaction method, are reported. From magnetic measurements as a function of temperature and applied magnetic field, we observed that the Curie temperature ($T_C$) of the prepared samples strongly depends on the Ce content and was found to be 255, 213 and 150 K for x = 0.0, 0.2 and 0.3, respectively. A large magnetocaloric effect in the vicinity of $T_C$ was observed, with a maximum magnetic entropy change (${\mid}{\Delta}S_M{\mid}_{max}$) of 3.31 and 6.40 J/kgK at 1.5 and 4 T, respectively, for $La_{0.55}Ce_{0.2}Ca_{0.25}MnO_3$. In addition, the relative cooling power (RCP) of the sample under a magnetic field variation of 1.5 T reaches 59 J/kg. These results suggest that the $La_{0.55}Ce_{0.2}Ca_{0.25}MnO_3$ compound could be a suitable candidate as a working substance in magnetic refrigeration at 213 K.
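Entropy changes of this kind are conventionally extracted from magnetization isotherms via the Maxwell relation ${\Delta}S_M(T) = \int_0^{H_{max}} (\partial M/\partial T)_H\, dH$. A minimal numerical sketch of that standard procedure (array names hypothetical; finite differences and the trapezoid rule assumed):

```python
import numpy as np

def delta_S_M(temps, fields, M):
    """|dS_M|(T) from magnetization isotherms M[i, j] measured at temperatures
    temps[i] (K) and applied fields fields[j] (T), via the Maxwell relation:
    finite-difference dM/dT at each field, then trapezoid-rule integration over H."""
    dM_dT = np.gradient(M, temps, axis=0)  # (dM/dT)_H at each field value
    # trapezoid rule over the field axis
    dS = np.sum(0.5 * (dM_dT[:, 1:] + dM_dT[:, :-1]) * np.diff(fields), axis=1)
    return np.abs(dS)
```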

The Annealing Effect on Magnetocaloric Properties of Fe91-xYxZr9 Alloys

  • Kim, K.S.;Min, S.G.;Zidanic, J.;Yu, S.C.
    • Journal of Magnetics
    • /
    • v.12 no.4
    • /
    • pp.133-136
    • /
    • 2007
  • We have studied the magnetocaloric effect in as-quenched and annealed $Fe_{91-x}Y_xZr_9$ alloys. Samples were prepared by arc melting the high-purity elemental constituents under an argon atmosphere and by single-roller melt spinning. The alloys were annealed for one hour at 773 K in a vacuum chamber. The magnetization behavior of the samples was measured with a vibrating sample magnetometer. The Curie temperature increases with increasing Y concentration (x = 0 to 8). The entropy variation ${\Delta}S_M$ was found to peak in the vicinity of the Curie temperature. The results show that the annealed $Fe_{86}Y_5Zr_9$ and $Fe_{83}Y_8Zr_9$ alloys exhibit a larger magnetocaloric effect than the as-quenched alloys: the value is 1.23 J/kg K for the annealed $Fe_{86}Y_5Zr_9$ alloy versus 0.89 J/kg K for the as-quenched alloy. Likewise, ${\Delta}S_M$ for the $Fe_{83}Y_8Zr_9$ alloy is 0.72 J/kg K as-quenched and 1.09 J/kg K annealed.

Efficient Adaptive Algorithms Based on Zero-Error Probability Maximization (영확률 최대화에 근거한 효율적인 적응 알고리듬)

  • Kim, Namyong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.5
    • /
    • pp.237-243
    • /
    • 2014
  • In this paper, a computation-efficient method for weight update in the algorithm based on maximization of the zero-error probability (MZEP) is proposed. The method utilizes the current slope value in calculating the next slope value, replacing the block processing that requires a summation operation at each sample time. The simulation results show that the proposed method yields the same performance as the original MZEP algorithm while significantly reducing the computational time and complexity, with no need for a buffer of error samples. The proposed algorithm also converges faster than the algorithm based on error-entropy minimization.
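The reuse-the-previous-value idea can be illustrated on the kernel sum that underlies such zero-error-probability estimates: instead of re-summing a Gaussian kernel over the whole error block at every sample time, one kernel term is added and one dropped. This is a rough sketch of the recursion, not the paper's exact update; the window length N and kernel width sigma are assumptions:

```python
import numpy as np

def gaussian_kernel(e, sigma):
    """Gaussian kernel evaluated at error value e."""
    return np.exp(-e**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def recursive_block_mean(errors, N, sigma):
    """Sliding-window kernel mean over the last N errors, updated recursively:
    add the newest kernel term, drop the oldest, never re-sum the block."""
    out = []
    s = 0.0
    for k, e in enumerate(errors):
        s += gaussian_kernel(e, sigma)
        if k >= N:
            s -= gaussian_kernel(errors[k - N], sigma)
        out.append(s / min(k + 1, N))
    return out
```

Each step costs two kernel evaluations instead of N, which is the source of the complexity saving described above.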

A Hill-Sliding Strategy for Initialization of Gaussian Clusters in the Multidimensional Space

  • Park, J.Kyoungyoon;Chen, Yung-H.;Simons, Daryl-B.;Miller, Lee-D.
    • Korean Journal of Remote Sensing
    • /
    • v.1 no.1
    • /
    • pp.5-27
    • /
    • 1985
  • A hill-sliding technique was devised to extract Gaussian clusters from the multivariate probability density estimates of sample data for the first step of iterative unsupervised classification. The underlying assumption in this approach was that each cluster possessed a unimodal normal distribution. The key idea was that the proposed clustering function could distinguish elements of a cluster under formation from the rest in the feature space. Initial clusters were extracted one by one according to the hill-sliding tactics. A dimensionless cluster compactness parameter was proposed as a universal measure of cluster goodness and used satisfactorily in test runs with Landsat multispectral scanner (MSS) data. The normalized divergence, defined as the cluster divergence divided by the entropy of the entire sample data, was utilized as a general separability measure between clusters. An overall clustering objective function was set forth in terms of cluster covariance matrices, from which the cluster compactness measure could be deduced. Minimal improvement of the initial data partitioning was evaluated by this objective function in eliminating scattered sparse data points. The hill-sliding clustering technique developed herein is potentially applicable to decomposing any multivariate mixture distribution into a number of unimodal distributions when an appropriate distribution function for the data set is employed.

One-step deep learning-based method for pixel-level detection of fine cracks in steel girder images

  • Li, Zhihang;Huang, Mengqi;Ji, Pengxuan;Zhu, Huamei;Zhang, Qianbing
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.153-166
    • /
    • 2022
  • Identifying fine cracks in steel bridge facilities is a challenging task in structural health monitoring (SHM). This study proposed an end-to-end crack image segmentation framework based on a one-step Convolutional Neural Network (CNN) for pixel-level object recognition with high accuracy. To address the challenges arising from small-object detection against complex backgrounds, efforts were made in loss-function selection, aiming at sample imbalance, and in module modification, in order to improve the generalization ability on complicated images. Specifically, loss functions were compared among the Binary Cross Entropy (BCE), Focal, Tversky and Dice losses, with the last three specialized for biased sample distributions. Structural modifications with dilated convolution, Spatial Pyramid Pooling (SPP) and Feature Pyramid Network (FPN) were also performed to form a new backbone termed CrackDet. Models with various loss functions and feature-extraction modules were trained on crack images and tested on full-scale images collected on steel box girders. The CNN model incorporating the classic U-Net as its backbone and the Dice loss as its loss function achieved the highest mean Intersection-over-Union (mIoU) of 0.7571 on full-scale pictures. In contrast, the best performance on cropped crack images, a mIoU of 0.7670, was achieved by integrating CrackDet with the Dice loss.
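For intuition, the Dice loss that performed best here penalizes the overlap ratio between prediction and ground truth directly, which makes it far less sensitive than plain BCE to the foreground/background imbalance typical of thin cracks. A minimal NumPy sketch of the standard formulation (the smoothing term eps and the names are assumptions):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Dice loss for binary segmentation: 1 - 2|P∩T| / (|P| + |T|).
    pred holds per-pixel probabilities, target holds 0/1 labels."""
    pred = np.asarray(pred, dtype=float).ravel()
    target = np.asarray(target, dtype=float).ravel()
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A perfect prediction drives the loss to 0 regardless of how few foreground pixels there are, whereas BCE is dominated by the abundant background pixels.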

A Method to Determine the Final Importance of Customer Attributes Considering Statistical Significance (통계적 유의성을 고려하여 고객 요구속성의 중요도를 산정하는 방법)

  • Kim, Kyung-Mee O.
    • Journal of Korean Society for Quality Management
    • /
    • v.36 no.3
    • /
    • pp.1-12
    • /
    • 2008
  • Obtaining an accurate final importance for each customer attribute (CA) is very important in the house of quality (HOQ), because it is deployed to the quality of the final product or service through quality function deployment (QFD). The final importance is often calculated by multiplying the relative importance rate and the competitive priority rate. Traditionally, the sample mean is used to estimate the two rates, but their dispersion is ignored. This paper proposes a new approach that incorporates statistical significance to account for the dispersion of the rates in determining the final importance of a CA. The approach is illustrated with the design of a car door for both crisp and fuzzy numbers.
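The traditional calculation the paper starts from is a simple product of two mean rates per attribute; the values below are purely hypothetical:

```python
# Hypothetical rates for three customer attributes (CAs); in the traditional
# approach only the sample means of the two rates are used, dispersion ignored.
relative_importance = [0.40, 0.35, 0.25]   # relative importance rates (sample means)
competitive_priority = [1.2, 1.0, 1.5]     # competitive priority rates (sample means)

# final importance = relative importance rate x competitive priority rate
final_importance = [r * c for r, c in zip(relative_importance, competitive_priority)]
```

The paper's point is that two attributes with equal products can differ greatly in rate dispersion, which this multiplication alone cannot reflect.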

Time-Frequency Analysis of Electrohysterogram for Classification of Term and Preterm Birth

  • Ryu, Jiwoo;Park, Cheolsoo
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.2
    • /
    • pp.103-109
    • /
    • 2015
  • In this paper, a novel method for the classification of term and preterm birth is proposed based on time-frequency analysis of the electrohysterogram (EHG) using multivariate empirical mode decomposition (MEMD). EHG is promising for preterm-birth prediction because it is low-cost and accurate compared to other prediction methods, such as tocodynamometry (TOCO). Previous studies applied prefiltering based on Fourier analysis of the EHG, followed by feature extraction and classification, even though Fourier analysis is suboptimal for biomedical signals such as EHG because of their nonlinearity and nonstationarity. The proposed method therefore applies prefiltering based on MEMD instead of Fourier-based prefilters before extracting the sample entropy feature and classifying the term and preterm birth groups. For the evaluation, the Physionet term-preterm EHG database was used, and the proposed method was compared with the Fourier-prefiltering-based method. The result showed that the area under the curve (AUC) of the receiver operating characteristic (ROC) increased by 0.0351 when MEMD was used instead of the Fourier-based prefilter.
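The sample entropy feature used here quantifies signal irregularity as SampEn(m, r) = -ln(A/B), where B counts pairs of length-m templates within tolerance r (Chebyshev distance) and A counts the same for length m+1, self-matches excluded. A standard self-contained sketch; the defaults m = 2 and r = 0.2·std are common conventions, not necessarily the paper's choices:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r) of a 1-D signal: -log(A/B), with B the count of length-m
    template matches and A the count of length-(m+1) matches (no self-matches)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x)

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n - m)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev
            count += np.sum(d <= r) - 1                           # drop self-match
        return count

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B)
```

A perfectly periodic signal yields SampEn near 0, while noise yields a large value, which is what makes it a useful irregularity feature for EHG.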

Development of Computer Program for Computation of 12 Refrigerant Properties (12가지 냉매 (R11, R12, R13, R14, R21, R22, R23, R113, R114, R500, R502, C318)의 상태치계산 프로그램)

  • Lee Ki Bang;Chung M. K.
    • The Magazine of the Society of Air-Conditioning and Refrigerating Engineers of Korea
    • /
    • v.16 no.5
    • /
    • pp.477-483
    • /
    • 1987
  • A FORTRAN code has been developed to calculate the thermodynamic properties of 12 kinds of refrigerants. The input variables are temperature and pressure, or temperature only, depending on saturation. The output properties are specific volume, saturation pressure, enthalpy, entropy, specific heats and speed of sound. Sample calculations show that the output properties are in very good agreement with thermodynamic tables and charts.


Sensitivity Approach of Sequential Sampling Using Adaptive Distance Criterion (적응거리 조건을 이용한 순차적 실험계획의 민감도법)

  • Jung, Jae-Jun;Lee, Tae-Hee
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.29 no.9 s.240
    • /
    • pp.1217-1224
    • /
    • 2005
  • To improve the accuracy of a metamodel, additional sample points can be selected using a specified criterion, an approach often called sequential sampling. Sequential sampling requires a small computational cost compared to one-stage optimal sampling. It is also capable of monitoring the metamodeling process by identifying an important design region for approximation and further refining the fidelity in that region. However, the existing criteria such as mean squared error, entropy and maximin distance essentially depend on the distance between previously selected sample points. Therefore, even when sufficient sample points are selected, these sequential sampling strategies cannot guarantee the accuracy of the metamodel near the optimum points, because their criteria are inefficient at approximating the extremum and inflection points of the original model. In this research, a new sequential sampling approach using the sensitivity of the metamodel is proposed to reflect the response. Various functions that represent a variety of features of engineering problems are used to validate the sensitivity approach. In addition to the root mean squared error and the maximum error, the error of the metamodel at the optimum points is tested to assess the superiority of the proposed approach. That is, optimum solutions from minimizing the metamodel obtained by the proposed approach are compared with those of the true functions. For comparison, both the mean squared error approach and the maximin distance approach are also examined.
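Among the existing criteria the paper compares against, the maximin distance rule is the simplest to state: add the candidate point whose minimum distance to the current design is largest. A minimal sketch of that baseline only (the proposed sensitivity criterion is not reproduced here; function and array names are assumptions):

```python
import numpy as np

def maximin_next_point(existing, candidates):
    """Distance-based sequential sampling baseline: return the candidate that
    maximizes its minimum Euclidean distance to the already-selected points."""
    existing = np.atleast_2d(existing)
    candidates = np.atleast_2d(candidates)
    # pairwise distances, shape (n_candidates, n_existing)
    d = np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=2)
    return candidates[np.argmax(d.min(axis=1))]
```

As the abstract notes, this rule spreads points evenly but is blind to the response, which is exactly the shortcoming the sensitivity-based criterion targets.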