• Title/Summary/Keyword: statistical approach


Constraining Cosmological Parameters with Gravitational Lensed Quasars in the Sloan Digital Sky Survey

  • Han, Du-Hwan;Park, Myeong-Gu
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.1
    • /
    • pp.34-34
    • /
    • 2014
  • We investigate the constraints on the matter density $\Omega_m$ and the cosmological constant $\Omega_\Lambda$ using gravitationally lensed QSO (Quasi-Stellar Object) systems from the Sloan Digital Sky Survey (SDSS) by analyzing the distribution of image separations. The main sample consists of 16 QSO lens systems with measured source and lens redshifts. We use a lensing probability defined simply by a Gaussian distribution. As statistical tests, we perform a curvature test and derive constraints on the cosmological parameters; the tests account for well-defined selection effects and adopt parameters of the velocity dispersion function. We also apply the same analysis to Monte Carlo generated mock gravitational lens samples to assess the accuracy and limits of our approach. These statistical tests show that only excessively positively curved universes ($\Omega_m + \Omega_\Lambda > 1$) are rejected at the 95% confidence level. However, if the properties of the galaxies acting as lenses are measured accurately, we confirm that gravitational lensing statistics would be a most powerful tool.

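The entry above constrains cosmology with a likelihood over image separations. As a rough illustration of the likelihood-grid idea (not the paper's pipeline), the Python sketch below assumes a Gaussian probability for each observed separation and marks an approximate 95% region over $(\Omega_m, \Omega_\Lambda)$; the toy relation `predicted_separation` and all sample values are invented for illustration.

```python
# Toy likelihood grid over (Omega_m, Omega_Lambda), assuming a Gaussian
# lensing probability for each observed image separation, as a stand-in
# for the survey-specific probability used in the paper.
import numpy as np

rng = np.random.default_rng(0)
separations = rng.normal(1.5, 0.5, 16)  # hypothetical image separations (arcsec)

def predicted_separation(om, ol):
    # Hypothetical toy relation between cosmology and mean image separation;
    # a real analysis would integrate lensing cross-sections over the
    # velocity dispersion function and cosmological distances.
    return 1.5 + 0.3 * (om + ol - 1.0)

om_grid = np.linspace(0.0, 1.5, 151)
ol_grid = np.linspace(0.0, 1.5, 151)
loglike = np.empty((om_grid.size, ol_grid.size))
sigma = 0.5  # assumed width of the Gaussian lensing probability

for i, om in enumerate(om_grid):
    for j, ol in enumerate(ol_grid):
        mu = predicted_separation(om, ol)
        loglike[i, j] = -0.5 * np.sum((separations - mu) ** 2) / sigma**2

# A drop of ~3.0 in log-likelihood approximates a 95% region for two parameters.
region_95 = loglike >= loglike.max() - 3.0
print("fraction of grid inside 95% region:", region_95.mean())
```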

PCA vs. ICA for Face Recognition

  • Lee, Oyoung;Park, Hyeyoung;Park, Seung-Jin
    • Proceedings of the IEEK Conference
    • /
    • 2000.07b
    • /
    • pp.873-876
    • /
    • 2000
  • The information-theoretic approach to face recognition is based on compact coding, in which face images are decomposed into a small set of basis images. The most popular method for compact coding may be principal component analysis (PCA), on which eigenface methods are based. PCA-based methods exploit only the second-order statistical structure of the data, so higher-order statistical dependencies among pixels are not considered. Independent component analysis (ICA) is a signal processing technique whose goal is to express a set of random variables as linear combinations of statistically independent component variables. ICA exploits the high-order statistical structure of the data, which contains important information. In this paper we employ ICA for efficient feature extraction from face images and show that ICA outperforms PCA in the task of face recognition. Experimental results using a simple nearest-neighbor classifier and a multilayer perceptron (MLP) are presented to illustrate the performance of the proposed method.

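The comparison above is straightforward to reproduce in outline. Below is a minimal sketch using scikit-learn's Olivetti faces as a stand-in dataset (the paper's data and settings are not specified here): extract PCA and ICA features, then classify with a 1-nearest-neighbor rule as in the paper's simpler experiment.

```python
# Minimal PCA-vs-ICA face recognition sketch with a nearest-neighbor
# classifier; dataset, component count, and split are illustrative choices.
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0, stratify=faces.target
)

for name, model in [("PCA", PCA(n_components=40, random_state=0)),
                    ("ICA", FastICA(n_components=40, random_state=0, max_iter=1000))]:
    Z_train = model.fit_transform(X_train)   # project faces onto basis images
    Z_test = model.transform(X_test)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)
    print(name, "accuracy:", clf.score(Z_test, y_test))
```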

Statistical Error Compensation Techniques for Spectral Quantization

  • Choi, Seung-Ho;Kim, Hong-Kook
    • Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.17-28
    • /
    • 2004
  • In this paper, we propose a statistical approach to improve the performance of spectral quantization in speech coders. The proposed techniques compensate for the distortion in a decoded line spectrum pair (LSP) vector using a statistical mapping function between the decoded LSP vector and its corresponding original LSP vector. We first develop two codebook-based probabilistic matching (CBPM) methods based on linear mapping functions, under different assumptions about the distribution of LSP vectors. In addition, we propose an iterative procedure for the two CBPMs. We apply the proposed techniques to the predictive vector quantizer used in the IS-641 speech coder. The experimental results show that the proposed techniques reduce the average spectral distortion by around 0.064 dB.

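A hedged sketch of the codebook-based matching idea follows: cluster decoded LSP vectors, then learn one linear mapping per cluster that pulls a decoded vector back toward its original counterpart. The synthetic data, cluster count, and plain least-squares mappings below are illustrative assumptions, not the paper's exact CBPM formulation.

```python
# Sketch of codebook-based probabilistic matching (CBPM): per-cluster linear
# mappings from decoded LSP vectors back toward the originals.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
original = rng.uniform(0.0, np.pi, (5000, 10))       # synthetic 10-dim LSP vectors
original.sort(axis=1)                                # LSPs are ordered frequencies
decoded = original + rng.normal(0.0, 0.02, original.shape)  # quantization noise

codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(decoded)
mappings = [LinearRegression().fit(decoded[codebook.labels_ == k],
                                   original[codebook.labels_ == k])
            for k in range(16)]

def compensate(lsp):
    """Apply the cluster-specific linear mapping to one decoded LSP vector."""
    k = codebook.predict(lsp[None, :])[0]
    return mappings[k].predict(lsp[None, :])[0]

test = decoded[0]
print("distortion before:", np.linalg.norm(test - original[0]))
print("distortion after: ", np.linalg.norm(compensate(test) - original[0]))
```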

Statistical Approach to Analyze Vibration Localization Phenomena in Periodic Structural Systems

  • Shin, Sang Ha;Lee, Se Jung;Yoo, Hong Hee
    • Journal of Mechanical Science and Technology
    • /
    • v.19 no.7
    • /
    • pp.1405-1413
    • /
    • 2005
  • Malfunctions or critical fatigue problems often occur in mistuned periodic structural systems, since their vibration responses may become much larger than those of perfectly tuned periodic systems. These are called vibration localization phenomena, and it is of great importance to predict them accurately for safe and reliable designs of periodic structural systems. In this study, a simple discrete system that represents periodic structural systems is employed to analyze the vibration localization phenomena. The statistical effects of mistuning, stiffness coupling, and damping on the vibration localization phenomena are investigated through Monte Carlo simulation. It is found that the probability of vibration localization is significantly influenced by these statistical properties, except for the standard deviation of the coupling stiffness.
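
A minimal Monte Carlo sketch of the phenomenon is given below: a chain of nominally identical oscillators with random stiffness mistuning and weak coupling, where localization is measured by the inverse participation ratio (IPR) of the mode shapes. The chain model, parameter values, and the IPR metric are illustrative assumptions, not the paper's discrete system.

```python
# Monte Carlo study of mode localization in a mistuned periodic chain.
import numpy as np

rng = np.random.default_rng(0)
N, k, kc = 20, 1.0, 0.05          # oscillators, base stiffness, weak coupling
sigma_values = [0.0, 0.01, 0.05]  # standard deviation of stiffness mistuning

for sigma in sigma_values:
    iprs = []
    for _ in range(500):  # Monte Carlo trials
        ki = k * (1.0 + sigma * rng.standard_normal(N))
        K = np.diag(ki + 2 * kc) - kc * (np.eye(N, k=1) + np.eye(N, k=-1))
        _, vecs = np.linalg.eigh(K)          # unit masses assumed
        modes = vecs / np.linalg.norm(vecs, axis=0)
        iprs.append(np.mean(np.sum(modes**4, axis=0)))  # ~1/N extended .. 1 localized
    print(f"sigma={sigma:.2f}  mean IPR={np.mean(iprs):.3f}")
```

Stronger mistuning relative to coupling drives the mean IPR upward, which is the localization trend the paper quantifies statistically.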

How to identify fake images?: Multiscale methods vs. Sherlock Holmes

  • Park, Minsu;Park, Minjeong;Kim, Donghoh;Lee, Hajeong;Oh, Hee-Seok
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.6
    • /
    • pp.583-594
    • /
    • 2021
  • In this paper, we propose wavelet-based procedures to identify the difference between images, including portraits and handwriting. The proposed methods are based on a novel combination of multiscale methods with a regularization technique. The multiscale method extracts the local characteristics of an image, and the distinct features are obtained through regularized regression of the local characteristics. The regularized regression approach copes with the high-dimensionality problem in building the relation between the local characteristics. Lytle and Yang (2006) introduced a method for detecting forged handwriting via wavelets and summary statistics. We expand the scope of their method to general images and significantly improve the results. We demonstrate promising empirical evidence for the proposed method through various experiments.
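
The pipeline sketched below follows the same shape under stated assumptions: wavelet subband statistics as multiscale features and a lasso as the regularized regression. PyWavelets, the specific features, and the synthetic "genuine"/"forged" images are stand-ins, not the authors' exact procedure.

```python
# Multiscale features + regularized regression for image comparison.
import numpy as np
import pywt
from sklearn.linear_model import Lasso

def wavelet_features(img, wavelet="db4", level=3):
    """Per-subband mean absolute value and std of detail coefficients."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    feats = []
    for detail in coeffs[1:]:              # skip the approximation band
        for band in detail:                # horizontal, vertical, diagonal
            feats += [np.mean(np.abs(band)), np.std(band)]
    return np.array(feats)

rng = np.random.default_rng(0)
genuine = [rng.normal(size=(64, 64)) for _ in range(40)]
forged = [rng.normal(size=(64, 64)) + 0.3 * rng.normal(size=(64, 64))**2
          for _ in range(40)]

X = np.array([wavelet_features(im) for im in genuine + forged])
y = np.array([0] * 40 + [1] * 40)
model = Lasso(alpha=0.01).fit(X, y)        # sparse weights pick distinct features
print("nonzero feature weights:", np.flatnonzero(model.coef_))
```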

On efficient estimation of population mean under non-response

  • Bhushan, Shashi;Pandey, Abhay Pratap
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.1
    • /
    • pp.11-25
    • /
    • 2019
  • The present paper utilizes auxiliary information to neutralize the effect of non-response in estimating the population mean. Improved ratio-type estimators for the population mean are proposed and their properties are studied. These estimators are suggested for both single-phase and two-phase sampling in the presence of non-response. Empirical studies are conducted to validate the theoretical results and demonstrate the performance of the proposed estimators. The proposed estimators are shown to perform better than those used by Cochran (Sampling Techniques (3rd ed), John Wiley & Sons, 1977), Khare and Srivastava (In Proceedings-National Academy Science, India, Section A, 65, 195-203, 1995), Rao (Randomization Approach in Incomplete Data in Sample Surveys, Academic Press, 1983; Survey Methodology 12, 217-230, 1986), and Singh and Kumar (Australian & New Zealand Journal of Statistics, 50, 395-408, 2008; Statistical Papers, 51, 559-582, 2010) under the derived optimality condition. Suitable recommendations are put forward for survey practitioners.
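
To make the setting concrete, here is a toy sketch of a ratio-type estimator under non-response in the Hansen-Hurwitz spirit: non-respondents are subsampled, and auxiliary information x (with known population mean) rescales the estimate. The specific estimator form and all numbers are illustrative, not the paper's proposed estimators.

```python
# Ratio-type mean estimation with subsampling of non-respondents.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000
x = rng.gamma(4.0, 2.0, N)                 # auxiliary variable, mean known
y = 3.0 * x + rng.normal(0.0, 2.0, N)      # study variable
X_bar = x.mean()                           # known population mean of x

sample = rng.choice(N, size=400, replace=False)
responds = rng.random(400) < 0.7           # ~30% non-response
r1 = sample[responds]
nonresp = sample[~responds]
r2 = rng.choice(nonresp, size=len(nonresp) // 2, replace=False)  # subsample

n, n1, n2 = len(sample), len(r1), len(nonresp)
# Hansen-Hurwitz means over respondents plus subsampled non-respondents
y_hh = (n1 * y[r1].mean() + n2 * y[r2].mean()) / n
x_hh = (n1 * x[r1].mean() + n2 * x[r2].mean()) / n
y_ratio = y_hh * (X_bar / x_hh)            # ratio-type adjustment

print("true mean:", y.mean().round(3), " ratio estimate:", y_ratio.round(3))
```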

Statistical Approach for AESA Radar Maximum Detection Range

  • Tak, Daesuk;Shin, Kyung Soo
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.15 no.1
    • /
    • pp.43-50
    • /
    • 2019
  • Statistical hypothesis tests are important for quantifying answers to questions about samples of data. The steps of statistical hypothesis testing are: state the null hypothesis, state the alternative hypothesis, choose the alpha level, find the critical value associated with the alpha level, and compute the test statistic from the data. If the computed test statistic is larger than the critical value at the alpha level, it falls in the rejection region and the null hypothesis can be rejected with a $(1-\alpha)$ level of confidence.
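
A small worked example of those steps follows, using a one-sided one-sample t-test in Python; the detection-range figures and the hypotheses are made up for illustration.

```python
# State H0 and H1, fix alpha, compute the test statistic, compare with the
# critical value, and decide.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
detections = rng.normal(105.0, 8.0, 30)   # hypothetical detection ranges (km)
mu0 = 100.0                                # H0: mean range <= 100 km
alpha = 0.05                               # significance level

t_stat = (detections.mean() - mu0) / (detections.std(ddof=1) / np.sqrt(len(detections)))
t_crit = stats.t.ppf(1 - alpha, df=len(detections) - 1)  # one-sided critical value

if t_stat > t_crit:
    print(f"t={t_stat:.2f} > {t_crit:.2f}: reject H0 at {1 - alpha:.0%} confidence")
else:
    print(f"t={t_stat:.2f} <= {t_crit:.2f}: fail to reject H0")
```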

Change point analysis in Bitcoin return series : a robust approach

  • Song, Junmo;Kang, Jiwon
    • Communications for Statistical Applications and Methods
    • /
    • v.28 no.5
    • /
    • pp.511-520
    • /
    • 2021
  • Over the last decade, Bitcoin has attracted a great deal of public interest and the Bitcoin market has grown rapidly. One of the main characteristics of the market is that it often undergoes events or incidents that cause outlying observations. To obtain reliable results in the statistical analysis of Bitcoin data, these outlying observations need to be treated carefully. In this study, we are interested in change point analysis for Bitcoin return series having such outlying observations. Since outlying observations can affect change point analysis undesirably, we use a robust test for parameter change to locate change points. We report some significant change points that are not detected by existing tests and demonstrate that the model allowing for parameter changes fits the data better. Finally, we show that the model with parameter change can improve the forecasting performance of Value-at-Risk.
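
As a hedged illustration of robust change point detection (a generic robust CUSUM, not the authors' specific test), the sketch below standardizes returns by median and MAD and clips extremes so a single outlier does not dominate the statistic.

```python
# Robust CUSUM-of-squares change point sketch for a heavy-tailed return series.
import numpy as np

rng = np.random.default_rng(0)
returns = np.concatenate([rng.standard_t(3, 500) * 0.01,     # regime 1
                          rng.standard_t(3, 500) * 0.03])    # variance change
returns[100] = 0.5                                            # an outlier

med = np.median(returns)
mad = np.median(np.abs(returns - med)) * 1.4826               # robust scale
z = np.clip((returns - med) / mad, -3, 3)                     # trim extremes

s = np.cumsum(z**2 - np.mean(z**2))                           # CUSUM of squares
k = np.argmax(np.abs(s))
print("estimated change point index:", k)                     # near 500
```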

Optimized Chinese Pronunciation Prediction by Component-Based Statistical Machine Translation

  • Zhu, Shunle
    • Journal of Information Processing Systems
    • /
    • v.17 no.1
    • /
    • pp.203-212
    • /
    • 2021
  • To eliminate ambiguities in existing methods and simplify Chinese pronunciation learning, we propose a model that can predict the pronunciation of Chinese characters automatically. The proposed model relies on a statistical machine translation (SMT) framework. In particular, we take the components of Chinese characters as the basic unit and treat pronunciation prediction as a machine translation procedure (the component sequence as the source sentence and the pronunciation, pinyin, as the target sentence). In addition to traditional features such as bidirectional word translation and the n-gram language model, we also implement a component similarity feature to overcome typos arising in practical use. We incorporate these features into a log-linear model. The experimental results show that our approach significantly outperforms other baseline models.
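
A toy sketch of the log-linear scoring idea follows: translation, language-model, and component-similarity feature scores are weighted and summed in log space to rank candidate pinyin. The tiny tables, the constant similarity score, and the weights are invented stand-ins for the trained models described in the paper.

```python
# Log-linear combination of SMT-style features for pronunciation prediction.
import math

# Hypothetical feature tables: component sequence -> candidate pinyin scores.
translation = {("木", "木"): {"lin2": 0.7, "mu4": 0.3}}
language_model = {"lin2": 0.6, "mu4": 0.4}

def component_similarity(components, candidate):
    # Stand-in for the similarity feature that absorbs typos; a constant here,
    # in practice a learned score over component shapes.
    return 0.5

weights = {"tm": 1.0, "lm": 0.5, "sim": 0.3}  # log-linear feature weights

def score(components, candidate):
    return (weights["tm"] * math.log(translation[components][candidate])
            + weights["lm"] * math.log(language_model[candidate])
            + weights["sim"] * math.log(component_similarity(components, candidate)))

components = ("木", "木")   # components of 林 ("forest")
best = max(translation[components], key=lambda c: score(components, c))
print("predicted pinyin:", best)   # lin2
```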

A New Methodology for Software Reliability based on Statistical Modeling

  • Avinash, S.;Srinivas, Y.;Annan Naidu, P.
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.9
    • /
    • pp.157-161
    • /
    • 2023
  • Reliability is one of the computable quality features of software. To assess reliability, software reliability growth models (SRGMs) are used at different test times, based on statistical learning models. Traditional time-based SRGMs may not be adequate in all situations, and such models cannot recognize errors in small and medium-sized applications. Numerous traditional reliability measures are used to test software errors during application development and testing. In the software testing and maintenance phase, however, new errors are taken into consideration in real time in order to decide the reliability estimate. In this article, we suggest the Weibull model as a computational approach to address the problem of software reliability modeling. In the suggested model, a new distribution is proposed to improve the reliability estimation method. We evaluate the developed model and compare its efficiency with other popular software reliability growth models from the research literature. Our assessment results show that the proposed model is superior to the S-shaped Yamada, Generalized Poisson, and NHPP models.
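
As a minimal sketch of the fitting step such a study involves, the code below fits a Weibull-type mean value function $m(t) = a\,(1 - e^{-(t/b)^c})$ to cumulative failure counts with scipy. The functional form and the synthetic data are assumptions for illustration, not the paper's dataset or exact model.

```python
# Fit a Weibull-type reliability growth curve to cumulative failure counts.
import numpy as np
from scipy.optimize import curve_fit

def weibull_mvf(t, a, b, c):
    """Expected cumulative failures by time t under a Weibull growth model."""
    return a * (1.0 - np.exp(-((t / b) ** c)))

t = np.arange(1, 21, dtype=float)                       # test weeks
failures = np.array([5, 9, 14, 17, 21, 24, 26, 29, 30, 32,
                     33, 35, 36, 36, 37, 38, 38, 39, 39, 40], dtype=float)

params, _ = curve_fit(weibull_mvf, t, failures, p0=[40.0, 5.0, 1.0], maxfev=10_000)
a, b, c = params
print(f"a={a:.1f}  b={b:.2f}  c={c:.2f}")
print("predicted failures by week 25:", weibull_mvf(25.0, a, b, c).round(1))
```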