• Title/Summary/Keyword: variance-covariance matrix


An approach to improving the James-Stein estimator shrinking towards projection vectors

  • Park, Tae Ryong; Baek, Hoh Yoo
    • Journal of the Korean Data and Information Science Society, v.25 no.6, pp.1549-1555, 2014
  • Consider a $p$-variate normal distribution with $p - q \geq 3$, where $q = \mathrm{rank}(P_V)$ for a projection matrix $P_V$. Using a simple property of the noncentral chi-square distribution, generalized Bayes estimators dominating the James-Stein estimator shrinking towards projection vectors under quadratic loss are given, based on the methods of Brown, Brewster and Zidek for estimating a normal variance. This result extends to the cases where the covariance matrix is completely unknown or $\Sigma = \sigma^2 I$ for an unknown scalar $\sigma^2$.
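The paper's generalized Bayes construction is beyond a short sketch, but the baseline it dominates, the James-Stein estimator shrunk towards a projection, is compact. Below is a minimal sketch assuming a known identity covariance; the function name and the toy subspace are illustrative choices, not from the paper.

```python
import numpy as np

def james_stein_toward_subspace(x, V):
    """James-Stein estimate of a normal mean, shrinking the observation x
    toward the column space of V (identity covariance assumed).

    x : (p,) observation from N_p(theta, I)
    V : (p, q) full-column-rank matrix spanning the shrinkage target
    """
    p = x.shape[0]
    q = np.linalg.matrix_rank(V)
    assert p - q >= 3, "domination requires p - q >= 3"
    beta, *_ = np.linalg.lstsq(V, x, rcond=None)
    proj = V @ beta                      # P_V x, projection onto span(V)
    resid = x - proj                     # component orthogonal to span(V)
    shrink = 1.0 - (p - q - 2) / (resid @ resid)
    return proj + shrink * resid

rng = np.random.default_rng(0)
theta = np.ones(10)                      # true mean lies in span(V)
x = theta + rng.standard_normal(10)
V = np.ones((10, 1))                     # shrink toward the equal-mean line
print(james_stein_toward_subspace(x, V))
```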

A Study on the Accuracy of the Maximum Likelihood Estimator of the Generalized Logistic Distribution According to Information Matrix

  • Shin, Hong-Joon; Jung, Young-Hun; Heo, Jun-Haeng
    • Journal of Korea Water Resources Association, v.42 no.4, pp.331-341, 2009
  • In this study, we compared the observed information matrix with the Fisher information matrix for estimating the uncertainty of maximum likelihood estimators of the generalized logistic (GL) distribution. Previous literature recommended the observed information matrix for convenience, since it is obtained as part of the parameter estimation procedure and differs little in accuracy from the Fisher information matrix for large sample sizes. The observed information matrix had been applied to the generalized logistic distribution on the basis of these studies without verification, so a simulation experiment was performed to determine which matrix gives better accuracy for the GL model. The simulation results showed that the variance-covariance of the ML parameters for the GL distribution was similar to that reported in previous literature, but that the Fisher information matrix is preferable for estimating the uncertainty of quantiles of ML estimators.
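As an illustration of the observed-information approach the paper tests, the sketch below fits a generalized logistic sample by maximum likelihood and inverts a finite-difference Hessian of the negative log-likelihood at the MLE to approximate the variance-covariance matrix of the estimates. It uses SciPy's `genlogistic`, whose parameterization may differ from the hydrological GL distribution in the paper; the step size and helper names are my own assumptions.

```python
import numpy as np
from scipy import stats

def neg_loglik(params, data):
    c, loc, scale = params
    return -np.sum(stats.genlogistic.logpdf(data, c, loc=loc, scale=scale))

def observed_information(params, data, h=1e-4):
    """Finite-difference Hessian of the negative log-likelihood at the
    MLE; its inverse approximates the variance-covariance of the MLEs."""
    k = len(params)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei = np.eye(k)[i] * h
            ej = np.eye(k)[j] * h
            H[i, j] = (neg_loglik(params + ei + ej, data)
                       - neg_loglik(params + ei - ej, data)
                       - neg_loglik(params - ei + ej, data)
                       + neg_loglik(params - ei - ej, data)) / (4 * h * h)
    return H

data = stats.genlogistic.rvs(c=1.5, loc=0.0, scale=1.0, size=500,
                             random_state=np.random.default_rng(1))
mle = np.array(stats.genlogistic.fit(data))   # returns (c, loc, scale)
cov = np.linalg.inv(observed_information(mle, data))
print("approximate var-cov matrix of the MLEs:\n", cov)
```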

Application of Objective Mapping to Surface Currents Observed by HF Radar off the Keum River Estuary

  • Hwang, Jin-A; Lee, Sang-Ho; Choi, Byung-Joo; Kim, Chang-Soo
    • The Sea: Journal of the Korean Society of Oceanography, v.16 no.1, pp.14-26, 2011
  • Surface currents were observed by high-frequency (HF) radars off the Keum River estuary from December 2008 to February 2009. The observed surface-current dataset had gaps due to electromagnetic interference and deteriorating weather conditions. To fill the gaps, an optimal interpolation procedure was developed: the spatial correlation characteristics of the surface currents off the Keum River estuary were investigated, the spatial gaps were filled by optimal interpolation, and the temporal and spatial distributions of the interpolated currents and the patterns of interpolation error were examined. The correlation coefficients between surface currents in the coastal region were higher than 0.7 because tidal currents dominate the surface circulation. The sample data covariance matrix (C), the spatially averaged covariance matrix with localization ($C^G_{sm}$), and the covariance matrix fitted by an exponential function ($C_{ft}$) were used to interpolate the original dataset. The optimal interpolation filled the gaps and suppressed spurious spikes in the time series of surface current speed, so that the variance of the interpolated time series was smaller than that of the original data. When the spatial data coverage was larger (smaller) than 70% of the region, the interpolation error produced by $C^G_{sm}$ ($C_{ft}$) was smaller than that produced by C.
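A minimal sketch of the optimal interpolation (objective mapping) step described above: gap values are estimated as a covariance-weighted combination of the observed values. The toy 1-D grid, the exponential covariance model standing in for the paper's $C_{ft}$, and the noise variance are all assumptions for illustration.

```python
import numpy as np

def optimal_interpolation(obs, C, obs_idx, gap_idx, noise_var=0.1):
    """Fill gaps in a (zero-mean) field by optimal interpolation.

    obs       : values at the observed grid points
    C         : (n, n) spatial covariance matrix over the full grid
    obs_idx   : indices of observed points
    gap_idx   : indices of missing points
    noise_var : assumed observation-noise variance on the diagonal
    """
    C_oo = C[np.ix_(obs_idx, obs_idx)] + noise_var * np.eye(len(obs_idx))
    C_go = C[np.ix_(gap_idx, obs_idx)]
    return C_go @ np.linalg.solve(C_oo, obs)   # OI gain applied to the data

# toy 1-D grid with an exponential covariance model C(d) = exp(-d / L)
x = np.linspace(0.0, 10.0, 50)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
truth = np.sin(x)
obs_idx = np.arange(0, 50, 2)                  # every other point observed
gap_idx = np.arange(1, 50, 2)
filled = optimal_interpolation(truth[obs_idx], C, obs_idx, gap_idx)
print(np.max(np.abs(filled - truth[gap_idx])))  # interpolation error
```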

The Two Dimensional Analysis of RF Passive Device using Stochastic Finite Element Method

  • Kim, Jun-Yeon; Jeong, Cheol-Yong; Lee, Seon-Yeong; Cheon, Chang-Ryeol
    • The Transactions of the Korean Institute of Electrical Engineers C, v.49 no.4, pp.249-257, 2000
  • In this paper, we propose the use of the stochastic finite element method, which is widely employed in mechanical structure analysis, for more practical design of RF devices. The proposed method is formulated on the basis of the vector finite element method combined with perturbation analysis. It uses a sensitivity analysis algorithm with the covariance matrix of the random variables, which represent uncertain physical quantities such as lengths or various electrical constants, to compute the probabilities of the measures of performance of the structure. For this computation one needs to know the variances and covariances of the random variables, which may be determined from practical experience. The presented algorithm has been verified by analyzing several devices with different measures of performance. For convenience of formulation, a two-dimensional analysis has been performed and applied to a waveguide with a dielectric slab, in which the dielectric constant of the slab is treated as a random variable. Further examples are a matched waveguide and a cavity problem, in which the dimensions are assumed to be random variables and the expectations and variances of the quality factor are computed.
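The abstract describes first-order perturbation propagation: sensitivities of a performance measure with respect to the random variables, combined with their covariance matrix, give the output variance. A minimal sketch, with a closed-form toy function standing in for the FEM solve and assumed means and variances (all numbers are illustrative, not from the paper):

```python
import numpy as np

def propagate_uncertainty(f, x_mean, cov, h=1e-6):
    """First-order (perturbation) propagation of input uncertainty:
    Var[f] ~= g^T C g, where g is the sensitivity vector df/dx at the mean.

    f      : scalar performance measure (in the paper, an FEM output
             such as a quality factor); here any callable of x
    x_mean : mean values of the random variables
    cov    : their variance-covariance matrix
    """
    x_mean = np.asarray(x_mean, dtype=float)
    g = np.zeros_like(x_mean)
    for i in range(x_mean.size):
        dx = np.zeros_like(x_mean)
        dx[i] = h
        g[i] = (f(x_mean + dx) - f(x_mean - dx)) / (2.0 * h)
    return f(x_mean), g @ cov @ g            # mean and first-order variance

# toy stand-in for an FEM solve: cutoff frequency ~ 1 / (a * sqrt(eps_r))
f = lambda x: 1.0 / (x[0] * np.sqrt(x[1]))
mean = [0.02, 2.2]                            # width [m], relative permittivity
cov = np.diag([1e-8, 1e-3])                   # assumed variances, no correlation
print(propagate_uncertainty(f, mean, cov))
```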


A study on Robust Feature Image for Texture Classification and Detection

  • Kim, Young-Sub; Ahn, Jong-Young; Kim, Sang-Bum; Hur, Kang-In
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.10 no.5, pp.133-138, 2010
  • In this paper, we construct a feature image that captures the spatial and statistical properties of an image, and form covariance matrices using region variance magnitudes. Applying these to texture classification, we propose a classification method that is robust to illumination, noise, and rotation. We also offer a way to minimize the running time of texture classification by using an integral image, an intermediate representation that allows fast computation of region sums. To evaluate the proposed method, we use Brodatz texture images, apply noise addition and histogram specification, and create rotated images. In our experiments the method achieves classification performance above 96%.
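A minimal sketch of region covariance features of this kind, with integral tables for fast region sums; the feature channels and the random test image are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np

def region_covariance(features):
    """Covariance descriptor of a region from its (H, W, d) feature image."""
    d = features.shape[-1]
    return np.cov(features.reshape(-1, d), rowvar=False)

def integral_tables(features):
    """Integral images of the features and their outer products, so the
    covariance of any sub-rectangle can be assembled in O(1)."""
    P = np.cumsum(np.cumsum(features, axis=0), axis=1)           # (H, W, d)
    outer = features[..., :, None] * features[..., None, :]     # (H, W, d, d)
    Q = np.cumsum(np.cumsum(outer, axis=0), axis=1)              # (H, W, d, d)
    return P, Q

def rect_covariance(P, Q, y, x):
    """Covariance of the top-left rectangle (0..y, 0..x) from the tables
    (a general rectangle needs the usual four corner lookups)."""
    n = (y + 1) * (x + 1)
    s, S = P[y, x], Q[y, x]
    return (S - np.outer(s, s) / n) / (n - 1)

rng = np.random.default_rng(2)
img = rng.random((64, 64))
gy, gx = np.gradient(img)                      # simple per-pixel features
feat = np.stack([img, np.abs(gx), np.abs(gy)], axis=-1)
P, Q = integral_tables(feat)
print(np.allclose(rect_covariance(P, Q, 31, 31),
                  region_covariance(feat[:32, :32])))
```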

Empirical Analysis on Rao-Scott First Order Adjustment for Two Population Homogeneity Test Based on Stratified Three-Stage Cluster Sampling with PPS

  • Heo, Sunyeong
    • Journal of Integrative Natural Science, v.7 no.3, pp.208-213, 2014
  • Nation-wide and/or large-scale sample surveys generally use complex sample designs, and the traditional Pearson chi-square test is not appropriate for categorical complex sample data. Rao and Scott suggested an adjustment to the Pearson chi-square test that uses the average of the eigenvalues of the design-effect matrix of the cell probabilities. This study compares the efficiency of the Rao-Scott first-order adjusted test to the Wald test for homogeneity between two populations, using data from the 2009 Gyeongnam regional education offices' customer satisfaction survey (2009 GREOCSS), which was collected by stratified three-stage cluster sampling with probability proportional to size. The empirical results show that the Rao-Scott adjusted test statistic, which uses only the variances of the cell probabilities, is very close to the Wald test statistic, which uses their full covariance matrix, for the 2009 GREOCSS data. However, caution is needed in using the Rao-Scott first-order adjusted statistic in place of the Wald test, because its efficiency decreases as the relative variance of the eigenvalues of the design-effect matrix increases, especially when the number of degrees of freedom is small.
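A sketch of the Rao-Scott first-order adjustment discussed above: the Pearson statistic is divided by the mean eigenvalue of the design-effect matrix and referred to a chi-square distribution. The numeric inputs below are hypothetical, not from the 2009 GREOCSS data.

```python
import numpy as np
from scipy import stats

def rao_scott_first_order(chi2_pearson, deff_eigenvalues, df):
    """Rao-Scott first-order correction: divide the Pearson statistic by
    the mean eigenvalue (generalized design effect) of the design-effect
    matrix of the estimated cell proportions, then refer the adjusted
    statistic to a chi-square distribution with the same df."""
    lam_bar = np.mean(deff_eigenvalues)
    x2_adj = chi2_pearson / lam_bar
    return x2_adj, stats.chi2.sf(x2_adj, df)

# hypothetical inputs: Pearson X^2 from a 2 x 4 homogeneity table and
# eigenvalues of an estimated design-effect matrix (not the GREOCSS data)
x2_adj, p_value = rao_scott_first_order(chi2_pearson=14.2,
                                        deff_eigenvalues=[2.1, 1.8, 1.5],
                                        df=3)
print(x2_adj, p_value)
```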

Averaging TRIAD Algorithm for Attitude Determination

  • Kim, Dong-Hoon; Lee, Henzeh; Oh, Hwa-Suk
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.37 no.1, pp.36-41, 2009
  • In general, accurate attitude information is essential to perform a mission. Two kinds of algorithms are well known for determining the attitude from two or more vector observations: deterministic methods such as the TRIAD algorithm, and optimal methods such as the QUEST algorithm. This paper suggests an idea to improve the performance of the TRIAD algorithm and to determine the attitude from a combination of different sensors. First, we convert the attitude matrix to Euler angles instead of using an orthogonalization method, use covariances in place of variances, and then apply an unbiased minimum-variance formula to obtain more accurate solutions. We also suggest a methodology for determining the attitude when more than two measurements are given. The performance of the averaging TRIAD algorithm for combinations of different sensors is analyzed by numerical simulation in terms of standard deviation and probability.
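For reference, a minimal sketch of the classic deterministic TRIAD step that the paper builds its averaging scheme on; the paper's Euler-angle conversion and covariance weighting are not reproduced here.

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """Classic TRIAD attitude matrix from two vector observations.

    b1, b2 : directions measured in the body frame
    r1, r2 : the same directions known in the reference frame
    Returns A such that b = A @ r, trusting the first vector more.
    """
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, b2) @ frame(r1, r2).T

# check: recover a known rotation about the z-axis from two observations
ang = np.deg2rad(30.0)
A_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(np.allclose(triad(A_true @ r1, A_true @ r2, r1, r2), A_true))
```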

Impact of Mathematical Modeling Schemes into Accuracy Representation of GPS Control Surveying

  • Lee, Hungkyu; Seo, Wansoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.30 no.5, pp.445-458, 2012
  • The objective of GPS control surveying is ultimately to determine the coordinates of control points to a targeted accuracy through a series of observations and network adjustments. To this end, it is equally important that the accuracy of these coordinates be realistically represented by an appropriate method. The accuracy can be quantitatively represented by the variance-covariance matrices of the estimates, whose features are sensitive to the mathematical models used in the adjustment. This paper examines the impact of functional and stochastic modeling techniques on the accuracy representation of GPS control surveying, with a view to providing background for its standardization. To achieve this, the mathematical theory and procedure of single-baseline, multi-session adjustment are rigorously reviewed, together with numerical analysis of real-world data. Based on this study, it was concluded that weighted-constrained adjustment with an empirical stochastic model is among the best schemes for realistically describing both the absolute and relative accuracies of GPS surveying results.
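The variance-covariance matrices discussed above come out of the network adjustment itself. Below is a generic weighted least-squares sketch, not the paper's multi-session GPS model, showing how the stochastic model (the weights) feeds the reported accuracy; the toy network is an assumption for illustration.

```python
import numpy as np

def weighted_ls_adjustment(A, l, W):
    """Weighted least-squares adjustment and its accuracy representation.

    Solves x_hat = (A^T W A)^{-1} A^T W l; the inverse normal matrix,
    scaled by the a posteriori variance factor, is the variance-covariance
    matrix from which accuracies are reported."""
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ l)
    v = A @ x_hat - l                          # residuals
    n_obs, n_par = A.shape
    s0_sq = (v @ W @ v) / (n_obs - n_par)      # a posteriori variance factor
    return x_hat, s0_sq * np.linalg.inv(N)

# toy network: three observations of two unknowns (leveling-style)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, 1.0]])
l = np.array([10.02, 20.01, 9.97])
W = np.diag([1.0, 1.0, 4.0])                   # the stochastic model: weights
x_hat, cov_x = weighted_ls_adjustment(A, l, W)
print(x_hat, "\n", cov_x)
```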

SUMRAY: R and Python Codes for Calculating Cancer Risk Due to Radiation Exposure of a Population

  • Michiya Sasaki; Kyoji Furukawa; Daiki Satoh; Kazumasa Shimada; Shin'ichi Kudo; Shunji Takagi; Shogo Takahara; Michiaki Kai
    • Journal of Radiation Protection and Research, v.48 no.2, pp.90-99, 2023
  • Background: Quantitative risk assessments should be accompanied by uncertainty analyses of the risk models employed in the calculations. In this study, we aim to develop a computational code named SUMRAY for cancer risk projections from radiation exposure that take uncertainties into account, and to make SUMRAY publicly available as a resource for further improvement of risk projection. Materials and Methods: SUMRAY has two versions of code, written in R and in Python. The risk models used in SUMRAY for all-solid-cancer mortality and incidence are those published in the Life Span Study of a cohort of the atomic bomb survivors in Hiroshima and Nagasaki. The confidence intervals associated with the evaluated risks were derived by propagating the statistical uncertainties in the risk-model parameter estimates by the Monte Carlo method. Results and Discussion: SUMRAY was used to calculate the lifetime or time-integrated attributable risks of cancer under an exposure scenario (baseline rates, dose[s], age[s] at exposure, age at the end of follow-up, sex) specified by the user. The results were compared with those calculated using another well-known web-based tool, the Radiation Risk Assessment Tool (RadRAT; National Institutes of Health), and showed reasonable agreement within the estimated confidence intervals. Compared with RadRAT, SUMRAY can be used for a wider range of applications, as it allows risk projection with arbitrarily specified risk models and/or population reference data. Conclusion: The reliability of SUMRAY with the present risk-model parameters and their variance-covariance matrices was verified by comparison with the other codes. The SUMRAY code is distributed to the public as open-source under the Massachusetts Institute of Technology license.
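A minimal sketch of the Monte Carlo propagation the abstract describes: draw risk-model parameters from a multivariate normal with the estimated mean and variance-covariance matrix, evaluate the risk per draw, and take percentiles. The one-parameter model and the numbers below are illustrative assumptions, not the Life Span Study estimates or SUMRAY's actual models.

```python
import numpy as np

def mc_risk_ci(risk_fn, beta_hat, cov_beta, n_draws=10_000, seed=0):
    """Monte Carlo propagation of risk-model parameter uncertainty:
    sample parameters from a multivariate normal with the estimated mean
    and variance-covariance matrix, evaluate the risk for each draw, and
    report percentile confidence bounds."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(beta_hat, cov_beta, size=n_draws)
    risks = np.array([risk_fn(b) for b in draws])
    return np.percentile(risks, [2.5, 50.0, 97.5])

# hypothetical linear excess-risk model ERR(dose) = beta0 * dose;
# the numbers are illustrative, not Life Span Study estimates
dose = 0.1                                     # Gy
risk_fn = lambda b: b[0] * dose
beta_hat = np.array([0.5])
cov_beta = np.array([[0.01]])
print(mc_risk_ci(risk_fn, beta_hat, cov_beta))
```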

Multivariate Classification of Choson Coins

  • Lee, Chang-Keun; Kang, Hyung-Tai; Goh, Sung-Hee
    • 보존과학연구, s.8, pp.1-12, 1987
  • Fifty ancient Korean coins originating from the Choson dynasty were analyzed for 9 elements (Sn, Fe, As, Ag, Co, Sb, Ir, Ru, and Ni) by instrumental neutron activation analysis and for 3 elements (Cu, Pb, and Zn) by atomic absorption spectrometry. Bronze coins from the early days of the dynasty contain Cu, Pb, and Sn as major constituents approximately in the ratio 90 : 4 : 3, whereas those from later days contain them in the ratio 7 : 2 : 0. Brass coins, whose minting began in the 17th century, contain Cu, Zn, and Pb as major constituents approximately in the ratio 7 : 1 : 1. The multivariate data were analyzed for the relations among elemental contents through the variance-covariance matrix, and further analyzed by a principal component mapping method. As a result, a training set of 8 classes was chosen, based on the spread of sample points in an eigenvector plot and on archaeological data such as age and the office of minting.
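A minimal sketch of principal component mapping via the variance-covariance matrix, as used to spread the coin samples in an eigenvector plot; the data below are random stand-ins for the elemental concentrations, not the paper's measurements.

```python
import numpy as np

def principal_component_map(X, n_components=2):
    """Project samples onto the leading eigenvectors of their
    variance-covariance matrix (principal component mapping)."""
    Xc = X - X.mean(axis=0)                    # center each variable
    cov = np.cov(Xc, rowvar=False)             # variance-covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_components]
    return Xc @ eigvec[:, order], eigval[order]

# random stand-in for 50 coins x 12 elemental concentrations
rng = np.random.default_rng(3)
X = rng.random((50, 12))
scores, variances = principal_component_map(X)
print(scores.shape, variances)                 # points for an eigenvector plot
```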
