• Title/Summary/Keyword: Probability and statistics


The Analysis of the Research Trends Related to School Health in Korea (학교보건 관련 국내 연구동향 분석)

  • Jung, Jeong-Sim;Kim, Jung-Soon
    • Journal of the Korean Society of School Health
    • /
    • v.17 no.1
    • /
    • pp.85-95
    • /
    • 2004
  • Objectives : The aim of this study was to identify trends in school health research by analyzing articles related to school health over the last 10 years; this information can be used to guide future research directions. Methods : This is a descriptive study that analyzed annual data. Using an objective evaluation frame for the methodology and research domain of each paper, all papers included in journals concerning school health from January 1993 to December 2000 were analyzed. The data were processed statistically by frequency and percentage. Results : 455 papers were published in 9 journals related to school health. The Journal of the Korean Society of School Health had 204 articles, the highest number of any journal. Most of the articles were descriptive, but the number of experimental studies increased over time. Students were the most common research subjects, but the trend toward studying both parents and teachers increased near the end of the sampling period. Subject selection was most commonly based on convenience, but probability sampling gradually increased annually. The most common research instrument was the questionnaire, and the reliability and validity of instruments were described in approximately half of the studies. The survey was the most commonly used method of data collection. Papers that addressed ethical issues in data collection were fewer than those that did not, as were papers that provided a rationale for the calculation of sample size. Parametric statistics were the main methods of data analysis, but advanced statistics were used more often than simple descriptive statistics in the latter part of the sampling period. In general, limitations of the studies were not frequently mentioned, but recommendations were made more often. Regarding the characteristics of the research area, the assessment domain was remarkable: the rate of school health problem assessment was the highest among research subjects, and sex-related subjects were the most common among detailed research subjects. Conclusions : Research on school health has increased quantitatively, but it is difficult to ascertain its qualitative development. Therefore, on the basis of the research completed to date, more school-based intervention studies and longitudinal studies are needed to evaluate the effects of school health services. As well, policy suggestions through international and cross-sectional comparison studies are needed to assist in establishing the long-term direction of school health.

New composite distributions for insurance claim sizes (보험 청구액에 대한 새로운 복합분포)

  • Jung, Daehyeon;Lee, Jiyeon
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.3
    • /
    • pp.363-376
    • /
    • 2017
  • The insurance market is saturated and its growth engine is exhausted; consequently, the insurance industry is now in a low-growth period, with insurance companies facing a fierce competitive environment. In such a situation, it is an important issue to find probability distributions that can explain the flow of insurance claims, which are the basis of the actuarial calculation of insurance products. Insurance claims are generally known to be well fitted by lognormal or Pareto distributions, which are right-skewed with a thick tail. In recent years, skew normal distributions and skew t distributions have also been considered reasonable distributions for describing insurance claims. Cooray and Ananda (2005) proposed a composite lognormal-Pareto distribution that has the advantages of both the lognormal and Pareto distributions, and they showed that the composite distribution fits better than the single distributions. In this paper, we introduce new composite distributions based on skew normal or skew t distributions and apply them to Danish fire insurance claim data and US indemnity loss data to compare their performance with the other composite distributions and single distributions.
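The composite construction described in the abstract can be illustrated numerically. The sketch below is our own, not Cooray and Ananda's exact parameterization: it glues a truncated lognormal body to a Pareto tail at a threshold and chooses the mixing weight so the density is continuous at the splice point. The names `mu`, `sigma`, `theta`, and `alpha` are our own.

```python
import numpy as np
from scipy import stats

def composite_lognormal_pareto_pdf(x, mu, sigma, theta, alpha):
    """Density of a composite distribution: truncated lognormal body on
    (0, theta], Pareto tail on (theta, inf), glued continuously at theta."""
    x = np.asarray(x, dtype=float)
    ln = stats.lognorm(s=sigma, scale=np.exp(mu))
    body_at_theta = ln.pdf(theta) / ln.cdf(theta)  # truncated-lognormal density at theta
    tail_at_theta = alpha / theta                  # Pareto density at theta
    # Continuity at theta: w * body_at_theta = (1 - w) * tail_at_theta
    w = tail_at_theta / (tail_at_theta + body_at_theta)
    return np.where(
        x <= theta,
        w * ln.pdf(x) / ln.cdf(theta),
        (1 - w) * alpha * theta**alpha / x**(alpha + 1),
    )
```

By construction the body and tail weights sum to one, so the density integrates to 1 whatever the splice point; maximum likelihood over `(mu, sigma, theta, alpha)` would then be the fitting step.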

On the Small Sample Distribution and its Consistency with the Large Sample Distribution of the Chi-Squared Test Statistic for a Two-Way Contingency Table with Fixed Margins (주변값이 주어진 이원분할표에 대한 카이제곱 검정통계량의 소표본 분포 및 대표본 분포와의 일치성 연구)

  • Park, Cheol-Yong;Choi, Jae-Sung;Kim, Yong-Gon
    • Journal of the Korean Data and Information Science Society
    • /
    • v.11 no.1
    • /
    • pp.83-90
    • /
    • 2000
  • The chi-squared test statistic is usually employed for testing the independence of two categorical variables in a two-way contingency table. It is well known that, under independence, the test statistic has an asymptotic chi-squared distribution under multinomial or product-multinomial models. For the case where both margins are fixed, the sampling model of the contingency table is a multiple hypergeometric distribution, and the chi-squared test statistic follows the same limiting distribution. In this paper, we study the difference between the small-sample and large-sample distributions of the chi-squared test statistic for the case with fixed margins. For a few small-sample cases, the exact small-sample distribution of the test statistic is computed directly. For a few larger sample sizes, the small-sample distribution of the statistic is generated via a Monte Carlo algorithm and then compared with the large-sample distribution via chi-squared probability plots and Kolmogorov-Smirnov tests.
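The Monte Carlo step can be sketched as follows (our illustration, with hypothetical margins): draw tables from the multiple hypergeometric model by shuffling column labels against fixed row labels, compute the Pearson statistic for each table, and compare the empirical distribution with the limiting chi-squared distribution via a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def sample_table_fixed_margins(row_sums, col_sums, rng):
    """Draw a two-way table from the multiple hypergeometric model by
    randomly matching column labels to fixed row labels."""
    rows = np.repeat(np.arange(len(row_sums)), row_sums)
    cols = np.repeat(np.arange(len(col_sums)), col_sums)
    rng.shuffle(cols)
    table = np.zeros((len(row_sums), len(col_sums)), dtype=int)
    np.add.at(table, (rows, cols), 1)
    return table

def pearson_chi2(table):
    """Pearson chi-squared statistic for independence."""
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    return ((table - expected) ** 2 / expected).sum()

row_sums, col_sums = [30, 30, 40], [50, 50]   # hypothetical fixed margins
stats_mc = [pearson_chi2(sample_table_fixed_margins(row_sums, col_sums, rng))
            for _ in range(2000)]
dof = (len(row_sums) - 1) * (len(col_sums) - 1)
ks = stats.kstest(stats_mc, stats.chi2(dof).cdf)   # compare with the limit
```

Because the statistic is discrete in small samples, the KS comparison is exactly where the small-sample versus large-sample discrepancy studied in the paper shows up.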


Numerical studies on approximate option prices (근사적 옵션 가격의 수치적 비교)

  • Yoon, Jeongyoen;Seung, Jisu;Song, Seongjoo
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.2
    • /
    • pp.243-257
    • /
    • 2017
  • In this paper, we compare several methods to approximate option prices: the Edgeworth expansion, A-type and C-type Gram-Charlier expansions, a method using the normal inverse Gaussian (NIG) distribution, and an asymptotic method using nonlinear regression. We used two different types of approximation. The first (called the RNM method) approximates the risk-neutral probability density function of the log return of the underlying asset and then computes the option price. The second (called the OPTIM method) finds an approximate option pricing formula and then estimates its parameters to compute the option price. For simulation experiments, we generated underlying asset data from the Heston model and the NIG model, a well-known stochastic volatility model and a well-known Levy model, respectively. We also applied the above approximation methods to KOSPI200 call option prices as a real-data application. We found that the OPTIM method shows better performance on average than the RNM method. Among the OPTIM methods, the A-type Gram-Charlier expansion and the asymptotic method using nonlinear regression showed relatively better performance; among the RNM methods, the method using the NIG distribution was relatively better than the others.
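The RNM idea for the A-type Gram-Charlier expansion can be sketched numerically: approximate the risk-neutral density of the standardized log-return by a Gram-Charlier A-type series and integrate the discounted payoff. This is our own minimal illustration, not the paper's implementation; with zero skewness and excess kurtosis it reduces to the Black-Scholes price.

```python
import numpy as np
from scipy import stats

def gram_charlier_call(S0, K, r, sigma, T, skew=0.0, exkurt=0.0):
    """Call price by integrating the payoff against a Gram-Charlier A-type
    density for the standardized log-return (illustrative RNM-style sketch)."""
    m = (r - 0.5 * sigma**2) * T          # risk-neutral mean of log-return
    s = sigma * np.sqrt(T)                # its standard deviation
    z = np.linspace(-10, 10, 200001)
    phi = stats.norm.pdf(z)
    he3 = z**3 - 3 * z                    # probabilists' Hermite polynomials
    he4 = z**4 - 6 * z**2 + 3
    density = phi * (1 + skew / 6 * he3 + exkurt / 24 * he4)
    payoff = np.maximum(S0 * np.exp(m + s * z) - K, 0.0)
    integrand = payoff * density
    # trapezoidal integration of the discounted expected payoff
    return np.exp(-r * T) * np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(z))

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes benchmark."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * stats.norm.cdf(d1) - K * np.exp(-r * T) * stats.norm.cdf(d2)
```

Nonzero `skew` and `exkurt` tilt the density away from the Gaussian, which is exactly the degree of freedom the expansion methods in the paper exploit.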

The Significance Test on the AHP-based Alternative Evaluation: An Application of Non-Parametric Statistical Method (AHP를 이용한 대안 평가의 유의성 분석: 비모수적 통계 검정 적용)

  • Park, Joonsoo;Kim, Sung-Chul
    • The Journal of Society for e-Business Studies
    • /
    • v.22 no.1
    • /
    • pp.15-35
    • /
    • 2017
  • The weighted-sum evaluation method using AHP is widely used in feasibility analysis and alternative selection. Final scores are given as weighted sums, and the alternative with the largest score is selected. With two alternatives, as in feasibility analysis, a final score greater than 0.5 determines the selection, but there remains the question of how large is large enough. KDI suggested the concept of a 'grey area' for scores between 0.45 and 0.55, in which decisions are to be made with caution, but it lacks theoretical background. Statistical testing was introduced to answer this question in some studies; these assumed certain probability distributions but did not establish their validity. We examine various cases of weighted-sum evaluation scores and show why statistical testing must be introduced. We suggest a non-parametric testing procedure that does not assume a specific distribution. A case study is conducted to analyze the validity of the suggested testing procedure. We conclude with remarks on the implications of the analysis and future directions for research.
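As one concrete distribution-free option (our illustration; the paper's own procedure may differ), a Wilcoxon signed-rank test on evaluators' paired score differences asks whether the winning alternative's advantage over the 0.5 threshold is significant without assuming any score distribution. All numbers below are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical data: each entry is one evaluator's AHP weighted-sum score
# for alternative A; the two alternatives' scores sum to 1.
scores_a = np.array([0.53, 0.57, 0.49, 0.61, 0.55, 0.52, 0.58, 0.47, 0.56, 0.54])
scores_b = 1.0 - scores_a

# Distribution-free test of H0: the median score difference is zero,
# i.e. A's advantage over the 0.5 threshold is not significant.
res = stats.wilcoxon(scores_a - scores_b, alternative="greater")
significant = res.pvalue < 0.05
```

A sign test or a permutation test over evaluators would serve the same distribution-free purpose; the point is that no normality assumption on the weighted sums is required.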

Regression Models for Determining the Patent Royalty Rates using Infringement Damage Awards and Inter-Partes Review Cases (손해배상액과 무효심판 판례를 이용한 특허 로열티율 산정 회귀모형)

  • Yang, Dong Hong;Kang, Gunseog;Kim, Sung-Chul
    • The Journal of Society for e-Business Studies
    • /
    • v.23 no.1
    • /
    • pp.47-63
    • /
    • 2018
  • This study suggests quantitative models to calculate a royalty rate as an important input to the relief-from-royalty method, which combines characteristics of the income approach and the market approach generally used in the valuation of intangible assets. We built a royalty-rate regression model from patent infringement damages cases based on royalties, using the royalty rates as the dependent variable and the patent indexes of the corresponding patent rights as independent variables. We then constructed a logistic regression model from inter-partes review cases of patent rights, using not-unpatentable results as the dependent variable and the patent indexes of the corresponding patent rights as independent variables. A final royalty rate was calculated by combining the royalty rate from the regression model with the not-unpatentable probability from the logistic regression model. The suggested royalty rate was compared with the royalty rate obtained by traditional methods to check its reliability.
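The two-model structure can be sketched on synthetic data (all variable names and numbers below are hypothetical, not the paper's data): an OLS regression maps patent indexes to a royalty rate, a logistic regression (fit here by Newton-Raphson) maps the same indexes to a not-unpatentable probability, and the final rate combines the two.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical patent indexes (e.g. citations, claims, family size) -- synthetic.
n = 200
X = rng.normal(size=(n, 3))
Xd = np.column_stack([np.ones(n), X])           # design matrix with intercept

# (1) OLS regression: royalty rate (%) on patent indexes.
true_beta = np.array([5.0, 1.2, -0.8, 0.5])
rate = Xd @ true_beta + rng.normal(scale=0.5, size=n)
beta_hat, *_ = np.linalg.lstsq(Xd, rate, rcond=None)

# (2) Logistic regression via Newton-Raphson: not-unpatentable outcome
# on the same indexes.
true_gamma = np.array([0.3, 1.0, 0.0, -0.5])
y = (rng.random(n) < 1 / (1 + np.exp(-(Xd @ true_gamma)))).astype(float)
gamma = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-(Xd @ gamma)))
    grad = Xd.T @ (y - p)                       # score
    hess = Xd.T @ (Xd * (p * (1 - p))[:, None]) # observed information
    gamma = gamma + np.linalg.solve(hess, grad)

# Final royalty rate = regression rate discounted by the survival probability.
new_patent = np.array([1.0, 0.2, -0.1, 0.3])
expected_rate = (new_patent @ beta_hat) * (1 / (1 + np.exp(-(new_patent @ gamma))))
```

Discounting the regression rate by the probability of surviving an inter-partes review is one natural reading of "combining" the two model outputs; other combination rules are possible.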

Empirical Analysis on Rao-Scott First Order Adjustment for Two-Population Homogeneity Test Based on Stratified Three-Stage Cluster Sampling with PPS

  • Heo, Sunyeong
    • Journal of Integrative Natural Science
    • /
    • v.7 no.3
    • /
    • pp.208-213
    • /
    • 2014
  • Nationwide and/or large-scale sample surveys generally use complex sample designs. The traditional Pearson chi-square test is not appropriate for categorical complex sample data. Rao and Scott suggested an adjustment to the Pearson chi-square test that uses the average of the eigenvalues of the design matrix of cell probabilities. This study compares the efficiency of the Rao-Scott first-order adjusted test to the Wald test for homogeneity between two populations using data from the 2009 Gyeongnam regional education offices' customer satisfaction survey (2009 GREOCSS). The 2009 GREOCSS data were collected by stratified three-stage cluster sampling with probability proportional to size. The empirical results show that the Rao-Scott adjusted test statistic, which uses only the variances of the cell probabilities, is very close to the Wald test statistic, which uses the covariance matrix of the cell probabilities, on the 2009 GREOCSS data. However, caution is needed in using the Rao-Scott first-order adjusted test statistic in place of the Wald test, because its efficiency decreases as the relative variance of the eigenvalues of the design matrix of cell probabilities increases, especially when the number of degrees of freedom is small.
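The first-order adjustment itself is a one-line correction: divide the Pearson statistic by the mean of the design-effect eigenvalues. A minimal sketch with assumed counts and design effects (all numbers hypothetical, not the GREOCSS data):

```python
import numpy as np

def rao_scott_first_order(observed, expected, eigenvalues):
    """First-order Rao-Scott adjustment: divide the Pearson chi-square
    statistic by the mean of the design-effect eigenvalues."""
    x2 = np.sum((observed - expected) ** 2 / expected)
    return x2 / np.mean(eigenvalues)

# Hypothetical homogeneity setup: counts in 4 categories for one population,
# with expected counts under homogeneity and assumed design-effect eigenvalues.
observed = np.array([120.0, 80.0, 60.0, 40.0])
expected = np.array([110.0, 90.0, 55.0, 45.0])
deffs = np.array([1.8, 2.1, 1.6, 1.9])

x2_adj = rao_scott_first_order(observed, expected, deffs)
```

When the eigenvalues are nearly equal the adjusted statistic tracks the Wald statistic closely; when they are very unequal (large relative variance), the abstract's caution applies and the first-order correction degrades.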

Organized structure of turbulent boundary layer with rod-roughened wall (표면조도가 난류구조에 미치는 영향)

  • Lee, Jae-Hwa;Lee, Seung-Hyun;Kim, Kyoung-Youn;Sung, Hyung-Jin
    • Korean Society of Computational Fluids Engineering: Conference Proceedings
    • /
    • 2008.03b
    • /
    • pp.189-192
    • /
    • 2008
  • Turbulent coherent structures near a rod-roughened wall are investigated by analyzing a direct numerical simulation database of the turbulent boundary layer. The roughness sublayer is defined as the region where two-point correlations are not independent of the streamwise location relative to the roughness. The roughness sublayer based on the two-point spatial correlation is different from that given by one-point statistics. Quadrant analysis and the probability-weighted Reynolds shear stress indicate that turbulent structures are not affected by surface roughness above the roughness sublayer defined by the spatial correlations. The conditionally averaged flow fields associated with Reynolds-shear-stress-producing Q2/Q4 events show that, although turbulent vortices are affected within the roughness sublayer, they are very similar at different streamwise locations above it. The Reynolds-stress-producing turbulent vortices in the log layer have almost the same geometrical shape as those in smooth-wall-bounded turbulent flows. This suggests that the mechanism by which the Reynolds stress is produced in the log layer is not significantly affected by the present surface roughness.


Organized Structure of Turbulent Boundary Layer with Rod-roughened Wall (표면조도가 있는 난류경계층 내 난류구조)

  • Lee, Jae-Hwa;Lee, Seung-Hyun;Kim, Kyoung-Youn;Sung, Hyung-Jin
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.32 no.6
    • /
    • pp.463-470
    • /
    • 2008
  • Turbulent coherent structures near a rod-roughened wall are investigated by analyzing a direct numerical simulation database of the turbulent boundary layer. The surface roughness rods with height $k/{\delta}=0.05$ are arranged periodically at $Re_{\delta}=9000$. The roughness sublayer is defined as the region where two-point correlations are not independent of the streamwise location relative to the roughness. The roughness sublayer based on the two-point spatial correlation is different from that given by one-point statistics. Quadrant analysis and the probability-weighted Reynolds shear stress indicate that turbulent structures are not affected by surface roughness above the roughness sublayer defined by the spatial correlations. The conditionally averaged flow fields associated with Reynolds-shear-stress-producing Q2/Q4 events show that, although turbulent vortices are affected within the roughness sublayer, they are very similar at different streamwise locations above it. The Reynolds-stress-producing turbulent vortices in the log layer ($y/{\delta}=0.15$) have almost the same geometrical shape as those in smooth-wall-bounded turbulent flows. This suggests that the mechanism by which the Reynolds stress is produced in the log layer is not significantly affected by the present surface roughness.
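Quadrant analysis, used in both rod-roughened-wall abstracts above, classifies each velocity-fluctuation sample (u', v') by sign and splits the Reynolds shear stress -⟨u'v'⟩ into outward interactions (Q1), ejections (Q2), inward interactions (Q3), and sweeps (Q4). A minimal sketch on synthetic negatively correlated fluctuations (not DNS data):

```python
import numpy as np

rng = np.random.default_rng(7)

def quadrant_contributions(u, v):
    """Split the Reynolds shear stress -<u'v'> into the four quadrants:
    Q1 (+,+) outward, Q2 (-,+) ejection, Q3 (-,-) inward, Q4 (+,-) sweep."""
    uv = u * v
    n = len(u)
    masks = {
        "Q1": (u > 0) & (v > 0), "Q2": (u < 0) & (v > 0),
        "Q3": (u < 0) & (v < 0), "Q4": (u > 0) & (v < 0),
    }
    return {q: -np.sum(uv[m]) / n for q, m in masks.items()}

# Synthetic negatively correlated fluctuations, mimicking a boundary layer
# where ejections (Q2) and sweeps (Q4) dominate the shear-stress production.
cov = [[1.0, -0.4], [-0.4, 0.5]]
u, v = rng.multivariate_normal([0.0, 0.0], cov, size=100000).T
contrib = quadrant_contributions(u, v)
```

The four contributions sum to -⟨u'v'⟩; comparing the Q2/Q4 dominance at different wall distances is the kind of diagnostic the abstracts apply above and inside the roughness sublayer.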

Noninformative Priors for the Ratio of Parameters in Inverse Gaussian Distribution (INVERSE GAUSSIAN분포의 모수비에 대한 무정보적 사전분포에 대한 연구)

  • 강상길;김달호;이우동
    • The Korean Journal of Applied Statistics
    • /
    • v.17 no.1
    • /
    • pp.49-60
    • /
    • 2004
  • In this paper, when the observations are distributed as inverse Gaussian, we develop noninformative priors for the ratio of the parameters of the inverse Gaussian distribution. We derive the first-order matching prior and prove that a second-order matching prior does not exist. It turns out that the one-at-a-time reference prior satisfies the first-order matching criterion. A simulation study is performed.