• Title/Abstract/Keyword: non-parametric statistics

Search results: 134

정보검색 연구의 방법론에 관한 고찰 (Methodological Problems in Information Retrieval Research)

  • 이명희
    • 한국비블리아학회지
    • /
    • Vol. 7 No. 1
    • /
    • pp.231-246
    • /
    • 1994
  • A major problem for information retrieval research in the past three decades has been methodology, even though some progress has been made in obtaining useful results from methodologically sound experiments. Within a methodology, potential problems include artificial data generated by the researcher, small sample sizes, and the interpretation of findings. Critics have pointed out that some room exists for improving the methodology of information retrieval research: using existing data, having a large enough sample size, including large numbers of search queries, introducing more control over variables, utilizing more appropriate performance measures, conducting tests carefully, and evaluating findings properly. Relevance judgments depend entirely on the perception of the user and on the situation of the moment. In an experiment, the best judge of relevance is a user with a well-defined information need. Normally more than two categories for relevance judgments are desirable because there are degrees of relevance. In experimental design, careful control of variables is needed for internal validity. When no single database exists for comparison, existing operational databases should be used cautiously. Careful control for the variations of search queries, inter-searcher consistency, intra-searcher consistency, and search strategies is necessary. Parametric statistics requiring rigid assumptions are not appropriate in information retrieval research; non-parametric statistics requiring few assumptions are necessary. In particular, the sign test and the Wilcoxon test are good alternatives.

  • PDF
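The sign test and Wilcoxon signed-rank test recommended in the abstract above can be run in a few lines of scipy. A minimal sketch follows; the per-query scores for two hypothetical retrieval systems are invented purely for illustration.

```python
# Paired non-parametric comparison of two retrieval systems: the sign test
# and the Wilcoxon signed-rank test make almost no distributional
# assumptions about the score differences.
from scipy import stats

# Hypothetical average-precision scores for 10 queries on systems A and B.
a = [0.61, 0.55, 0.70, 0.43, 0.52, 0.66, 0.48, 0.59, 0.73, 0.50]
b = [0.58, 0.49, 0.66, 0.45, 0.47, 0.60, 0.44, 0.57, 0.65, 0.46]
diffs = [x - y for x, y in zip(a, b)]

# Sign test: count positive differences among the non-zero ones and refer
# the count to a Binomial(n, 0.5) null distribution.
n_pos = sum(d > 0 for d in diffs)
n_nonzero = sum(d != 0 for d in diffs)
sign_p = stats.binomtest(n_pos, n_nonzero, 0.5).pvalue

# Wilcoxon signed-rank test: also uses the magnitudes of the differences.
wilcoxon_stat, wilcoxon_p = stats.wilcoxon(a, b)

print(sign_p, wilcoxon_p)
```

The sign test discards magnitude information, so the Wilcoxon test is usually the more powerful of the two when the differences are at least ordinal.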

2차원 데이터의 여러 가지 분석방법 (Various types of analyses for two-dimensional data)

  • 백재욱
    • 한국신뢰성학회지:신뢰성응용연구
    • /
    • Vol. 10 No. 4
    • /
    • pp.251-263
    • /
    • 2010
  • Modelling failures is important for reliability analysis, since failures of products such as automobiles occur as both time and usage progress, and the results from proper analysis of the resulting two-dimensional data can be used to establish a warranty assurance policy. Hence, this paper discusses general issues in modelling failures and investigates both one-dimensional and two-dimensional approaches to two-dimensional data. Finally, non-parametric approaches to two-dimensional data are presented as a means of exploratory data analysis.
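The contrast between one-dimensional and two-dimensional views of (age, usage) failure data can be sketched in a few lines of numpy. The failure records and the 24-month / 40,000 km warranty region below are invented for illustration, not taken from the paper.

```python
# Two-dimensional warranty data: each failure is an (age, mileage) pair.
# A one-dimensional approach collapses the two scales via the usage rate;
# a two-dimensional approach works in the (age, mileage) plane directly.
import numpy as np

# Hypothetical failures: (age in months, mileage in 1000 km) at failure.
failures = np.array([
    [3, 10], [8, 12], [14, 50], [20, 22], [25, 30],
    [30, 70], [11, 40], [18, 15], [6, 25], [28, 55],
])
age, mileage = failures[:, 0], failures[:, 1]

# One-dimensional reduction: usage rate (1000 km per month) per unit.
rate = mileage / age

# Two-dimensional view: failures inside a rectangular warranty region of
# 24 months AND 40,000 km (both limits must hold for a claim).
in_warranty = (age <= 24) & (mileage <= 40)
print(rate.round(2), int(in_warranty.sum()))
```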

Fuzzy Local Linear Regression Analysis

  • Hong, Dug-Hun;Kim, Jong-Tae
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 18 No. 2
    • /
    • pp.515-524
    • /
    • 2007
  • This paper deals with local linear estimation of fuzzy regression models based on Diamond (1998) as a new class of non-linear fuzzy regression. The purpose of this paper is to introduce the use of smoothing in testing for lack of fit of parametric fuzzy regression models.

  • PDF
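The fuzzy estimator in the paper builds on ordinary (crisp) local linear regression. Below is a minimal numpy sketch of that building block, a Gaussian-kernel weighted least-squares fit at each target point, not the fuzzy extension itself; the data are simulated.

```python
# Local linear regression: at each point x0, fit a weighted straight line
# to nearby data; the fitted intercept is the smooth estimate at x0.
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear estimate of E[y|x] at x0 with bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)          # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])  # local design matrix
    # Weighted least squares: solve (X'WX) beta = X'Wy; intercept = fit at x0.
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0]

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)
fit = np.array([local_linear(x, y, x0, h=0.05) for x0 in x])
print(np.max(np.abs(fit - np.sin(2 * np.pi * x))))
```

Local linear fits are preferred over simple kernel averages mainly because they do not suffer the usual boundary bias, which matters in lack-of-fit testing.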

Bayesian 방법에 의한 잡음감소 방법에 관한 연구 (Wavelet Denoising based on a Bayesian Approach)

  • 이문직;정진현
    • 대한전기학회:학술대회논문집
    • /
    • Proceedings of the KIEE 1999 Summer Conference G
    • /
    • pp.2956-2958
    • /
    • 1999
  • The classical solution to the noise removal problem is the Wiener filter, which utilizes the second-order statistics of the Fourier decomposition. We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in non-parametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansions common to most applications. For the prior specified, the posterior median yields a thresholding procedure.

  • PDF
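A minimal numpy sketch of wavelet threshold denoising in the spirit the abstract describes: a one-level Haar transform, soft thresholding of the detail coefficients (the Bayesian posterior-median rule behaves like such a threshold), then inversion. This is generic soft thresholding on simulated data, not the specific prior of the paper.

```python
# One-level Haar wavelet denoising via soft thresholding of the detail
# coefficients, then exact inverse transform.
import numpy as np

def haar_denoise(signal, threshold):
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)    # scaling coefficients
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)    # wavelet coefficients
    # Soft threshold: shrink details toward zero, killing small (noise) ones.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 0.0, -4.0], 64)     # piecewise-constant signal
noisy = clean + rng.normal(0, 0.5, clean.size)
denoised = haar_denoise(noisy, threshold=1.0)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```

Piecewise-constant signals have sparse Haar expansions, which is exactly the sparseness the prior in the paper is designed to capture.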

A comparison of tests for homoscedasticity using simulation and empirical data

  • Anastasios Katsileros;Nikolaos Antonetsis;Paschalis Mouzaidis;Eleni Tani;Penelope J. Bebeli;Alex Karagrigoriou
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 31 No. 1
    • /
    • pp.1-35
    • /
    • 2024
  • The assumption of homoscedasticity is one of the most crucial assumptions for many parametric tests used in the biological sciences. The aim of this paper is to compare the empirical probability of type I error and the power of ten parametric and two non-parametric tests for homoscedasticity with simulations under different types of distributions, numbers of groups, numbers of samples per group, variance ratios and significance levels, as well as through empirical data from an agricultural experiment. According to the findings of the simulation study, when there is no violation of the assumption of normality and the groups have equal variances and equal numbers of samples, the Bhandary-Dai, Cochran's C, Hartley's Fmax, Levene (trimmed mean) and Bartlett tests are considered robust. The Levene (absolute and square deviations) tests show a high probability of type I error for small numbers of samples, which increases as the number of groups rises. When data groups display a non-normal distribution, researchers should utilize the Levene (trimmed mean), O'Brien and Brown-Forsythe tests. On the other hand, if the assumption of normality is not violated but diagnostic plots indicate unequal variances between groups, researchers are advised to use the Bartlett, Z-variance, Bhandary-Dai and Levene (trimmed mean) tests. Assessing the tests considered, the most well-rounded choice is Levene's test (trimmed mean), which provides satisfactory type I error control and relatively high power. For the scenarios considered, the two non-parametric tests are not recommended. In conclusion, it is suggested to check for normality first and consider the number of samples per group before choosing the most appropriate test for homoscedasticity.
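Two of the workhorse tests compared in the abstract are available directly in scipy: Bartlett's test (which assumes normality) and Levene's test with trimmed means, the robust variant the study recommends. The three groups below are simulated, with the third variance deliberately inflated.

```python
# Homoscedasticity tests on three simulated groups, one with inflated variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
g1 = rng.normal(0, 1.0, 50)
g2 = rng.normal(0, 1.0, 50)
g3 = rng.normal(0, 3.0, 50)   # deliberately unequal variance

bart_stat, bart_p = stats.bartlett(g1, g2, g3)
lev_stat, lev_p = stats.levene(g1, g2, g3, center='trimmed',
                               proportiontocut=0.1)
print(bart_p, lev_p)
```

With a variance ratio of 9 and 50 samples per group, both tests should reject the null of equal variances decisively.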

절단함수를 이용한 AUC와 VUS (AUC and VUS using truncated distributions)

  • 홍종선;홍성혁
    • 응용통계연구
    • /
    • Vol. 32 No. 4
    • /
    • pp.593-605
    • /
    • 2019
  • There have been many studies of the AUC and VUS, statistics that measure the discriminative power of a classification model using the area under the ROC curve and the volume under the ROC surface. The two-way partial AUC, which places restrictions on both the FPR and the TPR that constitute the ROC curve, has been proposed as more effective and accurate than the partial AUC. For the ROC surface, a three-way partial VUS statistic has been developed in addition to the partial VUS. In this study, we propose an alternative AUC, expressed as a probability and as an integral, using two truncated functions restricted in both the FPR and the TPR of the ROC curve. We also show that this AUC is related to the two-way partial AUC. We find that the three-way partial VUS on the ROC surface is likewise related to a VUS based on truncated functions. These alternative AUC and VUS statistics are represented and estimated with Mann-Whitney statistics. Based on normal distribution models and random samples, we explore parametric and non-parametric estimation methods for them.
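The Mann-Whitney representation of the AUC mentioned in the abstract is easy to verify numerically: AUC = P(score of a positive > score of a negative), which scipy's Mann-Whitney U statistic gives directly as U / (n_pos x n_neg). The scores below are simulated.

```python
# AUC as a Mann-Whitney statistic: U / (n_pos * n_neg) equals the
# pairwise estimate of P(positive score > negative score).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
neg = rng.normal(0.0, 1.0, 200)     # scores of negatives
pos = rng.normal(1.0, 1.0, 200)     # scores of positives (shifted up)

u, _ = stats.mannwhitneyu(pos, neg, alternative='two-sided')
auc_u = u / (pos.size * neg.size)

# Direct pairwise estimate of P(pos > neg), counting ties as one half.
auc_pair = (np.mean(pos[:, None] > neg[None, :])
            + 0.5 * np.mean(pos[:, None] == neg[None, :]))
print(auc_u, auc_pair)
```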

An integrated approach for structural health monitoring using an in-house built fiber optic system and non-parametric data analysis

  • Malekzadeh, Masoud;Gul, Mustafa;Kwon, Il-Bum;Catbas, Necati
    • Smart Structures and Systems
    • /
    • Vol. 14 No. 5
    • /
    • pp.917-942
    • /
    • 2014
  • Damage detection algorithms based on multivariate statistics, employed in conjunction with novel sensing technologies, are attracting more attention for long-term Structural Health Monitoring (SHM) of civil infrastructure. In this study, two practical data-driven methods are investigated utilizing strain data captured from a 4-span bridge model by Fiber Bragg Grating (FBG) sensors as part of a bridge health monitoring study. The most common and critical bridge damage scenarios were simulated on the representative bridge model equipped with FBG sensors. A high-speed FBG interrogator system was developed by the authors to collect the strain responses under moving vehicle loads. Two data-driven methods, Moving Principal Component Analysis (MPCA) and Moving Cross-Correlation Analysis (MCCA), are coded and implemented to handle and process the large amount of data. The efficiency of the SHM system with FBG sensors and the MPCA and MCCA methods for detecting and localizing damage is explored with several experiments. Based on the findings presented in this paper, MPCA and MCCA coupled with FBG sensors can be deemed to deliver promising results in detecting both local and global damage introduced to the bridge structure.
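A minimal numpy sketch of the Moving Principal Component Analysis (MPCA) idea described in the abstract: slide a window over multichannel "strain" measurements, track the first eigenvector of the windowed covariance, and flag damage when that direction rotates. The data and the damage model (one channel losing correlation amplitude mid-stream) are invented, not taken from the bridge experiments.

```python
# MPCA sketch: rotation of the windowed first principal direction as a
# damage index on simulated 4-channel "strain" data.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
base = rng.normal(0, 1.0, n)                       # shared structural response
X = np.column_stack([base + rng.normal(0, 0.1, n) for _ in range(4)])
X[500:, 3] *= 0.2                                  # "damage": channel 3 amplitude drops

def mpca_index(X, window):
    """1 - |cos angle| between each window's first PC and the initial one."""
    ref, idx = None, []
    for t in range(window, X.shape[0]):
        cov = np.cov(X[t - window:t].T)
        v = np.linalg.eigh(cov)[1][:, -1]          # first principal direction
        if ref is None:
            ref = v
        idx.append(1.0 - abs(ref @ v))             # abs() removes sign ambiguity
    return np.array(idx)

index = mpca_index(X, window=100)
print(index[:300].max(), index[-300:].mean())
```

The index stays near zero while the correlation structure is intact and jumps once the windows cover the post-damage regime.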

비모수 통계기법을 이용한 낙동강 수계의 수질 장기 경향 분석 (Long-Term Trend Analyses of Water Qualities in Nakdong River Based on Non-Parametric Statistical Methods)

  • 김주화;박석순
    • 한국물환경학회지
    • /
    • Vol. 20 No. 1
    • /
    • pp.63-71
    • /
    • 2004
  • Long-term trend analyses of water quality were performed for 49 monitoring stations located on the Nakdong River. The water quality parameters used in this study are monthly data for BOD (Biological Oxygen Demand), TN (Total Nitrogen) and TP (Total Phosphorus) measured from 1990 to 1999. The long-term trends were analyzed by the Seasonal Mann-Kendall Test and the Locally Weighted Scatterplot Smoother (LOWESS). The Nakdong River was divided into four subbasins: the upstream, midstream, western downstream and eastern downstream watersheds. The results of the Seasonal Mann-Kendall Test indicated no trends in BOD in the upstream, western downstream and eastern downstream watersheds, and downward trends in BOD in the midstream watershed. For TN and TP, there were upward trends in all watersheds. The LOWESS curves, however, suggested that BOD, TN and TP concentrations generally increased between 1990 and 1996 and then decreased.
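The Seasonal Mann-Kendall statistic used in the paper is simple to implement: compute Kendall's S within each month (season) and sum, so that seasonal cycles do not masquerade as trends. The monthly series below is simulated with a seasonal cycle plus a small upward trend, standing in for a TN-like record.

```python
# Seasonal Mann-Kendall S statistic: sum of within-season Kendall's S;
# a large positive value indicates an upward trend.
import numpy as np

def seasonal_mk_s(values, seasons):
    """Sum of Kendall's S over seasons; values must be in time order."""
    s_total = 0
    for season in np.unique(seasons):
        v = values[seasons == season]              # time-ordered within season
        for i in range(len(v) - 1):
            s_total += np.sum(np.sign(v[i + 1:] - v[i]))
    return int(s_total)

rng = np.random.default_rng(5)
years = 10
months = np.tile(np.arange(12), years)
seasonal_cycle = np.sin(2 * np.pi * months / 12)   # strong annual cycle
trend = 0.05 * np.repeat(np.arange(years), 12)     # small monotone trend
tn = seasonal_cycle + trend + rng.normal(0, 0.05, years * 12)
print(seasonal_mk_s(tn, months))
```

Because comparisons are made only within the same calendar month, the large annual cycle contributes nothing to S; only the year-over-year trend does.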

Estimating survival distributions for two-stage adaptive treatment strategies: A simulation study

  • Vilakati, Sifiso;Cortese, Giuliana;Dlamini, Thembelihle
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 28 No. 5
    • /
    • pp.411-424
    • /
    • 2021
  • Inference following two-stage adaptive designs (also known as two-stage randomization designs) with survival endpoints usually focuses on estimating and comparing survival distributions for the different treatment strategies. The aim is to identify the treatment strategy(ies) that leads to better survival of the patients. The objective of this study was to assess the performance of three commonly cited methods for estimating survival distributions in two-stage randomization designs. We review three non-parametric methods for estimating survival distributions in two-stage adaptive designs and compare their performance using simulation studies. The simulation studies show that the method based on the marginal mean model is badly affected by high censoring rates and low response rates. The other two methods, which are natural extensions of the Nelson-Aalen estimator and the Kaplan-Meier estimator, have similar performance. These two methods yield survival estimates that are less biased and more precise than those of the marginal mean model, even with small sample sizes. The weighted versions of the Nelson-Aalen and Kaplan-Meier estimators are less affected by high censoring rates and low response rates, whereas the bias of the method based on the marginal mean model increases rapidly with the censoring rate. We apply the three methods to a leukemia clinical trial dataset and compare the results.
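The weighted two-stage estimators discussed above extend the ordinary Kaplan-Meier estimator, which is a short exercise in numpy. The right-censored sample below is invented for illustration.

```python
# Kaplan-Meier product-limit estimator for right-censored survival data.
import numpy as np

def kaplan_meier(times, events):
    """Return (distinct event times, survival estimates) for right-censored data."""
    times = np.asarray(times, float)
    events = np.asarray(events, bool)
    t_event = np.unique(times[events])
    surv, s = [], 1.0
    for t in t_event:
        at_risk = np.sum(times >= t)             # still under observation at t
        d = np.sum((times == t) & events)        # events exactly at t
        s *= 1.0 - d / at_risk                   # product-limit update
        surv.append(s)
    return t_event, np.array(surv)

# Hypothetical follow-up times; 1 = event observed, 0 = censored.
times  = [2, 3, 3, 5, 6, 7, 8, 9, 11, 12]
events = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
t, s = kaplan_meier(times, events)
print(t, s.round(3))
```

The two-stage weighted versions reweight each subject by the inverse probability of following the treatment strategy being estimated; the product-limit core stays the same.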

A PERMUTATION APPROACH TO THE BEHRENS-FISHER PROBLEM

  • Proschan, Michael A.;, Dean A.
    • Journal of the Korean Statistical Society
    • /
    • Vol. 33 No. 1
    • /
    • pp.79-97
    • /
    • 2004
  • We propose a permutation approach to the classic Behrens-Fisher problem of comparing two means in the presence of unequal variances. It is motivated by the observation that a paired test is valid whether or not the variances are equal. Rather than using a single arbitrary pairing of the data, we average over all possible pairings. We do this in both a parametric and nonparametric setting. When the sample sizes are equal, the parametric version is equivalent to referral of the unpaired t-statistic to a t-table with half the usual degrees of freedom. The derivation provides an interesting representation of the unpaired t-statistic in terms of all possible pairwise t-statistics. The nonparametric version uses the same idea of considering all different pairings of data from the two groups, but applies it to a permutation test setting. Each pairing gives rise to a permutation distribution obtained by relabeling treatment and control within pairs. The totality of different mean differences across all possible pairings and relabelings forms the null distribution upon which the p-value is based. The conservatism of this procedure diminishes as the disparity in variances increases, disappearing completely when the ratio of the smaller to larger variance approaches 0. The nonparametric procedure behaves increasingly like a paired t-test as the sample sizes increase.
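The equal-sample-size result stated in the abstract (the parametric version is equivalent to referring the unpaired t-statistic to a t-table with half the usual degrees of freedom) can be checked by simulation. The sketch below estimates the type I error rate under a large variance disparity, where the procedure's conservatism should nearly vanish; all settings are invented for illustration.

```python
# Monte Carlo check: refer the pooled unpaired t-statistic to t with
# n - 1 (not 2n - 2) degrees of freedom, under H0 with unequal variances.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, reps, alpha = 15, 2000, 0.05
rejections = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)        # small-variance group
    y = rng.normal(0.0, 5.0, n)        # large-variance group, same mean (H0 true)
    t_stat, _ = stats.ttest_ind(x, y)  # ordinary pooled unpaired t-statistic
    # Half the usual degrees of freedom: n - 1 instead of 2n - 2.
    if 2 * stats.t.sf(abs(t_stat), df=n - 1) < alpha:
        rejections += 1
rate = rejections / reps
print(rate)
```

With a variance ratio of 25 the empirical level should sit close to the nominal 5%; with equal variances the same procedure would be noticeably conservative, as the abstract states.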