• Title/Summary/Keyword: Maximum Probability


Texture Segmentation Using Statistical Characteristics of SOM and Multiscale Bayesian Image Segmentation Technique (SOM의 통계적 특성과 다중 스케일 Bayesian 영상 분할 기법을 이용한 텍스쳐 분할)

  • Kim Tae-Hyung;Eom Il-Kyu;Kim Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.6
    • /
    • pp.43-54
    • /
    • 2005
  • This paper proposes a novel texture segmentation method using a Bayesian image segmentation technique and the self-organizing feature map (SOM). Multiscale wavelet coefficients are used as the input to the SOMs, and the likelihood and a posteriori probability of the observations are obtained from the trained SOMs. Texture segmentation is performed by maximum a posteriori (MAP) classification using the posterior probabilities from the trained SOMs, and the segmentation result is further improved with context information. The proposed method shows better performance than segmentation based on the hidden Markov tree (HMT) model. Texture segmentation by the SOM combined with the multiscale Bayesian segmentation technique called HMTseg likewise outperforms HMT with HMTseg.
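The MAP classification step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's SOM pipeline: it assumes the class-conditional likelihoods from the trained SOMs are already available as an array, and the function name and toy numbers are this sketch's own.

```python
import numpy as np

def map_classify(likelihoods, priors):
    """Assign each observation to the class with the highest
    posterior probability: argmax_c P(x|c) * P(c)."""
    posteriors = likelihoods * priors                 # unnormalized posteriors
    posteriors /= posteriors.sum(axis=1, keepdims=True)
    return posteriors.argmax(axis=1), posteriors

# Toy example: 3 observations, 2 texture classes, equal priors.
lik = np.array([[0.9, 0.1],
                [0.2, 0.8],
                [0.5, 0.5]])
priors = np.array([0.5, 0.5])
labels, post = map_classify(lik, priors)
print(labels)
```

The paper's context-information step would then relabel pixels using the labels of their coarser-scale neighbors; that refinement is omitted here.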

A Comparison of the Interval Estimations for the Difference in Paired Areas under the ROC Curves (대응표본에서 AUC차이에 대한 신뢰구간 추정에 관한 고찰)

  • Kim, Hee-Young
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.2
    • /
    • pp.275-292
    • /
    • 2010
  • Receiver operating characteristic (ROC) curves can be used to assess the accuracy of tests measured on ordinal or continuous scales. The most commonly used measure of the overall accuracy of a diagnostic test is the area under the ROC curve (AUC). When two ROC curves are constructed from two tests performed on the same individuals, statistical analysis of the difference between the AUCs must take into account the correlated nature of the data. This article focuses on confidence interval estimation for the difference between paired AUCs. We compare nonparametric, maximum likelihood, bootstrap, and generalized pivotal quantity methods, and conduct a Monte Carlo simulation to investigate the coverage probability and expected length of the four methods.
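One of the compared approaches, the paired bootstrap, can be sketched as follows. This is a minimal illustration under assumptions of this sketch (percentile intervals, joint resampling of cases and controls so the pairing between the two tests is preserved), not the paper's exact procedure:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    wins = (scores_pos[:, None] > scores_neg[None, :]).sum()
    ties = (scores_pos[:, None] == scores_neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

def paired_bootstrap_ci(pos1, neg1, pos2, neg2, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for AUC1 - AUC2 on the same subjects:
    resample diseased and healthy subjects jointly, so each resampled
    subject carries both of its test scores."""
    rng = np.random.default_rng(seed)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        ip = rng.integers(0, len(pos1), len(pos1))    # resample cases
        ineg = rng.integers(0, len(neg1), len(neg1))  # resample controls
        diffs[b] = auc(pos1[ip], neg1[ineg]) - auc(pos2[ip], neg2[ineg])
    return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])
```

Joint resampling is the key point: drawing subjects independently for the two tests would discard exactly the correlation the paired design is meant to capture.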

Vulnerability assessment of strategic buildings based on ambient vibrations measurements

  • Mori, Federico;Spina, Daniele
    • Structural Monitoring and Maintenance
    • /
    • v.2 no.2
    • /
    • pp.115-132
    • /
    • 2015
  • This paper presents a new method for the seismic vulnerability assessment of buildings with reference to their operational limit state. The importance of this kind of evaluation arises from the civil protection requirement that certain buildings, considered strategic for seismic emergency management, should retain their functionality even after a destructive earthquake. The method is based on the identification of experimental modal parameters from ambient vibration measurements. Knowledge of the experimental modes makes it possible to perform a linear spectral analysis computing the maximum structural drifts of the building caused by an assigned earthquake. The operational condition is then evaluated by comparing the maximum building drifts with the reference value assigned by the Italian Technical Code for the operational limit state. The uncertainty about the actual seismic frequencies of the building, typically significantly lower than the ambient ones, is explicitly taken into account through a probabilistic approach that defines, for the building, the Operational Index together with the Operational Probability Curve. The method is validated with experimental seismic data from a permanently monitored public building: by comparing the probabilistic prediction with the building's experimental drifts resulting from three weak earthquakes, the reliability of the method is confirmed. Finally, an application of the method to a strategic building in Italy is presented: the whole procedure, from ambient vibration measurement through seismic input definition to the computation of the Operational Probability Curve, is illustrated.

Target Detection Performance in a Clutter Environment Based on the Generalized Likelihood Ratio Test (클러터 환경에서의 GLRT 기반 표적 탐지성능)

  • Suh, Jin-Bae;Chun, Joo-Hwan;Jung, Ji-Hyun;Kim, Jin-Uk
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.30 no.5
    • /
    • pp.365-372
    • /
    • 2019
  • We propose a method to estimate unknown parameters(e.g., target amplitude and clutter parameters) in the generalized likelihood ratio test(GLRT) using maximum likelihood estimation and the Newton-Raphson method. When detecting targets in a clutter environ- ment, it is important to establish a modular model of clutter similar to the actual environment. These correlated clutter models can be generated using spherically invariant random vectors. We obtain the GLRT of the generated clutter model and check its detection probability using estimated parameters.
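As a generic illustration of the Newton-Raphson step used for the maximum likelihood estimation above, the following sketch fits a simple exponential model; the exponential stands in for the paper's spherically invariant clutter model, which is an assumption of this sketch, not the paper's setup:

```python
import numpy as np

def newton_mle_exponential(x, lam0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson MLE of the rate lambda of an exponential sample.
    Score:   l'(lam)  = n/lam - sum(x)
    Hessian: l''(lam) = -n/lam**2
    Each iteration moves lam by -score/hessian until convergence."""
    n, s = len(x), x.sum()
    lam = lam0
    for _ in range(max_iter):
        score = n / lam - s
        hess = -n / lam**2
        step = score / hess
        lam -= step
        if abs(step) < tol:
            break
    return lam
```

For the exponential model the MLE has the closed form n / sum(x), which makes the iteration easy to check; for the clutter parameters in the GLRT no closed form exists, which is why the paper resorts to the same Newton-Raphson machinery.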

On the Security of Rijndael-like Structures against Differential and Linear Cryptanalysis (Rijndael 유사 구조의 차분 공격과 선형 공격에 대한 안전성에 관한 연구)

  • 박상우;성수학;지성택;윤이중;임종인
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.12 no.5
    • /
    • pp.3-14
    • /
    • 2002
  • The Rijndael-like structure is a special case of the SPN structure. The linear transformation of a Rijndael-like structure consists of linear transformations of two types: one is the byte permutation $\pi$ and the other is the linear transformation $\theta$ = ($\theta_1, \theta_2, \theta_3, \theta_4$), where each $\theta_i$ operates separately on one of the four rows of a state. The block cipher Rijndael is an example of a Rijndael-like structure. In this paper, we present a new method for upper-bounding the maximum differential probability and the maximum linear hull probability of Rijndael-like structures.

Novel approach to predicting the release probability when applying the MARSSIM statistical test to a survey unit with a specific residual radioactivity distribution based on Monte Carlo simulation

  • Chun, Ga Hyun;Cheong, Jae Hak
    • Nuclear Engineering and Technology
    • /
    • v.54 no.5
    • /
    • pp.1606-1615
    • /
    • 2022
  • To investigate whether the MARSSIM nonparametric test has sufficient statistical power when a site has a specific contamination distribution, before a final status survey (FSS) is conducted, a novel approach was proposed to predict the release probability of the site. Five distributions were assumed: lognormal, normal, maximum extreme value, minimum extreme value, and uniform. Hypothetical radioactivity populations were generated for each distribution, and Sign tests were performed to predict the release probabilities after extracting samples using Monte Carlo simulations. The designed Type I error (0.01, 0.05, and 0.1) was always satisfied for all distributions, while the designed Type II error (0.01, 0.05, and 0.1) was not always met for the uniform, maximum extreme value, and lognormal distributions. Detailed analyses of the lognormal and normal distributions, which are often found for contaminants in actual environmental or soil samples, showed that greater statistical power was obtained from survey units with a normal distribution than with a lognormal distribution. This study is expected to contribute to achieving the designed decision errors by predicting, once the contamination distribution of a survey unit is identified, whether the survey unit will pass the statistical test before the FSS is undertaken according to MARSSIM.
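The Monte Carlo procedure described above, generating hypothetical populations from an assumed distribution and applying the Sign test repeatedly, can be sketched as follows. The sample size, DCGL value, and function names are illustrative assumptions of this sketch, not MARSSIM's prescribed values:

```python
import math
import numpy as np

def sign_test_passes(sample, dcgl, alpha=0.05):
    """One-sided Sign test: the survey unit passes when the number of
    measurements below the DCGL (the statistic S+) reaches the critical
    value, i.e. the smallest k with P(Binomial(n, 0.5) >= k) <= alpha."""
    n = len(sample)
    s_plus = int((sample < dcgl).sum())
    tail = 0.0
    for k in range(n, -1, -1):
        tail += math.comb(n, k) * 0.5**n
        if tail > alpha:
            crit = k + 1
            break
    return s_plus >= crit

def release_probability(mean, sigma, dcgl, n=20, n_sim=2000, seed=0):
    """Monte Carlo estimate of the probability that a survey unit with
    lognormally distributed residual radioactivity passes the Sign test."""
    rng = np.random.default_rng(seed)
    passes = sum(
        sign_test_passes(rng.lognormal(mean, sigma, n), dcgl)
        for _ in range(n_sim)
    )
    return passes / n_sim
```

Swapping `rng.lognormal` for `rng.normal`, `rng.gumbel`, or `rng.uniform` reproduces the other assumed distributions; comparing the resulting pass rates against the designed decision errors is the study's core idea.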

Evaluation of a Solar Flare Forecast Model with Cost/Loss Ratio

  • Park, Jongyeob;Moon, Yong-Jae;Lee, Kangjin;Lee, Jaejin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.40 no.1
    • /
    • pp.84.2-84.2
    • /
    • 2015
  • There are probabilistic forecast models for solar flare occurrence, which can be evaluated by various skill scores (e.g., accuracy, critical success index, Heidke skill score, true skill score). Since these skill scores assume that the two types of forecast errors (i.e., false alarms and misses) are equally important or constant, which does not reflect the different situations of users, they may be unrealistic. In this study, we evaluate a probabilistic flare forecast model (Lee et al. 2012) which uses sunspot groups and their area changes as a proxy for flux emergence. We calculate daily solar flare probabilities from 1996 to 2014 using this model. The overall frequencies are 61.08% (C), 22.83% (M), and 5.44% (X). The maximum probabilities computed by the model are 99.9% (C), 89.39% (M), and 25.45% (X), respectively. The skill scores are computed from contingency tables as a function of forecast probability, yielding the probability threshold that maximizes each skill score for each flare class. For the widely used critical success index, the probability threshold values for the contingency tables are 25% (C), 20% (M), and 4% (X). We use a value score with the cost/loss ratio, the relative importance of the two types of forecast errors. We find that the forecast model has an effective range of cost/loss ratio for each flare class: 0.15-0.83 (C), 0.11-0.51 (M), and 0.04-0.17 (X), also depending on the lifetime of the satellite. We expect that this study will provide a guideline for determining the probability threshold for space weather forecasting.
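The value score with a cost/loss ratio can be illustrated with the standard relative-value formulation: the expense of acting on the forecast is compared with the expenses of climatology (always or never protecting) and of a perfect forecast. The function name and inputs are assumptions of this sketch, not the study's code:

```python
def relative_value(hits, misses, false_alarms, correct_negs, cost, loss):
    """Relative (economic) value of a categorical forecast:
    V = (E_climate - E_forecast) / (E_climate - E_perfect),
    where each E is the total expense of protecting at cost C on a
    warning and paying loss L on an unprotected event."""
    n = hits + misses + false_alarms + correct_negs
    e_forecast = cost * (hits + false_alarms) + loss * misses
    e_climate = min(cost * n, loss * (hits + misses))  # always vs never protect
    e_perfect = cost * (hits + misses)
    return (e_climate - e_forecast) / (e_climate - e_perfect)
```

V equals 1 for a perfect forecast and drops to 0 when the forecast is no cheaper than climatology; sweeping the cost/loss ratio while V stays positive traces out an effective range like the 0.15-0.83 interval quoted above for C-class flares.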


Cooperative Node Selection for the Cognitive Radio Networks (인지무선 네트워크를 위한 협력 노드 선택 기법)

  • Gao, Xiang;Lee, Juhyeon;Park, Hyung-Kun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.2
    • /
    • pp.287-293
    • /
    • 2013
  • Cognitive radio has recently been proposed to dynamically access unused spectrum. CR users can share the same frequency band with the primary user without mutual interference. Usually, each CR user must determine spectrum availability by itself, relying only on its local observations. Cooperative sensing can mitigate the effects of an uncertain communication environment, improving the detection probability in a heavily shadowed environment. Soft detection is a primary-user detection method for cooperative cognitive radio networks. In this research, we improve the system detection probability with an optimal cooperative node selection algorithm. The new algorithm finds the optimal number of cooperative sensing nodes for cooperative soft detection using the maximum ratio combining (MRC) method. Analysis shows that the proposed algorithm selects the optimal nodes for cooperative sensing according to the system requirements and improves the system detection probability.
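The MRC-based fusion and node selection described above might be sketched as follows. This is a minimal illustration: SNR-proportional weights and simple top-k selection are assumptions standing in for the paper's algorithm, which additionally optimizes the number of nodes:

```python
import numpy as np

def select_nodes(node_snrs, k):
    """Keep the k highest-SNR nodes for cooperative sensing."""
    return np.argsort(node_snrs)[::-1][:k]

def mrc_detect(node_statistics, node_snrs, threshold):
    """Maximum ratio combining for cooperative soft detection: each
    node's soft energy statistic is weighted in proportion to its SNR,
    so reliable nodes dominate the fused decision statistic."""
    w = node_snrs / np.linalg.norm(node_snrs)   # MRC weights
    fused = np.dot(w, node_statistics)
    return fused, fused > threshold
```

Heavily shadowed nodes contribute little under these weights, which is why MRC fusion degrades gracefully where a single local detector would fail.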

Use of Information Technologies to Explore Correlations between Climatic Factors and Spontaneous Intracerebral Hemorrhage in Different Age Groups

  • Ting, Hsien-Wei;Chan, Chien-Lung;Pan, Ren-Hao;Lai, Robert K.;Chien, Ting-Ying
    • Journal of Computing Science and Engineering
    • /
    • v.11 no.4
    • /
    • pp.142-151
    • /
    • 2017
  • Spontaneous intracerebral hemorrhage (sICH) has a high mortality rate. Research has demonstrated that sICH occurrence is related to weather conditions; therefore, this study used the decision tree method to explore the impact of climatic risk factors on sICH at different ages. The Taiwan National Health Insurance Research Database (NHIRD) and other open-access data were used in this study. The inclusion criterion was a first-attack sICH. The decision tree algorithm and random forest were implemented in the R programming language. We defined a high risk of sICH as more than the average number of cases daily; the younger, middle-aged, and older groups had 0.77, 2.26, and 2.60 cases per day, respectively. In total, 22,684 sICH cases were included in this study; 3,102 patients were younger (<44 years, younger group), 9,089 were middle-aged (45-64 years, middle group), and 10,457 were older (>65 years, older group). The risk of sICH in the younger group was not correlated with temperature, wind speed, or humidity. The middle group had two decision nodes: a higher risk if the maximum temperature was >19°C (probability = 63.7%), and if the maximum temperature was <19°C together with a wind speed <2.788 m/s (probability = 60.9%). The older group had a higher risk if the average temperature was >23.933°C (probability = 60.7%). This study demonstrated that sICH incidence in the younger patients was not significantly correlated with weather factors, that in the middle-aged patients it was highly correlated with the apparent temperature, and that in the older patients it was highly correlated with the mean ambient temperature. "Warm" cold ambient temperatures resulted in a higher risk of sICH, especially in the older patients.
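The two decision nodes reported for the middle-aged group can be re-encoded as simple threshold rules. This is a hypothetical re-encoding of the numbers quoted in the abstract, not the fitted R tree itself, and the function name is this sketch's own:

```python
def middle_group_high_risk(t_max_c, wind_ms):
    """Threshold rules from the reported middle-aged decision nodes.
    Returns (high_risk, leaf_probability); the probability is None for
    the branch whose leaf value the abstract does not report."""
    if t_max_c > 19.0:            # node 1: warm days
        return True, 0.637
    if wind_ms < 2.788:           # node 2: cool but calm days
        return True, 0.609
    return False, None
```

Encoding the tree this way makes the reported interaction explicit: on cool days, low wind speed is what pushes the middle-aged group back into the high-risk region.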

A probabilistic information retrieval model by document ranking using term dependencies (용어간 종속성을 이용한 문서 순위 매기기에 의한 확률적 정보 검색)

  • You, Hyun-Jo;Lee, Jung-Jin
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.5
    • /
    • pp.763-782
    • /
    • 2019
  • This paper proposes a probabilistic document ranking model incorporating term dependencies. Document ranking is a fundamental information retrieval task: sorting the documents in a collection according to their relevance to the user query (Qin et al., Information Retrieval Journal, 13, 346-374, 2010). A probabilistic model computes the conditional probability of the relevance of each document given a query. Most widely used models assume term independence because computing the joint probabilities of multiple terms is challenging, yet words in natural language texts are obviously highly correlated. In this paper, we assume a multinomial distribution model to calculate the relevance probability of a document while considering the dependency structure of words, and we propose an information retrieval model that ranks documents by estimating this probability with the maximum entropy method. Ranking simulation experiments under various multinomial settings show better retrieval results than a model that assumes word independence. Document ranking experiments on the real-world LETOR OHSUMED dataset also show better retrieval results.
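For contrast with the dependency model, the term-independence baseline the paper improves on can be sketched as multinomial log-likelihood ranking. This is a minimal illustration; the floor value for unseen terms and the dictionary data layout are assumptions of this sketch:

```python
import numpy as np

def rank_documents(query_terms, doc_term_probs):
    """Rank documents by the multinomial log-likelihood of the query
    under each document's term distribution, assuming independent
    terms (the baseline; the paper's model replaces this with a
    maximum entropy estimate that respects term dependencies)."""
    scores = []
    for probs in doc_term_probs:
        # log P(query | doc) = sum of per-term log-probabilities
        score = sum(np.log(probs.get(t, 1e-12)) for t in query_terms)
        scores.append(score)
    return np.argsort(scores)[::-1]   # best-scoring document first
```

Under independence the query probability factorizes term by term; the paper's contribution is precisely to drop that factorization and estimate the joint probability with maximum entropy instead.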