• Title/Summary/Keyword: multi-Gaussian approach

Segmentation of Color Image using the Deterministic Annealing EM Algorithm (결정적 어닐링 EM 알고리즘을 이용한 칼라 영상의 분할)

  • Cho, Wan-Hyun;Park, Jong-Hyun;Park, Soon-Young
    • Journal of KIISE:Databases, v.28 no.3, pp.324-333, 2001
  • In this paper we present a novel color image segmentation algorithm based on a Gaussian Mixture Model (GMM). We introduce a Deterministic Annealing Expectation Maximization (DAEM) algorithm, developed using the principle of maximum entropy, to overcome the local-maxima problem associated with the standard EM algorithm. In our approach, the GMM represents the multi-colored objects statistically, and its parameters are estimated by the DAEM algorithm. We also develop an automatic method for determining the number of components in the Gaussian mixture model. Segmentation is based on the maximum posterior probability computed from the GMM. The experimental results show that the proposed DAEM estimates the parameters more accurately than the standard EM and that the method for determining the number of mixture components is very efficient. When tested on two natural images, the proposed algorithm segments the image fields much better than the traditional algorithm.

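The core of the DAEM idea is an E-step tempered by an inverse temperature β that is annealed toward 1, which smooths the posterior responsibilities early on so the iteration can escape poor local maxima. A minimal NumPy/SciPy sketch under that reading of the abstract (the annealing schedule, initialization, and fixed component count are illustrative assumptions; the paper's automatic selection of the component count is omitted):

```python
import numpy as np
from scipy.stats import multivariate_normal

def daem_gmm(X, K, betas=(0.1, 0.3, 0.6, 1.0), iters_per_beta=20, seed=0):
    """Deterministic-annealing EM for a K-component Gaussian mixture.

    At inverse temperature beta < 1 the posteriors are flattened, which
    helps the iteration avoid poor local maxima of the likelihood.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                       # mixing weights
    mu = X[rng.choice(n, K, replace=False)]        # initial means
    cov = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * K)

    for beta in betas:                             # annealing schedule
        for _ in range(iters_per_beta):
            # E-step: tempered responsibilities r_nk ∝ (pi_k N(x_n|k))^beta
            logp = np.stack([
                np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], cov[k])
                for k in range(K)], axis=1)
            logp *= beta
            logp -= logp.max(axis=1, keepdims=True)
            r = np.exp(logp)
            r /= r.sum(axis=1, keepdims=True)

            # M-step: standard weighted updates
            nk = r.sum(axis=0)
            pi = nk / n
            mu = (r.T @ X) / nk[:, None]
            for k in range(K):
                dx = X - mu[k]
                cov[k] = (r[:, k, None] * dx).T @ dx / nk[k] + 1e-6 * np.eye(d)

    # Segmentation label = maximum-posterior component (beta = 1)
    logp = np.stack([np.log(pi[k]) + multivariate_normal.logpdf(X, mu[k], cov[k])
                     for k in range(K)], axis=1)
    return logp.argmax(axis=1), pi, mu, cov
```

For a color image, X would be the (H·W)×3 array of pixel values, and the returned labels reshape into the segmentation map.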

Landslide risk zoning using support vector machine algorithm

  • Vahed Ghiasi;Nur Irfah Mohd Pauzi;Shahab Karimi;Mahyar Yousefi
    • Geomechanics and Engineering, v.34 no.3, pp.267-284, 2023
  • Landslides are among the most dangerous natural phenomena and disasters, causing many human and financial losses in most parts of the world, especially in mountainous areas. Due to the climatic conditions and topography, people in the northern and western regions of Iran live with the risk of landslides. One measure that can effectively reduce the possible risks of landslides and support their crisis management is identifying areas prone to landslides through a multi-criteria modeling approach. This research models landslide-prone areas in the Oshvand watershed using a support vector machine algorithm. For this purpose, evidence maps of seven factors affecting landslide occurrence, namely slope, slope direction, height, distance from the fault, density of waterways, rainfall, and geology, were prepared. The maps were generated and weighted using a continuous fuzzification method with logistic functions, producing weights in the zero-to-one range. The weighted maps were then combined using the support vector machine algorithm. For training and testing the machine, 81 landslide points and 81 non-landslide points were used. Modeling was carried out with four kernels: linear, polynomial, Gaussian, and sigmoid. The efficiency of each model was compared using the area under the receiver operating characteristic curve, the root mean square error, and the correlation coefficient. Finally, the landslide potential model obtained with the Gaussian kernel was selected as the best one for landslide susceptibility in the Oshvand watershed.
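A minimal scikit-learn sketch of the four-kernel comparison described above, assuming the seven fuzzified factor maps have already been sampled at the 162 labeled points into a feature matrix (the synthetic data and variable names are placeholders, not the paper's data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.random((162, 7))          # 7 fuzzified factors in [0, 1], 162 points
y = np.repeat([1, 0], 81)         # 81 landslide and 81 non-landslide points

# Compare the four kernels by cross-validated area under the ROC curve
for kernel in ("linear", "poly", "rbf", "sigmoid"):   # "rbf" = Gaussian
    clf = SVC(kernel=kernel)
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{kernel:8s} AUC = {auc:.3f}")
```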

A new method to calculate a standard set of finite cloud dose correction factors for the level 3 probabilistic safety assessment of nuclear power plants

  • Gee Man Lee;Woo Sik Jung
    • Nuclear Engineering and Technology, v.56 no.4, pp.1225-1233, 2024
  • Level 3 probabilistic safety assessment (PSA) is performed to calculate radionuclide concentrations and exposure doses resulting from nuclear power plant accidents. To calculate the external exposure dose from the released radioactive materials, the radionuclide concentrations are multiplied by two factors, a dose coefficient and a finite cloud dose correction factor (FCDCF), and the resulting values are summed. This means that a standard set of FCDCFs is required for external exposure dose calculations. To calculate a standard set of FCDCFs, the effective distance from the release point to the receptor along the wind direction must be predetermined. The TID-24190 document published in 1968 provides equations for calculating FCDCFs and the resulting standard set of FCDCFs, but it gives no explanation of the effective distance required to calculate that standard set. In 2021, Sandia National Laboratories (SNL) proposed a method to predetermine finite effective distances depending on the atmospheric stability classes A to F, which results in six standard sets of FCDCFs. Meanwhile, independently of SNL, the authors of this paper found that an infinite effective distance assumption is a very reasonable approach for calculating a single standard set of FCDCFs, and they implemented it in the multi-unit radiological consequence calculator (MURCC) code, a post-processor for level 3 PSA codes. This paper calculates and compares short- and long-range FCDCFs obtained with the TID-24190, SNL, and MURCC methods, and explains the strength of the MURCC method over the SNL method: whereas the SNL method requires six standard sets of FCDCFs, one standard set suffices for the MURCC method. The use of the MURCC method and its resultant FCDCFs for level 3 PSA is strongly recommended.
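The external dose calculation the abstract refers to reduces to a sum over radionuclides of concentration × dose coefficient × FCDCF. A minimal sketch of that summation (the nuclides, concentrations, and coefficients below are illustrative placeholders, not values from the paper):

```python
# chi: time-integrated air concentration (Bq·s/m^3); dc: cloud-shine dose
# coefficient (Sv·m^3/(Bq·s)); fcdcf: finite cloud dose correction factor.
# All numbers are illustrative placeholders.
nuclides = {
    "Cs-137": {"chi": 1.0e6, "dc": 2.8e-14, "fcdcf": 0.45},
    "I-131":  {"chi": 5.0e5, "dc": 1.8e-14, "fcdcf": 0.50},
    "Xe-133": {"chi": 2.0e7, "dc": 1.6e-15, "fcdcf": 0.40},
}

# External (cloud-shine) dose = sum_i chi_i * dc_i * FCDCF_i
dose_sv = sum(v["chi"] * v["dc"] * v["fcdcf"] for v in nuclides.values())
print(f"external cloud-shine dose = {dose_sv:.3e} Sv")
```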

The Effects of PRF and Slot Interval on the PPM-Based Ultra Wide-Band Systems (PPM-기반의 UWB 시스템에 대한 PRF와 슬롯 시간의 영향)

  • 김성준;임성빈
    • The Journal of Korean Institute of Communications and Information Sciences, v.28 no.12C, pp.1192-1199, 2003
  • In this paper, we investigate the effects of pulse repetition frequency (PRF) and slot interval on the throughput performance of ultra wide-band (UWB) wireless communication systems in multi-path channels and, based on these observations, propose a data throughput control scheme that uses the PRF and slot interval to maximize effective throughput. Owing to its many desirable features, the UWB system has recently drawn much attention, especially for short-range, high-speed data transmission. The UWB system has two parameters that determine its data throughput: the pulse repetition frequency and the slot interval. In a multi-path channel with additive white Gaussian noise, the UWB system suffers from inter-pulse interference (IPI) and noise, which degrade system performance; the system can vary the two parameters to maintain or improve it. We demonstrate the effects of the two parameters on the data throughput of the UWB system in various multi-path indoor channels through computer simulation, and show that a variable data rate approach designed on the basis of these observations is superior to a fixed data rate approach in terms of effective throughput.
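As a toy illustration of the trade-off described above: raising the PRF or shrinking the slot interval raises the raw PPM bit rate, but once the channel delay spread exceeds the idle time left in each pulse frame, inter-pulse interference drives the error rate up and effective throughput down. The interference and BER models below are deliberately crude placeholders, not the paper's channel model:

```python
import numpy as np

def effective_throughput(prf_hz, slot_s, delay_spread_s, m_slots=2, snr_db=10.0):
    """Toy model of M-ary PPM effective throughput in bit/s.

    Raw rate grows with PRF; an ad-hoc IPI penalty kicks in when the
    multipath delay spread exceeds the idle time left in each frame.
    """
    raw_rate = prf_hz * np.log2(m_slots)           # bits per second
    frame_s = 1.0 / prf_hz
    guard_s = frame_s - m_slots * slot_s           # idle time per frame
    ipi = min(1.0, max(0.0, delay_spread_s - guard_s) / delay_spread_s)
    snr_eff = 10 ** (snr_db / 10) * (1.0 - ipi)    # crude IPI penalty on SNR
    ber = 0.5 * np.exp(-snr_eff / 2)               # coarse PPM BER bound
    return raw_rate * (1.0 - ber)

# Variable-rate idea: pick the (PRF, slot) pair maximizing throughput
candidates = [(p, s) for p in (5e6, 10e6, 20e6) for s in (1e-9, 2e-9, 4e-9)]
best = max(candidates, key=lambda c: effective_throughput(*c, 90e-9))
print("best (PRF, slot interval):", best)
```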

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems, v.19 no.2, pp.139-155, 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have tended to give high returns to investors, generally by making the best use of information technology, and for this reason many of them are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency ratings for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of that efficiency. This paper is therefore built on two ideas for classifying which companies are the more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs; recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed the SVM to classify the efficiency ratings of IT venture companies according to the results of the DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, the SVM is grounded in statistical learning theory and has shown good performance, especially in its generalization capacity in classification tasks, resulting in numerous applications in many areas of business. The SVM is basically an algorithm that finds the maximum margin hyperplane, that is, the hyperplane giving the maximum separation between classes; the support vectors are the data points closest to this hyperplane.
When the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied the SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed the SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel. For multi-class SVM, we adopted the one-against-one binary classification approach and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. We used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange, obtaining the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed multi-class ratings with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification approaches, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when the exact class is difficult to determine in the actual market. We therefore also report accuracy within one-class errors, for which the Weston and Watkins method achieved 85.7% on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification, whatever the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. Future research should address the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class problems.
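A minimal scikit-learn sketch of the classification stage, assuming the DEA efficiency ratings have already been computed and attached to the financial features as ordinal integer labels. Note that scikit-learn's SVC implements the one-against-one scheme internally; the Weston-Watkins and Crammer-Singer all-together formulations used in the paper are not built into scikit-learn and would need a custom solver. The data below are placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(154, 10))        # placeholder financial ratios
y = rng.integers(0, 4, size=154)      # placeholder DEA efficiency classes 0..3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Gaussian (RBF) kernel SVM; SVC trains one-against-one binary machines
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

exact = (pred == y_te).mean()
within1 = (np.abs(pred - y_te) <= 1).mean()   # "within one class" hit ratio
print(f"exact accuracy: {exact:.3f}, within-1-class accuracy: {within1:.3f}")
```

The within-1-class metric mirrors the abstract's point that, for ordinal efficiency ratings, a prediction one class off is still informative to investors.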

Local Uncertainty of Thickness of Consolidation Layer for Songdo New City (송도신도시 압밀층 두께의 국부적 불확실성 평가)

  • Kim, Dong-Hee;Ryu, Dong-Woo;Chae, Young-Ho;Lee, Woo-Jin
    • Journal of the Korean Geotechnical Society, v.28 no.1, pp.17-27, 2012
  • Since geologic data are often sampled at sparse locations, it is important not only to predict attribute values at unsampled locations but also to assess the uncertainty attached to those predictions. In this study, the local uncertainty of the predicted thickness of the consolidation layer was assessed using the indicator approach. A conditional cumulative distribution function (ccdf) was first modeled, and then E-type estimates and the conditional variance were computed for the spatial distribution of the thickness of the consolidation layer. These results can be used to estimate the spatial distribution of secondary compression and to assess its local uncertainty for Songdo New City.
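Once indicator kriging has produced ccdf values F(z) at a set of thresholds for a grid node, the E-type estimate and conditional variance follow by discrete integration over the ccdf. A minimal sketch under that reading (the thresholds, ccdf values, and crude tail-class handling are illustrative placeholders):

```python
import numpy as np

# Thickness thresholds (m) and the kriged ccdf F(z) = Prob{Z <= z} at one node
z = np.array([5.0, 10.0, 15.0, 20.0, 25.0])     # placeholder thresholds
F = np.array([0.05, 0.25, 0.60, 0.90, 1.00])    # placeholder ccdf values

# Probability mass in each class between successive thresholds
p = np.diff(np.concatenate(([0.0], F)))
# Class representative values: midpoints (lower tail handled crudely here)
z_mid = np.concatenate(([z[0] / 2], (z[:-1] + z[1:]) / 2))

e_type = np.sum(p * z_mid)                       # E-type estimate
cond_var = np.sum(p * (z_mid - e_type) ** 2)     # conditional variance
print(f"E-type = {e_type:.2f} m, conditional variance = {cond_var:.2f} m^2")
```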

Percentile-Based Analysis of Non-Gaussian Diffusion Parameters for Improved Glioma Grading

  • Karaman, M. Muge;Zhou, Christopher Y.;Zhang, Jiaxuan;Zhong, Zheng;Wang, Kezhou;Zhu, Wenzhen
    • Investigative Magnetic Resonance Imaging, v.26 no.2, pp.104-116, 2022
  • The purpose of this study is to systematically determine an optimal percentile cut-off in histogram analysis for calculating the mean parameters obtained from a non-Gaussian continuous-time random-walk (CTRW) diffusion model for differentiating individual glioma grades. This retrospective study included 90 patients with histopathologically proven gliomas (42 grade II, 19 grade III, and 29 grade IV). We performed diffusion-weighted imaging using 17 b-values (0-4000 s/mm²) at 3T, and analyzed the images with the CTRW model to produce an anomalous diffusion coefficient (Dm) along with temporal (𝛼) and spatial (𝛽) diffusion heterogeneity parameters. Given the tumor ROIs, we created a histogram of each parameter; computed the P-values (using a Student's t-test) for the statistical differences in the mean Dm, 𝛼, or 𝛽 for differentiating grade II vs. grade III gliomas and grade III vs. grade IV gliomas at different percentiles (1% to 100%); and selected the highest percentile with P < 0.05 as the optimal percentile. We used the mean parameter values calculated from the optimal percentile cut-offs to perform a receiver operating characteristic (ROC) analysis based on individual parameters or their combinations. We compared the results with those obtained by averaging data over the entire region of interest (i.e., the 100th percentile). We found the optimal percentiles for Dm, 𝛼, and 𝛽 to be 68%, 75%, and 100% for differentiating grade II vs. III and 58%, 19%, and 100% for differentiating grade III vs. IV gliomas, respectively. The optimal percentile cut-offs outperformed the entire-ROI-based analysis in sensitivity (0.761 vs. 0.690), specificity (0.578 vs. 0.526), accuracy (0.704 vs. 0.639), and AUC (0.671 vs. 0.599) for grade II vs. III differentiation and in sensitivity (0.789 vs. 0.578) and AUC (0.637 vs. 0.620) for grade III vs. IV differentiation, respectively. Percentile-based histogram analysis, coupled with the multi-parametric approach enabled by the CTRW diffusion model using high b-values, can improve glioma grading.
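A minimal sketch of the percentile search described above, assuming per-patient voxel values of one CTRW parameter (e.g., Dm) inside each tumor ROI; the synthetic arrays and distribution parameters stand in for real data:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Placeholder voxel values of one CTRW parameter per patient ROI
grade2 = [rng.gamma(2.0, 0.5, 400) for _ in range(42)]
grade3 = [rng.gamma(2.4, 0.5, 400) for _ in range(19)]

def mean_below_percentile(voxels, p):
    """Mean of ROI values at or below the p-th percentile cut-off."""
    cut = np.percentile(voxels, p)
    return voxels[voxels <= cut].mean()

optimal = None
for p in range(1, 101):                      # candidate cut-offs, 1% .. 100%
    a = [mean_below_percentile(v, p) for v in grade2]
    b = [mean_below_percentile(v, p) for v in grade3]
    if ttest_ind(a, b).pvalue < 0.05:
        optimal = p                          # keep the highest significant p
print("optimal percentile cut-off:", optimal)
```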

Structural Response Analysis of a Tension Leg Platform in Multi-directional Irregular Waves (다방향 불규칙파중의 인장계류식 해양구조물의 구조응답 해석)

  • Lee, Soo-Lyong;Suh, Kyu-Youl;Lee, Chang-Ho
    • Journal of Navigation and Port Research, v.31 no.8, pp.675-681, 2007
  • A numerical procedure is described for estimating the effects of multi-directional irregular waves on the structural responses of a tension leg platform (TLP). The numerical approach is based on a three-dimensional source distribution method for the hydrodynamic forces and a three-dimensional frame analysis method for the structural responses, in which the superstructure of the TLP is assumed to be flexible rather than rigid. Hydrodynamic and hydrostatic forces on the submerged surface of the TLP are calculated accurately, without the slender-body assumption. The hydrodynamic interactions among TLP members, such as columns and pontoons, and the structural damping are included in the structural analysis. Because both the wave inputs and the responses are stationary Gaussian random processes whose statistical properties in the amplitude domain are well known, the spectral description used in the frequency-domain spectral analysis of directional waves is sufficient to completely define the structural responses of the linear TLP system. The numerical results for the linear motion responses and tension variations in regular waves are compared with the experimental and numerical results of Yoshida et al. (1983), and the comparison confirms the validity of the proposed approach.
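For a linear system in stationary Gaussian directional seas, the response spectrum follows from the response amplitude operator (RAO) and the directional wave spectrum, S_r(ω) = ∫ |H(ω, θ)|² S(ω, θ) dθ, and the response variance is its integral over frequency. A minimal sketch with placeholder spectra and RAO (not the paper's source-distribution model):

```python
import numpy as np

w = np.linspace(0.2, 2.0, 200)                 # wave frequency (rad/s)
th = np.linspace(-np.pi / 2, np.pi / 2, 91)    # wave direction (rad)

# Placeholder point spectrum and cos^2s-type directional spreading D(θ)
s_w = 0.78 / w**5 * np.exp(-3.1 / w**4)        # Pierson-Moskowitz-like shape
D = np.cos(th) ** 4
D /= np.trapz(D, th)                           # spreading integrates to 1

# Placeholder RAO |H(w,θ)|: 1-dof magnitude times a directional projection
H0 = 1.0 / np.sqrt((1.0 - (w / 1.1) ** 2) ** 2 + (0.1 * w) ** 2)
H = H0[:, None] * np.abs(np.cos(th))[None, :]

S_wt = s_w[:, None] * D[None, :]               # S(w,θ) = S(w) D(θ)
S_r = np.trapz(H**2 * S_wt, th, axis=1)        # response spectrum S_r(w)
sigma2 = np.trapz(S_r, w)                      # response variance
print(f"response standard deviation = {np.sqrt(sigma2):.3f}")
```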

Convergence performance comparison using combination of ML-SVM, PCA, VBM and GMM for detection of AD (알츠하이머 병의 검출을 위한 ML-SVM, PCA, VBM, GMM을 결합한 융합적 성능 비교)

  • Alam, Saurar;Kwon, Goo-Rak
    • Journal of the Korea Convergence Society, v.7 no.4, pp.1-7, 2016
  • Structural MRI (sMRI) is used to extract morphometric features after segmentation of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF), for several univariate and multivariate methods. A new approach is applied for the diagnosis of very mild to mild AD. We propose a method for classifying Alzheimer's disease patients against normal controls by combining morphometric features and Gaussian Mixture Model parameters with the MMSE (Mini Mental State Examination) score. The combined features are fed into a multi-kernel SVM classifier after the curse of dimensionality is mitigated using principal component analysis. The experimental results show that the proposed diagnosis method yields up to 96% classification accuracy with the multi-kernel SVM, along with sensitivity and specificity above 90%.
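scikit-learn has no built-in multi-kernel SVM, but a fixed-weight combination of kernels can be passed as a precomputed Gram matrix, which is one common reading of a multi-kernel pipeline after PCA; the kernel weight, feature sizes, and data below are placeholders, and the paper's own kernel-combination scheme may differ:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 500))   # placeholder morphometric + GMM + MMSE features
y = rng.integers(0, 2, size=120)  # 0 = normal control, 1 = AD (placeholder)

# Reduce dimensionality first, as in the abstract
Z = PCA(n_components=20).fit_transform(X)
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=3)

w = 0.6                           # placeholder kernel weight
def multi_kernel(A, B):
    """Fixed-weight sum of an RBF (Gaussian) and a linear kernel."""
    return w * rbf_kernel(A, B, gamma=0.05) + (1.0 - w) * linear_kernel(A, B)

clf = SVC(kernel="precomputed").fit(multi_kernel(Z_tr, Z_tr), y_tr)
pred = clf.predict(multi_kernel(Z_te, Z_tr))
print("accuracy:", (pred == y_te).mean())
```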

Integration of Kriging Algorithm and Remote Sensing Data and Uncertainty Analysis for Environmental Thematic Mapping: A Case Study of Sediment Grain Size Mapping (지표환경 주제도 작성을 위한 크리깅 기법과 원격탐사 자료의 통합 및 불확실성 분석 -입도분포지도 사례 연구-)

  • Park, No-Wook;Jang, Dong-Ho
    • Journal of the Korean Geographical Society, v.44 no.3, pp.395-409, 2009
  • The objective of this paper is to illustrate, through a case study of sediment grain size mapping with remote sensing data, that kriging can provide an effective framework both for integrating remote sensing data and for uncertainty modeling. Landsat TM data, which show a reasonable relationship with grain size values, are used as secondary information for sediment grain size mapping near the eastern part of Anmyeondo and Cheonsuman Bay. The case study results showed that the uncertainty attached to prediction at unsampled locations, analyzed via the conditional variance of the conditional cumulative distribution functions, was significantly reduced by integrating the remote sensing data. The kriging-based approach presented in this paper is expected to provide an efficient integration and analysis methodology not only for sediment grain size mapping but for any environmental thematic mapping that uses secondary information.
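One standard way to fold a secondary image into kriging is regression kriging: regress the primary variable on the collocated remote sensing value, ordinary-krige the residuals, and add the two parts back together. The sketch below follows that generic recipe under placeholder data and an assumed exponential variogram; the paper's own estimator may differ (e.g., a variant such as simple kriging with local means or kriging with an external drift):

```python
import numpy as np

rng = np.random.default_rng(7)
xy = rng.random((60, 2)) * 1000.0     # sample coordinates (m), placeholder
tm = rng.random(60)                   # collocated Landsat TM value, placeholder
grain = 2.0 + 3.0 * tm + rng.normal(0, 0.3, 60)   # grain size, placeholder

# 1) Trend from the secondary variable (simple linear regression)
b1, b0 = np.polyfit(tm, grain, 1)
resid = grain - (b0 + b1 * tm)

# 2) Ordinary kriging of the residuals with an assumed exponential variogram
def gamma(h, sill=0.1, rng_m=300.0):
    return sill * (1.0 - np.exp(-3.0 * h / rng_m))

def rk_predict(x0, tm0):
    """Regression-kriging estimate at location x0 with TM value tm0."""
    n = len(xy)
    A = np.ones((n + 1, n + 1))                   # OK system, variogram form
    A[:n, :n] = gamma(np.linalg.norm(xy[:, None] - xy[None, :], axis=2))
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(xy - x0, axis=1)), 1.0)
    lam = np.linalg.solve(A, b)[:n]               # kriging weights
    # 3) Estimate = regression trend + kriged residual
    return b0 + b1 * tm0 + lam @ resid

print(rk_predict(np.array([500.0, 500.0]), tm0=0.4))
```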