• Title/Summary/Keyword: Expectation-Maximization Algorithm

Improvement of Analytic Reconstruction Algorithms Using a Sinogram Interpolation Method for Sparse-angular Sampling with a Photon-counting Detector

  • Kim, Dohyeon;Jo, Byungdu;Park, Su-Jin;Kim, Hyemi;Kim, Hee-Joung
    • Progress in Medical Physics, v.27 no.3, pp.105-110, 2016
  • Sparse angular sampling has been studied recently owing to its potential to decrease the radiation exposure from computed tomography (CT). In this study, we investigated an analytic reconstruction algorithm for sparse angular sampling using a sinogram interpolation method to improve image quality and computation speed. A prototype spectral CT system with a 64-pixel Cadmium Zinc Telluride (CZT)-based photon-counting detector was used. The source-to-detector distance and the source-to-center-of-rotation distance were 1,200 and 1,015 mm, respectively. Two energy bins (23~33 keV and 34~44 keV) were set to obtain two reconstructed images. We used a PMMA phantom with a height of 50.0 mm and a radius of 17.5 mm, containing iodine, gadolinium, calcification, and lipid. The Feldkamp-Davis-Kress (FDK) algorithm with the sinogram interpolation method and the Maximum Likelihood Expectation Maximization (MLEM) algorithm were used to reconstruct the images, and we evaluated the signal-to-noise ratio (SNR) of each material. With the sinogram interpolation method, the SNRs of iodine, calcification, and lipid increased by 167.03%, 157.93%, and 41.77%, respectively, in the 23~33 keV energy bin, and by 107.01%, 13.58%, and 27.39%, respectively, in the 34~44 keV energy bin. Although the FDK algorithm with sinogram interpolation did not produce better results than the MLEM algorithm, it yielded comparable image quality. We believe the sinogram interpolation method can be applied in various reconstruction studies using analytic reconstruction algorithms. Therefore, the sinogram interpolation method can improve image quality in sparse-angular sampling and be applied to CT applications.
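The angular interpolation step can be sketched as follows. This is a minimal, hypothetical example (the toy sinogram and all dimensions are placeholders, not the paper's geometry) that fills in missing projection angles by per-bin linear interpolation before handing the densified sinogram to an analytic algorithm such as FDK:

```python
import numpy as np

# Toy sinogram: 12 sparse projection angles x 64 detector bins.
# Values vary smoothly with angle, as projections of a fixed object do.
n_sparse, n_dense, n_bins = 12, 48, 64
angles_sparse = np.linspace(0.0, np.pi, n_sparse, endpoint=False)
angles_dense = np.linspace(0.0, np.pi, n_dense, endpoint=False)
sino_sparse = np.cos(angles_sparse)[:, None] * np.linspace(1.0, 2.0, n_bins)

# Interpolate each detector bin independently along the angular axis.
sino_dense = np.empty((n_dense, n_bins))
for b in range(n_bins):
    sino_dense[:, b] = np.interp(angles_dense, angles_sparse, sino_sparse[:, b])
# sino_dense would then replace the sparse sinogram as input to FDK/FBP.
```

At the original sparse angles the interpolated sinogram reproduces the measured values exactly; only the in-between angles are synthesized.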

Statistical Methods for Multivariate Missing Data in Health Survey Research (보건조사연구에서 다변량결측치가 내포된 자료를 효율적으로 분석하기 위한 통계학적 방법)

  • Kim, Dong-Kee;Park, Eun-Cheol;Sohn, Myong-Sei;Kim, Han-Joong;Park, Hyung-Uk;Ahn, Chae-Hyung;Lim, Jong-Gun;Song, Ki-Jun
    • Journal of Preventive Medicine and Public Health, v.31 no.4 s.63, pp.875-884, 1998
  • Missing observations are common in medical and health survey research, and several statistical methods to handle the missing data problem have been proposed. The EM (Expectation-Maximization) algorithm is one way of efficiently handling missing data based on sufficient statistics. In this paper, we developed statistical models and methods for survey data with multivariate missing observations, adopting the EM algorithm to handle them. We assume that the multivariate observations follow a multivariate normal distribution, where the mean vector and the covariance matrix are of primary interest. We applied the proposed statistical method to data from a health survey; the data set came from a physician survey on the Resource-Based Relative Value Scale (RBRVS). In addition to the EM algorithm, we applied complete-case analysis, which uses only completely observed cases, and available-case analysis, which utilizes all available information. Residual and normal probability plots were examined to assess the normality assumption. We found that the residual sum of squares from the EM algorithm was smaller than those of the complete-case and available-case analyses.
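The E- and M-steps for a multivariate normal with missing entries can be sketched as below. This is a minimal illustration under a missing-at-random assumption, not the authors' code; the helper name `em_mvn` and all variable names are hypothetical:

```python
import numpy as np

def em_mvn(X, n_iter=100):
    """EM estimates of mean/covariance for rows of X with NaN entries,
    assuming a multivariate normal model and data missing at random."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    Sigma = np.diag(np.nanvar(X, axis=0))
    for _ in range(n_iter):
        Xf = X.copy()
        C = np.zeros((p, p))           # accumulated conditional covariances
        for i in range(n):
            m = np.isnan(X[i])         # missing coordinates of row i
            if not m.any():
                continue
            o = ~m
            Soo_inv = np.linalg.inv(Sigma[np.ix_(o, o)])
            B = Sigma[np.ix_(m, o)] @ Soo_inv
            # E-step: conditional mean of the missing block ...
            Xf[i, m] = mu[m] + B @ (X[i, o] - mu[o])
            # ... plus the conditional covariance of the missing block.
            C[np.ix_(m, m)] += Sigma[np.ix_(m, m)] - B @ Sigma[np.ix_(o, m)]
        # M-step: update mean and covariance from the completed data.
        mu = Xf.mean(axis=0)
        D = Xf - mu
        Sigma = (D.T @ D + C) / n
    return mu, Sigma
```

On simulated data with a modest fraction of entries masked at random, the recovered mean and covariance should land close to the generating values; unlike simple mean imputation, the `C` term keeps the covariance estimate from being biased toward zero.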

Blind Channel Estimation through Clustering in Backscatter Communication Systems (후방산란 통신시스템에서 군집화를 통한 블라인드 채널 추정)

  • Kim, Soo-Hyun;Lee, Donggu;Sun, Young-Ghyu;Sim, Issac;Hwang, Yu-Min;Shin, Yoan;Kim, Dong-In;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.20 no.2, pp.81-86, 2020
  • Ambient backscatter communication has the drawback that transmission power is limited, because data are transmitted using the ambient RF signal. To improve transmission efficiency between the transmitter and receiver, a channel estimator capable of estimating the channel state at the receiver is needed. In this paper, we consider the K-means algorithm to improve the performance of an EM-based channel estimator. The simulation uses the mean squared error (MSE) as a performance metric to verify the proposed estimator. Initialization through K-means shows improved performance compared to channel estimation using the plain EM algorithm.

Extraction of Corresponding Points Using EMSAC Algorithm (EMSAC 알고리듬을 이용한 대응점 추출에 관한 연구)

  • Ye, Soo-Young;Jeon, Ah-Young;Jeon, Gye-Rok;Nam, Ki-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP, v.44 no.4 s.316, pp.44-50, 2007
  • In this paper, we propose an algorithm for extracting corresponding points from images. The proposed EMSAC algorithm is based on the RANSAC and EM algorithms. In the RANSAC procedure, N corresponding points are randomly selected from the observed corresponding points to estimate the homography matrix H. This procedure is repeated until the optimum H is estimated or the maximum number of iterations is reached; it is therefore time-consuming and sometimes fails to converge. To overcome these drawbacks, the EM algorithm is used to select the N corresponding points: it extracts the corresponding points with the highest probability density to estimate the optimum H. Experiments demonstrate that the proposed method, by combining RANSAC with EM, extracts corresponding points accurately and quickly.

EM Algorithm with Initialization Based on Incremental k-means for GMM and Its Application to Speaker Identification (GMM을 위한 점진적 k-means 알고리즘에 의해 초기값을 갖는 EM알고리즘과 화자식별에의 적용)

  • Seo Changwoo;Hahn Hernsoo;Lee Kiyong;Lee Younjeong
    • The Journal of the Acoustical Society of Korea, v.24 no.3, pp.141-149, 2005
  • In general, the Gaussian mixture model (GMM) is used to estimate the speaker model from speech for speaker identification. The parameters of the GMM are obtained with the Expectation-Maximization (EM) algorithm for maximum likelihood (ML) estimation. However, the EM algorithm has the drawbacks that it depends heavily on initialization and requires the number of mixtures to be known. In this paper, to solve these problems, we propose an EM algorithm for GMM with initialization based on incremental k-means. The proposed method dynamically increases the number of mixtures one by one until it finds the optimum number. Whenever a mixture is added, we calculate the mutual relationship between it and each of the other mixtures; based on these mutual relationships, we can estimate the optimal number of mixtures that are statistically independent. The effectiveness of the proposed method is shown by experiments on artificial data. We also performed speaker identification with the proposed method and compared it with other approaches.
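The interplay of a k-means-style initialization and the EM updates can be illustrated with a one-dimensional two-component GMM. This is a minimal numpy sketch on synthetic data; it uses a plain 2-means start rather than the paper's incremental scheme, which grows the number of mixtures one at a time:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

# Crude 2-means initialization (a stand-in for the paper's incremental
# k-means): alternate assignment and mean updates on 1-D data.
mu = np.array([x.min(), x.max()])
for _ in range(10):
    lab = np.abs(x[:, None] - mu).argmin(axis=1)
    mu = np.array([x[lab == k].mean() for k in range(2)])
w = np.array([np.mean(lab == k) for k in range(2)])
var = np.array([x[lab == k].var() for k in range(2)])

# EM refinement of the GMM parameters from the k-means starting point.
for _ in range(50):
    # E-step: responsibility of each component for each sample.
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted weights, means, and variances.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

Starting EM from the k-means solution, rather than from random parameters, is what protects the likelihood ascent from poor local maxima on well-separated data like this.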

The inference and estimation for latent discrete outcomes with a small sample

  • Choi, Hyung;Chung, Hwan
    • Communications for Statistical Applications and Methods, v.23 no.2, pp.131-146, 2016
  • In research on behavioral studies, significant attention has been paid to the stage-sequential process for longitudinal data. Latent class profile analysis (LCPA) is a useful method to study sequential patterns of behavioral development through a two-step identification process: identifying a small number of latent classes at each measurement occasion, and then two or more homogeneous subgroups in which individuals exhibit a similar sequence of latent class membership over time. Maximum likelihood (ML) estimates for LCPA are easily obtained by the expectation-maximization (EM) algorithm, and Bayesian inference can be implemented via Markov chain Monte Carlo (MCMC). However, unusual properties in the likelihood of LCPA can cause difficulties in ML and Bayesian inference, as well as in estimation with small samples. This article describes and addresses erratic problems that arise with conventional ML and Bayesian estimates for LCPA in small samples. We argue that these problems can be alleviated with a small amount of prior input. This study evaluates the performance of likelihood- and MCMC-based estimates with the proposed prior in drawing inference over repeated sampling. Our simulation shows that estimates from the proposed methods perform better than those from the conventional ML and Bayesian methods.

Reliability Modeling and Analysis for a Unit with Multiple Causes of Failure (다수의 고장 원인을 갖는 기기의 신뢰성 모형화 및 분석)

  • Baek, Sang-Yeop;Lim, Tae-Jin;Lie, Chang-Hoon
    • Journal of Korean Institute of Industrial Engineers, v.21 no.4, pp.609-628, 1995
  • This paper presents a reliability model and a data-analytic procedure for a repairable unit subject to failures from multiple non-identifiable causes. We regard each failure cause as a state and assume the life distribution for each cause to be exponential. We represent the dependency among the causes by a Markov switching model (MSM) and estimate the transition probabilities and failure rates by the maximum likelihood (ML) method. The failure data are incomplete because the causes of failures are masked, so we propose a specific version of the EM (expectation-maximization) algorithm for finding the maximum likelihood estimator (MLE) in this situation. We also develop statistical procedures for determining the number of significant states and for testing independence between state transitions. Our model requires only the successive failure times of a unit to perform the statistical analysis. It works well even when the causes of failures are fully masked, which overcomes the major deficiency of competing risk models, and it does not require the stationarity or independence assumptions that are essential in mixture models. The stationary probabilities of the states can be easily calculated from the estimated transition probabilities, so our model covers mixture models in general. Simulation results show that the estimates are consistent, with accuracy increasing with the difference between failure rates and the frequency of transitions among the states.
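As a simplified illustration of EM under fully masked causes, one can drop the Markov dependence between states, in which case the observed lifetimes reduce to a plain mixture of exponentials (this is an illustrative simplification, not the authors' full MSM procedure; all data and rates below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)
# Simulated lifetimes from two masked causes with true rates 0.5 and 4.0.
t = np.concatenate([rng.exponential(1 / 0.5, 400), rng.exponential(1 / 4.0, 400)])

p = np.array([0.5, 0.5])      # mixing probabilities of the two causes
lam = np.array([0.2, 1.0])    # initial guesses for the failure rates
for _ in range(200):
    # E-step: posterior probability that each failure came from each cause.
    dens = p * lam * np.exp(-np.outer(t, lam))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update cause probabilities and exponential failure rates.
    nk = r.sum(axis=0)
    p = nk / len(t)
    lam = nk / (r * t[:, None]).sum(axis=0)
```

Even though no failure is ever labeled with its cause, the posterior responsibilities let EM separate the fast and slow failure modes from the lifetimes alone.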

Investigation of nuclear material using a compact modified uniformly redundant array gamma camera

  • Lee, Taewoong;Kwak, Sung-Woo;Lee, Wonho
    • Nuclear Engineering and Technology, v.50 no.6, pp.923-928, 2018
  • We developed a compact gamma camera based on a modified uniformly redundant array coded aperture to investigate the position of a UO2 pellet emitting characteristic X-rays (98.4 keV) and gamma-rays (185.7 keV). Experiments using an only-mask method and an antimask subtractive method were conducted, and the maximum-likelihood expectation maximization algorithm was used for image reconstruction. The images obtained via the antimask subtractive method were compared with those obtained using the only-mask method with regard to the signal-to-noise ratio; the reconstructed images of the antimask subtractive method were superior. The reconstructed images of the characteristic X-rays and the gamma-rays were combined with the image obtained using the optical camera, and the combined images showed the precise position of the UO2 pellet. According to the self-absorption ratios of the nuclear material and the minimum number of effective events for image reconstruction, we estimated the minimum detection time depending on the amount of nuclear material.
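The maximum-likelihood expectation maximization reconstruction used here iterates a simple multiplicative update. Below is a toy sketch in which a random positive matrix stands in for the camera's actual coded-aperture system model (all sizes and data are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_det, n_pix = 64, 25
A = rng.random((n_det, n_pix)) + 0.05   # toy system matrix (all positive)
x_true = rng.random(n_pix)
y = A @ x_true                          # noiseless measured counts

x = np.ones(n_pix)                      # flat nonnegative initial image
sens = A.T @ np.ones(n_det)             # sensitivity image, A^T 1
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update
```

The multiplicative form keeps the image nonnegative at every iteration, which is why MLEM is a natural fit for count data from coded-aperture detectors.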

A Short-Term Traffic Information Prediction Model Using Bayesian Network (베이지안 네트워크를 이용한 단기 교통정보 예측모델)

  • Yu, Young-Jung;Cho, Mi-Gyung
    • Journal of the Korea Institute of Information and Communication Engineering, v.13 no.4, pp.765-773, 2009
  • Currently, Telematics traffic information services have become diverse because real-time traffic information can be collected through Intelligent Transport Systems. In this paper, we propose and implement a short-term traffic information prediction model to guarantee high-quality traffic information in the near future. The short-term prediction model forecasts the traffic flow of each road segment, giving an average speed on each segment from 5 to 60 minutes ahead. We designed a Bayesian network for each segment with causal nodes that affect the future road situation, and estimated its joint probability density function under a Gaussian mixture model (GMM) assumption using the expectation-maximization (EM) algorithm with real-time traffic training data. To validate the precision of our prediction model, we conducted various experiments with real-time traffic data and computed the root mean square error (RMSE) between real and predicted speeds. As a result, our model gave average RMSE values of 4.5, 4.8, and 5.2 for predictions 10, 30, and 60 minutes ahead, respectively.
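The prediction step can be illustrated in the single-component special case: conditioning a joint Gaussian on the current observation gives the familiar regression formula (the paper conditions a full GMM fitted by EM in the analogous, component-weighted way; all numbers below are synthetic and the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy joint sample: current speed and the speed 10 minutes later.
cur = rng.normal(60.0, 10.0, 1000)
fut = 0.8 * cur + rng.normal(12.0, 4.0, 1000)
Z = np.column_stack([cur, fut])

# Fit a joint Gaussian (the one-component special case of the GMM
# the paper fits with EM) and condition on the current speed.
mu = Z.mean(axis=0)
S = np.cov(Z, rowvar=False)

def predict(cur_speed):
    # E[future | current] = mu_f + (S_fc / S_cc) * (current - mu_c)
    return mu[1] + S[1, 0] / S[0, 0] * (cur_speed - mu[0])
```

With a multi-component GMM, each component contributes the same conditional-mean formula, weighted by its posterior probability given the current observation.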

Collective Interaction Filtering Approach for Detection of Group in Diverse Crowded Scenes

  • Wong, Pei Voon;Mustapha, Norwati;Affendey, Lilly Suriani;Khalid, Fatimah
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.2, pp.912-928, 2019
  • Crowd behavior analysis plays a central role in finding safety hazards and forecasting crime, and is thus significant for future video surveillance systems. Recently, the growing demand for safety monitoring has shifted the focus of video surveillance studies from the behavior of individuals to group behavior. Group detection, the step preceding crowd behavior analysis, separates the individuals in a crowded scene into their respective groups by understanding their complex relations. Most existing studies on group detection are scene-specific; crowds with varying density, structure, and mutual occlusion are the main challenges for group detection in diverse crowded scenes. Therefore, we propose a group detection approach called Collective Interaction Filtering to discover people's motion interactions from trajectories. This approach deduces the interactions among people with the Expectation-Maximization algorithm. The Collective Interaction Filtering approach accurately identifies groups by clustering trajectories in crowds of varying density, structure, and mutual occlusion, and it also maintains grouping consistency between frames. Experiments on the CUHK Crowd Dataset demonstrate that the proposed approach outperforms previous methods, achieving state-of-the-art results.