• Title/Summary/Keyword: non-parametric techniques (비모수적 기법)


3T MR Spin Echo T1 Weighted Image at Optimization of Flip Angle (3T MR 스핀에코 T1강조영상에서 적정의 숙임각)

  • Bae, Sung-Jin;Lim, Chung-Hwang
    • Journal of radiological science and technology / v.32 no.2 / pp.177-182 / 2009
  • Purpose : This study presents the optimization of flip angle (FA) to obtain a higher contrast-to-noise ratio (CNR) and a lower specific absorption rate (SAR). Materials and Methods : T1-weighted images of the cerebrum were obtained from 50° to 130° FA at 10° intervals. Signal-to-noise ratios (SNRs) were calculated for white matter (WM), gray matter (GM), and background noise. The proper FA was analyzed by t-test statistics and Kruskal-Wallis analysis using R1 = 1 - exp(-TR/T1) and the Ernst angle cos θ = exp(-TR/T1). Results : The SNR of WM at 130° FA is approximately 1.6 times higher than at 50°, and the SNR of GM at 130° FA is approximately 1.9 times higher than at 50°. Although the SNRs of WM and GM showed similar trends with the change of FA, the slowdown points of decrease after linear fitting were different: the SNR of WM started decreasing at 120° FA, whereas the SNR of GM started decreasing below 110°. The highest SNRs of WM and GM were obtained at 130° FA; the highest CNRs, however, were obtained at 80° FA. Conclusion : Although SNR increased as FA changed from 50° to 130° at 3T SE T1WI, CNR was higher at 80° FA than at the usually used 90° FA. In addition, SAR was decreased by using a smaller FA. The CNR can be increased by using this optimized FA at 3T MR SE T1WI.
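The Ernst-angle relation quoted above, cos θ = exp(-TR/T1), can be made concrete with a small numeric sketch. This is illustrative only: the TR and T1 values below are hypothetical, and the signal expression is the standard spoiled gradient-echo steady-state formula, borrowed here just to show how signal varies across a 50°-130° sweep (the study itself measured spin-echo SNR directly):

```python
import math

def ernst_angle_deg(tr_ms, t1_ms):
    """Ernst angle in degrees: cos(theta_E) = exp(-TR/T1)."""
    return math.degrees(math.acos(math.exp(-tr_ms / t1_ms)))

def relative_signal(fa_deg, tr_ms, t1_ms):
    """Idealized steady-state signal at flip angle fa_deg:
    S ∝ sin(θ)(1 − E1)/(1 − E1·cos(θ)), E1 = exp(−TR/T1)."""
    e1 = math.exp(-tr_ms / t1_ms)
    th = math.radians(fa_deg)
    return math.sin(th) * (1 - e1) / (1 - e1 * math.cos(th))

# Hypothetical values: TR = 500 ms, T1 of white matter at 3T ≈ 1100 ms
tr, t1 = 500.0, 1100.0
theta_e = ernst_angle_deg(tr, t1)
# Sweep 50°..130° in 10° steps, as in the study's protocol
best = max(range(50, 131, 10), key=lambda fa: relative_signal(fa, tr, t1))
```

Under this idealized model the sweep peaks at the FA closest to the Ernst angle; the study's empirical optimum differs because spin-echo contrast, not this simplified signal model, drives CNR.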


Identification of Uncertainty on the Reduction of Dead Storage in Soyang Dam Using Bayesian Stochastic Reliability Analysis (Bayesian 추계학적 신뢰도 기법을 이용한 소양강댐 퇴사용량 감소의 불확실성 분석)

  • Lee, Cheol-Eung;Kim, Sang Ug
    • Journal of Korea Water Resources Association / v.46 no.3 / pp.315-326 / 2013
  • Despite the importance of maintaining reservoir storage, relatively few studies have addressed stochastic reliability analysis, including uncertainty, of the decrease of reservoir storage by sedimentation. Therefore, a stochastic gamma process under a reliability framework is developed and applied to estimate the reduction of the Soyang Dam reservoir storage in this paper. In particular, in estimating the parameters of the stochastic gamma process, a Bayesian MCMC scheme using an informative prior distribution is used to incorporate a wide variety of information related to sedimentation. The results show that the selected informative prior distribution is reasonable, because the uncertainty of the posterior distribution is considerably reduced compared to that of the prior distribution. Also, the expected lifetime of the dead storage in the Soyang Dam reservoir, including uncertainty, is estimated to range from 119.3 years to 183.5 years at the 5% significance level. Finally, it is suggested that the assessment strategy improved in this study can provide valuable information to the decision makers in charge of reservoir maintenance.
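A stationary gamma process for cumulative sedimentation can be sketched in a few lines. The shape/scale values, reservoir capacity, and dead-storage threshold below are hypothetical stand-ins, not the paper's fitted Bayesian posteriors, and the interval reported is a plain Monte-Carlo percentile band rather than an MCMC credible interval:

```python
import random

def simulate_lifetime(shape_per_yr, scale, capacity, threshold, rng, max_yrs=500):
    """Years until a stationary gamma deterioration process X(t) eats
    through (capacity - threshold): independent Gamma(shape, scale)
    increments are accumulated year by year."""
    loss, t = 0.0, 0
    while t < max_yrs:
        t += 1
        loss += rng.gammavariate(shape_per_yr, scale)
        if loss >= capacity - threshold:
            return t
    return max_yrs

rng = random.Random(42)
# Hypothetical parameters: mean loss 0.1 units/yr, 15 units of margin
lifetimes = [simulate_lifetime(2.0, 0.05, 20.0, 5.0, rng) for _ in range(2000)]
lifetimes.sort()
ci90 = (lifetimes[100], lifetimes[1899])  # empirical 5th-95th percentile band
```

A Bayesian version would replace the fixed shape/scale pair with draws from an MCMC posterior, which is how the paper propagates parameter uncertainty into the lifetime range.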

A comparison of synthetic data approaches using utility and disclosure risk measures (유용성과 노출 위험성 지표를 이용한 재현자료 기법 비교 연구)

  • Seongbin An;Trang Doan;Juhee Lee;Jiwoo Kim;Yong Jae Kim;Yunji Kim;Changwon Yoon;Sungkyu Jung;Dongha Kim;Sunghoon Kwon;Hang J Kim;Jeongyoun Ahn;Cheolwoo Park
    • The Korean Journal of Applied Statistics / v.36 no.2 / pp.141-166 / 2023
  • This paper investigates synthetic data generation methods and their evaluation measures. There have been increasing demands to release various types of data to the public for different purposes, but at the same time there are unavoidable concerns about leaking critical or sensitive information. Many synthetic data generation methods have been proposed over the years to address these concerns, and some have been implemented in several countries, including Korea. The current study introduces and compares three representative synthetic data generation approaches: sequential regression, nonparametric Bayesian multiple imputation, and deep generative models. Several evaluation metrics that measure the utility and disclosure risk of synthetic data are also reviewed, and we provide empirical comparisons of the three approaches with respect to these measures. The findings of this work will help practitioners better understand the advantages and disadvantages of these synthetic data methods.
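Of the three approaches compared, sequential regression is the easiest to sketch. The toy below synthesizes one variable from its observed marginal and the next from a linear fit with resampled residuals; real implementations fit richer conditional models variable by variable, so treat this as a minimal illustration on made-up data:

```python
import random
import statistics

def sequential_synthesize(x1, x2, rng):
    """Sequential-regression synthesis sketch: draw x1* from the observed
    marginal, then x2* from a simple OLS fit x2 ~ x1 plus a resampled
    residual (a stand-in for the fitted conditional model)."""
    n = len(x1)
    mx, my = statistics.fmean(x1), statistics.fmean(x2)
    sxx = sum((a - mx) ** 2 for a in x1)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x1, x2))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [b - (intercept + slope * a) for a, b in zip(x1, x2)]
    syn1 = [rng.choice(x1) for _ in range(n)]                  # marginal draw
    syn2 = [intercept + slope * a + rng.choice(resid) for a in syn1]
    return syn1, syn2

rng = random.Random(0)
x1 = [rng.gauss(0, 1) for _ in range(500)]          # synthetic "confidential" data
x2 = [2 * a + rng.gauss(0, 0.5) for a in x1]
s1, s2 = sequential_synthesize(x1, x2, rng)
```

Utility measures such as pMSE would then compare (x1, x2) against (s1, s2); disclosure-risk measures check how often synthetic records sit too close to real ones.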

List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics / v.23 no.4 / pp.309-316 / 2012
  • Multimodal imaging techniques have been rapidly developed to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy is decreased by discrepancies between the multimodal images and by insufficient counts arising from the different acquisition methods of each modality. The purpose of this study was to improve the PET image by event-data resampling through analysis of the data format, noise, and statistical properties of small-animal PET list data. Inveon PET list-mode data were acquired as static data for 10 min, 60 min after injection of 37 MBq/0.1 ml ¹⁸F-FDG via the tail vein. The list-mode data format consists of 48-bit packets, each divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data resampled from the original list-mode data by adjusting LOR location, by simple event magnification, and by the nonparametric bootstrap. Sinograms were reconstructed into images using the OSEM 2D algorithm with 16 subsets and 4 iterations. Prompt coincidences were 13,940,707 counts as measured from the PET data header and 13,936,687 counts as measured from analysis of the list-event data. With simple event magnification of the PET data, the maximum was improved from 1.336 to 1.743, but noise also increased. The resampling efficiency of the PET data was assessed from images de-noised and improved by shift operations on the payload values of sequential packets. The bootstrap resampling technique provides a PET image with improved noise and statistical properties. The list-event data resampling method would aid in improving registration accuracy and early-diagnosis efficiency.
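The nonparametric bootstrap step can be sketched independently of the 48-bit packet format: resample the event list with replacement, then re-bin. The 32-bin "sinogram" and synthetic LOR indices below are hypothetical placeholders for the Inveon list-mode stream:

```python
import random

def bootstrap_events(events, rng):
    """Nonparametric bootstrap: resample the list-event stream with
    replacement, preserving the original total count."""
    n = len(events)
    return [events[rng.randrange(n)] for _ in range(n)]

def to_histogram(events, n_bins):
    """Bin events (here, integer LOR indices) into a 1-D 'sinogram'."""
    h = [0] * n_bins
    for e in events:
        h[e] += 1
    return h

rng = random.Random(7)
# Hypothetical event stream: each entry is a LOR bin index
events = [rng.randrange(32) for _ in range(10000)]
boot = bootstrap_events(events, rng)
sino_orig = to_histogram(events, 32)
sino_boot = to_histogram(boot, 32)
```

Averaging reconstructions over many such bootstrap replicates is what smooths the count statistics; a single replicate, as here, simply shows the mechanics.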

An Application of Artificial Intelligence System for Accuracy Improvement in Classification of Remotely Sensed Images (원격탐사 영상의 분류정확도 향상을 위한 인공지능형 시스템의 적용)

  • 양인태;한성만;박재국
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.20 no.1 / pp.21-31 / 2002
  • This study applied neural network theory and fuzzy set theory to improve accuracy in classifying remotely sensed images. Remotely sensed data have been used to map land cover. The accuracy depends on a range of factors related to the data set and the methods used; thus, the accuracy of maps derived from conventional supervised image classification is a function of factors related to the training, allocation, and testing stages of the classification. Conventional image classification techniques assume that all the pixels within the image are pure, that is, that each represents an area of homogeneous cover of a single land-cover class. This assumption is often untenable, however, with pixels of mixed land-cover composition abundant in an image. Mixed pixels are a major problem in land-cover mapping applications. For each pixel, the strengths of class membership derived in the classification may be related to its land-cover composition. The concept of a pixel having a degree of membership in every class is fundamental to fuzzy-sets-based classification techniques. A major problem with fuzzy-set and probabilistic methods is that they are slow and computationally demanding, so alternative techniques are required for analyzing large data sets with rapid processing. One particularly attractive approach is the use of artificial neural networks. These are non-parametric techniques which have been shown to be generally capable of classifying data as accurately as, or more accurately than, conventional classifiers. An artificial neural network, once trained, may classify data extremely rapidly, as the classification process may be reduced to the solution of a large number of extremely simple calculations which may be performed in parallel.
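The mixed-pixel idea, each pixel holding a degree of membership in every class, can be illustrated with a fuzzy-c-means-style membership function. The two-band pixel and class means below are invented for the example and are not from the study:

```python
import math

def fuzzy_memberships(pixel, class_means, m=2.0):
    """Fuzzy-c-means-style membership degrees: u_k ∝ 1 / d_k^(2/(m-1)),
    normalized to sum to 1 across classes, where d_k is the distance
    from the pixel to class mean k and m is the fuzzifier."""
    dists = [math.dist(pixel, c) for c in class_means]
    if any(d == 0 for d in dists):          # pixel exactly at a class mean
        return [1.0 if d == 0 else 0.0 for d in dists]
    inv = [d ** (-2.0 / (m - 1.0)) for d in dists]
    s = sum(inv)
    return [v / s for v in inv]

# Hypothetical 2-band pixel and class means (say: water, vegetation, urban)
means = [(0.1, 0.8), (0.5, 0.5), (0.9, 0.2)]
u = fuzzy_memberships((0.3, 0.65), means)
```

A neural network's softmax outputs play a similar soft-membership role, which is why the paper treats the two families as alternatives for the same mixed-pixel problem.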

Failure Time Prediction Capability Comparative Analysis of Software NHPP Reliability Model (소프트웨어 NHPP 신뢰성모형에 대한 고장시간 예측능력 비교분석 연구)

  • Kim, Hee-Cheul;Kim, Kyung-Soo
    • Journal of Digital Convergence / v.13 no.12 / pp.143-149 / 2015
  • This study aims to analyze the predictive capability of some popular software NHPP reliability models (the Goel-Okumoto model, the delayed S-shaped reliability model, and the Rayleigh distribution model). The analysis rests on two key factors: the degree of fit to the available failure data and the prediction capability. The parameters of each model were estimated by maximum likelihood using the first 80% of the failure data, and the predictive capability of the models was compared by validating against the remaining 20%. The findings of this study can be used as a priori information for administrators analyzing software failures.
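For the Goel-Okumoto model specifically, with mean value function m(t) = a(1 − e^(−bt)), the maximum-likelihood step reduces to one-dimensional root finding. The failure times below are simulated from known parameters rather than taken from the paper's data set, so this is a sketch of the estimation mechanics only:

```python
import math
import random

def go_mle(times, T):
    """Goel-Okumoto NHPP MLE from failure times on [0, T]:
    b solves n/b - Σt_i - nT·e^{-bT}/(1 - e^{-bT}) = 0 (bisection),
    then a = n / (1 - e^{-bT})."""
    n, s = len(times), sum(times)
    def f(b):
        e = math.exp(-b * T)
        return n / b - s - n * T * e / (1 - e)
    lo, hi = 1e-6, 10.0                     # f(lo) > 0, f(hi) < 0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    b = (lo + hi) / 2
    return n / (1 - math.exp(-b * T)), b

def go_mean(t, a, b):
    """Expected cumulative failures by time t."""
    return a * (1 - math.exp(-b * t))

rng = random.Random(1)
b_true, T = 0.05, 100.0
c = 1 - math.exp(-b_true * T)
# Conditional on n failures, times are iid truncated-exponential draws
times = sorted(-math.log(1 - rng.random() * c) / b_true for _ in range(200))
a_hat, b_hat = go_mle(times, T)
```

The 80/20 scheme in the paper would fit on times up to the 80% cut and score predictions of m(t) against the held-out 20%.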

Stochastic disaggregation of daily rainfall based on K-Nearest neighbor resampling method (K번째 최근접 표본 재추출 방법에 의한 일 강우량의 추계학적 분해에 대한 연구)

  • Park, HeeSeong;Chung, GunHui
    • Journal of Korea Water Resources Association / v.49 no.4 / pp.283-291 / 2016
  • As infrastructure and population become concentrated in mega cities, urban flood management has become very important because of the severe loss of lives and properties. For more accurate calculation of runoff from an urban catchment, hourly or even minute-scale rainfall data have been utilized. However, the time steps of the measured data, and of data forecasted under climate change scenarios, are longer than hourly, which makes application difficult. In this study, daily rainfall data were disaggregated into hourly data using a stochastic method. Based on historical hourly precipitation data, the Gram-Schmidt orthonormalization process and the K-Nearest Neighbor Resampling (KNNR) method were applied to disaggregate daily precipitation into hourly precipitation. This method was originally developed to disaggregate yearly runoff data into monthly data. Because precipitation data have a smaller probability density than runoff data, seven different rainfall patterns considering the previous and next days were proposed, and disaggregated rainfall was resampled only from days with the same rainfall pattern to improve applicability. The proposed method was applied to rainfall data observed at the Seoul weather station, which has 52 years of hourly rainfall records, and the disaggregated hourly data were compared to the measured data. The proposed method might be applied to disaggregate climate change scenarios.
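The KNNR resampling step can be sketched as follows. This simplified version matches on daily totals only and rescales the chosen day's hourly profile, omitting the Gram-Schmidt step and the seven-pattern classification described above; the hourly history is synthetic:

```python
import random

def knnr_disaggregate(daily_total, history, k=5, rng=None):
    """K-Nearest-Neighbor Resampling sketch: pick one of the k historical
    days whose daily totals are closest to `daily_total`, then rescale its
    24-hour profile so the hours sum exactly to the target daily amount.
    `history` is a list of 24-element hourly rainfall lists."""
    rng = rng or random.Random()
    nearest = sorted(history, key=lambda h: abs(sum(h) - daily_total))[:k]
    pick = rng.choice(nearest)
    total = sum(pick)
    return [v * daily_total / total for v in pick]

rng = random.Random(3)
# Hypothetical hourly history: 200 wet days of nonnegative hourly depths
history = [[max(0.0, rng.gauss(1, 1)) for _ in range(24)] for _ in range(200)]
hourly = knnr_disaggregate(30.0, history, k=5, rng=rng)
```

Restricting `history` to days of the same pattern class (dry-wet-dry, wet-wet-wet, and so on) is the refinement the paper adds on top of this basic scheme.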

Groundwater level behavior analysis using kernel density estimation (비모수 핵밀도 함수를 이용한 지하수위 거동분석)

  • Jeong, Ji Hye;Kim, Jong Wook;Lee, Jeong Ju;Chun, Gun Il
    • Proceedings of the Korea Water Resources Association Conference / 2017.05a / pp.381-381 / 2017
  • The impact of climate change on water resources is known to increase the frequency and variability of extreme hydrologic events such as floods and droughts. Accordingly, to monitor and mitigate droughts whose frequency and severity have increased compared with previous years, the government, jointly with related agencies including the Ministry of Public Safety and Security, provides drought information for each sector, such as domestic, industrial, and agricultural water. The Ministry of Land, Infrastructure and Transport and the Ministry of Environment analyze water supply-demand information for areas served by regional and local waterworks, and for unserved areas relying on village waterworks and small-scale water supply facilities, to provide drought analysis information for the domestic and industrial water sectors. For unserved areas, however, drought forecasts and warnings are produced using the meteorological drought index SPI6, because reference water-source information is unavailable. Meteorological drought differs from the drought actually experienced through water shortage, and since most unserved areas rely on groundwater as their main water source, drought information based on SPI6 is insufficient to reflect actual water supply-demand conditions. This study therefore aimed to develop a drought-monitoring technique that reflects the groundwater-level conditions of unserved areas; considering that assessing the available groundwater volume is practically difficult, the approach monitors drought through statistical analysis of water-level behavior. Among the national groundwater monitoring stations, those with more than 10 years of records and a high correlation with rainfall were selected; the daily water-level records were separated by month, and kernel density estimation was applied to each month from January to December to derive the monthly distribution characteristics of groundwater levels. Using percentiles of the observed water-level distribution at each station, drought stages were classified as normal between the 25th and 100th percentiles, watch between the 10th and 25th, severe drought between the 5th and 10th, and very severe below the 5th percentile. The water-level values corresponding to each percentile were computed from the estimated kernel density and quantile function, and the average water level of the most recent 10 days was taken as the current level to classify the degree of drought. The results were spatially distributed from the stations by inverse distance weighting, and, to reflect hydrologic and hydrogeologic homogeneity, a groundwater-level grade map representing nationwide groundwater drought conditions was produced by spatial overlay with watershed and hydrogeologic maps. To analyze the correlation with actual drought conditions, drought periods identified through press reports were compared and verified against groundwater drought periods analyzed as below the 25th percentile, using ROC (receiver operating characteristic) analysis.
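The core of the method, a kernel-density CDF evaluated at the current level and mapped to the percentile bands above, can be sketched briefly. The Gaussian-kernel CDF, fixed bandwidth, and sample water levels below are illustrative assumptions, not the study's fitted values:

```python
import math

def kde_cdf(x, sample, bandwidth):
    """CDF of a Gaussian kernel density estimate: the mean of the
    per-point kernel CDFs Φ((x - x_i) / h)."""
    return sum(0.5 * (1 + math.erf((x - xi) / (bandwidth * math.sqrt(2))))
               for xi in sample) / len(sample)

def drought_grade(level, monthly_levels, bandwidth=0.2):
    """Grade by the percentile bands used in the study: >=25% normal,
    10-25% watch, 5-10% severe, <5% very severe (low level = drought)."""
    p = kde_cdf(level, monthly_levels, bandwidth)
    if p >= 0.25:
        return "normal"
    if p >= 0.10:
        return "watch"
    if p >= 0.05:
        return "severe"
    return "very severe"

# Hypothetical historical May water levels for one station (meters)
levels = [12.0 + 0.1 * i for i in range(41)]  # 12.0 .. 16.0 m
```

In the study the input `level` would be the 10-day average of the current observed groundwater level, computed separately for each calendar month's distribution.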


CO2 Emission and Productivity of Fossil-fueled Power Plants: A Luenberger Indicator Approach (CO2 배출량을 감안한 화력발전소의 생산성 변화 분석: Luenberger지수 접근법)

  • Kwon, Oh-Sang
    • Environmental and Resource Economics Review / v.19 no.4 / pp.733-752 / 2010
  • This study applies the Luenberger indicator approach to estimate the productivity of Korean fossil-fueled power plants, using a panel data set of 25 power plants. The method incorporates CO2 emission as an undesirable output and shows that ignoring CO2 emission overestimates productivity change. There are two sources of overestimation. First, the usual method estimates productivity change while ignoring the increase in CO2 emission that occurred during the study period. Second, the productivity change estimated by the usual method, which does not incorporate CO2 emission, is very sensitive to changes in the operation rate. The paper decomposes productivity change into efficiency change and technical change; the results show that the two sources contribute to productivity change almost equally. It is also shown that the size and pattern of productivity change depend on the plants' fuel types: non-LNG power plants, which saved energy consumption and thereby reduced their CO2 emission, achieved a relatively high rate of productivity improvement.
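The Luenberger indicator itself is simple arithmetic once the four directional distance function values are in hand (computing those values requires solving the DEA-type programs, which is outside this sketch); the numbers below are invented purely to show the decomposition identity LPI = efficiency change + technical change:

```python
def luenberger(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Luenberger productivity indicator from four directional distance
    function values D_s(x_r, y_r, b_r), where s is the technology period
    and r the data period. Returns (LPI, efficiency change, technical
    change), with LPI = EC + TC by construction."""
    lpi = 0.5 * ((d_t_t - d_t_t1) + (d_t1_t - d_t1_t1))
    eff_change = d_t_t - d_t1_t1
    tech_change = 0.5 * ((d_t1_t1 - d_t_t1) + (d_t1_t - d_t_t))
    return lpi, eff_change, tech_change

# Invented distance values for one plant across periods t and t+1
lpi, ec, tc = luenberger(0.10, 0.04, 0.12, 0.05)
```

In the paper's setting, the directional distance functions expand desirable output while contracting the undesirable CO2 output, which is what makes ignoring CO2 inflate the indicator.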


Analysis of Confidence Interval of Design Wave Height Estimated Using a Finite Number of Data (한정된 자료로 추정한 설계파고의 신뢰구간 분석)

  • Jeong, Weon-Mu;Cho, Hong-Yeon;Kim, Gunwoo
    • Journal of Korean Society of Coastal and Ocean Engineers / v.25 no.4 / pp.191-199 / 2013
  • The design wave height and its confidence interval (hereafter CI) according to the return period are estimated and analyzed using fourteen years of wave data obtained at Pusan New Port. The functions used in the extreme value analysis are the Gumbel function, the Weibull function, and the kernel function. The CI of the estimated wave heights was predicted using the bootstrap method, one of the Monte Carlo simulation methods. The analysis of the estimated CI of the design wave height indicates that over 150 years of data are necessary to satisfy an approximately ±10% CI. Also, taking the number of practically available data points to be around 25-50, the allowable error was found to be approximately ±16-22% for the Type I PDF and ±18-24% for the Type III PDF. In contrast, for the kernel distribution method, a typical non-parametric method, the CI is below 40% of that of the other methods, and the estimated design wave height is 1.2-1.6 m lower than that of the other methods.
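The bootstrap CI procedure for a Gumbel-fitted design height can be sketched as below; the method-of-moments fit, 50-year return period, and synthetic annual maxima are illustrative assumptions rather than the paper's data or fitting choices:

```python
import math
import random

def gumbel_quantile(sample, return_period):
    """Design value for a return period T from a Gumbel (Type I) fit by
    the method of moments: beta = s*sqrt(6)/pi, mu = mean - 0.5772*beta,
    then x_T = mu - beta * ln(-ln(1 - 1/T))."""
    n = len(sample)
    m = sum(sample) / n
    s = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    beta = s * math.sqrt(6) / math.pi
    mu = m - 0.5772 * beta
    return mu - beta * math.log(-math.log(1 - 1 / return_period))

def bootstrap_ci(sample, return_period, n_boot=1000, alpha=0.1, rng=None):
    """Percentile-bootstrap CI for the design value: refit on resampled
    data sets and take the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = rng or random.Random()
    n = len(sample)
    reps = sorted(gumbel_quantile([rng.choice(sample) for _ in range(n)],
                                  return_period) for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

rng = random.Random(11)
# Hypothetical annual-maximum wave heights (m): 25 years of data
hmax = [4.0 + rng.gauss(0, 0.8) + 0.5 * rng.expovariate(1.0) for _ in range(25)]
h50 = gumbel_quantile(hmax, 50)
lo, hi = bootstrap_ci(hmax, 50, rng=rng)
```

Widening of (lo, hi) as the sample shrinks is exactly the effect the paper quantifies when it reports ±16-24% allowable error for 25-50 data points.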