• Title/Summary/Keyword: Variance Reduction Method


Gaussian Noise Reduction Algorithm using Self-similarity

  • 전영은;엄민영;최윤식
    • 대한전자공학회논문지SP / Vol. 44, No. 5 / pp.1-10 / 2007
  • Most natural images exhibit self-similarity, the property underlying fractal theory. Although an image can be assumed locally stationary, image signals are in general non-stationary, containing discontinuities such as edges and corners, and this degrades the performance of most linear algorithms. To address this problem, this paper proposes a new nonlinear noise reduction algorithm that exploits the self-similarity contained in the image. First, the pixels surrounding the pixel to be denoised are examined to decide whether the region is flat. In a flat region, the noise is removed by averaging the surrounding pixels; otherwise, blocks with high similarity in the sense of block MSE (mean square error) are searched for, and the center pixel values of those blocks are used for denoising. Experimental results show that the noise reduction performance improves by about 1-3 dB in PSNR. Furthermore, a variance analysis of the estimator shows that it attains the lowest variance from an estimation-theoretic viewpoint.
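The two-branch scheme described in the abstract (flat-region test, then block-MSE matching) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the patch size, search radius, flatness threshold, and number of matched blocks are assumed values.

```python
import numpy as np

def self_similarity_denoise(img, noise_var, half=3, search=8, k=8):
    """Sketch: mean filtering in flat regions, block-MSE matching elsewhere.
    All parameters are illustrative, not the paper's values."""
    h, w = img.shape
    out = img.astype(float).copy()
    pad = half + search
    p = np.pad(img.astype(float), pad, mode="reflect")
    for i in range(h):
        for j in range(w):
            y, x = i + pad, j + pad
            ref = p[y - half:y + half + 1, x - half:x + half + 1]
            if ref.var() < 2.0 * noise_var:    # flat-region test (assumed form)
                out[i, j] = ref.mean()         # average of surrounding pixels
                continue
            # non-flat: rank candidate blocks by block MSE against the reference
            cands = []
            for dy in range(-search, search + 1, 2):   # coarse grid for speed
                for dx in range(-search, search + 1, 2):
                    blk = p[y + dy - half:y + dy + half + 1,
                            x + dx - half:x + dx + half + 1]
                    cands.append((np.mean((blk - ref) ** 2), p[y + dy, x + dx]))
            cands.sort(key=lambda t: t[0])
            # average the center pixels of the k most similar blocks
            out[i, j] = np.mean([c for _, c in cands[:k]])
    return out
```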

Gaussian Noise Reduction Method using Adaptive Total Variation: Application to Cone-Beam Computed Tomography Dental Image

  • 김중혁;김정채;김기덕;유선국
    • 전자공학회논문지SC / Vol. 49, No. 1 / pp.29-38 / 2012
  • Noise arising while medical images are acquired interferes with image reading and diagnosis. To restore the original image from one corrupted by such noise, the total variation optimization algorithm proposed by ROF (L. Rudin, S. Osher, E. Fatemi) removes noise by balancing regularization against data fidelity. However, blurring of edge regions is unavoidable during the iterative computations performed to raise the noise removal rate. In this paper, the control parameter of the total variation optimization algorithm is varied adaptively according to the noise variance and the local variance characteristics of the image, so as to minimize distortion of edge regions in dental images while removing noise over the whole image. Applying the proposed algorithm to 464 CBCT dental images improved the PSNR by about 3 dB compared with the original ROF method. In addition, when the processed images were reconstructed into a 3D volume for comparison, the edge regions of the tooth model were better preserved than with the existing method.
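A minimal sketch of the idea: gradient-descent ROF denoising whose fidelity weight lambda is raised where the local variance exceeds the noise variance (edges) and lowered in smooth regions. The window size, lambda bounds, and step size below are assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_tv_denoise(f, noise_var, n_iter=200, dt=0.1, eps=1e-6, win=7):
    """Gradient-descent ROF with a spatially varying fidelity weight lambda.
    High local variance (edges) -> larger lambda -> weaker smoothing there."""
    f = f.astype(float)
    mu = uniform_filter(f, win)
    local_var = np.maximum(uniform_filter(f * f, win) - mu * mu, 0.0)
    lam = np.clip(local_var / (noise_var + eps), 0.05, 5.0)  # assumed scaling
    u = f.copy()
    for _ in range(n_iter):
        # forward differences of u, then the curvature term div(grad u / |grad u|)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux * ux + uy * uy + eps)
        px, py = ux / mag, uy / mag
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        # descend the energy TV(u) + (lam/2) * ||u - f||^2
        u += dt * (div - lam * (u - f))
    return u
```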

The Understanding and Application of Noise Reduction Software in Static Images

  • 이형진;송호준;승종민;최진욱;김진의;김현주
    • 핵의학기술 / Vol. 14, No. 1 / pp.54-60 / 2010
  • The new software introduced at our hospital has been used only for SPECT and whole-body bone imaging; to apply it more effectively to other examinations, we investigated its usefulness through phantom experiments and image comparisons. A Body IEC phantom, a Jaszczak ECT phantom, and a cylinder phantom with capillaries were used; counts and statistics before and after processing were compared, and changes in contrast ratio and background (BKG) were analyzed quantitatively. In the FWHM comparison using a capillary source, PIXON showed almost no difference between pre- and post-processing images, whereas ASTONISH produced visibly improved post-processing images. In contrast, the standard deviation and corresponding variance decreased somewhat with PIXON but increased sharply with ASTONISH. In the BKG variability comparison using the IEC phantom, PIXON showed an overall decrease while ASTONISH tended to increase somewhat, and the contrast ratio of each sphere improved with both methods. In terms of image scale, the window width increased about 4-5 times after PIXON processing, while ASTONISH showed little difference. After analyzing the phantom experiments, ASTONISH appeared applicable to examinations requiring ROI-based quantitative analysis and to contrast-emphasizing examinations, while PIXON was considered useful for nuclear medicine examinations with insufficient acquired counts or low SNR. The quantitative figures commonly used for image analysis generally improved after applying the software, but differences in the resulting images due to the algorithmic characteristics of the software outweighed differences between gamma cameras, so maintaining consistency across all nuclear medicine examinations is expected to be difficult. In addition, excellent image quality cannot be expected from measures such as the drastic shortening of scan time used in whole-body bone imaging. When introducing new software, protocols suited to each hospital's circumstances and substantial study before clinical application appear necessary.
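For reference, the kind of quantitative comparison described (per-sphere contrast ratio, background variability from ROI statistics) can be computed roughly as below. The abstract does not give the exact definitions used in the study, so these IEC-style formulas and the sample ROI values are assumptions.

```python
import numpy as np

def contrast_ratio(sphere_roi_mean, bkg_roi_mean):
    """Simple hot-sphere contrast ratio from ROI mean counts (assumed definition)."""
    return sphere_roi_mean / bkg_roi_mean

def bkg_variability(bkg_roi_means):
    """IEC-style background variability: SD across background ROIs / mean, in %."""
    m = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# e.g., comparing pre- and post-processing statistics for one sphere
print(contrast_ratio(820.0, 310.0))                          # hypothetical counts
print(bkg_variability([305.0, 298.0, 312.0, 301.0, 295.0]))  # hypothetical ROIs
```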


Design of Turbulent In-situ Mixing Mixer and Fabrication of Cu-TiB2 Nanocomposites

  • 최백부;박정수;윤지훈;하만영;박용호;박익민
    • 한국재료학회지 / Vol. 17, No. 1 / pp.11-17 / 2007
  • Turbulent in-situ mixing is a new materials-processing technology for obtaining a dispersed phase of nanometer size by simultaneously controlling liquid/solid and liquid/gas reactions, flow, and solidification speed. In this study, mixing, the key technology in this synthesis method, was studied by computational fluid dynamics. For the simulation of liquid-metal mixing, static mixers were investigated: two inlets for different liquid metals meet and merge in a 'Y'-shaped tube with various shapes and radii of curvature. Mixer performance was evaluated quantitatively using the coefficient of variance of the mass fraction, and detailed plots at the intersection were presented to show the effect of mixer shape on mixing. The simulations show that the Reynolds number (Re) is the important factor for the mixing and dispersion of TiB2 particles. A mixer was designed according to the simulations, and Cu-TiB2 nanocomposites were fabricated and evaluated. TiB2 nanoparticles were uniformly dispersed when Re was 1000, while cluster formation and a reduction in the volume fraction of TiB2 were found at higher Re.
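The mixing metric named above, the coefficient of variance of the mass fraction, is straightforward to compute from sampled values on an outlet cross-section; a perfectly mixed stream gives 0. A minimal sketch with hypothetical sample values:

```python
import numpy as np

def mixing_cov(mass_fractions):
    """Coefficient of variance of the mass fraction over cross-section samples.
    0 means perfectly mixed; larger values mean poorer mixing."""
    c = np.asarray(mass_fractions, dtype=float)
    return c.std(ddof=1) / c.mean()

# hypothetical mass-fraction samples at the mixer outlet
print(mixing_cov([0.49, 0.51, 0.50, 0.50, 0.48, 0.52]))  # well mixed -> small CoV
print(mixing_cov([0.10, 0.90, 0.15, 0.85, 0.20, 0.80]))  # poorly mixed -> large CoV
```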

Elucidating Energy Requirements in Alternative Methods of Robo Production

  • Akinoso, Rahman;Are, Oluwayemisi Teslima
    • Journal of Biosystems Engineering / Vol. 43, No. 2 / pp.128-137 / 2018
  • Purpose: This study was designed to elucidate the energy-utilization patterns of five methods of robo production. Methods: Robo (fried melon cake) was produced using five different methods, and the energy used for each unit operation was calculated using standard equations. The sensory attributes of the products were determined by panelists. Data were analyzed using descriptive analysis and analysis of variance at p < 0.05. Results: The energy demands for processing 2.84 kg of melon seed into robo using processes 1 (traditional method), 2, 3, 4, and 5 (improved methods) were 50,599.5, 21,793.6, 20,379.7, 21,842.9, and 20,429.3 kJ, respectively. These are equivalent to energy intensities of 17,816.7, 7,673.8, 7,175.9, 7,691.2, and 7,193.4 kJ/kg, respectively. For the traditional process, the frying operation consumed the most energy (21,412.0 kJ) and the mixing operation the least (675.0 kJ). For the semi-mechanized processes, the molding operation consumed the most energy (6,120.0 kJ) and dry milling the least (14.4 kJ). Conclusions: The energy-consumption patterns were functions of the type of unit operation, the technology involved in the operations, and the size of the equipment used in the whole processing operation. Robo produced by milling dried melon seed before oil expression was rated highest for aroma, taste, and overall acceptability in the sensory evaluation, and required the lowest energy consumption. Full mechanization of the process line has potential to further reduce the energy demand.
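As a quick check of the reported numbers, the energy intensities follow from dividing each total energy demand by the 2.84 kg batch mass (which is also why the first intensity reads 17,816.7 kJ/kg):

```python
mass_kg = 2.84  # melon seed processed per batch
total_kj = {"process 1": 50599.5, "process 2": 21793.6, "process 3": 20379.7,
            "process 4": 21842.9, "process 5": 20429.3}
for name, e in total_kj.items():
    # reproduces 17,816.7 / 7,673.8 / 7,175.9 / 7,691.2 / 7,193.4 kJ/kg
    print(f"{name}: {e / mass_kg:,.1f} kJ/kg")
```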

Study on the Optimization of Parameters for Burring Process Using 980MPa Hot-rolled Thick Sheet Metal

  • 김상훈;도두이퉁;박종규;김영석
    • 소성∙가공 / Vol. 30, No. 6 / pp.291-300 / 2021
  • Currently, starting with electric vehicles, the application of ultra-high-strength steel sheets and light metals has expanded to improve mileage by reducing vehicle weight. As internal combustion engine vehicles rapidly give way to electric vehicles, the application of ultra-high-strength steel is expanding to satisfy both weight reduction and the safety performance of chassis parts, and there is an urgent need to improve part quality without defects. It is particularly difficult to estimate part formability through the finite element method (FEM) in the burring operation, so product design has been based on the hole expansion ratio (HER) and experience. In this study, design of experiments (DOE), analysis of variance (ANOVA), and regression analysis were combined to optimize formability by adjusting the process variables affecting the burring formability of ultra-high-strength steel parts. The optimal variables were derived by analyzing the influence of each variable and the correlations between variables through FE analysis. Finally, the optimized process parameters were verified by comparing experiment with simulation. Regarding the main influence of each process variable, the initial hole diameter in the piercing process and the shape height in the preforming process had the greatest effects on burring formability, while the lower punch round in the burring process had the smallest effect. Moreover, as the diameter of the initial hole increased, the thickness reduction rate in the burring part decreased, and the final burring height increased as the shape height during preforming increased.
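The DOE-plus-ANOVA workflow described can be sketched with a small factorial table. The factor names, levels, and response values below are hypothetical placeholders standing in for the paper's FE-analysis results, chosen only so the main-effect ranking matches the abstract's conclusion:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# hypothetical 2^3 factorial for the burring study: pierced-hole diameter,
# preform shape height, lower punch round -> thickness reduction (%)
df = pd.DataFrame({
    "hole_d":   [30, 30, 30, 30, 34, 34, 34, 34],  # mm (assumed levels)
    "preform":  [4, 4, 8, 8, 4, 4, 8, 8],          # mm (assumed levels)
    "punch_r":  [2, 5, 2, 5, 2, 5, 2, 5],          # mm (assumed levels)
    "thinning": [24.1, 23.8, 20.5, 20.2, 18.9, 18.6, 15.8, 15.5],  # % (assumed)
})
model = smf.ols("thinning ~ C(hole_d) + C(preform) + C(punch_r)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main-effect F tests, as in the study
```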

Effects of different wind deflectors on wind loads for extra-large cooling towers

  • Ke, S.T.;Zhu, P.;Ge, Y.J.
    • Wind and Structures / Vol. 28, No. 5 / pp.299-313 / 2019
  • In order to examine the effects of different wind deflectors on the wind load distribution characteristics of extra-large cooling towers, a comparative study of the distribution characteristics of wind pressures on the surfaces of three large cooling towers with typical wind deflectors and one tower without a wind deflector was conducted using wind tunnel tests. These characteristics include aerodynamic parameters such as mean wind pressures, fluctuating wind pressures, peak factors, correlation coefficients, extreme wind pressures, drag coefficients, and vorticity distribution. The distribution regularities of the different wind deflectors for the global and local wind pressures of extra-large cooling towers were then extracted, and finally a fitting formula for the extreme wind pressure of cooling towers with different wind deflectors was provided. The results showed that the large eddy simulation (LES) method used in this article can accurately simulate the wind loads of such extra-large cooling towers. The three typical wind deflectors effectively reduced the average wind pressure in the negative-pressure extreme regions in the central part of the tower, and were also effective in reducing the root variance (standard deviation) of the fluctuating wind pressure in the upper-middle part of the windward side of the tower, with the curved wind deflector performing particularly well. All the wind deflectors effectively reduced the extreme wind pressures in the middle and lower regions of the windward side of the tower and in the negative-pressure extreme region, with the best effect again given by the curved wind deflector. After the wind deflectors were installed, the drag coefficient values of each layer in the middle and lower parts of the tower were significantly higher than without a wind deflector, but the effect on the drag coefficients of layers above the throat was weak. The peak factors for the windward side, the side, and the leeward side of extra-large cooling towers with different wind deflectors were set to 3.29, 3.41, and 3.50, respectively.
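For context, extreme pressure coefficients of the kind fitted in the paper are conventionally estimated from the mean and RMS fluctuating pressure with a peak factor g. The sketch below applies the paper's reported region-wise factors in the standard mean ± g·rms form, which is an assumption; the time series is synthetic:

```python
import numpy as np

# peak factors reported for the three regions of the tower surface
G = {"windward": 3.29, "side": 3.41, "leeward": 3.50}

def extreme_cp(cp_series, region):
    """Extreme pressure coefficients as mean +/- g * rms (conventional form)."""
    cp = np.asarray(cp_series, dtype=float)
    g = G[region]
    mean, rms = cp.mean(), cp.std(ddof=1)
    return mean + g * rms, mean - g * rms   # (max, min) extremes

# hypothetical pressure-coefficient time series from a tap in the side region
rng = np.random.default_rng(0)
cp = -1.2 + 0.3 * rng.standard_normal(10_000)
print(extreme_cp(cp, "side"))   # roughly (-0.18, -2.22)
```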

Micro-CT evaluation of the removal of root fillings using rotary and reciprocating systems supplemented by XP-Endo Finisher, the Self-Adjusting File, or Er,Cr:YSGG laser

  • Gulsen Kiraz;Bulem Ureyen Kaya;Mert Ocak;Muhammet Bora Uzuner;Hakan Hamdi Celik
    • Restorative Dentistry and Endodontics / Vol. 48, No. 4 / pp.36.1-36.15 / 2023
  • Objectives: This study aimed to compare the effectiveness of a single-file reciprocating system (WaveOne Gold, WOG) and a multi-file rotary system (ProTaper Universal Retreatment, PTUR) in removing canal filling from severely curved canals and to evaluate the possible adjunctive effects of XP-Endo Finisher (XPF), the Self-Adjusting File (SAF), and an erbium, chromium: yttrium, scandium, gallium garnet (Er,Cr:YSGG) laser using microcomputed tomography (µCT). Materials and Methods: Sixty-six curved mandibular molars were divided into 2 groups based on the retreatment technique and then into 3 based on the supplementary method. The residual filling volumes and root canals were evaluated with µCT before and after retreatment, and after the supplementary steps. The data were statistically analyzed with the t-test, Mann-Whitney U test, analysis of covariance, and factorial analysis of variance (p < 0.05). Results: PTUR and WOG showed no significant difference in removing filling materials (p > 0.05). The supplementary techniques were significantly more effective than reciprocating or rotary systems only (p < 0.01). The supplementary steps showed no significant differences in canal filling removal effectiveness (p > 0.05), but XPF showed less dentin reduction than the SAF and Er,Cr:YSGG laser (p < 0.01). Conclusions: The supplementary methods significantly decreased the volume of residual filling materials. XPF caused minimal changes in root canal volume and might be preferred for retreatment in curved root canals. Supplementary approaches after retreatment procedures may improve root canal cleanliness.

Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis

  • Chenggong Yan;Jie Lin;Haixia Li;Jun Xu;Tianjing Zhang;Hao Chen;Henry C. Woodruff;Guangyao Wu;Siqi Zhang;Yikai Xu;Philippe Lambin
    • Korean Journal of Radiology / Vol. 22, No. 6 / pp.983-993 / 2021
  • Objective: To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods: Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and the Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results: With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality with optimal visibility of anatomic structures and pathological findings, with a lower level of image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield units [HU]) than hybrid iterative reconstruction (66.3 ± 10.5 HU, p < 0.001) and a noise level similar to that of model-based iterative reconstruction (19.6 ± 2.6 HU, p = 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv, a mean reduction of 93.9% compared with standard-dose CT. Conclusion: The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
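The objective measures quoted (PSNR gain, noise as an SD in HU, contrast-to-noise ratio) can be reproduced from image arrays with a few lines; the data-range and ROI conventions here are assumptions, since the abstract does not specify them:

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref, test = ref.astype(float), test.astype(float)
    if data_range is None:
        data_range = ref.max() - ref.min()   # assumed convention
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def noise_sd(uniform_roi):
    """Image noise as the SD of HU values in a uniform ROI."""
    return uniform_roi.astype(float).std(ddof=1)

def cnr(lesion_roi, bkg_roi):
    """Contrast-to-noise ratio of a lesion against local background."""
    return abs(lesion_roi.mean() - bkg_roi.mean()) / noise_sd(bkg_roi)
```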

Factor Analysis for Exploratory Research in the Distribution Science Field

  • 임명성
    • 유통과학연구 / Vol. 13, No. 9 / pp.103-112 / 2015
  • Purpose - This paper aims to provide a step-by-step approach to factor analytic procedures, such as principal component analysis (PCA) and exploratory factor analysis (EFA), and to offer a guideline for factor analysis. Some authors have argued that the results of PCA and EFA are substantially similar, and assert that PCA is the more appropriate technique because it produces easily interpreted results that are likely to be the basis of better decisions. For these reasons, many researchers have used PCA instead of EFA. However, the two techniques are clearly different: PCA should be used for data reduction, whereas EFA is tailored to identify an underlying factor structure, the set of latent factors that cause the measured variables to covary. Thus, a guideline and procedures for factor analysis are needed. To date, however, these two techniques have been indiscriminately misused. Research design, data, and methodology - This research conducted a literature review, summarized the meaningful and consistent arguments, and drew up guidelines and suggested procedures for rigorous EFA. Results - PCA can be used instead of common factor analysis when all measured variables have high communality; otherwise, common factor analysis is recommended for EFA. First, researchers should evaluate the sample size and check for sampling adequacy before conducting factor analysis; if these conditions are not satisfied, the subsequent steps cannot be followed. The sample size must be at least 100, with communality above 0.5, a subject-to-item ratio of at least 5:1, and a minimum of five items in the EFA. Next, Bartlett's sphericity test and the Kaiser-Meyer-Olkin (KMO) measure should be assessed for sampling adequacy: the chi-square value for Bartlett's test should be significant, and a KMO above 0.8 is recommended. The factor analysis itself is composed of three stages. The first stage determines an extraction technique; generally, maximum likelihood (ML) or principal axis factoring (PAF) gives the best results, and the choice between the two hinges on data normality: ML requires normally distributed data, whereas PAF does not. The second stage determines the number of factors to retain; the best practice is to combine three criteria: eigenvalues greater than 1.0, the scree plot test, and the variance extracted. The last stage selects one of two rotation methods, orthogonal or oblique: if the research suggests that the factors are correlated with each other, the oblique method should be selected because it allows correlated factors; if not, orthogonal rotation may be used. A code sketch of this procedure follows below. Conclusions - Recommendations are offered for best factor analytic practice in empirical research.
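The step-by-step procedure above maps directly onto, for example, the Python factor_analyzer package. A minimal sketch under that assumption: the input file, variable names, and cut-offs below follow the guideline, but the data frame itself is a placeholder, and the exact factor_analyzer API should be checked against its documentation.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                             calculate_kmo)

df = pd.read_csv("survey_items.csv")  # placeholder: one column per measured item

# Step 1: sampling adequacy -- Bartlett's test should be significant, KMO > 0.8
chi2, p = calculate_bartlett_sphericity(df)
_, kmo = calculate_kmo(df)
assert p < 0.05 and kmo > 0.8, "sampling adequacy not satisfied"

# Step 2: number of factors -- eigenvalues > 1.0 (also inspect the scree plot
# and the variance extracted before settling on n)
fa0 = FactorAnalyzer(rotation=None)
fa0.fit(df)
ev, _ = fa0.get_eigenvalues()
n = int((ev > 1.0).sum())

# Step 3: extraction and rotation -- ML assumes normally distributed data;
# oblimin is an oblique rotation, so the factors are allowed to correlate
fa = FactorAnalyzer(n_factors=n, method="ml", rotation="oblimin")
fa.fit(df)
print(fa.loadings_)              # pattern loadings
print(fa.get_factor_variance())  # variance, proportion, cumulative
```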