• Title/Summary/Keyword: high accuracy reconstruction


Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk;Lee, Hyuk;Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology / v.23 no.2 / pp.29-37 / 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to allow application of PSF (Point Spread Function), TOF (Time of Flight) and the Q.Clear algorithm. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (Ordered Subset Expectation Maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and to optimize the algorithm combination for accurate SUV (Standardized Uptake Value) measurement and improved lesion detectability. Materials and Methods: A PET phantom was filled with ¹⁸F-FDG at hot-to-background radioactivity concentration ratios of 2:1, 4:1 and 8:1. Scans were performed using the NEMA protocols. Scan data were reconstructed using combinations of (1) VPFX (VUE Point FX (TOF)), (2) VPHD-S (VUE Point HD + PSF), (3) VPFX-S (TOF + PSF), (4) QCHD-S-400 (VUE Point HD + Q.Clear (β-strength 400) + PSF), (5) QCFX-S-400 (TOF + Q.Clear (β-strength 400) + PSF), (6) QCHD-S-50 (VUE Point HD + Q.Clear (β-strength 50) + PSF) and (7) QCFX-S-50 (TOF + Q.Clear (β-strength 50) + PSF). CR (Contrast Recovery) and BV (Background Variability) were compared. SNR (Signal-to-Noise Ratio) and RC (Recovery Coefficient) of counts and SUV were also compared. Results: VPFX-S showed the highest CR value for the 10 and 13 mm spheres, and QCFX-S-50 showed the highest value for spheres greater than 17 mm. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. The measured SUVs were proportional to the H/B ratio. RC for SUV was inversely proportional to the H/B ratio, and QCFX-S-50 showed the highest value; the Q.Clear reconstructions using a β-strength of 400 showed lower values. Conclusion: When a higher β-strength was applied, Q.Clear showed better image quality by reducing noise. Conversely, when a lower β-strength was applied, Q.Clear showed increased sharpness and reduced PVE (Partial Volume Effect), so SUV can be measured with a higher RC than under conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability; for this reason, the algorithm parameters should be optimized according to the purpose.
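
Below is a minimal sketch, not taken from the paper, of how NEMA NU 2-style contrast recovery and background variability are commonly computed when comparing such reconstructions; the formulas are the widely used definitions, and all variable names are assumptions.

```python
# Illustrative sketch: NEMA-style image-quality metrics used to compare
# reconstruction algorithms. Inputs are ROI statistics measured on the
# reconstructed phantom images; names and formulas are generic assumptions.
import statistics

def contrast_recovery_hot(c_hot, c_bkg, activity_ratio):
    """Percent contrast recovery of a hot sphere.
    c_hot: mean counts in the sphere ROI, c_bkg: mean background ROI counts,
    activity_ratio: hot-to-background activity ratio (e.g. 2, 4 or 8)."""
    return 100.0 * (c_hot / c_bkg - 1.0) / (activity_ratio - 1.0)

def background_variability(bkg_roi_means):
    """Percent background variability: SD across the background ROI means
    divided by their average."""
    return 100.0 * statistics.stdev(bkg_roi_means) / statistics.mean(bkg_roi_means)

def signal_to_noise(c_hot, c_bkg, bkg_sd):
    """A simple SNR definition: sphere contrast over background noise."""
    return (c_hot - c_bkg) / bkg_sd
```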

Feasibility of Single-Shot Whole Thoracic Time-Resolved MR Angiography to Evaluate Patients with Multiple Pulmonary Arteriovenous Malformations

  • Jihoon Hong;Sang Yub Lee;Jae-Kwang Lim;Jongmin Lee;Jongmin Park;Jung Guen Cha;Hui Joong Lee;Donghyeon Kim
    • Korean Journal of Radiology / v.23 no.8 / pp.794-802 / 2022
  • Objective: To evaluate the feasibility of single-shot whole thoracic time-resolved MR angiography (TR-MRA) to identify the feeding arteries of pulmonary arteriovenous malformations (PAVMs) and reperfusion of the lesion after embolization in patients with multiple PAVMs. Materials and Methods: Nine patients (8 females and 1 male; age range, 23-65 years) with a total of 62 PAVMs who underwent percutaneous embolization for multiple PAVMs and were subsequently followed up using TR-MRA and CT obtained within 6 months of each other were retrospectively reviewed. All imaging analyses were performed by two independent readers blinded to clinical information. The visibility of the feeding arteries on maximum intensity projection (MIP) reconstruction and multiplanar reconstruction (MPR) TR-MRA images was evaluated by comparing them to CT as a reference. The accuracy of TR-MRA for diagnosing reperfusion of the PAVM after embolization was assessed in a subgroup with angiographic confirmation. The reliability between the readers in interpreting the TR-MRA results was analyzed using kappa (κ) statistics. Results: Feeding arteries were visible on the original MIP TR-MRA images in 82.3% (51/62) and 85.5% (53/62) of PAVMs for readers 1 and 2, respectively. Using the MPR images, the rates increased to 93.5% (58/62) and 95.2% (59/62), respectively (κ = 0.760 and 0.792, respectively). Factors for invisibility were the course of feeding arteries in the anteroposterior plane, proximity to large enhancing vessels, adjacency to the chest wall, pulsation of the heart, and small feeding arteries. Thirty-seven PAVMs in five patients had angiographic confirmation of reperfusion status after embolization (32 occlusions and 5 reperfusions). TR-MRA showed 100% (5/5) sensitivity and 100% (32/32, including three cases in which the feeding arteries were not visible on TR-MRA) specificity for both readers. Conclusion: Single-shot whole thoracic TR-MRA with MPR showed good visibility of the feeding arteries of PAVMs and high accuracy in diagnosing reperfusion after embolization. Single-shot whole thoracic TR-MRA may be a feasible method for the follow-up of patients with multiple PAVMs.
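
As a rough illustration of the reader-agreement and diagnostic-accuracy statistics used in studies like this one, the sketch below computes Cohen's kappa between two readers and sensitivity/specificity against an angiographic reference. It is a generic implementation, not the study's analysis code, and all names are assumptions.

```python
# Illustrative sketch: inter-reader agreement (Cohen's kappa) and diagnostic
# accuracy (sensitivity, specificity) for binary ratings. Inputs are paired
# lists of 0/1 values; all variable names are assumptions.

def cohen_kappa(r1, r2):
    """Cohen's kappa for two readers' binary ratings."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    p1_pos, p2_pos = sum(r1) / n, sum(r2) / n
    pe = p1_pos * p2_pos + (1 - p1_pos) * (1 - p2_pos)      # chance agreement
    return (po - pe) / (1 - pe)

def sensitivity_specificity(pred, truth):
    """Sensitivity and specificity of binary predictions against a reference."""
    tp = sum(p and t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp / (tp + fn), tn / (tn + fp)
```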

Privacy-Preserving Clustering on Time-Series Data Using Fourier Magnitudes (시계열 데이타 클러스터링에서 푸리에 진폭 기반의 프라이버시 보호)

  • Kim, Hea-Suk;Moon, Yang-Sae
    • Journal of KIISE:Databases / v.35 no.6 / pp.481-494 / 2008
  • In this paper we propose Fourier-magnitude-based privacy-preserving clustering of time-series data. The previous privacy-preserving method, called the DFT coefficient method, has a critical problem in privacy preservation itself, since the original time-series data may be reconstructed from the privacy-preserved data. In contrast, the proposed DFT magnitude method has the excellent characteristic that reconstructing the original data is almost impossible, since it uses only DFT magnitudes and discards DFT phases. In this paper, we first explain why reconstruction is easy in the DFT coefficient method and why it is difficult in the DFT magnitude method. We then propose a notion of distance-order preservation, which can be used both in estimating clustering accuracy and in selecting DFT magnitudes. The degree of distance-order preservation measures how many time-series preserve their relative distance orders before and after privacy preservation. Using this degree of distance-order preservation, we present greedy strategies for selecting magnitudes in the DFT magnitude method. That is, the greedy strategies select DFT magnitudes so as to maximize the degree of distance-order preservation, and we can thereby achieve relatively high clustering accuracy in the DFT magnitude method. Finally, we empirically show that the degree of distance-order preservation is an excellent measure that reflects clustering accuracy well. In addition, experimental results show that our greedy strategies for the DFT magnitude method are comparable with the DFT coefficient method in clustering accuracy. These results indicate that, compared with the DFT coefficient method, our DFT magnitude method provides an excellent degree of privacy preservation as well as comparable clustering accuracy.
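
The following is a minimal sketch of the core idea, under the stated assumptions rather than the paper's actual implementation: publish only DFT magnitudes, and greedily select the magnitude indices that maximize the degree of distance-order preservation. The brute-force scoring is intended only for small toy inputs.

```python
# Illustrative sketch (assumptions, not the paper's code): privacy-preserving
# representation via DFT magnitudes, with a greedy choice of magnitude indices
# that best preserves the relative order of pairwise distances.
import itertools
import numpy as np

def dft_magnitudes(series):                       # series: (n_series, length)
    return np.abs(np.fft.rfft(series, axis=1))

def order_preservation(orig_dist, new_dist):
    """Fraction of triples (i; j, k) whose nearer/farther relation between
    distances d(i,j) and d(i,k) is unchanged after the transformation."""
    preserved = total = 0
    n = orig_dist.shape[0]
    for i, j, k in itertools.permutations(range(n), 3):
        if j < k:
            total += 1
            if (orig_dist[i, j] < orig_dist[i, k]) == (new_dist[i, j] < new_dist[i, k]):
                preserved += 1
    return preserved / total

def greedy_select_magnitudes(series, k):
    """Greedily pick k DFT magnitude indices maximizing order preservation."""
    mags = dft_magnitudes(series)
    orig = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=2)
    chosen = []
    for _ in range(k):
        best, best_score = None, -1.0
        for idx in range(mags.shape[1]):
            if idx in chosen:
                continue
            cand = chosen + [idx]
            new = np.linalg.norm(mags[:, None, cand] - mags[None, :, cand], axis=2)
            score = order_preservation(orig, new)
            if score > best_score:
                best, best_score = idx, score
        chosen.append(best)
    return chosen

# Example on toy data: pick the 3 most order-preserving magnitudes of 6 series.
toy = np.random.default_rng(0).normal(size=(6, 32))
print(greedy_select_magnitudes(toy, k=3))
```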

Assessment of Coronary Stenosis Using Coronary CT Angiography in Patients with High Calcium Scores: Current Limitations and Future Perspectives (높은 칼슘 점수를 가진 환자에서 관상동맥 CT 조영술을 이용한 협착 평가의 한계와 전망)

  • Doo Kyoung Kang
    • Journal of the Korean Society of Radiology / v.85 no.2 / pp.270-296 / 2024
  • Coronary CT angiography (CCTA) is recognized for its role as a gatekeeper for invasive coronary angiography in patients with suspected coronary artery disease because it can detect significant coronary stenosis with high accuracy. However, heavy plaque in the coronary artery makes it difficult to visualize the lumen, which can lead to errors in the interpretation of CCTA results. This is primarily due to the limited spatial resolution of CT scanners, which results in blooming artifacts caused by calcium. Nevertheless, coronary stenosis in patients with high calcium scores often requires evaluation using CCTA. Technological methods to overcome these limitations include the introduction of high-resolution CT scanners, the development of reconstruction techniques, and the subtraction technique. Methods to improve reading performance, such as setting an appropriate window width and level, and evaluating the position of the calcified plaque and the residual visibility of the lumen on cross-sectional images, are also recommended.

Adaptive group of ink drop spread: a computer code to unfold neutron noise sources in reactor cores

  • Hosseini, Seyed Abolfazl;Afrakoti, Iman Esmaili Paeen
    • Nuclear Engineering and Technology / v.49 no.7 / pp.1369-1378 / 2017
  • The present paper reports the development of a computational code based on the Adaptive Group of Ink Drop Spread (AGIDS) for the reconstruction of neutron noise sources in reactor cores. The AGIDS algorithm was developed as a fuzzy inference system based on the active learning method. The main idea of the active learning method is to break a multiple input-single output system into single input-single output systems, which makes it possible to simulate a large system with high accuracy. In the present study, a vibrating-absorber-type neutron noise source in the International Atomic Energy Agency two-dimensional reactor core was considered in the neutron noise calculation. The neutron noise distribution at the detectors was calculated using the Galerkin finite element method, with a linear approximation of the shape function in each triangular element. Both the real and imaginary parts of the calculated neutron noise distribution at the detectors were used as input data for the developed AGIDS-based computational code. The output of the computational code is the strength, frequency, and position (X and Y coordinates) of the neutron noise sources. The calculated fraction of variance unexplained errors for the output parameters, including the strength, frequency, and X and Y coordinates of the considered neutron noise sources, were 0.002682 #/cm³·s, 0.002682 Hz, and 0.004254 cm and 0.006140 cm, respectively.
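
As a small aside on the error measure quoted above, the sketch below computes the fraction of variance unexplained (FVU) in its usual dimensionless form (residual sum of squares over total sum of squares). The unit-bearing values in the abstract suggest the paper may report a scaled variant, so treat this purely as an assumed definition.

```python
# Illustrative sketch: fraction of variance unexplained (FVU) between predicted
# and reference values of a noise-source parameter. This is the standard
# dimensionless definition (equal to 1 - R^2); all names are assumptions.
import numpy as np

def fraction_of_variance_unexplained(predicted, reference):
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    ss_res = np.sum((reference - predicted) ** 2)          # residual sum of squares
    ss_tot = np.sum((reference - reference.mean()) ** 2)   # total sum of squares
    return ss_res / ss_tot
```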

iHaplor: A Hybrid Method for Haplotype Reconstruction

  • Jung, Ho-Youl;Heo, Jee-Yeon;Cho, Hye-Yeung;Ryu, Gil-Mi;Lee, Ju-Young;Koh, In-Song;Kimm, Ku-Chan;Oh, Berm-Seok
    • Proceedings of the Korean Society for Bioinformatics Conference / 2003.10a / pp.221-228 / 2003
  • This paper presents a novel method that can identify an individual's haplotypes from given genotypes. Because of the limitations of conventional single-locus analysis, haplotypes have gained increasing attention in the mapping of complex-disease genes. Conventionally, there are two approaches for resolving an individual's haplotypes. One is molecular haplotyping, which has many potential limitations in cost and convenience. The other is in-silico haplotyping, which phases haplotypes from diploid genotyped populations and is a cost-effective, high-throughput method. In-silico haplotyping is divided into two sub-categories, statistical and computational methods. The former computes the frequencies of the common haplotypes and then resolves the individual's haplotypes. The latter directly resolves the individual's haplotypes using the perfect phylogeny model first proposed by Dan Gusfield [7]. Our method combines the two approaches in order to improve accuracy and running time. The individuals' haplotypes are resolved by considering the MLE (Maximum Likelihood Estimation) in the process of computing the frequencies of the common haplotypes.
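
A minimal sketch of the statistical half of such a pipeline follows: the classic EM estimate of haplotype frequencies from unphased biallelic genotypes, after which each individual can be phased with its most likely compatible pair. This is the textbook algorithm, not the iHaplor code, and the genotype encoding and all names are assumptions.

```python
# Illustrative sketch: EM estimation of haplotype frequencies from unphased
# genotypes. Genotypes are tuples of 0/1/2 (alt-allele counts per SNP);
# haplotypes are tuples of 0/1. Names and encoding are assumptions.
from itertools import product
from collections import defaultdict

def compatible_pairs(genotype):
    """All unordered haplotype pairs consistent with a genotype."""
    pairs = []
    for h1 in product((0, 1), repeat=len(genotype)):
        h2 = tuple(g - a for g, a in zip(genotype, h1))
        if all(b in (0, 1) for b in h2) and h1 <= h2:
            pairs.append((h1, h2))
    return pairs

def em_haplotype_frequencies(genotypes, n_iter=50):
    # Initialise frequencies from the haplotypes compatible with the data.
    freqs = defaultdict(float)
    for g in genotypes:
        for h1, h2 in compatible_pairs(g):
            freqs[h1] += 1.0
            freqs[h2] += 1.0
    total = sum(freqs.values())
    freqs = {h: c / total for h, c in freqs.items()}

    for _ in range(n_iter):
        counts = defaultdict(float)
        for g in genotypes:
            pairs = compatible_pairs(g)
            # E-step: posterior weight of each compatible pair.
            weights = [freqs.get(h1, 0.0) * freqs.get(h2, 0.0) * (1 if h1 == h2 else 2)
                       for h1, h2 in pairs]
            z = sum(weights) or 1.0
            for (h1, h2), w in zip(pairs, weights):
                counts[h1] += w / z
                counts[h2] += w / z
        # M-step: renormalise expected haplotype counts over all chromosomes.
        n_chrom = 2 * len(genotypes)
        freqs = {h: c / n_chrom for h, c in counts.items()}
    return freqs
```

Phasing each individual then amounts to choosing, among its compatible pairs, the one with the largest product of estimated frequencies.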


Application of compressive sensing and variance considered machine to condition monitoring

  • Lee, Myung Jun;Jun, Jun Young;Park, Gyuhae;Kang, To;Han, Soon Woo
    • Smart Structures and Systems / v.22 no.2 / pp.231-237 / 2018
  • A significant data problem is encountered with condition monitoring because the sensors need to measure vibration data at a continuous and sometimes high sampling rate. In this study, compressive sensing approaches for condition monitoring are proposed to demonstrate their efficiency in handling a large amount of data and to improve the damage detection capability of the current condition monitoring process. Compressive sensing is a novel sensing/sampling paradigm that takes much fewer data than traditional data sampling methods. This sensing paradigm is applied to condition monitoring with an improved machine learning algorithm in this study. For the experiments, a built-in rotating system was used, and all data were compressively sampled to obtain compressed data. The optimal signal features were then selected without the signal reconstruction process. For damage classification, we used the Variance Considered Machine, utilizing only the compressed data. The experimental results show that the proposed compressive sensing method could effectively improve the data processing speed and the accuracy of condition monitoring of rotating systems.
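
To make the sampling idea concrete, here is a minimal sketch, under generic assumptions rather than the paper's setup, of compressive sampling with a random Gaussian measurement matrix; classification features would then be drawn from the compressed vector directly, skipping reconstruction.

```python
# Illustrative sketch: compressive sampling of a vibration record with a
# random Gaussian measurement matrix, y = Phi @ x. All names, sizes and the
# choice of matrix are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

def compress(signal, m):
    """Take m << len(signal) random linear measurements of the signal."""
    n = len(signal)
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))   # measurement matrix
    return phi @ signal, phi

# Example: compress a 4096-sample vibration record to 256 measurements.
x = rng.normal(size=4096)
y, phi = compress(x, 256)
print(y.shape)   # (256,)
```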

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_2 / pp.643-651 / 2012
  • Object recognition belongs to high-level processing, which is one of the difficult and challenging tasks in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry, intelligent and autonomous processing of surface reconstruction, has not yet been achieved. Object recognition requires a robust shape description of objects. However, most shape descriptors are designed for the 2D space of image data. Therefore, such descriptors have to be extended to deal with 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space, with a hierarchical approach, for segmenting point cloud data. The experiment demonstrates the effectiveness and robustness of the proposed method for shape description and point cloud data segmentation. Geometric characteristics of various roof types are well described, which will eventually be the basis for object modeling. Segmentation accuracy for the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries. The overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
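
For intuition only, the sketch below shows one straightforward way a 2D chain code can be extended to 3D grid data, encoding each step as one of the 26 neighbourhood directions; it is a generic illustration, not the authors' descriptor.

```python
# Illustrative sketch: a simple 3D chain code over the 26-neighbourhood.
# Each step between consecutive grid points is encoded as a direction index.
from itertools import product

# Enumerate the 26 unit displacements around a voxel (excluding (0,0,0)).
DIRECTIONS = [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
CODE = {d: i for i, d in enumerate(DIRECTIONS)}

def chain_code_3d(points):
    """Encode a sequence of neighbouring 3D grid points as direction codes."""
    codes = []
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        step = (x1 - x0, y1 - y0, z1 - z0)
        codes.append(CODE[step])   # raises KeyError if points are not neighbours
    return codes

# Example: a short path climbing along a (hypothetical) roof edge.
print(chain_code_3d([(0, 0, 0), (1, 0, 1), (2, 0, 1), (2, 1, 2)]))
```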

Automatic Building Reconstruction with Satellite Images and Digital Maps

  • Lee, Dong-Cheon;Yom, Jae-Hong;Shin, Sung-Woong;Oh, Jae-Hong;Park, Ki-Surk
    • ETRI Journal / v.33 no.4 / pp.537-546 / 2011
  • This paper introduces an automated method for building height recovery through the integration of high-resolution satellite images and digital vector maps. A cross-correlation matching method along the vertical line locus on the Ikonos images was deployed to recover building heights. The rational function models composed of rational polynomial coefficients were utilized to create a stereopair of epipolar-resampled Ikonos images. Building footprints from the digital maps were used for locating the vertical guideline along the building edges. The digital terrain model (DTM) was generated from the contour layer in the digital maps. The terrain height derived from the DTM at the foot of each building was used as the starting location for image matching. At preset height increments along the vertical guidelines derived from the vertical line loci, an evaluation process based on cross-correlation matching of the images was carried out to test whether the top of the building had been reached, which is where the maximum correlation occurs. The accuracy of the reconstructed buildings was evaluated by comparison with manually digitized 3D building data derived from aerial photographs.
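
The height-search step can be sketched as follows, with the caveat that `project_patch` is a hypothetical helper standing in for the epipolar projection and patch extraction, and the whole block is an assumed simplification of vertical-line-locus matching rather than the paper's implementation.

```python
# Illustrative sketch: climb a building's vertical line locus in height
# increments and score each candidate height by the normalized cross-
# correlation (NCC) of the patches it projects to in the epipolar image pair.
# `project_patch` is an assumed, user-supplied helper and is not defined here.
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized image patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def estimate_building_height(footprint_pt, z_ground, z_max, dz,
                             left_img, right_img, project_patch):
    """Return the height at which the two projected patches correlate best."""
    best_z, best_score = z_ground, -1.0
    for z in np.arange(z_ground, z_max, dz):
        left = project_patch(left_img, footprint_pt, z)    # assumed helper
        right = project_patch(right_img, footprint_pt, z)  # assumed helper
        score = ncc(left, right)
        if score > best_score:
            best_z, best_score = z, score
    return best_z
```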

Reconstruction of gusty wind speed time series from autonomous data logger records

  • Amezcua, Javier;Munoz, Raul;Probst, Oliver
    • Wind and Structures / v.14 no.4 / pp.337-357 / 2011
  • The collection of wind speed time series by means of digital data loggers occurs in many domains, including civil engineering, environmental sciences and wind turbine technology. Since averaging intervals are often significantly larger than typical system time scales, the information lost has to be recovered in order to reconstruct the true dynamics of the system. In the present work, we introduce a simple algorithm capable of generating a real-time wind speed time series from data logger records containing the average, maximum, and minimum values of the wind speed in a fixed interval, as well as the standard deviation. The signal is generated from a generalized random Fourier series. The spectrum can be matched to any desired theoretical or measured frequency distribution. Extreme values are specified through a postprocessing step based on the concept of constrained simulation. Applications of the algorithm to 10-min wind speed records logged at a test site at 60 m height above the ground show that the recorded 10-min values can be reproduced by the simulated time series to a high degree of accuracy.
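
A minimal sketch of the spectral part of such a reconstruction is given below, assuming a generic target spectrum and placeholder interval statistics; the constrained matching of recorded extremes described in the abstract is not implemented here.

```python
# Illustrative sketch: fill one logging interval with a synthetic gust signal
# built from a random-phase Fourier series shaped by a target spectrum, then
# rescale it to the recorded mean and standard deviation. Spectrum shape and
# the numeric values below are placeholder assumptions.
import numpy as np

def synthesize_interval(mean, std, spectrum, n_samples, rng=None):
    """spectrum: desired one-sided amplitude shape of length n_samples//2 + 1."""
    rng = rng or np.random.default_rng()
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(spectrum))
    coeffs = spectrum * np.exp(1j * phases)
    coeffs[0] = 0.0                    # drop DC; the recorded mean is added back below
    signal = np.fft.irfft(coeffs, n=n_samples)
    signal = (signal - signal.mean()) / (signal.std() + 1e-12)
    return mean + std * signal

# Example: a 10-min interval at 1 Hz (600 samples) with a simple 1/f-like shape.
freqs = np.fft.rfftfreq(600, d=1.0)
shape = np.zeros_like(freqs)
shape[1:] = freqs[1:] ** -0.7          # assumed spectral shape, for illustration
u = synthesize_interval(mean=7.4, std=1.1, spectrum=shape, n_samples=600)
```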