• Title/Summary/Keyword: Medical Image Fusion

Motion Correction in PET/CT Images (PET/CT 영상 움직임 보정)

  • Woo, Sang-Keun;Cheon, Gi-Jeong
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.42 no.2
    • /
    • pp.172-180
    • /
    • 2008
  • PET/CT fused images, which combine anatomical and functional information, have improved medical diagnosis and interpretation. This fusion has resulted in more precise localization and characterization of sites of radiotracer uptake. However, patient motion during whole-body imaging has been recognized as a source of image quality degradation that reduces the quantitative accuracy of PET/CT studies. The respiratory motion problem is especially challenging in combined PET/CT imaging, where CT is used both to localize tumors and to correct for attenuation in the PET images. Accurate spatial registration of the PET and CT image sets is a prerequisite for accurate diagnosis and SUV measurement, and correcting the spatial mismatch caused by motion is particularly difficult for the requisite registration accuracy because of acquisition differences between the PET and CT images. This paper provides a brief summary of the materials and methods involved in multiple investigations of respiratory motion correction in PET/CT imaging, with the goal of improving image quality and quantitative accuracy.
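A common building block of the respiratory motion-correction methods such a review covers is phase-gated acquisition, i.e., binning data by respiratory phase before reconstruction. The sketch below is purely illustrative (not from the paper): it uses a naive peak detector on a respiratory trace and hypothetical sampling parameters.

```python
# Illustrative sketch (not from the paper): assign respiratory-trace samples to phase bins,
# the basic step behind gated PET acquisition used by many motion-correction approaches.
import numpy as np

def phase_bins(resp_signal, n_bins=8):
    """Assign each sample of a respiratory trace to one of n_bins phase bins per cycle."""
    # Simple local-maximum peak detection (end-inspiration); real systems use more robust methods.
    peaks = np.where((resp_signal[1:-1] > resp_signal[:-2]) &
                     (resp_signal[1:-1] > resp_signal[2:]))[0] + 1
    bins = np.full(len(resp_signal), -1)
    for start, end in zip(peaks[:-1], peaks[1:]):
        idx = np.arange(start, end)
        bins[idx] = (idx - start) * n_bins // (end - start)   # evenly split each cycle
    return bins

# Example with a synthetic breathing trace sampled at 10 Hz (hypothetical values)
t = np.arange(0, 60, 0.1)
trace = np.sin(2 * np.pi * t / 4.0)        # ~4 s breathing cycle
labels = phase_bins(trace)                 # -1 outside complete cycles, else 0..7
```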

A Study for Effects of Image Quality due to Scatter Ray produced by Increasing of Tube Voltage (관전압 증가에 기인한 산란선 발생의 화질 영향 연구)

  • Park, Ji-Koon;Jun, Je-Hoon;Yang, Sung-Woo;Kim, Kyo-Tae;Choi, Il-Hong;Kang, Sang-Sik
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.7
    • /
    • pp.663-669
    • /
    • 2017
  • In diagnostic medical imaging, reducing scattered radiation is essential for high image quality and low patient dose. In this study, the influence of scattered radiation on medical images was therefore analyzed as the tube voltage increases. An ANSI chest phantom was used to measure the scattering ratio, and the effect of scatter on image quality was investigated with RMS evaluation, RSD, and NPS analysis. The scattering ratio gradually increased with x-ray tube voltage, reaching 48.8% at 73 kV and 80.1% at 93 kV. RMS analysis showed that the RMS value rose as the tube voltage increased, indicating degraded image quality. The NPS value at a spatial frequency of 2.5 lp/mm was also about 20% higher at a tube voltage of 93 kV than at 73 kV. This study shows that scattered radiation has a significant effect on image quality as the x-ray tube voltage increases, and the results can serve as basic data for improving medical image quality.
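For readers unfamiliar with the image-quality metrics named above, the following is a minimal sketch of how an RMS noise value and a simple 1-D noise power spectrum (NPS) can be estimated from a uniform-exposure region. It is not the authors' code; the pixel pitch and the synthetic flat-field data are assumptions for illustration.

```python
# Illustrative sketch: RMS noise and a simple 1-D NPS estimate from a uniform-exposure ROI.
import numpy as np

def rms_noise(roi):
    """Root-mean-square deviation of pixel values from the ROI mean."""
    return float(np.sqrt(np.mean((roi - roi.mean()) ** 2)))

def nps_1d(roi, pixel_pitch_mm=0.1):
    """1-D NPS along rows, averaged over rows (NPS(f) ~ (dx/N) * <|DFT|^2>)."""
    detrended = roi - roi.mean(axis=1, keepdims=True)       # remove per-row offset
    spectra = np.abs(np.fft.rfft(detrended, axis=1)) ** 2
    nps = spectra.mean(axis=0) * pixel_pitch_mm / roi.shape[1]
    freqs = np.fft.rfftfreq(roi.shape[1], d=pixel_pitch_mm)  # spatial frequency in lp/mm
    return freqs, nps

# Synthetic noise standing in for a flat-field exposure (hypothetical values)
roi = np.random.normal(1000.0, 25.0, size=(256, 256))
print("RMS:", rms_noise(roi))
freqs, nps = nps_1d(roi)
```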

The Usefulness Assessment of Attenuation Correction and Location Information in SPECT/CT (SPECT/CT에서 감쇠 보정 및 위치 정보의 유용성 평가)

  • Choi, Jong-Sook;Jung, Woo-Young;Shin, Sang-Ki;Cho, Shee-Man
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.12 no.3
    • /
    • pp.214-221
    • /
    • 2008
  • Purpose: We performed a qualitative analysis of whether fused SPECT/CT can localize lesions anatomically better than SPECT alone, and examined the effect of CT attenuation correction on SPECT images to demonstrate the usefulness of SPECT/CT. Materials and Methods: 1. Evaluation of fusion images: This study comprised patients examined with 131I-MIBG, bone, 111In-octreotide, Meckel's diverticulum, or parathyroid MIBI studies on a Precedence 16 or Symbia T2 from January to August 2008. SPECT/CT images were compared with non-fused images and analyzed qualitatively. 2. Evaluation of attenuation correction: 38 patients who underwent 201Tl myocardial exams on a Symbia T2 were divided into five segments (anterior, inferior, lateral, septum, apex) using the Cedars-Sinai QPS program, and each segment's perfusion state was expressed as a percentage. The perfusion differences between CT attenuation-corrected (AC) and non-AC images were compared as mean ± standard deviation. Results: 1. Evaluation of fusion images: In high-energy 131I cases it was hard to identify the exact anatomical lesion because of differences in uptake between regions and surrounding tissue; after combining with CT, the anatomical lesion could be localized more exactly. SPECT/CT was also superior for Meckel's diverticulum and for finding lesions around the bowel or organs in 111In studies, and bone SPECT/CT images helped to distinguish disc spaces clearly and gave more reliable results. 2. Evaluation of attenuation correction: There was no statistically significant difference in the anterior and lateral walls (p>0.05), but there were significant differences in the inferior wall, apex, and septum (p<0.05). The inferior wall showed the largest difference among the five myocardial segments: perfusion was 68.58±7.55 on non-AC images versus 76.84±6.52 on CT-corrected images, a difference of 8.26±4.95 (p<0.01, t=10.29). Conclusion: With combined SPECT/CT, nuclear medicine physicians can identify not only the molecular image showing the functional activity of a lesion but also its anatomical location with greater accuracy, which helps to detect abnormalities and to separate normal from abnormal findings in complex body regions, so diagnosis and treatment planning can be carried out from a single examination. In addition, in the thorax, where attenuation occurs easily, CT provides accurate attenuation correction, so regional perfusion in myocardial SPECT can be trusted more. For these reasons, we believe the usefulness of fusion imaging is demonstrated.
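The segmental AC vs. non-AC comparison above is a paired comparison reported as mean ± SD with a t statistic. The sketch below shows that kind of paired analysis; the per-patient perfusion percentages are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of a paired comparison of segmental perfusion (%) with and without
# CT attenuation correction. Values are hypothetical, not the study's measurements.
import numpy as np
from scipy import stats

non_ac = np.array([65.0, 70.2, 61.8, 72.5, 68.9])   # non-AC inferior-wall perfusion (%)
ct_ac  = np.array([74.1, 78.0, 70.3, 80.2, 76.6])   # CT attenuation-corrected perfusion (%)

diff = ct_ac - non_ac
t_stat, p_value = stats.ttest_rel(ct_ac, non_ac)     # paired t-test
print(f"difference {diff.mean():.2f} ± {diff.std(ddof=1):.2f}, t={t_stat:.2f}, p={p_value:.4f}")
```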

MEDU-Net+: a novel improved U-Net based on multi-scale encoder-decoder for medical image segmentation

  • Zhenzhen Yang;Xue Sun;Yongpeng Yang;Xinyi Wu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.7
    • /
    • pp.1706-1725
    • /
    • 2024
  • The unique U-shaped structure of the U-Net network allows it to achieve good performance in image segmentation, and it is a lightweight network with a small number of parameters suited to small image segmentation datasets. However, when the medical image to be segmented contains a lot of detailed information, the segmentation results cannot fully meet practical requirements. To achieve higher medical image segmentation accuracy, this paper proposes a novel improved U-Net architecture called multi-scale encoder-decoder U-Net+ (MEDU-Net+). We incorporate GoogLeNet into the encoder of the proposed MEDU-Net+ to capture more information, and introduce multi-scale feature extraction to fuse semantic information of different scales in the encoder and decoder. We also introduce layer-by-layer skip connections that link the information of each layer, so that it is not necessary to encode down to the last layer and pass the information back. The proposed MEDU-Net+ divides the network of unknown depth into deconvolution layers at each level, replacing the direct connection between encoder and decoder in U-Net. In addition, a new combined loss function is proposed to extract more edge information by combining the advantages of the generalized Dice and focal loss functions. Finally, we validate the proposed MEDU-Net+ and other classic medical image segmentation networks on three medical image datasets. The experimental results show that the proposed MEDU-Net+ has clearly superior performance compared with other medical image segmentation networks.
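A combined generalized Dice + focal loss of the kind the abstract describes can be written compactly. The sketch below is a generic PyTorch formulation under assumed weighting (alpha) and focusing (gamma) parameters; the exact MEDU-Net+ formulation may differ.

```python
# Hedged sketch of a combined generalized Dice + focal segmentation loss (PyTorch).
import torch
import torch.nn.functional as F

def generalized_dice_loss(probs, target_onehot, eps=1e-6):
    # probs, target_onehot: (N, C, H, W); class weights = 1 / (class volume)^2
    dims = (0, 2, 3)
    w = 1.0 / (target_onehot.sum(dims) ** 2 + eps)
    intersect = (w * (probs * target_onehot).sum(dims)).sum()
    union = (w * (probs + target_onehot).sum(dims)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)

def focal_loss(logits, target, gamma=2.0):
    # target: (N, H, W) integer class labels
    ce = F.cross_entropy(logits, target, reduction="none")
    pt = torch.exp(-ce)                        # probability assigned to the true class
    return ((1.0 - pt) ** gamma * ce).mean()

def combined_loss(logits, target, num_classes, alpha=0.5):
    probs = F.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    return alpha * generalized_dice_loss(probs, onehot) + (1 - alpha) * focal_loss(logits, target)
```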

AR monitoring technology for medical convergence (증강현실 모니터링 기술의 의료융합)

  • Lee, Kyung Sook;Lim, Wonbong;Moon, Young Lae
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.2
    • /
    • pp.119-124
    • /
    • 2018
  • Augmented reality (AR) technology makes it possible to acquire various kinds of image information at the same time by combining virtual image information with the user's viewpoint. AR has been used to visualize patients' organs and tissues during surgery and diagnosis in the medical-convergence fields of image-guided surgery, surgical training, and image diagnosis, and provides highly effective surgical methods. In this paper, we study the technical features and application methods of each element technology for the medical fusion of AR. For efficient medical imaging, display, marker recognition, and image synthesis interface technologies are essential in AR for medical convergence. Such AR technology is considered a way to drastically improve current medical practice in image-guided surgery, surgical education, and imaging diagnosis.
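Of the element technologies named above, marker recognition is the one most easily illustrated in code. The sketch below is a generic fiducial-marker detection example using OpenCV's ArUco module (OpenCV ≥ 4.7 API assumed; the image path is a placeholder), not the specific pipeline studied in the paper.

```python
# Illustrative ArUco marker detection, the kind of marker recognition AR overlays rely on.
import cv2

frame = cv2.imread("frame.png")                      # placeholder camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(gray)       # marker corners anchor the virtual overlay

overlay = cv2.aruco.drawDetectedMarkers(frame.copy(), corners, ids)
```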

Study on Image Processing Techniques Applying Artificial Intelligence-based Gray Scale and RGB scale

  • Lee, Sang-Hyun;Kim, Hyun-Tae
    • International Journal of Advanced Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.252-259
    • /
    • 2022
  • Artificial intelligence is used in fusion with camera-based image processing techniques. Image processing technology processes objects in images received from a camera in real time and is used in various fields such as security monitoring and medical image analysis. If such image processing reduces recognition accuracy, the incorrect information provided to medical image analysis or security monitoring may cause serious problems. This paper therefore combines the YOLOv4-tiny model with image processing algorithms and uses the COCO dataset for training. For grayscale images, five image processing methods are applied: normalization, Gaussian distribution, the Otsu algorithm, equalization, and a gradient operation. For RGB images, three methods are applied: equalization, Gaussian blur, and gamma correction. Among the nine algorithms applied in this paper, the equalization and Gaussian blur model showed the highest object detection accuracy of 96%, the gamma correction (RGB) model showed the highest outdoor (daytime) object detection rate of 89%, and the image binarization model showed the highest outdoor (night) object detection rate of 89%.
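The preprocessing operations listed above are all standard OpenCV calls. The sketch below shows one plausible way to apply them; kernel sizes, gamma value, and the pipeline order are assumptions, since the paper's exact parameters are not given here.

```python
# Hedged sketch of the grayscale and RGB preprocessing steps named in the abstract (OpenCV).
import cv2
import numpy as np

def preprocess_gray(img_gray):
    norm = cv2.normalize(img_gray, None, 0, 255, cv2.NORM_MINMAX)                 # normalization
    blur = cv2.GaussianBlur(norm, (5, 5), 0)                                      # Gaussian smoothing
    _, otsu = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)    # Otsu binarization
    equal = cv2.equalizeHist(norm)                                                # histogram equalization
    grad = cv2.morphologyEx(norm, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))  # gradient operation
    return otsu, equal, grad

def preprocess_rgb(img_bgr, gamma=1.5):
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])                             # equalize luminance only
    equalized = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
    blurred = cv2.GaussianBlur(img_bgr, (5, 5), 0)                                # Gaussian blur
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype("uint8")
    gamma_corrected = cv2.LUT(img_bgr, table)                                     # gamma correction
    return equalized, blurred, gamma_corrected
```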

Quantitative Feasibility Evaluation of 11C-Methionine Positron Emission Tomography Images in Gamma Knife Radiosurgery : Phantom-Based Study and Clinical Application

  • Lim, Sa-Hoe;Jung, Tae-Young;Jung, Shin;Kim, In-Young;Moon, Kyung-Sub;Kwon, Seong-Young;Jang, Woo-Youl
    • Journal of Korean Neurosurgical Society
    • /
    • v.62 no.4
    • /
    • pp.476-486
    • /
    • 2019
  • Objective: The functional information of 11C-methionine positron emission tomography (MET-PET) images can be applied to Gamma Knife radiosurgery (GKR), and its image quality may affect tumor delineation. This study performed a phantom-based evaluation of the geometric accuracy and functional characteristics of diagnostic MET-PET images co-registered with stereotactic images in Leksell GammaPlan (LGP), and also investigated the clinical application of these images in metastatic brain tumors. Methods: Two types of cylindrical acrylic phantoms fabricated in-house were used: a phantom with an array-shaped axial rod insert and a phantom with tube indicators of different sizes. The phantoms were mounted on the stereotactic frame and scanned with computed tomography (CT), magnetic resonance imaging (MRI), and PET. Three-dimensional coordinate values on the co-registered MET-PET images were compared with those on the stereotactic CT image in LGP, and the MET uptake values of the different-sized indicators inside the phantom were evaluated. We also compared CT- and MRI-co-registered stereotactic MET-PET images with the MR-enhancing volume and the PET metabolic tumor volume (MTV) in 14 metastatic brain tumors. Results: The image distortion of MET-PET remained stable at less than approximately 3% on average, and there was no statistical difference in geometric accuracy according to the co-registered reference stereotactic images. In the functional characterization of the MET-PET image, indicators on the lateral side of the phantom exhibited higher uptake than those on the medial side, and this effect decreased as the size of the object increased. In the 14 metastatic tumors, the median matching percentage between the MR-enhancing volume and the PET-MTV was 36.8% on PET/MR fusion images and 39.9% on PET/CT fusion images. Conclusion: The geometric accuracy of diagnostic MET-PET co-registered with stereotactic MR in LGP is acceptable in the phantom-based study. However, the MET-PET images may have limitations in providing exact stereotactic information in the clinical setting.
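The volume comparison reported above reduces to overlap statistics between two binary 3-D masks. The sketch below illustrates that computation; the "matching percentage" definition (intersection relative to the MR-enhancing volume) and the MTV thresholding scheme are assumptions for illustration, not the paper's exact definitions.

```python
# Illustrative sketch: overlap between an MR-enhancing volume and a PET metabolic tumor
# volume (MTV) represented as binary 3-D masks.
import numpy as np

def matching_percentage(mr_mask, pet_mask):
    """Assumed definition: intersection expressed as a percentage of the MR-enhancing volume."""
    mr_mask = mr_mask.astype(bool)
    pet_mask = pet_mask.astype(bool)
    overlap = np.logical_and(mr_mask, pet_mask).sum()
    return 100.0 * overlap / mr_mask.sum()

def metabolic_tumor_volume(pet_suv, threshold_ratio=0.5):
    """MTV mask from a fixed fraction of the maximum uptake (assumed thresholding scheme)."""
    return pet_suv >= threshold_ratio * pet_suv.max()
```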

Comparison and analysis of chest X-ray-based deep learning loss function performance (흉부 X-ray 기반 딥 러닝 손실함수 성능 비교·분석)

  • Seo, Jin-Beom;Cho, Young-Bok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.8
    • /
    • pp.1046-1052
    • /
    • 2021
  • Artificial intelligence is being applied in various industrial fields thanks to the development of the fourth industrial revolution and the construction of high-performance computing environments. In the medical field, deep learning has been applied to tasks such as cancer detection, COVID-19 diagnosis, and bone age measurement using medical images such as X-ray, MRI, and PET together with clinical data. ICT-medical fusion technology is also being researched by applying smart medical devices, IoT devices, and deep learning algorithms. Among these techniques, medical-image-based deep learning requires accurate identification of image biomarkers, a minimal loss rate, and high accuracy. Therefore, this paper compares and analyzes the performance of the cross-entropy function, the loss function used in image classification algorithms to derive the loss rate, in chest X-ray-based deep learning.
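For reference, the cross-entropy loss being compared above is shown below both written out explicitly and via the built-in PyTorch call. The batch size and two-class setup (e.g., normal vs. abnormal chest X-ray) are assumptions for illustration.

```python
# Minimal sketch: categorical cross-entropy for image classification, explicit vs. built-in.
import torch
import torch.nn.functional as F

logits = torch.randn(8, 2)                  # assumed: 8 images, 2 classes
labels = torch.randint(0, 2, (8,))

# Explicit definition: CE = -log(softmax probability of the true class)
log_probs = F.log_softmax(logits, dim=1)
ce_manual = -log_probs[torch.arange(len(labels)), labels].mean()

ce_builtin = F.cross_entropy(logits, labels)
print(ce_manual.item(), ce_builtin.item())  # identical up to floating-point error
```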

Automatic Image Matching of Portal and Simulator Images Using Fourier Descriptors (후리에 표시자를 이용한 포탈영상과 시뮬레이터 영상의 자동결합)

  • 허수진
    • Journal of Biomedical Engineering Research
    • /
    • v.18 no.1
    • /
    • pp.9-16
    • /
    • 1997
  • We developed an automatic image-matching technique for combining portal and simulator images to improve the localization of treatment in radiation therapy. Fusion of images from the two modalities proceeds as follows. Images are acquired through a frame-grabber; the simulator and portal images are edge-detected, enhanced with interpolated adaptive histogram equalization, and then combined using geometric parameters relating the coordinates of the two image data sets, which are calculated using Fourier descriptors. No imaging markers are used, for the patient's convenience. Clinical use of this image-matching technique for treatment planning will improve the localization of treatment volumes and critical structures, allowing greater sparing of normal tissues and more precise delivery of energy to the desired irradiation volume.
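Fourier descriptors represent a closed contour (here, the field edges extracted from each image) by the Fourier coefficients of its boundary. The sketch below shows a generic, simplified computation with assumed normalizations; contour extraction and the actual transform estimation used in the paper are only indicated in comments.

```python
# Hedged sketch of Fourier descriptors for a closed image contour.
import numpy as np

def fourier_descriptors(contour_xy, n_keep=16):
    """contour_xy: (N, 2) ordered boundary points of a closed contour."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]    # boundary as a complex signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                 # translation invariance: drop DC term
    coeffs = coeffs / np.abs(coeffs[1])             # scale invariance: normalize by first harmonic
    mags = np.abs(coeffs)                           # magnitudes: rotation/start-point invariant
    return np.concatenate([mags[1:n_keep + 1], mags[-n_keep:]])

# Matching idea: extract the field-edge contour from the portal and simulator images,
# compare low-order descriptors, and recover scale/rotation/translation from the
# corresponding coefficients to relate the two coordinate systems.
```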
