• Title/Summary/Keyword: fusion imaging

Initial experience of magnetic resonance imaging/ultrasonography fusion transperineal biopsy: Biopsy techniques and results for 75 patients

  • Tae, Jong Hyun;Shim, Ji Sung;Jin, Hyun Jung;Yoon, Sung Goo;No, Tae Il;Kim, Jae Yoon;Kang, Seok Ho;Cheon, Jun;Kang, Sung Gu
    • Investigative and Clinical Urology / v.59 no.6 / pp.363-370 / 2018
  • Purpose: The aim of this study is to describe the technique and report early results of transperineal magnetic resonance imaging-ultrasonography (MRI-US) fusion biopsy. Materials and Methods: A total of 75 patients underwent MRI-US fusion transperineal biopsy. Targeted biopsy via MRI-US fusion imaging was carried out for cancer-suspicious lesions, with additional systematic biopsy. Detection rates for overall and clinically significant prostate cancer (csPCa) were evaluated and compared between systematic and targeted biopsy. The detection rate was further investigated according to Prostate Imaging Reporting and Data System (PI-RADS) score, and the results of repeat biopsies were also evaluated. Results: The overall cancer detection rate was 61.3% (46 patients), and the detection rate for csPCa was 42.7% (32 patients). Overall detection rates for systematic and targeted biopsy were 41.3% and 57.3%, respectively (p<0.05); detection rates for csPCa were 26.7% and 41.3%, respectively (p<0.05). The cancer detection rates via MRI fusion targeted biopsy were 30.8% for PI-RADS 3, 62.1% for PI-RADS 4, and 89.4% for PI-RADS 5. The rates of csPCa missed by targeted biopsy and systematic biopsy were 0.0% and 25.0%, respectively. The cancer detection rate in repeat biopsies was 61.1% (11 of 18 patients), and 55.5% of the cancer-suspicious lesions in these patients were located in the anterior portion. Conclusions: Transperineal MRI-US fusion biopsy is useful for improving the overall cancer detection rate and especially the detection of csPCa. Transperineal MRI-US targeted biopsy shows potential benefits for improving the cancer detection rate in patients with a high PI-RADS score, tumors located in the anterior portion, and repeat biopsies.

Ghost-free High Dynamic Range Imaging Based on Brightness Bitmap and Hue-angle Constancy (밝기 비트맵과 색도 일관성을 이용한 무 잔상 High Dynamic Range 영상 생성)

  • Yuan, Xi;Ha, Ho-Gun;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.1 / pp.111-120 / 2015
  • HDR (high dynamic range) imaging is a technique for representing the dynamic range of the real world. Exposure fusion obtains a pseudo-HDR image by directly fusing multi-exposure images instead of generating a true HDR image. However, it produces ghost artifacts when the multi-exposure images contain moving objects. To address this drawback, a temporal consistency assessment is proposed to remove moving objects. First, a multi-level threshold bitmap and a brightness bitmap are proposed. In addition, a hue-angle constancy map between the multi-exposure images is proposed to compensate the bitmap. The two bitmaps are then combined into a temporal weight map, and a spatial-domain image quality assessment is used to generate a spatial weight map. Finally, the two weight maps are applied to each multi-exposure image, and the weighted images are combined to obtain the pseudo-HDR image. In experiments, the proposed method reduces ghost artifacts more effectively than previous methods, and its quantitative ghost-artifact score is also lower than that of the other methods.
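
The abstract above describes combining per-exposure weight maps to blend multi-exposure inputs. A minimal sketch of that generic exposure-fusion step follows, using a hypothetical well-exposedness weight in place of the paper's brightness-bitmap and hue-angle constancy maps (which are not reproduced here); all function names and parameters are illustrative.

```python
# Minimal exposure-fusion sketch (not the paper's exact method): per-pixel
# weight maps are computed for each exposure and the weighted images are
# blended. The paper additionally builds brightness/threshold bitmaps and a
# hue-angle constancy map to suppress ghosting; only the generic weight-map
# fusion step is illustrated here.
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Weight pixels whose values lie near mid-gray (0.5) in every channel."""
    w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    return np.prod(w, axis=-1)  # combine the per-channel weights

def fuse_exposures(images, eps=1e-12):
    """Blend a list of aligned float images in [0, 1] into one pseudo-HDR image."""
    weights = np.stack([well_exposedness(img) for img in images])  # (N, H, W)
    weights /= weights.sum(axis=0, keepdims=True) + eps            # normalize per pixel
    stack = np.stack(images)                                       # (N, H, W, 3)
    return np.sum(weights[..., None] * stack, axis=0)

# Usage with synthetic exposures of the same scene:
base = np.random.rand(64, 64, 3)
exposures = [np.clip(base * g, 0, 1) for g in (0.4, 1.0, 2.0)]
fused = fuse_exposures(exposures)
print(fused.shape)  # (64, 64, 3)
```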

Application and Development of Integration Technique to Generate Land-cover and Soil Moisture Map Using High Resolution Optical and SAR images

  • Kim Ji-Eun;Park Sang-Eun;Kim Duk-jin;Kim Jun-su;Moon Wooil M.
    • Proceedings of the KSRS Conference / 2005.10a / pp.497-500 / 2005
  • Further development of remote sensing techniques is needed to obtain more accurate and extensive information. To this end, an integration technique that combines high-resolution optical and SAR images with topographical information was examined to investigate the quantitative and qualitative characteristics of the Earth's surface environment. For this purpose, high-precision DEMs of Jeju Island were generated, and a data fusion algorithm was developed to integrate the multi-spectral optical and polarimetric SAR images. Finally, three-dimensional land-cover and two-dimensional soil moisture maps were generated to investigate the Earth's surface environment and extract geophysical parameters.

Single Image Enhancement Using Inter-channel Correlation

  • Kim, Jin;Jeong, Soowoong;Kim, Yong-Ho;Lee, Sangkeun
    • IEIE Transactions on Smart Processing and Computing / v.2 no.3 / pp.130-139 / 2013
  • This paper proposes a new approach for enhancing digital images based on red-channel information, which has characteristics most analogous to invisible infrared rays. Specifically, the red channel in RGB space is used to analyze the image content and improve the visual quality of the input images, but this can cause unexpected problems such as over-enhancement of reddish input images. To resolve this problem, inter-channel correlations between the color channels were derived, and weighting parameters for visually pleasant image fusion were estimated. Applying these parameters resulted in a significant brightness improvement as well as improvement in the dark and bright regions. Furthermore, simple contrast and color corrections were used to maintain the original contrast level and color tone. The main advantages of the proposed algorithm are that 1) it can improve a given image considerably with a simple inter-channel correlation, 2) it can obtain an effect similar to using an extra infrared image, and 3) it is faster than the other algorithms compared and does not introduce artifacts such as halo effects. The experimental results showed that the proposed approach produces more natural images than existing enhancement algorithms. Therefore, the proposed scheme can be a useful tool for improving image quality in consumer imaging devices such as compact cameras.
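
As a rough illustration of weighting channels by inter-channel correlation, the sketch below derives Pearson correlations between the red channel and the other channels and uses them as fusion weights for an enhancement guide image. This weighting scheme is a hypothetical stand-in, not the paper's actual formulation.

```python
# Hypothetical inter-channel-correlation weighting: correlations between the
# red channel and the other channels weight the fusion of R, G, B into a
# guide image, so reddish images do not rely solely on the red channel.
import numpy as np

def channel_correlation(a, b):
    """Pearson correlation between two image channels."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def correlation_weighted_guide(img):
    """Fuse R, G, B into one guide image weighted by R-G and R-B correlation."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    w_g = max(channel_correlation(r, g), 0.0)
    w_b = max(channel_correlation(r, b), 0.0)
    w_r = 1.0
    return (w_r * r + w_g * g + w_b * b) / (w_r + w_g + w_b)

img = np.random.rand(32, 32, 3)
guide = correlation_weighted_guide(img)
print(guide.shape, guide.min(), guide.max())
```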

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing / v.26 no.3 / pp.305-316 / 2010
  • The Moderate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250 m, 500 m, and 1 km spatial resolution. However, the 500 m and 1 km MODIS data show limitations when such low-resolution data are applied to small areas with complex land-cover types. In this study, we produce seven 250 m spectral bands by fusing the two 250 m MODIS bands with the five 500 m bands. To recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal component analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these fusion methods are compared using various evaluation indicators, including correlation, relative difference of mean, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as by visual interpretation. Among the fusion methods, the local mean and variance matching method provides the best result in both the visual interpretation and the evaluation indicators. The fused 250 m MODIS data may be used to effectively improve the accuracy of various MODIS land products.
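
Several of the evaluation indicators listed above have standard textbook definitions. The sketch below computes correlation, relative difference of mean, PSNR, and the universal image quality index between a fused band and a reference band; the paper's exact formulations (for example, a window-based UIQI) may differ.

```python
# Standard fusion-quality indicators between a reference band and a fused band.
import numpy as np

def correlation(ref, fused):
    return float(np.corrcoef(ref.ravel(), fused.ravel())[0, 1])

def relative_diff_of_mean(ref, fused):
    return float((fused.mean() - ref.mean()) / ref.mean())

def psnr(ref, fused, peak=1.0):
    mse = np.mean((ref - fused) ** 2)
    return float(10 * np.log10(peak ** 2 / mse))

def uiqi(ref, fused):
    """Universal image quality index (Wang & Bovik), over the whole band for simplicity."""
    x, y = ref.ravel(), fused.ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2)))

ref = np.random.rand(128, 128)
fused = ref + 0.05 * np.random.randn(128, 128)
print(correlation(ref, fused), relative_diff_of_mean(ref, fused), psnr(ref, fused), uiqi(ref, fused))
```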

Fusion Techniques Comparison of GeoEye-1 Imagery

  • Kim, Yong-Hyun;Kim, Yong-Il;Kim, Youn-Soo
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.517-529 / 2009
  • Many satellite image fusion techniques have been developed to produce a high-resolution multispectral (MS) image by combining a high-resolution panchromatic (PAN) image with a low-resolution MS image. To date, most high-resolution image fusion studies have used IKONOS and QuickBird images. Recently, GeoEye-1, offering the highest resolution of any commercial imaging system, was launched. In this study, we experimented with GeoEye-1 images to evaluate which fusion algorithms are suitable for them. This paper compares and evaluates the efficiency of five image fusion techniques for GeoEye-1 imagery: the à trous algorithm-based additive wavelet transform (AWT) fusion technique, the principal component analysis (PCA) fusion technique, Gram-Schmidt (GS) spectral sharpening, Pansharp, and the smoothing filter-based intensity modulation (SFIM) fusion technique. The results show that the AWT fusion technique preserves more of the spatial detail of the PAN image and the spectral information of the MS image than the other techniques. The Pansharp technique also preserves the information of the original PAN and MS images as well as the AWT technique does.
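
Of the five techniques compared, SFIM has a particularly compact formulation: each upsampled MS band is modulated by the ratio of the PAN image to a low-pass-filtered PAN image. A minimal sketch follows; the filter size, upsampling method, and array shapes are illustrative assumptions rather than the paper's settings.

```python
# Minimal SFIM pansharpening sketch: MS_high = MS_upsampled * PAN / lowpass(PAN).
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def sfim_pansharpen(ms_low, pan, scale=4, filter_size=5, eps=1e-6):
    """ms_low: (H, W, B) low-resolution MS bands; pan: (H*scale, W*scale) PAN band."""
    pan_smooth = uniform_filter(pan, size=filter_size)   # low-pass PAN
    ratio = pan / (pan_smooth + eps)                     # high-frequency detail
    bands = []
    for b in range(ms_low.shape[-1]):
        ms_up = zoom(ms_low[..., b], scale, order=1)     # bilinear upsample
        bands.append(ms_up * ratio)                      # inject PAN detail
    return np.stack(bands, axis=-1)

ms = np.random.rand(64, 64, 4)    # 4-band MS at low resolution
pan = np.random.rand(256, 256)    # PAN at 4x resolution
sharpened = sfim_pansharpen(ms, pan)
print(sharpened.shape)            # (256, 256, 4)
```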

Multi-Level Anterior Interbody Fusion with Internal Fixation in Cervical Spine (다분절 경추 유합 및 내고정 수술결과)

  • Jeon, Woo-Youl;Bae, Jang-Ho;Jung, Byoung-Woo;Kim, Seong-Ho;Kim, Oh-Lyong;Choi, Byung-Yon;Cho, Soo-Ho
    • Journal of Korean Neurosurgical Society / v.30 no.sup1 / pp.55-60 / 2001
  • Objective: The purpose of the present study was to examine the neurologic changes, fusion rate, and degree of kyphosis in patients who underwent multi-level anterior interbody fusion and internal fixation. Methods: Among 63 patients who received multi-level anterior interbody fusion and internal fixation over the 5 years from 1995 to 1999 at our neurosurgery department, a retrospective study was performed on the 52 cases that could be followed up with dynamic view imaging, and the results were compared and analyzed. The analysis was based on history taking, physical findings, and radiologic findings, and the Odom criteria were used to classify neurologic changes. Results: Among the 52 cases followed up for at least a year with dynamic view imaging, bone fusion was seen in 93% of the trauma cases and 95% of the non-trauma cases, for an overall fusion rate of 94%. Bone fusion was seen in 93% of the autograft cases, 95% of the allograft cases, and 94% of the mesh cage cases. Radiologic changes were assessed by comparing postoperative lateral views; kyphosis was seen in 53% of the autograft cases, 70% of the allograft cases, and 35% of the mesh cage cases; in 45% of the non-trauma cases and 60% of the trauma cases; and in 55% of the 2-level fusion cases and 46% of the 3-level fusion cases. Neurologic changes classified according to the Odom criteria were excellent in 48% of all cases, good in 23%, fair in 4%, and poor in 25%. Conclusion: Even in multi-level cases, a high fusion rate could be obtained by anterior interbody fusion with internal fixation; kyphosis was related more to the presence or absence of posterior compartment injury than to the number of fused levels; and trauma cases showed little difference in fusion rate compared with non-trauma cases but had a higher likelihood of kyphosis.

A Case Study of Land-cover Classification Based on Multi-resolution Data Fusion of MODIS and Landsat Satellite Images (MODIS 및 Landsat 위성영상의 다중 해상도 자료 융합 기반 토지 피복 분류의 사례 연구)

  • Kim, Yeseul
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1035-1046 / 2022
  • This study evaluated the applicability of multi-resolution data fusion for land-cover classification. A spatial time-series geostatistical deconvolution/fusion model (STGDFM) was applied as the multi-resolution data fusion model. The study area comprised agricultural lands in Iowa, United States. As input data for the fusion, Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat satellite images were used, considering the landscape of the study area. Synthetic Landsat images were then generated for dates with missing Landsat acquisitions by applying STGDFM, and land-cover classification was performed using both the acquired Landsat images and the STGDFM fusion results as input data. To evaluate the applicability of multi-resolution data fusion, two classification results, one using only the Landsat images and one using both the Landsat images and the fusion results, were compared. In the classification using only Landsat images, mixed patterns were prominent in the corn and soybean cultivation areas, the main land-cover types in the study area, and large mixed patterns also appeared between vegetation land-cover types such as hay/grain areas and grass areas. In contrast, in the classification using both the Landsat images and the fusion results, these mixed patterns between vegetation types, as well as between corn and soybean, were greatly alleviated. As a result, classification accuracy improved by about 20 percentage points when both the Landsat images and the fusion results were used. This is attributed to the fusion results compensating for the missing Landsat images by reflecting the time-series spectral information of the MODIS images through STGDFM. This study confirmed that multi-resolution data fusion can be effectively applied to land-cover classification.
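
The classification step described above can be illustrated by stacking per-pixel spectral features from the acquired Landsat images and the synthetic (fused) images into one feature vector per pixel and training a supervised classifier. The sketch below assumes a random forest and synthetic array shapes; the STGDFM fusion model itself is not implemented here.

```python
# Sketch of classification with fused time-series features: acquired and
# synthetic (fused) image stacks are combined into per-pixel feature vectors
# and fed to a supervised classifier. Shapes and classifier are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stack_features(landsat_stack, synthetic_stack):
    """landsat_stack, synthetic_stack: (T, H, W, B) time series of band images."""
    combined = np.concatenate([landsat_stack, synthetic_stack], axis=0)  # (T1+T2, H, W, B)
    t, h, w, b = combined.shape
    return combined.transpose(1, 2, 0, 3).reshape(h * w, t * b)         # (pixels, features)

# Synthetic example: 3 acquired + 2 fused dates, 6 bands, 32x32 pixels.
landsat = np.random.rand(3, 32, 32, 6)
fused = np.random.rand(2, 32, 32, 6)
X = stack_features(landsat, fused)
y = np.random.randint(0, 4, size=X.shape[0])   # hypothetical training labels (4 classes)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
label_map = clf.predict(X).reshape(32, 32)     # per-pixel land-cover map
print(label_map.shape)
```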