• Title/Abstract/Keyword: SSIM index

Search results: 66

자기 지도 학습훈련 기반의 Noise2Void 네트워크를 이용한 PET 영상의 잡음 제거 평가: 팬텀 실험 (The Evaluation of Denoising PET Image Using Self Supervised Noise2Void Learning Training: A Phantom Study)

  • 윤석환;박찬록
    • 대한방사선기술학회지:방사선기술과학 / Vol. 44, No. 6 / pp. 655-661 / 2021
  • Positron emission tomography (PET) images are affected by acquisition time; short acquisition times result in low gamma counts, which degrade image quality through statistical noise. Noise2Void (N2V) is a self-supervised denoising model based on a convolutional neural network (CNN). The purpose of this study is to evaluate the denoising performance of N2V for PET images with a short acquisition time. A phantom was scanned in list mode for 10 min using a Biograph mCT40 PET/CT scanner (Siemens Healthcare, Erlangen, Germany). Using a NEMA image-quality phantom, we compared PET images for the standard acquisition time (10 min), a short acquisition time (2 min), and the simulated PET image (S2 min). To evaluate the performance of N2V, the peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE), structural similarity index (SSIM), and radioactivity recovery coefficient (RC) were used. Relative to the 10 min PET image, the PSNR, NRMSE, and SSIM for the 2 min and S2 min PET images were 30.983 and 33.936, 9.954 and 7.609, and 0.916 and 0.934, respectively. The RCs for the spheres in the S2 min PET image also met the European Association of Nuclear Medicine Research Ltd. (EARL) FDG PET accreditation program. We confirmed that the S2 min PET image generated by N2V deep learning showed improved results compared with the 2 min PET image, and on visual analysis the 10 min and S2 min PET images were also comparable. In conclusion, the quality of PET images that are noisy because of a short acquisition time can be improved with the N2V denoising network model without underestimating radioactivity.
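
The metrics used above (PSNR, NRMSE, SSIM) can be illustrated with a minimal sketch. The code below computes a simplified single-window SSIM from global statistics, not the sliding-window SSIM of the original formulation, plus the NRMSE; the tiny 2×2 images are hypothetical.

```python
import math

def _stats(img):
    # Flatten a 2D image (list of rows) and return values, mean, variance.
    vals = [v for row in img for v in row]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) * (v - mean) for v in vals) / n
    return vals, mean, var

def global_ssim(x, y, data_range=255.0):
    # Simplified SSIM over one global window, with the usual
    # stabilizing constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2.
    xv, mx, vx = _stats(x)
    yv, my, vy = _stats(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(xv, yv)) / len(xv)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

def nrmse(ref, est):
    # Root mean square error normalized by the reference dynamic range.
    rv = [v for row in ref for v in row]
    ev = [v for row in est for v in row]
    mse = sum((a - b) ** 2 for a, b in zip(rv, ev)) / len(rv)
    return math.sqrt(mse) / (max(rv) - min(rv))

reference = [[10, 20], [30, 200]]
noisy = [[12, 18], [33, 195]]
print(round(global_ssim(reference, reference), 4))  # identical images -> 1.0
print(global_ssim(reference, noisy) < 1.0)          # noise lowers SSIM
```

A proper SSIM averages this quantity over local windows, which is why libraries expose window-size and data-range parameters.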

Synthesis of T2-weighted images from proton density images using a generative adversarial network in a temporomandibular joint magnetic resonance imaging protocol

  • Chena, Lee;Eun-Gyu, Ha;Yoon Joo, Choi;Kug Jin, Jeon;Sang-Sun, Han
    • Imaging Science in Dentistry / Vol. 52, No. 4 / pp. 393-398 / 2022
  • Purpose: This study proposed a generative adversarial network (GAN) model for T2-weighted image (WI) synthesis from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol. Materials and Methods: TMJ MRI scans performed from January to November 2019 were reviewed, and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index map (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true T2-WI-based diagnosis was regarded as the gold standard, to which the pT2-based diagnoses were compared using Cohen's κ coefficient. Results: The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (κ=0.81) with the gold standard for disc position. The proportion of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (κ=0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion. Conclusion: pT2 images were useful for diagnosis in a TMJ MRI protocol, although their image quality was not fully satisfactory. Further research is expected to enhance pT2 quality.
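
Cohen's κ, used above for the agreement analysis, can be sketched in a few lines of pure Python; the two toy diagnosis lists below are hypothetical, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    # Cohen's kappa: chance-corrected agreement between two raters,
    # (p_observed - p_expected) / (1 - p_expected).
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical readings: gold-standard T2-based vs. pT2-based diagnosis.
gold = ["normal", "ADDwR", "ADDwoR", "normal", "ADDwR", "ADDwoR"]
pt2  = ["normal", "ADDwR", "ADDwoR", "ADDwR",  "ADDwR", "ADDwoR"]
print(round(cohens_kappa(gold, pt2), 3))  # -> 0.75
```

Values above 0.80, as reported in the study, are conventionally read as almost perfect agreement.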

Deep survey using deep learning: generative adversarial network

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • 천문학회보 / Vol. 44, No. 2 / pp. 78.1-78.1 / 2019
  • There are a huge number of faint objects that have not been observed because of the lack of large and deep surveys. In this study, we demonstrate that a deep learning approach can produce a better-quality deep image from single-pass imaging, so it could be an alternative to the conventional image-stacking technique or to expensive large and deep surveys. Using data from Sloan Digital Sky Survey (SDSS) Stripe 82, which provides repeatedly scanned imaging data, a training data set was constructed: g-, r-, and i-band single-pass images as input and the r-band co-added image as the target. Out of 151 SDSS fields that have been repeatedly scanned 34 times, 120 fields were used for training and 31 fields for validation. The frame size selected for training is 1k by 1k pixels. To avoid possible problems caused by the small number of training sets, frames were randomly selected within each field at every iteration of training. Every 5000 iterations, performance was evaluated with the RMSE, the peak signal-to-noise ratio (given on a logarithmic scale), the structural similarity index (SSIM), and the difference in SSIM. We continued training until the GAN model with the best performance was found and applied that model to NGC 0941, located in SDSS Stripe 82. By comparing the radial surface brightness and photometry errors of the images, we found that this technique could generate, from a single-pass image, a deep image with statistics close to those of the stacked image.
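
The logarithmic-scale PSNR mentioned above follows the usual definition PSNR = 10·log10(MAX²/MSE); a minimal pure-Python version, with hypothetical pixel values, is:

```python
import math

def psnr(reference, test, data_range=255.0):
    # Peak signal-to-noise ratio in decibels: 10 * log10(MAX^2 / MSE).
    diffs = [(a - b) ** 2 for ra, rb in zip(reference, test)
             for a, b in zip(ra, rb)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")  # identical images have no noise
    return 10 * math.log10(data_range ** 2 / mse)

ref = [[0, 255], [128, 64]]
deg = [[4, 250], [130, 60]]
print(psnr(ref, ref))            # inf for identical images
print(round(psnr(ref, deg), 2))  # finite dB value for the degraded image
```

Because the scale is logarithmic, each 10 dB of PSNR corresponds to a tenfold reduction in mean squared error.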


사후전산화단층촬영의 법의병리학 분야 활용을 위한 조건부 적대적 생성 신경망을 이용한 CT 영상의 해상도 개선: 팬텀 연구 (Enhancing CT Image Quality Using Conditional Generative Adversarial Networks for Applying Post-mortem Computed Tomography in Forensic Pathology: A Phantom Study)

  • 윤예빈;허진행;김예지;조혜진;윤용수
    • 대한방사선기술학회지:방사선기술과학 / Vol. 46, No. 4 / pp. 315-323 / 2023
  • Post-mortem computed tomography (PMCT) is commonly employed in forensic pathology. PMCT is mainly performed as a whole-body scan with a wide field of view (FOV), which leads to decreased spatial resolution because of the increased pixel size. This study aims to evaluate the potential of a super-resolution model based on conditional generative adversarial networks (CGAN) for enhancing CT image quality. 1,761 low-resolution images were obtained from a whole-body scan of a head phantom with a wide FOV, and 341 high-resolution images were obtained using the FOV appropriate for the head phantom. The 150 paired images in the total dataset were divided into a training set (96 paired images) and a validation set (54 paired images). Data augmentation by rotations and flips was performed to improve the effectiveness of training. To evaluate the performance of the proposed model, we used the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and deep image structure and texture similarity (DISTS). PSNR, SSIM, and DISTS values were obtained for the entire image and for the medial orbital wall, the zygomatic arch, and the temporal bone, where fractures often occur in head trauma. Compared with the low-resolution images, the proposed method improved PSNR by 13.14%, SSIM by 13.10%, and DISTS by 45.45%. The image quality of the three areas where fractures commonly occur in head trauma also improved compared with the low-resolution images.
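
The rotation-and-flip augmentation described above can be sketched as the eight dihedral transforms of an image patch; a minimal pure-Python version, on a hypothetical 2×2 patch, is:

```python
def rotate90(img):
    # Rotate a 2D image (list of rows) 90 degrees clockwise.
    return [list(col) for col in zip(*img[::-1])]

def flip_h(img):
    # Mirror each row (horizontal flip).
    return [row[::-1] for row in img]

def dihedral_augmentations(img):
    # All 8 combinations of 4 rotations x optional horizontal flip.
    out = []
    cur = img
    for _ in range(4):
        out.append(cur)
        out.append(flip_h(cur))
        cur = rotate90(cur)
    return out

patch = [[1, 2],
         [3, 4]]
augmented = dihedral_augmentations(patch)
print(len(augmented))  # 8 augmented copies per training patch
```

For an asymmetric patch all eight copies are distinct, so the effective size of a small paired dataset grows eightfold without altering anatomy.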

COSMO-SkyMed 2 Image Color Mapping Using Random Forest Regression

  • Seo, Dae Kyo;Kim, Yong Hyun;Eo, Yang Dam;Park, Wan Yong
    • 한국측량학회지 / Vol. 35, No. 4 / pp. 319-326 / 2017
  • SAR (synthetic aperture radar) images are less affected by weather than optical images and can be obtained at any time of day. Therefore, SAR images are actively utilized for military applications and natural disasters. However, because SAR data are in grayscale, it is difficult to perform visual analysis and to decipher details. In this study, we propose a color mapping method using RF (random forest) regression to enhance the visual decipherability of SAR images. COSMO-SkyMed 2 and WorldView-3 images were obtained for the same area, and RF regression was used to establish the color configurations for color mapping. The results were compared with image fusion, a traditional color mapping method. The UIQI (universal image quality index), the SSIM (structural similarity) index, and CC (correlation coefficients) were used to evaluate image quality. The color-mapped image based on RF regression had significantly higher quality than the images derived from the other methods, confirming the usefulness of RF-regression-based color mapping for SAR images.
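
Of the three metrics above, the correlation coefficient (CC) is the simplest to illustrate; a pure-Python Pearson correlation over flattened image bands, with hypothetical values, looks like:

```python
import math

def correlation_coefficient(x, y):
    # Pearson correlation between two equally sized value sequences,
    # e.g. a flattened band of the reference image and of the mapped image.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical flattened red-band values: reference optical image
# vs. the color-mapped SAR result.
reference_band = [10, 40, 80, 120, 200]
mapped_band = [12, 43, 76, 118, 205]
print(round(correlation_coefficient(reference_band, mapped_band), 3))
```

CC is computed per band and approaches 1.0 when the mapped colors track the reference radiometry linearly.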

Optical flow의 레벨 간소화와 잡음제거를 이용한 2D/3D 변환기법 연구 (A Study on 2D/3D image Conversion Method using Optical flow of Level Simplified and Noise Reduction)

  • 한현호;이강성;은종원;김진수;이상훈
    • 한국산학기술학회:학술대회논문집 / 한국산학기술학회 2011 Fall Conference Proceedings, Part 2 / pp. 441-444 / 2011
  • This paper reduces the computational load of optical flow for depth-map generation in 2D/3D image processing by simplifying its levels, and removes image noise using the eigenvectors of objects. Optical flow is a motion-estimation algorithm that represents the pixel displacement vectors between two frames and is more accurate than algorithms such as block matching. However, conventional optical flow suffers from long computation times and is sensitive to camera movement and illumination changes. To address this, we proposed a method that applies a level-simplification step to shorten the computation time and applies optical flow only to regions with eigenvectors in order to remove noise. With the proposed method, 2D images were converted into 3D stereoscopic images, and the error rate of the final generated images was analyzed with the SSIM (Structural SIMilarity Index).
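
For reference, the block matching that the abstract contrasts with optical flow can be written as an exhaustive shift search in pure Python; the frames and search range below are hypothetical.

```python
def block_match(prev, curr, search=2):
    # Exhaustive block matching: find the (dy, dx) shift of `curr`
    # relative to `prev` that minimizes the mean squared difference
    # over the overlapping region.
    h, w = len(prev), len(prev[0])
    best, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ssd, count = 0, 0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        ssd += (prev[y][x] - curr[yy][xx]) ** 2
                        count += 1
            if count and (best is None or ssd / count < best):
                best, best_shift = ssd / count, (dy, dx)
    return best_shift

# A bright pixel moves one pixel to the right between two 5x5 frames.
frame1 = [[0] * 5 for _ in range(5)]
frame2 = [[0] * 5 for _ in range(5)]
frame1[2][1] = frame2[2][2] = 255
print(block_match(frame1, frame2))  # -> (0, 1)
```

Optical flow refines this idea to per-pixel sub-pixel vectors, which is what makes it more accurate but also more expensive, motivating the level simplification above.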


옷감 이미지 렌더링을 위한 Pix2Pix 기반의 Normal map 생성 (Normal map generation based on Pix2Pix for rendering fabric image)

  • 남현길;박종일
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2020 Summer Conference / pp. 257-260 / 2020
  • This paper presents a method of generating a normal map from a single fabric image using the Pix2Pix method for virtual graphics rendering. Specifically, to generate a normal map from a single image, a training dataset of color-image and normal-map pairs is trained with the Pix2Pix method, and the normal maps generated from the color images of the test dataset are examined. The normal-map results of the U-Net approach used in prior work and of the Pix2Pix method used in this paper are compared and evaluated with the SSIM (Structural Similarity Index). In addition, the generated normal map is resized to fit the virtual object to be rendered, and the result rendered with OpenGL is examined. This paper confirms that a normal map generated with Pix2Pix from a single pattern image can express the details of fabric realistically.
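
A normal map of the kind generated above encodes per-pixel surface normals as vectors. As a non-learning baseline (not the paper's Pix2Pix method), normals can be derived from a hypothetical height map by finite differences:

```python
import math

def height_to_normal(height):
    # Convert a height map (list of rows) to unit surface normals
    # using central finite differences; edge samples are clamped.
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Height gradient along x and y.
            dx = height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]
            nx, ny, nz = -dx, -dy, 2.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

flat = [[0.5] * 4 for _ in range(4)]
normals = height_to_normal(flat)
print(normals[1][1])  # a flat surface yields the straight-up normal
```

For rendering, each unit vector is usually remapped from [-1, 1] to [0, 255] RGB, which gives normal maps their characteristic bluish tint.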


Restoration of Ghost Imaging in Atmospheric Turbulence Based on Deep Learning

  • Chenzhe Jiang;Banglian Xu;Leihong Zhang;Dawei Zhang
    • Current Optics and Photonics / Vol. 7, No. 6 / pp. 655-664 / 2023
  • Ghost imaging (GI) technology is developing rapidly, but it inevitably has some limitations, such as the influence of atmospheric turbulence. In this paper, we study a ghost imaging system in atmospheric turbulence and use a gamma-gamma (GG) model to simulate the medium-to-strong range of the turbulence distribution. With a compressed sensing (CS) algorithm and a generative adversarial network (GAN), the image can be restored well. We analyze the performance of correlation imaging, the influence of atmospheric turbulence, and the effects of the restoration algorithms. The peak signal-to-noise ratio (PSNR) and structural similarity index map (SSIM) of the restored image increased to 21.9 dB and 0.67, respectively. This proves that deep learning (DL) methods can restore a distorted image well, which is of particular significance for computational imaging in noisy and blurry environments.
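
The correlation imaging that this restoration builds on can be demonstrated with a toy computational ghost-imaging loop in pure Python: random binary illumination patterns, a single bucket-detector value per pattern, and a second-order correlation reconstruction. The scene and pattern count are hypothetical.

```python
import random

def ghost_image(obj, n_patterns=4000, seed=0):
    # Computational ghost imaging: correlate random binary illumination
    # patterns with the bucket signal (total transmitted light).
    rng = random.Random(seed)
    h, w = len(obj), len(obj[0])
    patterns, buckets = [], []
    for _ in range(n_patterns):
        pat = [[rng.randint(0, 1) for _ in range(w)] for _ in range(h)]
        buckets.append(sum(pat[y][x] * obj[y][x]
                           for y in range(h) for x in range(w)))
        patterns.append(pat)
    mean_b = sum(buckets) / n_patterns
    g2 = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            mean_i = sum(p[y][x] for p in patterns) / n_patterns
            corr = sum(b * p[y][x]
                       for b, p in zip(buckets, patterns)) / n_patterns
            g2[y][x] = corr - mean_b * mean_i  # <B*I> - <B><I>
    return g2

# Hypothetical 2x2 transmissive object with one bright pixel.
obj = [[1.0, 0.0],
       [0.0, 0.2]]
g2 = ghost_image(obj)
print(g2[0][0] > max(g2[0][1], g2[1][0], g2[1][1]))  # bright pixel dominates
```

Turbulence corrupts the pattern that actually reaches the object, which is what degrades this correlation and what the CS and GAN stages in the paper compensate for.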

Efficient CT Image Denoising Using Deformable Convolutional AutoEncoder Model

  • Eon Seung, Seong;Seong Hyun, Han;Ji Hye, Heo;Dong Hoon, Lim
    • 한국컴퓨터정보학회논문지 / Vol. 28, No. 3 / pp. 25-33 / 2023
  • Noise arising during the acquisition and transmission of CT images degrades image quality, so denoising is an important preprocessing step in image processing. In this paper, we remove noise with a deformable convolutional autoencoder (DeCAE) model, which replaces the conventional convolution operations of a deep-learning convolutional autoencoder (CAE) with deformable convolutions. Deformable convolutions can extract image features over more flexible regions than conventional convolutions. The proposed DeCAE model has the same encoder-decoder structure as the conventional CAE model, but for efficient denoising the encoder is built from deformable convolution layers and the decoder from conventional convolution layers. To evaluate the performance of the proposed DeCAE model, experiments were conducted on CT images corrupted by various types of noise: Gaussian, impulse, and Poisson. In these experiments, the DeCAE model gave better results, both qualitatively and in the quantitative measures MAE (Mean Absolute Error), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Measure), than the traditional filters (the mean filter, the median filter, and their refinements, the bilateral filter and the NL-means method) as well as the conventional CAE model.
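
Among the classical baselines listed above, the median filter is the easiest to sketch; a pure-Python one-dimensional version applied to a hypothetical impulse-noised signal:

```python
from statistics import median

def median_filter_1d(signal, radius=1):
    # Replace each sample with the median of its neighborhood;
    # edges use the shrunken window that still fits.
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(median(signal[lo:hi]))
    return out

def mae(a, b):
    # Mean absolute error between two signals.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

clean = [10, 10, 10, 10, 10, 10]
noisy = [10, 10, 255, 10, 10, 10]   # one impulse ("salt") sample
denoised = median_filter_1d(noisy)
print(mae(clean, denoised) < mae(clean, noisy))  # impulse removed -> True
```

The median filter excels at exactly this impulse noise but blurs fine structure, which is the gap that learned models such as DeCAE aim to close.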

그라운드-롤 제거를 위한 CNN과 GAN 기반 딥러닝 모델 비교 분석 (Comparison of CNN and GAN-based Deep Learning Models for Ground Roll Suppression)

  • 조상인;편석준
    • 지구물리와물리탐사 / Vol. 26, No. 2 / pp. 37-51 / 2023
  • Ground roll is the most common coherent noise in land seismic data and has a much larger amplitude than the reflection events that a survey aims to record, so ground-roll suppression is a very important and essential step in seismic data processing. Various suppression techniques, such as frequency-wavenumber (f-k) filtering and the curvelet transform, have been developed, but demand remains for methods with better performance and efficiency. Recently, a variety of studies have applied deep-learning techniques developed for image processing to ground-roll suppression in seismic data. This paper introduces studies that apply three CNN (convolutional neural network)- or cGAN (conditional generative adversarial network)-based models (DnCNN (De-noiseCNN), pix2pix, and CycleGAN) to ground-roll suppression and explains them in detail with numerical examples. For algorithm comparison, shot gathers acquired in the same field were split into training and test data, the models were trained, and their performance was evaluated. Because training these deep-learning models on field data requires ground-roll-free data, ground roll was removed by f-k filtering and the results were used as labels. Model performance and training results were assessed with quantitative metrics based on similarity to the labels, such as the correlation coefficient and the SSIM (structural similarity index measure). The DnCNN model showed the best performance, and the other models were also confirmed to be applicable to ground-roll suppression.
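
The f-k filtering used above to create the training labels separates events by their spectral content. As a one-dimensional analogue (frequency only, with hypothetical signals), a DFT-based low-cut filter looks like:

```python
import cmath
import math

def dft(x):
    # Naive O(n^2) discrete Fourier transform.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    # Inverse DFT, returning the real part of each sample.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def low_cut(x, cutoff):
    # Zero out frequency bins below `cutoff` (and their mirror bins),
    # analogous to muting the low-frequency ground-roll cone in f-k space.
    n = len(x)
    X = dft(x)
    for k in range(n):
        if min(k, n - k) < cutoff:  # two-sided spectrum
            X[k] = 0
    return idft(X)

n = 64
reflection = [math.sin(2 * math.pi * 12 * t / n) for t in range(n)]
ground_roll = [3 * math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
trace = [r + g for r, g in zip(reflection, ground_roll)]
filtered = low_cut(trace, cutoff=6)
err = max(abs(f - r) for f, r in zip(filtered, reflection))
print(err < 1e-6)  # ground roll suppressed, reflection preserved
```

Real f-k filtering applies the same idea in two dimensions (time and offset), exploiting the low apparent velocity of ground roll; its limitations as a label generator are exactly what motivates the learned models compared in the paper.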