• Title/Summary/Keyword: CT Model (CT모델)

Search results: 372 (processing time: 0.028 seconds)

Finite Element Analysis of a Rat Femur Using Micro-CT (MICRO-CT를 이용한 쥐 대퇴골의 유한요소 해석)

  • 변창환;오택열
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.05a
    • /
    • pp.296-296
    • /
    • 2004
  • Owing to experimental difficulties, the finite element method is regarded as the best alternative in the field of biomechanics. Moreover, generating finite element models from image data acquired with CT and other digital equipment has sped up model generation and further improved efficiency. However, at the resolution of conventional CT it is very difficult to capture changes in trabecular bone, which account for most bone change. In this paper, we therefore generated and analyzed a finite element model using micro-CT image data with a resolution down to about 10-20 µm. (abridged)
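The voxel-to-element conversion this abstract describes can be sketched roughly as below. The bone threshold and the density-to-modulus power law (E = a·ρ^b) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical micro-CT volume: grayscale values standing in for bone density.
rng = np.random.default_rng(0)
volume = rng.uniform(0.0, 1.0, size=(4, 4, 4))

BONE_THRESHOLD = 0.5  # assumed segmentation cutoff

def voxels_to_elements(vol, threshold):
    """Turn each above-threshold voxel into a hexahedral element whose
    elastic modulus follows an empirical power law E = a * rho**b."""
    a, b = 6850.0, 1.49  # illustrative constants, not from the paper
    elements = []
    for idx in np.argwhere(vol >= threshold):
        rho = vol[tuple(idx)]
        elements.append({"index": tuple(int(i) for i in idx),
                         "modulus_mpa": a * rho ** b})
    return elements

elements = voxels_to_elements(volume, BONE_THRESHOLD)
```

A real pipeline would hand these elements to an FE solver; here the point is only that each micro-CT voxel maps directly to one element with a density-derived material property.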


Artifact Reduction in Sparse-view Computed Tomography Image using Residual Learning Combined with Wavelet Transformation (Wavelet 변환과 결합한 잔차 학습을 이용한 희박뷰 전산화단층영상의 인공물 감소)

  • Lee, Seungwan
    • Journal of the Korean Society of Radiology
    • /
    • v.16 no.3
    • /
    • pp.295-302
    • /
    • 2022
  • Sparse-view computed tomography (CT) imaging can reduce radiation dose, ensure uniformity of image characteristics among projections, and suppress noise. However, images reconstructed by sparse-view CT suffer from severe artifacts, resulting in distortion of image quality and internal structures. In this study, we proposed a convolutional neural network (CNN) with wavelet transformation and residual learning for reducing artifacts in sparse-view CT images, and the performance of the trained model was quantitatively analyzed. The CNN consisted of wavelet transformation, convolutional, and inverse wavelet transformation layers; the input and output images were configured as sparse-view CT images and residual images, respectively. For training the CNN, the loss function was calculated using mean squared error (MSE), and Adam was used as the optimizer. Result images were obtained by subtracting the residual images predicted by the trained model from the sparse-view CT images. The quantitative accuracy of the result images was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The results showed that the trained model improves the spatial resolution of the result images and effectively reduces artifacts in sparse-view CT images. The trained model also increased PSNR and SSIM by 8.18% and 19.71%, respectively, in comparison to a model trained without wavelet transformation and residual learning. Therefore, the imaging model proposed in this study can restore the image quality of sparse-view CT images by reducing artifacts and improving spatial resolution and quantitative accuracy.
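The two ingredients named above, a wavelet decomposition and residual learning, can be illustrated with a minimal one-level Haar transform and a stand-in residual image; the actual network layers and training data are of course not reproduced here.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into LL, LH, HL, HH sub-bands
    (even-sized images assumed)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d; reconstructs the original image exactly."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

# Residual learning: the network predicts the artifact (residual) image,
# and the clean estimate is the sparse-view input minus that residual.
sparse_view = np.arange(16.0).reshape(4, 4)
predicted_residual = np.full((4, 4), 0.5)   # stand-in for the CNN output
restored = sparse_view - predicted_residual
```

The residual formulation means the network only has to model the artifacts, which are often simpler than the full anatomy.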

Development of Physical Human Bronchial Tree Models from X-ray CT Images (X선 CT영상으로부터 인체의 기관지 모델의 개발)

  • Won, Chul-Ho;Ro, Chul-Kyun
    • Journal of Sensor Science and Technology
    • /
    • v.11 no.5
    • /
    • pp.263-272
    • /
    • 2002
  • In this paper, we investigate the potential for retrieving morphometric data from three-dimensional images of the conducting bronchus obtained by X-ray computed tomography (CT), and explore the use of a rapid prototyping machine to produce physical hollow bronchus casts for mathematical modeling and experimental verification of particle deposition models. We segment the bronchus of the lung from the CT images by a mathematical morphology method. The surface data representing the volumetric bronchus data in three dimensions are converted to an STL (stereolithography) file, and a three-dimensional solid model is created from that STL file using a rapid prototyping machine. Two physical hollow cast models are created from CT images of a bronchial tree phantom and of a living human bronchus. We evaluate the usefulness of the rapid prototype model of the bronchial tree by comparing the cross-sectional diameters of bronchus segments in the original CT images and in the rapid-prototyping-derived models imaged by X-ray CT.
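The surface-to-STL step can be sketched as a simple ASCII STL serializer; the solid name and the single triangle below are made-up illustration data, and a real bronchus surface would contain thousands of facets from a marching-cubes style extraction.

```python
def triangles_to_ascii_stl(name, triangles):
    """Serialize (v1, v2, v3, normal) triangles into an ASCII STL string,
    the format consumed by rapid prototyping machines."""
    lines = [f"solid {name}"]
    for v1, v2, v3, n in triangles:
        lines.append(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}")
        lines.append("    outer loop")
        for v in (v1, v2, v3):
            lines.append(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# One toy facet: a right triangle in the z=0 plane with an upward normal.
stl = triangles_to_ascii_stl(
    "bronchus", [((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))])
```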

A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction (딥러닝 기반 CT 스캔 재구성을 통한 조영제 사용 및 신체 부위 분류 성능 향상 연구)

  • Seongwon Na;Yousun Ko;Kyung Won Kim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.293-301
    • /
    • 2023
  • Unstandardized medical data collection and management are still carried out manually, and studies are being conducted to classify CT data using deep learning to solve this problem. However, most studies develop models based only on the axial plane, the basic CT slice. Because CT images depict only human structures, unlike general images, reconstructing the CT scan itself can provide richer physical features. This study seeks ways to achieve higher performance through various methods of converting CT scans to 2D beyond the axial plane. Training used 1,042 CT scans from five body parts, and 179 test scans plus 448 scans from external datasets were collected for model evaluation. To develop the deep learning model, we used InceptionResNetV2 pre-trained on ImageNet as a backbone and re-trained all layers of the model. In the experiments, the reconstruction-data model achieved 99.33% accuracy in body-part classification, 1.12% higher than the axial model; the axial model was higher only for the brain and neck in contrast-agent classification. In conclusion, more accurate performance was achieved when learning from data that exposes better anatomical features than when training on axial slices alone.
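The "converting CT scans to 2D beyond the axial plane" idea amounts to slicing the 3D volume along different axes; a minimal multi-planar extraction, on a toy array rather than real DICOM data, looks like this:

```python
import numpy as np

# Toy CT volume; axis order assumed (axial, coronal, sagittal).
# A real scan would be a stack of DICOM slices.
scan = np.arange(2 * 3 * 4).reshape(2, 3, 4)

def extract_planes(vol):
    """Middle slice along each axis: axial, coronal, and sagittal views."""
    return {
        "axial": vol[vol.shape[0] // 2, :, :],
        "coronal": vol[:, vol.shape[1] // 2, :],
        "sagittal": vol[:, :, vol.shape[2] // 2],
    }

planes = extract_planes(scan)
```

Each extracted plane can then be fed to a 2D backbone such as InceptionResNetV2, giving the classifier views that axial slices alone do not show.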

Study of machine learning model for predicting non-small cell lung cancer metastasis using image texture feature (Image texture feature를 이용하여 비소세포폐암 전이 예측 머신러닝 모델 연구)

  • Hye Min Ju;Sang-Keun Woo
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2023.07a
    • /
    • pp.313-315
    • /
    • 2023
  • In this paper, we built machine learning models that predict metastasis of non-small cell lung cancer using image features extracted from 18F-FDG PET and CT. 18F-FDG is taken up during tumor glucose metabolism, and tracking it is one of the medical imaging techniques used to diagnose a patient's cancer cells. The image features extracted from PET and CT images reflect the biological characteristics of the tumor and are quantified values computed from the corresponding ROI. In this study, to check whether the image texture features extracted from a patient's medical images are significant factors for predicting lymph node metastasis, we computed the AUC and performed univariate analysis. Four image texture features from PET (GLRLM_GLNU, SHAPE_Compacity only for 3D ROI, SHAPE_Volume_vx, SHAPE_Volume_mL) and two from CT (NGLDM_Busyness, TLG_ml) were used to build the models. Accuracy and AUC were computed to evaluate each model, and the random forest (RF) model showed the highest prediction accuracy. Training on the extracted PET and CT image texture features together improved prediction performance compared with using each modality alone. The extracted image features showed potential as biomarkers of lymph node metastasis, and based on these results we expect that treatment strategies for non-small cell lung cancer can be established from individual patients' medical images.
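The AUC used above for feature screening can be computed without any ML library via the Mann-Whitney formulation; the labels and scores below are invented toy data.

```python
def auc(labels, scores):
    """Mann-Whitney AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 2 metastasis-positive and 2 negative cases with feature scores.
example_auc = auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

An AUC near 0.5 means the texture feature carries no information about lymph node metastasis; values toward 1.0 justify keeping it in the model.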


Synthesis of contrast CT image using deep learning network (딥러닝 네트워크를 이용한 조영증강 CT 영상 생성)

  • Woo, Sang-Keun
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.465-467
    • /
    • 2019
  • In this paper, we studied obtaining contrast-enhanced CT images using a deep learning network capable of image generation. CT is one of the medical imaging techniques used to diagnose a patient's diseases and cancer cells based on high-resolution images. In particular, a CT image acquired after administering a contrast agent is called a contrast-enhanced CT image. Contrast-enhanced CT images emphasize the image contrast between material components, helping clinicians improve the accuracy of diagnosis and treatment-response assessment. However, many patients have adverse reactions to contrast agents, and for such patients contrast-enhanced CT imaging is impossible. In this study, therefore, to minimize unnecessary radiation exposure for ordinary patients and for patients who cannot undergo contrast-enhanced imaging, we generated contrast-enhanced CT images from plain CT images using an image-generating deep learning technique. A generative adversarial network (GAN) was used as the image-generation model. The results showed that images preprocessed with histogram equalization produced better results than generating from CT images without any preprocessing, and the generated images showed high structural similarity to the real images. In this study we were able to generate contrast-enhanced CT images with a deep learning generative model; we expect this to minimize patients' unnecessary radiation exposure and, based on the generated contrast-enhanced CT images, to contribute to accurate diagnosis and treatment-response assessment.
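The histogram equalization preprocessing the abstract credits with the improvement can be sketched in a few lines of numpy; the 2x2 toy image and 256-level assumption are illustrative only.

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Map intensities through the normalized cumulative histogram (CDF),
    spreading the used gray levels across the full range."""
    hist, bins = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(img.ravel(), bins[:-1],
                     cdf * (levels - 1)).reshape(img.shape)

# Toy "CT slice" with four distinct intensities.
img = np.array([[0.0, 64.0], [128.0, 255.0]])
eq = hist_equalize(img)
```

Equalizing before GAN training normalizes the intensity distribution across scans, which plausibly makes the plain-to-contrast mapping easier to learn.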


Non-rigid Registration Method of Lung Parenchyma in Temporal Chest CT Scans using Region Binarization Modeling and Locally Deformable Model (영역 이진화 모델링과 지역적 변형 모델을 이용한 시간차 흉부 CT 영상의 폐 실질 비강체 정합 기법)

  • Kye, Hee-Won;Lee, Jeongjin
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.6
    • /
    • pp.700-707
    • /
    • 2013
  • In this paper, we propose a non-rigid registration method for lung parenchyma in temporal chest CT scans using region binarization modeling and a locally deformable model. To cope with intensity differences between CT scans, we segment the lung vessels and parenchyma in each scan and perform binarization modeling; we then match them without referring to any intensity information. We globally align the two lung surfaces, and a locally deformable transformation model is developed for the subsequent non-rigid registration. Subtracted quantification results after non-rigid registration are visualized with a pre-defined color map. Experimental results showed that the proposed registration method correctly aligned the lung parenchyma in full-inspiration and full-expiration CT images for ten patients. Our non-rigid lung registration method may be useful for the assessment of various lung diseases by providing intuitive color-coded quantification results for lung parenchyma.
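Because the method matches binarized regions rather than intensities, a natural way to score how well two such masks line up is the Dice overlap; this small sketch (with made-up masks) is a generic illustration, not the paper's own evaluation metric.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks: 1.0 = identical, 0.0 = disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy binarized parenchyma masks before/after registration.
mask_fixed = np.array([[1, 1], [0, 0]])
mask_moving = np.array([[1, 1], [0, 0]])
```

After non-rigid registration, a rising Dice score between the binarized parenchyma masks indicates that the deformable model is pulling the two scans into alignment.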

Efficient Osteoporosis Prediction Using A Pair of Ensemble Models

  • Choi, Se-Heon;Hwang, Dong-Hwan;Kim, Do-Hyeon;Bak, So-Hyeon;Kim, Yoon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.45-52
    • /
    • 2021
  • In this paper, we propose a prediction model for osteopenia and osteoporosis based on a convolutional neural network (CNN) using computed tomography (CT) images. With a single CT image, a CNN is limited in utilizing the local features important for diagnosis. We therefore propose a compound model with two identically structured networks. As input, two different texture images are used, both converted from a single normalized CT image. The two networks learn different information through a dissimilarity loss function. As a result, our model learns various features of a single CT image, including important local features, and we ensemble the two networks to improve the accuracy of predicting osteopenia and osteoporosis. In experiments, our method shows an accuracy of 77.11%, and the features learned by the model are visualized using Grad-CAM.
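One plausible form for the dissimilarity loss and the final ensembling is sketched below; the hinge-with-margin formulation and the simple probability averaging are assumptions for illustration, since the abstract does not give the exact loss.

```python
import numpy as np

def dissimilarity_loss(f1, f2, margin=1.0):
    """Hinge-style penalty that is positive when the two networks' feature
    vectors are closer than `margin`, pushing them to learn different things."""
    dist = np.linalg.norm(np.asarray(f1) - np.asarray(f2))
    return max(0.0, margin - dist)

def ensemble_predict(p1, p2):
    """Average the two branches' class probabilities for the final prediction."""
    return (np.asarray(p1) + np.asarray(p2)) / 2.0

# Identical features are maximally penalized; far-apart features cost nothing.
loss_same = dissimilarity_loss(np.zeros(3), np.zeros(3))
loss_far = dissimilarity_loss(np.array([2.0, 0.0, 0.0]), np.zeros(3))
```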

An Efficient CT Image Denoising using WT-GAN Model

  • Hae Chan Jeong;Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.5
    • /
    • pp.21-29
    • /
    • 2024
  • Reducing the radiation dose during CT scanning lowers the risk of radiation exposure, but image resolution deteriorates significantly and diagnostic effectiveness is reduced by the resulting noise. Noise removal from CT images is therefore an essential step in image restoration. Methods that separate noise from the original signal in the image domain have so far been limited in their ability to remove only the noise. In this paper, we aim to effectively remove noise from CT images in the frequency domain using a wavelet-transform-based GAN, the WT-GAN model. The GAN model generates noise-free images through a U-Net-structured generator and a PatchGAN-structured discriminator. To evaluate the performance of the proposed WT-GAN model, experiments were conducted on CT images corrupted by various noises, namely Gaussian, Poisson, and speckle noise. In the performance experiments, the WT-GAN model showed better results than a traditional filter (BM3D) as well as existing deep learning models (DnCNN, CDAE, and U-Net GAN) in both qualitative and quantitative measures, namely PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index measure).
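The PSNR metric used in the comparison above is a straightforward computation; the sketch assumes 8-bit-style images with a peak value of 255.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the denoised image
    is closer to the reference. Identical images give infinity."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

In a denoising study, PSNR is computed between each model's output and the noise-free ground truth, so a higher score directly means more noise removed without distorting the signal.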

Performance Evaluation of U-net Deep Learning Model for Noise Reduction according to Various Hyper Parameters in Lung CT Images (폐 CT 영상에서의 노이즈 감소를 위한 U-net 딥러닝 모델의 다양한 학습 파라미터 적용에 따른 성능 평가)

  • Min-Gwan Lee;Chanrok Park
    • Journal of the Korean Society of Radiology
    • /
    • v.17 no.5
    • /
    • pp.709-715
    • /
    • 2023
  • In this study, image quality after noise reduction was evaluated using the U-net deep learning architecture on computed tomography (CT) images. To generate input data, Gaussian noise was applied to ground truth (GT) data, and the 1,300 CT images were split into training, validation, and test sets at an 8:1:1 ratio. Adagrad, Adam, and AdamW were used as optimizer functions, and 10, 50, and 100 epochs were applied. In addition, learning rates of 0.01, 0.001, and 0.0001 were applied to the U-net model to compare output image quality. For quantitative analysis, the peak signal-to-noise ratio (PSNR) and coefficient of variation (COV) were calculated. Based on the results, the deep learning model was useful for noise reduction. We suggest that the optimized hyperparameters for noise reduction in CT images are the AdamW optimizer, 100 epochs, and a learning rate of 0.0001.
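The coefficient of variation used alongside PSNR above is simply the standard deviation over the mean of a uniform region; the ROI values below are invented toy data.

```python
from statistics import fmean, pstdev

def coefficient_of_variation(roi_values):
    """COV = population standard deviation / mean over a uniform ROI;
    lower values indicate less residual noise in the denoised image."""
    return pstdev(roi_values) / fmean(roi_values)

# Toy pixel values from a nominally uniform region before and after denoising.
cov_noisy = coefficient_of_variation([1.0, 3.0])        # std 1.0 / mean 2.0
cov_flat = coefficient_of_variation([5.0, 5.0, 5.0])    # perfectly uniform
```

Because COV needs no ground-truth image, it complements PSNR: PSNR measures fidelity to the GT, while COV measures residual noise in regions that should be flat.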