• Title/Summary/Keyword: Lung cancer segmentation


A Study on Lung Cancer Segmentation Algorithm using Weighted Integration Loss on Volumetric Chest CT Image

  • Jeong, Jin Gyo;Kim, Young Jae;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.23 no.5 / pp.625-632 / 2020
  • In the diagnosis of lung cancer, tumor size is measured as the longest diameter of the tumor across all slices of the CT scan. To estimate tumor size accurately, it is better to measure the volume, but there are practical limitations to calculating volume in the clinic. In this study, we propose an algorithm to segment lung cancer by applying a custom loss function, combining focal loss and Dice loss, to a U-Net model, which shows high performance on segmentation problems in chest CT images. Models trained with various combinations of parameter values in the custom loss function were compared. The proposed loss function achieved an F1 score of 88.77%, precision of 87.31%, recall of 90.30%, and average precision of 0.827 at α=0.25, γ=4, β=0.7, showing good performance in lung cancer segmentation.
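The abstract above combines focal loss and Dice loss into a single weighted objective with parameters α, γ, and β. A minimal NumPy sketch of such a blend follows; the abstract does not give the exact integration formula, so the β-weighted convex combination and the function names are assumptions for illustration only:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=4.0, eps=1e-7):
    """Binary focal loss averaged over pixels (Lin et al. formulation):
    down-weights easy pixels so training focuses on hard ones."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return float(np.mean(-at * (1 - pt) ** gamma * np.log(pt)))

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - 2|P ∩ Y| / (|P| + |Y|); penalizes poor overlap."""
    inter = np.sum(p * y)
    return float(1 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

def weighted_integration_loss(p, y, alpha=0.25, gamma=4.0, beta=0.7):
    """Hypothetical convex blend of the two terms; beta weights the focal term."""
    return beta * focal_loss(p, y, alpha, gamma) + (1 - beta) * dice_loss(p, y)
```

Here α and γ are the standard focal-loss parameters, while β trades off the pixel-wise focal term against the overlap-based Dice term; the defaults match the best-performing setting reported above.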

X-ray Image Segmentation using Multi-task Learning

  • Park, Sejin;Jeong, Woojin;Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1104-1120 / 2020
  • Chest X-rays are a common way to diagnose lung cancer or pneumonia. In particular, finding lung nodules is the most important problem in the early detection of lung cancer. Recently, many automatic diagnosis algorithms have been studied to find lung nodules missed by doctors. These algorithms are typically based on a segmentation network such as U-Net. However, false positives caused by structures outside the lungs that resemble lung nodules can severely degrade performance. In this study, we propose a multi-task learning method that simultaneously learns the lung region and nodule-labeled data, based on the prior knowledge that lung nodules exist only inside the lungs. The proposed method significantly reduces false positives outside the lungs and improves the recognition rate of lung nodules to an F1 score of 83.8, compared to 66.6 for single-task learning with a U-Net model. Experimental results on the JSRT public dataset demonstrate the effectiveness of the proposed method compared with other baseline methods.
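One way to realize the lung-region prior described above is to learn both tasks under a joint loss and to mask nodule probabilities with the predicted lung region. The sketch below is an illustrative assumption, not the paper's implementation; the function names and the equal task weighting are hypothetical:

```python
import numpy as np

def apply_lung_prior(nodule_prob, lung_prob, thresh=0.5):
    """Zero out nodule probabilities outside the predicted lung region,
    encoding the prior that nodules occur only inside the lungs."""
    lung_mask = (lung_prob >= thresh).astype(nodule_prob.dtype)
    return nodule_prob * lung_mask

def multitask_loss(lung_pred, lung_gt, nod_pred, nod_gt, w=0.5, eps=1e-7):
    """Joint objective over the lung-region task and the nodule task
    (equal weighting by default; the paper's weighting is not specified)."""
    def bce(p, y):
        p = np.clip(p, eps, 1 - eps)
        return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))
    return w * bce(lung_pred, lung_gt) + (1 - w) * bce(nod_pred, nod_gt)
```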

Automatic Sputum Color Image Segmentation for Lung Cancer Diagnosis

  • Taher, Fatma;Werghi, Naoufel;Al-Ahmad, Hussain
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.1 / pp.68-80 / 2013
  • Lung cancer is considered the leading cause of cancer death worldwide. A commonly used technique consists of analyzing sputum images to detect lung cancer cells. However, the analysis of sputum is time consuming and requires highly trained personnel to avoid errors. Manual screening of sputum samples can be improved using image processing techniques. In this paper we present a Computer Aided Diagnosis (CAD) system for early detection and diagnosis of lung cancer based on the analysis of sputum color images, with the aim of attaining a high accuracy rate and reducing the time consumed in analyzing such sputum samples. To form general diagnostic rules, we present a framework for the segmentation and extraction of sputum cells in sputum images using, respectively, a Bayesian classification method followed by region detection and feature extraction techniques to determine the shape of the nuclei inside the sputum cells. The final results will be used in a CAD system for early detection of lung cancer. We analyzed the performance of the Bayesian classification with respect to color space representation and quantification. Our methods were validated via a series of experiments conducted on a dataset of 100 images. Our evaluation criteria were based on sensitivity, specificity, and accuracy.
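As a rough illustration of per-pixel Bayesian color classification of the kind described above, the sketch below fits an independent Gaussian per color channel per class and assigns each pixel to the maximum-likelihood class under equal priors. The class-conditional model, names, and equal-prior assumption are illustrative, not the paper's exact method:

```python
import numpy as np

class GaussianPixelBayes:
    """Per-class independent-Gaussian model over pixel color channels;
    each pixel is assigned to the class with the highest log-likelihood
    (equal class priors assumed)."""

    def fit(self, pixels, labels):
        self.classes = np.unique(labels)
        self.mu = {c: pixels[labels == c].mean(axis=0) for c in self.classes}
        self.var = {c: pixels[labels == c].var(axis=0) + 1e-6
                    for c in self.classes}
        return self

    def predict(self, pixels):
        scores = [
            -0.5 * np.sum(np.log(2 * np.pi * self.var[c])
                          + (pixels - self.mu[c]) ** 2 / self.var[c], axis=1)
            for c in self.classes
        ]
        return self.classes[np.argmax(scores, axis=0)]
```

The abstract's point about color space representation corresponds to what the `pixels` rows contain (RGB, HSV, etc.); the classifier itself is unchanged.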

Enhanced Lung Cancer Segmentation with Deep Supervision and Hybrid Lesion Focal Loss in Chest CT Images

  • Min Jin Lee;Yoon-Seon Oh;Helen Hong
    • Journal of the Korea Computer Graphics Society / v.30 no.1 / pp.11-17 / 2024
  • Lung cancer segmentation in chest CT images is challenging due to the varying sizes of tumors and the presence of surrounding structures with similar intensity values. To address these issues, we propose a lung cancer segmentation network that incorporates deep supervision and utilizes UNet3+ as the backbone. Additionally, we propose a hybrid lesion focal loss function comprising three components: pixel-based, region-based, and shape-based, which allows us to focus on the smaller tumor regions relative to the background and consider shape information for handling ambiguous boundaries. We validate our proposed method through comparative experiments with UNet and UNet3+ and demonstrate that our proposed method achieves superior performance in terms of Dice Similarity Coefficient (DSC) for tumors of all sizes.

Lung Segmentation Considering Global and Local Properties in Chest X-ray Images

  • Jeon, Woong-Gi;Kim, Tae-Yun;Kim, Sung Jun;Choi, Heung-Kuk;Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.16 no.7 / pp.829-840 / 2013
  • In this paper, we propose a new lung segmentation method for chest X-ray images which takes both global and local properties into account. First, the initial lung segmentation is computed by applying the active shape model (ASM), which keeps the shape of the deformable model close to the pre-learned model while searching the image boundaries. In the second segmentation stage, we apply the localizing region-based active contour model (LRACM) to correct various regional errors in the initial segmentation. Finally, to measure similarity, we calculated the Dice coefficient between the area segmented by each semiautomatic method and the area manually segmented by a radiologist. The comparison experiments were performed using five chest X-ray images. In our experiments, the Dice coefficient with the manually segmented area was 95.33% ± 0.93% for the proposed method. Effective segmentation methods will be essential for the development of computer-aided diagnosis systems for more accurate early diagnosis and prognosis of lung cancer in chest X-ray images.
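The Dice coefficient used above as the similarity measure can be computed directly from two binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 for identical masks, 0.0 for disjoint."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```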

Boundary and Reverse Attention Module for Lung Nodule Segmentation in CT Images

  • Hwang, Gyeongyeon;Ji, Yewon;Yoon, Hakyoung;Lee, Sang Jun
    • IEMEK Journal of Embedded Systems and Applications / v.17 no.5 / pp.265-272 / 2022
  • As the risk of lung cancer has increased, early-stage detection and treatment of cancers have received a lot of attention. Among various medical imaging approaches, computed tomography (CT) has been widely utilized to examine the size and growth rate of lung nodules. However, manual examination is a time-consuming task, and it causes physical and mental fatigue for medical professionals. Recently, many computer-aided diagnostic methods have been proposed to reduce the workload of medical professionals. In recent studies, encoder-decoder architectures have shown reliable performance in medical image segmentation and have been adopted to predict lesion candidates. However, localizing nodules in lung CT images is a challenging problem due to the extremely small sizes and unstructured shapes of nodules. To solve these problems, we utilize atrous spatial pyramid pooling (ASPP) to minimize the loss of information in a general U-Net baseline model and extract rich representations from various receptive fields. Moreover, we propose a mixed attention mechanism combining reverse attention, boundary attention, and the convolutional block attention module (CBAM) to improve segmentation accuracy for small nodules of various shapes. The performance of the proposed model is compared with several previous attention mechanisms on the LIDC-IDRI dataset, and experimental results demonstrate that reverse, boundary, and CBAM (RB-CBAM) attention are effective in the segmentation of small nodules.
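Reverse attention, one of the mechanisms combined above, weights features by the complement of a coarse prediction so that later layers focus on regions the coarse map missed. A minimal sketch (the function names and where the weighting is applied in the network are assumptions, not this paper's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, coarse_logits):
    """Weight features by (1 - sigmoid(coarse prediction)) so subsequent
    layers attend to regions the coarse map is unsure about or missed,
    e.g. faint nodule boundaries."""
    return features * (1.0 - sigmoid(coarse_logits))
```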

Volume and Mass Doubling Time of Lung Adenocarcinoma according to WHO Histologic Classification

  • Jung Hee Hong;Samina Park;Hyungjin Kim;Jin Mo Goo;In Kyu Park;Chang Hyun Kang;Young Tae Kim;Soon Ho Yoon
    • Korean Journal of Radiology / v.22 no.3 / pp.464-475 / 2021
  • Objective: This study aimed to evaluate the tumor doubling time of invasive lung adenocarcinoma according to the International Association for the Study of Lung Cancer (IASLC)/American Thoracic Society (ATS)/European Respiratory Society (ERS) histologic classification. Materials and Methods: Among the 2905 patients with surgically resected lung adenocarcinoma, we retrospectively included 172 patients (mean age, 65.6 ± 9.0 years) who had paired thin-section non-contrast chest computed tomography (CT) scans at least 84 days apart with the same CT parameters, along with 10 patients with squamous cell carcinoma (mean age, 70.9 ± 7.4 years) for comparison. Three-dimensional semiautomatic segmentation of nodules was performed to calculate the volume doubling time (VDT), mass doubling time (MDT), and specific growth rate (SGR) of volume and mass. Multivariate linear regression, one-way analysis of variance, and receiver operating characteristic curve analyses were performed. Results: The median VDT and MDT of lung cancers were as follows: acinar, 603.2 and 639.5 days; lepidic, 1140.6 and 970.1 days; solid/micropapillary, 232.7 and 221.8 days; papillary, 599.0 and 624.3 days; invasive mucinous, 440.7 and 438.2 days; and squamous cell carcinoma, 149.1 and 146.1 days, respectively. The adjusted SGR of volume and mass of the solid-/micropapillary-predominant subtypes were significantly shorter than those of the acinar-, lepidic-, and papillary-predominant subtypes. The histologic subtype was independently associated with tumor doubling time. A VDT of 465.2 days and an MDT of 437.5 days yielded areas under the curve of 0.791 and 0.795, respectively, for distinguishing solid-/micropapillary-predominant subtypes from other subtypes of lung adenocarcinoma. Conclusion: The tumor doubling time of invasive lung adenocarcinoma differed according to the IASLC/ATS/ERS histologic classification.
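The volume doubling time and specific growth rate reported above follow the standard Schwartz exponential-growth model between paired scans; a minimal sketch (MDT is obtained the same way with masses in place of volumes):

```python
import math

def volume_doubling_time(v1, v2, dt_days):
    """Schwartz VDT: dt * ln(2) / ln(v2 / v1); assumes exponential growth
    between the two scans."""
    return dt_days * math.log(2) / math.log(v2 / v1)

def specific_growth_rate(v1, v2, dt_days):
    """SGR = ln(v2 / v1) / dt: growth per day, well behaved even for
    near-zero growth where VDT diverges."""
    return math.log(v2 / v1) / dt_days
```

For example, a nodule that doubles its volume over a 100-day scan interval has a VDT of exactly 100 days.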

Automated Lung Segmentation on Chest Computed Tomography Images with Extensive Lung Parenchymal Abnormalities Using a Deep Neural Network

  • Seung-Jin Yoo;Soon Ho Yoon;Jong Hyuk Lee;Ki Hwan Kim;Hyoung In Choi;Sang Joon Park;Jin Mo Goo
    • Korean Journal of Radiology / v.22 no.3 / pp.476-488 / 2021
  • Objective: We aimed to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest computed tomography (CT) images. Materials and Methods: Thin-section non-contrast chest CT images from 203 patients (115 males, 88 females; age range, 31-89 years) between January 2017 and May 2017 were included in the study, of which 150 cases had extensive lung parenchymal disease involving more than 40% of the parenchymal area. Parenchymal diseases included interstitial lung disease (ILD), emphysema, nontuberculous mycobacterial lung disease, tuberculous destroyed lung, pneumonia, lung cancer, and other diseases. Five experienced radiologists manually drew the margin of the lungs, slice by slice, on CT images. The dataset used to develop the network consisted of 157 cases for training, 20 cases for development, and 26 cases for internal validation. Two-dimensional (2D) U-Net and three-dimensional (3D) U-Net models were used for the task. The network was trained to segment the lung parenchyma as a whole and segment the right and left lung separately. The University Hospitals of Geneva ILD dataset, which contained high-resolution CT images of ILD, was used for external validation. Results: The Dice similarity coefficients for internal validation were 99.6 ± 0.3% (2D U-Net whole lung model), 99.5 ± 0.3% (2D U-Net separate lung model), 99.4 ± 0.5% (3D U-Net whole lung model), and 99.4 ± 0.5% (3D U-Net separate lung model). The Dice similarity coefficients for the external validation dataset were 98.4 ± 1.0% (2D U-Net whole lung model) and 98.4 ± 1.0% (2D U-Net separate lung model). In 31 cases, where the extent of ILD was larger than 75% of the lung parenchymal area, the Dice similarity coefficients were 97.9 ± 1.3% (2D U-Net whole lung model) and 98.0 ± 1.2% (2D U-Net separate lung model). Conclusion: The deep neural network achieved excellent performance in automatically delineating the boundaries of lung parenchyma with extensive pathological conditions on non-contrast chest CT images.

4-Dimensional dose evaluation using deformable image registration in respiratory gated radiotherapy for lung cancer

  • Um, Ki Cheon;Yoo, Soon Mi;Yoon, In Ha;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.83-95 / 2018
  • Purpose: After planning respiratory-gated radiotherapy for lung cancer, the movement and volume change of spared normal structures near the target are often not considered during dose evaluation. This study carried out a 4-D dose evaluation that reflects the movement of normal structures at each phase of respiratory-gated radiotherapy, using the deformable image registration commonly employed for adaptive radiotherapy. Moreover, the study discusses the need for analysis, and establishes some recommendations, regarding normal structures' movement and volume change due to the patient's breathing pattern during the evaluation of treatment plans. Materials and methods: The subjects were 10 lung cancer patients who received respiratory-gated radiotherapy. Using Eclipse (ver. 13.6, Varian, USA), the structures seen in the top-phase CT image were set identically via the Propagation or Segmentation Wizard menu, and each structure's movement and volume were analyzed by the center-to-center method. Also, the image and dose distribution from each phase were deformed onto the top-phase CT image for 4-D dose evaluation via the VELOCITY program. In addition, using a QUASAR™ phantom (Modus Medical Devices) and GAFCHROMIC™ EBT3 film (Ashland, USA), the 4-D dose distribution was verified against the 4-D gamma pass rate. Results: The movement between the inspiration and expiration phases was largest in the axial direction of the right lung, at 0.989 ± 0.34 cm, and smallest in the lateral direction of the spinal cord, at -0.001 cm. The volume of the right lung showed the greatest rate of change, at 33.5%. The maximal and minimal differences in PTV conformity index and homogeneity index between 3-D and 4-D dose evaluation were 0.076 and 0.021, and 0.011 and 0.0, respectively. A difference of 0.0045-2.76% was determined in normal structures using 4-D dose evaluation. The 4-D gamma pass rate of every patient passed the 95% gamma pass rate reference. Conclusion: The PTV conformity index was more significant in all patients using 4-D dose evaluation, but no significant difference was observed between the two dose evaluations for the homogeneity index. The 4-D dose distribution showed a more homogeneous dose than the 3-D dose distribution, since considering the movement from breathing helps to fill out the PTV margin area. There was a difference of 0.004-2.76% in the 4-D evaluation of normal structures, and there was a significant difference between the two evaluation methods for all normal structures except the spinal cord. This study shows that normal structure doses could be underestimated by 3-D dose evaluation. Therefore, 4-D dose evaluation with deformable image registration should be considered when dose changes are expected in normal structures due to the patient's breathing pattern; it is a more realistic dose evaluation method because it reflects the movement of normal structures caused by the patient's breathing.
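The gamma pass rate used for film verification above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a reference point passes when its gamma index is at most 1. A simplified 1-D global gamma sketch follows (the clinical analysis is 2-D/3-D film-based, and the 3%/3 mm defaults here are illustrative, not the study's stated criteria):

```python
import numpy as np

def gamma_pass_rate(ref, eval_dose, spacing_mm, dta_mm=3.0, dd_pct=3.0):
    """1-D global gamma analysis: a reference point passes when some nearby
    evaluated point agrees within the combined dose-difference (dd_pct of
    the global max) and distance-to-agreement (dta_mm) criteria."""
    dd = dd_pct / 100.0 * ref.max()        # global dose-difference criterion
    x = np.arange(len(ref)) * spacing_mm   # spatial positions in mm
    passed = 0
    for xi, di in zip(x, ref):
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((eval_dose - di) / dd) ** 2
        gamma = np.sqrt(np.min(dist2 + dose2))  # minimize over evaluated points
        passed += int(gamma <= 1.0)
    return passed / len(ref)
```

Identical distributions give a pass rate of 1.0, since the gamma index is zero at every point.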


Effect of MRI Contrast Media on PET in PET/MRI

  • Kim, Jae Il;Kim, In Soo;Lee, Hong Jae;Kim, Jin Eui
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.19-25 / 2014
  • Purpose: Integrated PET/MRI, developed recently, has become very helpful in oncologic, neurologic, and cardiologic nuclear medicine. In PET/MRI, a μ-map is created from special MRI sequences that segment parts of the body for attenuation correction. However, because an MRI contrast agent is necessary in order to obtain more MRI information, we evaluated the effect on SUVs of PET images whose attenuation was corrected using contrast-enhanced MRI. Materials and Methods: A Biograph mMR (Siemens, Germany) was used as the PET/MRI machine. For the phantom test, 1 mCi of ¹⁸F-FDG was injected into a cylindrical uniformity phantom, and PET data were then acquired for about 10 minutes along with VIBE-DIXON and UTE MRI sequence images for attenuation correction. A T1-weighted contrast medium, 4 cc of DOTAREM (Guerbet, France), was injected into the same phantom, and PET and MRI data were acquired by the same methods. Using this PET data with the non-contrast and contrast MRI, attenuation-corrected PET images were reconstructed, and the difference in SUVs was evaluated. Additionally, to test a high density of contrast medium, two 500 cc plastic bottles were used: ¹⁸F-FDG with 5 cc of DOTAREM was injected into the first bottle, and only ¹⁸F-FDG into the second; the SUVs reconstructed by the same methods were then evaluated. For the clinical patient study, rectal cancer and pancreatic cancer patients were selected, and we evaluated SUVs of PET images whose attenuation was corrected with contrast-enhanced and non-contrast MRI. Results: In the phantom study, although the VIBE-DIXON MRI signal with contrast medium was 433% higher than with non-contrast MRI, the signal intensities of the μ-map and the attenuation-corrected PET were the same. In the case of high contrast-medium density, image distortion appeared on the μ-map and PET images. In the clinical patient study, the VIBE-DIXON MRI signal in the lesion increased by 495% when using DOTAREM, but there were no significant differences in the μ-map, non-AC PET, or AC PET images whether contrast medium was used or not. In the whole-body PET/MRI study, the % differences between contrast and non-contrast MRAC at the lung, liver, renal cortex, femoral head, myocardium, bladder, and muscle were -4.32%, -2.48%, -8.05%, -3.14%, 2.30%, 1.53%, and 6.49%, respectively. Conclusion: In integrated PET/MRI, a segmentation μ-map method is used to correct the attenuation of the PET signal. Although the MRI signal used for attenuation correction changes when contrast medium is used, the μ-map does not change, and therefore the MRAC PET signal does not change either. Thus, MRI contrast media do not affect attenuation-corrected PET. In addition, when designing a PET/MRI protocol, the order of the PET and MRI sequences does not matter, and it is possible to compare PET images before and after contrast agent injection.
