• Title/Summary/Keyword: Source Imaging

Study on the calibration phantom and metal artifacts using virtual monochromatic images from dual energy CT (듀얼 에너지 CT의 가상 단색 영상을 이용한 영상 교정 팬텀과 금속 인공음영에 관한 연구)

  • Lee, Jun seong;Lee, Seung hoon;Park, Ju gyung;Lee, Sun young;Kim, Jin ki
    • The Journal of Korean Society for Radiation Therapy, v.29 no.1, pp.77-84, 2017
  • Purpose: To evaluate the image quality improvement and dosimetric effects of virtual monochromatic images from a dual-source dual-energy CT (DS-DECT) for radiotherapy planning. Materials and Methods: Dual-energy (80/Sn 140 kVp) and single-energy (120 kVp) scans were obtained with a dual-source CT scanner. For the Catphan phantom study, virtual monochromatic images were reconstructed at 40-140 keV. For dosimetry, an analytical calculation implemented in the treatment planning system (TPS) was performed for a 10 MV, 10 × 10 cm² photon beam incident on a solid water-equivalent phantom containing stainless steel, and the dose profiles along the central axis at several depths were compared. The dosimetric consequences in the computed treatment plans were evaluated against plans based on polychromatic images at 120 kVp. Results: The differences were largest at the lower monochromatic energy levels. Measurements above 70 keV showed stable HU values for polystyrene and acrylic. The CT-to-electron-density (ED) conversion curve at 120 kVp was closest in shape to the curve at 80 keV. The 105 keV virtual monochromatic images reduced streak artifacts more effectively than other energies, although some residual artifacts remained in the corrected images. The dose-calculation variations in radiotherapy treatment planning did not exceed ±0.7%. Conclusion: Radiation doses with dual-energy CT imaging can be lower than those with single-energy CT imaging. The virtual monochromatic images were useful for correcting CT numbers, which can improve target coverage and the electron density distribution.
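
The abstract describes the imaging chain only at a high level; as a rough illustration, a virtual monochromatic image can be synthesized from an image-domain two-material decomposition of the low- and high-kVp attenuation maps, as in the minimal sketch below (the function, array layout, and basis coefficients are illustrative assumptions, not the authors' implementation, which may well operate in the projection domain or use vendor software).

```python
import numpy as np

def virtual_monochromatic_hu(mu_low, mu_high, basis_low, basis_high, basis_E, mu_water_E):
    """Synthesize a virtual monochromatic image from dual-energy attenuation maps.

    mu_low, mu_high : per-voxel linear attenuation maps (1/cm) at the low/high kVp settings
    basis_low/high  : 2-element arrays with the two basis materials' attenuation at those settings
    basis_E         : 2-element array with the basis materials' attenuation at the target energy E (keV)
    mu_water_E      : water attenuation at E, used for the HU conversion
    """
    # Solve the 2x2 decomposition  [mu_low, mu_high]^T = A @ [c1, c2]^T  for every voxel at once
    A = np.array([[basis_low[0], basis_low[1]],
                  [basis_high[0], basis_high[1]]])
    coeffs = np.linalg.solve(A, np.stack([mu_low.ravel(), mu_high.ravel()]))
    # Re-synthesize attenuation at the requested monochromatic energy and convert to HU
    mu_E = (basis_E[0] * coeffs[0] + basis_E[1] * coeffs[1]).reshape(mu_low.shape)
    return 1000.0 * (mu_E - mu_water_E) / mu_water_E
```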

The comparison of lesion localization methods in breast lymphoscintigraphy (Breast lymphoscintigraphy 검사 시 체표윤곽을 나타내는 방법의 비교)

  • Yeon, Joon ho;Hong, Gun chul;Kim, Soo yung;Choi, Sung wook
    • The Korean Journal of Nuclear Medicine Technology, v.19 no.2, pp.74-80, 2015
  • Purpose: Breast lymphoscintigraphy, which depicts lymph node metastasis of malignant tumors at an early stage, is performed before and after surgery in patients with breast cancer, and it is important to delineate the body surface precisely. In this study, we evaluated several methods of body-outline imaging for presenting the exact location of lesions and compared their respective exposure doses. Materials and Methods: A RANDO phantom and a SYMBIA T-16 were used to obtain the images. A lesion and an injection site were simulated by placing a 0.11 MBq point source on the axillary sentinel lymph node and a 37 MBq source on the right breast, respectively. In the first method, the body outline of the phantom was traced for 30 s with a Na⁹⁹ᵐTcO₄ point source. In the second, the image was acquired with a ⁵⁷Co flood source held for 30 s at the rear and left sides of the phantom. In the third, a syringe filled with 37 MBq of Na⁹⁹ᵐTcO₄ in 10 ml of saline was used. In the fourth, the photopeak and scatter energy windows of the ⁹⁹ᵐTc emitted from the phantom were used, without any additional radiation exposure. In the fifth, the SPECT/CT scout image was fused with the base image using MATLAB®. Anterior and lateral images were acquired for 3 min, and radiation exposure was measured with a personal dosimeter. Nuclear medicine physicians rated their preference among the 10 images in a survey. Results: The TBR values of the anterior and right lateral images for the first to fifth methods were 334.9 and 117.2 (1st), 266.1 and 124.4 (2nd), 117.4 and 99.6 (3rd), 3.2 and 7.6 (4th), and 565.6 and 141.8 (5th), and the corresponding exposure doses were 2, 2, 2, 0, and 30 μSv, respectively. Among the five methods, the fifth showed the highest TBR as well as the highest exposure dose, whereas the fourth showed the lowest TBR and the lowest exposure dose; in this study the fifth method performed best and the fourth worst. Conclusion: The SPECT/CT scout method is useful, providing the highest TBR values and the best survey score. Although the personal exposure dose of the SPECT/CT scout was higher than that of the other methods, it was slight compared with the 1 mSv dose limit for non-radiation workers. If the scout can be acquired at less than 80 kV, the exposure dose can be reduced further while still providing useful lesion localization.
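
The TBR values above are ratios of counts in a target region to counts in a background region; a minimal sketch of how such a ratio could be computed from a planar image and two ROI masks is shown below (the function and its inputs are hypothetical, not taken from the paper).

```python
import numpy as np

def tbr(image, target_mask, background_mask):
    """Target-to-background ratio: mean counts in the target ROI over mean counts in the background ROI."""
    return image[target_mask].mean() / image[background_mask].mean()

# Example with a synthetic 4x4 count image and two boolean ROI masks
image = np.array([[5, 5, 5, 5], [5, 90, 80, 5], [5, 70, 60, 5], [5, 5, 5, 5]], dtype=float)
target = image > 50
background = image <= 50
print(tbr(image, target, background))
```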

Evaluation of Usability Both Oblique Verification for Inserted Fiducial Marker of Prostate Cancer Patients (Fiducial Marker가 삽입된 전립선암 환자를 대상으로 한 양사방향 촬영의 유용성 평가)

  • Kim, Koon Joo;Lee, Jung Jin;Kim, Sung Gi;Lim, Hyun Sil;Kim, Wan Sun;Kang, Su Man
    • The Journal of Korean Society for Radiation Therapy, v.25 no.2, pp.123-129, 2013
  • Purpose: Imaging of implanted fiducial markers is used to verify positioning during treatment of prostate cancer patients, but with the existing verification method it is difficult to identify the exact marker positions behind the femur and pelvic bone. This study evaluated marker matching with a both-oblique verification method using kilovoltage (kV) X-ray imaging from the on-board imager (OBI). Materials and Methods: Five patients with rectal ballooning and implanted fiducial markers were selected. The fiducial marker positions relative to the reference plan were compared between the 2D/2D AP/LAT verification method and the 2D/2D both-oblique verification method, and the shifts along the X, Y, and Z axes, the exposure dose delivered to the patients, and the matching times were measured and compared. Results: Because the two kV OBI images used for 2D/2D matching are orthogonal, marker matching was clear, and matching the DRRs (digitally reconstructed radiographs) at OBI source angles of 45°/315° was the most useful; the 2D/2D both-oblique method clearly showed markers behind the pelvic bone, and the matching time was reduced accordingly. Across all treatment fractions for each patient, the mean value ± SD (standard deviation) of the shifts was X axis: 0.4 ± 1.67 mm (AP/LAT) and 0.4 ± 1.82 mm (oblique); Y axis: 0.7 ± 1.73 mm (AP/LAT) and 0.2 ± 1.77 mm (oblique); Z axis: 0.8 ± 1.94 mm (AP/LAT) and 1.5 ± 2.8 mm (oblique). The average kV X-ray exposure dose delivered to the patient was 0.1/2.1 cGy for AP/LAT matching and 0.27/0.26 cGy for 315°/45° matching. Conclusion: For prostate cancer patients with implanted fiducial markers, 2D/2D both-oblique matching provides more accurate verification than 2D/2D AP/LAT matching, takes less matching time, and exposes the patient to less radiation. Establishing a protocol for such cases would be useful and could improve the quality of treatment.
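
The per-axis mean ± SD figures above summarize the couch shifts recorded at every fraction; a small sketch of that tabulation is shown below, with invented shift values standing in for the per-fraction records.

```python
import numpy as np

def shift_stats(shifts_mm):
    """Return (mean, sample standard deviation) per axis for an (n_fractions, 3) array of X/Y/Z shifts in mm."""
    shifts_mm = np.asarray(shifts_mm, dtype=float)
    return shifts_mm.mean(axis=0), shifts_mm.std(axis=0, ddof=1)

# Illustrative values only: compare AP/LAT matching against 45/315-degree oblique matching
ap_lat_shifts = np.array([[0.5, 0.9, 1.1], [-1.2, 0.3, 0.4], [1.9, 1.0, 0.9]])
oblique_shifts = np.array([[0.2, 0.1, 1.8], [-0.8, 0.5, 2.1], [1.7, -0.1, 0.6]])
print(shift_stats(ap_lat_shifts))
print(shift_stats(oblique_shifts))
```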

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.1-19, 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education, much effort has been devoted to identifying current technology trends and analyzing their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and the technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. In addition, the spread of the technology owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which are developed through the online collaboration of many parties. We searched for and collected a list of major AI-related projects created on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that the number of software development projects per year was below 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of open source projects related to AI increased sharply in 2016 (2,559 projects); the 14,213 projects initiated in 2017 amounted to almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics, but after 2016 programming languages other than Python disappeared from the top ten; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging topics appeared at the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was being developed to apply AI technology in the medical field.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was then examined using the appearance frequency of topics together with degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases, and they had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
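
The topic statistics described above (appearance frequency and degree centrality of a topic co-occurrence network) can be reproduced in outline with networkx, assuming each project is reduced to its list of Github topics; the sample data below are invented and the code is only a sketch of the analysis, not the authors' pipeline.

```python
from collections import Counter
from itertools import combinations

import networkx as nx

# Each project is represented by the list of topics attached to it on Github (illustrative sample)
projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "natural-language-processing", "python"],
    ["deep-learning", "computer-vision", "keras"],
]

# Appearance frequency: how many projects carry each topic
frequency = Counter(topic for topics in projects for topic in topics)

# Topic co-occurrence network: an edge links two topics used in the same project
graph = nx.Graph()
for topics in projects:
    graph.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(graph)
print(frequency.most_common(5))
print(sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:5])
```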

Truncation Artifact Reduction Using Weighted Normalization Method in Prototype R/F Chest Digital Tomosynthesis (CDT) System (프로토타입 R/F 흉부 디지털 단층영상합성장치 시스템에서 잘림 아티팩트 감소를 위한 가중 정규화 접근법에 대한 연구)

  • Son, Junyoung;Choi, Sunghoon;Lee, Donghoon;Kim, Hee-Joung
    • Journal of the Korean Society of Radiology, v.13 no.1, pp.111-118, 2019
  • Chest digital tomosynthesis has become a practical imaging modality because it can solve the problem of overlapping anatomy in conventional chest radiography. However, because of the limited scan angle and the finite-size detector, a portion of the chest is not captured in some or all of the projections. This causes discontinuities in intensity across the field-of-view boundaries in the reconstructed slices, which we refer to as truncation artifacts. The purpose of this study was to reduce truncation artifacts using a weighted normalization approach and to investigate the performance of this approach for our prototype chest digital tomosynthesis system. The source-to-image distance was 1100 mm, and the center of rotation of the X-ray source was located 100 mm above the detector surface. After obtaining 41 projection views over ±20°, tomosynthesis slices were reconstructed with the filtered back-projection algorithm. For quantitative evaluation, the peak signal-to-noise ratio and structural similarity index were evaluated against a reference image reconstructed in simulation, and the mean value along a specific direction was evaluated using real data. The simulation results showed that both the peak signal-to-noise ratio and the structural similarity index improved, and the experimental results showed that the effect of the artifact on the mean value along the specific direction of the reconstructed image was reduced. In conclusion, the weighted normalization method improves image quality by reducing truncation artifacts, and these results suggest that it could improve the image quality of chest digital tomosynthesis.
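
The abstract does not spell out the weighted normalization itself; one common reading of the idea is to divide the summed backprojection by a per-voxel count of the views whose detector field of view actually covers that voxel, so partially covered regions are not artificially darkened. The sketch below illustrates that reading only (the array shapes and coverage masks are assumptions, not the prototype system's implementation).

```python
import numpy as np

def weighted_normalize(backprojection, coverage_masks, eps=1e-6):
    """Divide a summed backprojection by the fraction of views that actually see each voxel.

    backprojection : (ny, nx) sum of the (filtered) backprojected views
    coverage_masks : (n_views, ny, nx) booleans, True where a voxel falls inside
                     the detector field of view for that projection angle
    """
    weights = coverage_masks.sum(axis=0).astype(float)   # number of views covering each voxel
    weights /= coverage_masks.shape[0]                   # normalize to the range [0, 1]
    return backprojection / np.maximum(weights, eps)     # compensate partially covered voxels
```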

Ex vivo Morphometric Analysis of Coronary Stent using Micro-Computed Tomography (미세단층촬영기법을 이용한 관상동맥 스텐트의 동물 모델 분석)

  • Bae, In-Ho;Koh, Jeong-Tae;Lim, Kyung-Seob;Park, Dae-Sung;Kim, Jong-Min;Jeong, Myung-Ho
    • Journal of the Korean Society of Radiology, v.6 no.2, pp.93-98, 2012
  • Micro-computed tomography (microCT) is an important tool for preclinical vascular imaging, with micron-level resolution. This non-destructive imaging method allows rapid collection of 2D and 3D reconstructions to visualize specimens prior to destructive procedures such as pathological analysis. The aim of this study was to propose a method for ex vivo, postmortem examination of stented arterial segments with microCT, and to evaluate bare metal and drug-eluting stents with respect to in-stent restenosis (ISR) in a rabbit model. A bare metal stent (BMS) and a drug-eluting stent (DES, paclitaxel) were implanted alternately in the left or right iliac arteries of eight New Zealand white rabbits. Four weeks after implantation, the segments of the iliac arteries surrounding the stents were carefully removed and processed for microCT, and a contrast medium was loaded into the stent lumen prior to scanning. All samples were imaged with an X-ray source operating at 50 kV and 200 μA at 3D isotropic resolution, and the region of interest was traced and measured with the CTAn analysis software. The imaged regions showed distinct Hounsfield unit values of approximately 1.2 in the stent area, 0.12-0.17 in the contrast medium, and 0-0.06 outside the stent, and further analyses were performed on this basis. The differences in length and volume between the expanded stents, which may relate to the injury score in pathological analysis, were not significant. The ISR area of the BMS (1.52 ± 0.48 mm²) was about 1.6 times larger than that of the DES (0.94 ± 0.42 mm²), indicating that paclitaxel inhibits cell proliferation and prevents infiltration of restenosis into the stent lumen. Although not statistically significant, the cross-sectional 2D images showed that the extent of neointima in the mid-region of the BMS was relatively greater than in its anterior and posterior regions. These results suggest that microCT can be utilized as a supplementary tool for pathological analysis.
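
Given the distinct value bands reported above (about 1.2 for the struts, 0.12-0.17 for the contrast-filled lumen, below 0.06 outside the stent), a simple threshold-based segmentation of each cross-sectional slice is one way such area measurements could be made; the sketch below is a hypothetical illustration, not the CTAn workflow used in the study.

```python
import numpy as np

def stent_cross_section_areas(slice_img, voxel_area_mm2):
    """Classify a reconstructed microCT slice into strut / contrast-filled lumen / background
    using the value bands quoted in the abstract, and return their areas in mm^2."""
    strut = slice_img > 1.0                             # metal struts (~1.2)
    lumen = (slice_img > 0.12) & (slice_img <= 0.17)    # contrast medium in the patent lumen
    background = slice_img <= 0.06                      # outside the stent
    return {name: mask.sum() * voxel_area_mm2
            for name, mask in [("strut", strut), ("lumen", lumen), ("background", background)]}
```

Under this reading, the ISR area of a slice could then be estimated as the area enclosed by the struts minus the contrast-filled lumen area.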

Usefulness of Acoustic Noise Reduction in Brain MRI Using Quiet-T2 (뇌 자기공명영상에서 Quiet-T2 기법을 이용한 소음감소의 유용성)

  • Lee, SeJy;Kim, Young-Keun
    • Journal of radiological science and technology, v.39 no.1, pp.51-57, 2016
  • Acoustic noise during magnetic resonance imaging (MRI) is a main source of patient discomfort. We report our preliminary experience with the Quiet-T2 (Q-T2) technique in neuroimaging with regard to subjective and objective noise levels and image quality. Sixty patients (29 males, 31 females, mean age 60.1 years) underwent routine brain MRI on a 3.0 Tesla system (MAGNETOM Tim Trio; Siemens, Germany) with a 12-channel head coil, and both Q-T2 and T2 sequences were performed. Sound pressure levels (SPL) and heart rate were measured during Q-T2 and T2, respectively. Quantitative analysis was carried out by measuring the SNR, CNR, and SIR values of Q-T2 and T2, with statistical analysis using an independent-samples t-test. Qualitative analysis of the overall image quality of Q-T2 and T2 was performed visually on a 5-point scale: excellent (5), good (4), fair (3), poor (2), and unacceptable (1). The average and peak noise decreased by 15 dB(A) and 10 dB(A), respectively, from T2 to Q-T2. The average heart rate over the 120 seconds of each scan was also lower during Q-T2, but the difference was not statistically significant. The quantitative analysis showed no significant difference in CNR and SIR, while SNR was significantly lower on Q-T2 (p<0.05). In the qualitative analysis, the overall image quality of T2 and Q-T2 was rated excellent (5 points) in 59 cases and good (4 points) in 1 case due to a motion artifact. Q-T2 is a promising technique for acoustic noise reduction and improved patient comfort.
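
The quantitative metrics used above are standard ROI-based measures; a minimal sketch of how SNR, CNR, and SIR could be computed from ROI pixel values is given below (the ROI definitions are assumptions, since the paper's exact ROIs are not stated in the abstract).

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal over the standard deviation of a background ROI."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissue ROIs, relative to background noise."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

def sir(roi_a, roi_b):
    """Signal intensity ratio between two tissue ROIs."""
    return np.mean(roi_a) / np.mean(roi_b)
```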

Seismic Imaging of a Tidal Flat: A Case Study for the Mineopo Area (조간대(갯벌)에서의 탄성파 탐사: 민어포 지역의 사례)

  • Jou, Hyeong-Tae;Kim, Han-Joon;Lee, Gwang-Hoon;Lee, Sang-Hoon;Jung, Baek-Hoon;Cho, Hyun-Moo;Jang, Nam-Do
    • Geophysics and Geophysical Exploration, v.11 no.3, pp.197-203, 2008
  • A shallow high-resolution seismic reflection survey was carried out at the Mineopo tidal flat on the western coast of Korea. The purpose of the survey was to investigate the shallow sedimentary structure of the tidal flat associated with recent sea level change. A total of 795 shots were generated at 1 m intervals from a 5-kg hammer source and recorded on 48 channels of 100 Hz geophones along two mutually perpendicular profiles. The water-saturated ground suppressed ground roll by significantly decreasing rigidity, and seismic velocities over 1500 m/s allowed easy separation of reflected arrivals from lower-velocity noise. As a consequence, the seismic sections from the study area show significantly higher resolution and signal-to-noise ratio than conventional land seismic sections. The tidal flat consists of five sedimentary sequences above the acoustic basement, and the seismic sections reveal the continuous structure of the tidal flat formed in association with sea level rise during the Holocene.

Development of a MTF Measurement System for an Infrared Optical System (적외선 광학계용 MTF 측정장치 개발)

  • Son, Byoung-Ho;Lee, Hoi-Yoon;Song, Jae-Bong;Yang, Ho-Soon;Lee, Yun-Woo
    • Korean Journal of Optics and Photonics, v.26 no.3, pp.162-167, 2015
  • In this paper, we describe the development of an MTF (modulation transfer function) measurement system for infrared optics that uses a knife-edge scanning method. It consists of an objective part that generates the target image, a collimator that makes the beam parallel, and a detector part that analyzes the image. A tungsten filament was used as the light source, and an MCT (mercury cadmium telluride) detector was used for the mid-infrared (wavelength 3-5 μm) image. We measured the MTF of a standard lens (f=5, ZnSe) to test the instrument and compared the result with the theoretical value calculated with the ZEMAX commercial software; the difference was within ±0.035 at the cut-off frequency (50 1/mm). We also calculated the Type A measurement uncertainty to check the reliability of the measurement; the result was only 0.002 at a spatial frequency of 20 1/mm, indicating very little variation in the MTF measurement under the same conditions.
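
The abstract does not include the data reduction; the usual knife-edge processing chain differentiates the measured edge spread function into a line spread function and takes its Fourier magnitude, roughly as in the sketch below (a simplified illustration under assumed uniform sampling, not the instrument's actual software).

```python
import numpy as np

def mtf_from_edge_scan(positions_mm, esf):
    """Compute the MTF from a knife-edge scan.

    positions_mm : uniformly spaced knife-edge positions (mm)
    esf          : measured edge spread function (detector signal) at those positions
    Returns (spatial frequencies in cycles/mm, normalized MTF).
    """
    dx = positions_mm[1] - positions_mm[0]
    lsf = np.gradient(esf, dx)               # line spread function = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)         # mild window to suppress noise at the tails
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx)  # cycles per mm
    return freqs, mtf / mtf[0]               # normalize to unity at zero frequency
```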

Characterization of New Avalanche Photodiode Arrays for Positron Emission Tomography

  • Song, Tae-Yong;Park, Yong;Chung, Yong-Hyun;Jung, Jin-Ho;Jeong, Myung-Hwan;Min, Byung-Jun;Hong, Key-Jo;Choe, Yearn-Seong;Lee, Kyung-Han
    • Proceedings of the Korean Society of Medical Physics Conference, 2003.09a, pp.45-45, 2003
  • The aim of this study was the characterization and performance validation of new prototype avalanche photodiode (APD) arrays for positron emission tomography (PET). Two different APD array prototypes (denoted A and B) developed by Radiation Monitoring Devices (RMD) were investigated, and the principal characteristics of the two arrays were measured and compared. To characterize and evaluate APD performance, the capacitance, doping concentration, quantum efficiency, gain, and dark current were measured. The doping concentration, which shows the impurity distribution within an APD pixel as a function of depth, was derived from the relationship between capacitance and bias voltage. Quantum efficiency was measured using a mercury vapor light source and a monochromator to select wavelengths in the range of 300 to 700 nm; these measurements were made at 500 V, at which the APD gain is unity. For the gain measurements, a 450 nm pencil beam illuminated the center of each pixel. The APD dark currents were measured as a function of gain and bias, and a linear fitting method was used to determine the surface and bulk leakage currents. The mean quantum efficiencies measured at 400 and 450 nm were 0.41 and 0.54 for array A and 0.50 and 0.65 for array B. The mean gain at a bias voltage of 1700 V was 617.6 for array A and 515.7 for array B. The surface and bulk leakage currents obtained from the linear fits were 0.08 ± 0.02 nA and 38.40 ± 6.26 nA, 0.08 ± 0.01 nA and 36.87 ± 5.19 nA, and 0.05 ± 0.00 nA and 21.80 ± 1.30 nA for arrays A and B, respectively. The characterization results demonstrate the importance of performance measurements for validating the capability of APD arrays as detectors for PET imaging.
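
The linear fitting mentioned above is consistent with the standard dark-current decomposition I_dark = I_surface + M × I_bulk, where M is the gain; a minimal least-squares sketch under that assumed model is shown below (the paper's exact fitting procedure is not given in the abstract).

```python
import numpy as np

def leakage_components(gain, dark_current_nA):
    """Separate surface and bulk leakage by fitting I_dark = I_surface + M * I_bulk.

    gain            : measured APD gain M at each bias point
    dark_current_nA : measured dark current at the same bias points (nA)
    Returns (I_surface, I_bulk) in nA.
    """
    bulk, surface = np.polyfit(gain, dark_current_nA, 1)  # slope = I_bulk, intercept = I_surface
    return surface, bulk

# Example with invented measurements: dark current rising roughly linearly with gain
gains = np.array([100.0, 200.0, 400.0, 600.0])
dark = np.array([12.0, 22.5, 43.0, 63.5])
print(leakage_components(gains, dark))
```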
