• Title/Summary/Keyword: Image quality


Evaluation of TOF MR Angiography and Imaging for the Half Scan Factor of Cerebral Artery (유속신호증강효과의 자기공명혈관조영술을 이용한 뇌혈관검사에서 Half Scan Factor 적용한 영상 평가)

  • Choi, Young Jae;Kweon, Dae Cheol
    • Journal of the Korean Magnetics Society
    • /
    • v.26 no.3
    • /
    • pp.92-98
    • /
    • 2016
  • The aim of this study was to compare full-scan and half-scan imaging acquired with a half scan factor in TOF MR angiography. Thirty patients without cerebrovascular disease (n = 30) underwent both the full scan and the half scan, and regions of interest 7 to 8 mm in size were placed on the cerebral artery at three levels (C1, C2, C3). Images were reconstructed with maximum intensity projection (MIP), and SNR (signal-to-noise ratio), PSNR (peak signal-to-noise ratio), RMSE (root mean square error), and MAE (mean absolute error) were calculated and compared with a paired t-test. Scan time was 4 minutes 53 seconds for the half scan and 6 minutes 04 seconds for the full scan. With a mean ROI size of 7.21 mm across the cerebral vessels, the SNR at C1 was 58.66 dB for the full scan and 62.10 dB for the half scan, a positive correlation ($r^2 = 0.503$); at C2, 70.30 dB for the full scan and 74.67 dB for the half scan ($r^2 = 0.575$); and at C3, 70.33 dB for the full scan and 74.64 dB for the half scan ($r^2 = 0.523$). Comparison of the full scan with the half scan gave SNR $4.75 \pm 0.26$ dB, PSNR $21.87 \pm 0.28$ dB, RMSE $48.88 \pm 1.61$, and MAE $25.56 \pm 2.2$. Image quality did not differ significantly between the two scan methods (p > .05), so the half scan, which takes less time than the full scan, can be used.
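As an illustration of the image-quality metrics reported above, the sketch below computes ROI-based SNR, PSNR, RMSE, and MAE between two co-registered MIP images and runs a paired t-test on per-subject SNR values. It assumes the images are available as NumPy arrays; all array and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def roi_snr_db(image, roi_mask, background_mask):
    """SNR in dB: mean ROI signal over the standard deviation of a background region."""
    signal = image[roi_mask].mean()
    noise = image[background_mask].std()
    return 20 * np.log10(signal / noise)

def psnr_db(reference, test):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(reference.max() ** 2 / mse)

def rmse(reference, test):
    return np.sqrt(np.mean((reference.astype(float) - test.astype(float)) ** 2))

def mae(reference, test):
    return np.mean(np.abs(reference.astype(float) - test.astype(float)))

# Paired comparison of per-subject SNR from the two protocols (placeholder arrays)
# snr_full, snr_half = np.array([...]), np.array([...])
# t_stat, p_value = stats.ttest_rel(snr_full, snr_half)
```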

The Study on the Reduction of Patient Surface Dose Through the use of Copper Filter in a Digital Chest Radiography (디지털 흉부 촬영에서 구리필터사용에 따른 환자 표면선량 감소효과에 관한 연구)

  • Shin, Soo-In;Kim, Chong-Yeal;Kim, Sung-Chul
    • Journal of radiological science and technology
    • /
    • v.31 no.3
    • /
    • pp.223-228
    • /
    • 2008
  • The most critical point in the medical use of radiation is to minimize the patient's entrance dose while maintaining diagnostic image quality. Low-energy photons (long-wavelength X-rays) in the diagnostic beam are unnecessary because they are mostly absorbed by the patient and only increase the entrance dose, and the most effective way to remove them is added filtration. In this study, image quality and skin entrance dose were evaluated with and without a 0.3 mmCu (copper) filter. A total of 80 images were prepared as two sets of 40. In the first set, 20 images were acquired without the filter and 20 with the Cu filter as the signal-plus-noise set; in the second set, 20 images without the filter and 20 with the Cu filter served as the signal-absent (noise) set. For the ROC signal, an acrylic disc 4 mm in diameter and 3 mm thick was placed at random locations on the chest phantom. P(S/s) and P(S/n) were calculated, the ROC curve was described in terms of sensitivity and specificity, and accuracy was evaluated after reading by five radiologists. The number of observable lesions was counted with an ANSI chest phantom and a contrast-detail phantom, following AAPM recommendations, with and without the Cu filter, and the skin entrance dose was measured under both conditions. With the Cu filter, the ROC curve was located toward the upper left, and sensitivity, accuracy, and the number of detected CD phantom lesions remained acceptable while the skin entrance dose was reduced; therefore, additional filtration should be considered in many other examinations.
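For context, the sketch below shows one conventional way to turn reader confidence ratings into the hit rate P(S/s), false-alarm rate P(S/n), and an empirical ROC curve. The rating values are placeholders, not the study's data, and the variable names are illustrative; the study itself used five radiologists' readings rather than these invented numbers.

```python
import numpy as np

# Five-point confidence ratings for signal-present and signal-absent images (placeholders)
ratings_signal = np.array([5, 4, 5, 3, 4, 2, 5, 4, 3, 5])
ratings_noise = np.array([1, 2, 1, 3, 2, 1, 2, 4, 1, 2])

thresholds = np.arange(5, 0, -1)                                      # strictest to most lenient
tpr = np.array([(ratings_signal >= t).mean() for t in thresholds])    # P(S/s) = sensitivity
fpr = np.array([(ratings_noise >= t).mean() for t in thresholds])     # P(S/n) = 1 - specificity

# Close the curve at (0, 0) and integrate with the trapezoidal rule
fpr = np.concatenate(([0.0], fpr))
tpr = np.concatenate(([0.0], tpr))
auc = np.trapz(tpr, fpr)
print(f"empirical AUC = {auc:.3f}")
```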


A study of the destructive styles from Contemporary Paintings - Focused on distinguishing enmity-destruction and self-destruction - (현대회화에서 드러난 해체의 형식론에 관한 연구 -타의적 해체와 자의적 해체의 성격규정을 중심으로-)

  • Park Ki-Woong
    • Journal of Science of Art and Design
    • /
    • v.7
    • /
    • pp.5-63
    • /
    • 2005
  • In general, destruction means demolition, breakdown, or reduction to fragments, with related senses such as twisting, crushing, splitting, cutting, and breaking up; it is also associated with the cruelty and destructive impulses linked to orgy, sadism, and necrophilia. These meanings are expressed by words beginning with the prefix de-, such as deprivation, deface, defame, deform, degrade, delegitimize, denounce, deride, destroy, devalue, as well as debase, debunk, declaim, declassify, decry, delete, denigrate, deprecate, despise, and detract. Dario Gamboni discussed this field in his book under two categories, iconoclasm and vandalism, and similar meanings can be found in words beginning with ab-, such as abase, abate, abhor, abjure, abolish, abridge, and abuse. Although iconoclasm and vandalism are conceptually distinct, they are not easy to tell apart in contemporary painting because their results look similar. Korean vocabulary has no separate words to distinguish destruction from deconstruction, and deconstruction is not recorded in general dictionaries; its sense is rather diminution, separation, and contrast, so the compound can be read as not-construction, minus-construction, or reverse construction. Vincent Ditch explained it as the destruction of the text. For Jacques Derrida, the deconstructive strategy criticizes the world of traditional metaphysics and logocentrism and seeks not to reconstruire the philosophical meaning of texts but to déconstruire them. Saussure emphasized that a signifier can carry more than one signified in traditional texts of art; as a result, deconstruction can be explained as the coexistence of many signified meanings in a single signifier. Starting from the meanings of destruction and deconstruction, this thesis distinguishes the expressive devices found in contemporary art works and selects the particular methods linked to destructive styles. Two different purposes of destruction emerge: enmity destruction and self-destruction (in other words, auto-destruction, or destruction in order to create). Enmity destruction can be divided into the two categories of iconoclasm and vandalism; they arise at different historical moments and attack the icon or the masterpiece. This framework follows the study of John Philips: iconoclasm is linked with religious and artistic impulses, whereas vandalism stems from political attack, although the two are often intertwined and cannot always be clearly separated. First, the iconoclastic impulse appears in the methods of Dadaism developed by Man Ray, Francis Picabia, and Marcel Duchamp, who attacked the pre-established masterpieces and pictorial spaces and adopted a 'non-artistic attitude'. Since 1980 the German artist Anselm Kiefer has adapted these methods into his own devices, painting with rough brushstrokes and huge palette image lines and with images of fire and water, which can be read as iconoclastic attacks on the image. Second, the model of vandalism attacks the content of the painting with hammer, drill, cannon, and similar tools. Further, the object of destruction may be bound with cords and iron rings to demolish, or to declare against, the authority of the pre-existing statue; this symbolizes that the old authority is already gone. Paintings based on self-destruction differ clearly in the purpose with which the work is approached. They include auto-destruction, creative destruction, and metamorphic destruction, which concern technique, material, and basic stature, and sign (signifier) destruction, which concerns the destruction of inner meaning and can be regarded as semiotic destruction in post-modern painting. Since 1960 auto-destruction has been based on firing, melting, grinding, and similar techniques linked with Neo-Dada and reverse assemblage. Metamorphic destruction works on the basic inner value and quality of a material, overturning expectation and recognition: Tony Cragg developed techniques that make wood read as stone, iron as cloth, or stone as sponge and rubber, and the researcher has developed the same style in a series of works since 2003. The other self-destructive method is sign destruction, in which the meaning of the work is not fixed as one or two but multiplies and drifts from its point of origin, which Jacques Derrida described as the difference of meaning in deconstruction, the destruction of texts. These methods are pursued by David Salle, Francesco Clemente, and more recently Tracey Emin, who works on the attacking impulse in the spectator's emotions. Self-destruction is sometimes based on horror and shock, as explored by Damien Hirst and Jake and Dinos Chapman, whose destructive styles stimulate ambivalent feelings and destroy the original signs of girls and animals.


An Estimation of Concentration of Asian Dust (PM10) Using WRF-SMOKE-CMAQ (MADRID) During Springtime in the Korean Peninsula (WRF-SMOKE-CMAQ(MADRID)을 이용한 한반도 봄철 황사(PM10)의 농도 추정)

  • Moon, Yun-Seob;Lim, Yun-Kyu;Lee, Kang-Yeol
    • Journal of the Korean earth science society
    • /
    • v.32 no.3
    • /
    • pp.276-293
    • /
    • 2011
  • In this study, a modeling system consisting of the Weather Research and Forecasting (WRF) model, Sparse Matrix Operator Kernel Emissions (SMOKE), the Community Multiscale Air Quality (CMAQ) model, and the CMAQ-Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) was applied to estimate enhancements of $PM_{10}$ during Asian dust events in Korea. In particular, five experimental formulas were applied in WRF-SMOKE-CMAQ (MADRID) to estimate Asian dust emissions at the major source regions in China and Mongolia: the US Environmental Protection Agency (EPA) model, the Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model, and the Dust Entrainment and Deposition (DEAD) model, as well as the formulas of Park and In (2003) and Wang et al. (2000). According to the weather map, backward trajectory, and satellite image analyses, Asian dust is generated by strong downward winds associated with the upper trough behind a stagnant wave caused by development of the upper jet stream, and transport of Asian dust to Korea appears behind a surface front related to a cut-off low (the comma-shaped cloud in satellite images). In the WRF-SMOKE-CMAQ simulations of the $PM_{10}$ concentration, the experimental formula of Wang et al. reproduced the temporal and spatial distribution of Asian dust well, and the GOCART model gave low mean bias error and root mean square error. In the vertical-profile analysis using the formula of Wang et al., strong Asian dust with concentrations above $800\;{\mu}g/m^3$ during March 31 to April 1, 2007 was transported below the boundary layer (about 1 km high), whereas weak Asian dust with concentrations below $400\;{\mu}g/m^3$ during 16-17 March 2009 was transported above the boundary layer (about 1-3 km high). Furthermore, for March 31 to April 1, 2007, the difference in $PM_{10}$ concentration between the CMAQ model and the CMAQ-MADRID model was large over East Asia: CMAQ-MADRID was about $25\;{\mu}g/m^3$ higher than CMAQ. In addition, the $PM_{10}$ removed by the cloud liquid-phase mechanism within the CMAQ-MADRID model reached a maximum of about $15\;{\mu}g/m^3$ over East Asia.
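The model-evaluation statistics mentioned above (mean bias error and root mean square error against observed $PM_{10}$) reduce to two short formulas; a minimal sketch follows, with placeholder arrays standing in for one emission scheme's output and the corresponding observations.

```python
import numpy as np

def mean_bias_error(modeled, observed):
    """MBE: average of (model - observation); positive means the model overpredicts."""
    return float(np.mean(np.asarray(modeled) - np.asarray(observed)))

def root_mean_square_error(modeled, observed):
    """RMSE between modeled and observed PM10 concentrations (same units as input)."""
    diff = np.asarray(modeled) - np.asarray(observed)
    return float(np.sqrt(np.mean(diff ** 2)))

# Placeholder hourly PM10 series in ug/m3 (not the study's data)
observed = np.array([350.0, 520.0, 810.0, 640.0])
modeled_gocart = np.array([300.0, 480.0, 760.0, 600.0])
print(mean_bias_error(modeled_gocart, observed), root_mean_square_error(modeled_gocart, observed))
```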

Error Analysis of Delivered Dose Reconstruction Using Cone-beam CT and MLC Log Data (콘빔 CT 및 MLC 로그데이터를 이용한 전달 선량 재구성 시 오차 분석)

  • Cheong, Kwang-Ho;Park, So-Ah;Kang, Sei-Kwon;Hwang, Tae-Jin;Lee, Me-Yeon;Kim, Kyoung-Joo;Bae, Hoon-Sik;Oh, Do-Hoon
    • Progress in Medical Physics
    • /
    • v.21 no.4
    • /
    • pp.332-339
    • /
    • 2010
  • We aimed to set up an adaptive radiation therapy platform using cone-beam CT (CBCT) and multileaf collimator (MLC) log data, and to analyze the trend of dose calculation errors in this procedure through a phantom study. We acquired CT and CBCT images of the Catphan-600 phantom (The Phantom Laboratory, USA) and made a simple step-and-shoot intensity-modulated radiation therapy (IMRT) plan based on the CT. The original plan doses were recalculated on the CT ($CT_{plan}$) and on the CBCT ($CBCT_{plan}$). The delivered monitor-unit weights and leaf positions for each MLC segment were extracted from the MLC log data, and the delivered doses were then reconstructed on the CT ($CT_{recon}$) and the CBCT ($CBCT_{recon}$) using this information. Dose calculation errors were evaluated with two-dimensional dose differences (with $CT_{plan}$ as the benchmark), the gamma index, and dose-volume histograms (DVHs). The dose differences and DVHs indicated that the delivered dose was slightly greater than the planned dose, but the difference was insignificant. The gamma index results showed that dose calculation errors on the CBCT, with either planned or reconstructed data, were larger than those of the CT-based calculations. In addition, there were noticeable discrepancies at the edge of each beam, although these were smaller than the errors due to the inconsistency between CT and CBCT. $CBCT_{recon}$ showed the coupled effect of these two kinds of errors; the total error decreased, even though the overall uncertainty in evaluating the delivered dose on the CBCT increased. Therefore, dose calculation errors need to be evaluated separately as setup error, dose calculation error due to CBCT image quality, and reconstructed-dose error, the last of which is the quantity of actual interest.
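The gamma index used above combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. The sketch below is a brute-force global 2D implementation under assumed criteria of 3%/3 mm; the array names, pixel spacing, and criteria are illustrative and not taken from the paper.

```python
import numpy as np

def gamma_index_2d(ref_dose, eval_dose, spacing_mm, dose_crit=0.03, dta_mm=3.0):
    """Brute-force global 2D gamma: for every reference pixel, minimize
    sqrt((dose difference / dose criterion)^2 + (distance / DTA)^2) over the evaluated plane."""
    ny, nx = ref_dose.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    yy = yy * spacing_mm
    xx = xx * spacing_mm
    dose_norm = dose_crit * ref_dose.max()      # global normalization to the maximum dose
    gamma = np.empty_like(ref_dose, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist_term = ((yy - iy * spacing_mm) ** 2 + (xx - ix * spacing_mm) ** 2) / dta_mm ** 2
            dose_term = ((eval_dose - ref_dose[iy, ix]) / dose_norm) ** 2
            gamma[iy, ix] = np.sqrt(np.min(dist_term + dose_term))
    return gamma

# Pass rate: fraction of points with gamma <= 1, e.g.
# pass_rate = (gamma_index_2d(ct_plan_dose, cbct_recon_dose, spacing_mm=2.5) <= 1).mean()
```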

Biceps Femoris Tendon and Lateral Collateral Ligament: Analysis of Insertion Pattern Using MRI (대퇴이두건과 외측 측부인대: 자기공명영상을 이용한 부착형태 유형의 분석)

  • Shin, Yun Kyung;Ryu, Kyung Nam;Park, Ji Seon;Lee, Jung Eun;Jin, Wook;Park, So Young;Yoon, So Hee;Lee, Kyung Ryeol
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.3
    • /
    • pp.225-231
    • /
    • 2014
  • Purpose: The biceps femoris tendon (BFT) and lateral collateral ligament (LCL) of the knee were formerly thought to form a conjoined tendon at the fibular attachment site. However, the BFT and LCL attach to the fibular head in various patterns. We classified the insertion patterns of the BFT and LCL on MR imaging and analyzed whether the LCL attaches to the fibular head. Materials and Methods: A total of 494 consecutive knee MRIs of 470 patients, taken between July 2012 and December 2012, were retrospectively reviewed. There were 224 males and 246 females, and patient age ranged from 10 to 88 years (mean, 48.6). The exclusion criteria were previous surgery and poor image quality. Using 3T fat-suppressed proton density-weighted axial images, the fibular insertion patterns of the BFT and LCL were classified into the following types: type I (the LCL passes between the anterior arm and the direct arm of the long head of the BFT), type II (the LCL joins the anterior arm of the long head of the BFT), type III (the BFT and LCL join to form a conjoined tendon), type IV (the LCL passes laterally around the anterior margin of the BFT), and type V (the LCL passes posterior to the direct arm of the long head of the BFT). Results: Among the 494 knee MRIs, there were 433 (87.65%) type I cases, 21 (4.25%) type II, 2 (0.4%) type III, 16 (3.23%) type IV, and 22 (4.45%) type V. In 26 cases (5.26%) the LCL and BFT were not attached to the fibular head. Conclusion: The fibular attachment pattern of the BFT and LCL shows diverse types on MR imaging, and in some patients the LCL does not attach to the fibular head.

Radiopharmaceutical Factors in the Preparation of $^{99m}Tc-HMPAO$ Images of the Brain (뇌스캔용 $^{99m}Tc-HM-PAO$의 방사성 동위원소표지에 영향을 미치는 인자에 대한 연구)

  • Yeom, Mi-Kyoung;Kim, Sang-Eun;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.25 no.1
    • /
    • pp.117-121
    • /
    • 1991
  • Technetium-99m-hexamethylpropyleneamine oxime ($^{99m}Tc$-HM-PAO) is a neutral, lipophilic chelate used for cerebral blood flow scanning. The labeling efficiency of $^{99m}Tc$-HM-PAO is known to be sensitive to the amount and the quality of the pertechnetate added. For this reason the manufacturer recommends that HM-PAO kits be reconstituted with a maximum of 30 mCi of pertechnetate eluted less than 4 hr earlier from a generator that had been eluted within the previous 24 hr. We therefore measured the labeling efficiency and the decomposition rate constant according to the amount of pertechnetate added, the volume of pertechnetate added, and the generator in-growth time, using the 3-system chromatographic method (paper and ITLC-SG chromatography) to analyze the labeling efficiency of $^{99m}Tc$-HM-PAO. There was no significant difference in labeling efficiency among the pertechnetate activities added ($39.9 \pm 4.9$ mCi: $87.8 \pm 5.1\%$; $60.8 \pm 5.0$ mCi: $90.7 \pm 2.2\%$; $79.0 \pm 6.0$ mCi: $86.8 \pm 3.9\%$; $106.6 \pm 11.6$ mCi: $87.7 \pm 1.2\%$; p > 0.05). No significant difference in labeling efficiency was found between pertechnetate volumes of 4 ml and 5 ml (4 ml: $89.1 \pm 3.2\%$; 5 ml: $87.3 \pm 4.0\%$; p > 0.05), nor between generator in-growth times of 1-6 hr and 10-48 hr (1-6 hr: $87.8 \pm 4.0\%$; 10-48 hr: $89.6 \pm 1.6\%$; p > 0.05). The mean decomposition rate constant was $0.196 \pm 0.097\;hr^{-1}$, and it did not differ with the amount or volume of pertechnetate added ($39.9 \pm 4.9$ mCi: $0.208 \pm 0.059\;hr^{-1}$; $60.8 \pm 5.0$ mCi: $0.191 \pm 0.100\;hr^{-1}$; $79.0 \pm 6.0$ mCi: $0.192 \pm 0.118\;hr^{-1}$; $106.6 \pm 11.6$ mCi: $0.212 \pm 0.030\;hr^{-1}$; p > 0.05; 4 ml: $0.200 \pm 0.074\;hr^{-1}$; 5 ml: $0.193 \pm 0.115\;hr^{-1}$; p > 0.05). When the first eluate was used, the labeling efficiency of $^{99m}Tc$-HM-PAO was 82.1%. These data suggest that there is no significant change in the labeling efficiency of $^{99m}Tc$-HM-PAO over a considerable range of pertechnetate activity, volume, and generator in-growth time. They also show that one vial of HM-PAO kit can supply $^{99m}Tc$-HM-PAO for 3-4 patients.
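A decomposition rate constant of this kind is consistent with first-order decay of the lipophilic complex after reconstitution; the minimal sketch below shows how such a constant might be fitted from labeling efficiency measured at several time points. The time points and efficiencies are placeholders, not the study's measurements.

```python
import numpy as np

# Labeling efficiency (%) at times (hr) after kit reconstitution; placeholder values only
t_hr = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
efficiency = np.array([90.0, 82.0, 75.0, 62.0, 42.0])

# First-order decay: eff(t) = eff(0) * exp(-k t), so ln(eff) is linear in t with slope -k
slope, intercept = np.polyfit(t_hr, np.log(efficiency), 1)
k = -slope
print(f"fitted decomposition rate constant k = {k:.3f} hr^-1")
```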


A Computer Simulation for Small Animal Iodine-125 SPECT Development (소동물 Iodine-125 SPECT 개발을 위한 컴퓨터 시뮬레이션)

  • Jung, Jin-Ho;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jeong, Myung-Hwan;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.1
    • /
    • pp.74-84
    • /
    • 2004
  • Purpose: Since I-125 emits low-energy (27-35 keV) radiation, a thinner crystal and collimator can be employed, which is favorable for obtaining high-quality images. The purpose of this study was to derive optimized parameters for I-125 SPECT using a new simulation tool, GATE (Geant4 Application for Tomographic Emission). Materials and Methods: To validate the simulation method, the gamma camera developed by Weisenberger et al. was modeled. A NaI(Tl) plate crystal was used, and its thickness was determined by calculating the detection efficiency. Spatial resolution and sensitivity curves were estimated over a range of parameters for parallel-hole and pinhole collimators. The performance of an I-125 SPECT system equipped with the optimal collimators was also estimated. Results: In the validation study, the simulations agreed well with experimental measurements in spatial resolution (4%) and sensitivity (3%). To achieve 98% gamma-ray detection efficiency, the NaI(Tl) thickness was determined to be 1 mm. The hole diameter (mm), length (mm), and shape were chosen as 0.2:5:square and 0.5:10:hexagonal for the high-resolution (HR) and general-purpose (GP) parallel-hole collimators, respectively. The hole diameter, channel height, and acceptance angle of the pinhole (PH) collimator were determined to be 0.25 mm, 0.1 mm, and 90 degrees. The spatial resolutions of reconstructed images with the HR:GP:PH collimators were 1.2:1.7:0.8 mm, and the sensitivities were 39.7:71.9:5.5 cps/MBq. Conclusion: The optimal crystal and collimator parameters for I-125 imaging were derived by simulation using GATE. The results indicate that imaging with excellent resolution and sensitivity is feasible with I-125 SPECT.
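Choosing a crystal thickness from a target detection efficiency follows the exponential attenuation law, efficiency = 1 - exp(-μt). The sketch below solves this relation for t; the linear attenuation coefficient used for NaI(Tl) near 30 keV is an assumed round number for illustration, not a value taken from the paper.

```python
import numpy as np

def thickness_for_efficiency(mu_per_mm, target_efficiency):
    """Solve 1 - exp(-mu * t) = target_efficiency for thickness t (in mm)."""
    return -np.log(1.0 - target_efficiency) / mu_per_mm

# Assumed linear attenuation coefficient of NaI(Tl) around 30 keV (placeholder, order of a few /mm)
mu = 4.0  # mm^-1
print(f"thickness for 98% efficiency: {thickness_for_efficiency(mu, 0.98):.2f} mm")  # ~1 mm for this mu
```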

$^{99m}Tc$ Labeling Kit Preparation and Characteristics of Anti-NCA-95 Monoclonal Antibody (항 NCA-95 단일클론항체의 $^{99m}Tc$표지 키트 제조 및 특성 연구)

  • Hong, Mee-Kyoung;Jeong, Jae-Min;Chung, June-Key;Choi, Seok-Rye;Kim, Chae-Kyun;Lee, Yong-Jin;Lee, Dong-Soo;Lee, Myung-Chul;Koh, Chang-Soon
    • The Korean Journal of Nuclear Medicine
    • /
    • v.30 no.4
    • /
    • pp.541-547
    • /
    • 1996
  • The previous monoclonal antibody labeling method for bone marrow immunoscintigraphy was complicated and laborious for clinical application, and it showed a relatively low labeling efficiency. To improve the procedure, we compared several direct $^{99m}Tc$ labeling methods. 1) The labeling efficiency of the method using gluconate as a transchelator was low (40-70%), but immunoscintigraphy with this radiotracer produced a clear image. 2) To improve the labeling efficiency, ${\beta}$-mercaptoethanol was removed after reduction; the labeling efficiency improved to 70-80%, but blood-pool radioactivity was high. 3) The highest labeling efficiency (>90%) and the best image quality were obtained by using MDP as the transchelating agent, which did not require an additional step to separate the labeled antibodies. The immunoreactivity of this antibody was 60%. Residual MDP, which can be taken up by bone, could be removed with a PD-10 column. The reduced antibodies remained stable with a high labeling efficiency (>90%) for up to 47 days when stored deep-frozen. We conclude that the improved procedure for $^{99m}Tc$ labeling of anti-NCA-95 monoclonal antibody, using MDP as the transchelating agent, is a simple and useful method for clinical application.


A Performance Comparison of Super Resolution Model with Different Activation Functions (활성함수 변화에 따른 초해상화 모델 성능 비교)

  • Yoo, Youngjun;Kim, Daehee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.10
    • /
    • pp.303-308
    • /
    • 2020
  • The ReLU (Rectified Linear Unit) function has been the dominant standard activation function in most deep artificial neural network models since it was proposed. Later, the Leaky ReLU, Swish, and Mish activation functions were introduced to replace ReLU and showed improved performance over ReLU in image classification tasks. We therefore examined whether performance improvements could also be achieved by replacing ReLU with other activation functions in the super-resolution task. In this paper, performance was compared by changing the activation function in the EDSR model, which shows stable performance in super resolution. In the experiments, at 2x upscaling the existing ReLU showed similar or higher performance than the other activation functions tested. At 4x upscaling, Leaky ReLU and Swish showed slightly improved performance over ReLU: in PSNR and SSIM, which quantitatively evaluate image quality, Leaky ReLU gave average improvements of 0.06% and 0.05%, and Swish gave average improvements of 0.06% and 0.03%. At 8x upscaling, Mish showed a slight average improvement over ReLU, with PSNR and SSIM gains of 0.06% and 0.02% on average. In conclusion, Leaky ReLU and Swish outperformed ReLU for 4x super resolution, and Mish outperformed ReLU for 8x super resolution. Future work should run comparative experiments replacing the activation function with Leaky ReLU, Swish, and Mish in other super-resolution models.
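To make the comparison concrete, the sketch below shows an EDSR-style residual block with a swappable activation, in the spirit of the experiment described above. It is a minimal PyTorch sketch under assumed hyperparameters (channel count, residual scaling, LeakyReLU slope) and is not the authors' implementation.

```python
import torch.nn as nn

# Candidate activations; Swish corresponds to SiLU (x * sigmoid(x)) in PyTorch
ACTIVATIONS = {
    "relu": nn.ReLU,
    "leaky_relu": lambda: nn.LeakyReLU(0.2),
    "swish": nn.SiLU,
    "mish": nn.Mish,
}

class ResBlock(nn.Module):
    """EDSR-style residual block (no batch norm) with a configurable activation."""
    def __init__(self, channels=64, activation="relu", res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            ACTIVATIONS[activation](),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)

# Example: block = ResBlock(activation="swish")
```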