• Title/Summary/Keyword: Evaluation Algorithm

Resolution Evaluation of a Pinhole Collimator according to the Aperture Diameter using Micro Deluxe Phantom (Micro Deluxe Phantom을 통한 핀홀 콜리메이터 초점의 직경별 분해능 평가)

  • An, Byung Ho;Yeon, Joon Ho;Kim, Soo Young;Choi, Sung Wook
    • The Korean Journal of Nuclear Medicine Technology, v.19 no.1, pp.3-11, 2015
  • Purpose: High-quality images of the knee and TM (temporomandibular) joints are hard to obtain because of the large amount of soft tissue in those areas. Most conventional systems for high-resolution scintigraphy use a 4 mm aperture pinhole collimator. The aim of this study was to compare the performance of high-resolution pinhole SPECT apertures of different diameters against a conventional system using a Micro Deluxe phantom, and to evaluate the usefulness of 24-hour delayed bone scintigraphy. Materials and Methods: Pinhole collimators with 6 mm and 8 mm aperture diameters were mounted on a Siemens E.CAM system. To evaluate the performance of each aperture and to compare it against the conventional SPECT system, projection data were obtained from the Micro Deluxe phantom in 9-degree increments at 30 seconds per projection. Transverse images were reconstructed using a dedicated OSEM algorithm with recovery of detector blurring. A 99mTc-HDP source was used for the 24-hour delayed bone scintigraphy. Results: Knee joint images obtained with a 24-hour delay were better than those obtained with a 3-hour delay. With the 6 mm and 8 mm pinhole collimators, FWHM improved by 28%, SNR and uniformity improved by 35%, and contrast improved by 7% in the 24-hour delayed knee joint images. In the 24-hour delayed TM joint images, by contrast, FWHM decreased by 60%, SNR decreased by 20%, uniformity decreased by 25%, and contrast decreased significantly. Conclusion: Pinhole collimators with 6 mm and 8 mm apertures can offer superior performance for 24-hour delayed bone scintigraphy, and the 24-hour delayed image provides additional benefit for pinhole scintigraphy of the knee joint. We therefore expect this approach to be useful for precise diagnosis of the knee joint and applicable to imaging of other joints.
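A brief aside on the headline metric: FWHM in studies like this is read off a reconstructed line (or point) spread profile. A minimal sketch of that computation, assuming a background-subtracted, single-peaked profile sampled on a uniform pixel grid (the function name and interpolation scheme are illustrative, not from the paper):

```python
import numpy as np

def fwhm_mm(profile, pixel_mm):
    """Estimate FWHM of a 1-D spread profile by locating the two
    half-maximum crossings with linear interpolation."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i):
        # fractional index where p crosses `half` between samples i and i+1
        return i + (half - p[i]) / (p[i + 1] - p[i])

    x_left = crossing(left - 1) if left > 0 else float(left)
    x_right = crossing(right) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * pixel_mm

# toy check: a Gaussian with sigma = 2 px has FWHM ~ 2.355 * 2 = 4.71 px
x = np.arange(-20, 21)
print(fwhm_mm(np.exp(-x**2 / (2 * 2.0**2)), pixel_mm=1.0))
```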

Evaluation of beam delivery accuracy for Small sized lung SBRT in low density lung tissue (Small sized lung SBRT 치료시 폐 실질 조직에서의 계획선량 전달 정확성 평가)

  • Oh, Hye Gyung;Son, Sang Jun;Park, Jang Pil;Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy, v.31 no.1, pp.7-15, 2019
  • Purpose: The purpose of this study is to evaluate beam delivery accuracy for small-sized lung SBRT through experiment. To assess the accuracy, dose distributions were obtained with an Eclipse TPS (treatment planning system) equipped with Acuros XB and with radiochromic film. By comparing calculated and measured dose distributions, the margin required for the PTV (planning target volume) in lung tissue was evaluated. Materials and Methods: CT images of a Rando phantom were acquired, and virtual target volumes of 2, 3, 4, and 5 cm diameter were planned in the right lung. All plans were normalized to target volume = prescribed 95% and used 6 MV FFF VMAT with 2 arcs. To compare the calculated and measured dose distributions, film was inserted into the Rando phantom and irradiated in the axial direction. The evaluation indexes were the percentage difference (%Diff) for absolute dose, the RMSE (root-mean-square error) for relative dose, and the coverage ratio and average dose in the PTV. Results: The maximum difference at the center point was -4.65% for the 2 cm diameter. The RMSE between calculated and measured off-axis dose distributions indicated that the measured distribution for the 2 cm diameter differed from the calculated one and was inaccurate compared with the 5 cm diameter. In addition, the prescribed 95% dose (D95) for the 2 cm diameter did not cover the PTV, and its average dose was the lowest of all sizes. Conclusion: This study demonstrated that a small PTV is not adequately covered by the prescribed dose in low-density lung tissue. All experimental indexes for the 2 cm diameter differed markedly from the other sizes, showing that a minimized PTV is delivered inaccurately and affects the outcome of radiation therapy. An extended margin for small PTVs in low-density lung tissue is considered necessary to enhance the target center dose, without constraining the maximum dose during optimization.
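The two agreement metrics named above are simple to state precisely. A sketch of both, with the %Diff sign convention assumed (the abstract does not spell it out):

```python
import numpy as np

def percent_diff(measured, calculated):
    """%Diff of absolute dose at a point, here taken relative to the
    TPS-calculated value (sign convention assumed, not stated in the paper)."""
    return (measured - calculated) / calculated * 100.0

def rmse(calculated, measured):
    """Root-mean-square error between calculated and measured
    off-axis relative dose profiles."""
    c = np.asarray(calculated, dtype=float)
    m = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((m - c) ** 2)))
```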

Comparison of Ultrasound Image Quality using Edge Enhancement Mask (경계면 강조 마스크를 이용한 초음파 영상 화질 비교)

  • Jung-Min, Son;Jun-Haeng, Lee
    • Journal of the Korean Society of Radiology, v.17 no.1, pp.157-165, 2023
  • Ultrasound imaging uses high-frequency sound waves, which undergo reflection, absorption, refraction, and transmission at the edges between different tissues. Improvement is needed because the data generated by ultrasound equipment is inherently noisy, and vague edges make it hard to grasp the shape of the tissue under observation. Edge enhancement is used to resolve cases where the boundary appears smeared due to reduced image quality. In this paper, the high-frequency boundary regions of each image were strengthened with unsharpening and high-boost masks, and the resulting quality improvement was confirmed. Each mask filter was evaluated by measuring PSNR and SNR. Abdominal, head, heart, liver, kidney, breast, and fetal images were obtained from Philips epiq5g and affiniti70g and Alpinion E-cube 15 ultrasound systems. The algorithm was implemented in MATLAB R2022a (MathWorks). The unsharpening and high-boost mask size was set to 3×3, and the Laplacian filter, a spatial filter used to create outline-enhanced images, was applied equally to both masks. The ImageJ program was used for quantitative image-quality evaluation. Applying the mask filters to the various ultrasound images, the subjective assessment showed that the overall contour lines became clearly visible with both the unsharpening and high-boost masks. In the quantitative comparison, images with either mask applied scored higher than the originals. For portal vein, head, gallbladder, and kidney images, the high-boost mask produced the higher SNR, PSNR, RMSE, and MAE values; conversely, for heart, breast, and fetal images, the unsharpening mask produced the higher values. Using the optimal mask for each image is thought to help improve image quality, with the enhanced contour information contributing to that improvement.
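The paper implemented the masks in MATLAB; the same unsharp/high-boost scheme can be sketched in NumPy/SciPy. The boost factor A below is an assumed parameter, and PSNR needs a reference image, as in the paper's comparisons:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Laplacian used as the edge extractor, as in the paper
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def high_boost(image, A=1.2):
    """Edge enhancement g = A*f + Laplacian(f). A = 1 reduces to plain
    unsharp masking; A > 1 is high-boost filtering."""
    f = image.astype(float)
    return np.clip(A * f + convolve(f, LAPLACIAN, mode="reflect"), 0, 255)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```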

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science, v.57 no.1, pp.82-108, 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted within the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout. The results confirm that, in quantitative evaluation, the deviation of numerical measurements between the actual artifact and the 3D model was minimal, and the quantitative quality analyses from the open-source and commercial software showed high similarity. However, data processing was overwhelmingly faster in the commercial software, presumably due to the higher computational speed of its improved algorithms. In qualitative evaluation, some differences in mesh and texture quality occurred: 3D models generated by open-source software showed noise on the mesh surface, harsh mesh surfaces, and difficulty in confirming production marks and the expression of patterns on the relics. Nevertheless, some of the open-source software produced quality comparable to commercial software in both quantitative and qualitative evaluations. The open-source model-editing software could not only post-process, match, and merge the 3D model, but also adjust scale, produce joining surfaces, and render the images needed for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the variety of open-source software has grown and its performance has significantly improved. Given this high accessibility, 3D model data acquisition in archaeology will serve as basic data for the preservation and active study of cultural heritage.
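One common way to express the "deviation between the actual artifact and the 3D model" quantitatively is a nearest-neighbour surface distance between sampled point clouds. A brute-force sketch under that assumption (the paper does not state which metric its software reported):

```python
import numpy as np

def mean_nearest_distance(pts_model, pts_reference):
    """Mean nearest-neighbour distance (e.g. in mm) from points sampled on
    the photogrammetric mesh to points sampled on the reference model.
    Brute force, O(n*m) memory; adequate for a few thousand points."""
    diffs = pts_model[:, None, :] - pts_reference[None, :, :]
    d = np.linalg.norm(diffs, axis=2)   # (n_model, n_reference) distances
    return float(d.min(axis=1).mean())
```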

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems, v.18 no.3, pp.79-96, 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovering promising technology depends on how a firm evaluates the value of technologies, so many evaluation methods have been proposed. Expert-opinion-based approaches have been widely accepted for predicting the value of technologies; while this approach provides in-depth analysis and ensures the validity of results, it is usually cost- and time-ineffective and limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology using patent information to overcome this limitation. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full, practical description of technology in a uniform structure, and provides information not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of predicting promising technologies, it has limitations: prediction is made from past patent information, and interpretations of patent analyses are inconsistent. To fill this gap, this study proposes a technology forecasting methodology integrating the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on future technology development and improvement, and is clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, representing the breadth of search underlying it; since fusion can be calculated per technology or per patent, this study measures two fusion indexes. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; likewise, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented with an artificial intelligence method. This study uses the values of the five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (t-n, t-n-1, t-n-2, …) as input variables; the output variables are the values of the five indexes at time t, which are used for learning. The learning method is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP), which provides the relative importance of each index and yields a final promising-technology index. The applicability of the proposed methodology is tested on U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the other indexes. These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the sample is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial neural network. It helps managers planning technology development and policy makers implementing technology policy by providing a quantitative prediction methodology, and offers other researchers a deeper understanding of the complex field of technological forecasting.
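As a sketch of the second module, the five lagged indexes can be framed as a supervised windowed regression and fed to a small backprop-trained network. The window length, layer size, and synthetic data below are assumptions for illustration only:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Five indexes per time step: impact, fusion/technology, fusion/patent,
# diffusion/technology, diffusion/patent. Lagged windows form the inputs.
def make_windows(series, n_lags=3):
    """series: (T, 5) array of yearly index values -> supervised (X, y)."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t].ravel())  # 5 * n_lags features
        y.append(series[t])                     # the 5 indexes at time t
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
series = rng.random((9, 5))       # stand-in for the 2000-2008 index values
X, y = make_windows(series)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, y)                   # trained via backpropagation
next_year = model.predict(X[-1:]) # predicted five indexes at time t
```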

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from overflowing content becomes more important as information generation continues to grow. In this flood of information, efforts are being made to better reflect user intent in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because new information is constantly generated and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where information flow is vast and new information continues to emerge. However, it faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and enhance the model's effectiveness. This gives the study three significances: first, a practical and simple automatic knowledge extraction method that can actually be applied; second, the possibility of performance evaluation through a simple problem definition; finally, increased expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all training reports are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all testing reports. In the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports; this hit ratio is meaningfully high despite some research constraints. Looking at prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average, possibly due to interference from other similar items and the generation of new knowledge. In this paper we propose a methodology to find the key entities, or combinations of them, needed to search related information according to the user's investment intention. Graph data is generated using only the named entity recognition tool and fed to the neural tensor network without a learning corpus or field-specific word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain to be addressed; notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
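The core scoring step can be sketched directly. Below is a standard neural tensor network score function in NumPy, with a per-stock argmax prediction step; pairing each entity vector with a per-stock vector is one plausible reading of "one score function per stock", and all parameter shapes are illustrative:

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural tensor network score  u . tanh(e1^T W[k] e2 + V [e1;e2] + b).
    e1, e2: (d,) vectors; W: (k, d, d); V: (k, 2d); b, u: (k,)."""
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # (k,) bilinear terms
    linear = V @ np.concatenate([e1, e2])           # (k,) linear terms
    return float(u @ np.tanh(bilinear + linear + b))

def predict_stock(entity_vec, stock_params, stock_vecs):
    """Score a new entity against every stock's trained function and
    return the stock with the highest score (hypothetical containers)."""
    scores = {s: ntn_score(entity_vec, stock_vecs[s], *stock_params[s])
              for s in stock_params}
    return max(scores, key=scores.get)
```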

Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems, v.24 no.4, pp.171-196, 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. To meet consumer needs, web design trends continuously respond to consumer feedback. As the web grows in importance, both academics and businesses study consumer emotion and satisfaction on the web, yet some consumer characteristics remain under-considered: demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (i.e., emotional regulation). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements; because the human brain is influenced by all five senses, both design factors and emotional responses matter in the web environment. We therefore examine the relationship of consumer emotion and satisfaction to web style and design. Previous studies have considered the effects of web layout, structure, and color on emotions; in contrast, this study excludes those components and analyzes the relationship between consumer satisfaction and emotional indexes of web style only. To perform this analysis, we surveyed 204 consumers on 40 web-style themes, with each consumer evaluating four themes. The emotional adjectives evaluated by consumers comprised 18 contrast pairs, and upper emotional indexes were extracted through factor analysis: 'softness', 'modernity', 'clearness', and 'jam'. Hypotheses were established on the assumption that each emotional index affects consumer satisfaction differently. In the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; although rejected, its effect on consumer satisfaction was negative rather than positive. This means that 'softness', 'modernity', and 'clearness' have a positive effect on consumer satisfaction: consumers prefer emotions that are soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on satisfaction, meaning consumers prefer styles that feel empty, plain, and simple. Regulatory focus produces differences in motivation and propensity across domains; it matters for organizational behavior and decision making, and affects not only political, cultural, and ethical judgments and behavior but also broader psychological phenomena. Regulatory focus also shapes emotional response: promotion focus responds more strongly to positive emotions, while prevention focus responds strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion, and this emotional response depends on whether the consumer expects benefit or harm. It is therefore necessary to confirm how consumers' emotional responses to web style differ according to regulatory focus, one of their key characteristics and viewpoints. In the MMR (moderated multiple regression) analysis, hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, though 5.4 was supported in the direction opposite to its prediction. After validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. Using the results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus, classifying the regulatory focus groups into three categories (promotion, grey, and prevention) and suggesting a web-style recommendation method for each group. If this study is developed further, we expect the existing regulatory focus theory to extend beyond motivation to emotional and behavioral responses according to regulatory focus tendency, and we believe it is possible to recommend the web styles consumers most prefer according to their regulatory focus and emotional desire.
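MMR here is moderated multiple regression: satisfaction regressed on an emotional index, regulatory focus, and their interaction, where the interaction term carries the moderation effect. A synthetic-data sketch with statsmodels (variable names and coefficients are assumed, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 204                                   # matches the survey sample size
df = pd.DataFrame({
    "clearness": rng.normal(size=n),      # one emotional index score
    "focus": rng.integers(0, 2, size=n),  # 1 = promotion, 0 = prevention
})
# synthetic outcome with a built-in moderation effect for demonstration
df["satisfaction"] = (0.5 * df["clearness"]
                      + 0.3 * df["clearness"] * df["focus"]
                      + rng.normal(scale=0.5, size=n))

# "clearness * focus" expands to both main effects plus the interaction;
# the clearness:focus coefficient tests the moderation hypothesis
model = smf.ols("satisfaction ~ clearness * focus", data=df).fit()
print(model.params)
```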

Evaluating efficiency of Coaxial MLC VMAT plan for spine SBRT (Spine SBRT 치료시 Coaxial MLC VMAT plan의 유용성 평가)

  • Son, Sang Jun;Mun, Jun Ki;Kim, Dae Ho;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy, v.26 no.2, pp.313-320, 2014
  • Purpose: To evaluate the efficiency of a coaxial MLC VMAT plan (using 273° and 350° collimator angles), in which the leaf motion direction is aligned with the axis of the OAR (organ at risk; here, the spinal cord or cauda equina), compared with a universal MLC VMAT plan (using 30° and 330° collimator angles) for spine SBRT. Materials and Methods: Ten spine SBRT cases treated with coaxial MLC VMAT plans on a Varian TBX were enrolled. The cases were planned with Eclipse (Ver. 10.0.42, Varian, USA), PRO3 (Progressive Resolution Optimizer 10.0.28), and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28) using coplanar 360° arcs and 10 MV FFF (flattening filter free); the two arcs used collimator angles of 273° and 350°, respectively. The universal MLC VMAT plans were based on the existing treatment plans, with the same parameters except collimator angle. To minimize dose differences that appear randomly during optimization, all plans were optimized and calculated twice. The calculation grid was 0.2 cm, and all plans were normalized to target V100% = 90%. The evaluation indexes were V10Gy, D0.03cc, and Dmean of the OAR, the H.I. (homogeneity index) of the target, and total MU. All coaxial VMAT plans were verified by gamma test with MapCHECK 2 (Sun Nuclear Co., USA), MapPHAN (Sun Nuclear Co., USA), and SNC Patient (Sun Nuclear Co., USA, Ver. 6.1.2.18513). Results: The differences between the coaxial and universal VMAT plans were as follows. The coaxial plan was better for OAR V10Gy by up to 4.1%, at least 0.4%, and 1.9% on average, and for OAR D0.03cc by up to 83.6 cGy, at least 2.2 cGy, and 33.3 cGy on average. For Dmean the difference ranged from -13.0 cGy to 34.8 cGy, averaging 9.6 cGy, meaning the coaxial plans were better except in a few cases. The H.I. difference was up to 0.04, at least 0.01, and 0.02 on average, and the coaxial MLC plan used 74.1 fewer MU on average. All IMRT verification gamma tests for the coaxial MLC VMAT plans passed at over 90.0% with 1 mm/2% criteria. Conclusion: The coaxial MLC VMAT plan appeared more favorable than the universal MLC VMAT plan in most cases; it is particularly efficient at lowering OAR V10Gy. As a result, the coaxial MLC VMAT plan can outperform the universal MLC VMAT plan at comparable MU.
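The abstract does not define its H.I. formula, so the sketch below assumes one common convention, the ICRU 83 definition computed from the target's voxel dose distribution:

```python
import numpy as np

def homogeneity_index(target_doses):
    """One common H.I. definition, (D2% - D98%) / D50% (ICRU 83); assumed
    here since the paper does not spell out its formula. D2% (near-maximum
    dose) is the 98th percentile of the voxel doses, D98% the 2nd."""
    d2, d50, d98 = np.percentile(np.asarray(target_doses, float), [98, 50, 2])
    return (d2 - d98) / d50
```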

The Comparison of Image Quality and Quantitative Indices by Wide Beam Reconstruction Method and Filtered Back Projection Method in Tl-201 Myocardial Perfusion SPECT (Tl-201 심근관류 SPECT 검사에서 광대역 재구성(Wide Beam Reconstruction: WBR) 방법과 여과 후 역투영법에 따른 영상의 질 및 정량적 지표 값 비교)

  • Yoon, Soon-Sang;Nam, Ki-Pyo;Shim, Dong-Oh;Kim, Dong-Seok
    • The Korean Journal of Nuclear Medicine Technology, v.14 no.2, pp.122-127, 2010
  • Purpose: Xpress3.cardiac™, a wide beam reconstruction (WBR) method developed by UltraSPECT (Haifa, Israel), enables acquisition at quarter time while maintaining image quality. The purpose of this study is to investigate the usefulness of the WBR method for decreasing scan times and to compare it with filtered back projection (FBP), the routinely used method. Materials and Methods: Phantom and clinical studies were performed. An anthropomorphic torso phantom was prepared to match the count level of a patient's body; the Tl-201 concentrations in its compartments were 74 kBq (2 μCi)/cc in myocardium, 11.1 kBq (0.3 μCi)/cc in soft tissue, and 2.59 kBq (0.07 μCi)/cc in lung. Non-gated Tl-201 myocardial perfusion SPECT data were acquired from the phantom: the former study was scanned at 50 seconds per frame with the FBP method, and the latter at 13 seconds per frame with the WBR method. Using Xeleris ver. 2.0551, the full width at half maximum (FWHM) and average image contrast were compared. In the clinical study, we analyzed 30 patients who underwent Tl-201 gated myocardial perfusion SPECT in the department of nuclear medicine at Asan Medical Center from January to April 2010. The patients were imaged at full time (50 seconds per frame) with the FBP algorithm and again at quarter time (13 seconds per frame) with the WBR algorithm. Using the 4D MSPECT (4DM), Quantitative Perfusion SPECT (QPS), and Quantitative Gated SPECT (QGS) software, the summed stress score (SSS), summed rest score (SRS), summed difference score (SDS), end-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were analyzed for correlation and compared statistically by paired t-test. Results: In the phantom study, the WBR method improved FWHM by about 30% compared with the FBP method (WBR 5.47 mm vs. FBP 7.07 mm), and the WBR method's average image contrast was also higher. However, for the quantitative indices SSS, SDS, SRS, EDV, ESV, and EF, there were statistically significant differences between WBR and FBP (p<0.01). The correlations between WBR and FBP were poor for SSS, SDS, and SRS (0.18, 0.34, and 0.08), whereas EDV, ESV, and EF showed good correlation (0.88, 0.89, and 0.71). Conclusion: The phantom results confirm that the WBR method reduces acquisition time while improving image quality compared with the FBP method. However, the significant differences in quantitative indices must be considered, and further evaluation is needed before clinical application to identify the cause of the differences between the phantom and clinical results.
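The statistical comparison is a per-patient paired t-test plus a correlation between the two reconstructions. A toy sketch with SciPy (the ejection-fraction numbers below are made up for illustration):

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

# hypothetical per-patient ejection fractions from the two reconstructions
ef_fbp = np.array([55.0, 62.0, 48.0, 70.0, 59.0, 64.0])
ef_wbr = np.array([57.0, 60.0, 50.0, 73.0, 58.0, 66.0])

t_stat, p_value = ttest_rel(ef_wbr, ef_fbp)  # paired comparison per patient
r, _ = pearsonr(ef_wbr, ef_fbp)              # method-to-method correlation
print(f"t = {t_stat:.2f}, p = {p_value:.3f}, r = {r:.2f}")
```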

Evaluating efficiency of Split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes (골반 림프선을 포함한 전립선암 치료 시 Split VMAT plan의 유용성 평가)

  • Mun, Jun Ki;Son, Sang Jun;Kim, Dae Ho;Seo, Seok Jin
    • The Journal of Korean Society for Radiation Therapy, v.27 no.2, pp.145-156, 2015
  • Purpose: The purpose of this study is to evaluate the efficiency of split VMAT planning, in which the rectum is contoured as separate upper and lower parts to reduce rectal dose, compared with conventional VMAT planning (whole-rectum contour) for prostate cancer radiotherapy involving the pelvic lymph nodes. Materials and Methods: A total of 9 cases were enrolled, each treated with a split VMAT plan to the prostate and pelvic lymph nodes. Treatment was delivered on a TrueBeam STX (Varian Medical Systems, USA) and planned in Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV; the upper rectum is the remainder of the whole rectum excluding the lower rectum. Split VMAT plan parameters consisted of 10 MV coplanar 360° arcs with collimator angles of 30° and 30°, respectively. An SIB (simultaneous integrated boost) prescription delivered 50.4 Gy to the pelvic lymph nodes and 63-70 Gy to the prostate in 28 fractions. The whole-rectum Dmean of the split VMAT plan was applied as the DVC (dose volume constraint) on the whole rectum for the conventional VMAT plan, and all other parameters were kept identical to the existing treatment plans. To minimize dose differences that appear randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid, and all plans were normalized to prostate PTV V100% = 90% or 95%. The techniques were compared on Dmean of the whole rectum, upper rectum, lower rectum, and bladder, V50% of the upper rectum, total MU, and the H.I. (homogeneity index) and C.I. (conformity index) of the PTV. All split VMAT plans were verified by gamma test with portal dosimetry using the EPID. Results: DVH analysis demonstrated differences between the conventional and split VMAT plans. The split VMAT plan was better for whole-rectum Dmean by up to 134.4 cGy, at least 43.5 cGy, and 75.6 cGy on average; for upper-rectum Dmean by up to 1113.5 cGy, at least 87.2 cGy, and 550.5 cGy on average; for lower-rectum Dmean the difference ranged from -34.6 cGy to 100.5 cGy, averaging 34.3 cGy; for bladder Dmean it ranged from -55.5 cGy to 271 cGy, averaging 117.8 cGy; and for upper-rectum V50% it was up to 63.4%, at least 3.2%, and 23.2% on average. There was no significant difference in the H.I. or C.I. of the PTV between the two plans. The split VMAT plan used on average 77 MU more than the conventional plan. All IMRT verification gamma tests for the split VMAT plans passed at over 90.0% with 2 mm/2% criteria. Conclusion: The split VMAT plan appeared more favorable in most cases than the conventional VMAT plan for prostate cancer radiotherapy involving the pelvic lymph nodes. The split VMAT technique reduced the upper-rectum dose, and thus the whole-rectum dose, compared with conventional VMAT planning, and it also increases treatment efficiency.
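The gamma test used for plan verification combines a dose-difference criterion with a distance-to-agreement criterion. A brute-force 1-D sketch of a global 2%/2 mm gamma pass rate (a simplified stand-in for the 2-D EPID/film analyses quoted in these plans):

```python
import numpy as np

def gamma_pass_rate(ref_dose, meas_dose, spacing_mm, dta_mm=2.0, dd_pct=2.0):
    """Brute-force 1-D global gamma pass rate (dd% / dta mm): a measured
    point passes if some reference point lies within the combined
    dose-difference / distance ellipse (gamma <= 1)."""
    ref = np.asarray(ref_dose, dtype=float)
    meas = np.asarray(meas_dose, dtype=float)
    x = np.arange(ref.size) * spacing_mm
    dd = dd_pct / 100.0 * ref.max()       # global dose-difference criterion
    passed = 0
    for xm, dm in zip(x, meas):
        gamma_sq = ((x - xm) / dta_mm) ** 2 + ((ref - dm) / dd) ** 2
        passed += gamma_sq.min() <= 1.0
    return 100.0 * passed / meas.size

# e.g. gamma_pass_rate(planned_profile, measured_profile, spacing_mm=1.0)
```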
