• Title/Summary/Keyword: higher order accuracy


Evaluation of a Water-based Bolus Device for Radiotherapy to the Extremities in Kaposi's Sarcoma Patients (사지에 발병한 카포시육종의 방사선치료를 위한 물볼루스 기구의 유용성 고찰)

  • Ahn, Seung-Kwon;Kim, Yong-Bae;Lee, Ik-Jae;Song, Tae-Soo;Son, Dong-Min;Jang, Yung-Jae;Cho, Jung-Hee;Kim, Joo-Ho;Kim, Dong-Wook;Cho, Jae-Ho;Suh, Chang-Ok
    • Radiation Oncology Journal / v.26 no.3 / pp.189-194 / 2008
  • Purpose: We designed a water-based bolus device for radiation therapy in Kaposi's sarcoma. This study evaluated the usefulness of this new device and compared it with the currently used rice-based bolus. Materials and Methods: We fashioned a polystyrene box and cut a hole so that a patient's extremity could be inserted while the patient was in the supine position. We used a vacuum-vinyl based polymer to reduce water leakage, and then eliminated air with a vacuum pump and a vacuum valve to reduce the air gap between the water and the extremity in the vacuum-vinyl box. We performed CT scans to evaluate the density differences among three fillings of the fabricated device: rice placed in directly, rice packed in polymer vinyl, and water. We analyzed the density change with the air-gap volume using a planning system. In addition, we measured homogeneity and dose in a lower-extremity phantom, fitted with six TLDs and wrapped in film, exposed in parallel-opposed fields with the LINAC under the same conditions as the CT-simulator set-up. Results: The density value of the rice-based bolus with rice placed in directly was 14% lower than that of the water-based bolus, and the rice-based bolus with polymer-vinyl packed rice showed an 18% reduction in density. Analysis of the EDR2 film revealed that the water-based bolus produced a more homogeneous dose distribution, superior by 4~4.4% to the rice-based bolus. The mean TLD readings of the rice-based bolus with rice placed directly into the polystyrene box were 3.4% higher, and those of the rice-based bolus with polymer-vinyl packed rice were 4.3% higher, than those of the water-based bolus. Conclusion: Our custom-made water-based bolus device increases the accuracy of the set-up by allowing confirmation of the treatment field. It also improves the accuracy of the therapy by reducing the air gap with a vacuum pump and a vacuum valve. This set-up represents a promising alternative device for delivering a homogeneous dose to the target volume.

Design and Implementation of a Similarity based Plant Disease Image Retrieval using Combined Descriptors and Inverse Proportion of Image Volumes (Descriptor 조합 및 동일 병명 이미지 수량 역비율 가중치를 적용한 유사도 기반 작물 질병 검색 기술 설계 및 구현)

  • Lim, Hye Jin;Jeong, Da Woon;Yoo, Seong Joon;Gu, Yeong Hyeon;Park, Jong Han
    • The Journal of Korean Institute of Next Generation Computing / v.14 no.6 / pp.30-43 / 2018
  • Many studies have retrieved images using characteristic features such as color, shape, and texture, and research on crop disease images has also progressed. In this paper, to help identify diseases occurring in crops grown in the field, we propose a similarity-based crop disease retrieval system using disease images of horticultural crops. The proposed system improves similarity-retrieval performance over existing systems by using combined descriptors rather than a single descriptor, and applies a weight-based calculation method to provide users with highly readable similarity search results. A total of 13 descriptors were used in combination. We retrieved diseases of six crops using combined descriptors, and for each crop the combination with the highest average accuracy was selected as that crop's combined descriptor. Retrieved results were expressed as percentages using two calculation methods: one based on the ratio of disease names and one based on weights. The method based on the ratio of disease names has the problem that the results are dominated by the number of images sharing the disease name of the query image. To solve this problem, we used a weight-based calculation method. We applied test images of each disease name to both calculation methods to measure the classification performance of the retrieval results, and compared the average retrieval performance of the two methods for each crop. For red pepper and apple, the method based on the ratio of disease names performed about 11.89% better on average than the weight-based method. For chrysanthemum, strawberry, pear, and grape, the weight-based method performed about 20.34% better on average than the method based on the ratio of disease names. In addition, the UI/UX of the proposed system was configured for convenience based on feedback from actual users. Each system screen has a title and a description at the top and is laid out so that users can conveniently view disease information. The results retrieved by the calculation method proposed above are displayed as images and names of similar diseases. The system is implemented for use with web browsers in both PC and mobile environments.
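The inverse-proportion weighting described above can be sketched as follows. This is a minimal illustrative reading of the scheme, not the paper's exact formula: each retrieved image votes for its disease name with a weight inversely proportional to how many images of that disease exist in the database, so heavily represented diseases do not drown out rare ones.

```python
from collections import Counter

def weighted_disease_scores(retrieved_labels, db_label_counts):
    """Score candidate disease names for a query image: each retrieved
    image votes for its disease name with weight inversely proportional
    to the number of images of that disease in the database."""
    raw = Counter()
    for label in retrieved_labels:
        raw[label] += 1.0 / db_label_counts[label]
    total = sum(raw.values())
    # express the result as percentages, as in the system's UI
    return {label: 100.0 * s / total for label, s in raw.items()}

# toy numbers: 'scab' dominates the database, so each of its hits counts less
db_counts = {"scab": 500, "rust": 50}
hits = ["scab", "scab", "scab", "rust", "rust"]
scores = weighted_disease_scores(hits, db_counts)
```

Here 'rust' outscores 'scab' despite fewer raw hits, which is the intended correction for unequal image volumes per disease name.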

A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.77-92 / 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid increase in the use of social media and the Internet. Each document is usually assigned a specific category for the convenience of users. In the past, categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and huge costs. Many studies have been conducted on the automatic creation of categories to overcome these limitations. Unfortunately, most of these methods cannot be applied to complex documents with multiple topics, because they assume that each document belongs to exactly one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories. However, these are also limited in that their learning process requires training on a multi-categorized document set, so they cannot be applied to most documents unless multi-categorized training sets are provided. To remove this requirement of traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics using the results of topic analysis on single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores between each document and multiple categories. A document is then classified into a category if its matching score exceeds a predefined threshold; for example, a document may be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and the use of vulgar language and slang is less frequent than in other text documents. We collected news articles from July 2012 to June 2013. The articles exhibit large variation in the number of articles per category, both because readers have different levels of interest in each category and because events occur with different frequencies across categories. To minimize distortion caused by the different numbers of articles per category, we extracted 3,000 articles from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree of correspondence of each document to a certain category. As a result, we could present two additional categories for each of 23,089 documents. Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1 to 3 predicted categories were considered. Interestingly, the scores varied widely across the eight categories.
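The matching-score thresholding described above can be sketched as follows. The score combination (a weighted sum over topics) and all names and numbers are illustrative assumptions, not the paper's exact formulation.

```python
def doc_category_scores(doc_topic, topic_category):
    """Combine document-to-topic weights with topic-to-category
    correspondences into document-to-category matching scores."""
    scores = {}
    for topic, w in doc_topic.items():
        for cat, c in topic_category.get(topic, {}).items():
            scores[cat] = scores.get(cat, 0.0) + w * c
    return scores

def assign_categories(scores, threshold):
    # the document is assigned every category whose score exceeds the threshold
    return sorted(cat for cat, s in scores.items() if s > threshold)

# toy numbers: a news article mostly about an election, partly about stocks
doc_topic = {"election": 0.6, "stock": 0.3, "baseball": 0.1}
topic_category = {
    "election": {"Politics": 0.9, "Society": 0.4},
    "stock": {"Economy": 0.8},
    "baseball": {"Sports": 0.9},
}
scores = doc_category_scores(doc_topic, topic_category)
categories = assign_categories(scores, threshold=0.2)
```

With these toy numbers the single-categorized article ("Politics") is extended to "Society" and "Economy" as well, while the weak "Sports" signal falls below the threshold.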

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.27-42 / 2020
  • Traditional companies with offline stores could not secure large display space because of cost. This limitation inevitably restricted the kinds of products that could be displayed on the shelves, depriving consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping goes beyond the physical-space limitations of offline shopping and can display numerous products on web pages to satisfy consumers with a variety of needs. Paradoxically, however, this can also force consumers to compare and evaluate too many alternatives in their purchase decision-making process. As an effort to address this side effect, various kinds of purchase decision support systems have been studied, such as keyword-based item search services and recommender systems. These systems can reduce search time, prevent consumers from leaving while browsing, and contribute to the seller's increased sales. Among them, recommender systems based on association rule mining can effectively detect interrelated products from transaction data such as orders. The associations between products obtained by statistical analysis provide clues for predicting how interested consumers will be in another product. However, since the algorithm is based on transaction counts, products that have not yet sold enough in the early days after launch may be excluded from the recommendation list even though they are highly likely to sell. Such missing items may not get sufficient exposure to consumers to record sufficient sales, and then fall into a vicious cycle of declining sales and omission from the recommendation list. This situation is inevitable when recommendations are made from past transaction histories rather than from potential future sales. This study started from the idea that indirectly reflecting this potential would help in selecting products to recommend. Because the attributes of a product affect consumers' purchasing decisions, this study reflects them in the recommender system. In other words, consumers who visit a product page have shown interest in the attributes of the product and would also be interested in other products with the same attributes. On this assumption, the recommender system can select recommended products with a higher acceptance rate based on these attributes. Given that a category is one of the main attributes of a product, it can be a good indicator not only of direct associations between two items but also of potential associations yet to be revealed. Based on this idea, we devised a recommender system that reflects associations not only between products but also between categories. Through regression analysis, the two kinds of associations were combined into a model that predicts the hit rate of a recommendation. To evaluate the performance of the proposed model, another regression model was developed based only on associations between products. Comparative experiments were designed to resemble the environment in which products are actually recommended in online shopping malls. First, association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then, hit rates for each association rule were predicted from the support and confidence calculated by each of the models. The comparative experiments using order data collected from an online shopping mall show that recommendation accuracy can be improved by reflecting not only associations between products but also associations between categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rate compared to the existing model. From a practical point of view, this is expected to have a positive effect on improving consumers' purchasing satisfaction and increasing sellers' sales.
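The rule statistics the models above are built on can be sketched as follows. The support/confidence computation is standard association-rule arithmetic; the linear hit-rate model and its coefficients are illustrative stand-ins for the paper's fitted regression.

```python
def rule_stats(orders, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent,
    computed over a list of orders (each order is a set of items)."""
    n = len(orders)
    both = sum(1 for o in orders if antecedent in o and consequent in o)
    ante = sum(1 for o in orders if antecedent in o)
    return both / n, (both / ante if ante else 0.0)

def predicted_hit_rate(item_stats, category_stats, coef):
    """Illustrative linear combination of item-level and category-level
    support/confidence into a hit-rate estimate; the coefficients stand
    in for the paper's regression weights."""
    (s_i, c_i), (s_c, c_c) = item_stats, category_stats
    b0, b1, b2, b3, b4 = coef
    return b0 + b1 * s_i + b2 * c_i + b3 * s_c + b4 * c_c

# toy order data: rule "A -> B"
orders = [{"A", "B"}, {"A"}, {"B", "C"}, {"A", "B", "C"}]
support, confidence = rule_stats(orders, "A", "B")   # 0.5, 2/3
# category-level stats and coefficients below are made-up inputs
hit = predicted_hit_rate((support, confidence), (0.6, 0.7),
                         (0.1, 0.5, 0.3, 0.2, 0.2))
```

The category-level statistics would be computed the same way after mapping each item in the orders to its category, which is what lets a new product inherit association evidence from its category even before it accumulates transactions of its own.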

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.219-240 / 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. For example, data-driven surveys analyze the subjectivity of text data posted by users, and market research quantifies users' opinions of a target product by analyzing their review posts. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words differs across domains. For example, the sentiment word 'sad' is negative in most domains but not in the movie domain. To perform accurate sentiment analysis, we need to build a sentiment dictionary for the given domain. However, building such a dictionary is time-consuming, and many sentiment vocabularies are missed unless a general-purpose sentiment lexicon is used as a starting point. To address this problem, several studies have constructed sentiment lexicons for specific domains based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer in service, and SentiWordNet does not work well because of language differences in converting Korean words into English words. Such restrictions limit the use of these general-purpose sentiment lexicons as seed data for building a domain-specific lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to allow quick construction of a sentiment dictionary for a target domain. In particular, it constructs sentiment vocabularies by analyzing the glosses in the Standard Korean Language Dictionary (SKLD), as follows. First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning. Third, positive words and phrases are extracted from glosses classified as positive, and negative words and phrases from glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary was further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we added sentiment information for frequently used coined words and emoticons that appear mainly on the Web. KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, one recent study shows that words in a sentiment dictionary can be used as features of deep learning models, allowing sentiment analysis to be performed with higher accuracy (Teng, Z., 2016). This indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models. The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as a source of features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
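The gloss-driven construction pipeline can be sketched as follows, with a trivial keyword rule standing in for the paper's Bi-LSTM classifier; all headwords and glosses here are illustrative.

```python
def build_lexicon(gloss_dict, classify_gloss):
    """From {headword: gloss} pairs, keep words whose gloss the classifier
    labels positive or negative; neutral glosses are dropped.
    `classify_gloss` stands in for the paper's Bi-LSTM model."""
    lexicon = {}
    for word, gloss in gloss_dict.items():
        label = classify_gloss(gloss)  # 'pos', 'neg', or 'neu'
        if label in ("pos", "neg"):
            lexicon[word] = label
    return lexicon

def toy_classifier(gloss):
    # crude cue-word rule, only for demonstration
    if any(w in gloss for w in ("pleasing", "grateful")):
        return "pos"
    if any(w in gloss for w in ("unpleasant", "painful")):
        return "neg"
    return "neu"

glosses = {
    "thankful": "feeling grateful for kindness received",
    "sorrowful": "full of painful sadness",
    "table": "a piece of furniture with a flat top",
}
lex = build_lexicon(glosses, toy_classifier)
```

In the paper's pipeline the classifier is trained on labeled glosses and the extracted entries are further merged with external sources (SentiWordNet, SenticNet, etc.) before reaching the final 14,843-entry lexicon.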

Development of New Variables Affecting Movie Success and Prediction of Weekly Box Office Using Them Based on Machine Learning (영화 흥행에 영향을 미치는 새로운 변수 개발과 이를 이용한 머신러닝 기반의 주간 박스오피스 예측)

  • Song, Junga;Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.67-83 / 2018
  • The Korean film industry, which had grown significantly every year, finally exceeded 200 million cumulative admissions in 2013. However, starting in 2015 the industry entered a period of low growth, and it recorded negative growth in 2016. To overcome this difficulty, stakeholders such as production companies, distribution companies, and multiplexes have attempted to maximize market returns by predicting market changes and responding to them immediately. Since a film is an experiential product, it is not easy to predict its box office record or initial audience before release, and the number of audiences fluctuates with a variety of factors after release. Production and distribution companies therefore try to secure a guaranteed number of screens from multiplex chains at the opening of a new release. However, the multiplex chains tend to fix the screening schedule for only a week and then determine the number of screenings for the following week based on the box office record and audience evaluations. Much previous research has dealt with the prediction of box office records. Early studies attempted to identify factors affecting the box office record; more recent studies have applied various analytic techniques to the previously identified factors to improve prediction accuracy and explain the effect of each factor, rather than identifying new factors. However, most previous research is limited in that it used the total number of audiences from opening to the end of the run as the target variable, which makes it difficult to predict and respond to dynamically changing market demand. Therefore, the purpose of this study is to predict the weekly number of audiences of a newly released film so that stakeholders can respond flexibly and elastically to changes in the film's audience numbers. To that end, we considered the factors affecting box office used in previous studies and developed new factors not used before, such as the order of opening of movies and the dynamics of sales. With these comprehensive factors, we used machine learning methods (Random Forest, Multilayer Perceptron, Support Vector Machine, and Naive Bayes) to predict the cumulative number of visitors from the first to the third week after release. At the first and second weeks, we predicted the cumulative number of visitors for the forthcoming week; at the third week, we predicted the total number of visitors for the film. In addition, we predicted the total number of cumulative visitors at both the first and second weeks using the same factors. As a result, we found that the accuracy of predicting the number of visitors for the forthcoming week was higher than that of predicting the total number in all three weeks, and that Random Forest achieved the highest accuracy among the machine learning methods used. This study has implications in that it 1) comprehensively considered various factors affecting the box office record that were barely addressed in previous research, such as the weekly audience rating, the weekly rank of the film, and the weekly sales share after release, and 2) suggested models that predict the weekly number of audiences of newly released films, so that stakeholders can respond flexibly and elastically to dynamically changing market demand.
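The week-by-week prediction setup described above can be sketched as follows. The feature names here are illustrative assumptions (the paper's variable set includes, for example, opening order and sales dynamics); the point of the sketch is how the target shifts from next week's cumulative visitors (weeks 1 and 2) to the final total (week 3).

```python
def weekly_training_rows(films):
    """Build (features, target) rows per the paper's setup: at weeks 1 and 2
    the target is the next week's cumulative visitors; at week 3 it is the
    film's final total. Feature names are illustrative, not the paper's."""
    rows = []
    for f in films:
        cum = f["weekly_cumulative"]  # cumulative visitors at weeks 1, 2, 3
        for week in (1, 2, 3):
            features = {
                "opening_order": f["opening_order"],  # order of opening that week
                "sales_share": f["sales_share"][week - 1],
                "cum_visitors": cum[week - 1],
                "week": week,
            }
            # weeks 1-2 predict next week's cumulative; week 3 predicts the total
            target = cum[week] if week < 3 else f["total_visitors"]
            rows.append((features, target))
    return rows

# toy film record (all numbers made up)
film = {
    "opening_order": 1,
    "sales_share": [0.30, 0.20, 0.10],
    "weekly_cumulative": [100, 150, 180],
    "total_visitors": 200,
}
rows = weekly_training_rows([film])
```

Rows of this shape would then feed the learners compared in the study (Random Forest, Multilayer Perceptron, Support Vector Machine, Naive Bayes).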

A Study on Hepatomegaly and Facial Telangiectasia in a Group of the Insured (간종대(肝腫大)와 안면모세혈관확장(顔面毛細血管擴張)의 보험의학적연구(保險醫學的硏究))

  • Im, Young-Hoon
    • The Journal of the Korean life insurance medical association / v.4 no.1 / pp.110-132 / 1987
  • A study on hepatomegaly detected by abdominal palpation, and on facial telangiectasia, in a total of 3,418 insured persons medically examined at the Honam Medical Room of Dong Bang Life Insurance Company Ltd. from February 1984 to August 1985 was undertaken. The results were as follows: 1) Hepatomegaly was found in 383 cases (27.5%) among the 1,395 male insureds and in 163 cases (8.1%) among the 2,023 female insureds. The difference in incidence between all males and females was statistically significant (p<0.001). In each age group, the incidence of hepatomegaly in males was higher than in females. The incidence in males increased considerably with age, showing 11.6%, 16.2%, 42.6%, and 52.9% from the second to the sixth decade in order, and thereafter decreasing to 26.7% in the seventh decade, while the incidence in females increased only slightly with each age group. 2) Facial telangiectasia was found in 318 cases (22.8%) among all males and in 157 cases (7.8%) among all females. The difference in incidence between all males and females was statistically significant (p<0.001). In each age group except the second decade, the incidence of telangiectasia in males was higher than in females. The incidence in males increased considerably with age, while it increased slightly in females. 3) Facial telangiectasia accompanied by hepatomegaly was found in 235 cases (61.4%) of the 383 male hepatomegaly cases and in 69 cases (42.3%) of the 163 female hepatomegaly cases; the difference was statistically significant (p<0.001). 4) Facial telangiectasia without spider angiomata accompanied by hepatomegaly was found in 201 cases (52.5%) of the 383 male hepatomegaly cases and in 67 cases (41.4%) of the 163 female hepatomegaly cases; facial spider angiomata accompanied by hepatomegaly were found in 34 cases (8.9%) of the male hepatomegaly cases and in 2 cases (1.2%) of the female hepatomegaly cases. 5) Abnormal SGOT activity was found in 19 cases (7.9%) of 242 male hepatomegaly cases and in one case (1.5%) of 67 female hepatomegaly cases; the difference was statistically significant (p<0.001). By size of hepatomegaly (palpated at <1 finger's breadth, <2 fingers' breadth, and ≥2 fingers' breadth), the incidence of abnormal SGOT activity was 2.2%, 6.0%, and 60.0%, respectively, in males, while in females abnormal SGOT activity was found in only one case, in the fifth decade, among the 67 hepatomegaly cases. 6) In ordinary medical examinations (where the insured amount is low), abnormal SGOT activity was found in 7 cases (4.8%) of 146 hepatomegaly cases palpated at 1½ fingers' breadth and under, while it was not found in 37 same-sized hepatomegaly cases in females. These 7 cases are thought to be very significant because they account for 35% of the 20 cases of abnormal SGOT activity with hepatomegaly. 7) Abnormal SGOT activity was found in 12 cases (4.4%) of 273 hepatomegaly cases of "not firm" consistency, but in 8 cases (22.2%) of 36 hepatomegaly cases of "firm" consistency; the difference was statistically significant (p<0.05). 8) Abnormal SGOT activity was found in 5 cases (17.9%) of 28 cases of spider angiomata with hepatomegaly, and in 10 cases (7.3%) of 166 cases of telangiectasia without spider angiomata with hepatomegaly. Owing to the small number of cases, statistical significance was not established, but the incidence of abnormal SGOT activity tends to be higher in spider angiomata cases with hepatomegaly than in telangiectasia cases without spider angiomata. 9) The incidence of abnormal SGOT activity tends to increase with age in males: it was not found among 4 hepatomegaly cases in the second decade, and was 3.8% in the third, 4.5% in the fourth, 9.3% in the fifth, 17.5% in the sixth, and 33.3% in the seventh decade, while in females it was found in only one of 67 cases. 10) It is believed that performing liver function tests on subjects with hepatomegaly, even in ordinary medical examinations (where the insured amount is low), will contribute considerably to the medical selection of hepatomegaly risk. 11) The age of the insured, the presence and severity of facial telangiectasia and especially spider angiomata, and the consistency of the enlarged liver (firm or not) should be considered to increase accuracy in evaluating hepatomegaly risk.


The Evaluation of Reliability for the Combined Refractive Power of Overlapping Trial Lenses (중첩된 시험렌즈의 합성굴절력에 대한 신뢰도 평가)

  • Lee, Hyung Kyun;Kim, So Ra;Park, Mijung
    • Journal of Korean Ophthalmic Optics Society / v.20 no.3 / pp.263-276 / 2015
  • Purpose: The current study aimed to evaluate the reliability of the combined refractive power when a spherical lens and a cylindrical lens are overlapped in a trial frame. Methods: The refractive powers, central thicknesses, and peripheral thicknesses of spherical trial lenses and negative-power cylindrical lenses were measured. The combined refractive power of the spherical and cylindrical lenses was measured with an auto lens meter. Measurement was repeated with the insertion order changed, and the results were compared with the calculated combined refractive power. Results: There was no correlation between the variation of central and peripheral thickness in trial lenses and the variation of lens power. Of 79 trial lenses, 3 did not meet the international standard. The refractive power calculated by Gullstrand's formula, which can compensate for vertex distance, differed less from the measured power than that calculated by the thin lens formula; however, it was still significantly different from the measured power. Regardless of the insertion order of the spherical and cylindrical lenses, the refractive powers generally followed the order: thin lens formula > actual measurements > Gullstrand's formula. When the spherical lens was inserted inside and the cylindrical lens outside, error was found only in the cylindrical power calculated by Gullstrand's formula; in the opposite order, error was found in both the cylindrical and spherical powers calculated by Gullstrand's formula. Comparing actual measurements of spherical equivalent power, accuracy was higher and the possibility of over-correction lower when the spherical lens was inserted inside and the cylindrical lens outside. Conclusions: The results reveal that the combined refractive power is influenced by factors other than the vertex distance, and that the refractive power varies according to the insertion order of the spherical and cylindrical lenses. Thus, the establishment of a standard for these is suggested to be necessary.
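For reference, the two calculation methods compared above can be sketched with the standard formulas: simple addition of powers (thin lens) versus the two-lens Gullstrand equation, which accounts for the separation between the lenses. The 2 mm separation in the example is an assumed value, not one taken from the study.

```python
def thin_lens_sum(p1, p2):
    """Thin-lens approximation: powers in diopters simply add."""
    return p1 + p2

def gullstrand_power(p1, p2, d):
    """Two-lens (Gullstrand) equation with separation d in metres:
    P = P1 + P2 - d * P1 * P2."""
    return p1 + p2 - d * p1 * p2

# two -3.00 D trial lenses, assumed 2 mm apart in the trial frame
p_thin = thin_lens_sum(-3.0, -3.0)
p_gull = gullstrand_power(-3.0, -3.0, 0.002)
```

Consistent with the ordering reported above, for minus lenses the thin-lens value (-6.00 D) is numerically greater than the Gullstrand value (about -6.018 D).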

Study on Labeling Efficiency of $^{99m}Tc$-HMPAO ($^{99m}Tc$-HMPAO 표지효율에 대한 고찰)

  • Hyeon, Jun Ho;Lim, Hyeon Jin;Kim, Ha Kyun;Cho, Seong Uk;Kim, Jin Eui
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.131-134 / 2012
  • Purpose: The labeling efficiency of radiopharmaceuticals in nuclear medicine is important for the accuracy and reliability of an examination. $^{99m}Tc$-HMPAO, usually used for brain SPECT scans, is chemically unstable because it contains many impurities, so loss of labeling efficiency occurs easily. This paper aims to aid the labeling and use of $^{99m}Tc$-HMPAO through experiments on factors affecting its labeling efficiency. Materials and Methods: Domestic HMPAO vials (Dong-A) used for brain SPECT scans were tested, along with a domestic Samyeong generator of 55.5 GBq (1,500 mCi), TLC measurement sets (ITLC-SG, butanone, saline, TLC chamber), and a radio-TLC scanner (Advantest, Bioscan). In the first experiment, the generator was eluted at intervals of 1, 8, 16, 24, and 28 hours, and each eluted $^{99m}Tc$-pertechnetate was labeled with HMPAO and the labeling efficiency measured. In the second experiment, after eluting $^{99m}Tc$-pertechnetate from a generator, aliquots were drawn at 0, 1, 3, and 6 hours, and each was labeled with HMPAO to measure labeling efficiency. In the third experiment, labeling efficiency was measured at 0, 0.5, 3, 5, and 7 hours after labeling $^{99m}Tc$-HMPAO. Results: In the first experiment, the measured values were 95.05, 94.64, 94.94, 95.64, and 96.76% in order of elapsed time. In the second experiment, the measured values were 94.38, 94.23, 93.26, and 91.03%. In the third experiment, the measured values were 95.76, 94.17, 88.19, 83.6, and 76.86%. Conclusion: In the first experiment, the labeling efficiency remained high even for $^{99m}Tc$-HMPAO labeled with $^{99m}Tc$-pertechnetate eluted 24 hours or more after the first elution; additional experiments will be needed to discuss usability. In the second experiment, the labeling efficiency decreased slightly over time but remained above 90%; here too, additional experiments will be needed to discuss usability. In the third experiment, the labeling efficiency decreased considerably; it is therefore recommended to use $^{99m}Tc$-HMPAO within 3 hours after labeling.


Implementation of Man-made Tongue Immobilization Devices in Treating Head and Neck Cancer Patients (두 경부 암 환자의 방사선치료 시 자체 제작한 고정 기구 유용성의 고찰)

  • Baek, Jong-Geal;Kim, Joo-Ho;Lee, Sang-Kyu;Lee, Won-Joo;Yoon, Jong-Won;Cho, Jeong-Hee
    • The Journal of Korean Society for Radiation Therapy / v.20 no.1 / pp.1-9 / 2008
  • Purpose: For head and neck cancer patients treated with radiation therapy, proper immobilization of intra-oral structures is crucial for reproducing treatment positions and optimizing dose distribution. We produced a man-made tongue immobilization device for each patient in this study, and compared the reproducibility of treatment positions and the dose distributions at the air-tissue interface between the man-made devices and conventional tongue bites. Materials and Methods: Dental alginate and putty were used to produce the man-made tongue immobilization devices. To evaluate reproducibility of treatment positions, all patients were CT-simulated, and linac-grams were repeated 5 times with each patient in the treatment position. An acrylic phantom was devised to evaluate the safety of the man-made devices. Air, water, alginate, and putty were placed in the phantom, and dose distributions at the air-tissue interface were calculated using Pinnacle (version 7.6c, Philips, USA) and measured with EBT film. Two field sizes (3×3 cm and 5×5 cm) were used for comparison. Results: Evaluation of the linac-grams showed that reproducibility of the treatment position was 4 times more accurate with the man-made tongue immobilization devices than with conventional tongue bites. Patients felt more comfortable using the customized devices during radiation treatment. Dose distributions at the air-tissue interface calculated using Pinnacle showed 7.78% and 0.56% for the 3×3 cm and 5×5 cm fields, respectively, while those measured with EBT film (International Specialty Products, USA) showed 36.5% and 11.8%, respectively; the values from EBT film were higher. Conclusion: Using man-made tongue immobilization devices made of dental alginate and putty in the treatment of head and neck cancer patients showed higher reproducibility of the treatment position than conventional mouth pieces. Man-made immobilization devices can help optimize air-tissue interface dose distributions and compensate for the limited accuracy of radiotherapy planning systems in calculating air-tissue interface dose distributions.
