• Title/Summary/Keyword: Field Evaluation


An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition accelerates dramatically and the complexity of change grows, a variety of research has been conducted on improving firms' short-term performance and enhancing their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm a competitive advantage. Discovery of promising technology depends on how a firm evaluates the value of technologies, and many evaluation methods have therefore been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology using patent information to overcome this limitation. Patent-based technology evaluation has served as a valuable approach to technological forecasting because a patent contains a full and practical description of a technology in a uniform structure; furthermore, it provides information not divulged in any other source. Although the patent-information-based approach has contributed to our understanding of how to predict promising technologies, it has limitations: prediction is made from past patent information, and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of technological promise, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, technological promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion.
The impact of technologies refers to their influence on the development and improvement of future technologies, and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies and represents the breadth of search underlying it. Fusion can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields; likewise, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (t-n, t-n-1, t-n-2, ${\cdots}$) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted is the backpropagation algorithm. In the third module, the study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising-technology index. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes, but slightly higher for the remaining indexes.
These unexpected results may be explained, in part, by the small number of patents: since only patent data in class G06F are used, the sample is relatively small, leading to incomplete learning of the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers planning technology development and policy makers implementing technology policy by providing a quantitative prediction methodology. It may also help other researchers by providing a deeper understanding of the complex technological forecasting field.
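The prediction step in the second module (lagged index values in, current index values out, trained by backpropagation) can be sketched as a small feed-forward network. The data, layer sizes, and learning rate below are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np

# Sketch of the second module: a backpropagation network predicting the
# five index values at time t (impact, fusion per technology/patent,
# diffusion per technology/patent) from their lagged values.
# The data are random stand-ins, not real patent indexes.
rng = np.random.default_rng(0)
n_lags, n_idx, hidden = 3, 5, 16
X = rng.random((200, n_lags * n_idx))        # indexes at t-1 .. t-3
y = X[:, :n_idx] * 0.5 + 0.1                 # toy target: indexes at t

W1 = rng.normal(0, 0.1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, n_idx));      b2 = np.zeros(n_idx)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    return h, h @ W2 + b2                     # linear output layer

losses = []
for _ in range(500):                          # plain gradient descent
    h, pred = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # backpropagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= 0.2 * gW2; b2 -= 0.2 * gb2
    W1 -= 0.2 * gW1; b1 -= 0.2 * gb1

print(losses[0] > losses[-1])                 # training loss should fall
```

On real data the mean absolute error of such a network would be compared against multiple regression, as the abstract describes.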

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from overflowing content is becoming ever more important. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft likewise focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because it constantly generates new information, and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge increase and patterns are constantly updated. Third, performance evaluation is difficult owing to the characteristics of unsupervised learning. Finally, the problem definition itself is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates the results. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data processing methods are applied in the presented model to address the problems of previous research and enhance its effectiveness. The study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the extracted knowledge by generating input data on a sentence basis without complex morphological analysis. An empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks (the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018) are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted with the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized with one-hot encoding. Then, using a neural tensor network, one score function per stock is trained. When a new entity from the testing set appears, its score is calculated with every score function, and the stock whose function gives the highest score is predicted as the related item. To evaluate the model, we compute the hit ratio over all reports in the testing set, confirming both predictive power and whether the score functions are well constructed.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on a testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the per-stock prediction performance, only three stocks (LG ELECTRONICS, KiaMtr, and Mando) show markedly lower performance than average, possibly owing to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only the named entity recognition tool and applied to the neural tensor network without a learned corpus or field-specific word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits remain: notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to semantically match new text information with the related stocks.
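The scoring step described above (one trained score function per stock, applied to a one-hot entity vector) can be sketched roughly as follows. The tensor dimensions and random weights are placeholders; the paper trains these functions on report entities:

```python
import numpy as np

# Sketch of per-stock neural-tensor scoring. Given a one-hot entity
# vector e, each stock k scores it as
#   s_k(e) = u_k . tanh(e' W_k e + V_k e + b_k)
# and the stock with the highest score is predicted as related.
# Weights here are random stand-ins (untrained), for illustration only.
rng = np.random.default_rng(1)
n_entities, slices, n_stocks = 100, 4, 30

params = [dict(W=rng.normal(size=(slices, n_entities, n_entities)),
               V=rng.normal(size=(slices, n_entities)),
               b=rng.normal(size=slices),
               u=rng.normal(size=slices)) for _ in range(n_stocks)]

def score(e, p):
    bilinear = np.einsum('i,sij,j->s', e, p['W'], e)   # tensor slices
    return float(p['u'] @ np.tanh(bilinear + p['V'] @ e + p['b']))

def predict_stock(entity_id):
    e = np.zeros(n_entities); e[entity_id] = 1.0       # one-hot entity
    return int(np.argmax([score(e, p) for p in params]))

# Hit ratio over a toy testing set of (entity, true stock) pairs
pairs = [(rng.integers(n_entities), rng.integers(n_stocks))
         for _ in range(50)]
hits = sum(predict_stock(e) == s for e, s in pairs)
print(hits / len(pairs))
```

With trained parameters, the printed hit ratio corresponds to the evaluation metric the abstract reports (69.3% on the real testing set).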

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.95-112
    • /
    • 2021
  • In university education, the choice of major classes plays an important role in students' careers. In line with changes in industry, however, the major subjects offered by each department are diversifying and growing in number, so students have difficulty choosing classes that fit their career paths. In general, students choose classes based on experience, such as peers' choices or seniors' advice. This has the advantage of reflecting the general situation, but it does not reflect individual tendencies or knowledge of existing courses, and it leads to information inequality, since the information is shared only among specific students. Moreover, as classes have recently been held remotely and exchanges between students have decreased, even experience-based decisions have become harder to make. This study therefore proposes a recommendation system that suggests major classes suited to individual characteristics based on data rather than experience. A recommendation system suggests information and content (music, movies, books, images, and so on) that a specific user may find interesting. It is already widely used in services where individual tendencies matter, such as YouTube and Facebook, and it is familiar from the personalized recommendations of over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in the sense of selecting suitable items from a fixed content list. Unlike other content consumption, however, the choice has large consequences: music and movies are usually consumed once and quickly, so each item's importance is relatively low and little deliberation is required.
Major classes, by contrast, are consumed over a whole semester, and each item matters greatly, because the composition of selected classes affects careers and graduation requirements; choices therefore demand greater caution. Given these characteristics, a recommendation system in the education field, despite its relatively small item range, supports decision-making that reflects individual characteristics which experience-based decisions cannot. This study aims to realize personalized education and raise students' educational satisfaction by presenting a recommendation model for university major classes. The model uses the class history data of undergraduate students at a university from 2015 to 2017, with students and their major names as metadata. Class history data are implicit feedback data that indicate only whether content was consumed, not preferences for classes; embedding vectors derived from such data therefore have low expressive power. With this issue in mind, the study proposes a Net-NeuMF model that generates vectors for students and classes through network analysis and uses them as the model's input. The model is based on the structure of NeuMF, a representative model for implicit feedback data that uses one-hot vectors; here, the input vectors are instead generated to represent the characteristics of students and classes through network analysis. To represent students, each student is a node, and two students are connected by a weighted edge if they took the same class; similarly, to represent classes, each class is a node, and two classes are connected if any student took both.
Node2Vec, a representation learning methodology that quantifies the characteristics of each node, is then used to produce the vectors. The model is evaluated with four indicators commonly used for recommendation systems, and experiments over three embedding dimensions analyze the impact of the embedding dimension on the model. The results show better performance on the evaluation metrics, regardless of dimension, than the existing NeuMF structure with one-hot vectors. This work thus contributes a network of students (users) and classes (items) that increases expressiveness over one-hot embeddings, matches the characteristics of each structure in the model, and shows better performance on various evaluation metrics than existing methodologies.
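The co-enrollment graph that feeds Node2Vec can be sketched in plain Python. The enrollment data and walk length are invented, and the walk shown is the uniform (p = q = 1) special case of Node2Vec's biased walk:

```python
import random
from collections import defaultdict
from itertools import combinations

# Sketch of the graph construction behind Net-NeuMF: students are nodes,
# and two students are linked with a weight equal to the number of
# classes they took in common. Node2Vec-style random walks over this
# graph would then feed a skip-gram model to produce the input vectors.
# Enrollment data below is invented for illustration.
enrollments = {                       # student -> classes taken
    's1': {'ML', 'DB', 'OS'},
    's2': {'ML', 'DB'},
    's3': {'OS', 'NET'},
    's4': {'ML', 'NET'},
}

graph = defaultdict(dict)             # weighted adjacency map
for a, b in combinations(enrollments, 2):
    w = len(enrollments[a] & enrollments[b])
    if w:                             # edge only if classes are shared
        graph[a][b] = w
        graph[b][a] = w

def random_walk(start, length, rng=random.Random(0)):
    """Weight-biased walk: the p = q = 1 special case of Node2Vec."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph[walk[-1]]
        if not nbrs:
            break
        nodes, weights = zip(*nbrs.items())
        walk.append(rng.choices(nodes, weights=weights)[0])
    return walk

print(graph['s1']['s2'])              # s1 and s2 share ML and DB
print(random_walk('s1', 5))
```

The class graph is built the same way with classes as nodes; the resulting walk corpora replace one-hot vectors as NeuMF inputs.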

Evaluation of the Usefulness of MapPHAN for the Verification of Volumetric Modulated Arc Therapy Planning (용적세기조절회전치료 치료계획 확인에 사용되는 MapPHAN의 유용성 평가)

  • Woo, Heon;Park, Jang Pil;Min, Jae Soon;Lee, Jae Hee;Yoo, Suk Hyun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.2
    • /
    • pp.115-121
    • /
    • 2013
  • Purpose: To verify the usefulness of the MapPHAN and MapCHECK 2 system, newly introduced along with the latest linear accelerator, by analyzing possible sources of error before applying it clinically to the verification of volumetric modulated arc therapy planning. Materials and Methods: All measurements used a TrueBEAM STX (Varian, USA), and dose distributions were calculated for each energy and irradiation condition with a computerized treatment planning system (Eclipse ver. 10.0.39, Varian, USA). The basic performance of MapCHECK 2 and possible causes of error were analyzed against measurement. To verify its performance, 6X, 6X-FFF, 10X, 10X-FFF, and 15X beams were measured for a $10{\times}10$ cm field at gantry angles of $0^{\circ}$ and $180^{\circ}$ for each energy. To confirm the effect of the IGRT couch CT number on the measurements, 6X-FFF and 15X beams ($10{\times}10$ cm field) were measured at gantry angles of $180^{\circ}$, $135^{\circ}$, and $275^{\circ}$ with couch CT number settings of -800 (carbon) & -950 (air inside the couch) and -100 & -950, and the HU values assigned to MapPHAN in the treatment planning system were compared. To examine the measurement errors caused by the sharp edges of MapPHAN, its gantry-angle dependence was measured in three ways: with the detector set up vertically at gantry $90^{\circ}$ and $270^{\circ}$ for 6X-FFF and 15X; with the detector set up horizontally ($10{\times}10$ cm field) at gantry $90^{\circ}$, $45^{\circ}$, $315^{\circ}$, and $270^{\circ}$ for 6X-FFF and 15X; and with an open arc delivered without intensity modulation.
Results: The basic performance of MapCHECK 2, the attenuation measured through the couch, the HU values assigned to MapPHAN, and the calculation accuracy for the angled edges of MapPHAN were all within valid ranges and did not affect the measurements. Of the three gantry-dependence tests, the first, with the detector vertical, gave differences at gantry $270^{\circ}$ (relative $0^{\circ}$) and $90^{\circ}$ (relative $180^{\circ}$) of -1.51 and 0.83% for 6X-FFF and -0.63 and -0.22% for 15X, showing no angular effect in the AP/PA directions. With the detector horizontal on the couch, gantry $90^{\circ}$ and $270^{\circ}$ gave differences of 4.37 and 2.84% for 6X-FFF and -9.63 and -13.32% for 15X; for lateral beams, the MapPHAN values were thus not within the valid range, which was confirmed by gamma values exceeding the 3% criterion. For the open arc (6X-FFF and 15X, $10{\times}10$ cm field, full $360^{\circ}$ rotation), the dose distribution showed a pass rate of nearly 90%. Conclusion: Based on the above results, MapPHAN's gantry-angle dependence makes it suitable for measuring relative dose distributions by gamma value for lateral beams, but it cannot be considered to measure absolute dose accurately in those directions. To confirm treatment plans more accurately and reduce the tolerance for rotational deliveries such as VMAT, measuring accurate absolute isodose with MapCHECK 2 in combination with the IMF (Isocentric Mounting Fixture) should minimize the impact of the angular dependence.
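The gamma pass rate cited in the results can be illustrated with a simplified 1-D global gamma computation (3%/3 mm). This is a didactic sketch, not the vendor's algorithm, and the dose profile is synthetic:

```python
import numpy as np

# Simplified 1-D global gamma analysis, as used to compare a MapCHECK 2
# measurement against a planned dose profile. For each measured point,
# gamma is the minimum over reference points of
#   sqrt((distance/DTA)^2 + (dose difference / dose tolerance)^2),
# and a point passes if gamma <= 1. Clinical tools work on 2-D/3-D grids.
def gamma_pass_rate(pos, measured, reference, dose_tol=0.03, dta_mm=3.0):
    ref_max = reference.max()                       # global normalization
    gammas = []
    for x, d in zip(pos, measured):
        dist = (pos - x) / dta_mm                   # distance-to-agreement
        ddose = (reference - d) / (dose_tol * ref_max)  # dose difference
        gammas.append(np.sqrt(dist ** 2 + ddose ** 2).min())
    gammas = np.asarray(gammas)
    return float((gammas <= 1.0).mean() * 100.0)

pos = np.linspace(-50, 50, 101)                     # detector positions, mm
reference = np.exp(-(pos / 30.0) ** 2)              # toy planned profile
measured = reference * 1.01                         # 1% scaling error
print(gamma_pass_rate(pos, measured, reference))    # small error: high rate
```

A 1% uniform error passes everywhere under 3%/3 mm, while a 20% error would drive the pass rate well below 100%, mirroring the lateral-beam failures reported above.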


Brief Introduction of Research Progresses in Control and Biocontrol of Clubroot Disease in China

  • He, Yueqiu;Wu, Yixin;He, Pengfei;Li, Xinyu
    • Korean Society of Mycology Newsletter: Conference Proceedings (한국균학회소식:학술대회논문집)
    • /
    • 2015.05a
    • /
    • pp.45-46
    • /
    • 2015
  • Clubroot disease of crucifers has occurred in China since 1957. It has spread across the whole country, especially the southwest and northeast, where it causes 30-80% loss in some fields. The disease has been expanding in recent years as seeds are imported and the floating seedling system is practiced. For its effective control, the Ministry of Agriculture of China set up a program in 2010 with a research team led by Dr. Yueqiu He of Yunnan Agricultural University. The team includes 20 main researchers from 11 universities and 5 institutions. After 5 years, the team has made much progress in understanding disease occurrence, collecting resources, identifying and breeding for resistance, exploring biological agents, developing formulations, evaluating chemicals, and devising control strategy. About 1,200 collections of local and commercial crucifers were identified in the field and by artificial inoculation in the laboratory, and 10 resistant cultivars were bred, including 7 Chinese cabbages and 3 cabbages. More than 800 antagonistic strains were isolated, including bacteria, Streptomyces, and fungi. Around 100 chemicals were evaluated in the field and greenhouse for control effect; 6 showed high effect, and in particular fluazinam and cyazofamid controlled about 80% of the disease. However, fluazinam has a negative effect on soil microbes. Clubroot disease cannot be controlled by bioagents or chemicals once the pathogen Plasmodiophora brassicae has infected its host and established the parasitic relationship. We found that the earlier the pathogen infected its host, the more severe the disease was; early control was therefore the most effective. For Chinese cabbage, all control measures should be taken in the first 30 days, because new infection cannot cause severe symptoms after 30 days from seeding.
For example, the biocontrol agent Bacillus subtilis strain XF-1 controlled the disease by 70-85% on average when mixed with seedling substrate and drenched 3 times after transplanting (immediately, at 7 days, and at 14 days). XF-1 has been researched in depth regarding its control mechanisms, its genome, and the development and application of biocontrol formulations. It produces antagonistic proteins, enzymes, antibiotics, and IAA, which promote rhizogenesis and growth. Its genome was sequenced with an Illumina/Solexa Genome Analyzer and assembled into 20 scaffolds; the gaps between scaffolds were then filled by long-fragment PCR amplification, yielding a complete genome of 4,061,186 bp. The genome has 43.8% GC content, 108 tandem repeats with an average of 2.65 copies, and 84 transposons. 3,853 CDSs were predicted, of which 112 were assigned to secondary metabolite biosynthesis, transport, and catabolism. Among these, five NRPS/PKS giant gene clusters responsible for the biosynthesis of polyketide (pksABCDEFHJLMNRS, 72.9 kb), surfactin (srfABCD, 26.148 kb), bacilysin (bacABCDE, 5.903 kb), bacillibactin (dhbABCEF, 11.774 kb), and fengycin (ppsABCDE, 37.799 kb) are highly homologous to function-confirmed biosynthesis genes in other strains. Many key regulatory genes for secondary metabolites, such as comABPQKXZ, degQ, sfp, yczE, degU, ycxABCD, and ywfG, were also predicted. Therefore, XF-1 has the biosynthetic potential for the secondary metabolites surfactin, fengycin, bacillibactin, bacilysin, and bacillaene. Thirty-two compounds were detected in cell extracts of XF-1 by MALDI-TOF-MS, including one macrolactin (m/z 441.06), two fusaricidins (m/z 850.493 and 968.515), one circulocin (m/z 852.509), nine surfactins (m/z 1044.656~1102.652), five iturins (m/z 1096.631~1150.57), and fourteen fengycins (m/z 1449.79~1543.805).
The top three composition types (comprising 56.67% of the total extract) are surfactin, iturin, and fengycin: the most abundant is the surfactin type at 30.37% of the total extract, the second is fengycin at 23.28%, with rich diversity of chemical structure, and the smallest is iturin at 3.02%. Moreover, the same main compositions were detected in Bacillus sp. 355, which is also an effective biocontrol bacterium against clubroot of crucifers. These compounds (surfactin, iturin, and fengycin) may therefore be the main active compositions of XF-1 against P. brassicae. Twenty-one fengycin-type compounds were evaluated by LC-ESI-MS/MS for antifungal activity, including fengycin A $C_{16{\sim}C19}$, fengycin B $C_{14{\sim}C17}$, fengycin C $C_{15{\sim}C18}$, fengycin D $C_{15{\sim}C18}$, and fengycin S $C_{15{\sim}C18}$. Furthermore, one novel compound was identified as dehydroxyfengycin $C_{17}$ according to its MS and 1D and 2D NMR spectral data; its molecular weight is 1488.8480 Da and its formula is $C_{75}H_{116}N_{12}O_{19}$. The fengycin-type compounds (FTCPs, $250{\mu}g/mL$) were used to treat the resting spores of P. brassicae ($10^7/mL$), and leakage of cytoplasm components and cell destruction were monitored. After 12 h of treatment, the absorbances at 260 nm (A260) and 280 nm (A280) increased gradually toward their maxima, accompanying the collapse of P. brassicae resting spores, and after 24 h nearly no intact cells were observed. The results suggest that the cells are lysed by the FTCPs of XF-1, and the diversity of FTCPs is mainly responsible for the biocontrol of clubroot disease. Of the five media tested (MOLP, PSA, LB, Landy, and LD), MOLP is the most suitable for growth of the strain, and Landy sucrose medium the least suitable for strain longevity; however, the lipopeptide yield is highest in Landy sucrose medium.
The lipopeptides produced in the five media were analyzed by HPLC; their components were the same, while their contents from B. subtilis XF-1 differed by medium. We found that the medium affects the lipopeptide content but not the ingredients, and a lack of nutrition seems to promote lipopeptide secretion from XF-1. The volatile components with inhibitory activity against the fungus Cylindrocarpon spp., collected in a sealed vessel, were detected by HS-SPME-GC-MS in eight biocontrol Bacillus species and four positive mutant strains of XF-1 mutagenized with chemical mutagens. They share the same main volatile components, including pyrazines, aldehydes, oxazolidinones, and sulfides, which together make up 91.62% in XF-1: the most abundant is the pyrazine type at 47.03%, followed by aldehydes at 23.84% and oxazolidinones at 15.68%, with sulfides the smallest at 5.07%.


The 1998, 1999 Patterns of Care Study for Breast Irradiation after Mastectomy in Korea (1998, 1999년도 우리나라에서 시행된 근치적 유방 전절제술 후 방사선치료 현황 조사)

  • Keum, Ki-Chang;Shim, Su-Jung;Lee, Ik-Jae;Park, Won;Lee, Sang-Wook;Shin, Hyun-Soo;Chung, Eun-Ji;Chie, Eui-Kyu;Kim, Il-Han;Oh, Do-Hoon;Ha, Sung-Whan;Lee, Hyung-Sik;Ahn, Sung-Ja
    • Radiation Oncology Journal
    • /
    • v.25 no.1
    • /
    • pp.7-15
    • /
    • 2007
  • $\underline{Purpose}$: To determine the patterns of evaluation and treatment in patients with breast cancer treated with radiotherapy after mastectomy. A nationwide study was performed with the goal of improving radiotherapy treatment. $\underline{Materials\;and\;Methods}$: A web-based database system for the Korean Patterns of Care Study (PCS) for 6 common cancers was developed. Randomly selected records of 286 eligible patients treated between 1998 and 1999 at 17 hospitals were reviewed. $\underline{Results}$: The ages of the study patients ranged from 20 to 80 years (median 44 years). The pathologic T stage by the AJCC was T1 in 9.7% of cases, T2 in 59.2%, T3 in 25.6%, and T4 in 5.3%. Nodal involvement was N0 in 7.3%, N1 in 14%, N2 in 38.8%, and N3 in 38.5% of cases. The AJCC stage was I in 0.7% of cases, IIa in 3.8%, IIb in 9.8%, IIIa in 43%, IIIb in 2.8%, and IIIc in 38.5%. Various sequences of chemotherapy and radiotherapy were used after mastectomy. Mastectomy and chemotherapy followed by radiotherapy was the most common sequence (47% of cases); mastectomy, chemotherapy, and radiotherapy followed by additional chemotherapy was used in 35% of cases, and neoadjuvant chemoradiotherapy in 12.5%. The radiotherapy volume was the chest wall only in 5.6% of cases; the chest wall and supraclavicular fossa (SCL) in 20.3%; the chest wall, SCL, and internal mammary lymph nodes (IMN) in 27.6%; the chest wall, SCL, and posterior axillary lymph nodes in 25.9%; and the chest wall, SCL, IMN, and posterior axillary lymph nodes in 19.9%. Two patients received IMN irradiation only. The chest wall was irradiated with tangential fields in 57.3% of cases and with electron beams in 42%.
A bolus on the chest wall was used in 54.8% of the tangential field cases and 52.5% of the electron beam cases. The radiation dose was $45{\sim}59.4\;Gy$ (median 50.4 Gy) to the chest wall, $45{\sim}59.4\;Gy$ (median 50.4 Gy) to the SCL, and $4.8{\sim}38.8\;Gy$ (median 9 Gy) to the PAB. $\underline{Conclusion}$: Different and various treatment methods were used for radiotherapy of breast cancer patients after mastectomy at each hospital, varying most in the irradiation of the chest wall. A separate analysis of the details of radiotherapy planning, together with treatment outcomes, is needed in order to evaluate the different processes.

The Evaluation of Radiation Dose to Embryo/Fetus and the Design of Shielding in the Treatment of Brain Tumors (임산부의 전뇌 방사선 치료에 있어서의 태아의 방사선량 측정 및 차폐 구조의 설계)

  • Cho, Woong;Huh, Soon-Nyung;Chie, Eui-Kyu;Ha, Sung-Whan;Park, Yang-Gyun;Park, Jong-Min;Park, Suk-Won
    • Journal of Radiation Protection and Research
    • /
    • v.31 no.4
    • /
    • pp.203-210
    • /
    • 2006
  • Purpose: To estimate the dose to the embryo/fetus of a pregnant patient treated for brain tumors, and to design a shielding device that keeps the embryo/fetus dose under acceptable levels. Materials and Methods: A shielding wall 1.55 m high, 0.9 m wide, and 30 cm thick was fabricated on 4 trolleys. It is placed between the patient and the treatment head of a linear accelerator to attenuate leakage radiation from the treatment head effectively, and is positioned 1 cm below the lower margin of the treatment field to minimize the patient dose from the treatment head. An anti-scatter neck supporter of 2 cm thick Cerrobend metal was designed to minimize radiation scattered from the treatment fields; it is divided into 2 sections that are installed around the patient's neck from the right and left sides. A shielding bridge against room-scattered radiation holds 2 sheets of 3 mm lead plate above the abdomen, with three detectors set up under the lead sheets. A humanoid phantom was irradiated with the same treatment parameters, with and without the shielding devices, using TLDs and ionization chambers with and without a build-up cap. Results: The dose to the embryo/fetus without shielding was 3.20, 3.21, 1.44, and 0.90 cGy at off-field distances of 30, 40, 50, and 60 cm. With shielding, it was reduced to 0.88, 0.60, 0.35, and 0.25 cGy, a shielding effect of 70% to 80%. TLD results were 1.8, 1.2, 0.8, 1.2, and 0.8 cGy. The dose measured by the survey meter was 10.9 mR/h at the surface of the patient's abdomen. The dose to the embryo/fetus over the entire treatment was estimated to be about 1 cGy. Conclusion: According to AAPM Report No. 50 on dose limits for the embryo/fetus during pregnancy, a dose of less than 5 cGy poses little risk. Our measurements satisfy the recommended values, and our shielding technique was proven acceptable.

Dose Planning of Forward Intensity Modulated Radiation Therapy for Nasopharyngeal Cancer using Compensating Filters (보상여과판을 이용한 비인강암의 전방위 강도변조 방사선치료계획)

  • Chu Sung Sil;Lee Sang-wook;Suh Chang Ok;Kim Gwi Eon
    • Radiation Oncology Journal
    • /
    • v.19 no.1
    • /
    • pp.53-65
    • /
    • 2001
  • Purpose : To improve local control in patients with nasopharyngeal cancer, we implemented 3-D conformal radiotherapy and forward intensity modulated radiation therapy (IMRT) using compensating filters. Three-dimensional conformal radiotherapy with intensity modulation is a new modality for cancer treatment. We designed 3-D treatment plans with a 3-D RTP (radiation treatment planning) system and evaluated dose distributions with tumor control probability (TCP) and normal tissue complication probability (NTCP). Materials and Methods : We developed a treatment plan consisting of four intensity modulated photon fields delivered through compensating filters, with block transmission for critical organs. We acquired full CT images of the head and neck in 3 mm slices, delineated the PTV (planning target volume) and surrounding critical organs, and reconstructed 3-D images on the computer. In the planning stage, the planner specifies the number of beams and their directions, including non-coplanar beams, the prescribed dose for the target volume, and the permissible doses for normal organs and overlap regions. We designed compensating filters according to the tissue deficit and PTV shape, assigned dose weighting for each field to obtain an adequate dose distribution, and weighted shielding blocks for transmission. Therapeutic gains were evaluated with numerical equations for tumor control probability and normal tissue complication probability. The TCP and NTCP derived from DVHs (dose volume histograms) were compared between 3-D conformal radiotherapy and forward intensity modulated conformal radiotherapy with compensator and block weighting. Optimization of the weight distribution was performed iteratively, starting from an initial guess or a uniform weight distribution. Results : Using a four-field IMRT plan, we customized the dose distribution to conform to and deliver a sufficient dose to the PTV. In addition, in the overlap regions between the PTV and the normal organs (spinal cord, salivary gland, pituitary, optic nerves), the dose was kept within the tolerance of the respective organs. Using compensating filters, we obtained a sufficient TCP value and an acceptable NTCP. Quality assurance checks showed acceptable agreement between the planned and the implemented MLC (multi-leaf collimator) fields. Conclusion : IMRT provides a powerful and efficient solution for complex planning problems where the surrounding normal tissues place severe constraints on the prescription dose. The intensity modulated fields can be efficaciously and accurately delivered using compensating filters.
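The abstract does not give the TCP/NTCP equations it uses, but such DVH-based evaluations are commonly built on a generalized EUD (equivalent uniform dose) fed into a logistic dose-response model. The following is a minimal sketch under that assumption; the parameter values (`d50`, `td50`, `gamma50`, `a`) and the example DVHs are purely illustrative, not the paper's:

```python
# Hypothetical EUD-based TCP/NTCP sketch (Niemierko-style), NOT the
# paper's actual model. A DVH is a list of (dose_Gy, fractional_volume)
# pairs from a differential dose-volume histogram; doses must be > 0.

def eud(dvh, a):
    """Generalized EUD: (sum_i v_i * d_i^a)^(1/a)."""
    return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

def tcp(dvh, d50=60.0, gamma50=2.0):
    """Logistic TCP; a large negative 'a' makes cold spots dominate."""
    e = eud(dvh, a=-10.0)
    return 1.0 / (1.0 + (d50 / e) ** (4.0 * gamma50))

def ntcp(dvh, td50=45.0, gamma50=2.0, a=8.0):
    """Logistic NTCP; a large positive 'a' suits serial organs (e.g. cord)."""
    e = eud(dvh, a)
    return 1.0 / (1.0 + (td50 / e) ** (4.0 * gamma50))

# Illustrative DVHs: a target near prescription dose, a spared cord.
target_dvh = [(58.0, 0.2), (62.0, 0.5), (66.0, 0.3)]
cord_dvh = [(20.0, 0.7), (35.0, 0.3)]
print(f"TCP  = {tcp(target_dvh):.3f}")
print(f"NTCP = {ntcp(cord_dvh):.3f}")
```

With these made-up numbers the target TCP lands near 0.5 and the cord NTCP stays small, mirroring the kind of trade-off the DVH comparison in the paper quantifies.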


Agronomic Characteristics and Productivity of Winter Forage Crop in Sihwa Reclaimed Field (시화 간척지에서 월동 사료작물의 초종 및 품종에 따른 생육특성 및 생산성)

  • Kim, Jong Geun;Wei, Sheng Nan;Li, Yan Fen;Kim, Hak Jin;Kim, Meing Joong;Cheong, Eun Chan
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.40 no.1
    • /
    • pp.19-28
    • /
    • 2020
  • This study was conducted to compare the agronomic characteristics and productivity of winter forage crop species and varieties in reclaimed land. The winter forage crops used in this study were developed by the National Institute of Crop Science, RDA. Oats ('Samhan', 'Jopung', 'Taehan', 'Dakyung' and 'Hi-early'), forage barley ('Yeongyang', 'Yuyeon', 'Yujin', 'Dacheng' and 'Yeonho'), rye ('Gogu', 'Jogreen' and 'Daegokgreen') and triticale ('Shinyoung', 'Saeyoung', 'Choyoung', 'Sinseong', 'Minpung' and 'Gwangyoung') were planted in the reclaimed land of the Sihwa district in Hwaseong, Gyeonggi-do in the autumn of 2018, cultivated using each standard cultivation method, and harvested in May 2019 (oat and rye: 8 May; barley and triticale: 20 May). The emergence rate was lowest in rye (84.4%), while forage barley, oat and triticale were at similar levels (92.8 to 98.8%). In tiller number, triticale was the lowest (416 tillers/㎡) and oat the highest (603 tillers/㎡). Rye had the earliest heading date (April 21), triticale headed on April 26, and oat and forage barley in early May (May 2 and May 5). Plant height was greatest in rye (95.6 cm), triticale and forage barley were similar (76.3 and 68.3 cm), and oat was the shortest (54.2 cm). The dry matter (DM) content of rye was the highest at an average of 46.04%, and the other species were similar at 35.09~37.54%. Productivity differed among species and varieties: forage barley had the highest dry matter yield (4,344 kg/ha), oat was similar to barley, and rye and triticale were lowest. Among oats, 'Dakyung' and 'Hi-early' had higher DM yields (4,283 and 5,490 kg/ha), and among forage barley, 'Yeonho', 'Yujin' and 'Dacheng' were higher (4,888, 5,433 and 5,582 kg/ha). The crude protein content of oat (6.58%) tended to be the highest, and its TDN (total digestible nutrient) content (63.61%) was higher than that of the other species. In RFV (relative feed value), oats averaged 119, while the other three species averaged 92~105. The 1,000-grain weight was highest in triticale (43.03 g) and lowest in rye (31.61 g). In the evaluation of germination rate according to salt concentration (salinity), the germination rate was maintained at about 80% from 0.2 to 0.4% salinity. The correlation coefficient between germination and salt concentration was high in oat and barley (-0.91 and -0.92) and lowest in rye (-0.66). In conclusion, forage barley and oats showed good productivity in reclaimed land, and adaptability also differed among varieties of each forage crop. When growing forage crops in reclaimed land, selecting highly adaptable species and varieties is recommended.
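The germination-salinity correlations quoted above (-0.91 for oat, -0.66 for rye) are Pearson coefficients. A minimal sketch with hypothetical germination data (the paper's raw measurements are not given in the abstract) shows how such a coefficient is computed:

```python
# Pearson correlation from first principles; the germination rates
# below are made-up illustrative values, not the study's data.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

salinity = [0.0, 0.2, 0.4, 0.6, 0.8]          # salt concentration (%)
oat_germination = [95, 92, 85, 60, 35]        # hypothetical germination (%)

r = pearson(salinity, oat_germination)
print(f"r = {r:.2f}")
```

A strongly negative r, as computed here, matches the pattern reported for oat and barley: germination falls steeply as salinity rises.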

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategy used to exploit item importance, itemset mining approaches are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These mining algorithms compute transactional weights from the weight of each item in large databases, and then discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, database analysis can reveal the importance of a given transaction, because a transaction's weight is high when it contains many items with high weights. We not only analyze the advantages and disadvantages but also compare the performance of the best-known algorithms in the field of frequent itemset mining based on transactional weights. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are various other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the WIT-tree is constructed, since each node of the WIT-tree stores item information such as the item and its transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of the transactions that contain them. WIT-FWIs-MODIFY features a reduction in the operations needed to calculate the frequency of a new itemset. WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (i.e., dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared to the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires far more computations than the others on average.
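The core idea of transaction-weight-based mining can be sketched compactly. The following is a simplified, hypothetical illustration (not WIS or the WIT-tree algorithms themselves): each transaction's weight is taken as the mean of its item weights, and an itemset's weighted support is the total weight of transactions containing it, normalized by the total weight of all transactions:

```python
# Simplified transactional-weight mining sketch with made-up data.
# Definitions here (mean item weight per transaction, normalized
# weighted support) are assumptions for illustration only.
from itertools import combinations

item_weight = {"a": 0.9, "b": 0.6, "c": 0.3, "d": 0.8}
transactions = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "d"}]

def tw(t):
    """Transaction weight: mean weight of the items it contains."""
    return sum(item_weight[i] for i in t) / len(t)

total = sum(tw(t) for t in transactions)

def weighted_support(itemset):
    """Weight of supporting transactions, normalized by total weight."""
    return sum(tw(t) for t in transactions if itemset <= t) / total

# Brute-force enumeration of weighted frequent itemsets up to length 2
# (the WIT-tree algorithms instead grow length-(N+1) itemsets from
# pairs of length-N itemsets without rescanning the database).
min_ws = 0.4
items = sorted({i for t in transactions for i in t})
frequent = [set(c) for k in (1, 2)
            for c in combinations(items, k)
            if weighted_support(set(c)) >= min_ws]
print(frequent)
```

Here a transaction containing many high-weight items contributes more to support, which is exactly why these methods can surface itemsets that plain frequency-based mining would rank lower.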