• Title/Summary/Keyword: Predictive system


A study on design process for public space by users behavioral characteristics (이용자 행태 특성에 의한 공용공간의 디자인 프로세스 연구)

  • 김개천;김범중
    • Archives of design research
    • /
    • v.17 no.1
    • /
    • pp.89-98
    • /
    • 2004
• A systematic approach to behavior, grounded in human psychology, is needed for behavior-centered space design, together with the recognition that human and environment are complementary: human and space should be understood as a single phenomenon that presupposes interaction. Designing behavior-oriented space means configuring and coordinating physical objects as well as understanding, analyzing, and reflecting psychological and behavioral phenomena; it requires analysis of the individual as well as an understanding of the interaction between human groups. With respect to the recognition of space, what matters is the analysis not of material movement but of energy circulation and its variables. Understanding users' behavior and psychology, in other words, is not oriented toward mere convenience; it seeks to grasp the behavioral patterns and psychological phenomena between space and human beings, beyond decomposing human and space into physical elements and designing from standardized data. Through such an understanding of the essence of behavior, a more human-oriented space design can be realized, a user-centered design process can be created from a new viewpoint, and a general amenity among human, space, and environment, that is, a better environmental quality, can be produced. For this, an awareness of human activity, the activity system, must come first, and design must proceed not from predictive intuition but through a quasi-scientific, systematic process. On this basis, this study investigates a design orientation that recognizes space as another form of life and explores a process by which that orientation is translated into a design language grounded in human behavior. If the essence of spatial behavior and the activity system are analyzed through user observation, reflected in a space design program, and then developed into a formative language, a new design process for human and environment can be produced. In conclusion, reflecting users' behavior and psychology in design, in contrast to existing public space design based on physical data, can improve the quality of human life and ultimately contribute to the proposition of the 'humanization of space'.


Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
• The demographics of Internet users are the most basic and important sources for target marketing and personalized advertising on digital marketing channels such as email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites: as the user clicks anywhere on a webpage, the activity is logged in semi-structured website log files. Such data show what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data, deriving various independent variables likely to be correlated with demographics, including search keywords; frequency and intensity by time, day, and month; the variety of websites visited; and text information from visited web pages. The demographic attributes to be predicted also vary across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, have been used for prediction model building. However, this research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model. The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method fits the prediction of each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and from the results of previous research, 64 clickstream attributes are applied to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction on the clickstream variables to address the curse of dimensionality and the overfitting problem, using three approaches based on decision trees, PCA, and cluster analysis. In the third step, we build alternative predictive models for each demographic variable, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of accuracy and selects the best model. For the experiments, we used clickstream data covering 5 demographics and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross validation was conducted to enhance the reliability of the experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best with decision tree based dimension reduction and a neural network, whereas the prediction of gender and marital status is most accurate with SVM and no dimension reduction. We conclude that the online behaviors of Internet users, captured from clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
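A minimal sketch of this four-step pipeline, assuming a scikit-learn workflow in place of IBM SPSS Modeler 17.0; the file name and column names are hypothetical placeholders, not the study's data:

```python
# Sketch: user profiles -> optional dimension reduction -> alternative models
# per demographic target -> 5-fold cross-validated accuracy per combination.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

profiles = pd.read_csv("user_profiles.csv")  # hypothetical: 64 clickstream attrs + 5 demographics
targets = ["gender", "age", "marital_status", "residence", "job"]
X = profiles.drop(columns=targets)

models = {"svm": SVC(), "mlp": MLPClassifier(max_iter=1000),
          "logit": LogisticRegression(max_iter=1000)}

for target in targets:
    y = profiles[target]
    for label, reducer in [("raw", None), ("pca", PCA(n_components=10))]:
        for name, clf in models.items():
            steps = [StandardScaler()] + ([reducer] if reducer is not None else []) + [clf]
            acc = cross_val_score(make_pipeline(*steps), X, y, cv=5).mean()
            print(f"{target:14s} {label:3s} {name:5s} accuracy={acc:.3f}")
```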

Development and application of prediction model of hyperlipidemia using SVM and meta-learning algorithm (SVM과 meta-learning algorithm을 이용한 고지혈증 유병 예측모형 개발과 활용)

  • Lee, Seulki;Shin, Taeksoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.111-124
    • /
    • 2018
• This study aims to develop a classification model for predicting the occurrence of hyperlipidemia, one of the chronic diseases. Prior studies applying data mining techniques to disease prediction can be classified into model design studies for predicting cardiovascular disease and studies comparing disease prediction results. In the foreign literature, studies predicting cardiovascular disease were predominant among those using data mining techniques; domestic studies were not much different, but focused mainly on hypertension and diabetes. Since hyperlipidemia, like hypertension and diabetes, is a chronic disease of high importance, this study selected it as the disease to be analyzed and developed a prediction model using SVM and meta-learning algorithms, which are already known to have excellent predictive power. To achieve this purpose, we used the Korea Health Panel 2012 data set. The Korea Health Panel produces basic data on health expenditure, health level, and health behavior, and has been conducted annually since 2008. From the hospitalization, outpatient, emergency, and chronic disease data of the 2012 panel, 1,088 patients with hyperlipidemia and 1,088 non-patients were randomly selected, for a total of 2,176 study subjects. Three methods were used to select input variables for predicting hyperlipidemia. First, a stepwise method was performed using logistic regression: among the 17 variables, the categorical variables (except length of smoking) were expressed as dummy variables relative to a reference group, and six variables (age, BMI, education level, marital status, smoking status, gender), excluding income level and smoking period, were selected at the 0.1 significance level. Second, the C4.5 decision tree algorithm was used; the significant input variables were age, smoking status, and education level. Finally, genetic algorithms were used: for SVM, the selected input variables were age, marital status, education level, economic activity, smoking period, and physical activity status, while for the artificial neural network they were age, marital status, and education level. Based on the selected variables, we compared SVM, meta-learning algorithms, and other prediction models for hyperlipidemia, and compared classification performance using TP rate and precision. The main results are as follows. First, the accuracy of SVM was 88.4% and that of the artificial neural network was 86.7%. Second, classification models using the input variables selected by the stepwise method were slightly more accurate than models using all variables. Third, the precision of the artificial neural network was higher than that of SVM when only the three variables selected by the decision tree were used as inputs. For models based on the input variables selected by the genetic algorithm, the classification accuracy of SVM was 88.5% and that of the artificial neural network was 87.9%. Finally, this study showed that stacking, the meta-learning algorithm proposed here, performs best when it uses the predicted outputs of SVM and MLP as input variables of an SVM meta-classifier. In summary, to predict hyperlipidemia, one of the representative chronic diseases, we used SVM and meta-learning algorithms known for high accuracy; the classification accuracy of stacking as a meta-learner was higher than that of the other meta-learning algorithms, although the predictive performance of the proposed meta-learning algorithm equals that of the best-performing single model, SVM (88.6%). The limitations of this study are as follows. Although various variable selection methods were tried, most variables used in the study were categorical dummy variables; with many categorical variables, the results may differ if continuous variables are used, because models such as decision trees can be better suited to categorical variables than models such as neural networks. Despite these limitations, this study is significant in predicting hyperlipidemia with hybrid models such as meta-learning algorithms, which had not been studied previously, and in improving model accuracy by applying various variable selection techniques. We also expect the proposed model to be useful for the prevention and management of hyperlipidemia.
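The stacking configuration reported as best (SVM and MLP predictions fed into an SVM meta-classifier) can be sketched as follows; this is an illustrative scikit-learn analogue on synthetic data, not the authors' implementation:

```python
# Sketch: SVM and MLP base learners whose out-of-fold predictions become the
# inputs of an SVM meta-classifier, mirroring the stacking setup above.
# Synthetic data stands in for the 2,176-subject Korea Health Panel sample.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 6 selected inputs (age, BMI, education, marital status, smoking, gender).
X, y = make_classification(n_samples=2176, n_features=6, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000))),
    ],
    final_estimator=SVC(),  # SVM as the meta classifier
    cv=5,                   # out-of-fold base predictions for the meta level
)
print(cross_val_score(stack, X, y, cv=5).mean())
```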

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
• According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of their distinctive capital structure and debt-to-equity ratios, they are also more difficult to forecast. The construction industry operates with high leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project, and it is strongly influenced by the economic cycle, so downturns tend to rapidly increase construction bankruptcy rates. High leverage, coupled with rising bankruptcy rates, places a heavy burden on the banks lending to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy forecasting models based on corporate financial statements have been studied for many years, but their subjects were firms in general, so they may not be appropriate for forecasting bankruptcies of companies with disproportionately large liquidity risks, such as construction companies. The construction industry is capital-intensive, invests in large-scale projects with long timelines, and has comparatively longer payback periods than other industries; given this unique capital structure, criteria used to judge the financial risk of companies in general are difficult to apply to construction firms. The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It estimates the likelihood of bankruptcy with a simple formula that classifies a company's status as dangerous, moderate, or safe: a company in the "dangerous" category has a high likelihood of bankruptcy within two years, one in the "safe" category has a low likelihood, and for the "moderate" category the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made their risk hard to assess with the Z-score. With the development of machine learning, recent studies of corporate bankruptcy forecasting have applied it: pattern recognition, a representative application area of machine learning, analyzes patterns in a company's financial information and judges whether the pattern belongs to the bankruptcy-risk group or the safe group. The machine learning models previously used in bankruptcy forecasting include artificial neural networks, adaptive boosting (AdaBoost), and the support vector machine (SVM), along with many hybrid studies combining these models. Existing studies, whether using the traditional Z-score technique or machine learning, focus on companies in non-specific industries, so industry-specific characteristics are not considered. In this paper, we confirm that AdaBoost is the most appropriate forecasting model for construction companies when company size is taken into account. We classified construction companies into three groups, large, medium, and small, based on capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive power than the other models, especially for the group of large companies with capital of more than 50 billion won.
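A minimal sketch of this group-wise evaluation, under assumed file, feature, and threshold names (the paper only specifies the 50-billion-won boundary for large firms):

```python
# Sketch: split construction firms by capital into small/medium/large and
# score AdaBoost per group with cross-validated accuracy.
import numpy as np
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

firms = pd.read_csv("construction_firms.csv")  # hypothetical: financial ratios + "capital" + "bankrupt"
features = [c for c in firms.columns if c not in ("capital", "bankrupt")]

# Thresholds in KRW are illustrative assumptions except the 50-billion-won cut.
firms["size"] = pd.cut(firms["capital"],
                       bins=[0, 10e9, 50e9, np.inf],
                       labels=["small", "medium", "large"])

for size, group in firms.groupby("size", observed=True):
    acc = cross_val_score(AdaBoostClassifier(n_estimators=100),
                          group[features], group["bankrupt"], cv=5).mean()
    print(size, f"accuracy={acc:.3f}")
```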

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
• The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is nonetheless short: bad debt began to increase again after the 2009 global financial crisis amid the recession in the real economy, and NPLs became a major investment only in recent years, when domestic capital market investors began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce because the history of capital market investment in the domestic market is short. In addition, declining profitability and price fluctuations driven by swings in the real estate business call for decision-making based on more scientific and systematic analysis. In this study, we propose a prediction model that can determine whether the benchmark yield will be achieved, using NPL market data in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017, comprising 2,291 asset items in total. As independent variables, from the 11 variables describing the characteristics of the real estate, only those related to the dependent variable were selected, using one-to-one t-tests, stepwise logistic regression, and decision trees. Seven independent variables were selected: purchase year, SPC (special purpose company), municipality, appraisal value, purchase cost, OPB (outstanding principal balance), and HP (holding period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This choice was made because models predicting binary variables are more accurate than models predicting continuous variables, and this accuracy is directly related to the model's usefulness; moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is enough to make the decision. For the dependent variable, we constructed and compared predictive models while adjusting the cutoff value to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. The average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built and compared prediction models using five methodologies: discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model. Ten sets of training and testing data were extracted using the 10-fold validation method; after building the models on these data, the hit ratio of each set was averaged and performance was compared. The average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively, confirming that the artificial neural network model is the best. This study thus shows that the 7 independent variables and an artificial neural network prediction model can be used effectively in the NPL market. The proposed model predicts in advance whether a new asset will achieve the 12% benchmark return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
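An illustrative sketch of this five-model comparison, scored by average hit ratio (accuracy) over 10 folds; the file and column names are assumptions, and the genetic algorithm linear model is omitted because it has no direct scikit-learn counterpart:

```python
# Sketch: compare classifiers on the 7 selected NPL variables by 10-fold hit ratio.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

npl = pd.read_csv("npl_assets.csv")  # hypothetical export of the 2,291 asset items
X = npl[["purchase_year", "spc", "municipality", "appraisal_value",
         "purchase_cost", "opb", "holding_period"]]
y = npl["hit_12pct_benchmark"]       # 1 if the 12% benchmark return was achieved

models = {
    "discriminant": LinearDiscriminantAnalysis(),
    "logistic": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(),
    "neural_net": MLPClassifier(max_iter=1000),
}
for name, model in models.items():
    print(name, f"hit ratio={cross_val_score(model, X, y, cv=10).mean():.4f}")
```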

A Study on Searching for Export Candidate Countries of the Korean Food and Beverage Industry Using Node2vec Graph Embedding and Light GBM Link Prediction (Node2vec 그래프 임베딩과 Light GBM 링크 예측을 활용한 식음료 산업의 수출 후보국가 탐색 연구)

  • Lee, Jae-Seong;Jun, Seung-Pyo;Seo, Jinny
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.4
    • /
    • pp.73-95
    • /
    • 2021
• This study uses the Node2vec graph embedding method and LightGBM link prediction to explore untapped export candidate countries for Korea's food and beverage industry. Node2vec improves on the representation of structural equivalence in a network, which is known to be relatively weak in existing link prediction methods based on the number of common neighbors, and it is known to perform well at capturing both community structure and structural equivalence. The embedding vectors are learned from fixed-length random walks starting at designated nodes, so the resulting node sequences are easy to feed as input to downstream models such as logistic regression, support vector machines, and random forests. Based on these features of the Node2vec graph embedding method, this study applies it to international trade information for the Korean food and beverage industry, aiming to contribute to extensive-margin diversification for Korea within the industry's global value chain relationships. The optimal predictive model derived in this study recorded a precision of 0.95, a recall of 0.79, and an F1 score of 0.86, showing excellent performance, superior to the logistic regression binary classifier set as the baseline model, which recorded a precision of 0.95, a recall of 0.73, and an F1 score of 0.83. In addition, the LightGBM-based optimal prediction model outperformed the link prediction model of a previous study, set as the benchmarking model here: the previous model recorded a recall of only 0.75, while the model proposed in this study achieved a recall of 0.79. The performance difference between the benchmarking model and this study's model stems from the model learning strategy: trades were grouped by trade value, and the prediction model was trained differently for each group. The specific methods were (1) randomly masking some trades and training the model on all trades without any condition on trade value, (2) randomly masking some of the trades with an above-average trade value and training the model, and (3) randomly masking some of the trades in the top 25% by trade value and training the model. The experiments confirmed that the model trained by randomly masking some of the above-average-value trades performed best and most stably, and additional investigation found that most of the potential export candidates for Korea derived through this model appear appropriate. Taken together, this study demonstrates the practical utility of link prediction combining Node2vec and LightGBM, and derives useful implications for weight-update strategies that can improve link prediction during model training. The study also has policy utility because it applies graph embedding based link prediction to trade transactions, which has rarely been done. The results support a rapid response to changes in the global value chain, such as the recent US-China trade conflict or Japan's export regulations, and we believe the approach is sufficiently useful as a tool for policy decision-making.
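A minimal sketch of this pipeline, assuming the open-source node2vec and lightgbm packages and a random stand-in graph; the trade-value masking strategy described above is omitted for brevity:

```python
# Sketch: embed a graph with Node2vec, build edge features as the Hadamard
# product of endpoint vectors, and classify links with LightGBM.
# Requires: pip install networkx node2vec lightgbm
import networkx as nx
import numpy as np
from lightgbm import LGBMClassifier
from node2vec import Node2Vec
from node2vec.edges import HadamardEmbedder

G = nx.fast_gnp_random_graph(200, 0.05, seed=0)  # stand-in for the trade network

emb = Node2Vec(G, dimensions=64, walk_length=30, num_walks=50).fit(window=10)
edge_vecs = HadamardEmbedder(keyed_vectors=emb.wv)

pos = [edge_vecs[(str(u), str(v))] for u, v in G.edges()]                    # observed links
neg = [edge_vecs[(str(u), str(v))] for u, v in nx.non_edges(G)][:len(pos)]  # sampled non-links

X = np.vstack(pos + neg)
y = [1] * len(pos) + [0] * len(neg)
clf = LGBMClassifier().fit(X, y)  # scores candidate (exporter, importer) links
```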

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.1-23
    • /
    • 2014
• Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting helps determine the timing of new product introduction and product design, and supports production plans and marketing strategies, enabling a more efficient decision-making process; it also enables governments to organize the national budget efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data; categorize products showing similar growth patterns; understand the markets in the industry; and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm, as follows. At the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption for a specific product or service, are collected from the relevant government ministries, the National Statistical Office, and other government organizations; data that cannot be analyzed due to a lack of history or altered code names require pre-processing. At the second stage, an optimal model for market forecasting is selected, which can vary with the characteristics of each industry. As this study focuses on the ICT industry, where new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected; a hybrid model that combines different models can also be considered. The hybrid model considered here estimates the size of the market potential through the Logistic and Gompertz models, and those figures are then used in the Bass model. The third stage evaluates which model explains the data most accurately: parameters are estimated from the collected time series to generate each model's predictions, the root-mean-square error (RMSE) is calculated, and the model with the lowest average RMSE across product types is taken as the best model. At the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm. A self-organizing map is trained with the market pattern parameters of all products or services as input data, organizing the products or services onto an N×N map. The number of clusters increases from 2 to M depending on the characteristics of the nodes on the map; the clusters are divided into zones, and the clusters providing the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between nodes are drawn and the market growth pattern map is completed. The last step determines the final characteristics of the clusters and their market growth curves: the average of the market growth pattern parameters in each cluster is taken as a representative figure, a growth curve is drawn for each cluster from that figure, and the clusters' characteristics are analyzed, which, together with the product types in each cluster, allows their characteristics to be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
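A minimal sketch of stages two and three, assuming standard textbook forms of the three growth models and a synthetic sales series; in the paper, the fitted parameters of many products would then feed a self-organizing map (e.g., the MiniSom package) to build the N×N pattern map:

```python
# Sketch: fit Logistic, Gompertz, and Bass models to one series; lowest RMSE wins.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, a, b):
    return m / (1 + np.exp(-b * (t - a)))

def gompertz(t, m, a, b):
    return m * np.exp(-a * np.exp(-b * t))

def bass(t, m, p, q):          # cumulative Bass adoption
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(1.0, 21.0)
sales = logistic(t, 100, 10, 0.6) + np.random.default_rng(0).normal(0, 2, t.size)

candidates = {
    "logistic": (logistic, [sales.max(), t.mean(), 0.5]),
    "gompertz": (gompertz, [sales.max(), 5.0, 0.3]),
    "bass":     (bass,     [sales.max(), 0.03, 0.4]),
}
rmse = {}
for name, (f, p0) in candidates.items():
    params, _ = curve_fit(f, t, sales, p0=p0, bounds=([1e-6] * 3, [np.inf] * 3),
                          maxfev=20000)
    rmse[name] = np.sqrt(np.mean((f(t, *params) - sales) ** 2))
print(min(rmse, key=rmse.get), rmse)  # model with the lowest RMSE is selected
```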

EEPERF(Experiential Education PERFormance): An Instrument for Measuring Service Quality in Experiential Education (체험형 교육 서비스 품질 측정 항목에 관한 연구: 창의적 체험활동을 중심으로)

  • Park, Ky-Yoon;Kim, Hyun-Sik
    • Journal of Distribution Science
    • /
    • v.10 no.2
    • /
    • pp.43-52
    • /
    • 2012
• As experiential education services grow, the need for proper management increases. Considering that adequate measures are essential for achieving success in managing anything, it is important for managers to use a proper system of metrics to measure the performance of experiential education services. In spite of this need, however, little research has been done to develop a valid and reliable set of metrics for assessing the quality of experiential education services. The current study aims to develop a multi-item instrument for assessing the service quality of experiential education. The procedure was as follows. First, we generated a pool of possible metrics based on the diverse literature on service quality, eliciting possible metric items not only from general service quality metrics such as SERVQUAL and SERVPERF but also from educational service quality metrics such as HEdPERF and PESPERF. Second, specialist teachers in the experiential education area screened the initial metrics to boost face validity. Third, we carried out multiple rounds of empirical validation and, based on this process, refined the metrics to determine the final set. Fourth, we examined predictive validity by checking the well-established positive relationship between each dimension of the metrics and customer satisfaction. In sum, starting with the initial pool of scale items elicited from the previous literature and purifying them empirically through surveys, we developed a four-dimensional systemized scale to measure the superiority of experiential education, named "Experiential Education PERFormance" (EEPERF). Our findings indicate that students (consumers) perceive the superiority of the experiential education (EE) service along four dimensions: EE-empathy, EE-reliability, EE-outcome, and EE-landscape. EE-empathy is a judgment in response to the question, "How empathetically does the experiential educational service provider interact with me?" Principal measures are "How well does the service provider understand my needs?" and "How well does the service provider listen to my voice?" Next, EE-reliability is a judgment in response to the question, "How reliably does the experiential educational service provider interact with me?" Major measures are "How reliable is the schedule here?" and "How credible is the service provider?" EE-outcome is a judgment in response to the question, "What results could I get from this experiential educational service encounter?" Representative measures are "How good is the information that I will acquire from this service encounter?" and "How useful is this service encounter in helping me develop creativity?" Finally, EE-landscape is a judgment about the physical environment. Essential measures are "How convenient is the access to the service encounter?" and "How well managed are the facilities?" We showed the reliability and validity of this system of metrics, and all four dimensions influence customer satisfaction significantly. Practitioners may use these results in planning experiential educational service programs and evaluating each service encounter. The current study is expected to act as a stepping-stone for future scale improvement, for which researchers may use the experience quality paradigm that has recently arisen.
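As a small illustration of the kind of reliability check used when purifying such a scale, here is a sketch of Cronbach's alpha for one dimension's items; the responses below are synthetic, and the grouping of survey items by EEPERF dimension is an assumption:

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix for one scale dimension."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))                      # latent EE-empathy level per respondent
responses = trait + rng.normal(0, 0.7, size=(200, 3))  # three correlated survey items
print(round(cronbach_alpha(responses), 2))             # values above ~0.7 are conventionally acceptable
```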


Radiation-induced Pulmonary Toxicity following Adjuvant Radiotherapy for Breast Cancer (유방암 환자에서 보조적 방사선치료 후의 폐 손상)

  • Moon, Sung-Ho;Kim, Tae-Jung;Eom, Keun-Young;Kim, Jee-Hyun;Kim, Sung-Won;Kim, Jae-Sung;Kim, In-Ah
    • Radiation Oncology Journal
    • /
    • v.25 no.2
    • /
    • pp.109-117
    • /
    • 2007
• Purpose: To evaluate the incidence of and potential predictive factors for symptomatic radiation pneumonitis (SRP) and radiographic pulmonary toxicity (RPT) following adjuvant radiotherapy (RT) for patients with breast cancer, with a particular focus on correlating RPT with dose-volume histogram (DVH) parameters based on three-dimensional RT planning (3D-RTP) data. Materials and Methods: From September 2003 through February 2006, 171 patients with breast cancer were treated with adjuvant RT following breast surgery. A radiation dose of 50.4 Gy was delivered with tangential photon fields to the whole breast or chest wall, and a single anterior oblique photon field for the supraclavicular (SCL) nodes was added if indicated. Serial follow-up chest radiographs were reviewed by a chest radiologist. Radiation Therapy Oncology Group (RTOG) toxicity criteria were used for grading SRP, and a modified World Health Organization (WHO) grading system was used to evaluate RPT. The percentages of the ipsilateral lung volume that received ≥15 Gy (V15), ≥20 Gy (V20), and ≥30 Gy (V30) and the mean lung dose (MLD) were calculated. We divided the ipsilateral lung into two territories and defined separate DVH parameters, i.e., V15(TNGT), V20(TNGT), V30(TNGT), MLD(TNGT) and V15(SCL), V20(SCL), V30(SCL), MLD(SCL), to assess the relationship between these parameters and RPT. Results: Four patients (2.1%) developed SRP (three with grade 3 and one with grade 2). There was no significant association of SRP with clinical parameters such as age, pre-existing lung disease, smoking, chemotherapy, hormonal therapy, or regional RT. Among the 137 patients treated with 3D-RTP, 13.9% developed RPT in the tangent (TNGT) territory, and 49.2% of the 59 patients who received regional RT developed RPT in the SCL territory. Regional RT (p<0.001) and age (p=0.039) were significantly correlated with RPT. All DVH parameters except V15(TNGT) showed a significant correlation with RPT (p<0.05); MLD(TNGT) was a better predictor of RPT for the TNGT territory, whereas V15(SCL) predicted RPT well for the SCL territory. Conclusion: The incidence of SRP was acceptable with the RT technique used. Age and regional RT were significant factors for predicting RPT. The DVH parameters were good predictors of RPT for the SCL territory, while MLD(TNGT) was a better predictor of RPT for the TNGT territory.
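For readers unfamiliar with these parameters, here is a minimal sketch of how Vx and MLD are computed from per-voxel lung doses; the dose array below is synthetic, whereas real values come from the 3D-RTP dose grid restricted to the (ipsilateral) lung contour:

```python
# Sketch: Vx = % of lung volume receiving at least x Gy; MLD = mean lung dose.
import numpy as np

rng = np.random.default_rng(0)
lung_dose_gy = rng.gamma(shape=2.0, scale=5.0, size=100_000)  # synthetic per-voxel doses

def v_x(dose_gy: np.ndarray, threshold_gy: float) -> float:
    return 100.0 * float(np.mean(dose_gy >= threshold_gy))    # % volume at or above threshold

print({f"V{x}": round(v_x(lung_dose_gy, x), 1) for x in (15, 20, 30)})
print("MLD:", round(float(lung_dose_gy.mean()), 1), "Gy")
```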

Comparison of Dose Distributions Calculated by Anisotropic Analytical Algorithm and Pencil Beam Convolution Algorithm at Tumors Located in Liver Dome Site (간원개에 위치한 종양에 대한 Anisotropic Analytical Algorithm과 Pencil Beam Convolution 알고리즘에 따른 전달선량 비교)

  • Park, Byung-Do;Jung, Sang-Hoon;Park, Sung-Ho;Kwak, Jeong-Won;Kim, Jong-Hoon;Yoon, Sang-Min;Ahn, Seung-Do
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.106-113
    • /
    • 2012
• The purpose of this study is to evaluate the variation in calculated dose distributions for liver tumors located in the liver dome and for the organs of interest (normal liver, kidney, stomach) between the pencil beam convolution (PBC) algorithm and the anisotropic analytical algorithm (AAA) of the Varian Eclipse treatment planning system. The target volumes of 20 liver cancer patients were used to create treatment plans: plans for 10 patients were performed as stereotactic body radiation therapy (SBRT) and the others as 3-dimensional conformal radiation therapy (3DCRT). For all 20 patients, doses first calculated with the PBC algorithm were recalculated with the AAA. Plans were optimized with the PBC algorithm so that the prescription isodose covered 100% of the PTV, and were then recalculated with the AAA, retaining identical beam arrangements, monitor units, field weighting, and collimator conditions. In this study, the dose difference for the total PTV between the two algorithms was statistically significant (SBRT: p=0.018, 3DCRT: p=0.006), and for the PTV and ITV in the liver dome, the 3DCRT plans showed statistically significant differences (p=0.013 and p=0.024, respectively). Differences for the normal liver and kidney were also statistically significant (p=0.009 and p=0.037). As a predictive index of dose variation, the CVF ratio was significantly correlated with the proportion of the PTV in the liver dome relative to the whole PTV (SBRT r=0.684, 3DCRT r=0.732, p<0.01), and the CVF ratio was also significantly correlated with tumor size (SBRT r=-0.193, p=0.017; 3DCRT r=0.237, p=0.023).