• Title/Summary/Keyword: Gradient-based optimization

Rapid Optimization of Multiple Isocenters Using Computer Search for Linear Accelerator-based Stereotactic Radiosurgery (Multiple isocenter를 이용한 뇌정위적 방사선 수술시 컴퓨터 자동 추적 방법에 의한 고속의 선량 최적화)

  • Suh Tae-suk;Park Charn Il;Ha Sung Whan;Yoon Sei Chul;Kim Moon Chan;Bahk Yong Whee;Shinn Kyung Sub
    • Radiation Oncology Journal / v.12 no.1 / pp.109-115 / 1994
  • The purpose of this paper is to develop an efficient method for quickly determining multiple-isocenter plans that provide an optimal dose distribution in stereotactic radiosurgery. A spherical dose model was developed by fitting to exact dose data calculated in an 18 cm diameter spherical head phantom. It computes dose quickly for each spherical part and is useful for estimating the dose distribution of multiple isocenters. An automatic computer search algorithm was developed using the relationship between isocenter movement and the change of dose shape, and was combined with the spherical dose model to determine isocenter separations and collimator sizes quickly and automatically. The spherical dose model shows isodose distributions comparable to exact dose data and permits rapid calculation of 3-D isodoses. The computer search can provide reasonable isocenter settings more quickly than trial-and-error planning, while producing a steep dose gradient around the target boundary. The spherical dose model can thus be used for quick determination of multiple-isocenter plans together with the automatic computer search. Our guideline is useful for determining initial multiple-isocenter plans.
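The abstract's pairing of a fast analytic dose model with an automatic isocenter search can be illustrated with a toy sketch. The Python fragment below is not the paper's algorithm: the dose profile, the greedy centroid placement, the 50% coverage criterion, and all names are illustrative assumptions.

```python
# Toy sketch (assumed, not the paper's method): a spherical dose kernel
# plus a greedy search that places isocenters and picks collimator sizes
# to cover a target point cloud.
import numpy as np

def spherical_dose(points, center, radius):
    """Toy spherical dose model: full dose inside `radius`, linear
    fall-off over 20% of the radius outside (stand-in for a fitted profile)."""
    r = np.linalg.norm(points - center, axis=1)
    return np.clip(1.0 - np.maximum(r - radius, 0.0) / (0.2 * radius), 0.0, 1.0)

def greedy_isocenter_search(target, collimator_radii, max_isocenters):
    """Repeatedly place an isocenter at the centroid of still-uncovered
    target points and choose the collimator covering the most of them."""
    covered = np.zeros(len(target), dtype=bool)
    plan = []
    for _ in range(max_isocenters):
        if covered.all():
            break
        center = target[~covered].mean(axis=0)
        best = max(collimator_radii,
                   key=lambda rad: (spherical_dose(target, center, rad) >= 0.5).sum())
        covered |= spherical_dose(target, center, best) >= 0.5
        plan.append((center.round(2), best))
    return plan

# Example: cover an elongated target with up to three isocenters.
rng = np.random.default_rng(0)
target = rng.normal(size=(500, 3)) * np.array([2.0, 1.0, 1.0])
print(greedy_isocenter_search(target, collimator_radii=(0.5, 1.0, 1.5), max_isocenters=3))
```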

Tensile Force Estimation of Externally Prestressed Tendon Using SI technique Based on Differential Evolutionary Algorithm (차분 진화 알고리즘 기반의 SI기법을 이용한 외부 긴장된 텐던의 장력추정)

  • Noh, Myung-Hyun;Jang, Han-Taek;Lee, Sang-Youl;Park, Taehyo
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.1A / pp.9-18 / 2009
  • This paper introduces the application of the DE (Differential Evolutionary) method to the estimation of the tensile force of an externally prestressed tendon. The proposed technique, an SI (System Identification) method using the DE algorithm, enables a global solution search, as opposed to classical gradient-based optimization techniques. Numerical tests show that the proposed technique is a useful method that can detect the effective nominal diameters as well as estimate the exact tensile forces of the externally prestressed tendon with an estimation error of less than 1%, even without a priori information about the identification variables. In addition, the validity of the proposed technique is proved experimentally using a scale-down model test under serviceability-state conditions, with and without loss of the prestressing force. The test results prove that the technique is a feasible and effective method that can not only estimate the exact tensile forces and detect the effective nominal diameters but also inspect the damping properties of the test model irrespective of the loss of prestressing force. The 2% error in the estimated effective nominal diameter is due to the difference between the real tendon diameter, with a wired section, and the FE model diameter, with a full section. Finally, the accuracy and superiority of the proposed technique using the DE algorithm are verified through a comparative study with existing theories.
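As a hedged illustration of DE-based system identification, the sketch below recovers a tendon's tensile force from synthetic natural frequencies using the taut-string relation f_n = (n/2L)·sqrt(T/m) and SciPy's differential_evolution. The paper identifies different variables against an FE model; the length, bounds, and taut-string model here are assumptions.

```python
# Hedged sketch of DE-based SI: recover tensile force T and effective mass
# per unit length m from "measured" natural frequencies of a taut string.
# All numbers are synthetic; the paper's FE model is not reproduced here.
import numpy as np
from scipy.optimize import differential_evolution

L = 10.0                      # assumed tendon length [m]
modes = np.arange(1, 6)       # first five vibration modes

def model_freqs(T, m):
    """Taut-string natural frequencies f_n = (n / 2L) * sqrt(T / m)."""
    return modes / (2.0 * L) * np.sqrt(T / m)

true_T, true_m = 500e3, 50.0          # synthetic ground truth
measured = model_freqs(true_T, true_m)

def objective(x):
    """Squared mismatch between model and measured frequencies."""
    T, m = x
    return np.sum((model_freqs(T, m) - measured) ** 2)

result = differential_evolution(objective,
                                bounds=[(1e5, 1e6), (10.0, 100.0)],
                                seed=1, tol=1e-10)
T_hat, m_hat = result.x
print(f"estimated T = {T_hat:.0f} N (true {true_T:.0f} N)")
```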

Improving Generalization Performance of Neural Networks using Natural Pruning and Bayesian Selection (자연 프루닝과 베이시안 선택에 의한 신경회로망 일반화 성능 향상)

  • 이현진;박혜영;이일병
    • Journal of KIISE:Software and Applications / v.30 no.3_4 / pp.326-338 / 2003
  • The objective of neural network design and model selection is to construct an optimal network with good generalization performance. However, training data include noise, and the number of training samples is not sufficient, which results in a difference between the true probability distribution and the empirical one. This difference makes the learning parameters over-fit to the training data and deviate from the true distribution of the data, which is called the overfitting phenomenon. An overfitted neural network shows good approximations for the training data but gives bad predictions on untrained new data. As the complexity of the neural network increases, this overfitting phenomenon becomes more severe. In this paper, taking a statistical viewpoint, we propose an integrative process for neural network design and model selection in order to improve generalization performance. First, by using natural gradient learning with adaptive regularization, we try to obtain optimal parameters that are not overfitted to the training data, with fast convergence. By applying natural pruning to the obtained optimal parameters, we generate several candidate network models of different sizes. Finally, we select an optimal model among the candidates based on the Bayesian Information Criterion. Through computer simulations on benchmark problems, we confirm the generalization and structure optimization performance of the proposed integrative process of learning and model selection.
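The final selection step can be sketched as follows. This minimal example, under the assumption of Gaussian residuals (BIC = n·log(MSE) + k·log(n)), ranks candidate networks of different widths by BIC; the candidates here are plain scikit-learn MLPs rather than networks produced by natural gradient learning and natural pruning as in the paper.

```python
# Minimal sketch of BIC-based model selection among candidate networks.
# The candidate-generation step (natural pruning) is replaced here by
# simply training MLPs of different widths; this is an assumption.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

def bic(model, X, y):
    """Gaussian-noise BIC: n*log(MSE) + k*log(n), with k = # of weights."""
    n = len(y)
    resid = y - model.predict(X)
    k = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

candidates = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=5000,
                           random_state=0).fit(X, y)
              for h in (2, 5, 20, 50)]
best = min(candidates, key=lambda m: bic(m, X, y))
print("selected hidden width:", best.hidden_layer_sizes)
```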

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but is also very fast to train compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model, thereby narrowing the gap between theory and practice. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019, composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improvement of portfolio performance by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There have been various studies on parameter estimation methods to reduce estimation errors in portfolio optimization; we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
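A hedged sketch of the proposed pipeline follows: predict each asset's next-period volatility with XGBoost, rebuild the covariance matrix from the predicted volatilities and the sample correlation, and solve for equal-risk-contribution weights. The features, window sizes, and synthetic returns are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: XGBoost volatility prediction plugged into risk parity.
# Synthetic returns and trailing-vol features stand in for the paper's data.
import numpy as np
import xgboost as xgb
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(1000, 4))        # stand-in daily returns

# Features: trailing realized vols; target: next-window realized vol.
window = 20
vols = np.array([returns[t - window:t].std(axis=0)
                 for t in range(window, len(returns) - window)])
target = np.array([returns[t:t + window].std(axis=0)
                   for t in range(window, len(returns) - window)])

models = [xgb.XGBRegressor(n_estimators=200, max_depth=3).fit(vols, target[:, i])
          for i in range(returns.shape[1])]
pred_vol = np.array([m.predict(vols[-1:])[0] for m in models])

# Covariance from sample correlation rescaled by predicted volatilities.
corr = np.corrcoef(returns, rowvar=False)
cov = corr * np.outer(pred_vol, pred_vol)

def risk_parity(cov):
    """Equal-risk-contribution weights via numerical optimization."""
    n = cov.shape[0]
    def gap(w):
        rc = w * (cov @ w)                           # risk contributions
        return np.sum((rc - rc.mean()) ** 2)
    res = minimize(gap, np.full(n, 1.0 / n),
                   bounds=[(0, 1)] * n,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
    return res.x

print(risk_parity(cov))
```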

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion in trials and parole screening in the 1990s, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors also began in Korea. Although most recidivism prediction studies have so far focused on the factors of recidivism or the accuracy of recidivism prediction, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying people who will not recidivate as likely to recidivate is lower than the cost of misclassifying people who will recidivate as unlikely to: the former increases only additional monitoring costs, while the latter increases social and economic costs substantially. Therefore, in this paper we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, is applied, and its results are compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold is optimized to minimize the total misclassification cost, the weighted average of the FNE (False Negative Error) and FPE (False Positive Error). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
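The threshold-optimization step can be sketched as below: after fitting an XGBoost classifier, sweep the decision threshold to minimize the total misclassification cost under asymmetric costs. The 5:1 false-negative-to-false-positive cost ratio and the synthetic dataset are assumptions; the paper's actual cost weights are not reproduced here.

```python
# Hedged sketch: cost-sensitive threshold selection after XGBoost fitting.
# Cost ratio and data are synthetic assumptions, not the paper's values.
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

COST_FN, COST_FP = 5.0, 1.0     # assumed asymmetric error costs

def total_cost(threshold):
    """Weighted misclassification cost at a given decision threshold."""
    pred = proba >= threshold
    fn = np.sum((pred == 0) & (y_te == 1))   # missed recidivists
    fp = np.sum((pred == 1) & (y_te == 0))   # false alarms
    return COST_FN * fn + COST_FP * fp

thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"cost-minimizing threshold: {best:.2f}, cost: {total_cost(best):.0f}")
```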