A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)
Journal of Intelligence and Information Systems, v.26 no.1, pp.135-149, 2020
Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are being actively developed, compensating for the weaknesses of traditional asset allocation methods and taking over the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a representative risk-based asset allocation model that focuses on the volatility of assets. Because it avoids investment risk structurally, it is stable in the management of large funds and has been widely used in the financial field.

XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also trains much faster than traditional boosting methods, so it is frequently used across many fields of data analysis.

In this study, we propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of each asset and applies the predicted risk to the covariance estimation step. Because an optimized asset allocation model estimates investment weights from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors degrade the optimized portfolio's performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. In doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.

For the empirical test of the suggested model, we used Korean stock market price data covering 17 years, from 2003 to 2019. The data sets consist of the energy, finance, IT, industrial, materials, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method of 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and we analyzed portfolio performance in terms of cumulative rate of return; the long test period also yields a large sample of results.

Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment.
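The abstract describes risk parity only at a high level. As a concrete reference point, the following is a minimal Python sketch of an equal-risk-contribution solver, the standard formulation of risk parity; the function name, the SLSQP solver, and the long-only constraint are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def risk_parity_weights(cov):
    """Solve for long-only weights whose risk contributions are equal.

    cov : (n, n) covariance matrix of asset returns.
    """
    n = cov.shape[0]

    def objective(w):
        port_vol = np.sqrt(w @ cov @ w)
        # Risk contribution of asset i: w_i * (Sigma w)_i / portfolio volatility.
        rc = w * (cov @ w) / port_vol
        # Penalize pairwise deviations from equal risk contributions.
        return np.sum((rc[:, None] - rc[None, :]) ** 2)

    res = minimize(
        objective,
        x0=np.full(n, 1.0 / n),
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
    )
    return res.x

# Toy check: with uncorrelated assets, weights are proportional to 1/volatility.
cov = np.diag([0.04, 0.09, 0.16])    # volatilities 0.2, 0.3, 0.4
print(risk_parity_weights(cov))      # approximately [0.46, 0.31, 0.23]
```

Low-volatility assets receive larger weights, which is the structural risk avoidance the abstract refers to.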
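The core of the proposed model is using XGBoost to predict each asset's next-period volatility and feeding the prediction into the covariance estimate. The sketch below shows one plausible way to wire this together; the realized-volatility features, the per-asset regressors, and the recombination of predicted volatilities with the historical correlation matrix are all assumptions for illustration, since the abstract does not specify the authors' implementation.

```python
import numpy as np
import xgboost as xgb

def predicted_covariance(returns, horizon=20, lookback=1000):
    """Replace historical volatilities with XGBoost one-step-ahead predictions.

    returns : (T, n) array of daily asset returns, T >= lookback.
    Trains one regressor per asset on rolling realized-volatility features
    (an illustrative choice; the paper's exact features are not given).
    """
    window = returns[-lookback:]
    n = window.shape[1]
    corr = np.corrcoef(window, rowvar=False)          # historical correlations
    pred_vol = np.empty(n)

    for i in range(n):
        r = window[:, i]
        spans, X, y = (5, 10, 20, 60), [], []
        for t in range(60, len(r) - horizon):
            X.append([r[t - s:t].std() for s in spans])   # recent realized vols
            y.append(r[t:t + horizon].std())              # next-period volatility
        model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
        model.fit(np.array(X), np.array(y))
        latest = np.array([[r[-s:].std() for s in spans]])
        pred_vol[i] = model.predict(latest)[0]

    # Covariance = D * corr * D, with D holding the predicted volatilities.
    return corr * np.outer(pred_vol, pred_vol)
```

The key design idea matching the abstract is that only the volatility diagonal is replaced by machine-learning forecasts, so the risk parity machinery itself is unchanged.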
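The back-testing design (a moving window of 1,000 in-sample and 20 out-of-sample observations) maps onto a simple rolling loop. The sketch below, which reuses the two functions above, is again an assumed reconstruction rather than the authors' code.

```python
def moving_window_backtest(returns, in_sample=1000, out_sample=20):
    """Roll a 1,000-day estimation window forward in 20-day steps.

    At each rebalancing date, estimate the covariance on the in-sample
    window, solve for risk parity weights, and hold them for the next
    20 trading days. Returns the cumulative return and rebalance count.
    """
    cumulative = 1.0
    n_rebalances = 0
    for start in range(0, len(returns) - in_sample - out_sample + 1, out_sample):
        window = returns[start:start + in_sample]
        future = returns[start + in_sample:start + in_sample + out_sample]
        w = risk_parity_weights(predicted_covariance(window))
        # Compound the out-of-sample portfolio returns for this window.
        cumulative *= np.prod(1.0 + future @ w)
        n_rebalances += 1
    return cumulative - 1.0, n_rebalances
```

With roughly 17 years of daily data, this loop produces on the order of the 154 rebalancings reported in the abstract.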
Many financial and asset allocation models are limited in practical investment by the most fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only retains the advantages of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risks of assets with a state-of-the-art algorithm. Various studies have examined parametric estimation methods for reducing estimation errors in portfolio optimization; we likewise suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.