• Title/Summary/Keyword: Memory management

A Folksonomy Ranking Framework: A Semantic Graph-based Approach (폭소노미 사이트를 위한 랭킹 프레임워크 설계: 시맨틱 그래프기반 접근)

  • Park, Hyun-Jung;Rho, Sang-Kyu
    • Asia pacific journal of information systems / v.21 no.2 / pp.89-116 / 2011
  • In collaborative tagging systems such as Delicious.com and Flickr.com, users assign keywords or tags to their uploaded resources, such as bookmarks and pictures, for future use or sharing. The collection of resources and tags generated by a user is called a personomy, and the collection of all personomies constitutes the folksonomy. The most significant need of folksonomy users is to efficiently find useful resources or experts on specific topics. An excellent ranking algorithm would assign higher rankings to more useful resources or experts. What resources are considered useful in a folksonomic system? Does a standard superior to frequency or freshness exist? A resource recommended by more users with more expertise should be worthy of attention. This ranking paradigm can be implemented through a graph-based ranking algorithm. Two well-known representatives of such a paradigm are PageRank by Google and HITS (Hypertext Induced Topic Selection) by Kleinberg. Both PageRank and HITS assign a higher evaluation score to pages linked to by more higher-scored pages. HITS differs from PageRank in that it utilizes two kinds of scores: authority and hub scores. The ranking objects of these algorithms are limited to Web pages, whereas the ranking objects of a folksonomic system are heterogeneous (i.e., users, resources, and tags). Therefore, uniformly applying the voting notion of PageRank and HITS to the links of a folksonomy would be unreasonable. In a folksonomic system, each link corresponding to a property can have an opposite direction, depending on whether the property is in an active or a passive voice. The current research stems from the idea that a graph-based ranking algorithm could be applied to the folksonomic system using the concept of mutual interactions between entities, rather than the voting notion of PageRank or HITS. The concept of mutual interactions, proposed for ranking Semantic Web resources, enables the calculation of importance scores of various resources unaffected by link directions. The weights of a property representing the mutual interaction between classes are assigned depending on the relative significance of the property to the resource importance of each class. This class-oriented approach is based on the fact that, in the Semantic Web, there are many heterogeneous classes; thus, applying a different appraisal standard for each class is more reasonable. This is similar to the evaluation method of humans, where different items are assigned specific weights, which are then summed up to determine the weighted average. We can check for missing properties more easily with this approach than with other predicate-oriented approaches. A user of a tagging system usually assigns more than one tag to the same resource, and there can be more than one tag with the same subjectivity and objectivity. In the case that many users assign similar tags to the same resource, grading the users differently depending on the assignment order becomes necessary. This idea comes from studies in psychology wherein expertise involves the ability to select the most relevant information for achieving a goal. An expert should be someone who not only has a large collection of documents annotated with a particular tag, but also tends to add documents of high quality to his/her collections. Such documents are identified by the number, as well as the expertise, of users who have the same documents in their collections.
In other words, there is a relationship of mutual reinforcement between the expertise of a user and the quality of a document. In addition, there is a need to rank entities related more closely to a certain entity. Considering the property of social media that the popularity of a topic is temporary, recent data should have more weight than old data. We propose a comprehensive folksonomy ranking framework in which all these considerations are dealt with and that can be easily customized to each folksonomy site for ranking purposes. To examine the validity of our ranking algorithm and show the mechanism of adjusting property, time, and expertise weights, we first use a dataset designed for analyzing the effect of each ranking factor independently. We then show the ranking results of a real folksonomy site, with the ranking factors combined. Because the ground truth of a given dataset is not known when it comes to ranking, we inject simulated data whose ranking results can be predicted into the real dataset and compare the ranking results of our algorithm with those of a previous HITS-based algorithm. Our semantic ranking algorithm based on the concept of mutual interaction seems preferable to the HITS-based algorithm as a flexible folksonomy ranking framework. Some concrete points of difference are as follows. First, with the time concept applied to the property weights, our algorithm shows superior performance in lowering the scores of older data and raising the scores of newer data. Second, by applying the time concept to the expertise weights, as well as to the property weights, our algorithm controls the conflicting influence of expertise weights and enhances the overall consistency of time-valued ranking. The expertise weights of the previous study can act as an obstacle to time-valued ranking because the number of followers increases as time goes on. Third, many new properties and classes can be included in our framework. The previous HITS-based algorithm, based on the voting notion, loses ground when the domain consists of more than two classes, or when other important properties, such as "sent through Twitter" or "registered as a friend," are added to the domain. Fourth, there is a big difference in calculation time and memory use between the two kinds of algorithms. While the matrix multiplication of two matrices has to be executed twice for the previous HITS-based algorithm, this is unnecessary with our algorithm. In our ranking framework, various folksonomy ranking policies can be expressed with the ranking factors combined, and our approach can work even if the folksonomy site is not implemented with Semantic Web languages. Above all, the time weight proposed in this paper will be applicable to various domains, including social media, where time value is considered important.
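
The abstract above describes a mutual-interaction, graph-based ranking with time-decayed weights. As a rough, hedged illustration of the general idea only (not the authors' algorithm, property weights, or expertise grading), the following Python sketch propagates scores between users and resources over time-decayed tagging links so that user expertise and resource quality reinforce each other regardless of link direction. All names, data, and the decay constant are invented for the example; only the user-resource links are used for brevity.

```python
import numpy as np

# Hypothetical toy tagging events: (user, tag, resource, timestamp).
events = [
    ("u1", "python", "r1", 10),
    ("u2", "python", "r1", 11),
    ("u2", "gis",    "r2", 3),
    ("u3", "python", "r2", 12),
]

users = sorted({e[0] for e in events})
resources = sorted({e[2] for e in events})
u_idx = {u: i for i, u in enumerate(users)}
r_idx = {r: i for i, r in enumerate(resources)}

now, half_life = 12.0, 5.0
W = np.zeros((len(users), len(resources)))
for u, _, r, t in events:
    # Exponential time decay: newer tagging actions carry more weight.
    W[u_idx[u], r_idx[r]] += 0.5 ** ((now - t) / half_life)

user_score = np.ones(len(users)) / len(users)
res_score = np.ones(len(resources)) / len(resources)
for _ in range(50):
    # Mutual reinforcement: highly scored resources raise their users'
    # expertise, and expert users raise their resources' quality.
    new_user = W @ res_score
    new_res = W.T @ user_score
    user_score = new_user / new_user.sum()
    res_score = new_res / new_res.sum()

print(dict(zip(resources, res_score.round(3))))
print(dict(zip(users, user_score.round(3))))
```

Because the scores are computed from a single weighted link matrix, the time and expertise policies enter only through the link weights, which is roughly the flexibility the framework above argues for.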

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yamg;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society / v.6 no.1 s.11 / pp.73-85 / 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of research and development related to GIS (Geographic Information System) has shifted to the Real-Time Mobile GIS for servicing LBS. To offer LBS efficiently, there must be a Real-Time GIS platform that can deal with the dynamic status of moving objects and a location index that can handle the characteristics of location data. Location data can use the same data type (e.g., point) as GIS, but the management of location data is very different. Therefore, in this paper, we studied a Real-Time Mobile GIS that uses the HBR-tree to manage masses of location data efficiently. The Real-Time Mobile GIS developed in this paper consists of the HBR-tree and the Real-Time GIS platform. The HBR-tree, which we propose in this paper, is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are done within the same hash table in the HBR-tree, so it costs less than other tree-based indexes. Since the HBR-tree uses the same search mechanism as the R-tree, it is possible to search location data quickly. The Real-Time GIS platform consists of a Real-Time GIS engine that is extended from a main-memory database system, a middleware which can transfer spatial and aspatial data to clients and receive location data from clients, and a mobile client which operates on mobile devices. In particular, this paper describes the performance evaluation conducted with practical tests of the HBR-tree and the Real-Time GIS engine, respectively.
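
The HBR-tree itself combines an R-tree with a spatial hash; its actual structure is not reproduced here. As a greatly simplified, hedged sketch of why hash-based handling of moving objects keeps updates cheap, the Python fragment below maintains point locations in a grid hash and answers range queries by scanning only the overlapping cells (the R-tree search layer is omitted). The cell size, class name, and API are illustrative assumptions, not the paper's design.

```python
from collections import defaultdict

CELL = 100.0  # hash cell width in map units (assumed)

class HashedLocationIndex:
    def __init__(self):
        self.cells = defaultdict(dict)   # (cx, cy) -> {object_id: (x, y)}
        self.where = {}                  # object_id -> current cell

    def _cell(self, x, y):
        return (int(x // CELL), int(y // CELL))

    def update(self, oid, x, y):
        # Frequent location updates touch at most two hash buckets and
        # require no tree re-balancing, which is the cost advantage the
        # abstract claims for hash-based handling of moving objects.
        new_cell = self._cell(x, y)
        old_cell = self.where.get(oid)
        if old_cell is not None and old_cell != new_cell:
            del self.cells[old_cell][oid]
        self.cells[new_cell][oid] = (x, y)
        self.where[oid] = new_cell

    def range_query(self, xmin, ymin, xmax, ymax):
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        hits = []
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for oid, (x, y) in self.cells.get((cx, cy), {}).items():
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(oid)
        return hits

idx = HashedLocationIndex()
idx.update("bus-7", 120.0, 80.0)
idx.update("bus-7", 130.0, 95.0)   # moving object: cheap in-bucket update
print(idx.range_query(100, 50, 200, 150))
```

In the real HBR-tree, the search side would additionally exploit R-tree-style bounding rectangles rather than the naive cell scan shown here.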

Quality of Life and Related Factors in Caregivers of Attention Deficit Hyperactivity Disorder Patients (주의력결핍 과잉행동장애 환아 보호자의 삶의 질과 관련요인)

  • Jeong, Jong-Hyun;Hong, Seung-Chul;Han, Jin-Hee;Lee, Sung-Pil
    • Korean Journal of Psychosomatic Medicine / v.13 no.2 / pp.102-111 / 2005
  • Objective : The purpose of this study was to investigate the quality of life and its related factors in caregivers of attention deficit hyperactivity disorder (ADHD) patients. Methods : The subjects were 38 caregivers of ADHD patients (mean age : $37.5 \pm 6.5$, 38 women). Patients were diagnosed using the DSM-IV-TR ADHD criteria. The Korean version of the WHOQOL-BREF (World Health Organization Quality of Life assessment instrument, Abbreviated Version) was used for assessment. Results : 1) No significant differences were found between the caregiver and control groups in the WHOQOL-BREF scores for overall QOL, the physical health domain, the psychological domain, the social relationships domain, and the environmental domain. 2) The scores of the activities of daily living facet ($3.0 \pm 0.7$ vs. $3.6 \pm 0.7$, $p=0.008$) and the self-esteem facet ($2.8 \pm 0.7$ vs. $3.3 \pm 0.7$, $p=0.049$) were significantly decreased in caregivers of ADHD patients. 3) The total score of the WHOQOL-BREF (r=0.437, p=0.007) and the physical health domain score (r=0.370, p=0.024) were correlated with the caregiver's educational age. 4) In the psychological domain, the scores of the self-esteem facet (r=-0.337, p=0.039) and the thinking, learning, memory & concentration facet (r=-0.341, p=0.036) decreased with the caregiver's age. 5) The score of the environmental domain increased significantly with the caregiver's educational age (r=0.482, p=0.003), but decreased with the patient's age (r=0.328, p=0.044). Conclusion : Although the quality of life of caregivers of ADHD patients was not significantly lower than that of controls, quality of life was positively correlated with the educational age of caregivers and negatively correlated with the chronological age of caregivers and children. The above results suggest that physicians should consider integrated approaches to caregivers' subjective quality of life in the management of ADHD.

A Case of Adult-onset Type II Citrullinemia Confirmed by Mutation of SLC25A13 (SLC25A13 유전자 돌연변이로 확진된 성인형 제 2형 시트룰린혈증 1례)

  • Jeung, Min Sub;Yang, Aram;Kim, Jinsup;Park, Hyung-Doo;Lee, Heon Ju;Jin, Dong-Kyu;Cho, Sung Yoon
    • Journal of The Korean Society of Inherited Metabolic disease / v.16 no.1 / pp.34-41 / 2016
  • Adult-onset type II citrullinemia (CTLN2) is characterized by episodes of neurologic symptoms associated with hyperammonemia, leading to disorientation, irritability, seizures, and coma. CTLN2 is distinct from classical citrullinemia, which is caused by a mutation of the argininosuccinic acid synthetase (ASS) gene. The serum citrulline level is elevated, while the activity of ASS in liver tissue is decreased. CTLN2 is known to have a poor prognosis if proper treatment is not given. We report a 37-year-old female who developed recurrent attacks of altered consciousness, aberrant behavior, and vomiting. We initially suspected the patient had CTLN2 because of signs of hyperammonemic encephalopathy, such as altered mentality, memory disturbance, and aberrant behaviors provoked by exercise-induced stress and excessive intravenous amino acid administration. Based on her peculiar dietary preferences and laboratory findings, including hyperammonemia and citrullinemia, we diagnosed the patient with CTLN2, and SLC25A13 sequencing revealed known compound heterozygous mutations (IVS11+1G>A, c.674C>A). Her parents were heterozygous carriers, and we identified that her older sister had the same mutations. The older sister had not experienced any episodes of hyperammonemia, but she had peculiar dietary preferences. The patient and her sister have been well with conservative management. Considering the clinical course of CTLN2, it was meaningful that the older sister could be diagnosed early in an asymptomatic period and that preemptive treatment was employed. Based on this case, CTLN2 should be considered in adults who present with symptoms of hyperammonemic encephalopathy without a definite etiology. Because of its rare incidence and similar clinical features, CTLN2 is frequently misdiagnosed as hepatic encephalopathy, and it shows a poor prognosis due to the lack of early diagnosis and proper treatment. A high-carbohydrate diet, which is usually used to treat other urea cycle defects, can also aggravate the clinical course of CTLN2, so proper metabolic screening tests and genetic studies should be performed.

An Analysis of the Roles of Experience in Information System Continuance (정보시스템의 지속적 사용에서 경험의 역할에 대한 분석)

  • Lee, Woong-Kyu
    • Asia pacific journal of information systems / v.21 no.4 / pp.45-62 / 2011
  • The notion of information systems (IS) continuance has recently emerged as one of the most important research issues in the field of IS. A great deal of research has been conducted thus far on the basis of theories adapted from various disciplines, including consumer behavior and social psychology, in addition to theories regarding information technology (IT) acceptance. This previous body of knowledge provides a robust research framework that can already account for the determination of IS continuance; however, this research points to other, thus-far-unelucidated determinant factors such as habit, which were not included in traditional IT acceptance frameworks, and also re-emphasizes the importance of emotion-related constructs such as satisfaction in addition to conscious intention with rational beliefs such as usefulness. Experience should also be considered one of the most important factors determining the characteristics of IS continuance and the features distinct from those determining IS acceptance, because more experienced users may have more opportunities for IS use, which would allow them more frequent use than would be available to less experienced or non-experienced users. Interestingly, experience has dual features that may influence IS use in contradictory ways. On one hand, attitudes predicated on direct experience have been shown to predict behavior better than attitudes based on indirect experience or no experience; as more information becomes available, direct experience may render IS use a more salient behavior and may also make IS use more accessible via memory. Therefore, experience may serve to intensify the relationship between IS use and conscious intention with evaluations. On the other hand, experience may culminate in the formation of habits: greater experience may also imply more frequent performance of the behavior, which may lead to the formation of habits. Hence, with experience, users' activation of an IS may be more dependent on habit-that is, unconscious automatic use without deliberation regarding the IS-and less dependent on conscious intentions. Furthermore, experiences can provide the basic information necessary for satisfaction with the use of a specific IS, thus spurring the formation of both conscious intentions and unconscious habits. Whereas IT adoption is a one-time decision, IS continuance may be a series of users' decisions and evaluations based on satisfaction with IS use. Moreover, habits cannot be formed without satisfaction, even when a behavior is carried out repeatedly. Thus, experiences also play a critical role in satisfaction, as satisfaction is the consequence of direct experiences of actual behaviors. In particular, emotional experiences such as enjoyment can become as influential on IS use as utilitarian experiences such as usefulness; this is especially true in light of the modern increase in membership-based hedonic systems-including online games, web-based social network services (SNS), blogs, and portals-all of which attempt to provide users with self-fulfilling value. Therefore, in order to understand more clearly the role of experiences in IS continuance, analysis must be conducted under a research framework that includes intentions, habits, and satisfaction, as experience may not only have duration-based moderating effects on the relationship between both intention and habit and the activation of IS use, but may also have content-based positive effects on satisfaction.
This is consistent with the basic assumptions regarding the determining factors in IS continuance as suggested by Ortiz de Guinea and Markus: consciousness, emotion, and habit. The principal objective of this study was to explore and assess the effects of experience in IS continuance, with special consideration given to conscious intentions and unconscious habits, as well as satisfaction. In service of this goal, along with a review of the relevant literature regarding the effects of experience and habit on continuous IS use, this study suggested a research model that represents the roles of experience: its moderating role in the relationships of IS continuance with both conscious intention and unconscious habit, and its antecedent role in the development of satisfaction. For the validation of this research model, Korean university student users of 'Cyworld', one of the most influential social network services in South Korea, were surveyed, and the data were analyzed via partial least squares (PLS) analysis to assess the implications of this study. As a result, most hypotheses in our research model were statistically supported, with the exception of one. Although one hypothesis was not supported, the study's findings provide us with some important implications. First, the role of experience in IS continuance differs from its role in IS acceptance. Second, the use of IS was explained by the dynamic balance between habit and intention. Third, the importance of satisfaction was confirmed from the perspective of IS continuance with experience.
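
The study above analyzes experience as a moderator of the intention-use and habit-use links using PLS. As a rough, hedged stand-in (ordinary regression with mean-centered interaction terms rather than PLS, and simulated rather than survey data), the sketch below shows how such a moderating effect can be tested; all variable names and effect sizes are assumptions made for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data: continued use driven by intention, habit, and an
# intention-by-experience moderation effect (all values invented).
rng = np.random.default_rng(0)
n = 300
intention = rng.normal(size=n)
habit = rng.normal(size=n)
experience = rng.normal(size=n)
use = (0.4 * intention + 0.3 * habit
       + 0.2 * intention * experience
       + rng.normal(scale=0.5, size=n))

df = pd.DataFrame({"use": use, "intention": intention,
                   "habit": habit, "experience": experience})
# Mean-center predictors before forming interaction terms.
df[["intention", "habit", "experience"]] -= df[["intention", "habit", "experience"]].mean()

# Significant interaction coefficients indicate moderation by experience.
model = smf.ols("use ~ intention * experience + habit * experience", data=df).fit()
print(model.summary().tables[1])
```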

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.159-172 / 2010
  • The recommender system is one of the possible solutions to assist customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful recommendation techniques is Collaborative Filtering (CF), which has been used in a number of different applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer, and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms which combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, we do not know many things about how CF is done. Furthermore, the relative performances of CF algorithms are known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predict the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in data for CF recommendations. An ANN model is developed through an analysis of network topology, such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes which are included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected to one another. We use these social network measures as input variables of the ANN model. As an output variable, we use the recommendation accuracy measured by the F1-measure. In order to evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples were gathered, and we used 40%, 40%, and 20% of them for training, testing, and validation, respectively. A 5-fold cross validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns.
We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments reported that the ANN model has an estimated accuracy of 92.61% and an RMSE of 0.0049. Thus, our prediction model helps decide whether CF is useful for a given application with certain data characteristics.
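
As a minimal, hedged illustration of the pipeline described above (not the authors' NetMiner/UCINET/Clementine setup, data, or full feature set), the Python sketch below computes a few of the named topological measures with networkx on synthetic networks and regresses a synthetic F1 target on them with a small neural network. The random graphs, the fake F1 values, and the layer sizes are assumptions for demonstration only.

```python
import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def network_features(G):
    n = G.number_of_nodes()
    degrees = dict(G.degree())
    max_deg = max(degrees.values())
    # Freeman degree centralization, normalized by the star-graph maximum.
    centralization = sum(max_deg - d for d in degrees.values()) / ((n - 1) * (n - 2))
    # Inclusiveness as the share of non-isolated nodes.
    inclusiveness = sum(1 for d in degrees.values() if d > 0) / n
    return [nx.density(G), nx.average_clustering(G), centralization, inclusiveness]

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(60):                                   # 60 synthetic sample networks
    G = nx.erdos_renyi_graph(80, rng.uniform(0.02, 0.2), seed=int(rng.integers(10**6)))
    feats = network_features(G)
    X.append(feats)
    y.append(0.2 + 0.5 * feats[0] + rng.normal(scale=0.02))   # fake F1 target

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)
new_G = nx.erdos_renyi_graph(80, 0.1, seed=7)
print("predicted F1 for a new network:", model.predict([network_features(new_G)])[0])
```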

Value and Prospect of Individual Diaries as Research Materials: Based on "The 12th May Diaries Collection" (개인 일기의 연구 자료로서의 가치와 전망 "5월12일 일기컬렉션"을 중심으로)

  • Choi, Hyo Jin;Yim, Jin Hee
    • The Korean Journal of Archival Studies / no.46 / pp.95-152 / 2015
  • "Archives of Everyday Life" refers to an organization or facility which collects, appraises, selects and preserves the document from the memory of individuals, groups, or a society through categorizing and classifying lives and cultures of ordinary people. The document includes materials such as diaries, autobiography, letters, and notes. It also covers any digital files or hypertext like posts from blogs and online communities, or photos uploaded on Social Network Services. Many research fields including the Records Management Studies has continuously claimed the necessity of collection and preservation of ordinary people's records on daily life produced every moment. Especially diary is a written record reflecting the facts experienced by an individual and his self-examination. Its originality, individuality and uniqueness are considered truly valuable as a document regardless of the era. Lately many diaries have been discovered and presented to the historical research communities, and diverse researchers in human and social studies have embarked more in-depth research on diaries, their authors, and social background of the time. Furthermore, researchers from linguistics, educational studies, and psychology analyze linguistic behaviors, status of cultural assimilation, and emotional or psychological changes of an author. In this study, we are conducting a metastudy from various research on diaries in order to reaffirm the value of "The 12th May Diaries Collection" as everyday life archives. "The 12th May Diaries Collection" consists of diaries produced and donated directly by citizens on the 12th May every year. It was only 2013 when Digital Archiving Institute in Univ. of Myungji organized the first "Annual call for the 12th May". Now more than 2,000 items were collected including hand writing diaries, digital documents, photos, audio and video files, etc. The age of participants also varies from children to senior citizens. In this study, quantitative analysis will be made on the diaries collected as well as more profound discoveries on the detailed contents of each item. It is not difficult to see stories about family and friends, school life, concerns over career path, daily life and feelings of citizens ranging all different generations, regions, and professions. Based on keyword and descriptors of each item, more comprehensive examination will be further made. Additionally this study will also provide suggestions to examine future research opportunities of these diaries for different fields such as linguistics, educational studies, historical studies or humanities considering diverse formats and contents of diaries. Finally this study will also discuss necessary tasks and challenges for "the 12th May Diaries Collection" to be continuously collected and preserved as Everyday Life Archives.

Ideological Impacts and Change in the Recognition of Korean Cultural Heritage during the 20th Century (20세기 한국 문화재 인식의 이데올로기적 영향과 변화)

  • Oh, Chunyoung
    • Korean Journal of Heritage: History & Science / v.53 no.4 / pp.60-77 / 2020
  • An assumption can be made that, as a starting point for the recognition and utilization of cultural heritage, the "choice" of such heritage would reflect the cultural ideology of the ruling power at the time. This is borne out by the case of Korea in the 20th century. First, in the late Korean Empire (1901-1910), the prevailing cultural ideology had been inherited from the Joseon Dynasty, and the main objects the dynasty tried to protect were royal tombs and archives. During this time, investigations by the Japanese into Korean historic sites began in earnest. Stung by this, enlightened intellectuals attempted to recognize these sites as constituting an independent cultural heritage, but the attempts failed to be institutionalized. During the Japanese occupation of 1910-1945, the Japanese led investigations to institutionalize Korean cultural heritage, which formed the beginning of the current cultural heritage management system. At that time, the historical investigation, designation, protection, and enhancement activities led by the Japanese Government-General of Korea not only rationalized the colonial occupation of Korea but also illustrated a colonial perspective. Korean nationalists carried out a campaign for the love of historical remains at an enlightening level, but the campaign had its limits in that it was based on the outcome of research planned by the Japanese. During the 1945-2000 period following liberation from Japan, cultural heritage restoration projects took place that were based on nationalist ideology. These projects were intended to consolidate the regime's legitimacy, and the enactment of the 'Cultural Heritage Charter' in 1997 represented an ideology in itself that stretched beyond a means of promoting nationalist ideology. Throughout the 20th century, the content of cultural heritage changed depending on the inclinations of those with political power, and such choices reflected the cultural ideology that the powers at any given time held with regard to cultural heritage. Behind this cultural heritage choice mechanism, there have been trade-off relationships between terminology and society, as well as the ideological characteristics of collective memories. The ruling party has tried to implant its ideology in its subjects, and it can be considered that it sought to achieve this by being involved in collective memories related to traditional culture, through the so-called choice and utilization of cultural heritage.

Development of Deep-Learning-Based Models for Predicting Groundwater Levels in the Middle-Jeju Watershed, Jeju Island (딥러닝 기법을 이용한 제주도 중제주수역 지하수위 예측 모델개발)

  • Park, Jaesung;Jeong, Jiho;Jeong, Jina;Kim, Ki-Hong;Shin, Jaehyeon;Lee, Dongyeop;Jeong, Saebom
    • The Journal of Engineering Geology / v.32 no.4 / pp.697-723 / 2022
  • Data-driven models to predict groundwater levels 30 days in advance were developed for 12 groundwater monitoring stations in the middle-Jeju watershed, Jeju Island. Stacked long short-term memory (stacked-LSTM), a deep learning technique suitable for time series forecasting, was used for model development. Daily time series data from 2001 to 2022 for precipitation, groundwater usage amount, and groundwater level were considered. Various models were proposed that used different combinations of the input data types and varying lengths of previous time series data for each input variable. A general procedure for deep-learning-based model development is suggested based on consideration of the comparative validation results of the tested models. A model using precipitation, groundwater usage amount, and previous groundwater level data as input variables outperformed any model neglecting one or more of these data categories. Using extended sequences of these past data improved the predictions, possibly owing to the long delay time between precipitation and groundwater recharge, which results from the deep groundwater level in Jeju Island. However, limiting the range of considered groundwater usage data that significantly affected the groundwater level fluctuation (rather than using all the groundwater usage data) improved the performance of the predictive model. The developed models can predict the future groundwater level based on the current amount of precipitation and groundwater use. Therefore, the models provide information on the soundness of the aquifer system, which will help to prepare management plans to maintain appropriate groundwater quantities.
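
As a minimal, hedged sketch of the kind of stacked-LSTM forecaster described above (not the study's actual configuration, hyperparameters, or Jeju monitoring data), the following Python/TensorFlow fragment trains a two-layer LSTM to predict the groundwater level 30 days ahead from a 90-day window of precipitation, usage, and level. The synthetic data, window length, and layer sizes are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf

LOOKBACK, HORIZON, N_FEATURES = 90, 30, 3

def make_windows(series, lookback=LOOKBACK, horizon=HORIZON):
    """Slice a (days, 3) series into input windows and 30-day-ahead targets."""
    X, y = [], []
    for t in range(len(series) - lookback - horizon):
        X.append(series[t:t + lookback])                 # past 90 days, 3 features
        y.append(series[t + lookback + horizon - 1, 2])  # level 30 days later
    return np.array(X), np.array(y)

# Synthetic daily data: [precipitation, usage, groundwater level].
days = 2000
rain = np.random.gamma(1.0, 2.0, days)
usage = 5 + np.sin(np.arange(days) / 365 * 2 * np.pi) + np.random.normal(0, 0.2, days)
level = 10 + np.convolve(rain, np.ones(60) / 60, mode="same") - 0.3 * usage  # delayed recharge
data = np.stack([rain, usage, level], axis=1)

X, y = make_windows(data)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(LOOKBACK, N_FEATURES)),
    tf.keras.layers.LSTM(32, return_sequences=True),   # first stacked LSTM layer
    tf.keras.layers.LSTM(16),                           # second stacked LSTM layer
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("30-day-ahead level forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```

The long lookback window mirrors the abstract's point that extended sequences of past data help capture the delayed response of the groundwater level to precipitation.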

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for the traditional methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, so it offers stability in the management of large funds and has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also learns much faster than traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. This model uses XGBoost to predict the risk of assets and applies the predicted risk to the process of covariance estimation. There are estimation errors between the estimation period and the actual investment period because the optimized asset allocation model estimates the proportion of investments based on historical data, and these estimation errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of the cumulative rate of return and obtained a large sample of results because of the long test period. Compared with the traditional risk parity model, this experiment recorded improvements in both cumulative yield and the reduction of estimation errors. The total cumulative return is 45.748%, about 5% higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easy to apply in practical investment. The results of the experiment showed an improvement in portfolio performance through reducing the estimation errors of the optimized asset allocation model. Many financial models and asset allocation models are limited in practical investment because of the most fundamental question of whether the past characteristics of assets will continue into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models, but also supplements the limitations of traditional methods and increases stability by predicting the risks of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods to reduce estimation errors in portfolio optimization; we also suggest a new method to reduce estimation errors in an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial-intelligence asset allocation model for fast-developing financial markets.
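
As a minimal, hedged illustration of the combined idea described above (not the paper's exact features, data, hyperparameters, or back-test), the Python sketch below predicts each asset's next-period volatility with XGBoost from a simple lagged-volatility feature, rescales the sample correlation matrix with the predicted volatilities, and solves for equal-risk-contribution (risk parity) weights. All data, the single feature, and the model settings are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_days, n_assets = 1000, 4
# Synthetic daily returns with different volatility levels per asset.
returns = rng.normal(0, 0.01, size=(n_days, n_assets)) * (1 + np.linspace(0, 1, n_assets))

def lagged_vol_features(r, window=20):
    """Rolling realized volatility, used both as feature and as target."""
    vols = np.array([r[i - window:i].std(axis=0) for i in range(window, len(r))])
    return vols[:-1], vols[1:]          # today's vol -> next period's vol

X, y = lagged_vol_features(returns)
pred_vol = np.empty(n_assets)
for a in range(n_assets):               # one simple model per asset
    model = XGBRegressor(n_estimators=100, max_depth=3, learning_rate=0.1)
    model.fit(X[:, [a]], y[:, a])
    pred_vol[a] = model.predict(X[-1:, [a]])[0]

# Forward-looking covariance: historical correlations, predicted volatilities.
corr = np.corrcoef(returns.T)
cov = corr * np.outer(pred_vol, pred_vol)

def risk_parity_weights(cov):
    n = cov.shape[0]
    def objective(w):
        rc = w * (cov @ w)              # per-asset risk contributions
        return np.sum((rc - rc.mean()) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(objective, np.ones(n) / n, bounds=[(0, 1)] * n, constraints=cons)
    return res.x

print("risk parity weights:", risk_parity_weights(cov).round(3))
```

Replacing the trailing realized volatility with a predicted one is the step the study argues reduces the gap between the estimation period and the actual investment period.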