• Title/Summary/Keyword: performance metric

Search Results: 519

Effect of Supplementary Feeding of Concentrate on Nutrient Utilization and Production Performance of Ewes Grazing on Community Rangeland during Late Gestation and Early Lactation

  • Chaturvedi, O.H.;Bhatta, Raghavendra;Santra, A.;Mishra, A.S.;Mann, J.S.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.16 no.7
    • /
    • pp.983-987
    • /
    • 2003
  • Malpura and Kheri ewes (76) in late gestation, weighing $34.40{\pm}0.95$ kg, were randomly selected and divided into 4 groups of 19 each (G1, G2, G3 and G4). Ewes in all groups grazed on natural rangeland from 07.00 h to 18.00 h. Ewes in G1 were maintained on sole grazing, while ewes in G2, G3 and G4, in addition to grazing, received a concentrate mixture at the rate of 1% of their body weight during late gestation, early lactation, and the entire last quarter of pregnancy through the early quarter of lactation, respectively. The herbage yield of the community rangeland was 0.82 metric ton dry matter/hectare. The diet consisted of (%) Guar (Cyamopsis tetragonoloba) bhusa (59.2), Babool pods and leaves (17.2), Bajra (Pennisetum typhoides) stubbles (8.8), Doob (5.3), Aak (4.2) and others (5.3). Nutrient intake and digestibility were higher (p<0.01) in G2, G3 and G4 than in G1 because of concentrate supplementation. The intakes of DM ($g/kg\;W^{0.75}$), DCP ($g/kg\;W^{0.75}$) and ME ($MJ/kg\;W^{0.75}$) were 56.7, 5.3 and 0.83; 82.7, 12.2 and 1.16; 82.7, 12.1 and 1.17; and 83.1, 12.3 and 1.18 in G1, G2, G3 and G4, respectively. The percent digestibility of DM, OM, CP, NDF, ADF and cellulose was 57.9, 68.8, 68.7, 52.3, 37.5 and 68.4; 67.6, 76.1, 82.3, 60.6, 44.5 and 73.4; 67.6, 76.1, 81.5, 60.6, 44.8 and 74.5; and 67.6, 76.1, 82.3, 60.6, 44.7 and 73.3 in G1, G2, G3 and G4, respectively. The nutrient intake of G2, G3 and G4 ewes was sufficient to meet their requirements. Ewes raised on sole grazing lost weight between advanced pregnancy and lambing, whereas ewes given supplementary feeding gained 1.9-2.5 kg by lambing. The birth weight of lambs in G2 (3.92) and G4 (4.07) was higher (p<0.01) than in G1 (2.98), whereas it was similar in G1 and G3. The weights of lambs at 15, 45 and 60 days of age were higher in G2, G3 and G4 than in G1. Similarly, the average daily gain (ADG) at 60 days was higher in G2, G3 and G4 than in G1. The milk yield of lactating ewes in G2, G3 and G4 was 150-250 g per day higher than in G1. Birth weight, weight at 15, 30, 45 and 60 days, weight gain, and ADG at 30 or 60 days were similar in male and female lambs. It is concluded from this study that the biomass yield of the community rangeland is low and insufficient to meet the nutrient requirements of ewes during late gestation and early lactation. Concentrate supplementation at the rate of 1% of body weight is therefore recommended for ewes during these critical stages to enhance their production performance and general condition as well as the birth weight and growth rate of lambs.
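
A brief aside on the units above: the sketch below (Python, with invented figures used only for illustration, not data from the study) shows how an intake is expressed per kilogram of metabolic body weight ($W^{0.75}$).

```python
# Illustrative only (invented figures): express a daily dry matter intake per
# kilogram of metabolic body weight (W^0.75), the unit used in the abstract above.
def intake_per_metabolic_weight(daily_intake_g: float, body_weight_kg: float) -> float:
    """Return intake in grams per kg of metabolic body weight (W^0.75)."""
    return daily_intake_g / body_weight_kg ** 0.75

# A 34.4 kg ewe eating about 1,180 g DM/day works out to roughly
# 83 g DM/kg W^0.75, the level reported for the supplemented groups.
print(round(intake_per_metabolic_weight(1180, 34.4), 1))
```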

A New Item Recommendation Procedure Using Preference Boundary

  • Kim, Hyea-Kyeong;Jang, Moon-Kyoung;Kim, Jae-Kyeong;Cho, Yoon-Ho
    • Asia pacific journal of information systems
    • /
    • v.20 no.1
    • /
    • pp.81-99
    • /
    • 2010
  • Lately, the number of new items in consumer markets has been increasing at an overwhelming rate, while consumers have limited access to information about those new products when trying to make sensible, well-informed purchases. Therefore, item providers and customers need a system that recommends the right items to the right customers. Whenever new items are released, a recommender system specializing in new items can also help item providers locate and identify potential customers. Currently, new items are added to an existing system without being specially noted to consumers, making it difficult for consumers to identify and evaluate new products introduced in the markets. Most previous approaches to recommender systems have to rely on the usage history of customers; for new items, this content-based (CB) approach is simply not available for the system to recommend those new items to potential consumers. Although the collaborative filtering (CF) approach is not directly applicable to the new item problem, it is a good idea to use the basic principle of CF, which identifies similar customers, i.e., neighbors, and recommends items to customers who have liked similar items in the past. This research suggests a hybrid recommendation procedure based on the preference boundary of a target customer, using the preference boundary in the feature space to recommend new items only. The basic principle is that if a new item falls within the preference boundary of a target customer, it is evaluated as preferred by that customer. Customers' preferences and the characteristics of items, including new items, are represented in a feature space, and the scope or boundary of the target customer's preference is extended to those of the neighbors. The new item recommendation procedure consists of three steps. The first step is analyzing the profile of items, which are represented as k-dimensional feature values. The second step is determining the representative point of the target customer's preference boundary, the centroid, based on a personal information set. To determine the centroid of the preference boundary of a target customer, three algorithms are developed in this research: one uses the centroid of the target customer only (TC), another uses the centroid of a (dummy) big target customer composed of the target customer and his/her neighbors (BC), and the third uses the centroids of the target customer and his/her neighbors (NC). The third step is determining the range of the preference boundary, the radius; the suggested algorithm uses the average distance (AD) between the centroid and all purchased items. We test whether the CF-based approach to determining the centroid of the preference boundary improves recommendation quality. For this purpose, we develop two hybrid algorithms, BC and NC, which use neighbors when deciding the centroid of the preference boundary. To test the validity of the hybrid algorithms BC and NC, we developed a CB algorithm, TC, which uses the target customer only. We measured effectiveness scores of the suggested algorithms and compared them through a series of experiments with a set of real mobile image transaction data. We split the period from 1st June 2004 to 31st July 2004 and the period from 1st August to 31st August 2004 into a training set and a test set, respectively. The training set is used to build the preference boundary, and the test set is used to evaluate the performance of the suggested hybrid recommendation procedure. The main aim of this research is to compare the hybrid recommendation algorithms with the CB algorithm. To evaluate the performance of each algorithm, we compare the list of new items purchased in the test period with the item list recommended by the suggested algorithms, and we employ the hit ratio as the evaluation metric. The hit ratio is defined as the ratio of the hit set size to the recommended set size. The hit set size is the number of successful recommendations in our experiment, and the test set size is the number of items purchased during the test period. Experimental results show that the hit ratios of BC and NC are higher than that of TC. This means that using neighbors is more effective for recommending new items; that is, the hybrid algorithms using CF are more effective than the CB-only algorithm when recommending new items to consumers. The reason the hit ratio of BC is smaller than that of NC is that BC is defined as a dummy or virtual customer who purchased all items of the target customer and the neighbors; the centroid of BC therefore often shifts away from that of TC and tends to reflect skewed characteristics of the target customer. The recommendation algorithm using NC shows the best hit ratio, because NC has sufficient information about the target customer and the neighbors without damaging the information about the target customer.
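
A minimal sketch of the preference-boundary idea in the abstract above. The names and the toy 2-D feature data are my own, not the authors' code: the centroid is the mean of a customer's purchased item vectors, the radius is the average distance (AD) to them, a new item is recommended when it falls inside the resulting hypersphere, and the hit ratio compares recommendations against test-period purchases.

```python
import numpy as np

def preference_boundary(purchased: np.ndarray):
    """Centroid = mean of purchased item vectors; radius = average distance (AD)."""
    centroid = purchased.mean(axis=0)
    radius = np.linalg.norm(purchased - centroid, axis=1).mean()
    return centroid, radius

def recommend_new_items(new_items: np.ndarray, centroid: np.ndarray, radius: float):
    """A new item is recommended if it falls inside the preference boundary."""
    return np.linalg.norm(new_items - centroid, axis=1) <= radius

def hit_ratio(recommended: set, purchased_in_test: set) -> float:
    """Hit ratio = |hit set| / |recommended set|, as defined in the abstract."""
    return len(recommended & purchased_in_test) / len(recommended) if recommended else 0.0

# Toy usage: TC would build the boundary from the target customer's purchases
# only; NC would repeat this for each neighbor and pool the recommendations.
purchased = np.array([[0.20, 0.80], [0.30, 0.70], [0.25, 0.90]])
centroid, radius = preference_boundary(purchased)
new_items = np.array([[0.28, 0.80], [0.90, 0.10]])
print(recommend_new_items(new_items, centroid, radius))  # [ True False]
print(hit_ratio({"new1", "new2"}, {"new2", "new7"}))     # 0.5
```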

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.19-32
    • /
    • 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivative products and trading volatility strategies. This study presents a novel mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embedded the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility for 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded market in the world; its trading volume exceeds 10 million contracts a day, the highest of all stock index option markets. Therefore, analyzing the VKOSPI is important for understanding the volatility inherent in option prices and can offer trading ideas for futures and option dealers. Using the VKOSPI as a volatility proxy avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by the maximum likelihood procedure. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric GARCH model is the basic GARCH(1,1). Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and are compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, indicating the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are estimated and forecasted by the GARCH models. A volatility trading system is developed using the forecasted change direction of the VKOSPI: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; a short straddle or strangle position is taken if the VKOSPI is expected to fall tomorrow. Total profit is calculated as the cumulative sum of the VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; it is subtracted from the trading profit if the forecasted direction is incorrect. For the in-sample period, the power ARCH model fits best on a statistical metric, the Mean Squared Prediction Error (MSPE), and the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of correctly forecasting the VKOSPI change direction for tomorrow. Generally, the power ARCH model shows the best fit for the VKOSPI. 
All the GARCH models provide trading profits for the volatility trading system, and the exponential GARCH model shows the best performance, an annual profit of 197.56%, during the in-sample period. The GARCH models also produce trading profits during the out-of-sample period, except for the exponential GARCH model. During the out-of-sample period, the power ARCH model shows the largest annual trading profit, 38%. The volatility clustering and asymmetry found in this research are a reflection of volatility non-linearity. This further suggests that combining the asymmetric GARCH models with artificial neural networks, which have been shown to model nonlinear relationships effectively, can significantly enhance the performance of the suggested volatility trading system.
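
A small sketch of the profit-accounting rule quoted above, under the assumption (mine) that directions and percentage changes are held in plain NumPy arrays: the absolute daily VKOSPI percentage change is added when the forecasted direction is correct and subtracted when it is wrong.

```python
import numpy as np

def trading_profit(vkospi_pct_change: np.ndarray, forecast_up: np.ndarray) -> float:
    actual_up = vkospi_pct_change > 0            # realized direction of the VKOSPI
    correct = actual_up == forecast_up           # did the GARCH forecast match?
    signed = np.where(correct, np.abs(vkospi_pct_change), -np.abs(vkospi_pct_change))
    return float(signed.sum())                   # cumulative trading profit (%)

changes = np.array([2.0, -1.5, 0.5, -3.0])       # daily VKOSPI % changes (toy)
forecasts = np.array([True, True, True, False])  # True = "VKOSPI rises tomorrow"
print(trading_profit(changes, forecasts))        # 2.0 - 1.5 + 0.5 + 3.0 = 4.0
```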

Testing for Measurement Invariance of Fashion Brand Equity (패션브랜드 자산 측정모델의 등치테스트에 관한 연구)

  • Kim Haejung;Lim Sook Ja;Crutsinger Christy;Knight Dee
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.28 no.12 s.138
    • /
    • pp.1583-1595
    • /
    • 2004
  • Simon and Sullivan (1993) estimated that clothing- and textile-related brand equity had the highest magnitude of any industry category. This reflects the fact that fashion brands reinforce symbolic and social values and emotional characteristics that distinguish them from generic brands. Recently, Kim and Lim (2002) developed a fashion brand equity scale to measure a brand's psychometric properties. However, they suggested that additional psychometric tests were needed to compare the relative magnitude of each brand's equity. The purpose of this study was to identify the psychometric constructs of fashion brand equity and to validate Kim and Lim's fashion brand equity scale using the measurement invariance test of cross-group comparison. First, we identified the constructs of fashion brand equity using confirmatory factor analysis through structural equation modeling. Second, we compared the relative magnitude of two brands' equity using the measurement invariance test of multi-group simultaneous factor analysis. Data were collected at six major universities in Seoul, Korea, yielding 696 usable surveys for data analysis. The results showed that fashion brand equity comprised 16 items representing six dimensions: customer-brand resonance, customer feeling, customer judgment, brand imagery, brand performance and brand awareness. The measurement invariance of the two brands' equities was supported by configural and metric invariance tests. There were significant differences in the mean values of five constructs; the greatest difference was in customer feeling and the smallest in customer judgment.
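
The metric invariance test mentioned in this abstract is commonly carried out as a chi-square difference test between the configural model (loadings free across groups) and the metric model (loadings constrained equal). The sketch below uses made-up fit statistics, not the study's, purely to show the arithmetic.

```python
from scipy.stats import chi2

def chi_square_difference(chi2_constrained, df_constrained, chi2_free, df_free):
    delta_chi2 = chi2_constrained - chi2_free
    delta_df = df_constrained - df_free
    p_value = chi2.sf(delta_chi2, delta_df)   # non-significant p supports invariance
    return delta_chi2, delta_df, p_value

# A small, non-significant chi-square increase when loadings are constrained
# equal across the two brands would support metric invariance (toy numbers).
print(chi_square_difference(312.4, 202, 298.1, 192))
```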

An Efficient Estimation of Place Brand Image Power Based on Text Mining Technology (텍스트마이닝 기반의 효율적인 장소 브랜드 이미지 강도 측정 방법)

  • Choi, Sukjae;Jeon, Jongshik;Subrata, Biswas;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.113-129
    • /
    • 2015
  • Place branding is an important income-generating activity: it gives special meaning to a specific location and produces identity and communal value, based on an understanding of the place branding concept and methodology. Many other areas, such as marketing, architecture, and city construction, also exert an influence on creating an impressive brand image. A place brand that is well recognized both by Koreans and by foreigners creates significant economic effects. There has been research on creating a strategic and detailed place brand image, the representative work being that of Anholt, who surveyed two million people from 50 different countries. However, such investigations, including survey research, require a great deal of labor and significant expense. As a result, there is a need for more affordable, objective and effective research methods. The purpose of this paper is to find a way to measure the intensity of a place brand image objectively and at low cost through text mining. The proposed method extracts keywords and the factors constructing the place brand image from related web documents; in this way, we can measure the brand image intensity of a specific location. The performance of the proposed methodology was verified through comparison with Anholt's city image consistency index ranking of 50 cities around the world. Four methods were applied in the test. First, the RANDOM method artificially ranks the cities included in the experiment. The HUMAN method uses a questionnaire answered by 9 volunteers who are well acquainted with brand management and with the cities to be evaluated; they are asked to rank the cities, and their rankings are compared with Anholt's evaluation results. The TM method applies the proposed method to evaluate the cities with all evaluation criteria. TM-LEARN, an extension of TM, selects significant evaluation items from the items in every criterion and then evaluates the cities with the selected evaluation criteria. RMSE is used as a metric to compare the evaluation results. The experimental results of this paper's methodology are as follows. Firstly, compared to the evaluation method that targets ordinary people, this method appeared to be more accurate. Secondly, compared to the traditional survey method, the time and cost are much lower because automated means were used. Thirdly, the proposed methodology is timely because the evaluation can be repeated at any time. Fourthly, compared to Anholt's method, which evaluated only a pre-specified set of cities, the proposed methodology is applicable to any location. Finally, the proposed methodology has relatively high objectivity because the research was conducted on open source data. As a result, our text mining approach to city image evaluation is valid in terms of accuracy, cost-effectiveness, timeliness, scalability, and reliability. The proposed method provides managers with clear guidelines for brand management in the public and private sectors. In the public sector, such as local government offices, the proposed method could be used to formulate strategies and enhance the image of places in an efficient manner. 
Rather than conducting heavy questionnaires, local officers could quickly monitor the current place image beforehand and decide to proceed with a formal place image test only if the evaluation results from the proposed method are out of the ordinary, whether the results indicate an opportunity or a threat to the place. Moreover, by combining the proposed method with morphological analysis, extraction of meaningful facets of a place brand from text, sentiment analysis and more, marketing strategy planners or civil engineering professionals may obtain deeper and more abundant insights for better place brand images. In the future, a prototype system will be implemented to show the feasibility of the idea proposed in this paper.
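
A minimal sketch of the RMSE comparison described above, with invented rankings in place of the paper's data: each method's city ranking is scored against Anholt's reference ranking, and a lower RMSE means closer agreement.

```python
import numpy as np

def rmse(reference_rank: np.ndarray, method_rank: np.ndarray) -> float:
    return float(np.sqrt(np.mean((reference_rank - method_rank) ** 2)))

anholt_rank = np.array([1, 2, 3, 4, 5])   # reference ranking (toy)
tm_rank     = np.array([1, 3, 2, 4, 5])   # ranking produced by the TM method (toy)
random_rank = np.array([4, 1, 5, 2, 3])   # ranking from the RANDOM baseline (toy)
print(rmse(anholt_rank, tm_rank), rmse(anholt_rank, random_rank))
```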

Ontology-based User Customized Search Service Considering User Intention (온톨로지 기반의 사용자 의도를 고려한 맞춤형 검색 서비스)

  • Kim, Sukyoung;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.129-143
    • /
    • 2012
  • Recently, the rapid progress of standardized web technologies and the proliferation of web users worldwide have brought an explosive increase in the production and consumption of information documents on the web. In addition, most companies produce, share, and manage a huge number of information documents needed to perform their business, and they also collect, store and manage, at their discretion, many web documents published on the web. Along with this increase in the information documents that must be managed, the need for a solution to locate information documents more accurately among a huge number of information sources has increased. To satisfy this need for accurate search, the market for search engine solutions continues to expand. The most important of the many functions provided by a search engine is to locate accurate information documents within huge information sources. The major metric for evaluating the accuracy of a search engine is relevance, which consists of two measures, precision and recall. Precision is a measure of exactness, that is, what percentage of the documents returned as answers are actually relevant, whereas recall is a measure of completeness, that is, what percentage of the relevant documents are retrieved. These two measures are weighted differently according to the applied domain: if information such as patent documents and research papers must be searched exhaustively, it is better to increase recall; on the other hand, when the amount of information is small, it is better to increase precision. Most existing web search engines use a keyword search method that returns web documents containing the keywords entered by a user. This method has the virtue of locating all matching web documents quickly, even when many search words are entered. However, it has the fundamental limitation of not considering the search intention of the user, thereby retrieving irrelevant results along with relevant ones; additional time and effort are then needed to sort the relevant results out of everything returned by the search engine. In other words, the keyword search method can increase recall, but it is difficult to locate the web documents a user actually wants to find, because the method provides no means of understanding the user's intention and reflecting it in the search process. Thus, this research suggests a new method that combines an ontology-based search solution with the core search functionality provided by existing search engine solutions. The method enables a search engine to provide optimal search results by inferring the search intention of the user. To that end, we build an ontology that contains the concepts of a specific domain and the relationships among them. The ontology is used to infer synonyms of the search keywords entered by the user, thereby reflecting the user's search intention in the search process more actively than existing search engines do. Based on the proposed method, we implement a prototype search system and test it in the patent domain, where we experiment on searching for relevant documents associated with a patent. The experiment shows that our system increases both recall and precision and improves search productivity through an improved user interface that enables a user to interact with the search system effectively. 
In future research, we will validate the performance advantage of our prototype system by comparing it with other search engine solutions, and will extend the applied domain to other information search domains, such as portals.
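
A small, self-contained illustration of the two relevance measures discussed in this abstract, using invented document identifiers rather than anything from the study.

```python
def precision_recall(retrieved: set, relevant: set):
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0  # exactness
    recall = len(hits) / len(relevant) if relevant else 0.0       # completeness
    return precision, recall

# A broad keyword query tends to raise recall at the cost of precision; the
# ontology-expanded query described above aims to improve both.
retrieved = {"p1", "p2", "p3", "p4", "p5"}    # documents returned by the engine
relevant = {"p2", "p4", "p6"}                 # documents the user actually wants
print(precision_recall(retrieved, relevant))  # (0.4, 0.666...)
```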

Applicability of American and European Spirometry Repeatability Criteria to Korean Adults (한국 성인을 대상으로 한 미국 및 유럽 폐활량 검사 재현성 기준의 유용성)

  • Park, Byung Hoon;Park, Moo Suk;Jung, Woo Young;Byun, Min Kwang;Park, Seon Cheol;Shin, Sang Yun;Jeon, Han Ho;Jung, Kyung Soo;Moon, Ji Ae;Kim, Se Kyu;Chang, Joon;Kim, Sung Kyu;Ahn, Song Vogue;Oh, Yeon-Mok;Lee, Sang Do;Kim, Young Sam
    • Tuberculosis and Respiratory Diseases
    • /
    • v.63 no.5
    • /
    • pp.405-411
    • /
    • 2007
  • Background: The objective of this study was to evaluate the clinical applicability of the repeatability criteria recommended by the American Thoracic Society/European Respiratory Society (ATS/ERS) spirometry guidelines and to determine which factors affect the repeatability of spirometry in Korean adults. Methods: We reviewed the spirometry data of 4,663 Korean adults from the Korean National Health and Nutritional Examination Survey (KNHANES), the Chronic Obstructive Pulmonary Disease Cohort (COPD cohort), and the Community-based Cohort Study VI-Fishing village/Islands (community cohort). We measured anthropometric factors and the differences between the highest and second-highest FVC (dFVC) and $FEV_1$ ($dFEV_1$) from prebronchodilator spirometry. Analyses included the distribution of dFVC and $dFEV_1$, comparison of values meeting the 1994 ATS repeatability criteria with values meeting the 2005 ATS/ERS repeatability criteria, and linear regression to evaluate the influence of subject characteristics and the change of criteria on spirometric variability. Results: About 95% of subjects were able to reproduce FVC and $FEV_1$ within 150 ml. The KNHANES, based on the 1994 ATS guidelines, showed poorer repeatability than the COPD cohort and community cohort, which were based on the 2005 ATS/ERS guidelines. Demographic and anthropometric factors had little effect on repeatability, explaining only 0.5 to 3% of the variability. Conclusion: We conclude that the new spirometry repeatability criteria recommended by the 2005 ATS/ERS guidelines are also applicable to Korean adults. The repeatability of spirometry depends little on individual characteristics when an experienced technician performs the testing. Therefore, we suggest that sustained efforts in public awareness of the new repeatability criteria, quality control of spirograms, and education of personnel are needed for reliable spirometric results.
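
For illustration only (my own function and variable names, toy values), the 150 ml repeatability criterion discussed above can be checked as follows: the difference between the two largest FVC values and between the two largest FEV1 values from acceptable manoeuvres should each be within 150 ml.

```python
# Hypothetical helper: check the 2005 ATS/ERS repeatability criterion that the
# two largest FVC and FEV1 values agree within 150 ml (0.150 L).
def is_repeatable(fvc_litres, fev1_litres, limit_litres=0.150):
    fvc_sorted = sorted(fvc_litres, reverse=True)
    fev1_sorted = sorted(fev1_litres, reverse=True)
    d_fvc = fvc_sorted[0] - fvc_sorted[1]      # dFVC
    d_fev1 = fev1_sorted[0] - fev1_sorted[1]   # dFEV1
    return d_fvc <= limit_litres and d_fev1 <= limit_litres

# Three manoeuvres from one hypothetical subject (litres).
print(is_repeatable([4.10, 4.02, 3.95], [3.20, 3.14, 3.05]))  # True
```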

Evaluation of Image Quality in Micro-CT System Using Constrained Total Variation (TV) Minimization (Micro-CT 시스템에서 제한된 조건의 Total Variation (TV) Minimization을 이용한 영상화질 평가)

  • Jo, Byung-Du;Choi, Jong-Hwa;Kim, Yun-Hwan;Lee, Kyung-Ho;Kim, Dae-Hong;Kim, Hee-Joung
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.252-260
    • /
    • 2012
  • The reduction of radiation dose from x-rays is a main concern in computed tomography (CT) imaging because of the side effects of dose on the human body. Recently, various methods for dose reduction have been studied in CT, and one of them is iterative reconstruction based on total variation (TV) minimization from few-view data. In this paper, we evaluated image quality for the total variation (TV) minimization algorithm and the Feldkamp-Davis-Kress (FDK) algorithm in micro computed tomography (micro-CT). To evaluate the effect of the TV minimization algorithm, we produced a cylindrical phantom containing contrast media, water, and air inserts. A maximum of 400 projection views can be acquired per rotation of the x-ray tube and detector; 20, 50, 90, and 180 projections were chosen to evaluate the level of image restoration by TV minimization. The phantom and mouse images reconstructed with the FDK algorithm from 400 projections were used as reference images for comparison with the TV minimization and FDK algorithms at few views. Contrast-to-noise ratio (CNR) and universal quality index (UQI) were used as image evaluation metrics. Even when projection data are insufficient, our results show that the image quality of the TV minimization reconstruction is similar to that of the FDK image reconstructed from 400 views. In the cylindrical phantom study at 90 views, the CNR of the TV image was 5.86, that of the FDK image was 5.65, and that of the FDK reference was 5.98; the CNR of the TV image was thus 0.21 higher than that of the FDK image. The UQI of the TV image was 0.99 and that of the FDK image was 0.81 at 90 views, i.e., the UQI of the TV image was 0.18 higher. In the mouse study, the UQI of the TV image was 0.91 and that of the FDK image was 0.83 at 90 views, so the UQI of the TV image was 0.08 higher. In both the cylindrical phantom and mouse studies, the TV minimization algorithm showed the best performance in artifact reduction and edge preservation with few-view data. Therefore, TV minimization can potentially be expected to reduce patient dose in clinics.
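
The sketch below shows one common definition of the contrast-to-noise ratio (CNR) used as an evaluation metric above, computed on synthetic regions of interest; the exact formula and ROI values used by the authors may differ.

```python
import numpy as np

def cnr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """Mean ROI difference scaled by the background noise (one common CNR definition)."""
    return float(abs(roi_signal.mean() - roi_background.mean()) / roi_background.std())

# Toy usage: a contrast-insert ROI against a water-background ROI.
rng = np.random.default_rng(0)
insert_roi = rng.normal(120.0, 5.0, size=1000)       # hypothetical insert values
background_roi = rng.normal(60.0, 10.0, size=1000)   # hypothetical water values
print(round(cnr(insert_roi, background_roi), 2))      # roughly (120 - 60) / 10 = 6
```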

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with an MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which consists of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the statistical metric MSE shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if forecasted tomorrow volatility will increase, then buy volatility today; if forecasted tomorrow volatility will decrease, then sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. The IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH models in the testing period. The percentage of profitable trades of the MLE-based GARCH IVTS models ranges from 47.5% to 50.0%, while that of the SVR-based GARCH IVTS models ranges from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return; MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return; MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. 
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4% and that of the MLE-based IVTS is +150.2%; the SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models are needed in the search for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information for stock market investors.
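
A small sketch of the IVTS entry rules quoted above, with assumed names and made-up forecast values (not the study's data). As a simplification of my own, the day-to-day change in the forecasted volatility series is used as the direction signal: a rising forecast opens or keeps a long volatility position, a falling forecast a short one, and an unchanged forecast holds the current position.

```python
def ivts_positions(forecast_vol):
    """Return the daily volatility position implied by the IVTS entry rules."""
    positions, current = [], "flat"
    for today, tomorrow in zip(forecast_vol, forecast_vol[1:]):
        if tomorrow > today:
            current = "long"    # buy volatility today
        elif tomorrow < today:
            current = "short"   # sell volatility today
        # unchanged forecast: hold the existing position
        positions.append(current)
    return positions

# Toy usage with made-up forecasted volatility values.
print(ivts_positions([14.2, 15.0, 15.0, 13.8, 14.5]))
# ['long', 'long', 'short', 'long']
```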