• Title/Summary/Keyword: data-based model


Regionality and Variability of Net Primary Productivity and Rice Yield in Korea (우리 나라의 순1차생산력 및 벼 수량의 지역성과 변이성)

  • JUNG YEONG-SANG;BANG JUNG-HO;HAYASHI YOSEI
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.1 no.1
    • /
    • pp.1-11
    • /
    • 1999
  • Rice yield and net primary productivity (NPP) depend on the variability of climate and soil. The variability and regionality of rice yield and NPP were evaluated with meteorological data from the Korea Meteorological Administration and actual rice yield data from the Ministry of Agriculture and Forestry, Korea. NPP estimated with three models, driven by temperature (NPP-T), precipitation (NPP-P), and net radiation (NPP-R), ranged from 10.87 to 17.52 Mg ha$^{-1}$ with an average of 14.69 Mg ha$^{-1}$ in South Korea, and from 6.47 to 15.58 Mg ha$^{-1}$ with an average of 12.59 Mg ha$^{-1}$ in North Korea. The primary limiting factor of NPP in Korea was net radiation, and the secondary limiting factor was temperature. Spectral analysis of the long-term change in July and August air temperature showed periodicity: a short period of 3 to 7 years and a long period of 15 to 43 years. The coefficients of variation (CV) of rice yield from 1989 to 1998 ranged from 3.23 to 12.37 percent, lower than in past decades. The CVs in Kangwon and Kyeongbuk were high, while that in Chonbuk was the lowest. A prediction model based on the yield index and the yield response to temperature obtained from field crop conditions gave reasonable results, so the spatial distributions of actual and predicted rice yield could be mapped. Predicted yields fitted actual yields well except in Kyungbuk. For better prediction, the model should be modified to incorporate a radiation factor.
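The abstract does not name the three NPP models; a common choice for temperature- and precipitation-limited NPP is Lieth's Miami model, combined with Liebig's law of the minimum. A minimal sketch under that assumption (the climate values are hypothetical, not data from the study):

```python
import math

def npp_temperature(t_mean_c):
    """Lieth's Miami model: temperature-limited NPP (g dry matter m^-2 yr^-1)."""
    return 3000.0 / (1.0 + math.exp(1.315 - 0.119 * t_mean_c))

def npp_precipitation(p_mm):
    """Miami model: precipitation-limited NPP (g dry matter m^-2 yr^-1)."""
    return 3000.0 * (1.0 - math.exp(-0.000664 * p_mm))

def npp_estimate(t_mean_c, p_mm):
    """Liebig's law of the minimum: the smaller factor limits production.
    Returns NPP in Mg ha^-1 (1 g m^-2 = 0.01 Mg ha^-1)."""
    return 0.01 * min(npp_temperature(t_mean_c), npp_precipitation(p_mm))

# Hypothetical mid-Korea climate: 12 C mean temperature, 1300 mm precipitation.
print(round(npp_estimate(12.0, 1300.0), 2))  # 15.85, i.e. temperature-limited
```

With these inputs the temperature term is the smaller of the two, which is why it is returned as the limiting factor; the study's third model (net radiation) is not part of the classic Miami formulation and is omitted here.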


An Evaluation Model on Supply Factors of Urban Park (도시공원의 공급인자 평가모형)

  • Chang, Byung-Moon
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.38 no.1
    • /
    • pp.1-11
    • /
    • 2010
  • The purpose of this paper is to evaluate the supply factors of urban parks and to answer the research question: what are the causal effects of the supply factors of urban parks on visitor satisfaction? After reviewing the literature and the Korean park planning process, we constructed a conceptual framework and formulated the research hypothesis. We obtained data through a questionnaire survey of 452 visitors at 8 urban parks in Daegu Metropolitan City in 2008, based on a stratified sampling method. After eliminating 96 unsuitable samples, we analyzed the data using descriptive statistics, Pearson's correlation analysis, and path analysis. We found that: 1) the direct and indirect effects of accessibility (ACC) on visitor satisfaction (VS) were 0.184 and 0.220, respectively, while the indirect effects of information (IFM) and promotion (PRM) on VS were 0.101 and 0.177, respectively; 2) the direct and indirect effects of service (SVR) on VS were 0.130 and 0.236, respectively, while the direct effect of attraction (ATT) was 0.698; 3) the direct effects of ACC, SVR, and ATT on VS were 0.184, 0.130, and 0.698, respectively, composing 67.96% of the causal effect, while the indirect effects of ACC, IFM, PRM, and SVR on VS were 0.220, 0.101, 0.177, and 0.236, respectively, composing 42.04% of the causal effect; 4) the magnitude of the causal effects of the supply factors on VS was, in order, ATT (39.98%), ACC (23.14%), SVR (20.96%), PRM (10.14%), and IFM (5.78%); and 5) the external supply factors ACC, IFM, and PRM compose 39.06% of the causal effect, while the internal supply factors SVR and ATT compose 69.94%. The results suggest that: 1) park marketing strategies and remedial directions for existing urban parks, intended to increase visitor satisfaction, should focus especially on IFM and PRM; 
and 2) the research approach and path analysis method adopted here are valid and highly useful for planning and evaluating other recreation areas. It is recommended that: 1) a structural equation model of the supply factors of urban parks be established in the future; and 2) supply factors be evaluated by type of urban park.
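The effect decomposition reported above follows the standard path-analysis rule: an indirect effect is the product of the coefficients along each mediated path, and the total causal effect is the direct effect plus the sum of the indirect ones. A minimal sketch with hypothetical standardized coefficients (not the study's estimates):

```python
def effect_decomposition(direct, paths_via_mediators):
    """Path-analysis effect decomposition.
    direct: direct path coefficient from a supply factor to satisfaction (VS).
    paths_via_mediators: list of (factor->mediator, mediator->VS) coefficient
    pairs; each indirect effect is the product along its path."""
    indirect = sum(a * b for a, b in paths_via_mediators)
    return {"direct": direct, "indirect": indirect, "total": direct + indirect}

# Hypothetical model: a factor affects VS directly (0.184) and indirectly
# through one mediator (0.40 to the mediator, 0.55 from mediator to VS).
eff = effect_decomposition(0.184, [(0.40, 0.55)])
print(eff)  # indirect = 0.40 * 0.55 ≈ 0.22
```

Ranking factors by their total effects, as in finding 4) above, is then a matter of sorting these decompositions.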

A Study on the Distribution of Startups and Influencing Factors by Generation in Seoul: Focusing on the Comparison of Young and Middle-aged (서울시 세대별 창업 분포와 영향 요인에 대한 연구: 청년층과 중년층의 비교를 중심으로)

  • Hong, Sungpyo;Lim, Hanryeo
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.16 no.3
    • /
    • pp.13-29
    • /
    • 2021
  • The purpose of this study was to analyze the spatial distribution and location factors of startups by generation (young and middle-aged) in Seoul. To this end, a research model was established covering industry, population, and startup-institution factors by generation in 424 administrative districts, using the Seoul Business Enterprise Survey (2018), which includes data on the age of entrepreneurs. Descriptive statistics were computed to confirm the frequency, mean, and standard deviation of startups by generation and of the major variables across the administrative districts of Seoul, and the spatial distribution and characteristics of startups by generation were analyzed through global and local spatial autocorrelation analysis. In particular, the spatial distribution of startups in Seoul was examined in depth by categorizing startups by major industry. An appropriate spatial regression model was then selected through the Lagrange multiplier test, and on this basis the location factors affecting startups by generation were analyzed. The main results are as follows. First, there was a significant difference in the spatial distribution of young and middle-aged startups. Young people started businesses in the belt-shaped area connecting Seocho·Gangnam-Yongsan-Mapo-Gangseo, while middle-aged people were relatively active in the southeastern region represented by Seocho, Gangnam, Songpa, and Gangdong. Second, startups by generation in Seoul showed varied spatial distributions according to the type of business. For both generations, the knowledge-intensive high-tech industries (ICT, professional services) centered on Seocho, Gangnam, Mapo, Guro, and Geumcheon, while manufacturing was concentrated in existing clusters. 
On the other hand, in the life-service industries, young people actively founded businesses near universities and cultural centers, while middle-aged startups were concentrated in new towns. Third, the factors influencing startup location differed by generation. For young people, high-tech industries, universities, cultural capital, and densely populated areas were significant factors, while for middle-aged people, professional-service areas, a low average age, and the concentration of startup-support institutions had a significant influence. These location factors also differed in influence across industries. The implications of the study are as follows. First, systematic startup support is needed that considers the characteristics of each region, industry, and generation in Seoul. As there are significant differences in startup regions and industries by generation, a customized startup support system reflecting these regional and industrial characteristics should be strengthened. Second, in terms of research methods, a follow-up study is needed that comprehensively considers culture and finance at the district (gu) level through data accumulation.
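Global spatial autocorrelation of the kind used above is typically measured with Moran's I, which compares deviations from the mean between neighbouring districts. A minimal sketch with a hypothetical four-district row of neighbours (a positive I indicates clustering of startup counts):

```python
def morans_i(values, weights):
    """Global Moran's I: I = (n / S0) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2.
    values: one observation (e.g. startup count) per district.
    weights: n x n spatial weight matrix, w[i][j] > 0 if i and j are neighbours."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)                      # total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / s0) * (num / den)

# Four districts on a line; neighbouring districts have similar counts,
# so Moran's I comes out positive (spatial clustering).
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
x = [10, 12, 30, 28]
print(round(morans_i(x, w), 3))  # 0.325
```

A checkerboard-like pattern (high next to low) would instead drive I negative, which is how the statistic separates clustering from dispersion before any spatial regression is fitted.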

Habitat Quality Analysis and Evaluation of InVEST Model Using QGIS - Conducted in 21 National Parks of Korea - (QGIS를 이용한 InVEST 모델 서식지질 분석 및 평가 - 21개 국립공원을 대상으로 -)

  • Jang, Jung-Eun;Kwon, Hye-Yeon;Shin, Hae-seon;Lee, Sang-Cheol;Yu, Byeong-hyeok;Jang, Jin;Choi, Song-Hyun
    • Korean Journal of Environment and Ecology
    • /
    • v.36 no.1
    • /
    • pp.102-111
    • /
    • 2022
  • Among protected areas, national parks are rich in biodiversity, and the ecosystem-service benefits they provide to humans are higher than those of other areas. Ecosystem service evaluation is used to manage the value of national parks on the basis of objective, scientific data. Ecosystem services are classified into four categories: supporting, provisioning, regulating, and cultural. The purpose of this study is to evaluate habitat quality, one of the supporting services. The InVEST Habitat Quality model was used for the analysis. The sensitivity coefficients and initial habitat values were reset to reflect prior studies and the actual conditions of protected areas. The habitat quality of 21 national parks, excluding Hallasan National Park, was analyzed and mapped. Habitat quality is scored between 0 and 1; the closer to 1, the more natural the habitat. Seoraksan and Taebaeksan National Parks (0.90), Jirisan and Odaesan National Parks (0.89), and Sobaeksan National Park (0.88) scored highest, in that order. Comparing area and habitat quality for the 18 national parks other than the coastal-marine parks showed that the larger the area, the higher the overall habitat quality. Comparing habitat quality across zones, values were high in the order of the park nature preservation zone, the park nature environment zone, the park cultural heritage zone, and the park village zone. Considering both the habitat quality analysis and the legal regulations for each use zone, it appears that the more artificial activities are restricted, the higher the habitat quality. This study is meaningful in that it analyzed the habitat quality of 21 national parks after readjusting the parameters to the situation of protected areas in Korea. 
The accurate data and mapping should make the results easy to understand intuitively, and they will be useful for future policy decisions regarding the development and preservation of protected areas.
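The InVEST Habitat Quality model scores each cell as Q = H·(1 − D^z/(D^z + k^z)), where H is habitat suitability, D the threat-driven degradation, k a half-saturation constant, and z a scaling exponent (2.5 by default). A minimal per-cell sketch with illustrative values (the actual model aggregates threat rasters to obtain D):

```python
def habitat_quality(habitat_suitability, degradation, k=0.05, z=2.5):
    """InVEST habitat quality for one cell: Q = H * (1 - D^z / (D^z + k^z)).
    habitat_suitability H in [0, 1]; degradation D >= 0 from weighted threats;
    k: half-saturation constant; z: scaling exponent (InVEST default 2.5)."""
    dz = degradation ** z
    return habitat_suitability * (1.0 - dz / (dz + k ** z))

# A pristine preservation-zone cell vs. a degraded park-village cell (illustrative):
print(habitat_quality(1.0, 0.0))          # 1.0: no threats, quality equals H
print(round(habitat_quality(0.8, 0.05), 2))  # 0.4: D equals k, so half of H is lost
```

Because D enters only through the saturating ratio, raising k makes the landscape more tolerant of degradation, which is why the study's recalibration of such coefficients to Korean protected areas matters for the absolute scores.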

Observation of Methane Flux in Rice Paddies Using a Portable Gas Analyzer and an Automatic Opening/Closing Chamber (휴대용 기체분석기와 자동 개폐 챔버를 활용한 벼논에서의 메탄 플럭스 관측)

  • Sung-Won Choi;Minseok Kang;Jongho Kim;Seungwon Sohn;Sungsik Cho;Juhan Park
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.4
    • /
    • pp.436-445
    • /
    • 2023
  • Methane (CH4) emissions from rice paddies are mainly observed using the closed chamber method or the eddy covariance method. In this study, a new observation technique combining a portable gas analyzer (Model LI-7810, LI-COR, Inc., USA) and an automatic opening/closing chamber (Model Smart Chamber, LI-COR, Inc., USA) was introduced based on the strengths and weaknesses of the existing measurement methods. A cylindrical collar was manufactured according to the maximum growth height of rice and used as an auxiliary measurement tool. All types of measured data can be monitored in real time, and CH4 flux is also calculated simultaneously during the measurement. After the measurement is completed, all the related data can be checked using the software called 'SoilFluxPro'. The biggest advantage of the new observation technique is that time-series changes in greenhouse gas concentrations can be immediately confirmed in the field. It can also be applied to small areas with various treatment conditions, and it is simpler to use and requires less effort for installation and maintenance than the eddy covariance system. However, there are also disadvantages in that the observation system is still expensive, requires specialized knowledge to operate, and requires a lot of manpower to install multiple collars in various observation areas and travel around them to take measurements. It is expected that the new observation technique can make a significant contribution to understanding the CH4 emission pathways from rice paddies and quantifying the emissions from those pathways.
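A closed-chamber flux, whether computed by SoilFluxPro or by hand, is commonly derived from the slope of the gas mole fraction over the closure period, scaled by the chamber's volume-to-area ratio and the molar density of air from the ideal gas law. A minimal sketch (the slope and chamber dimensions are hypothetical, not values from the study):

```python
R = 8.314  # universal gas constant, J mol^-1 K^-1

def chamber_flux(dcdt_ppb_s, volume_m3, area_m2,
                 pressure_pa=101325.0, temp_k=298.15):
    """Closed-chamber flux estimate.
    dcdt_ppb_s: slope of CH4 mole fraction vs. time inside the chamber
    (ppb s^-1), typically from a linear fit over the closure period.
    Returns flux in nmol CH4 m^-2 s^-1 (ppb of a mole fraction maps to nmol/mol)."""
    molar_density = pressure_pa / (R * temp_k)  # mol air per m^3, ideal gas law
    return dcdt_ppb_s * molar_density * volume_m3 / area_m2

# e.g. CH4 rising 5 ppb per second in a 30 L chamber over a 0.1 m^2 collar:
print(round(chamber_flux(5.0, 0.030, 0.1), 1))  # 61.3
```

The collar height enters through the effective volume, which is why the study manufactured collars matched to the maximum growth height of the rice.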

Estimation of Chlorophyll-a Concentration in Nakdong River Using Machine Learning-Based Satellite Data and Water Quality, Hydrological, and Meteorological Factors (머신러닝 기반 위성영상과 수질·수문·기상 인자를 활용한 낙동강의 Chlorophyll-a 농도 추정)

  • Soryeon Park;Sanghun Son;Jaegu Bae;Doi Lee;Dongju Seo;Jinsoo Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.655-667
    • /
    • 2023
  • Algal bloom outbreaks are frequently reported around the world, and serious water pollution problems arise every year in Korea. The aquatic ecosystem must be protected through continuous management and rapid response. Many studies use satellite images to estimate the concentration of chlorophyll-a (Chl-a), an indicator of algal bloom occurrence; machine learning models have recently been adopted because spectral characteristics and atmospheric-correction errors, which vary by water system, make it difficult to calculate Chl-a accurately. The factors driving algal blooms must be considered alongside the satellite spectral indices. This study therefore constructed a dataset combining water quality, hydrological, and meteorological factors with Sentinel-2 images. Two representative ensemble models, random forest and extreme gradient boosting (XGBoost), were used to predict Chl-a concentrations at eight weirs on the Nakdong River over the past five years. The R-squared score (R2), root mean square error (RMSE), and mean absolute error (MAE) were used as evaluation metrics; XGBoost achieved an R2 of 0.80, an RMSE of 6.612, and an MAE of 4.457. Shapley additive explanations (SHAP) analysis showed that the water quality factors (suspended solids, biochemical oxygen demand, and dissolved oxygen) and the band ratio using red-edge bands were highly important in both models. The varied input data were confirmed to improve model performance, and the approach appears applicable to algal bloom detection in Korea and abroad.
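The evaluation metrics reported above can be computed directly from paired observations and predictions; a minimal pure-Python sketch with toy Chl-a values (hypothetical numbers; the study's random forest and XGBoost models are not reproduced here):

```python
import math

def r2_rmse_mae(y_true, y_pred):
    """R-squared, root mean square error, and mean absolute error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual SS
    ss_tot = sum((t - mean) ** 2 for t in y_true)               # total SS
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    return r2, rmse, mae

# Toy Chl-a observations (mg/m^3) against model predictions:
y_true = [5.0, 20.0, 35.0, 50.0]
y_pred = [7.0, 18.0, 36.0, 47.0]
print(tuple(round(m, 3) for m in r2_rmse_mae(y_true, y_pred)))  # (0.984, 2.121, 2.0)
```

RMSE penalizes large misses more than MAE (note 2.121 vs. 2.0 here), which is why the study reports both alongside R2.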

The Relationship between Financial Constraints and Investment Activities : Evidenced from Korean Logistics Firms (우리나라 물류기업의 재무제약 수준과 투자활동과의 관련성에 관한 연구)

  • Lee, Sung-Yhun
    • Journal of Korea Port Economic Association
    • /
    • v.40 no.2
    • /
    • pp.65-78
    • /
    • 2024
  • This study investigates the relationship between financial constraints and investment activities in Korean logistics firms. A sample of 340 companies in the transportation sector, per the 2021 KSIC, was selected for analysis. Financial data obtained from DART were used to compile a panel dataset spanning 1996 to 2021, totaling 6,155 observations. The research model was validated, and tests for heteroscedasticity and autocorrelation in the error terms were conducted in view of the panel data structure. The relationship between previous-period and current investment activities was analyzed using the panel Generalized Method of Moments (GMM). The results indicate that Korean logistics firms tend to increase investment activities as their financial constraints ease. Specifically, a positive relationship between the level of financial constraint and investment activity was consistently observed across all models. These findings suggest that investment decisions vary with the financial constraints firms face, in line with previous research indicating that the investment activities of constrained firms are subdued. Moreover, the model testing whether past investment affects current investment showed that the previous period's investment influences current investment, while investment from two periods earlier showed no significant relationship. Among the control variables, firm size and cash flow showed positive relationships, while debt size and asset diversification showed negative ones. 
Thus, larger firms and smoother cash flows were associated with more proactive investment activities, while high debt levels and extensive asset diversification appeared to constrain investment in logistics companies. These results can be interpreted as showing that, under financial constraints, internal funding sources such as cash flow relate positively to investment, whereas external capital sources such as debt relate negatively, consistent with empirical findings from previous research.

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much of it. To solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, effectively and efficiently ranking search results will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page was estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and more so if the referring pages are themselves important. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, it uses two kinds of scores: the authority score and the hub score. A page with a high authority score is an authority on a given topic and is referred to by many pages; a page with a high hub score links to many authoritative pages. Link-structure-based ranking has thus played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. Since the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of the WWW. The WWW can be considered one huge class, a collection of Web pages with a single recursive 'refers to' property corresponding to hyperlinks, whereas the Semantic Web encompasses many kinds of classes and properties; ranking methods used in the WWW must therefore be modified to reflect this complexity. Previous research addressed the problem of ranking query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources, defining the objectivity score and the subjectivity score of a resource, which correspond to Kleinberg's authority and hub scores, respectively. They focused on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the property linking them. A node with a high objectivity score is the object of many RDF triples; a node with a high subjectivity score is the subject of many. They developed several kinds of Semantic Web systems to validate their technique and reported experimental results verifying its applicability. Despite these efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge of a given domain: the ratio of links to nodes must be high, or resources must be described in sufficient detail, for the algorithm to work properly. 
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected score higher than pages that are more important but sparsely connected, remains problematic. Third, a resource may score highly not because it is actually important but simply because it is very common and therefore has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that solves the problems of the previous studies. Our method is based on a class-oriented approach. In contrast to the predicate-oriented approach of the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are meant to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm resolves the TKC effect and sheds light on the other limitations reported by the previous research. In addition, we propose two ways to incorporate datatype properties, which had not been employed even when they bear on resource importance. We designed an experiment to show the effectiveness of the proposed algorithm and the validity of the ranking results, which had not been attempted in previous research, and conducted a comprehensive mathematical analysis, overlooked in previous research, which enabled us to simplify the calculation procedure. 
Finally, we summarize our experimental results and discuss further research issues.
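The PageRank idea summarized above, a node's score growing with the scores of the nodes referring to it, can be sketched as a power iteration over the link graph (this illustrates plain PageRank on a toy graph, not the class-oriented algorithm the paper proposes):

```python
def pagerank(links, d=0.85, iters=50):
    """Power-iteration PageRank over a directed graph.
    links: dict mapping each node to the list of nodes it links to.
    d: damping factor (probability of following a link vs. teleporting)."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}   # teleportation share
        for u, outs in links.items():
            if not outs:                           # dangling node: spread evenly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:                                  # split rank over out-links
                for v in outs:
                    new[v] += d * rank[u] / len(outs)
        rank = new
    return rank

# C is referred to by both A and B, so it ends up with the highest rank.
r = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(r, key=r.get))  # C
```

On an RDF graph the same recursion would treat every property identically, which is exactly the uniformity the paper's property weights and class-oriented approach are designed to overcome.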

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.59-88
    • /
    • 2008
  • Digital convergence means integration across industries, technologies, and contents; in marketing, it usually brings the creation of new types of products and services on a base of digital technology as digitalization progresses in electro-communication industries, including the telecommunication, home appliance, and computer industries. Digital convergence appears not only in devices such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required to produce, modify, distribute, and reproduce information. Convergence in contents started around 1990; convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), launched in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. These opposite expectations exist because DMB arose not from customer needs but from technology development, so customers may have a hard time interpreting what DMB really means. Time is critical for a high-tech product like DMB, because another product with the same function, built on a different technology, can replace it within a short period. If DMB is not positioned well in customers' minds quickly, products such as WiBro, IPTV, or HSDPA could replace it before it even spreads. A positioning strategy is therefore critical for the success of DMB. To craft a correct positioning strategy, one needs to understand how consumers interpret DMB and how that interpretation can be changed through communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how advertising changes that perception. More specifically, the paper segments consumers into subgroups based on their DMB perceptions and compares their characteristics in order to understand how they perceive DMB. 
We then expose them to different printed ads whose messages guide consumers to think of DMB in a specific way, either as a cellular phone or as a personal TV. Research question 1: segment consumers according to their perceptions of DMB and compare the characteristics of the segments. Research question 2: compare perceptions of DMB after an ad that induces categorization of DMB in a given direction for each segment. If one can understand and predict the direction in which consumers will perceive a new product, a firm can select target customers more easily. We segment consumers by perception and analyze their characteristics to find variables that can influence perception, such as prior experience, usage, or habit; marketers can then use these variables to identify target customers and predict their perceptions. Knowing how customers' perceptions change in response to ad messages allows a proper communication strategy to be constructed; in particular, information about the segments helps develop an efficient ad strategy for each segment's prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumers' perceptions and characteristics. Based on it, the paper segments consumers into two groups: one perceives DMB as similar to a cellular phone, the other as similar to a TV. We compare the characteristics of the two segments to find out why they perceive DMB differently. Next, we expose subjects to two kinds of ads: one describes DMB as a cellular phone, the other as a personal TV. When the ads are shown, we do not yet know each subject's prior perception, that is, which segment the subject belongs to; however, we analyze the ad's effect separately for each segment. In the research design, the final observation investigates the ad effect. 
Perception before the ad is compared with perception after it, for each segment and for each ad. For the segment that perceives DMB as similar to TV, an ad describing DMB as a cellular phone could change the prior perception, while an ad describing DMB as a personal TV could reinforce it. For data collection, subjects were selected from undergraduate students because they have basic knowledge of most digital equipment and an open attitude toward new products and media; the total number of subjects is 240. To measure perception of DMB, we use an indirect measurement: comparison with similar digital products. From a pre-survey of students we selected PDA, car TV, cellular phone, MP3 player, TV, and PSP as comparison products. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital devices were asked, and their similarities and perceived characteristics were measured. Two kinds of manipulated color-printed ads were then distributed, and the similarities and perceived characteristics of DMB were re-measured. Finally, purchase intention, ad attitude, manipulation checks, and demographic variables were asked, and subjects received a small gift for participating. The stimuli are color-printed A4 advertisements, produced after several pretests with ad professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB, using the similarity between DMB and a cellular phone and the similarity between DMB and TV: subjects whose first measure is less than the second are classified into segment A, characterized as perceiving DMB like TV; the others form segment B, perceiving DMB like a cellular phone. 
Discriminant analysis of these groups on their usage and attitude characteristics shows that segment A knows much about DMB and uses many digital devices, while segment B, which thinks of DMB as a cellular phone, knows little about DMB and is unfamiliar with other digital devices. Consumers with more knowledge perceive DMB as similar to TV because the launch advertising for DMB led consumers to think of it as TV, whereas consumers with less interest in digital products know little about the DMB ads and think of DMB as a cellular phone. To investigate perceptions of DMB and the other digital devices, we apply Proxscal, a multidimensional scaling (MDS) technique in the SPSS statistical package. In the first step, subjects are presented with the 21 pairs of the 7 digital devices and rate their similarity on a 7-point scale; for each segment, the judgments are averaged into a similarity matrix. Second, Proxscal analyses of segments A and B are performed. In the third stage, similarity judgments between DMB and the other devices are obtained after ad exposure. Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB', added to the matrices from the first step, and analyzed with Proxscal to check the positional difference between DMB and 'after DMB'. The results show that in the map for segment A, which perceives DMB as similar to TV, DMB is positioned closer to TV than to the cellular phone, as expected; in the map for segment B, DMB is positioned closer to the cellular phone than to TV, as expected. Stress values and R-squared are acceptable. The post-stimulus results show that the ad bends the perception of DMB toward the cellular phone when the cellular-phone-like ad is exposed, and moves DMB toward car TV, the more personalized device, when the TV-like ad is exposed. This holds consistently for both segments A and B. 
Furthermore, applying correspondence analysis to the same data yields almost the same results. The paper answers its two main research questions: first, perception of a new product is formed mainly from prior experience; second, advertising is effective in changing and reinforcing perception. We also extend perception change to purchase intention: purchase intention is highest when the ad reinforces the original perception, and the ad framing DMB as TV produces the lowest intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current approach cannot provide a statistical test of perceptual change, since classical MDS models such as Proxscal and correspondence analysis are not probability models; a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. The advertising messages should be developed more rigorously from theoretical and managerial perspectives, and the experimental procedure could be improved for more realistic data collection, for example through web-based experiments, real product stimuli, multimedia presentation, or displaying products together in a simulated shop. In addition, demand and social-desirability threats to internal validity could influence the results; to handle these threats, the results of the model-intended advertising could be compared with other "pseudo" advertising. One could also vary the level of innovativeness to check whether it changes the results (cf. Moon 2006). Finally, creating a hypothetical product that is genuinely innovative and new would yield a blank impression state and allow impression formation to be studied in a more rigorous way.
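Proxscal performs non-metric MDS; as a simpler illustration of the same idea, embedding similarity judgments into a low-dimensional perceptual map, classical (Torgerson) MDS can be sketched as follows. The dissimilarity matrix is hypothetical (four devices placed at the corners of a 3-by-4 rectangle, so the embedding recovers the distances exactly):

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) MDS: embed items in k dimensions so pairwise
    Euclidean distances approximate the input dissimilarities."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centred squared distances
    vals, vecs = np.linalg.eigh(b)               # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]             # keep the k largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical dissimilarities among 4 devices (e.g. DMB, phone, TV, MP3):
d = np.array([[0., 3., 4., 5.],
              [3., 0., 5., 4.],
              [4., 5., 0., 3.],
              [5., 4., 3., 0.]])
coords = classical_mds(d, k=2)
# distances between embedded points reproduce the input dissimilarities
print(round(float(np.linalg.norm(coords[0] - coords[1])), 1))  # 3.0
```

With real similarity ratings the reproduction is only approximate, and the stress value the abstract mentions quantifies exactly that residual misfit.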

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH family. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial-intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with an MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models on the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled daily KOSPI 200 closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days served as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecast KOSPI 200 Index return volatility, and the MSE metric favors the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis.
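The MLE step described above can be sketched for the simplest symmetric case, GARCH(1,1) with normal errors, where the conditional variance follows sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]. The simulated return series and starting values below are illustrative, not the paper's KOSPI 200 data:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_nll(params, r):
    """Negative log-likelihood of a GARCH(1,1) with standard normal innovations."""
    omega, alpha, beta = params
    n = len(r)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(r)                          # initialize at sample variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + r ** 2 / sigma2)

# Simulate a GARCH(1,1) path with known parameters (omega, alpha, beta)
rng = np.random.default_rng(0)
true = (0.1, 0.1, 0.8)
n = 2000
r = np.empty(n)
s2 = 1.0
for t in range(n):
    s2 = true[0] + true[1] * (r[t - 1] ** 2 if t else 1.0) + true[2] * s2
    r[t] = np.sqrt(s2) * rng.standard_normal()

# MLE: minimize the negative log-likelihood under positivity bounds
res = minimize(garch11_nll, x0=(0.05, 0.05, 0.9), args=(r,),
               bounds=[(1e-6, None), (1e-6, 1), (1e-6, 1)], method="L-BFGS-B")
omega_hat, alpha_hat, beta_hat = res.x
```

The fitted persistence `alpha_hat + beta_hat` should recover a value near the true 0.9; asymmetric variants such as E-GARCH or GJR-GARCH only change the variance recursion inside `garch11_nll`.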
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. Its entry rules are as follows: if tomorrow's forecasted volatility is higher than today's, buy volatility today; if lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values, which is somewhat unrealistic because historical volatility itself cannot be traded; our simulation results remain meaningful, however, because the Korea Exchange introduced a tradable volatility futures contract in November 2014. In the test period, the trading systems with SVR-based GARCH models show higher returns than their MLE-based counterparts: the profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH returns +150.2% versus +526.4% for SVR-based S-GARCH; MLE-based asymmetric E-GARCH returns -72% versus +245.6% for its SVR counterpart; and MLE-based asymmetric GJR-GARCH returns -98.7% versus +126.3%. The linear kernel yields higher trading returns than the radial kernel. The best SVR-based IVTS performance is +526.4%, against +150.2% for the best MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models rely solely on SVR; other artificial-intelligence models should be explored in search of better performance. We also do not consider costs incurred in trading, including brokerage commissions and slippage.
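The SVR side of the comparison can be sketched with scikit-learn's `SVR` over the three kernels the paper tests. This is an assumed stand-in, not the paper's implementation: here the SVR regresses tomorrow's squared return on GARCH-style inputs (yesterday's squared return and a rolling variance proxy), and the return series is simulated:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
r = rng.standard_normal(500) * 0.01                # toy daily returns

# GARCH-style regressors: lagged squared return and a rolling variance proxy
r2 = r ** 2
proxy = np.convolve(r2, np.ones(5) / 5, mode="same")
X = np.column_stack([r2[:-1], proxy[:-1]])
y = r2[1:]

# Train on the first 400 days, evaluate out-of-sample MSE on the rest,
# mirroring the paper's train/test split idea (1,187 / 300 days)
fits = {}
for kernel in ("linear", "poly", "rbf"):           # kernels compared in the paper
    model = SVR(kernel=kernel, C=1.0, epsilon=1e-5).fit(X[:400], y[:400])
    pred = model.predict(X[400:])
    fits[kernel] = np.mean((pred - y[400:]) ** 2)
```

Comparing `fits` across kernels reproduces the shape of the paper's kernel comparison; on real KOSPI 200 data the paper finds the linear kernel strongest and the polynomial kernel notably weak.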
The IVTS trading performance is likewise unrealistic in that we use historical volatility values as the trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can provide better information for stock market investors.
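The IVTS entry rules quoted above reduce to a simple position rule, which can be sketched directly; the volatility series below is hypothetical, and the P&L convention (position times the next day's change in historical volatility) is an illustrative assumption:

```python
import numpy as np

def ivts_positions(forecast, today):
    """IVTS entry rules: buy volatility when tomorrow's forecast is above
    today's level, sell when below, otherwise hold the existing position."""
    pos = np.zeros(len(forecast), dtype=int)
    for t in range(len(forecast)):
        if forecast[t] > today[t]:
            pos[t] = 1                              # buy volatility
        elif forecast[t] < today[t]:
            pos[t] = -1                             # sell volatility
        else:
            pos[t] = pos[t - 1] if t else 0         # no direction change: hold
    return pos

# Hypothetical historical-volatility series (the paper trades these values)
vol = np.array([1.0, 1.2, 0.9, 0.9, 1.1])
forecast = vol[1:]                                  # pretend-perfect 1-day-ahead forecast
pos = ivts_positions(forecast, vol[:-1])
pnl = pos * np.diff(vol)                            # daily P&L of the volatility position
```

With a perfect forecast every directional move is captured, so `pnl` is non-negative each day; the paper's realized returns differ across models precisely because the GARCH forecasts get the direction right at different rates (47.5-50.0% for MLE versus 51.8-59.7% for SVR).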