• Title/Summary/Keyword: Vector Flow


The Macroeconomic Impacts of Korean Elections and Their Future Consequences (선거(選擧)의 거시경제적(巨視經濟的) 충격(衝擊)과 파급효과(波及效果))

  • Shim, Sang-dal;Lee, Hang-yong
    • KDI Journal of Economic Policy
    • /
    • v.14 no.1
    • /
    • pp.147-165
    • /
    • 1992
  • This paper analyzes the macroeconomic effects of elections on the Korean economy and their future ramifications. It measures the shocks that elections impart to the Korean economy by averaging the sample forecast errors from the four major elections held in the 1980s. The seven-variable Bayesian Vector Autoregression model, which includes the Monetary Base, Industrial Production, Consumption, Consumer Price, Exports, and Investment, is based on quarterly time series data starting from 1970 and is updated every quarter before forecasts are made for the next quarter. Because this updating of coefficients reflects in part the rapid structural changes of the Korean economy, the study can capture the shock effect of elections, which is not possible when using election dummies in a fixed-coefficient model. In past elections, especially those held in the 1980s, $M_2$ did not show any particular movement, but currency and base money increased during the quarter in which the election was held, and the increment was partly recalled in the next quarter. Interest rates, as measured by corporate bond yields, fell during the election quarter and then rose in the following quarter, somewhat contrary to the general concern that interest rates rise during election periods. Manufacturing employment fell in the quarter of the election because workers turned into campaigners. This decline in employment, combined with the voting holiday, produces a sizeable decline in industrial production during the quarter in which elections are held, but production catches up in the next quarter and sometimes more than offsets the disruption caused during the election quarter.
The major shocks to prices occur in the quarter before the election, reflecting the expectational effect and the relaxation of government price controls. When we simulate the impulse responses of the VAR model, imposing the shocks measured in past elections on each election to be held in 1992 and assuming that the 1992 elections will affect the economy in the same manner as the 1980s elections did, 1992 is expected to see a sizeable election-driven increase in the monetary base, and price increase pressure will be amplified substantially. On the other hand, the consumption increase due to elections is expected to be relatively small, and production will not decrease. Despite increased liquidity, a large portion of the liquidity in circulation being used as election funds will distort the flow of funds and aggravate the fund shortage, causing investment in plant and equipment and construction activity to stagnate. These effects will be greatly amplified if elections for the heads of local governments are also held this year. If mayoral and gubernatorial elections are held after the National Assembly elections, their effect on prices and investment will be approximately double what it would have been had only National Assembly and presidential elections been held. Even when mayoral and gubernatorial elections are held at the same time as the National Assembly elections, the elections of local government heads are shown to add substantial effects to the economy for the year. The above results are based on the assumption that this year's elections will shock the economy in the same manner as past elections. However, elections in consecutive quarters do not give the economy a chance to pause and recuperate from the previous election. This year's elections may therefore have greater effects on prices and production than the model's simulations show, because campaigners' return to industry may be delayed.
Therefore, we may not see a rapid recall of money after elections. In view of the surge in the monetary base and price escalation in the periods before and after elections, economic management in 1992 should place its first priority on controlling the monetary aggregate, in particular, stabilizing the growth of the monetary base.
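The core mechanics of the approach above, estimating a vector autoregression and tracing how a one-quarter shock propagates, can be sketched in a few lines. This is a minimal, non-Bayesian VAR(1) on synthetic data, not the paper's seven-variable quarterly model with coefficient updating; the two variables are illustrative stand-ins.

```python
import numpy as np

def fit_var1(Y):
    """OLS estimate of a VAR(1): Y_t = c + A @ Y_{t-1} + e_t.
    Y has shape (T, k); returns intercept c (k,) and coefficient matrix A (k, k)."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])  # lagged regressors plus constant
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T

def impulse_responses(A, shock, horizon):
    """Propagate a one-off shock through the estimated dynamics for `horizon` periods."""
    responses = [shock]
    for _ in range(horizon):
        responses.append(A @ responses[-1])
    return np.array(responses)

# Toy two-variable system (stand-ins for, e.g., monetary base and prices).
rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1], [0.2, 0.4]])
Y = np.zeros((200, 2))
for t in range(1, 200):
    Y[t] = A_true @ Y[t - 1] + rng.normal(0, 0.1, 2)

c, A_hat = fit_var1(Y)
irf = impulse_responses(A_hat, np.array([1.0, 0.0]), horizon=8)
# For a stable system, the responses decay as the shock is absorbed.
```

Because the system is stable (both eigenvalues of the coefficient matrix lie inside the unit circle), the impulse responses die out; a "recall" of an election-quarter shock would show up as a sign reversal in the next period's response.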


The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia Pacific Journal of Information Systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than by present and past prices; since new information is unpredictable, stock prices will follow a random walk. Despite these theories, Schumaker [2010] asserted that people keep trying to predict the stock market by using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include Percolation Methods, Log-Periodic Oscillations, and Wavelet Transforms to model future prices. Examples of artificial intelligence approaches that deal with optimization and machine learning are Genetic Algorithms, Support Vector Machines (SVM), and Neural Networks. Statistical approaches typically predict the future by using past stock market data. Recently, financial engineers have started to predict stock price movement patterns by using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth on SNS, people share product usage experiences, subjective feelings, and the accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions expressed on SNS. There have been many studies on opinion mining from Web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effects of firms' SNS exposures on stock prices in Korea. Similarly to Bollen et al.
[2011], we empirically analyze the impact of SNS exposures on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions on Twitter and blogs by using natural language processing and analysis tools. It collects the sentences circulated on Twitter in real time, breaks them down into word units, and then extracts keywords. In this study, we classify firms' SNS exposures into two groups: positive and negative. To test the correlation and causation relationship between SNS exposures and stock price returns, we first collect 252 firms' stock prices and the KRX100 index from the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we perform a Granger causality test to see the direction of causation between them. The result is that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant. This means that the impact of positive mentions is statistically bigger than the impact of negative mentions. We also investigate whether the impacts are moderated by industry type and firm size. We find that the SNS exposure impacts are bigger for IT firms than for non-IT firms, and bigger for small firms than for large firms. The results of the Granger causality test show that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant.
Therefore, the relationship between SNS exposures and stock prices exhibits unidirectional causality: the more a firm is exposed on SNS, the more its stock price is likely to increase, while stock price changes do not appear to cause more SNS mentions.
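The Granger test behind the result above asks whether lagged values of one series improve the prediction of another beyond its own lags. A minimal pure-numpy version of the F-test (an illustrative sketch on synthetic series, not the paper's actual data, lag order, or test configuration) looks like this:

```python
import numpy as np

def granger_f(y, x, lags=2):
    """F-statistic for H0: 'x does not Granger-cause y'.
    Compares the residual sum of squares of y regressed on its own lags
    (restricted) against own lags plus lags of x (unrestricted)."""
    T = len(y)
    def lag(s):  # columns s_{t-1} ... s_{t-lags}, for t = lags .. T-1
        return np.column_stack([s[lags - j: T - j] for j in range(1, lags + 1)])
    target = y[lags:]
    X_r = np.hstack([np.ones((T - lags, 1)), lag(y)])  # restricted model
    X_u = np.hstack([X_r, lag(x)])                     # unrestricted model
    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(X_r), rss(X_u)
    df_num, df_den = lags, T - lags - X_u.shape[1]
    return ((rss_r - rss_u) / df_num) / (rss_u / df_den)

# Toy series: x leads y by one step, so x should Granger-cause y but not vice versa.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.empty(300)
y[0] = 0.0
for t in range(1, 300):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

f_xy = granger_f(y, x)  # large: lags of x explain y
f_yx = granger_f(x, y)  # near 1: lags of y add nothing for x
```

A large F in one direction and a small F in the other is exactly the unidirectional pattern the abstract reports for SNS exposures and stock returns.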

Predicting stock movements based on financial news with systematic group identification (시스템적인 군집 확인과 뉴스를 이용한 주가 예측)

  • Seong, NohYoon;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.1-17
    • /
    • 2019
  • Because stock price forecasting is an important issue both academically and practically, research on stock price prediction has been actively conducted. Stock price forecasting research is classified into studies using structured data and studies using unstructured data. With structured data such as historical stock prices and financial statements, past studies usually used technical analysis and fundamental analysis approaches. In the big data era, the amount of information has increased rapidly, and artificial intelligence methodologies that can find meaning by quantifying textual information, an unstructured data type that accounts for a large share of that information, have developed rapidly. With these developments, many attempts are being made to predict stock prices from online news by applying text mining. The methodology adopted in many papers is to forecast a target company's stock price using news about that company. However, according to previous research, not only a target company's own news affects its stock price; news about related companies can also affect it. Finding highly relevant companies is not easy, though, because of market-wide effects and random signals. Thus, existing studies have found highly relevant companies based primarily on pre-determined international industry classification standards. According to recent research, however, the Global Industry Classification Standard has varying homogeneity within its sectors, which means that forecasting stock prices by taking all companies in a sector together, without narrowing to the truly relevant ones, can adversely affect predictive performance. To overcome this limitation, we first used random matrix theory together with text mining for stock prediction. When the dimension of the data is large, classical limit theorems are no longer suitable, because statistical efficiency is reduced.
Therefore, a simple correlation analysis in the financial market does not reveal the true correlation. To solve this issue, we adopt random matrix theory, which is mainly used in econophysics, to remove market-wide effects and random signals and find the true correlation between companies. With the true correlation, we perform cluster analysis to find relevant companies. Based on the clustering, we use a multiple kernel learning algorithm, an ensemble of support vector machines, to incorporate the effects of the target firm and its relevant firms simultaneously. Each kernel is assigned to predict stock prices with features from the financial news of the target firm or its relevant firms. The results of this paper are as follows. (1) Following the existing research flow, we confirmed that using news from relevant companies is an effective way to forecast stock prices. (2) When looking for relevant companies, looking in the wrong way can lower AI prediction performance. (3) The proposed approach with random matrix theory shows better performance than previous studies when cluster analysis is performed on the true correlation after removing market-wide effects and random signals. The contribution of this study is as follows. First, this study shows that random matrix theory, used mainly in econophysics, can be combined with artificial intelligence to produce good methodologies. This suggests that it is important not only to develop AI algorithms but also to adopt theory from physics. This extends existing research that integrated artificial intelligence with complex systems theory through transfer entropy. Second, this study stresses that finding the right companies in the stock market is an important issue. This suggests that it is important not only to study artificial intelligence algorithms, but also to theoretically adjust the input values.
Third, we confirmed that firms classified together under the Global Industry Classification Standard (GICS) may have low relevance, and suggested that it is necessary to define relevance theoretically rather than simply taking it from the GICS.
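The random-matrix filtering step can be illustrated with a small numpy sketch: eigenvalues of the sample correlation matrix that fall below the Marchenko-Pastur upper edge are treated as noise, and only the remaining eigenmodes are kept as "true" correlation. The data here are synthetic (a group of firms sharing one common factor), not the paper's news-based features, and the simple edge formula below ignores finite-size corrections:

```python
import numpy as np

def rmt_filter(returns):
    """Filter a correlation matrix with the Marchenko-Pastur bound (sketch).
    Eigenvalues below lam_max are consistent with pure noise for an (T x N)
    i.i.d. sample; eigenmodes above it carry genuine co-movement."""
    T, N = returns.shape
    C = np.corrcoef(returns, rowvar=False)
    lam_max = (1 + np.sqrt(N / T)) ** 2           # Marchenko-Pastur upper edge
    w, V = np.linalg.eigh(C)
    keep = w > lam_max                            # signal eigenmodes
    C_f = (V[:, keep] * w[keep]) @ V[:, keep].T   # reconstruct signal part only
    np.fill_diagonal(C_f, 1.0)
    return C, C_f, int(keep.sum())

# Toy data: 50 'firms' over 500 periods; the first 10 share a common factor.
rng = np.random.default_rng(2)
T, N = 500, 50
R = rng.normal(size=(T, N))
R[:, :10] += rng.normal(size=(T, 1))

C, C_f, n_signal = rmt_filter(R)
# The filtered matrix keeps the factor group's correlation and suppresses
# the spurious pairwise correlations among the unrelated firms.
```

Cluster analysis would then be run on `C_f` rather than `C`, so that groups reflect genuine co-movement rather than sampling noise.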

Estimation of Mean Surface Current and Current Variability in the East Sea using Surface Drifter Data from 1991 to 2017 (1991년부터 2017년까지 표층 뜰개 자료를 이용하여 계산한 동해의 평균 표층 해류와 해류 변동성)

  • PARK, JU-EUN;KIM, SOO-YUN;CHOI, BYOUNG-JU;BYUN, DO-SEONG
    • The Sea: Journal of the Korean Society of Oceanography
    • /
    • v.24 no.2
    • /
    • pp.208-225
    • /
    • 2019
  • To understand the mean surface circulation and surface currents in the East Sea, trajectories of surface drifters passed through the East Sea from 1991 to 2017 were analyzed. By analyzing the surface drifter trajectory data, the main paths of surface ocean currents were grouped and the variation in each main current path was investigated. The East Korea Warm Current (EKWC) heading northward separates from the coast at $36{\sim}38^{\circ}N$ and flows to the northeast until $131^{\circ}E$. In the middle (from $131^{\circ}E$ to $137^{\circ}E$) of the East Sea, the average latitude of the currents flowing eastward ranges from 36 to $40^{\circ}N$ and the currents meander with large amplitude. When the average latitude of the surface drifter paths was in the north (south) of $37.5^{\circ}N$, the meandering amplitude was about 50 (100) km. The most frequent route of surface drifters in the middle of the East Sea was the path along $37.5-38.5^{\circ}N$. The surface drifters, which were deployed off the coast of Vladivostok in the north of the East Sea, moved to the southwest along the coast and were separated from the coast to flow southeastward along the cyclonic circulation around the Japan Basin. And, then, the drifters moved to the east along $39-40^{\circ}N$. The mean surface current vector and mean speed were calculated in each lattice with $0.25^{\circ}$ grid spacing using the velocity data of surface drifters which passed through each lattice. The current variance ellipses were calculated with $0.5^{\circ}$ grid spacing. Because the path of the EKWC changes every year in the western part of the Ulleung Basin and the current paths in the Yamato Basin keep changing with many eddies, the current variance ellipses are relatively large in these region. We present a schematic map of the East Sea surface current based on the surface drifter data. 
The significance of this study is that the surface ocean circulation of the East Sea, which has been mainly studied by numerical model simulations and the sea surface height data obtained from satellite altimeters, was analyzed based on in-situ Lagrangian observational current data.
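Computing a gridded mean current from Lagrangian drifter data, as in the 0.25° lattice above, amounts to binning each velocity sample into its cell and averaging. A minimal sketch with made-up sample points (the cell indices and velocities below are illustrative, not the study's data):

```python
import numpy as np

def grid_mean_current(lon, lat, u, v, dx=0.25):
    """Average drifter velocity samples on a dx-degree lattice (sketch).
    Each (lon, lat) sample with east/north velocity components (u, v) is
    assigned to a cell; the cell's mean current is the average of its samples."""
    cells = {}
    for lo, la, uu, vv in zip(lon, lat, u, v):
        key = (int(np.floor(lo / dx)), int(np.floor(la / dx)))
        cells.setdefault(key, []).append((uu, vv))
    return {k: np.mean(vals, axis=0) for k, vals in cells.items()}

# Three synthetic velocity samples; the first two fall in the same 0.25° cell.
lon = [130.10, 130.20, 131.10]
lat = [37.10, 37.10, 38.10]
u   = [0.20, 0.40, 0.10]   # eastward component, m/s
v   = [0.00, 0.00, 0.30]   # northward component, m/s

grid = grid_mean_current(lon, lat, u, v)
# The shared cell's mean current vector is the average of its two samples.
```

The variance ellipses mentioned in the abstract would come from the same binning, but computing the 2x2 covariance of (u, v) within each (coarser, 0.5°) cell instead of the mean.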

Bankruptcy Forecasting Model using AdaBoost: A Focus on Construction Companies (적응형 부스팅을 이용한 파산 예측 모형: 건설업을 중심으로)

  • Heo, Junyoung;Yang, Jin Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2014
  • According to the 2013 construction market outlook report, liquidations of construction companies are expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries, yet because of the industry's distinctive capital structure and debt-to-equity ratios, they are also harder to forecast. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. Economic cycles strongly influence construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, can place a greater burden on the banks that lend to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for many years in various ways. However, these models were intended for companies in general, and they may not be appropriate for forecasting bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. With this unique capital structure, a model used to judge the financial risk of companies in general can be difficult to apply to the construction industry.
The Altman Z-score, first published in 1968, is a commonly used bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt using a simple formula, classifying the result into three categories and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while those in the "safe" category have a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as to whether they belong to the bankruptcy-risk group or the safe group. The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM). There are also many hybrid studies combining these models.
Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies based on company size. We classified construction companies into three groups, large, medium, and small, based on each company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has more predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
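The AdaBoost procedure relied on above can be sketched in pure numpy with one-feature decision stumps: each round fits the stump that minimizes weighted classification error, then reweights the samples so the next stump focuses on those still misclassified. The data below are synthetic stand-ins (two ratio-style features with an OR-shaped risk boundary), not the paper's financial statements:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=20):
    """AdaBoost with axis-aligned threshold stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, updated each round
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):           # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # stump weight
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)                  # up-weight mistakes
        w = w / w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)

# Synthetic 'firms': flagged risky when either of two ratio-like features is high.
rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 2))
y = np.where((X[:, 0] > 0.6) | (X[:, 1] > 0.7), 1, -1)

stumps = train_adaboost(X, y)
acc = (predict(stumps, X) == y).mean()  # boosted stumps fit the OR-shaped boundary
```

No single threshold separates an OR-shaped region, which is why the weighted combination of stumps outperforms any one cut; this mirrors why a boosted ensemble can outperform a single linear score such as the Z-score on heterogeneous firms.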